Weekly Petroleum Status Report
Gasoline and Diesel Fuel Update (EIA)
Table 12. Spot Prices of Ultra-Low Sulfur Diesel Fuel, Kerosene-Type Jet Fuel, and Propane ... Propane, Mont Belvieu ...
STATEMENT OF MELANIE KENDERDINE DIRECTOR OF THE OFFICE OF ENERGY...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
2.97gal above the price at Mont Belvieu. This differential sent a strong signal to producers and distributors, and market participants responded by moving additional supplies...
Chambers County, Texas: Energy Resources | Open Energy Information
County, Texas Reliant Baytown Biomass Facility Places in Chambers County, Texas Anahuac, Texas Baytown, Texas Beach City, Texas Cove, Texas Mont Belvieu, Texas Old...
Vilim, R.B.
1985-08-01
The principal methods for performing reactor hot spot analysis are reviewed and examined for potential use in the Applied Physics Division. The semistatistical horizontal method is recommended for future work and is now available as an option in the SE2-ANL core thermal hydraulic code. The semistatistical horizontal method is applied to a small LMR to illustrate the calculation of cladding midwall and fuel centerline hot spot temperatures. The example includes a listing of uncertainties, estimates for their magnitudes, computation of hot spot subfactor values, and calculation of two-sigma temperatures. A review of the uncertainties that affect liquid metal fast reactors is also presented. It was found that hot spot subfactor magnitudes are strongly dependent on the reactor design, and therefore reactor-specific details must be carefully studied. 13 refs., 1 fig., 5 tabs.
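The semistatistical combination described in the abstract above can be sketched in a few lines: direct (systematic) subfactors multiply the nominal temperature rise, while statistical subfactors are combined in quadrature at one axial level ("horizontally") and applied at two sigma. The function name, the one-sigma input convention, and the numbers in the test are illustrative assumptions, not the SE2-ANL coding.

```python
import math

def two_sigma_temperature(t_inlet, dt_nominal, direct_subfactors,
                          statistical_sigmas, n_sigma=2.0):
    """Illustrative semistatistical hot spot temperature estimate.

    direct_subfactors: multiplicative systematic factors (e.g. 1.1).
    statistical_sigmas: one-sigma fractional uncertainties, combined
    in quadrature at the same axial level and applied at n_sigma.
    (Assumed conventions; not the SE2-ANL implementation.)
    """
    f_direct = 1.0
    for f in direct_subfactors:
        f_direct *= f
    stat = math.sqrt(sum(s * s for s in statistical_sigmas))
    return t_inlet + dt_nominal * f_direct * (1.0 + n_sigma * stat)
```

With no uncertainties the nominal temperature is recovered; a 10% direct subfactor plus a 5% one-sigma statistical term raises a 100-degree nominal rise to 121 degrees at two sigma.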
Energy Science and Technology Software Center (OSTI)
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
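The particle life cycle named in the abstract (creation, tracking, tallying, destruction) can be sketched as a toy single-process loop. The 1-D slab geometry, cross-section values, and omission of the MPI particle trading are simplifying assumptions, not the MCB itself.

```python
import math
import random

def run_mc_transport(n_particles=5000, slab_length=5.0, sigma_t=1.0,
                     absorb_prob=0.5, seed=1):
    """Toy analogue of a Monte Carlo transport loop: particle creation,
    tracking by sampling exponential free-flight distances in a 1-D slab,
    tallying outcomes, and particle destruction. The real MCB also trades
    particles among MPI ranks, which is omitted here."""
    rng = random.Random(seed)
    tally = {"absorbed": 0, "leaked": 0}
    for _ in range(n_particles):                      # particle creation
        x, mu = 0.0, 1.0                              # position, direction
        while True:                                   # particle tracking
            # sample free-flight distance from exp(-sigma_t * s)
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if x < 0.0 or x > slab_length:
                tally["leaked"] += 1                  # destruction: leakage
                break
            if rng.random() < absorb_prob:
                tally["absorbed"] += 1                # destruction: absorption
                break
            mu = rng.choice((-1.0, 1.0))              # scatter (1-D isotropic)
    return tally
```

Every created particle ends in exactly one tally bin, which is the invariant a benchmark of this shape checks before measuring throughput.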
Energy Science and Technology Software Center (OSTI)
2014-01-01
In the automotive industry, destructive inspection of spot welds is still the mandatory quality assurance method due to the lack of efficient non-destructive evaluation (NDE) tools. However, it is costly and time-consuming. Recently at ORNL, a new NDE prototype system for spot weld inspection using infrared (IR) thermography has been developed to address this problem. This software contains all the key functions that ensure the NDE system works properly: system input/output control, image acquisition, data analysis, weld quality database generation, weld quality prediction, etc.
Plasmonic electromagnetic hot spots temporally addressed by photoinduced molecular displacement.
Juan, M. L.; Plain, J.; Bachelot, R.; Vial, A.; Royer, P.; Gray, S. K.; Montgomery, J. M.; Wiederrecht, G. P.; Univ. de Technologie de Troyes
2009-04-23
We report the observation of temporally varying electromagnetic hot spots in plasmonic nanostructures. Changes in the field amplitude, position, and spatial features are induced by embedding plasmonic silver nanorods in the photoresponsive azo-polymer. This polymer undergoes cis-trans isomerization and wormlike transport within resonant optical fields, producing a time-varying local dielectric environment that alters the locations where electromagnetic hot spots are produced. Finite-difference time-domain and Monte Carlo simulations that model the induced field and corresponding material response are presented to aid in the interpretation of the experimental results. Evidence for propagating plasmons induced at the ends of the rods is also presented.
ARM - Datastreams - aosaeth2spot
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
equivalent black carbon concentration for spot 2, uncorrected for loading factors (ng/m3): equivalentblackcarbonspot2uncorrected (time, wavelength); Aethalometer instrument flags (unitless) ...
Sensor Placement + Optimization Software (SPOT) | Open Energy...
modeling tools. User Interface: Spreadsheet. Website: www.archenergy.com/SPOT. Cost: Free. Language: English. References: http://www.archenergy.com/SPOT. SPOT(tm) is intended to ...
Energy Science and Technology Software Center (OSTI)
2006-05-09
The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground state energy of the hydrogen atom.
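The variational Monte Carlo calculation that a program like VARHATOM performs can be sketched compactly: sample |psi|^2 for a trial wavefunction psi = exp(-alpha*r) with the Metropolis algorithm and average the local energy. This Python sketch illustrates the technique only; it is not the original FORTRAN code.

```python
import math
import random

def vmc_hydrogen_energy(alpha=1.0, n_steps=20000, step=0.5, seed=2):
    """Variational Monte Carlo estimate of the hydrogen-atom ground-state
    energy (Hartree units) for trial wavefunction psi = exp(-alpha*r)."""
    rng = random.Random(seed)
    pos = [0.5, 0.5, 0.5]

    def radius(p):
        return math.sqrt(sum(c * c for c in p))

    def local_energy(r):
        # For psi = exp(-alpha*r):  E_L = -alpha^2/2 + (alpha - 1)/r
        return -0.5 * alpha * alpha + (alpha - 1.0) / r

    total = 0.0
    for _ in range(n_steps):
        trial = [c + step * (rng.random() - 0.5) for c in pos]
        # Metropolis acceptance on |psi|^2 = exp(-2*alpha*r)
        if rng.random() < math.exp(-2.0 * alpha * (radius(trial) - radius(pos))):
            pos = trial
        total += local_energy(radius(pos))
    return total / n_steps
```

At alpha = 1 the trial function is exact, so the local energy is -0.5 Hartree at every sample (zero variance); any other alpha yields an energy above -0.5, illustrating the variational bound.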
U.S. Energy Information Administration (EIA)
Annual Energy Outlook [U.S. Energy Information Administration (EIA)]
recent price increase may be due to the nearing phase 1 completion of the Targa Galena Park Terminal on the Houston Ship Channel near Mont Belvieu. The terminal's nameplate export...
New construction era reflected in East Texas LPG pipeline
Mittler, T.J.
1990-04-02
Installation of 240 miles of 6, 10, and 12-in. LPG pipelines from Mont Belvieu to Tyler, Tex., has provided greater feedstock-supply flexibility to a petrochemical plant in Longview, Tex. The project, which took place over 18 months, included tie-ins with metering at four Mont Belvieu suppliers. The new 10 and 12-in. pipelines now transport propane while the new and existing parts of a 6-in. pipeline transport propylene.
Hot Spot | Open Energy Information
Definitions: Wikipedia, Reegle. Tectonic Settings: list of tectonic settings known to host modern geothermal systems: Extensional Tectonics, Subduction Zone, Rift Zone, Hot Spot ...
SPOT Suite Transforms Beamline Science
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
SPOT Suite brings advanced algorithms, high performance computing and data management to the masses. August 18, 2014. Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov. Advanced Light Source (ALS) at Berkeley Lab (Photo by Roy Kaltschmidt). Some mysteries of science can only be explained on a nanometer scale, even smaller than a single strand of human DNA, which is about 2.5 nanometers wide. At this scale, scientists
ClearSpot Energy | Open Energy Information
ClearSpot Energy. Name: ClearSpot Energy. Sector: Solar. Product: US-based solar project developer for rooftop commercial installations. References: ...
HotSpot | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
HotSpot. Current Central Registry Toolbox Version(s): 2.07.1. Code Owner: Department of Energy, Office of Emergency Operations and Lawrence Livermore National Laboratory (LLNL). Description: The HotSpot Health Physics Code is used for safety analysis of DOE facilities handling nuclear material. Additionally, HotSpot provides emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. HotSpot
A procedure to determine the planar integral spot dose values of proton pencil beam spots
Anand, Aman; Sahoo, Narayan; Zhu, X. Ronald; Sawakuchi, Gabriel O.; Poenisch, Falk; Amos, Richard A.; Ciangaru, George; Titt, Uwe; Suzuki, Kazumichi; Mohan, Radhe; Gillin, Michael T.
2012-02-15
found to have angular anisotropy. This anisotropy in PPBS dose distribution could be accounted for in a reasonably approximate manner by taking the average of PISD values obtained using the in-line and cross-line profiles. The PISD_RBPC values fall within 3.5% of those measured by BPC. Due to the inherent challenges associated with PPBS dosimetry, which can lead to large experimental uncertainties, such an agreement is considered satisfactory for validation purposes. The PISD_full values show differences ranging from 1 to 11% from BPC-measured values, which are mainly due to the size limitation of the BPC in accounting for the dose in the long tail regions of the spots extending beyond its 4.08 cm radius. The dose in long tail regions occurs both for high-energy beams such as 221.8 MeV PPBS, due to the contributions of nuclear interaction products in the medium, and for low-energy PPBS, because of their larger spot sizes. The calculated LTDCF values agree within 1% with those determined by Monte Carlo (MC) simulations. Conclusions: The area integration method to compute the PISD from PPBS lateral dose profiles is found to be useful both for determining the correction factors for the values measured by the BPC and for validating the results of MC simulations.
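The area integration named in the conclusion amounts to PISD = 2*pi * integral of D(r)*r dr over a radial lateral dose profile. A minimal numerical version is sketched below; assuming a single radial profile as input (the paper averages in-line and cross-line profiles first to handle the angular anisotropy), with the trapezoid rule standing in for whatever quadrature the authors used.

```python
import math

def planar_integral_spot_dose(radii, doses):
    """Planar integral spot dose by area integration of a radial lateral
    dose profile: PISD = 2*pi * integral D(r) r dr (trapezoid rule).
    radii must be increasing; doses[i] is the dose at radii[i]."""
    total = 0.0
    for i in range(1, len(radii)):
        f0 = doses[i - 1] * radii[i - 1]
        f1 = doses[i] * radii[i]
        total += 0.5 * (f0 + f1) * (radii[i] - radii[i - 1])
    return 2.0 * math.pi * total
```

A quick sanity check: a unit-amplitude Gaussian profile exp(-r^2/(2*sigma^2)) integrates analytically to 2*pi*sigma^2, which the numerical result reproduces on a fine grid.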
McIlhany, K.; Whitehouse, D.; Smith, D.; Eisner, A.M.; Wang, Y.X.
1994-12-31
A Monte Carlo program describing the response of the Liquid Scintillation Neutrino Detector (LSND) at the Los Alamos Meson Physics Facility (LAMPF) was written using the GEANT geometry and simulation package. Neutrino interactions were simulated in the detector through the production of Cerenkov and scintillation light in the range of 2-3 eV. Since GEANT does not normally track photons to electron-volt energies, the tracking program (TRAK) was modified to produce both Cerenkov and scintillator light, the latter being simulated using the Birks equation. The LSND Monte Carlo program was used to predict the quantity of scintillator (b-PBD) used in the mineral oil to provide a ratio of roughly 4:1 light output resulting from scintillation and Cerenkov light respectively.
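The Birks equation mentioned above relates scintillation light output per unit path to energy deposition with a saturation term, dL/dx = S * (dE/dx) / (1 + kB * dE/dx). A one-line sketch follows; the default kB value is a typical literature number for organic scintillator, not the LSND fit, and the function name is illustrative.

```python
def birks_light_yield(de_dx, scint_eff=1.0, kb=0.0126):
    """Birks' law: scintillation light per unit path length,
    dL/dx = S * (dE/dx) / (1 + kB * dE/dx).
    de_dx in MeV/cm, kb in cm/MeV, scint_eff is the efficiency S."""
    return scint_eff * de_dx / (1.0 + kb * de_dx)
```

With kB = 0 the response is linear in dE/dx; for large dE/dx (heavily ionizing particles) the yield saturates, which is the quenching behavior the simulation needs to reproduce.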
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Quantum Monte Carlo for the Electronic Structure of Atoms and Molecules. Brian Austin, Lester Group, U.C. Berkeley. BES Requirements Workshop, Rockville, MD, February 9, 2010. Outline: applying QMC to diverse chemical systems; selecting systems with high interest and impact (phenol: bond dissociation energy; retinal: excitation energy); algorithmic details (parallel strategy, wave function evaluation). O-H bond dissociation energy of phenol: Ph-OH -> Ph-O* + H* (36 valence electrons)
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Hot Spot Removal System: System description
1997-09-01
Hazardous wastes contaminated with radionuclides, chemicals, and explosives exist across the Department of Energy complex and need to be remediated due to environmental concerns. Currently, an opportunity is being developed to dramatically reduce remediation costs and to assist in the acceleration of schedules associated with these wastes by deploying a Hot Spot Removal System. Removing the hot spot from the waste site will remove risk driver(s) and enable another, more cost effective process/option/remedial alternative (i.e., capping) to be applied to the remainder of the site. The Hot Spot Removal System consists of a suite of technologies that will be utilized to locate and remove source terms. Components of the system can also be used in a variety of other cleanup activities. This Hot Spot Removal System Description document presents technologies that were considered for possible inclusion in the Hot Spot Removal System, technologies made available to the Hot Spot Removal System, industrial interest in the Hot Spot Removal System's subsystems, the schedule required for the Hot Spot Removal System, the evaluation of the relevant technologies, and the recommendations for equipment and technologies as stated in the Plan section.
HotSpot Software Configuration Management Plan
Walker, H; Homann, S G
2009-03-12
This Software Configuration Management Plan (SCMP) describes the software configuration management procedures used to ensure that the HotSpot dispersion model meets the requirements of its user base, which includes: (1) Users of the PC version of HotSpot for consequence assessment, hazard assessment and safety analysis calculations; and (2) Users of the NARAC Web and iClient software tools, which allow users to run HotSpot for consequence assessment modeling. These users and sponsors of the HotSpot software and the organizations they represent constitute the intended audience for this document. This plan is intended to meet Critical Recommendations 1 and 3 from the Software Evaluation of HotSpot and DOE Safety Software Toolbox Recommendation for inclusion of HotSpot in the Department of Energy (DOE) Safety Software Toolbox. HotSpot software is maintained for the Department of Energy Office of Emergency Operations by the National Atmospheric Release Advisory Center (NARAC) at Lawrence Livermore National Laboratory (LLNL). An overview of HotSpot and NARAC is provided.
Energy Science and Technology Software Center (OSTI)
2007-07-26
The TEVA-SPOT Toolkit (SPOT) supports the design of contaminant warning systems (CWSs) that use real-time sensors to detect contaminants in municipal water distribution networks. Specifically, SPOT provides the capability to select the locations for installing sensors in order to maximize the utility and effectiveness of the CWS. SPOT models the sensor placement process as an optimization problem, and the user can specify a wide range of performance objectives for contaminant warning system design, including population health effects, time to detection, extent of contamination, volume consumed, and number of failed detections. For example, a SPOT user can integrate expert knowledge during the design process by specifying required sensor placements or designating network locations as forbidden. Further, cost considerations can be integrated by limiting the design with user-specified installation costs at each location.
Eolica Montes de Cierzo | Open Energy Information
Eolica Montes de Cierzo. Name: Eolica Montes de Cierzo. Place: Navarra, Spain. Sector: Wind energy. Product: Spanish wind farm developer in the region of Navarra.
Isotropic Monte Carlo Grain Growth
Energy Science and Technology Software Center (OSTI)
2013-04-25
IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
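The isotropic case described above is essentially Potts-model Monte Carlo: a site is switched to a neighbor's orientation when that does not raise the grain-boundary energy (count of unlike-neighbor bonds). The sketch below uses a square grid with T = 0 acceptance to stay short; IMCGG itself works on a hexagonal grid, so this is an illustration of the technique, not the code.

```python
import random

def potts_grain_step(grid, rng):
    """One T = 0 Monte Carlo trial of isotropic grain growth (Potts model)
    on a 2-D square grid with periodic boundaries. A random site is
    reassigned to a randomly chosen neighbor's orientation only if the
    number of unlike-neighbor bonds (boundary energy) does not increase."""
    n = len(grid)
    i, j = rng.randrange(n), rng.randrange(n)
    neighbors = [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                 grid[i][(j - 1) % n], grid[i][(j + 1) % n]]
    old, new = grid[i][j], rng.choice(neighbors)
    if sum(s != new for s in neighbors) <= sum(s != old for s in neighbors):
        grid[i][j] = new
```

Because a flip is accepted only when the local bond count does not grow, total boundary energy is non-increasing and grains coarsen over many trials.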
Volume higher; spot price ranges widen
1994-11-01
This article is the October 1994 uranium market summary. During this reporting period, volume on the spot concentrates market doubled. Twelve deals took place: three in the spot concentrates market, one in the medium and long-term market, four in the conversion market, and four in the enrichment market. The restricted price range widened due to higher prices at the top end of the range, while the unrestricted price range widened because of lower prices at the bottom end. Spot conversion prices were higher, and enrichment prices were unchanged.
Weekly Henry Hub Natural Gas Spot Price (Dollars per Million Btu)
U.S. Energy Information Administration (EIA) Indexed Site
Investigations of initiation spot size effects
Clarke, Steven A; Akinci, Adrian A; Leichty, Gary; Schaffer, Timothy; Murphy, Michael J; Munger, Alan; Thomas, Keith A
2010-01-01
As explosive components become smaller, a greater understanding of the effect of initiation spot size on detonation becomes increasingly critical. A series of tests of the effect of initiation spot size will be described. A series of DOI (direct optical initiation) detonators with initiation spot sizes from ~50 um to 1000 um have been tested to determine laser parameters for threshold firing of low-density PETN pressings. Results will be compared with theoretical predictions. Outputs of the initiation source (DOI ablation) have been characterized by a suite of diagnostics including PDV and schlieren imaging. Outputs of complete detonators have been characterized using PDV, streak, and/or schlieren imaging. At present, we have not found the expected change in the threshold energy to spot size relationship for DOI-type detonators that was found in similar earlier work on projectiles, slappers, and EBWs. New detonator designs (Type C) are currently being tested that will allow the determination of the threshold for spot sizes from 250 um to 105 um, where we hope to see a change in the threshold vs. spot size relationship. Also, one test of an extremely small diameter spot size (50 um) has resulted in preliminary NoGo-only results even at energy densities as much as 8 times the energy density of the threshold results presented here. This gives preliminary evidence that a 50 um spot may be beyond the critical initiation diameter. The constant threshold energy to spot size relationship in the data to date does, however, still give some insight into the initiation mechanism of DOI detonators. If the DOI initiation mechanism were a 1D mechanism similar to a slapper or a flyer impact, the expected inflection point in the graph would have been between 300 um and 500 um diameter spot size, within the range of the data presented here. The lack of that inflection point indicates that the DOI initiation mechanism is more likely a 2D mechanism similar to a sphere or rod projectile. We expect to
Energy Science and Technology Software Center (OSTI)
2010-03-02
The HotSpot Health Physics Codes were created to provide emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. The software is also used for safety analysis of facilities handling nuclear material. HotSpot provides a fast and usually conservative means for estimating the radiation effects associated with the short-term (less than 24 hours) atmospheric release of radioactive materials.
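Codes of this kind are built around the textbook Gaussian plume with ground reflection. The sketch below shows that generic form only; HotSpot's actual stability-class fits for sigma_y and sigma_z, plume rise, and depletion terms are not reproduced, so the sigmas here are plain inputs and all names are illustrative.

```python
import math

def plume_concentration(q, u, y, z, sigma_y, sigma_z, h=0.0):
    """Textbook Gaussian plume with ground reflection: air concentration
    per unit source strength q at crosswind offset y and height z, for
    wind speed u and release height h. sigma_y/sigma_z are the lateral
    and vertical dispersion parameters at the downwind distance of
    interest (supplied by the caller, not computed here)."""
    lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (math.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                + math.exp(-0.5 * ((z + h) / sigma_z) ** 2))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

For a ground-level release the reflection term doubles the centerline value, giving q / (pi * u * sigma_y * sigma_z) at y = z = h = 0.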
Energy Science and Technology Software Center (OSTI)
2013-04-18
The HotSpot Health Physics Codes were created to provide emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. The software is also used for safety analysis of facilities handling nuclear material. HotSpot provides a fast and usually conservative means for estimating the radiation effects associated with the short-term (less than 24 hours) atmospheric release of radioactive materials.
Optimized nested Markov chain Monte Carlo sampling: theory (Conference...
Office of Scientific and Technical Information (OSTI)
Optimized nested Markov chain Monte Carlo sampling: theory. Metropolis Monte ...
Exact Monte Carlo for molecules
Lester, W.A. Jr.; Reynolds, P.J.
1985-03-01
A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H2, and the singlet-triplet splitting in methylene are presented and discussed. 17 refs.
ATS Spotted MSI Analysis with Matlab
Energy Science and Technology Software Center (OSTI)
2012-02-09
Samples are placed on a surface using an acoustic transfer system (ATS). This results in one or more small droplets on a surface. Typically there are hundreds to thousands of these droplets arrayed in a regular coordinate system. The surface is analyzed using mass spectrometry imaging (MSI), and at each position, one or more mass spectra are recorded. The purpose of the software is to help the user assign locations to the spots and build a report for each spot.
Mont Vista Capital LLC | Open Energy Information
Mont Vista Capital LLC. Name: Mont Vista Capital LLC. Place: New York, New York. Zip: 10167. Sector: Services. Product: Mont Vista Capital is a leading global...
Monte Carlo Simulations of APEX
Xu, G.
1995-10-01
Monte Carlo simulations of the APEX apparatus, a spectrometer designed to measure positron-electron pairs produced in heavy-ion collisions, carried out using GEANT, are reported. The results of these simulations are compared with data from measurements of conversion-electron, positron, and pair-emitting sources, as well as with the results of in-beam measurements of positrons and electrons. The overall description of the performance of the apparatus is excellent.
Solar Renewable Energy Credits (SRECs) Spot Market Program
Broader source: Energy.gov [DOE]
NOTE: While interested parties can still trade DE SRECs in the spot market, the spot market in itself is limited since most of the SRECs produced are part of the SREC Purchase Program, or the SREC...
Fermi surface topology and hot spot distribution in the Kondo...
Office of Scientific and Technical Information (OSTI)
Fermi surface topology and hot spot distribution in the Kondo lattice system CeB6 ... September 17, 2016 ...
Henry Hub Natural Gas Spot Price (Dollars per Million Btu)
U.S. Energy Information Administration (EIA) Indexed Site
Friction Stir Spot Welding of Advanced High Strength Steels II...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Friction Stir Spot Welding of Advanced High Strength Steels II. 2011 DOE Hydrogen and Fuel Cells Program, and Vehicle ...
Sweet Spot Supersymmetry and Composite Messengers
Ibe, Masahiro; Kitano, Ryuichiro
2007-10-30
Sweet spot supersymmetry is a phenomenologically and cosmologically perfect framework to realize a supersymmetric world at short distance. We discuss a class of dynamical models of supersymmetry breaking and its mediation whose low-energy effective description falls into this framework. Hadron fields in the dynamical models play the role of the messengers of the supersymmetry breaking. As is always true in models of sweet spot supersymmetry, the messenger scale is predicted to be 10^5 GeV <~ M_mess <~ 10^10 GeV. Various values of the effective number of messenger fields N_mess are possible depending on the choice of the gauge group.
Friction Stir Spot Welding of Advanced High Strength Steels
Hovanski, Yuri; Grant, Glenn J.; Santella, M. L.
2009-11-13
Friction stir spot welding techniques were developed to successfully join several advanced high strength steels. Two distinct tool materials were evaluated to determine the effect of tool materials on the process parameters and joint properties. Welds were characterized primarily via lap shear, microhardness, and optical microscopy. Friction stir spot welds were compared to the resistance spot welds in similar strength alloys by using the AWS standard for resistance spot welding high strength steels. As further comparison, a primitive cost comparison between the two joining processes was developed, which included an evaluation of the future cost prospects of friction stir spot welding in advanced high strength steels.
Energy Monte Carlo (EMCEE) | Open Energy Information
with a specific set of distributions. Both programs run as spreadsheet workbooks in Microsoft Excel. EMCEE and Emc2 require Crystal Ball, a commercially available Monte Carlo...
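The spreadsheet-style Monte Carlo that EMCEE performs (draw every input cell from its assigned distribution, recompute the output cell, summarize the run) can be sketched outside Excel. The model and distributions below are made-up illustrations, not part of the EMCEE tool or its distribution library.

```python
import random
import statistics

def monte_carlo_cells(model, input_dists, n_trials=20000, seed=3):
    """Spreadsheet-style Monte Carlo: each trial draws every input cell
    from its distribution, recomputes the output cell via `model`, and
    the run is summarized as a mean and 5th/95th percentiles."""
    rng = random.Random(seed)
    results = sorted(
        model({name: draw(rng) for name, draw in input_dists.items()})
        for _ in range(n_trials)
    )
    return {"mean": statistics.fmean(results),
            "p05": results[int(0.05 * n_trials)],
            "p95": results[int(0.95 * n_trials)]}

# Hypothetical example cell: annual fuel cost = price * volume
summary = monte_carlo_cells(
    lambda cells: cells["price"] * cells["volume"],
    {"price": lambda rng: rng.gauss(3.0, 0.3),          # $/unit
     "volume": lambda rng: rng.uniform(90.0, 110.0)})   # units/yr
```

The output distribution, not just a point estimate, is the product of the run, which is what distinguishes this style of analysis from a single deterministic spreadsheet calculation.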
Spot market volume sluggish in October
1995-11-01
This article is the October 1995 uranium market summary. Spot market volume reached a meager 116,000 lbs of U3O8 and equivalent. The restricted price range increased from September's high end of $11.75 to $12.00. The unrestricted range also increased, with September's high end of $9.65 as October's low end and a high of $9.80. Conversion prices have held steady the past few months. However, the SWU price range increased this month to a high of $97.00.
Monte-Carlo particle dynamics in a variable specific impulse...
Office of Scientific and Technical Information (OSTI)
Monte-Carlo particle dynamics in a variable specific impulse magnetoplasma rocket ...
Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator...
Office of Scientific and Technical Information (OSTI)
Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator Physics ...
Fundamentals of Monte Carlo (Technical Report) | SciTech Connect
Office of Scientific and Technical Information (OSTI)
Fundamentals of Monte Carlo. Authors: Wollaber, Allan Benton, Los Alamos ...
A hybrid Monte Carlo method for equilibrium equation of state...
Office of Scientific and Technical Information (OSTI)
MONTE CARLO SIMULATION METHODS Benchmark for perturbation theory methods NPT, NVT - single ... EXPLOSIVES; MIXTURES; MONTE CARLO METHOD; PERTURBATION THEORY; SHOCK WAVES; SIMULATION
Jefferson Lab finds its man Mont (Inside Business) | Jefferson...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
https://www.jlab.org/news/articles/jefferson-lab-finds-its-man-mont-inside-business Jefferson Lab finds its man Mont. Hugh Montgomery, a British nuclear physicist...
Optimal sampling efficiency in Monte Carlo sampling with an approximat...
Office of Scientific and Technical Information (OSTI)
Journal Article: Optimal sampling efficiency in Monte Carlo sampling with an approximate potential ...
Monte Carlo Ion Transport Analysis Code.
Energy Science and Technology Software Center (OSTI)
2009-04-15
Version: 00. TRIPOS is a versatile Monte Carlo ion transport analysis code. It has been applied to the treatment of both surface and bulk radiation effects. The media considered are multilayer polyatomic materials.
Improved Monte Carlo Renormalization Group Method
DOE R&D Accomplishments [OSTI]
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
Friction Stir Spot Welding of DP780 Carbon Steel
Santella, M. L.; Hovanski, Yuri; Frederick, Alan; Grant, Glenn J.; Dahl, Michael E.
2009-09-15
Friction stir spot welds were made in uncoated and galvannealed DP780 sheets using polycrystalline boron nitride stir tools. The tools were plunged at either a single continuous rate or in two segments consisting of a relatively high rate followed by a slower rate of shorter depth. Welding times ranged from 1-10 s. Increasing tool rotation speed from 800 to 1600 rpm increased strength values. The 2-segment welding procedures also produced higher strength joints. Average lap-shear strengths exceeding 10.3 kN were consistently obtained in 4 s on both the uncoated and the galvannealed DP780. The likelihood of diffusion and mechanical interlocking contributing to bond formation was supported by metallographic examinations. A cost analysis based on spot welding in automobile assembly showed that for friction stir spot welding to be economically competitive with resistance spot welding, the cost of stir tools must approach that of resistance spot welding electrode tips.
Finite Cosmology and a CMB Cold Spot
Adler, R.J.; Bjorken, J.D.; Overduin, J.M.; /Stanford U., HEPL
2006-03-20
The standard cosmological model posits a spatially flat universe of infinite extent. However, no observation, even in principle, could verify that the matter extends to infinity. In this work we model the universe as a finite spherical ball of dust and dark energy, and obtain a lower-limit estimate of its mass and present size: the mass is at least 5 x 10^23 solar masses and the present radius is at least 50 Gly. If we are not too far from the dust-ball edge we might expect to see a cold spot in the cosmic microwave background, and there might be suppression of the low multipoles in the angular power spectrum. Thus the model may be testable, at least in principle. We also obtain and discuss the geometry exterior to the dust ball; it is Schwarzschild-de Sitter with a naked singularity, and provides an interesting picture of cosmogenesis. Finally we briefly sketch how radiation and inflation eras may be incorporated into the model.
Hot spot-ridge crest convergence in the northeast Pacific
Karsten, J.L.; Delaney, J.R. )
1989-01-10
Evolution of the Juan de Fuca Ridge during the past 7 m.y. has been reconstructed taking into account both the propagating rift history and migration of the spreading center in the 'absolute' (fixed hot spot) reference frame. Northwestward migration of the spreading center (at a rate of 30 km/m.y.) has resulted in progressive encroachment of the ridge axis on the Cobb Hot Spot and westward jumping of the central third of the ridge axis more recently than 0.5 Ma. Seamounts in the Cobb-Eickelberg chain are predicted to display systematic variations in morphology and petrology, and a reduction in the age contrast between the edifice and underlying crust, as a result of the ridge axis approach. Relative seamount volumes also indicate that magmatic output of the hot spot varied during this interval, with a reduction in activity between 2.5 and 4.5 Ma, compared with relatively more robust activity before and after this period. Spatial relationships determined in this reconstruction allow hypotheses relating hot spot activity and rift propagation to be evaluated. In most cases, rift propagation has been directed away from the hot spot during the time period considered. Individual propagators show some reduction in propagation rate as separation between the propagating rift tip and hot spot increases, but cross comparison of multiple propagators does not uniformly display the same relationship. No obvious correlation exists between propagation rate and increasing proximity of the hot spot to the ridge axis or increasing hot spot output. Taken together, these observations do not offer compelling support for the concept of hot spot driven rift propagation. However, short-term reversals in propagation direction at the Cobb Offset coincide with activity of the Heckle melting anomaly, suggesting that local propagation effects may be related to excess magma supply at the ridge axis.
Wall and laser spot motion in cylindrical hohlraums
Huser, G.; Courtois, C.; Monteil, M.-C.
2009-03-15
Wall and laser spot motion measurements in empty, propane-filled and plastic (CH)-lined gold coated cylindrical hohlraums were performed on the Omega laser facility [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)]. Wall motion was measured using axial two-dimensional (2D) x-ray imaging and laser spot motion was perpendicularly observed through a thinned wall using streaked hard x-ray imaging. Experimental results and 2D hydrodynamic simulations show that while empty targets exhibit on-axis plasma collision, CH-lined and propane-filled targets inhibit wall expansion, corroborated with perpendicular streaked imaging showing a slower motion of laser spots.
Forecasting Crude Oil Spot Price Using OECD Petroleum Inventory Levels
Reports and Publications (EIA)
2003-01-01
This paper presents a short-term monthly forecasting model of West Texas Intermediate crude oil spot price using Organization for Economic Cooperation and Development (OECD) petroleum inventory levels.
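The paper's actual specification is not reproduced in this listing, but the core idea, regressing the spot price on the deviation of inventories from their normal level, can be sketched on synthetic data. All coefficients, magnitudes, and variable names below are illustrative assumptions, not the EIA model:

```python
import random

# Synthetic monthly data standing in for the paper's inputs: a WTI-like spot
# price responds negatively to OECD inventories above their normal level.
rng = random.Random(0)
months = 120
inventory = [2600.0 + rng.gauss(0.0, 80.0) for _ in range(months)]  # million bbl
normal = sum(inventory) / months
deviation = [v - normal for v in inventory]
price = [30.0 - 0.05 * d + rng.gauss(0.0, 2.0) for d in deviation]  # $/bbl

# One-predictor OLS fit: price_t = a + b * (inventory_t - normal)
mean_d = sum(deviation) / months  # zero by construction
mean_p = sum(price) / months
b = (sum((d - mean_d) * (p - mean_p) for d, p in zip(deviation, price))
     / sum((d - mean_d) ** 2 for d in deviation))
a = mean_p - b * mean_d
print(round(a, 2), round(b, 3))  # intercept near 30, slope near -0.05
```

With the assumed data-generating process, the recovered slope quantifies how many dollars per barrel the price moves for each million barrels of inventory surplus; a forecast for next month is then `a + b * expected_deviation`.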
Unique Bioreactor Finds Algae's Sweet Spot - News Feature | NREL
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Unique Bioreactor Finds Algae's Sweet Spot. February 18, 2014. Aeration helps algae grow and helps replicate real-life ...
Portsmouth Training Exercise Helps Radiological Trainees Spot Mistakes Safely
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Department of Energy. February 11, 2016 - 12:10pm. Connie Martin performs work inside the Error Lab while trainees observe her actions for mistakes. Lorrie Graham (left) talks with trainees in a classroom setting before
Imager Spots and Samples Tiny Tumors | Jefferson Lab
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Imager Spots and Samples Tiny Tumors. NEWPORT NEWS, Va., Feb. 8, 2008 -- The positron emission mammography/tomography breast imaging and biopsy system was designed and constructed by scientists at Jefferson Lab, West Virginia University and the University of Maryland School of Medicine. The PEM/PET system is designed for detecting and guiding the biopsies of suspicious breast cancer lesions. "This is the most-important and most-difficult imager we've
Jefferson Lab Medical Imager Spots Breast Cancer | Jefferson Lab
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Jefferson Lab Medical Imager Spots Breast Cancer. March 3, 2005, Newport News, VA. This PEM image shows two cancerous lesions; the one on the right was depicted by conventional mammography, but the one on the left was identified only by the PEM unit. Image courtesy: Eric Rosen, Duke University Medical Center. A study published in the February issue of the journal Radiology shows that a positron emission mammography (PEM) device designed and built by Jefferson
On the burn topology of hot-spot-initiated reactions
Hill, Larry G; Zimmermann, Bjorn; Nichols, Albert L
2009-01-01
We determine the reaction progress function for an ideal hot spot model problem. The considered problem has an exact analytic solution that can be derived from a reduction of Nichols' statistical hot spot model. We perform numerical calculations to verify the analytic solution and to illustrate the error realized in real, finite systems. We show how the baseline problem, which does not distinguish between the reactant and product densities, can be scaled to handle general cases for which the two densities differ.
Wang, Dongxu; Dirksen, Blake; Hyer, Daniel E.; Buatti, John M.; Sheybani, Arshin; Dinges, Eric; Felderman, Nicole; TenNapel, Mindi; Bayouth, John E.; Flynn, Ryan T.
2014-12-15
Purpose: To determine the plan quality of proton spot scanning (SS) radiosurgery as a function of spot size (in-air sigma) in comparison to x-ray radiosurgery for treating peripheral brain lesions. Methods: Single-field optimized (SFO) proton SS plans with sigma ranging from 1 to 8 mm, cone-based x-ray radiosurgery (Cone), and x-ray volumetric modulated arc therapy (VMAT) plans were generated for 11 patients. Plans were evaluated using secondary cancer risk and brain necrosis normal tissue complication probability (NTCP). Results: For all patients, secondary cancer is a negligible risk compared to brain necrosis NTCP. Secondary cancer risk was lower in proton SS plans than in photon plans regardless of spot size (p = 0.001). Brain necrosis NTCP increased monotonically from an average of 2.34/100 (range 0.42/100–4.49/100) to 6.05/100 (range 1.38/100–11.6/100) as sigma increased from 1 to 8 mm, compared to the average of 6.01/100 (range 0.82/100–11.5/100) for Cone and 5.22/100 (range 1.37/100–8.00/100) for VMAT. An in-air sigma less than 4.3 mm was required for proton SS plans to reduce NTCP over photon techniques for the cohort of patients studied with statistical significance (p = 0.0186). Proton SS plans with in-air sigma larger than 7.1 mm had significantly greater brain necrosis NTCP than photon techniques (p = 0.0322). Conclusions: For treating peripheral brain lesions—where proton therapy would be expected to have the greatest depth-dose advantage over photon therapy—the lateral penumbra strongly impacts the SS plan quality relative to photon techniques: proton beamlet sigma at patient surface must be small (<7.1 mm for three-beam single-field optimized SS plans) in order to achieve comparable or smaller brain necrosis NTCP relative to photon radiosurgery techniques. Achieving such small in-air sigma values at low energy (<70 MeV) is a major technological challenge in commercially available proton therapy systems.
Friction Stir Spot Welding of DP780 Carbon Steel
Santella, Michael L [ORNL]; Hovanski, Yuri [ORNL]; Frederick, David Alan [ORNL]; Grant, Glenn J [ORNL]; Dahl, Michael E [ORNL]
2010-01-01
Friction stir spot welds were made in uncoated and galvannealed DP780 sheets using polycrystalline boron nitride stir tools. The tools were plunged at either a single continuous rate or in two segments consisting of a relatively high rate followed by a slower rate of shorter depth. Welding times ranged from 1 to 10 s. Increasing tool rotation speed from 800 to 1600 rev min⁻¹ increased strength values. The 2-segment welding procedures also produced higher strength joints. Average lap shear strengths exceeding 10.3 kN were consistently obtained in 4 s on both the uncoated and the galvannealed DP780. The likelihood of diffusion and mechanical interlocking contributing to bond formation was supported by metallographic examinations. A cost analysis based on spot welding in automobile assembly showed that for friction stir spot welding to be economically competitive with resistance spot welding, the cost of stir tools must approach that of resistance spot welding electrode tips.
Quantum Monte Carlo by message passing
Bonca, J.; Gubernatis, J.E.
1993-01-01
We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.
FRICTION STIR SPOT WELDING OF 6016 ALUMINUM ALLOY
Mishra, Rajiv S.; Webb, S.; Freeney, T. A.; Chen, Y. L.; Gayden, X.; Grant, Glenn J.; Herling, Darrell R.
2007-01-08
Friction stir spot welding (FSSW) of 6016 aluminum alloy was evaluated with conventional pin tool and new off-center feature tools. The off-center feature tool provides significant control over the joint area. The tool rotation rate was varied between 1000 and 2500 rpm. Maximum failure strength was observed in the tool rotation range of 1200-1500 rpm. The results are interpreted in the context of material flow in the joint and influence of thermal input on microstructural changes. The off-center feature tool concept opens up new possibilities for plunge-type friction stir spot welding.
Four decades of implicit Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Wollaber, Allan B.
2016-04-25
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the "Implicit Monte Carlo" (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations, explicitly highlighting assumptions as they are made, and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Finally, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
REAL TIME ULTRASONIC ALUMINUM SPOT WELD MONITORING SYSTEM
Regalado, W. Perez; Chertov, A. M.; Maev, R. Gr.
2010-02-22
Aluminum alloys pose several properties that make them one of the most popular engineering materials: they have excellent corrosion resistance, and high weight-to-strength ratio. Resistance spot welding of aluminum alloys is widely used today but oxide film and aluminum thermal and electrical properties make spot welding a difficult task. Electrode degradation due to pitting, alloying and mushrooming decreases the weld quality and adjustment of parameters like current and force is required. To realize these adjustments and ensure weld quality, a tool to measure weld quality in real time is required. In this paper, a real time ultrasonic non-destructive evaluation system for aluminum spot welds is presented. The system is able to monitor nugget growth while the spot weld is being made. This is achieved by interpreting the echoes of an ultrasound transducer located in one of the welding electrodes. The transducer receives and transmits an ultrasound signal at different times during the welding cycle. Valuable information of the weld quality is embedded in this signal. The system is able to determine the weld nugget diameter by measuring the delays of the ultrasound signals received during the complete welding cycle. The article presents the system performance on aluminum alloy AA6022.
Status of Monte-Carlo Event Generators
Hoeche, Stefan; /SLAC
2011-08-11
Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E.; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi-Dirac distribution function, and scattering is via a Pauli-blocked binary collision approximation. The algorithm is tested against degenerate electron-ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first-order transport theory. The code is applied to the cold fuel shell and alpha-particle equilibration problem of inertial confinement fusion.
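As a rough illustration of the initialisation step described above, the sketch below rejection-samples particle energies from a Fermi-Dirac distribution. The function names, parameter values, and the flat envelope bounded on a grid are assumptions for illustration only, not the paper's implementation:

```python
import math
import random

def fd_density(eps, mu, theta):
    """Unnormalized 3D energy distribution: density of states sqrt(eps)
    times the Fermi-Dirac occupation 1/(exp((eps - mu)/theta) + 1)."""
    return math.sqrt(eps) / (math.exp((eps - mu) / theta) + 1.0)

def sample_fermi_dirac(mu, theta, eps_max, n, seed=1):
    """Rejection sampling with a flat envelope on [0, eps_max]; the envelope
    height is an approximate grid-based bound on the density."""
    rng = random.Random(seed)
    f_max = max(fd_density(eps_max * k / 1000.0, mu, theta)
                for k in range(1, 1001))
    samples = []
    while len(samples) < n:
        eps = rng.uniform(0.0, eps_max)
        if rng.uniform(0.0, f_max) < fd_density(eps, mu, theta):
            samples.append(eps)
    return samples

# Energies in units of the chemical potential (mu = 1); theta = T/mu = 0.1 is
# strongly degenerate, so the mean energy sits near the T = 0 value of
# (3/5) * mu for a 3D Fermi gas.
energies = sample_fermi_dirac(mu=1.0, theta=0.1, eps_max=3.0, n=20000)
mean_eps = sum(energies) / len(energies)
print(round(mean_eps, 2))
```

A production code would sample momenta rather than energies and would use a tighter envelope, but the degenerate-initialisation idea is the same.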
Monte Carlo simulation for the transport beamline
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.
Avoiding Carbon Bed Hot Spots in Thermal Process Off-Gas Systems...
Office of Scientific and Technical Information (OSTI)
Title: Avoiding Carbon Bed Hot Spots in Thermal Process Off-Gas Systems ...
South El Monte, California: Energy Resources | Open Energy Information
El Monte, California: Energy Resources. Coordinates: 34.0519548, -118.0467339
North El Monte, California: Energy Resources | Open Energy Information
El Monte, California: Energy Resources. Coordinates: 34.1027861, -118.0242333
Cluster expansion modeling and Monte Carlo simulation of alnico...
Office of Scientific and Technical Information (OSTI)
Accepted Manuscript: Cluster expansion modeling and Monte Carlo simulation of alnico 5-7 permanent magnets. This content will become publicly available on March 5, 2016.
Mont Vernon, New Hampshire: Energy Resources | Open Energy Information
Mont Vernon, New Hampshire: Energy Resources. Coordinates: 42.8945294, -71.6742393
Evaluation of Monte Carlo Electron-Transport Algorithms in the...
Office of Scientific and Technical Information (OSTI)
Title: Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations.
Molecular Monte Carlo Simulations Using Graphics Processing Units...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors....
HILO: Quasi Diffusion Accelerated Monte Carlo on Hybrid Architectures
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
fidelity simulation of a diverse range of kinetic systems.
Quantum Monte Carlo Calculations of Light Nuclei Using Chiral...
Office of Scientific and Technical Information (OSTI)
Title: Quantum Monte Carlo Calculations of Light Nuclei Using Chiral Potentials. Authors: Lynn, J. E.; Carlson, J.; Epelbaum, E.; Gandolfi, S.; Gezerlis, A.; Schwenk, A. ...
Quantum Monte Carlo methods for nuclear physics
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.
2014-10-19
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
Quantum Monte Carlo methods for nuclear physics
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.
2015-09-09
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.
2014-10-01
We present a multilevel Monte Carlo numerical method, new to plasma physics, for highly efficient simulation of Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on whether the underlying discretization is Milstein or Euler–Maruyama, respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. We discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
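The level-combination idea can be illustrated on a toy Euler-Maruyama problem (an Ornstein-Uhlenbeck process rather than the Landau-Fokker-Planck dynamics of the paper; all names and parameters below are illustrative). Each correction level's coarse path reuses the fine path's Brownian increments, so the fine-minus-coarse differences have small variance and need few samples:

```python
import math
import random

def euler_pair(rng, T, sigma, n_fine):
    """One coupled pair of Euler-Maruyama paths for dX = -X dt + sigma dW,
    X(0) = 1. The coarse path (half as many steps) reuses the fine path's
    Brownian increments, which keeps Var[fine - coarse] small."""
    dt = T / n_fine
    xf, xc = 1.0, 1.0
    for _ in range(n_fine // 2):
        dw1 = rng.gauss(0.0, math.sqrt(dt))
        dw2 = rng.gauss(0.0, math.sqrt(dt))
        xf += -xf * dt + sigma * dw1                 # two fine steps of size dt
        xf += -xf * dt + sigma * dw2
        xc += -xc * (2 * dt) + sigma * (dw1 + dw2)   # one coarse step of size 2*dt
    return xf, xc

def mlmc_estimate(T=1.0, sigma=0.2, levels=4, n0=4000, seed=7):
    """Telescoping estimator E[P_0] + sum_l E[P_l - P_{l-1}] of E[X(T)],
    with geometrically fewer samples on the finer, costlier levels."""
    rng = random.Random(seed)
    n_steps = 2
    # Level 0: plain average over the coarsest (2-step) paths.
    est = sum(euler_pair(rng, T, sigma, n_steps)[0] for _ in range(n0)) / n0
    # Correction levels: averages of fine-minus-coarse differences.
    for l in range(1, levels):
        n_steps *= 2
        n_l = max(n0 // 2 ** l, 100)
        s = 0.0
        for _ in range(n_l):
            xf, xc = euler_pair(rng, T, sigma, n_steps)
            s += xf - xc
        est += s / n_l
    return est

# The exact answer is E[X(1)] = exp(-1) ~ 0.368; the estimate carries the
# residual bias of the finest (16-step) Euler discretization.
est = mlmc_estimate()
print(round(est, 3))
```

The paper's cost savings come from allocating samples across levels so the total variance and discretization bias are balanced; this sketch uses a fixed geometric decay instead of the optimal allocation.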
The Information Role of Spot Prices and Inventories
U.S. Energy Information Administration (EIA) Indexed Site
Information Role of Spot Prices and Inventories. James L. Smith, Rex Thompson, and Thomas Lee. June 24, 2014. Independent Statistics & Analysis, www.eia.gov, U.S. Energy Information Administration, Washington, DC 20585. This paper is released to encourage discussion and critical comment. The analysis and conclusions expressed here are those of the authors and not necessarily those of the U.S. Energy Information Administration. WORKING PAPER SERIES, June 2014.
X-ray focal spot locating apparatus and method
Gilbert, Hubert W.
1985-07-30
An X-ray beam finder for locating a focal spot of an X-ray tube includes a mass of X-ray opaque material having first and second axially-aligned, parallel-opposed faces connected by a plurality of substantially identical parallel holes perpendicular to the faces and a film holder for holding X-ray sensitive film tightly against one face while the other face is placed in contact with the window of an X-ray head.
Price convergence in North America natural gas spot markets
King, M.; Cuc, M.
1996-12-01
Government policy changes and subsequent regulatory actions in Canada and the United States (US) in the mid-1980s led to effective deregulation of the commodity market for natural gas. This was done by price deregulation, unbundling of pipeline services, and the fostering of a competitive market through equal and open access to pipeline transportation capacity by all suppliers and users. This paper attempts to measure the degree of price convergence in the North American natural gas spot markets. 38 refs.
Texas students win regional National Science Bowl competition, secure spot
National Nuclear Security Administration (NNSA)
in finals in nation's capital | National Nuclear Security Administration (NNSA). Monday, March 21, 2016 - 10:22am. NPO's Mark Padilla congratulates the winning Amarillo High School Team Black on their victory at the Pantex Science Bowl 2016. More than 200 students from 37 high schools across the Texas Panhandle gathered together with a few hundred volunteers for a meeting and
Exploring theory space with Monte Carlo reweighting
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun
2014-10-13
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
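The event-reweighting idea can be sketched in a toy setting: events generated under one "benchmark" probability density are reused for another model by attaching per-event weights equal to the density ratio. The exponential densities and all names below are illustrative stand-ins for the matrix-element ratios used in practice:

```python
import math
import random

def generate_events(n, lam, seed=3):
    """Monte Carlo 'events' drawn under a benchmark model: here simply an
    exponential density lam * exp(-lam * x) standing in for a full generator."""
    rng = random.Random(seed)
    return [rng.expovariate(lam) for _ in range(n)]

def density(x, lam):
    return lam * math.exp(-lam * x)

def reweight(events, lam_old, lam_new):
    """Per-event importance weights p_new(x) / p_old(x)."""
    return [density(x, lam_new) / density(x, lam_old) for x in events]

events = generate_events(200000, lam=1.0)
weights = reweight(events, lam_old=1.0, lam_new=2.0)

# Self-normalized weighted average: estimates the observable's mean under the
# new model (true value 1/2) without regenerating any events.
mean_new = sum(w * x for w, x in zip(weights, events)) / sum(weights)
print(round(mean_new, 2))
```

The re-use only works well where the two models populate similar regions of phase space; when the new model's density has support the benchmark barely samples, the weights blow up and the effective sample size collapses.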
SU-E-T-239: Monte Carlo Modelling of SMC Proton Nozzles Using TOPAS
Chung, K; Kim, J; Shin, J; Han, Y; Ju, S; Hong, C; Kim, D; Kim, H; Shin, E; Ahn, S; Chung, S; Choi, D
2014-06-01
Purpose: To expedite and cross-check the commissioning of the proton therapy nozzles at Samsung Medical Center using TOPAS. Methods: We have two different types of nozzles at Samsung Medical Center (SMC), a multi-purpose nozzle and a pencil beam scanning dedicated nozzle. Both nozzles have been modelled in Monte Carlo simulation by using TOPAS based on the vendor-provided geometry. The multi-purpose nozzle is mainly composed of wobbling magnets, scatterers, ridge filters and multi-leaf collimators (MLC). Including patient specific apertures and compensators, all the parts of the nozzle have been implemented in TOPAS following the geometry information from the vendor. The dedicated scanning nozzle has a simpler structure than the multi-purpose nozzle, with a vacuum pipe at the downstream end of the nozzle. A simple water tank volume has been implemented to measure the dosimetric characteristics of proton beams from the nozzles. Results: We have simulated the two proton beam nozzles at SMC. Two different ridge filters have been tested for the spread-out Bragg peak (SOBP) generation of the wobbling mode in the multi-purpose nozzle. The spot sizes and lateral penumbra in the two nozzles have been simulated and analyzed using a double Gaussian model. Using parallel geometry, both the depth dose curve and dose profile have been measured simultaneously. Conclusion: The proton therapy nozzles at SMC have been successfully modelled in Monte Carlo simulation using TOPAS. We will perform a validation with measured base data and then use the MC simulation to interpolate/extrapolate the measured data. We believe it will expedite the commissioning process of the proton therapy nozzles at SMC.
Friction Stir Spot Welding of Advanced High Strength Steels
Hovanski, Yuri; Santella, M. L.; Grant, Glenn J.
2009-12-28
Friction stir spot welding was used to join two advanced high-strength steels using polycrystalline cubic boron nitride tooling. Numerous tool designs were employed to study the influence of tool geometry on weld joints produced in both DP780 and a hot-stamp boron steel. Tool designs included conventional, concave shouldered pin tools with several pin configurations; a number of shoulderless designs; and a convex, scrolled shoulder tool. Weld quality was assessed based on lap shear strength, microstructure, microhardness, and bonded area. Mechanical properties were functionally related to bonded area and joint microstructure, demonstrating the necessity to characterize processing windows based on tool geometry.
Algorithms for a spot price responding residential load controller
Schweppe, F.C.; Daryanian, B.; Tabors, R.D.
1989-05-01
Increased unbundling of electric utility services has become a major interest in the industry. This paper presents a description of the logic and structure for a set of spot price based algorithms designed for use in residential load control systems. The paper presents the functions to be fulfilled by such a price responding device and describes the end use devices available in residences and the control logics applicable to each. The paper concludes that there is a need to understand customer attitudes and acceptance in the design of the response strategies and in the design of the man machine interface.
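The paper's algorithms are not reproduced here; as an illustration of the kind of control logic a price-responding residential device might apply, below is a minimal sketch for a storage water heater. All names and thresholds are hypothetical, and the comfort-override behavior is an assumption:

```python
# Hypothetical one-interval control decision for a storage water heater
# responding to a spot price; thresholds are illustrative only.

def heater_on(spot_price, tank_temp_F,
              price_ceiling=0.15,   # $/kWh above which heating is deferred
              temp_floor_F=110.0,   # comfort constraint, always honored
              temp_target_F=140.0): # storage-full setpoint
    """Return True if the heater should run during this interval."""
    if tank_temp_F < temp_floor_F:
        return True                 # comfort overrides price
    if tank_temp_F >= temp_target_F:
        return False                # storage already full
    return spot_price <= price_ceiling  # otherwise heat only when cheap

print(heater_on(0.25, 105.0))  # comfort floor violated: run anyway
print(heater_on(0.25, 120.0))  # price spike, comfort ok: defer
print(heater_on(0.08, 120.0))  # cheap power: recharge storage
```

This captures the paper's point that end-use devices with storage can shift consumption toward low-price intervals while a comfort constraint bounds the customer impact.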
Monte Carlo Implementation Of Up- Or Down-Scattering Due To Collisions With Material At Finite Temperature
Office of Scientific and Technical Information (OSTI)
Recent advances and future prospects for Monte Carlo
Brown, Forrest B
2010-01-01
The history of Monte Carlo methods is closely linked to that of computers: The first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.
Hot spot-derived shock initiation phenomena in heterogeneous nitromethane
Dattelbaum, Dana M; Sheffield, Stephen A; Stahl, David B; Dattelbaum, Andrew M
2009-01-01
The addition of solid silica particles to gelled nitromethane offers a tractable model system for interrogating the role of impedance mismatches as one type of hot spot 'seed' on the initiation behaviors of explosive formulations. Gas gun-driven plate impact experiments are used to produce well-defined shock inputs into nitromethane-silica mixtures containing size-selected silica beads at 6 wt%. The Pop-plots, or relationships between shock input pressure and run distance (or time) to detonation, for mixtures containing small (1-4 µm) and large (40 µm) beads are presented. Overall, the addition of beads was found to influence the shock sensitivity of the mixtures, with the smaller beads being more sensitizing than the larger beads, lowering the shock initiation threshold for the same run distance to detonation compared with neat nitromethane. In addition, the use of embedded electromagnetic gauges provides detailed information pertaining to the mechanism of the build-up to detonation and associated reactive flow. Of note, an initiation mechanism characteristic of homogeneous liquid explosives, such as nitromethane, was observed in the nitromethane-40 µm diameter silica samples at high shock input pressures, indicating that the influence of hot spots on the initiation process was minimal under these conditions.
March market review. [Spot market prices for uranium (1993)]
Not Available
1993-04-01
The spot market price for uranium in unrestricted markets weakened further during March, and at month end, the NUEXCO Exchange Value had fallen $0.15, to $7.45 per pound U3O8. The Restricted American Market Penalty (RAMP) for concentrates increased $0.15, to $2.55 per pound U3O8. Ample UF6 supplies and limited demand led to a $0.50 decrease in the UF6 Value, to $25.00 per kgU as UF6, while the RAMP for UF6 increased $0.75, to $5.25 per kgU. Nine near-term uranium transactions were reported, totalling almost 3.3 million pounds equivalent U3O8. This is the largest monthly spot market volume since October 1992, and is double the volume reported in January and February. The March 31 Conversion Value was $4.25 per kgU as UF6. Beginning with the March 31 Value, NUEXCO now reports its Conversion Value in US dollars per kilogram of uranium (US$/kgU), reflecting current industry practice. The March loan market was inactive with no transactions reported. The Loan Rate remained unchanged at 3.0 percent per annum. Low demand and increased competition among sellers led to a one-dollar decrease in the SWU Value, to $65 per SWU, and the RAMP for SWU declined one dollar, to $9 per SWU.
Diode magnetic-field influence on radiographic spot size
Ekdahl, Carl A. Jr.
2012-09-04
Flash radiography of hydrodynamic experiments driven by high explosives is a well-known diagnostic technique in use at many laboratories. The Dual-Axis Radiography for Hydrodynamic Testing (DARHT) facility at Los Alamos was developed for flash radiography of large hydrodynamic experiments. Two linear induction accelerators (LIAs) produce the bremsstrahlung radiographic source spots for orthogonal views of each experiment ('hydrotest'). The 2-kA, 20-MeV Axis-I LIA creates a single 60-ns radiography pulse. For time resolution of the hydrotest dynamics, the 1.7-kA, 16.5-MeV Axis-II LIA creates up to four radiography pulses by slicing them out of a longer pulse that has a 1.6-µs flattop. Both axes now routinely produce radiographic source spot sizes having full-width at half-maximum (FWHM) less than 1 mm. To further improve on the radiographic resolution, one must consider the major factors influencing the spot size: (1) beam convergence at the final focus; (2) beam emittance; (3) beam canonical angular momentum; (4) beam-motion blur; and (5) beam-target interactions. Beam emittance growth and motion in the accelerators have been addressed by careful tuning. Defocusing by beam-target interactions has been minimized through tuning of the final focus solenoid for optimum convergence and other means. Finally, the beam canonical angular momentum is minimized by using a 'shielded source' of electrons. An ideal shielded source creates the beam in a region where the axial magnetic field is zero, and thus the canonical momentum is zero, since the beam is born with no mechanical angular momentum. It then follows from Busch's conservation theorem that the canonical angular momentum is minimized at the target, at least in principle. In the DARHT accelerators, the axial magnetic field at the cathode is minimized by using a 'bucking coil' solenoid with reverse polarity to cancel out whatever solenoidal beam transport field exists there. This is imperfect in practice, because of
Seven federally protected Mexican spotted owl chicks hatch on Los Alamos
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
National Laboratory property. Biologists located a record seven federally threatened Mexican spotted owl chicks on Los Alamos National Laboratory property during nest surveys last month. July 13, 2015. A parent owl sits with two chicks.
Monte Carlo Hauser-Feshbach Calculations of Prompt Fission Neutrons and Gamma Rays: Application to Thermal Neutron-Induced Fission Reactions on U-235 and Pu-239
Office of Scientific and Technical Information (OSTI)
Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: A path for the optimization of low-energy many-body bases
Office of Scientific and Technical Information (OSTI)
Efficient Monte Carlo Simulations of Gas Molecules Inside Porous Materials
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
J. Kim and B. Smit, J. Chem. Theory Comput. 8 (7), 2336 (2012). DOI: 10.1021/ct3003699
Monte Carlo Hybrid Applied to Binary Stochastic Mixtures
Energy Science and Technology Software Center (OSTI)
2008-08-11
The purpose of this set of codes is to use an inexpensive, approximate deterministic flux distribution to generate weight windows, which will then be used to bound particle weights for the Monte Carlo code run. The process is not automated; the user must run the deterministic code and use the output file as a command-line argument for the Monte Carlo code. Two sets of text input files are included as test problems/templates.
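The hybrid scheme described above can be sketched in a few lines: a cheap deterministic (adjoint) flux sets cell-wise weight-window centers, and the Monte Carlo run then splits particles whose weight exceeds the window and plays Russian roulette on those below it. The flux values and the factor-of-two window width here are assumptions for illustration, not the code's actual input format:

```python
import random

random.seed(2)

# Hypothetical cell-wise approximate adjoint flux from a deterministic
# solve; the real code reads this from the deterministic output file.
adjoint_flux = [0.01, 0.1, 1.0, 10.0]

# Window centers inversely proportional to importance, normalized so the
# source cell (index 0) has center 1.0; bounds are [center/2, 2*center].
centers = [adjoint_flux[0] / phi for phi in adjoint_flux]
windows = [(c / 2.0, 2.0 * c) for c in centers]

def apply_window(weight, cell):
    """Split or roulette one particle; returns the surviving weights."""
    lo, hi = windows[cell]
    if weight > hi:                      # split heavy particles
        n = int(weight / hi) + 1
        return [weight / n] * n
    if weight < lo:                      # roulette light particles
        center = (lo + hi) / 2.0
        return [center] if random.random() < weight / center else []
    return [weight]

parts = apply_window(5.0, 3)   # overweight particle deep in the problem
print(len(parts), round(sum(parts), 6))  # many fragments, weight conserved
```

Splitting conserves total weight exactly, and roulette conserves it on average, which is what makes the windows a fair variance-reduction device.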
Fast Monte Carlo for radiation therapy: the PEREGRINE Project (Conference)
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Modeling Hot-Spot Contributions in Shocked High Explosives at the Mesoscale
Harrier, Danielle
2015-08-12
When looking at the performance of high explosives, the defects within the explosive become very important. Plastic bonded explosives, or PBXs, contain voids of air and binder between the particles of explosive material that aid in the ignition of the explosive. These voids collapse under high-pressure shock conditions, which leads to the formation of hot spots. Hot spots are localized high-temperature and high-pressure regions that cause significant changes in the way the explosive material detonates. Hot spots have previously been overlooked in modeling, but scientists are now realizing their importance, and new modeling systems that can accurately model hot spots are under development.
Effects of High Shock Pressures and Pore Morphology on Hot Spot Mechanisms in HMX
Office of Scientific and Technical Information (OSTI)
Spot Prices for Crude Oil and Petroleum Products
U.S. Energy Information Administration (EIA) Indexed Site
Spot Prices (Crude Oil in Dollars per Barrel, Products in Dollars per Gallon)

Product by Area            08/30/16  08/31/16  09/01/16  09/02/16  09/05/16  09/06/16  History
WTI - Cushing, Oklahoma       46.32     44.68     43.17     44.39     44.39     44.85   1986-2016
Brent - Europe                47.94     47.94     45.05     45.96     46.72     46.21   1987-2016
Conventional Gasoline, New York Harbor, Regular
Morphological changes in ultrafast laser ablation plumes with varying spot size
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Harilal, S. S.; Diwakar, P. K.; Polek, M. P.; Phillips, M. C.
2015-06-04
We investigated the role of spot size on plume morphology during ultrafast laser ablation of metal targets. Our results show that the spatial features of fs LA plumes are strongly dependent on the focal spot size. Two-dimensional self-emission images showed that the shape of the ultrafast laser ablation plumes changes from spherical to cylindrical with an increasing spot size from 100 to 600 µm. The changes in plume morphology and internal structures are related to ion emission dynamics from the plasma, where broader angular ion distribution and faster ions are noticed for the smallest spot size used. The present results clearly show that the morphological changes in the plume with spot size are independent of laser pulse width.
Mechanism of cathode spot splitting in vacuum arcs in an oblique magnetic field
Beilis, I. I.
2015-10-15
Experiments in the last decade showed that for cathode spots in a magnetic field that obliquely intercepts the cathode surface, the current per spot increased with the transverse component of the magnetic field and decreased with the normal component. The present work analyzes the nature of cathode spot splitting in an oblique magnetic field. A physical model for cathode spot current splitting was developed, which considered the relation between the plasma kinetic pressure, self-magnetic pressure, and applied magnetic pressure in a current carrying cathode plasma jet. The current per spot was calculated, and it was found to increase with the tangential component of the magnetic field and to decrease with the normal component, which agrees well with the experimental dependence.
SU-E-J-72: Geant4 Simulations of Spot-Scanned Proton Beam Treatment Plans
Kanehira, T; Sutherland, K; Matsuura, T; Umegaki, K; Shirato, H
2014-06-01
Purpose: To evaluate density inhomogeneities which can affect dose distributions for real-time image gated spot-scanning proton therapy (RGPT), a dose calculation system, using treatment planning system VQA (Hitachi Ltd., Tokyo) spot position data, was developed based on Geant4. Methods: A Geant4 application was developed to simulate spot-scanned proton beams at Hokkaido University Hospital. A CT scan (0.98 × 0.98 × 1.25 mm) was performed for prostate cancer treatment with three or four inserted gold markers (diameter 1.5 mm, volume 1.77 mm3) in or near the target tumor. The CT data was read into VQA. A spot scanning plan was generated and exported to text files, specifying the beam energy and position of each spot. The text files were converted and read into our Geant4-based software. The spot position was converted into steering magnet field strength (in Tesla) for our beam nozzle. Individual protons were tracked from the vacuum chamber, through the helium chamber, steering magnets, dose monitors, etc., in a straight, horizontal line. The patient CT data was converted into materials with variable density and placed in a parametrized volume at the isocenter. Gold fiducial markers were represented in the CT data by two adjacent voxels (volume 2.38 mm3). 600,000 proton histories were tracked for each target spot. As one beam contained about 1,000 spots, approximately 600 million histories were recorded for each beam on a blade server. Two plans were considered: two-beam horizontal opposed (90 and 270 degrees) and three-beam (0, 90 and 270 degrees). Results: We are able to convert spot scanning plans from VQA and simulate them with our Geant4-based code. Our system can be used to evaluate the effect of dose reduction caused by gold markers used for RGPT. Conclusion: Our Geant4 application is able to calculate dose distributions for spot scanned proton therapy.
DRAMATIC CHANGE IN JUPITER'S GREAT RED SPOT FROM SPACECRAFT OBSERVATIONS
Simon, Amy A.; Wong, Michael H.; De Pater, Imke; Rogers, John H.; Orton, Glenn S.; Carlson, Robert W.; Asay-Davis, Xylar; Marcus, Philip S.
2014-12-20
Jupiter's Great Red Spot (GRS) is one of its most distinct and enduring features. Since the advent of modern telescopes, keen observers have noted its appearance and documented a change in shape from very oblong to oval, confirmed in measurements from spacecraft data. It currently spans the smallest latitude and longitude size ever recorded. Here we show that this change has been accompanied by an increase in cloud/haze reflectance as sensed in methane gas absorption bands, increased absorption at wavelengths shorter than 500 nm, and an increased spectral slope between 500 and 630 nm. These changes occurred between 2012 and 2014, without a significant change in internal tangential wind speeds; the decreased size results in a 3.2-day horizontal cloud circulation period, shorter than previously observed. As the GRS has narrowed in latitude, it interacts less with the jets flanking its north and south edges, perhaps allowing for less cloud mixing and longer UV irradiation of cloud and aerosol particles. Given its long life and observational record, we expect that future modeling of the GRS's changes, in concert with laboratory flow experiments, will drive our understanding of vortex evolution and stability in a confined flow field crucial for comparison with other planetary atmospheres.
May market review. [Spot market prices for uranium (1993)]
Not Available
1993-06-01
Seven uranium transactions totalling nearly three million pounds equivalent U3O8 were reported during May, but only two, totalling less than 200 thousand pounds equivalent U3O8, involved concentrates. As no discretionary buying occurred during the month, and as near-term supply and demand were in relative balance, prices were steady, while both buyers and sellers appeared to be awaiting some new market development to signal the direction of future spot-market prices. The May 31, 1993, Exchange Value and the Restricted American Market Penalty (RAMP) for concentrates were both unchanged, at $7.10 and $2.95 per pound U3O8, respectively. NUEXCO's judgement was that transactions for significant quantities of uranium concentrates that were both deliverable in and intended for consumption in the USA could have been concluded on May 31 at $10.05 per pound U3O8. Two near-term concentrate transactions were reported in which one US utility purchased less than 200 thousand pounds equivalent U3O8 from two separate sellers. These sales occurred at price levels at or near the May 31 Exchange Value plus RAMP. No long-term uranium transactions were reported during May. Consequently, the UF6 Value decreased $0.20 to $24.30 per kgU as UF6, reflecting some weakening of the UF6 market outside the USA.
Duo at Santa Fe's Monte del Sol Charter
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Duo at Santa Fe's Monte del Sol Charter School takes top award in 25th New Mexico Supercomputing Challenge. April 21, 2015. Using nanotechnology robots to kill cancer cells. LOS ALAMOS, N.M., April 21, 2015 - Meghan Hill and Katelynn James of Santa Fe's Monte del Sol Charter School took the top prize in the 25th New Mexico Supercomputing Challenge Tuesday at Los Alamos National Laboratory for their research project, "Using Concentrated Heat Systems to Shock the P53 Protein to Direct Cancer into
Electrophoretic extraction of proteins from two-dimensional electrophoresis gel spots
Zhang, Jian-Shi; Giometti, C.S.; Tollaksen, S.L.
1987-09-04
After two-dimensional electrophoresis of proteins or the like, resulting in a polyacrylamide gel slab having a pattern of protein gel spots thereon, an individual protein gel spot is cored out from the slab, to form a gel spot core which is placed in an extraction tube, with a dialysis membrane across the lower end of the tube. Replicate gel spots can be cored out from replicate gel slabs and placed in the extraction tube. Molten agarose gel is poured into the extraction tube where the agarose gel hardens to form an immobilizing gel, covering the gel spot cores. The upper end portion of the extraction tube is filled with a volume of buffer solution, and the upper end is closed by another dialysis membrane. Upper and lower bodies of a buffer solution are brought into contact with the upper and lower membranes and are provided with electrodes connected to the positive and negative terminals of a dc power supply, thereby producing an electrical current which flows through the upper membrane, the volume of buffer solution, the agarose, the gel spot cores and the lower membrane. The current causes the proteins to be extracted electrophoretically from the gel spot cores, so that the extracted proteins accumulate and are contained in the space between the agarose gel and the upper membrane. 8 figs.
WE-E-BRE-04: Dual Focal Spot Dose Painting for Precision Preclinical Radiobiological Investigations
Stewart, J; Lindsay, P; Jaffray, D
2014-06-15
Purpose: Recent progress in small animal radiotherapy systems has provided the foundation for delivering the heterogeneous, millimeter scale dose distributions demanded by preclinical radiobiology investigations. Despite advances in preclinical dose planning, delivery of highly heterogeneous dose distributions is constrained by the fixed collimation systems and large x-ray focal spot common in small animal radiotherapy systems. This work proposes a dual focal spot dose optimization and delivery method with a large x-ray focal spot used to deliver homogeneous dose regions and a small focal spot to paint spatially heterogeneous dose regions. Methods: Two-dimensional dose kernels were measured for a 1 mm circular collimator with radiochromic film at 10 mm depth in a solid water phantom for the small and large x-ray focal spots on a recently developed small animal microirradiator. These kernels were used in an optimization framework which segmented a desired dose distribution into low- and high-spatial frequency regions for delivery by the large and small focal spot, respectively. For each region, the method determined an optimal set of stage positions and beam-on times. The method was demonstrated by optimizing a bullseye pattern consisting of a 0.75 mm radius circular target and 0.5 and 1.0 mm wide rings alternating between 0 and 2 Gy. Results: Compared to a large focal spot technique, the dual focal spot technique improved the optimized dose distribution: 69.2% of the optimized dose was within 0.5 Gy of the intended dose for the large focal spot, compared to 80.6% for the dual focal spot method. The dual focal spot design required 14.0 minutes of optimization, and will require 178.3 minutes for automated delivery. Conclusion: The dual focal spot optimization and delivery framework is a novel option for delivering conformal and heterogeneous dose distributions at the preclinical level and provides a new experimental option for unique radiobiological investigations.
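The low/high spatial-frequency segmentation step above can be illustrated in one dimension: a blur standing in for the large-spot kernel extracts the smooth component, and the small spot delivers the residual. This is a sketch of the decomposition idea only; the actual method optimizes stage positions and beam-on times against measured two-dimensional kernels, and the moving-average "kernel" here is an assumption:

```python
# Illustrative split of a desired 1-D dose profile into a smooth component
# (for the large focal spot) and a heterogeneous residual (for the small
# spot). The moving average stands in for the measured large-spot kernel.

def moving_average(profile, half_width=2):
    """Simple blur: mean over a window of +/- half_width samples."""
    n = len(profile)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        out.append(sum(profile[lo:hi]) / (hi - lo))
    return out

desired = [0, 0, 2, 2, 0, 0, 2, 2, 0, 0]   # alternating 0/2 Gy rings, 1-D
large_spot = moving_average(desired)        # low-frequency component
small_spot = [d - l for d, l in zip(desired, large_spot)]  # residual detail

# The two deliveries sum back to the desired dose at every sample.
total = [l + s for l, s in zip(large_spot, small_spot)]
print(total == desired)
```

Note that the residual can be negative at some samples; in a physical delivery the optimization must constrain beam-on times to be non-negative, which is part of what the paper's framework handles.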
April market review. [Spot market prices for uranium (1993)]
Not Available
1993-05-01
The spot market price for uranium outside the USA weakened further during April, and at month end, the NUEXCO Exchange Value had fallen $0.35, to $7.10 per pound U3O8. This is the lowest Exchange Value observed in nearly twenty years, comparable to Values recorded during the low price levels of the early 1970s. The Restricted American Market Penalty (RAMP) for concentrates increased $0.40, to $2.95 per pound U3O8. Transactions for significant quantities of uranium concentrates that are both deliverable in and intended for consumption in the USA could have been concluded on April 30 at $10.05 per pound U3O8, up $0.05 from the sum of corresponding March Values. Four near-term concentrates transactions were reported, totalling nearly 1.5 million pounds equivalent U3O8. One long-term sale was reported. The UF6 Value also declined, as increased competition among sellers led to a $0.50 decrease, to $24.50 per kgU as UF6. However, the RAMP for UF6 increased $0.65, to $5.90 per kgU as UF6, reflecting an effective US market level of $30.40 per kgU. Two near term transactions were reported totalling approximately 1.1 million pounds equivalent U3O8. In total, eight uranium transactions totalling 28 million pounds equivalent U3O8 were reported, which is about average for April market activity.
Bayesian Monte Carlo Method for Nuclear Data Evaluation
Koning, A.J.
2015-01-15
A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment based weight.
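The "collection of random files, each accompanied by an experiment based weight" amounts to importance-weighting random parameter draws by their likelihood against measured data. A toy sketch with a two-parameter stand-in for the TALYS model (all densities, data values, and parameters here are invented for illustration):

```python
import math
import random

random.seed(3)

def model(a, b):
    """Toy stand-in for the reaction code: two parameters, two observables."""
    return [a + b, a - b]

data = [3.0, 1.0]     # assumed "experimental" values
sigma = [0.2, 0.2]    # assumed experimental uncertainties

# Step 1: a collection of random files (random parameter draws from a prior).
files = [(random.gauss(2.0, 0.5), random.gauss(1.0, 0.5))
         for _ in range(20_000)]

# Step 2: an experiment-based weight for each random file.
def weight(params):
    chi2 = sum(((m - d) / s) ** 2
               for m, d, s in zip(model(*params), data, sigma))
    return math.exp(-chi2 / 2.0)

ws = [weight(p) for p in files]
wsum = sum(ws)

# Step 3: weighted mean and covariance of the parameters.
mean = [sum(w * p[i] for w, p in zip(ws, files)) / wsum for i in range(2)]
cov = [[sum(w * (p[i] - mean[i]) * (p[j] - mean[j])
            for w, p in zip(ws, files)) / wsum
        for j in range(2)] for i in range(2)]
print([round(m, 2) for m in mean])   # posterior means near a=2, b=1
```

The surviving weighted files, or the weighted covariance matrix computed from them, are the two output forms the abstract mentions.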
Monte Carlo event generators for hadron-hadron collisions
Knowles, I.G.; Protopopescu, S.D.
1993-06-01
A brief review of Monte Carlo event generators for simulating hadron-hadron collisions is presented. Particular emphasis is placed on comparisons of the approaches used to describe physics elements and identifying their relative merits and weaknesses. This review summarizes a more detailed report.
Monte-Carlo simulation of noise in hard X-ray Transmission Crystal Spectrometers: ...
Office of Scientific and Technical Information (OSTI)
Société d'exploitation du parc éolien de Mont d'Hézecques SARL
Name: Société d'exploitation du parc éolien de Mont d'Hézecques SARL. Place: ...
Calculation of the fast ion tail distribution for a spherically symmetric hot spot
McDevitt, C. J.; Tang, X.-Z.; Guo, Z.; Berk, H. L.
2014-10-15
The fast ion tail for a spherically symmetric hot spot is computed via the solution of a simplified Fokker-Planck collision operator. Emphasis is placed on describing the energy scaling of the fast ion distribution function in the hot spot as well as the surrounding cold plasma throughout a broad range of collisionalities and temperatures. It is found that while the fast ion tail inside the hot spot is significantly depleted, leading to a reduction of the fusion yield in this region, a surplus of fast ions is observed in the neighboring cold plasma region. The presence of this surplus of fast ions in the neighboring cold region is shown to result in a partial recovery of the fusion yield lost in the hot spot.
An Assessment of Prices of Natural Gas Futures Contracts as a Predictor of Realized Spot Prices
Reports and Publications (EIA)
2005-01-01
This article compares realized Henry Hub spot market prices for natural gas during the three most recent winters with futures prices as they evolve from April through the following February, when trading for the March contract ends.
DOE Science Showcase - Monte Carlo Methods
Office of Scientific and Technical Information (OSTI)
Monte Carlo calculation methods are algorithms for solving various kinds of computational problems by using (pseudo)random numbers. Developed in the 1940s during the Manhattan Project, the Monte Carlo method signified a radical change in how scientists solved problems. Learn about the ways these methods are used in DOE's research endeavors today in "Monte Carlo Methods" by Dr. William Watson, Physicist, OSTI staff.
Electrophoretic extraction of proteins from two-dimensional electrophoresis gel spots
Zhang, Jian-Shi; Giometti, Carol S.; Tollaksen, Sandra L.
1989-01-01
After two-dimensional electrophoresis of proteins or the like, resulting in a polyacrylamide gel slab having a pattern of protein gel spots thereon, an individual protein gel spot is cored out from the slab, to form a gel spot core which is placed in an extraction tube, with a dialysis membrane across the lower end of the tube. Replicate gel spots can be cored out from replicate gel slabs and placed in the extraction tube. Molten agarose gel is poured into the extraction tube where the agarose gel hardens to form an immobilizing gel, covering the gel spot cores. The upper end portion of the extraction tube is filled with a volume of buffer solution, and the upper end is closed by another dialysis membrane. Upper and lower bodies of a buffer solution are brought into contact with the upper and lower membranes and are provided with electrodes connected to the positive and negative terminals of a DC power supply, thereby producing an electrical current which flows through the upper membrane, the volume of buffer solution, the agarose, the gel spot cores and the lower membrane. The current causes the proteins to be extracted electrophoretically from the gel spot cores, so that the extracted proteins accumulate and are contained in the space between the agarose gel and the upper membrane. A high percentage extraction of proteins is achieved. The extracted proteins can be removed and subjected to partial digestion by trypsin or the like, followed by two-dimensional electrophoresis, resulting in a gel slab having a pattern of peptide gel spots which can be cored out and subjected to electrophoretic extraction to extract individual peptides.
Spot size dependence of laser accelerated protons in thin multi-ion foils
Liu, Tung-Chang; Shao, Xi; Liu, Chuan-Sheng; Eliasson, Bengt; Wang, Jyhpyng; Chen, Shih-Hung
2014-06-15
We present a numerical study of the effect of the laser spot size of a circularly polarized laser beam on the energy of quasi-monoenergetic protons in laser proton acceleration using a thin carbon-hydrogen foil. The proton acceleration scheme used is a combination of laser radiation pressure and shielded Coulomb repulsion due to the carbon ions. We observe that the spot size plays a crucial role in determining the net charge of the electron-shielded carbon ion foil and consequently the efficiency of proton acceleration. For a laser pulse with fixed input energy and pulse length impinging on a carbon-hydrogen foil, smaller spot sizes generate higher-energy but fewer quasi-monoenergetic protons. We studied the scaling of the proton energy with respect to the laser spot size and obtained an optimal spot size for maximum proton energy flux. Using the optimal spot size, we can generate an 80 MeV quasi-monoenergetic proton beam containing more than 10⁸ protons using a laser beam with power 250 TW and energy 10 J and a target of thickness 0.15 wavelength and 49 critical density made of 90% carbon and 10% hydrogen.
Wang, Z; Gao, M
2014-06-01
Purpose: Monte Carlo simulation plays an important role in the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to a few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4 based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bits, Amazon EC2). Single spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of StarCluster software developed at MIT, a Linux cluster with 2100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created with an on-demand instance and worker nodes were created with spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.
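The reported < 2% uncertainty from ten million events reflects the 1/√N convergence of independent Monte Carlo samples. A minimal sketch of that scaling, assuming a hypothetical per-event relative spread (the abstract does not give the actual tally variance):

```python
import math

def relative_uncertainty(n_events: int, rel_sigma_1: float = 1.0) -> float:
    """Relative statistical uncertainty of a Monte Carlo tally under the
    1/sqrt(N) scaling of independent samples; rel_sigma_1 is an assumed
    relative spread of a single event's score."""
    return rel_sigma_1 / math.sqrt(n_events)

def events_for_target(target: float, rel_sigma_1: float = 1.0) -> int:
    """Events needed to bring the relative uncertainty down to target."""
    return math.ceil((rel_sigma_1 / target) ** 2)

print(relative_uncertainty(10_000_000))  # uncertainty at 10 million events
print(events_for_target(0.02))           # events needed for a 2% target
```

Halving the uncertainty therefore costs four times the events, which is why cheap, elastic compute is attractive for MC dose calculation.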
The Monte Carlo Independent Column Approximation Model Intercomparison
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Project (McMIP). Participants: Barker, Howard (Meteorological Service of Canada); Cole, Jason (Meteorological Service of Canada); Raisanen, Petri (Finnish Meteorological Institute); Pincus, Robert (NOAA-CIRES Climate Diagnostics Center); Morcrette, Jean-Jacques (European Centre for Medium-Range Weather Forecasts); Li, Jiangnan (Canadian Center for Climate Modelling); Stephens, Graeme (Colorado State University); Vaillancourt, Paul
Quantum Monte Carlo Calculations in Nuclear Theory | Argonne Leadership
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Computing Facility. Blue Gene/Q scaling: this figure shows Blue Gene/Q scaling with respect to increasing number of nodes for calculations of the first isospin-1 state of ¹²C. The good multinode scaling is a result of the ADLB library. Quantum Monte Carlo Calculations in Nuclear Theory. PI Name: Steven Pieper; PI Email: spieper@anl.gov; Institution: Argonne National Laboratory; Allocation Program: ESP; Year: 2015; Research Domain: Physics. Tier 2 Code Development Project
Calculations of pair production by Monte Carlo methods
Bottcher, C.; Strayer, M.R.
1991-01-01
We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. Answering these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.
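A generic sketch of Monte Carlo estimation of a multi-dimensional integral over the unit hypercube, with the usual one-sigma error bar (this is the standard textbook estimator, not the authors' algorithm):

```python
import math
import random

def mc_integrate(f, dim: int, n_samples: int, seed: int = 0):
    """Monte Carlo estimate of the integral of f over [0,1]^dim,
    returning (mean, one-sigma statistical error)."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n_samples):
        x = [rng.random() for _ in range(dim)]
        fx = f(x)
        total += fx
        total_sq += fx * fx
    mean = total / n_samples
    var = total_sq / n_samples - mean * mean
    return mean, (var / n_samples) ** 0.5

# Check case: integral of x0*x1*...*x5 over [0,1]^6 is (1/2)^6 = 0.015625
est, err = mc_integrate(lambda x: math.prod(x), dim=6, n_samples=200_000)
print(f"{est:.5f} ± {err:.5f}")
```

The 1/√N error independent of dimension is what makes Monte Carlo competitive for the high-dimensional integrals arising from Feynman diagrams.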
Monte Carlo calculations for r-process nucleosynthesis
Mumpower, Matthew Ryan
2015-11-12
A Monte Carlo framework is developed for exploring the impact of nuclear model uncertainties on the formation of the heavy elements. Mass measurements tightly constrain the macroscopic sector of FRDM2012. For r-process nucleosynthesis, it is necessary to understand the microscopic physics of the nuclear model employed. A combined approach of measurements and a deeper understanding of the microphysics is thus warranted to elucidate the site of the r-process.
Olson, R. E.; Leeper, R. J.
2013-09-15
The baseline DT ice layer inertial confinement fusion (ICF) ignition capsule design requires a hot spot convergence ratio of ∼34 with a hot spot that is formed from DT mass originally residing in a very thin layer at the inner DT ice surface. In the present paper, we propose alternative ICF capsule designs in which the hot spot is formed mostly or entirely from mass originating within a spherical volume of DT vapor. Simulations of the implosion and hot spot formation in two DT liquid layer ICF capsule concepts—the DT wetted hydrocarbon (CH) foam concept and the “fast formed liquid” (FFL) concept—are described and compared to simulations of standard DT ice layer capsules. 1D simulations are used to compare the drive requirements, the optimal shock timing, the radial dependence of hot spot specific energy gain, and the hot spot convergence ratio in low vapor pressure (DT ice) and high vapor pressure (DT liquid) capsules. 2D simulations are used to compare the relative sensitivities to low-mode x-ray flux asymmetries in the DT ice and DT liquid capsules. It is found that the overall thermonuclear yields predicted for DT liquid layer capsules are less than yields predicted for DT ice layer capsules in simulations using comparable capsule size and absorbed energy. However, the wetted foam and FFL designs allow for flexibility in hot spot convergence ratio through the adjustment of the initial cryogenic capsule temperature and, hence, DT vapor density, with a potentially improved robustness to low-mode x-ray flux asymmetry.
An investigation of the dynamic separation of spot welds under plane tensile pulses
Ma, Bohan; Fan, Chunlei; Chen, Danian; Wang, Huanran; Zhou, Fenghua
2014-08-07
We performed ultra-high-speed tests for purely opening spot welds using plane tensile pulses. A gun system generated a parallel impact of a projectile plate onto a welded plate. Induced by the interactions of the release waves, the welded plate opened purely under the plane tensile pulses. We used the laser velocity interferometer system for any reflector to measure the velocity histories of the free surfaces of the free part and the spot weld of the welded plate. We then used a scanning electron microscope to investigate the recovered welded plates. We found that the interfacial failure mode was mainly a brittle fracture and the cracks propagated through the spot nugget, while the partial interfacial failure mode was a mixed fracture comprising both ductile and brittle fracture. We used the measured velocity histories to evaluate the tension stresses in the free part and the spot weld of the welded plate by applying the characteristic theory. We also discussed the different constitutive behaviors of the metals under plane shock loading and under uniaxial split Hopkinson pressure bar tests. We then compared the numerically simulated velocity histories of the free surfaces of the free part and the spot weld of the welded plate with the measured results. The numerical simulations made use of the fracture stress criteria, and then the computed fracture modes of the tests were compared with the recovered results.
Real-time spot size camera for pulsed high-energy radiographic machines
Watson, S.A.
1993-06-01
The focal spot size of an x-ray source is a critical parameter which degrades resolution in a flash radiograph. For best results, a small round focal spot is required. Therefore, a fast and accurate measurement of the spot size is highly desirable to facilitate machine tuning. This paper describes two systems developed for Los Alamos National Laboratory's Pulsed High-Energy Radiographic Machine Emitting X-rays (PHERMEX) facility. The first uses a CCD camera combined with high-brightness fluors, while the second utilizes phosphor storage screens. Other techniques typically record only the line spread function on radiographic film, while systems in this paper measure the more general two-dimensional point-spread function and associated modulation transfer function in real time for shot-to-shot comparison.
Six years of monitoring annual changes in a freshwater marsh with SPOT HRV data
Mackey, H.E. Jr.
1992-12-01
Fifteen dates of spring-time SPOT HRV data along with near-concurrent vertical aerial photographic and phenological data from spring 1987 through spring 1992 were analyzed to monitor annual changes in a 150-hectare, southeastern floodplain marsh. The marsh underwent rapid changes during the six years from a swamp dominated by non-persistent, thermally tolerant macrophytes to persistent macrophyte and shrub-scrub communities as reactor discharges declined to Pen Branch. Savannah River flooding was also important in the timing of the shift of these wetland communities. SPOT HRV data proved to be an efficient and effective method to monitor trends in these wetland community changes.
Electron depletion via cathode spot dispersion of dielectric powder into an overhead plasma
Gillman, Eric D.; Foster, John E.
2013-11-15
The effectiveness of cathode spot delivered dielectric particles for the purpose of plasma depletion is investigated. Here, cathode spot flows kinetically entrain and accelerate dielectric particles originally at rest into a background plasma. The time variation of the background plasma density is tracked using a cylindrical Langmuir probe biased approximately at electron saturation. As inferred from changes in the electron saturation current, depletion fractions of up to 95% are observed. This method could be exploited as a means of communications blackout mitigation for manned and unmanned reentering spacecraft as well as any high speed vehicle enveloped by a dense plasma layer.
Effects of minimum monitor unit threshold on spot scanning proton plan quality
Howard, Michelle; Beltran, Chris; Mayo, Charles S.; Herman, Michael G.
2014-09-15
Purpose: To investigate the influence of the minimum monitor unit (MU) on the quality of clinical treatment plans for scanned proton therapy. Methods: Delivery system characteristics limit the minimum number of protons that can be delivered per spot, resulting in a min-MU limit. Plan quality can be impacted by the min-MU limit. Two sites were used to investigate the impact of min-MU on treatment plans: a pediatric brain tumor at a depth of 5–10 cm and a head and neck tumor at a depth of 1–20 cm. Three-field, intensity modulated spot scanning proton plans were created for each site with the following parameter variations: min-MU limit range of 0.0000–0.0060; and spot spacing range of 2–8 mm. Comparisons were based on target homogeneity and normal tissue sparing. For the pediatric brain, two versions of the treatment planning system were also compared to judge the effects of the min-MU limit based on when it is accounted for in the optimization process (Eclipse v.10 and v.13, Varian Medical Systems, Palo Alto, CA). Results: Increasing the min-MU limit at a fixed spot spacing decreases plan quality, both in homogeneous target coverage and in the avoidance of critical structures. Both head and neck and pediatric brain plans show a 20% increase in relative dose for the hot spot in the CTV and a 10% increase in key critical structures when comparing min-MU limits of 0.0000 and 0.0060 at a fixed spot spacing of 4 mm. The DVHs of CTVs show min-MU limits of 0.0000 and 0.0010 produce similar plan quality, and quality decreases as the min-MU limit increases beyond 0.0020. As spot spacing approaches 8 mm, degradation in plan quality is observed even when no min-MU limit is imposed. Conclusions: Given a fixed spot spacing of ≤4 mm, plan quality decreases as min-MU increases beyond 0.0020. The effect of min-MU needs to be taken into consideration while planning proton therapy treatments.
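One simple way a min-MU limit can distort an optimized plan is by making the smallest spots undeliverable. The sketch below drops sub-threshold spots outright; this is a hypothetical simplification (real planning systems may instead round weights up or re-optimize, and the spot weights here are invented):

```python
def apply_min_mu(spot_mus, min_mu):
    """Keep only spots at or above the deliverable min-MU threshold and
    report how much optimized weight is lost (illustrative model only)."""
    kept = [mu for mu in spot_mus if mu >= min_mu]
    lost = sum(spot_mus) - sum(kept)
    return kept, lost

# Hypothetical spot weights: a fine spot grid yields many small spots,
# so more optimized weight falls below the threshold as min-MU grows.
spots = [0.0005, 0.0012, 0.0030, 0.0008, 0.0045]
for threshold in (0.0000, 0.0010, 0.0020):
    kept, lost = apply_min_mu(spots, threshold)
    print(threshold, len(kept), round(lost, 4))
```

The lost weight must be absorbed by the surviving spots, which is the mechanism behind the hot spots and degraded homogeneity reported above.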
Properties of reactive oxygen species by quantum Monte Carlo
Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo
2014-07-07
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has been recently shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N³–N⁴, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles.
Coupled Monte Carlo neutronics and thermal hydraulics for power reactors
Bernnat, W.; Buck, M.; Mattes, M.; Zwermann, W.; Pasichnyk, I.; Velkov, K.
2012-07-01
The availability of high performance computing resources increasingly enables the use of detailed Monte Carlo models even for full core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures e.g. in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions, fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. The second problem arises with the preparation of corresponding temperature dependent cross sections and thermal scattering laws. Only if these problems can be solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g. water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications will be shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic Modular High Temperature reactor using THERMIX for thermal-hydraulics. (authors)
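The interpolation step can be sketched as a simple lookup between the limited set of temperature grids with prepared data. This is a minimal linear-interpolation sketch with invented values; the paper's actual interpolation scheme for cross sections and scattering laws may differ:

```python
import bisect

def interpolate_xs(temps, xs_values, t):
    """Linearly interpolate a tabulated cross section between the two
    bracketing temperatures; clamp outside the prepared range.
    temps     -- sorted temperatures (K) with prepared data sets
    xs_values -- cross-section value at each temperature
    t         -- target temperature (K) from the thermal-hydraulics code"""
    if t <= temps[0]:
        return xs_values[0]
    if t >= temps[-1]:
        return xs_values[-1]
    i = bisect.bisect_right(temps, t)
    w = (t - temps[i - 1]) / (temps[i] - temps[i - 1])
    return (1.0 - w) * xs_values[i - 1] + w * xs_values[i]

# Data prepared at 300 K, 600 K, 900 K (illustrative values, arbitrary units):
print(interpolate_xs([300, 600, 900], [10.0, 8.0, 7.0], 450))  # 9.0
```

Only a handful of prepared data sets is needed, instead of one library per distinct cell temperature.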
Optimization of Gutzwiller wave functions in quantum Monte Carlo
Koch, E.; Gunnarsson, O.; Martin, R.M.
1999-06-01
Gutzwiller functions are popular variational wave functions for correlated electrons in Hubbard models. Following the variational principle, we are interested in the Gutzwiller parameters that minimize, e.g., the expectation value of the energy. Rewriting the expectation value as a rational function in the Gutzwiller parameters, we find a very efficient way for performing that minimization. The method can be used to optimize general Gutzwiller-type wave functions both in variational and in fixed-node diffusion Monte Carlo. © 1999 The American Physical Society
Quantum Monte Carlo Simulation of Overpressurized Liquid ⁴He
Vranjes, L.; Boronat, J.; Casulleras, J.; Cazorla, C.
2005-09-30
A diffusion Monte Carlo simulation of superfluid ⁴He at zero temperature and pressures up to 275 bar is presented. Increasing the pressure beyond freezing (≈25 bar), the liquid enters the overpressurized phase in a metastable state. In this regime, we report results of the equation of state and the pressure dependence of the static structure factor, the condensate fraction, and the excited-state energy corresponding to the roton. Along this large pressure range, both the condensate fraction and the roton energy decrease but do not become zero. The roton energies obtained are compared with recent experimental data in the overpressurized regime.
Communication: Water on hexagonal boron nitride from diffusion Monte Carlo
Al-Hamdani, Yasmine S.; Ma, Ming; Michaelides, Angelos; Alfè, Dario; Lilienfeld, O. Anatole von
2015-05-14
Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of −84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT.
Cluster Monte Carlo simulations of the nematic-isotropic transition
Priezjev, N. V.; Pelcovits, Robert A.
2001-06-01
We report the results of simulations of the three-dimensional Lebwohl-Lasher model of the nematic-isotropic transition using a single cluster Monte Carlo algorithm. The algorithm, first introduced by Kunz and Zumbach to study two-dimensional nematics, is a modification of the Wolff algorithm for spin systems, and greatly reduces critical slowing down. We calculate the free energy in the neighborhood of the transition for systems up to linear size 70. We find a double well structure with a barrier that grows with increasing system size. We thus obtain an upper estimate of the value of the transition temperature in the thermodynamic limit.
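The Kunz–Zumbach algorithm adapts Wolff's single-cluster method to nematic spins. As a sketch of the underlying cluster idea, here is a standard Wolff update for the simpler 2D Ising model (the nematic variant defines bonds via reflections and is not shown; lattice size and temperature below are arbitrary):

```python
import math
import random

def wolff_step(spins, L, beta, rng):
    """One Wolff single-cluster update for the 2D Ising model on an
    L x L periodic lattice: grow a cluster of aligned spins with the
    Wolff bond probability, then flip it as a whole."""
    p_add = 1.0 - math.exp(-2.0 * beta)  # bond-activation probability
    start = (rng.randrange(L), rng.randrange(L))
    s0 = spins[start]
    cluster = {start}
    frontier = [start]
    while frontier:
        i, j = frontier.pop()
        neighbors = (((i + 1) % L, j), ((i - 1) % L, j),
                     (i, (j + 1) % L), (i, (j - 1) % L))
        for n in neighbors:
            if n not in cluster and spins[n] == s0 and rng.random() < p_add:
                cluster.add(n)
                frontier.append(n)
    for site in cluster:  # flip the whole cluster in one move
        spins[site] = -spins[site]
    return len(cluster)

L, beta = 16, 0.5
rng = random.Random(1)
spins = {(i, j): rng.choice((-1, 1)) for i in range(L) for j in range(L)}
sizes = [wolff_step(spins, L, beta, rng) for _ in range(100)]
print(max(sizes))
```

Flipping whole correlated clusters rather than single spins is what suppresses the critical slowing down mentioned above.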
A Post-Monte-Carlo Sensitivity Analysis Code
Energy Science and Technology Software Center (OSTI)
2000-04-04
SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e., with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
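One common way to apportion output variance to inputs, exact only for near-linear models, is the squared correlation of each input with the output. The abstract does not specify SATOOL's actual method, so this is only an illustrative stand-in on a toy model:

```python
import random

def sensitivity_indices(model, n_inputs, n_samples=20_000, seed=0):
    """Rank inputs by squared Pearson correlation with the output --
    a crude proxy for each input's share of the output variance,
    valid for near-linear models."""
    rng = random.Random(seed)
    xs = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    ys = [model(x) for x in xs]
    my = sum(ys) / n_samples
    vy = sum((y - my) ** 2 for y in ys)
    indices = []
    for k in range(n_inputs):
        col = [x[k] for x in xs]
        mk = sum(col) / n_samples
        cov = sum((c - mk) * (y - my) for c, y in zip(col, ys))
        vk = sum((c - mk) ** 2 for c in col)
        indices.append(cov * cov / (vk * vy))
    return indices

# Toy model: y = 10*x0 + x1, so x0 should dominate the output variance
# (analytically its share is 100/101).
s = sensitivity_indices(lambda x: 10 * x[0] + x[1], n_inputs=2)
print(s)
```

Refining the measurement of the dominant input (here x0) is then the cheapest way to shrink the output variance, as the abstract describes.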
Element Agglomeration Algebraic Multilevel Monte-Carlo Library
Energy Science and Technology Software Center (OSTI)
2015-02-19
ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski, currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.
Monte Carlo Fundamentals - F. B. Brown and T. M. Sutton
Office of Scientific and Technical Information (OSTI)
OSTIblog Articles in the Monte Carlo Topic | OSTI, US Dept of Energy Office
Office of Scientific and Technical Information (OSTI)
of Scientific and Technical Information, Monte Carlo Topic: "The Unbelievable Accuracy of the Monte Carlo Method" by Kathy Chambers, 18 Jan 2013, in Science Communications. The year was 1945, the year I was born. That in itself is of great significance to me. However, it was a momentous year in history. World War II came to its merciful end and the development of the first electronic computer - the
Monte Carlo Bayesian search for the plausible source of the Telescope...
Office of Scientific and Technical Information (OSTI)
Title: Monte Carlo Bayesian search for the plausible source of the Telescope Array hotspot Authors: He, Hao-Ning ; Kusenko, Alexander ; Nagataki, Shigehiro ; Zhang, Bin-Bin ; Yang, ...
Application of Monte Carlo Methods in Molecular Targeted Radionuclide Therapy
Hartmann Siantar, C; Descalle, M-A; DeNardo, G L; Nigg, D W
2002-02-19
Targeted radionuclide therapy promises to expand the role of radiation beyond the treatment of localized tumors. This novel form of therapy targets metastatic cancers by combining radioactive isotopes with tumor-seeking molecules such as monoclonal antibodies and custom-designed synthetic agents. Ultimately, like conventional radiotherapy, the effectiveness of targeted radionuclide therapy is limited by the maximum dose that can be given to a critical, normal tissue, such as bone marrow, kidneys, and lungs. Because radionuclide therapy relies on biological delivery of radiation, its optimization and characterization are necessarily different than for conventional radiation therapy. We have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA treatment planning system. This system calculates patient-specific radiation dose estimates using a set of computed tomography scans to describe the 3D patient anatomy, combined with 2D (planar image) and 3D (SPECT, or single photon emission computed tomography) to describe the time-dependent radiation source. The accuracy of such a dose calculation is limited primarily by the accuracy of the initial radiation source distribution, overlaid on the patient's anatomy. This presentation provides an overview of MINERVA functionality for molecular targeted radiation therapy, and describes early validation and implementation results of Monte Carlo simulations.
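The patient-specific dose estimate scales with the time-integrated (cumulated) activity of the radiation source. As a minimal sketch, assuming mono-exponential clearance (a textbook simplification; MINERVA derives the real time-activity curve from the planar and SPECT imaging described above, and the numbers below are illustrative only):

```python
import math

def cumulated_activity(a0_bq: float, half_life_h: float,
                       horizon_h: float = math.inf) -> float:
    """Time-integrated activity (Bq*h) of a source with mono-exponential
    clearance A(t) = A0 * exp(-lambda * t), integrated up to horizon_h."""
    lam = math.log(2.0) / half_life_h
    if math.isinf(horizon_h):
        return a0_bq / lam          # closed form for the full integral
    return a0_bq / lam * (1.0 - math.exp(-lam * horizon_h))

# Illustrative numbers only: 100 MBq administered, 60 h effective half-life.
print(cumulated_activity(100e6, 60.0))  # Bq*h over all time
```

The cumulated activity in each organ, multiplied by absorbed dose per decay, is what ultimately limits therapy against the critical normal tissues listed above.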
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Casper, Katya M.; Beresh, Steven J.; Schneider, Steven P.
2014-09-09
To investigate the pressure-fluctuation field beneath turbulent spots in a hypersonic boundary layer, a study was conducted on the nozzle wall of the Boeing/AFOSR Mach-6 Quiet Tunnel. Controlled disturbances were created by pulsed-glow perturbations based on the electrical breakdown of air. Under quiet-flow conditions, the nozzle-wall boundary layer remains laminar and grows very thick over the long nozzle length. This allows the development of large disturbances that can be well-resolved with high-frequency pressure transducers. A disturbance first grows into a second-mode instability wavepacket that is concentrated near its own centreline. Weaker disturbances are seen spreading from the centre. The waves grow and become nonlinear before breaking down to turbulence. The breakdown begins in the core of the packets where the wave amplitudes are largest. Second-mode waves are still evident in front of and behind the breakdown point and can be seen propagating in the spanwise direction. The turbulent core grows downstream, resulting in a spot with a classical arrowhead shape. Behind the spot, a low-pressure calmed region develops. However, the spot is not merely a localized patch of turbulence; instability waves remain an integral part. Limited measurements of naturally occurring disturbances show many similar characteristics. From the controlled disturbance measurements, the convection velocity, spanwise spreading angle, and typical pressure-fluctuation field were obtained.
On the mechanism of operation of a cathode spot cell in a vacuum arc
Mesyats, G. A.; Petrov, A. A.; Bochkarev, M. B.; Barengolts, S. A.
2014-05-05
The erosive structures formed on a tungsten cathode as a result of the motion of the cathode spot of a vacuum arc over the cathode surface have been examined. It has been found that the average mass of a cathode microprotrusion having the shape of a solidified jet is approximately equal to the mass of ions removed from the cathode within the lifetime of a cathode spot cell carrying a current of several amperes. The time of formation of a new liquid-metal jet under the action of the reactive force of the plasma ejected by the cathode spot is about 10 ns, which is comparable to the lifetime of a cell. The growth rate of a liquid-metal jet is ~10⁴ cm/s. The geometric shape and size of a solidified jet are such that a new explosive emission center (spot cell) can be initiated within several nanoseconds during the interaction of the jet with the dense cathode plasma. This is the underlying mechanism of the self-sustained operation of a vacuum arc.
Friction Stir Spot Welding of DP780 and Hot-Stamp Boron Steels
Santella, Michael L.; Frederick, Alan; Hovanski, Yuri; Grant, Glenn J.
2008-05-16
Friction stir spot welds were made in two high-strength steels: DP780, and a hot-stamp-boron steel with tensile strength of 1500 MPa. The spot welds were made at either 800 or 1600 rpm using either of two polycrystalline boron nitride tools. One stir tool, BN77, had the relatively common pin-tool shape. The second tool, BN46, had a convex rather than a concave shoulder profile and a much wider and shorter pin. The tools were plunged to preprogrammed depths either at a continuous rate (1-step schedule) or in two segments consisting of a relatively high rate followed by a slower rate. In all cases, the welds were completed in 4 s. The range of lap-shear values was compared to values required for resistance spot welds on the same steels. The minimum value of 10.3 kN was exceeded for friction stir spot welding of DP780 using a 2-step schedule and either the BN77- or the BN46-type stir tool. The respective minimum value of 12 kN was also exceeded for the hot-stamp-boron steel using the 2-step process and the BN46 stir tool.
Jacobi, Robert
2007-03-28
This Topical Report (#6 of 9) consists of the figures 3.6-13 to (and including) 3.6-18 (and appropriate figure captions) that accompany the Final Technical Progress Report entitled: “Innovative Methodology for Detection of Fracture-Controlled Sweet Spots in the Northern Appalachian Basin” for DOE/NETL Award DE-AC26-00NT40698.
Jacobi, Robert
2007-03-31
This Topical Report (#6 of 9) consists of the figures 3.6-13 to (and including) 3.6-18 (and appropriate figure captions) that accompany the Final Technical Progress Report entitled: “Fracture-Controlled Sweet Spots in the Northern Appalachian Basin” for DOE/NETL Award DE-AC26-00NT40698.
Hybrid Deterministic/Monte Carlo Solutions to the Neutron Transport k-Eigenvalue Problem with a Comparison to Pure Monte Carlo Solutions. Jeffrey A. Willert, Los Alamos National Laboratory, September 16, 2013. Joint work with Dana Knoll (LANL), Ryosuke Park (LANL), and C. T. Kelley (NCSU). Outline: 1. Introduction; 2. Nonlinear Diffusion Acceleration for k-Eigenvalue Problems; 3. Hybrid Methods; 4. Classic Monte Carlo. (CASL-U-2013-0309-000)
Hot-spot mix in ignition-scale implosions on the NIF
Regan, S. P.; Epstein, R.; McCrory, R. L.; Meyerhofer, D. D.; Sangster, T. C.; Hammel, B. A.; Suter, L. J.; Ralph, J.; Scott, H.; Barrios, M. A.; Bradley, D. K.; Callahan, D. A.; Cerjan, C.; Collins, G. W.; Dixit, S. N.; Doeppner, T.; Edwards, M. J.; Farley, D. R.; Glenn, S.; Glenzer, S. H.; and others
2012-05-15
Ignition of an inertial confinement fusion (ICF) target depends on the formation of a central hot spot with sufficient temperature and areal density. Radiative and conductive losses from the hot spot can be enhanced by hydrodynamic instabilities. The concentric spherical layers of current National Ignition Facility (NIF) ignition targets consist of a plastic ablator surrounding a thin shell of cryogenic thermonuclear fuel (i.e., hydrogen isotopes), with fuel vapor filling the interior volume [S. W. Haan et al., Phys. Plasmas 18, 051001 (2011)]. The Rev. 5 ablator is doped with Ge to minimize preheat of the ablator closest to the DT ice caused by Au M-band emission from the hohlraum x-ray drive [D. S. Clark et al., Phys. Plasmas 17, 052703 (2010)]. Richtmyer-Meshkov and Rayleigh-Taylor hydrodynamic instabilities seeded by high-mode () ablator-surface perturbations can cause Ge-doped ablator to mix into the interior of the shell at the end of the acceleration phase [B. A. Hammel et al., Phys. Plasmas 18, 056310 (2011)]. As the shell decelerates, it compresses the fuel vapor, forming a hot spot. K-shell line emission from the ionized Ge that has penetrated into the hot spot provides an experimental signature of hot-spot mix. The Ge emission from tritium-hydrogen-deuterium (THD) and deuterium-tritium (DT) cryogenic targets and gas-filled plastic-shell capsules, which replace the THD layer with a mass-equivalent CH layer, was examined. The inferred amount of hot-spot-mix mass, estimated from the Ge K-shell line brightness using a detailed atomic physics code [J. J. MacFarlane et al., High Energy Density Phys. 3, 181 (2006)], is typically below the 75-ng allowance for hot-spot mix [S. W. Haan et al., Phys. Plasmas 18, 051001 (2011)]. Predictions of a simple mix model, based on linear growth of the measured surface-mass modulations, are consistent with the experimental results.
Srinivasan, Bhuvana; Tang, Xian-Zhu
2014-10-15
In an inertial confinement fusion target, energy loss due to thermal conduction from the hot-spot will inevitably ablate fuel ice into the hot-spot, resulting in a more massive but cooler hot-spot, which negatively impacts fusion yield. Hydrodynamic mix due to Rayleigh-Taylor instability at the gas-ice interface can aggravate the problem via an increased gas-ice interfacial area across which energy transfer from the hot-spot and ice can be enhanced. Here, this mix-enhanced transport effect on hot-spot fusion-performance degradation is quantified using contrasting 1D and 2D hydrodynamic simulations, and its dependence on effective acceleration, Atwood number, and ablation speed is identified.
Barrios, M. A.; Suter, L. J.; Glenn, S.; Benedetti, L. R.; Bradley, D. K.; Collins, G. W.; Hammel, B. A.; Izumi, N.; Ma, T.; Scott, H.; Smalyuk, V. A.; Regan, S. P.; Epstein, R.; Kyrala, G. A.
2013-07-15
Bright spots in the hot spot intensity profile of gated x-ray images of ignition-scale implosions at the National Ignition Facility [G. H. Miller et al., Opt. Eng. 43, 2841 (2004)] are observed. X-ray images of cryogenically layered deuterium-tritium (DT) and tritium-hydrogen-deuterium (THD) ice capsules, and gas filled plastic shell capsules (Symcap) were recorded along the hohlraum symmetry axis. Heterogeneous mixing of ablator material and fuel into the hot spot (i.e., hot-spot mix) by hydrodynamic instabilities causes the bright spots. Hot-spot mix increases the radiative cooling of the hot spot. Fourier analysis of the x-ray images is used to quantify the evolution of bright spots in both x- and k-space. Bright spot images were azimuthally binned to characterize bright spot location relative to known isolated defects on the capsule surface. A strong correlation is observed between bright spot location and the fill tube for both Symcap and cryogenically layered DT and THD ice targets, indicating the fill tube is a significant seed for the ablation front instability causing hot-spot mix. The fill tube is the predominant seed for Symcaps, while other capsule non-uniformities are dominant seeds for the cryogenically layered DT and THD ice targets. A comparison of the bright spot power observed for Si- and Ge-doped ablator targets shows heterogeneous mix in Symcap targets is mostly material from the doped ablator layer.
Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data
Zourari, K.; Peppa, V.; Papagiannis, P.; Ballester, Facundo; Siebert, Frank-André
2014-04-15
Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20–1090 keV, through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thicknesses of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20–1090 keV, in 10 keV intervals). The three-parameter empirical model introduced by Archer et al. [“Diagnostic x-ray shielding design based on an empirical model of photon attenuation,” Health Phys. 44, 507–517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. [“Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities,” Med. Phys. 34, 1398–1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. A good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed. Conclusions: The data of
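The superposition step can be sketched as follows, assuming the Archer-model functional form B(x) = [(1 + beta/alpha) exp(alpha*gamma*x) - beta/alpha]^(-1/gamma). The (alpha, beta, gamma) values and the two-line spectrum below are hypothetical placeholders, not the fitted MCNP5 data of the paper:

```python
import numpy as np

def archer_transmission(x, alpha, beta, gamma):
    """Archer et al. three-parameter transmission model:
    B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha] ** (-1/gamma)."""
    r = beta / alpha
    return ((1.0 + r) * np.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

def broad_beam_transmission(x, spectrum, params):
    """Superpose monoenergetic transmission curves, weighted by the
    normalized relative intensity of each line in the spectrum."""
    w = np.array([wt for _, wt in spectrum], dtype=float)
    w /= w.sum()
    curves = np.array([archer_transmission(x, *params[e]) for e, _ in spectrum])
    return w @ curves

# Hypothetical fitted (alpha, beta, gamma) values for two energies in lead;
# real values would come from fits to the monoenergetic MCNP data.
params = {660: (1.2, 0.8, 0.6), 1090: (0.7, 0.4, 0.8)}
spectrum = [(660, 0.9), (1090, 0.1)]      # (keV, relative intensity)

x = np.linspace(0.0, 5.0, 51)             # shield thickness (cm)
T = broad_beam_transmission(x, spectrum, params)
```

At zero thickness the model reduces to B(0) = 1 regardless of the parameters, and the curve decreases monotonically, which is a quick sanity check on any fitted parameter set.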
A study of Monte Carlo radiative transfer through fractal clouds
Gautier, C.; Lavallec, D.; O'Hirok, W.; Ricchiazzi, P.
1996-04-01
An understanding of radiation transport (RT) through clouds is fundamental to studies of the earth's radiation budget and climate dynamics. The transmission through horizontally homogeneous clouds has been studied thoroughly using accurate, discrete-ordinates radiative transfer models. However, the applicability of these results to general problems of the global radiation budget is limited by the plane-parallel assumption and the fact that real cloud fields show variability, both vertically and horizontally, on all size scales. To understand how radiation interacts with realistic clouds, we have used a Monte Carlo radiative transfer model to compute the details of the photon-cloud interaction on synthetic cloud fields. Synthetic cloud fields, generated by a cascade model, reproduce the scaling behavior as well as the cloud variability observed and estimated from cloud satellite data.
Quantitative Monte Carlo-based holmium-166 SPECT reconstruction
Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de; Viergever, Max A.
2013-11-15
Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 ({sup 166}Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative {sup 166}Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum.Methods: A fast Monte Carlo (MC) simulator was developed for simulation of {sup 166}Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full {sup 166}Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (A{sup est}) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six {sup 166}Ho RE patients.Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC). ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80
Fuel temperature reactivity coefficient calculation by Monte Carlo perturbation techniques
Shim, H. J.; Kim, C. H.
2013-07-01
We present an efficient method to estimate the fuel temperature reactivity coefficient (FTC) by the Monte Carlo adjoint-weighted correlated sampling method. In this method, a fuel temperature change is regarded as variations of the microscopic cross sections and the temperature in the free gas model which is adopted to correct the asymptotic double differential scattering kernel. The effectiveness of the new method is examined through the continuous energy MC neutronics calculations for PWR pin cell problems. The isotope-wise and reaction-type-wise contributions to the FTCs are investigated for two free gas models - the constant scattering cross section model and the exact model. It is shown that the proposed method can efficiently predict the reactivity change due to the fuel temperature variation. (authors)
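The correlated-sampling idea behind this kind of perturbation estimate, reusing the same random histories for the perturbed and unperturbed cases so that statistical noise cancels in the difference, can be illustrated with a toy attenuation integral. This is a generic sketch of correlated sampling, not the adjoint-weighted method of the paper:

```python
import math, random

def diff_estimate(n, s_a=1.00, s_b=1.02, correlated=True, seed=1):
    """Estimate the change in E[exp(-sigma * x)], x ~ U(0,1), for a small
    cross-section perturbation s_a -> s_b.  With correlated sampling the
    same random history x is used for both cases, so the statistical
    noise largely cancels in the difference."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n):
        xa = rng.random()
        xb = xa if correlated else rng.random()   # shared vs independent history
        diffs.append(math.exp(-s_b * xb) - math.exp(-s_a * xa))
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean, var

d_corr, v_corr = diff_estimate(20_000, correlated=True)
d_ind, v_ind = diff_estimate(20_000, correlated=False)
```

The per-sample variance of the correlated estimator is orders of magnitude smaller than that of two independent runs, which is why small reactivity differences such as an FTC are estimated this way.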
Monte Carlo prompt dose calculations for the National Ignition Facility
Latkowski, J.F.; Phillips, T.W.
1997-01-01
During peak operation, the National Ignition Facility (NIF) will conduct as many as 600 experiments per year and attain deuterium-tritium fusion yields as high as 1200 MJ/yr. The radiation effective dose equivalent (EDE) to workers is limited to an average of 0.3 mSv/yr (30 mrem/yr) in occupied areas of the facility. Laboratory personnel located outside the facility will receive EDEs <= 0.5 mSv/yr (<= 50 mrem/yr). The total annual occupational EDE for the facility will be maintained at <= 0.1 person-Sv/yr (<= 10 person-rem/yr). To ensure that prompt EDEs meet these limits, three-dimensional Monte Carlo calculations have been completed.
Quantum Monte Carlo simulation of spin-polarized H
Markic, L. Vranjes; Boronat, J.; Casulleras, J.
2007-02-01
The ground-state properties of spin polarized hydrogen H{down_arrow} are obtained by means of diffusion Monte Carlo calculations. Using the most accurate to date ab initio H{down_arrow}-H{down_arrow} interatomic potential we have studied its gas phase, from the very dilute regime until densities above its freezing point. At very small densities, the equation of state of the gas is very well described in terms of the gas parameter {rho}a{sup 3}, with a the s-wave scattering length. The solid phase has also been studied up to high pressures. The gas-solid phase transition occurs at a pressure of 173 bar, a much higher value than suggested by previous approximate descriptions.
Peelle's pertinent puzzle using the Monte Carlo technique
Kawano, Toshihiko; Talou, Patrick; Burr, Thomas; Pan, Feng
2009-01-01
We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take arbitrary forms so that the impact of the assumed distribution can be examined, and we obtain the least-squares solution directly from numerical simulations. We found that the standard least-squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 {+-} 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and the error is proportional to the measured values. The least-squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
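The 0.88 case can be reproduced directly from the usual covariance construction of the puzzle. A minimal sketch, assuming the standard statement (two measurements, 1.5 and 1.0, of the same quantity, each with a 10% independent error and a fully correlated 20% normalization error entering the covariance through the measured values); this is an illustrative reconstruction, not the authors' simulation:

```python
import numpy as np

# Two measurements of the same quantity with correlated normalization error.
y = np.array([1.5, 1.0])
stat = 0.10 * y                                       # independent errors
V = np.diag(stat ** 2) + 0.20 ** 2 * np.outer(y, y)   # + common component

# Generalized least squares for a single common mean:
#   x_hat = (1' V^-1 y) / (1' V^-1 1)
one = np.ones(2)
Vinv = np.linalg.inv(V)
x_hat = (one @ Vinv @ y) / (one @ Vinv @ one)
sigma = (one @ Vinv @ one) ** -0.5
# x_hat falls below both measurements -- the "thought-provoking" 0.88.
```

The counter-intuitive feature is that the estimate lies below both measured values, which is exactly the behavior the abstract attributes to a common error proportional to the measured values.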
Optimization of Monte Carlo transport simulations in stochastic media
Liang, C.; Ji, W.
2012-07-01
This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
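The two ingredients, RSA placement and a grid-based nearest-neighbor search, can be sketched in a few lines. This is a generic illustration with equal-sized spheres in a unit box, not the authors' poly-sized VHTR implementation:

```python
import itertools, random

def rsa_pack(n, radius, box=1.0, max_tries=200_000, seed=0):
    """Random Sequential Addition of equal spheres with a uniform-grid
    (cell list) neighbor search: each overlap test only examines
    candidates in the 27 cells surrounding the trial position."""
    rng = random.Random(seed)
    ncell = max(1, int(box / (2.0 * radius)))   # cell edge >= sphere diameter
    cell = box / ncell
    grid = {}                                    # (i, j, k) -> list of centers
    centers = []
    d2 = (2.0 * radius) ** 2

    def overlaps(p, idx):
        for off in itertools.product((-1, 0, 1), repeat=3):
            key = tuple(i + o for i, o in zip(idx, off))
            for q in grid.get(key, ()):
                if sum((a - b) ** 2 for a, b in zip(p, q)) < d2:
                    return True
        return False

    for _ in range(max_tries):
        if len(centers) == n:
            break
        p = tuple(rng.uniform(radius, box - radius) for _ in range(3))
        idx = tuple(min(int(c / cell), ncell - 1) for c in p)
        if not overlaps(p, idx):
            grid.setdefault(idx, []).append(p)
            centers.append(p)
    return centers

centers = rsa_pack(200, radius=0.03)
```

Because the cell edge is at least one diameter, any overlapping sphere must lie in the 27-cell neighborhood, so the rejection test is O(1) per trial instead of O(N).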
Improved version of the PHOBOS Glauber Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Loizides, C.; Nagle, J.; Steinberg, P.
2015-09-01
“Glauber” models are used to calculate geometric quantities in the initial state of heavy-ion collisions, such as impact parameter, number of participating nucleons, and initial eccentricity. Experimental heavy-ion collaborations, in particular at RHIC and LHC, use Glauber Model calculations of various geometric observables for determination of the collision centrality. In this document, we describe the assumptions inherent to the approach and provide an updated implementation (v2) of the Monte Carlo based Glauber Model calculation, which was originally used by the PHOBOS collaboration. The main improvement with respect to the earlier version (v1) (Alver et al. 2008) is the inclusion of Tritium, Helium-3, and Uranium, as well as the treatment of deformed nuclei and Glauber–Gribov fluctuations of the proton in p+A collisions. A users’ guide (updated to reflect changes in v2) is provided for running various calculations.
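A minimal Monte Carlo Glauber event along these lines can be sketched as follows; the Woods-Saxon parameters for Pb and the nucleon-nucleon cross section are typical illustrative values, not taken from the v2 code:

```python
import math, random

def glauber_event(A=208, b=6.0, sigma_nn=6.4, R=6.62, a=0.546, seed=None):
    """One Monte Carlo Glauber event for a symmetric A+A collision.
    Nucleon positions are sampled from a Woods-Saxon distribution; two
    nucleons collide when their transverse distance is below
    sqrt(sigma_nn / pi) (sigma_nn in fm^2; 6.4 fm^2 = 64 mb).
    Returns the number of participants N_part."""
    rng = random.Random(seed)
    rmax = R + 10 * a

    def woods_saxon_xy():
        while True:  # rejection sampling on r^2 / (1 + exp((r - R)/a))
            r = rng.uniform(0.0, rmax)
            if rng.random() < (r / rmax) ** 2 / (1 + math.exp((r - R) / a)):
                cos_t = rng.uniform(-1.0, 1.0)
                sin_t = math.sqrt(1.0 - cos_t ** 2)
                phi = rng.uniform(0.0, 2 * math.pi)
                return r * sin_t * math.cos(phi), r * sin_t * math.sin(phi)

    d2 = sigma_nn / math.pi   # squared maximum collision distance
    nucl_a = [(x - b / 2, y) for x, y in (woods_saxon_xy() for _ in range(A))]
    nucl_b = [(x + b / 2, y) for x, y in (woods_saxon_xy() for _ in range(A))]
    part_a, part_b = [False] * A, [False] * A
    for i, (xa, ya) in enumerate(nucl_a):
        for j, (xb, yb) in enumerate(nucl_b):
            if (xa - xb) ** 2 + (ya - yb) ** 2 < d2:
                part_a[i] = part_b[j] = True
    return sum(part_a) + sum(part_b)
```

Averaging N_part (and the participant eccentricity) over many such events at fixed impact parameter is what produces the centrality tables the abstract refers to.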
Monte-Carlo Continuous Energy Burnup Code System.
Energy Science and Technology Software Center (OSTI)
2007-08-31
Version 00 MCB is a Monte Carlo Continuous Energy Burnup Code for general-purpose use to calculate nuclide density time evolution with burnup or decay. It includes eigenvalue calculations of critical and subcritical systems as well as neutron transport calculations in fixed-source mode or k-code mode to obtain the reaction rates and energy deposition that are necessary for burnup calculations. The MCB-1C patch file and data packages as distributed by the NEADB are very well organized and are being made available through RSICC as received. The RSICC package includes the MCB-1C patch and MCB data libraries. Installation of MCB requires the MCNP4C source code and utility programs, which are not included in this MCB distribution. They were provided with the now obsolete CCC-700/MCNP-4C package.
Optimized nested Markov chain Monte Carlo sampling: theory
Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D
2009-01-01
Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
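The composite-move construction can be illustrated with a one-dimensional toy problem; the quartic "full" potential and harmonic reference below are hypothetical stand-ins for the expensive and cheap potentials, and the move is accepted with the full-minus-reference energy difference exactly as described above:

```python
import math, random

def nested_mcmc(n_comp=2000, n_inner=20, beta=1.0, step=0.8, seed=2):
    """Nested Metropolis sketch: run an inner Metropolis chain on a cheap
    reference potential, then accept/reject the whole composite move with
    exp(-beta * [dU(x_new) - dU(x_old)]), where dU = U_full - U_ref.
    Samples U_full = x^2/2 + 0.1 x^4 using the reference U_ref = x^2/2."""
    rng = random.Random(seed)
    u_ref = lambda x: 0.5 * x * x
    du = lambda x: 0.1 * x ** 4          # "expensive" correction term
    x = 0.0
    samples = []
    for _ in range(n_comp):
        y = x
        for _ in range(n_inner):          # inner chain: reference potential only
            y_new = y + rng.uniform(-step, step)
            if rng.random() < math.exp(min(0.0, -beta * (u_ref(y_new) - u_ref(y)))):
                y = y_new
        # composite move: corrected with the full-minus-reference energy,
        # so the full potential is evaluated once per n_inner cheap steps
        if rng.random() < math.exp(min(0.0, -beta * (du(y) - du(x)))):
            x = y
        samples.append(x)
    return samples

samples = nested_mcmc()
```

Because the inner kernel satisfies detailed balance with respect to the reference distribution, the reference terms cancel in the composite acceptance ratio, which is what makes the modified Metropolis criterion above exact.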
Monte Carlo Simulation Tool Installation and Operation Guide
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document, and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
MONTE-CARLO BURNUP CALCULATION UNCERTAINTY QUANTIFICATION AND PROPAGATION DETERMINATION
Nichols, T.; Sternat, M.; Charlton, W.
2011-05-08
MONTEBURNS is a Monte Carlo depletion routine utilizing MCNP and ORIGEN 2.2. Uncertainties exist in the MCNP transport calculation, but this information is not passed to the depletion calculation in ORIGEN or saved. To quantify this transport uncertainty and determine how it propagates between burnup steps, a statistical analysis of multiple repeated depletion runs is performed. The reactor model chosen is the Oak Ridge Research Reactor (ORR) in a single-assembly, infinite-lattice configuration. This model was burned for a 25.5-day cycle broken down into three steps. The output isotopics as well as the effective multiplication factor (k-effective) were tabulated, and histograms were created at each burnup step using the Scott method to determine the bin width. It was expected that the gram-quantity and k-effective histograms would be normally distributed since they were produced from a Monte Carlo routine, but some of the results are not. The standard deviation at each burnup step was consistent between fission-product isotopes as expected, while the uranium isotopes produced some unique results. The variation in the quantity of uranium was small enough that, from the reaction-rate MCNP tally, round-off error occurred, producing a set of repeated results with slight variation. Statistical analyses were performed using the {chi}{sup 2} test against a normal distribution for several isotopes and for the k-effective results. While the isotopes failed to reject the null hypothesis of being normally distributed, the {chi}{sup 2} statistic grew through the steps in the k-effective test, and the null hypothesis was rejected in the later steps. These results suggest that, for a high-accuracy solution, MCNP cell material quantities below 100 grams and larger kcode parameters are needed to minimize uncertainty propagation and round-off effects.
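The histogramming step can be sketched as follows. The Scott-method bin width and a chi-squared statistic against a fitted normal are generic, with synthetic Gaussian data standing in for the MONTEBURNS isotopics:

```python
import math, random, statistics

def scott_bin_width(data):
    """Scott's rule for histogram bin width: h = 3.49 * s * n**(-1/3)."""
    return 3.49 * statistics.stdev(data) * len(data) ** (-1.0 / 3.0)

def chi2_vs_normal(data):
    """Chi-squared statistic of a histogram (Scott-method bins) against
    the normal distribution fitted to the sample mean and stdev."""
    n = len(data)
    mu, s = statistics.fmean(data), statistics.stdev(data)
    h = scott_bin_width(data)
    lo = min(data)
    nbins = max(1, math.ceil((max(data) - lo) / h))
    counts = [0] * nbins
    for x in data:
        counts[min(int((x - lo) / h), nbins - 1)] += 1

    def cdf(x):  # normal CDF via the error function
        return 0.5 * (1.0 + math.erf((x - mu) / (s * math.sqrt(2.0))))

    chi2 = 0.0
    for i, c in enumerate(counts):
        expected = n * (cdf(lo + (i + 1) * h) - cdf(lo + i * h))
        if expected > 0:
            chi2 += (c - expected) ** 2 / expected
    return chi2, nbins

rng = random.Random(3)
data = [rng.gauss(0.0, 1.0) for _ in range(1000)]   # stand-in for k-eff samples
chi2, nbins = chi2_vs_normal(data)
```

For truly normal data the statistic stays near the bin count; a statistic that grows from step to step, as reported for k-effective above, signals departure from normality.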
Modeling granular phosphor screens by Monte Carlo methods
Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.
2006-12-15
The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties, and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced, using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previous published experimental data on Gd{sub 2}O{sub 2}S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd{sub 2}O{sub 2}S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than the conventional Gd{sub 2}O{sub 2}S:Tb screens, under similar conditions (x-ray incident energy, screen thickness)
SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans
Enright, S; Asprinio, A; Lu, L
2014-06-01
Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm will improve dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse radiotherapy treatment planning system. These measurements were obtained for 6 MeV, 9 MeV, and 12 MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid water template with holes for bone-like and lung-like plugs was used. Different configurations were used with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. Accuracy of the Electron Monte Carlo algorithm was also compared to that of Pencil Beam. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film. The pass rate for Monte Carlo was in the 80%–99% range, whereas the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.
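Pass rates of this kind are conventionally gamma-index pass rates, although the abstract does not define them. A minimal one-dimensional global gamma computation in that style (e.g. 3%/3 mm) can be sketched as:

```python
import math

def gamma_index(ref, meas, spacing, dose_tol=0.03, dist_tol=3.0):
    """1-D global gamma index: for each reference point, the minimum over
    measured points of sqrt((dD/dose_tol)^2 + (dx/dist_tol)^2), with the
    dose difference normalized to the reference maximum (global norm)."""
    d_max = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, dm in enumerate(meas):
            dd = (dm - dr) / (dose_tol * d_max)      # dose axis, in tolerances
            dx = (j - i) * spacing / dist_tol        # distance axis, in tolerances
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

def pass_rate(gammas):
    """Percentage of points with gamma <= 1."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)

# Hypothetical Gaussian depth-dose profile on a 1 mm grid.
ref = [100.0 * math.exp(-(((i - 10) / 4.0) ** 2)) for i in range(21)]
g_same = gamma_index(ref, ref, spacing=1.0)
g_scaled = gamma_index(ref, [1.02 * d for d in ref], spacing=1.0)
```

An identical distribution passes everywhere with gamma = 0, and a uniform 2% scaling still passes a 3%/3 mm criterion, which matches the intuition that only disagreements beyond both tolerances drive the pass rate down.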
Geometrically nonlinear behaviour of spot welded joints in tensile and compressive shear loading
Radaj, D.; Zhang, S.
1995-05-01
The geometrically nonlinear behavior of spot welded joints including buckling and gap closure and its influence on local stress parameters at the weld spot edge (structural stresses, notch stress or fatigue notch factor, stress intensity factors) are determined by a large displacement analysis of the tensile shear specimen subjected to tensile and compressive loading. The local parameters mentioned are considered decisive for fatigue crack initiation. A continuous beam model and a plate strip model are used within a simplified procedure. A more sophisticated finite element model is applied on the specimens thereafter. The nonlinear effect is small for steel plates more than 1 mm thick in the medium and high cycle fatigue range of tensile loading. It may be stronger for compressive loading but is at least partially compensated for in this case by the gap closure effect. 4 refs.
Projectile containing metastable intermolecular composites and spot fire method of use
Asay, Blaine W.; Son, Steven F.; Sanders, V. Eric; Foley, Timothy; Novak, Alan M.; Busse, James R.
2012-07-31
A method for altering the course of a conflagration involving firing a projectile comprising a powder mixture of oxidant powder and nanosized reductant powder at velocity sufficient for a violent reaction between the oxidant powder and the nanosized reductant powder upon impact of the projectile, and causing impact of the projectile at a location chosen to draw a main fire to a spot fire at such location and thereby change the course of the conflagration, whereby the air near the chosen location is heated to a temperature sufficient to cause a spot fire at such location. The invention also includes a projectile useful for such method and said mixture preferably comprises a metastable intermolecular composite.
Vertically-tapered optical waveguide and optical spot transformer formed therefrom
Bakke, Thor; Sullivan, Charles T.
2004-07-27
An optical waveguide is disclosed in which a section of the waveguide core is vertically tapered during formation by spin coating by controlling the width of an underlying mesa structure. The optical waveguide can be formed from spin-coatable materials such as polymers, sol-gels and spin-on glasses. The vertically-tapered waveguide section can be used to provide a vertical expansion of an optical mode of light within the optical waveguide. A laterally-tapered section can be added adjacent to the vertically-tapered section to provide for a lateral expansion of the optical mode, thereby forming an optical spot-size transformer for efficient coupling of light between the optical waveguide and a single-mode optical fiber. Such a spot-size transformer can also be added to a III-V semiconductor device by post processing.
Influence of hot spot features on the initiation characteristics of heterogeneous nitromethane
Dattelbaum, Dana M; Sheffield, Stephen A; Stahl, David B; Dattelbaum, Andrew M; Engelke, Ray
2010-01-01
To gain insights into the critical hot spot features influencing energetic materials initiation characteristics, well-defined micron-scale particles have been intentionally introduced into the homogeneous explosive nitromethane (NM). Two types of potential hot spot origins have been examined - shock impedance mismatches using solid silica beads, and porosity using hollow microballoons - as well as their sizes and inter-particle separations. Here, we present the results of several series of gas gun-driven plate impact experiments on NM/particle mixtures with well-controlled shock inputs. Detailed insights into the nature of the reactive flow during the build-up to detonation have been obtained from the response of in-situ electromagnetic gauges, and the data have been used to establish Pop-plots (run-distance-to-detonation vs. shock input pressure) for the mixtures. Comparisons of sensitization effects and energy release characteristics relative to the initial shock front between the solid and hollow beads are presented.
Macrophyte mapping in ten lakes of South Carolina with multispectral SPOT HRV data
Mackey, H.E. Jr.
1989-01-01
Fall and spring multispectral SPOT HRV data for 1987 and 1988 were used to evaluate the macrophyte distributions in ten freshwater reservoirs of South Carolina. The types of macrophyte and wetland communities present along the shoreline of the lakes varied depending on the age, water level fluctuations, water quality, and basin morphology. Seasonal satellite data were important for evaluation of the extent of persistent versus non-persistent macrophyte communities in the lakes. This paper contains only the view graphs of this process.
X marks the spot: Researchers confirm novel method for controlling plasma
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
rotation to improve fusion performance | Princeton Plasma Physics Lab. By Raphael Rosen, June 23, 2015. Figure: Representative plasma geometries, with the X-point location circled in red (reprinted from T. Stoltzfus-Dueck et al., Phys. Rev. Lett. 114, 245001 (2015); copyright 2015 by the American Physical Society).
Spot test for 1,3,5-triamino-2,4,6-trinitrobenzene, TATB
Harris, Betty W.
1986-01-01
A simple, sensitive and specific spot test for 1,3,5-triamino-2,4,6-trinitrobenzene, TATB, is described. Upon the application of the composition of matter of the present invention to samples containing in excess of 0.1 mg of this explosive, a bright orange color results. Interfering species such as TNT and Tetryl can be removed by first treating the sample with a solvent which does not dissolve much of the TATB, but readily dissolves these explosives.
Spot test for 1,3,5-triamino-2,4,6-trinitrobenzene, TATB
Harris, B.W.
1984-11-29
A simple, sensitive and specific spot test for 1,3,5-triamino-2,4,6-trinitrobenzene, TATB, is described. Upon the application of the composition of matter of the subject invention to samples containing in excess of 0.1 mg of this explosive, a bright orange color results. Interfering species such as TNT and Tetryl can be removed by first treating the sample with a solvent which does not dissolve the TATB, but readily dissolves these interfering explosives.
SU-E-T-73: Commissioning of a Treatment Planning System for Proton Spot Scanning
Saini, J; Kang, Y; Schultz, L; Nicewonger, D; Herrera, M; Wong, T; Bowen, S; Bloch, C
2014-06-01
Purpose: A treatment planning system (TPS) was commissioned for clinical use with a fixed-beam-line proton delivery system. An outline of the data collection, modeling, and verification is provided. Methods: Beam data modeling for proton spot scanning in the CMS XiO TPS requires the following measurements: (i) integral depth dose curves (IDDCs); (ii) absolute dose calibration; and (iii) beam spot characteristics. The IDDCs for 18 proton energies were measured using an integrating detector in a single-spot field in a water phantom. Absolute scaling of the IDDCs was performed based on ion chamber measurements in mono-energetic 10 x 10 cm{sup 2} fields in water. Beam spot shapes were measured in air using a flat-panel scintillator detector at multiple planes. For beam model verification, more than 45 uniform-dose phantom and patient plans were generated. These plans were used to measure range, point dose, and longitudinal and lateral profiles. Tolerances employed for verification were: point dose and longitudinal profiles, 2%; range, 1 mm; FWHM for lateral profiles, 2 mm; and patient plan dose distribution, gamma-index pass rate of >90% at 3%/3 mm criteria. Results: More than 97% of the 115 point dose measurements were within +/-2%, with a maximum deviation of 3%. Of the ranges measured, 98% were within 1 mm, with a maximum deviation of 1.4 mm. The normalized depth doses were within 2% at all depths. The maximum error in the FWHM of the lateral profiles was less than 2 mm. For 5 patient plans representing different anatomic sites, a total of 38 planes for 12 beams were analyzed for gamma index, with an average value of 99% and a minimum of 94%. Conclusions: The planning system was successfully commissioned and can be safely deployed for clinical use. Measurement of IDDCs on the user beam is highly recommended instead of using standard-beam IDDCs.
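The 3%/3 mm gamma-index criterion used in the patient-plan verification above can be sketched in a simple 1D form. This is a hypothetical illustration with synthetic profiles (the function and profile names are invented), not the clinical analysis code:

```python
import numpy as np

def gamma_index_1d(x, dose_eval, dose_ref, dd=0.03, dta=3.0):
    """1D global gamma index: dd is the dose criterion as a fraction of the
    maximum reference dose, dta the distance-to-agreement criterion in mm."""
    d_norm = dd * dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (xi, dref) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2            # distance term, all candidates
        dose2 = ((dose_eval - dref) / d_norm) ** 2  # dose-difference term
        gammas[i] = np.sqrt((dist2 + dose2).min())  # best-matching candidate
    return gammas

# A 2% dose offset lies within the 3% criterion, so every point passes.
x = np.linspace(0.0, 100.0, 201)           # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)    # synthetic reference profile
g = gamma_index_1d(x, 1.02 * ref, ref)
pass_rate = np.mean(g <= 1.0)
```

A plan passes the criterion above when the fraction of points with gamma <= 1 exceeds 90%.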
Students do cool summer research projects in one of the hottest spots |
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Princeton Plasma Physics Lab Students do cool summer research projects in one of the hottest spots. August 4, 2016. Priya Jaglal (Photo by Elle Starkman/PPPL Office of Communications). More than 40 college students pursuing careers in physics, engineering and computer science are spending their summer at the U.S. Department of Energy's Princeton Plasma Physics Laboratory working with scientists and engineers on hands-on research
Eutectic structures in friction spot welding joint of aluminum alloy to copper
Shen, Junjun; Suhuddin, Uceu F. H.; Cardillo, Maria E. B.; Santos, Jorge F. dos
2014-05-12
A dissimilar joint of AA5083 Al alloy and copper was produced by friction spot welding. Al-MgCuAl{sub 2} eutectics, in both coupled and divorced forms, were found in the weld. At relatively high temperature, mass transport of Cu due to plastic deformation, material flow, and atomic diffusion, combined with the AA5083 alloy system, is responsible for the ternary eutectic melting.
Friction Stir Spot Welding (FSSW) of Advanced High Strength Steel (AHSS)
Santella, M. L.; Hovanski, Yuri; Pan, Tsung-Yu
2012-04-16
Friction stir spot welding (FSSW) is applied to join advanced high strength steels (AHSS): galvannealed dual phase 780 MPa steel (DP780GA), transformation induced plasticity 780 MPa steel (TRIP780), and hot-stamped boron steel (HSBS). A low-cost Si3N4 ceramic tool was developed and used for making the welds in this study, instead of the polycrystalline cubic boron nitride (PCBN) material used in earlier studies. FSSW has the advantages of being a solid-state, low-temperature process and of joining dissimilar steel grades and thicknesses. Two different tool shoulder geometries, concave with a smooth surface and convex with a spiral pattern, were used in the study. Welds were made by a 2-step displacement control process with weld times of 4, 6, and 10 seconds. Static tensile lap-shear strength reached 16.4 kN for DP780GA-HSBS and 13.2 kN for TRIP780-HSBS, above the AWS spot weld strength requirements. Nugget pull-out was the failure mode of the joint. The joining mechanism was illustrated by cross-section micrographs. Microhardness measurements showed hardening in the upper sheet steel (DP780GA or TRIP780) in the weld, but softening of the HSBS in the heat-affected zone (HAZ). The study demonstrated the feasibility of making high-strength AHSS spot welds with low-cost tools.
Quantum Monte Carlo for electronic structure: Recent developments and applications
Rodriquez, M. M.S.
1995-04-01
Quantum Monte Carlo (QMC) methods have been found to give excellent results when applied to chemical systems. The main goal of the present work is to use QMC to perform electronic structure calculations. In QMC, a Monte Carlo simulation is used to solve the Schroedinger equation, taking advantage of its analogy to a classical diffusion process with branching. In the present work the author focuses on how to extend the usefulness of QMC to more meaningful molecular systems. This study is aimed at questions concerning polyatomic and large atomic number systems. The accuracy of the solution obtained is determined by the accuracy of the trial wave function's nodal structure. Efforts in the group have given great emphasis to finding optimized wave functions for the QMC calculations. Little work had been done to systematically examine a family of systems to see how the best wave functions evolve with system size. In this work the author presents a study of trial wave functions for C, CH, C{sub 2}H and C{sub 2}H{sub 2}. The goal is to study how to build wave functions for larger systems by accumulating knowledge from the wave functions of their fragments, as well as gaining some knowledge on the usefulness of multi-reference wave functions. In an MC calculation of a heavy atom, most moves for core electrons are rejected at reasonable time steps. For this reason true equilibration is rarely achieved. A method proposed by Batrouni and Reynolds modifies the way the simulation is performed without altering the final steady-state solution. It introduces an acceleration matrix chosen so that all coordinates (i.e., of core and valence electrons) propagate at comparable speeds. A study of the results obtained using their proposed matrix suggests that it may not be the optimum choice. In this work the author has found that the desired mixing of coordinates between core and valence electrons is not achieved when using this matrix. A bibliography of 175 references is included.
Quantum Monte Carlo Calculations Applied to Magnetic Molecules
Larry Engelhardt
2006-08-09
We have calculated the equilibrium thermodynamic properties of Heisenberg spin systems using a quantum Monte Carlo (QMC) method. We have used some of these systems as models to describe recently synthesized magnetic molecules, and-upon comparing the results of these calculations with experimental data-have obtained accurate estimates for the basic parameters of these models. We have also performed calculations for other systems that are of more general interest, being relevant both for existing experimental data and for future experiments. Utilizing the concept of importance sampling, these calculations can be carried out in an arbitrarily large quantum Hilbert space, while still avoiding any approximations that would introduce systematic errors. The only errors are statistical in nature, and as such, their magnitudes are accurately estimated during the course of a simulation. Frustrated spin systems present a major challenge to the QMC method, nevertheless, in many instances progress can be made. In this chapter, the field of magnetic molecules is introduced, paying particular attention to the characteristics that distinguish magnetic molecules from other systems that are studied in condensed matter physics. We briefly outline the typical path by which we learn about magnetic molecules, which requires a close relationship between experiments and theoretical calculations. The typical experiments are introduced here, while the theoretical methods are discussed in the next chapter. Each of these theoretical methods has a considerable limitation, also described in Chapter 2, which together serve to motivate the present work. As is shown throughout the later chapters, the present QMC method is often able to provide useful information where other methods fail. In Chapter 3, the use of Monte Carlo methods in statistical physics is reviewed, building up the fundamental ideas that are necessary in order to understand the method that has been used in this work. With these
Complete Monte Carlo Simulation of Neutron Scattering Experiments
Drosg, M.
2011-12-13
In the past, it was not possible to accurately correct for the finite geometry and finite sample size of a neutron scattering set-up. The limited computing power of early computers, the lack of powerful Monte Carlo codes, and the limitations of the data bases then available prevented a complete simulation of the actual experiment. Using, e.g., the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can now be simulated almost completely, with a high degree of precision, on a modern PC, which has a computing power ten thousand times that of a supercomputer of the early 1970s. Thus, (better) corrections can also be obtained easily for previously published data, provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g., atomic masses, relativistic corrections, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments on liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two thirds of all differential scattering cross section data for {sup 3}He(n,n){sup 3}He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interactions in the air and with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are smaller by a factor of at least five, making this work relevant. Even more important are the corrections to the Karlsruhe data
Duo at Santa Fe's Monte del Sol Charter School takes top award in 25th
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
New Mexico Supercomputing Challenge: Duo at Santa Fe's Monte del Sol Charter School takes top award in 25th New Mexico Supercomputing Challenge. Meghan Hill and Katelynn James took the top prize for their research project. April 21, 2015. Katelynn James, left, and Meghan Hill of Monte del Sol Charter School in Santa Fe. Contact: Los Alamos National Laboratory, Steve Sandoval
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Cohesion Energetics of Carbon Allotropes: Quantum Monte Carlo Study
Shin, Hyeondeok; Kang, Sinabro; Koo, Jahyun; Lee, Hoonkyung; Kim, Jeongnim; Kwon, Yongkyung
2014-01-01
We have performed quantum Monte Carlo calculations to study the cohesion energetics of carbon allotropes, including sp3-bonded diamond, sp2-bonded graphene, sp-sp2 hybridized graphynes, and sp-bonded carbyne. The computed cohesive energies of diamond and graphene are found to be in excellent agreement with the corresponding values determined experimentally for diamond and graphite, respectively, when the zero-point energies, along with the interlayer binding in the case of graphite, are included. We have also found that the cohesive energy of graphyne decreases systematically as the ratio of sp-bonded carbon atoms increases. The cohesive energy of -graphyne, the most energetically stable graphyne, turns out to be 6.766(6) eV/atom, which is smaller than that of graphene by 0.698(12) eV/atom. Experimental difficulty in synthesizing graphynes could be explained by their significantly smaller cohesive energies. Finally, we conclude that the cohesive energy of a newly proposed two-dimensional carbon network can be accurately estimated with the carbon-carbon bond energies determined from the cohesive energies of graphene and three different graphynes.
Status of the MORSE multigroup Monte Carlo radiation transport code
Emmett, M.B.
1993-06-01
There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the most well-known and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular, the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.
Pseudopotentials for quantum Monte Carlo studies of transition metal oxides
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Krogel, Jaron T.; Santana Palacio, Juan A.; Reboredo, Fernando A.
2016-02-22
Quantum Monte Carlo (QMC) calculations of transition metal oxides are partially limited by the availability of high-quality pseudopotentials that are both accurate in QMC and compatible with major plane-wave electronic structure codes. We have generated a set of neon-core pseudopotentials with small cutoff radii for the early transition metal elements Sc to Zn within the local density approximation of density functional theory. The pseudopotentials have been directly tested for accuracy within QMC by calculating the first through fourth ionization potentials of the isolated transition metal (M) atoms and the binding curve of each M-O dimer. We find the ionization potentials to be accurate to 0.16(1) eV, on average, relative to experiment. The equilibrium bond lengths of the dimers are within 0.5(1)% of experimental values, on average, and the binding energies are also typically accurate to 0.18(3) eV. The level of accuracy we find for atoms and dimers is comparable to what has recently been observed for bulk metals and oxides using the same pseudopotentials. Our QMC pseudopotential results compare well with the findings of previous QMC studies and benchmark quantum chemical calculations.
High order Chin actions in path integral Monte Carlo
Sakkos, K.; Casulleras, J.; Boronat, J.
2009-05-28
High order actions proposed by Chin have been used for the first time in path integral Monte Carlo simulations. Contrary to the Takahashi-Imada action, which is accurate to the fourth order only for the trace, the Chin action is fully fourth order, with the additional advantage that the leading fourth-order error coefficients are finely tunable. By optimizing two free parameters entering in the new action, we show that the time step error dependence achieved is best fitted with a sixth order law. The computational effort per bead is increased but the total number of beads is greatly reduced and the efficiency improvement with respect to the primitive approximation is approximately a factor of 10. The Chin action is tested in a one-dimensional harmonic oscillator, a H{sub 2} drop, and bulk liquid {sup 4}He. In all cases a sixth-order law is obtained with values of the number of beads that compare well with the pair action approximation in the stringent test of superfluid {sup 4}He.
Reduced Variance for Material Sources in Implicit Monte Carlo
Urbatsch, Todd J.
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
Random Number Generation for Petascale Quantum Monte Carlo
Ashok Srinivasan
2010-03-16
The quality of random number generators can affect the results of Monte Carlo computations, especially when a large number of random numbers are consumed. Furthermore, correlations between different random number streams in a parallel computation can further affect the results. The SPRNG software, which the author had developed earlier, has pseudo-random number generators (PRNGs) capable of producing large numbers of streams with large periods. However, they had previously been empirically tested on only a thousand streams. In the work summarized here, we tested the SPRNG generators with over a hundred thousand streams, involving over 10^14 random numbers per test in some tests. We also tested the popular Mersenne Twister. We believe that these are the largest tests of PRNGs, both in terms of the number of streams tested and the number of random numbers tested. We observed defects in some of these generators, including the Mersenne Twister, while a few generators appeared to perform well. We also corrected an error in the implementation of one of the SPRNG generators.
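A minimal example of such an empirical test is a chi-square uniformity check on a single seeded stream. This is a sketch using Python's built-in Mersenne Twister, not the SPRNG test suite:

```python
import random

def chi2_uniformity(stream, bins=10, n=100_000):
    """Chi-square statistic for uniformity of a [0, 1) random stream."""
    counts = [0] * bins
    for _ in range(n):
        counts[int(stream.random() * bins)] += 1
    expected = n / bins
    return sum((c - expected) ** 2 / expected for c in counts)

# With bins - 1 = 9 degrees of freedom, a healthy generator should land
# below the 0.999 quantile (~27.9) in all but roughly 0.1% of runs.
chi2 = chi2_uniformity(random.Random(12345))
```

Large-scale PRNG testing repeats checks of this kind across many streams and much larger sample counts, where subtle defects and inter-stream correlations become visible.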
Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; Mozyrsky, Dmitry
2015-07-07
Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet efficient enough for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm by up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms will provide practical numerical solutions for simulating non-adiabatic dynamics in realistic molecular systems.
Monte Carlo analysis of localization errors in magnetoencephalography
Medvick, P.A.; Lewis, P.S.; Aine, C.; Flynn, E.R.
1989-01-01
In magnetoencephalography (MEG), the magnetic fields created by electrical activity in the brain are measured on the surface of the skull. To determine the location of the activity, the measured field is fit to an assumed source generator model, such as a current dipole, by minimizing chi-square. For current dipoles and other nonlinear source models, the fit is performed by an iterative least squares procedure such as the Levenberg-Marquardt algorithm. Once the fit has been computed, analysis of the resulting value of chi-square can determine whether the assumed source model is adequate to account for the measurements. If the source model is adequate, then the effect of measurement error on the fitted model parameters must be analyzed. Although these kinds of simulation studies can provide a rough idea of the effect that measurement error can be expected to have on source localization, they cannot provide detailed enough information to determine the effects that the errors in a particular measurement situation will produce. In this work, we introduce and describe the use of Monte Carlo-based techniques to analyze model fitting errors for real data. Given the details of the measurement setup and a statistical description of the measurement errors, these techniques determine the effects the errors have on the fitted model parameters. The effects can then be summarized in various ways such as parameter variances/covariances or multidimensional confidence regions. 8 refs., 3 figs.
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
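A toy version of the EM training loop for BMA weights with a shared Gaussian spread can be sketched as follows. This uses synthetic forecasts and a deliberately simplified model (the full method fits per-member biases and richer predictive pdfs):

```python
import numpy as np

def bma_em(forecasts, obs, iters=200):
    """EM estimation of BMA weights and a shared Gaussian variance.
    forecasts: (n_obs, K) ensemble member forecasts; obs: (n_obs,)."""
    n, K = forecasts.shape
    w = np.full(K, 1.0 / K)
    var = np.var(obs - forecasts.mean(axis=1))
    for _ in range(iters):
        # E-step: responsibility of each member for each observation.
        lik = (np.exp(-0.5 * (obs[:, None] - forecasts) ** 2 / var)
               / np.sqrt(2.0 * np.pi * var))
        z = w * lik
        z /= z.sum(axis=1, keepdims=True)
        # M-step: update the weights and the shared variance.
        w = z.mean(axis=0)
        var = np.sum(z * (obs[:, None] - forecasts) ** 2) / n
    return w, var

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, 500)
# Member 0 tracks the observations; member 1 is pure bias and noise.
F = np.column_stack([y + rng.normal(0.0, 0.3, 500),
                     rng.normal(5.0, 1.0, 500)])
w, var = bma_em(F, y)
```

EM drives the weight of the uninformative member toward zero; an MCMC sampler such as DREAM would instead return a posterior distribution over the weights and variance.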
Monte Carlo Simulations of Cosmic Rays Hadronic Interactions
Aguayo Navarrete, Estanislao; Orrell, John L.; Kouzes, Richard T.
2011-04-01
This document describes the construction and results of the MaCoR software tool, developed to model the hadronic interactions of cosmic rays with different geometries of materials. The ubiquity of cosmic radiation in the environment results in the activation of stable isotopes, referred to as cosmogenic activation. The objective is to use this application in conjunction with a model of the MAJORANA DEMONSTRATOR components, from extraction to deployment, to evaluate the cosmogenic activation of such components before and after deployment. Cosmic ray showers include several types of particles with a wide range of energies (MeV to GeV). It is infeasible to compute an exact result with a deterministic algorithm for this problem; Monte Carlo simulations are a more suitable approach to modeling cosmic ray hadronic interactions. To validate the results generated by the application, a test comparing experimental muon flux measurements with those predicted by the application is presented. The experimental and simulated results deviate by 3%.
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
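The weight-window mechanics being adjusted here, splitting above the window and Russian roulette below, with a cap that trades variance-reduction performance for bounded history length, can be sketched schematically. This is an illustration of the general technique, not the CCFE/MCNP implementation:

```python
import random

def apply_weight_window(weight, lo, hi, max_split=10, rng=random.Random(7)):
    """Return the list of post-window particle weights for one particle.
    Above the window the particle is split (capped at max_split, mimicking a
    deliberately relaxed window); below the window, Russian roulette is played."""
    if weight > hi:
        n = min(int(weight / hi) + 1, max_split)
        return [weight / n] * n          # splitting conserves total weight
    if weight < lo:
        survival = weight / lo
        # Roulette conserves weight only on average: survivors are boosted to lo.
        return [lo] if rng.random() < survival else []
    return [weight]
```

Without the cap, a particle arriving with a weight far above the window would spawn an enormous number of daughters, the "long history" that leaves other parallel processes idle.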
MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD
K. HANSON
2001-02-01
The Markov Chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy {var_phi}, where {var_phi} is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of {var_phi} and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of {var_phi}, is proposed to measure the convergence of the MCMC sequence.
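The alternation described above, drawing a fresh momentum vector and then following an approximately constant-H trajectory with a Metropolis accept/reject step, can be sketched for a 2-D isotropic Gaussian target. This is a minimal leapfrog-based illustration, not the paper's implementation:

```python
import numpy as np

def hmc_step(x, phi, grad_phi, eps=0.1, n_leap=20, rng=None):
    """One Hamiltonian Monte Carlo update for the target pdf exp(-phi(x))."""
    rng = rng or np.random.default_rng()
    p = rng.normal(size=x.size)                    # fresh momentum vector
    x_new = x.copy()
    p_new = p - 0.5 * eps * grad_phi(x_new)        # leapfrog half step
    for _ in range(n_leap):
        x_new = x_new + eps * p_new
        p_new = p_new - eps * grad_phi(x_new)
    p_new = p_new + 0.5 * eps * grad_phi(x_new)    # trim the extra half step
    # Metropolis test on the change in H = phi + |p|^2 / 2.
    dH = phi(x_new) - phi(x) + 0.5 * (p_new @ p_new - p @ p)
    return x_new if rng.random() < np.exp(-dH) else x

# Sample a standard 2-D Gaussian: phi(x) = x.x / 2, so grad phi(x) = x.
rng = np.random.default_rng(3)
phi = lambda v: 0.5 * v @ v
grad = lambda v: v
x = np.zeros(2)
samples = []
for _ in range(3000):
    x = hmc_step(x, phi, grad, rng=rng)
    samples.append(x)
samples = np.array(samples)
```

Because the leapfrog trajectory nearly conserves H, almost every proposal is accepted even though the jumps are large, which is the source of the method's advantage over a standard Metropolis random walk.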
The effect of laser spot shapes on polar-direct-drive implosions on the National Ignition Facility
Weilacher, F.; Radha, P. B.; Collins, T. J. B.; Marozas, J. A.
2015-03-15
Ongoing polar-direct-drive (PDD) implosions on the National Ignition Facility (NIF) [J. D. Lindl and E. I. Moses, Phys. Plasmas 18, 050901 (2011)] use existing NIF hardware, including indirect-drive phase plates. This limits the performance achievable in these implosions. Spot shapes are identified that significantly improve the uniformity of PDD NIF implosions; outer surface deviation is reduced by a factor of 7 at the end of the laser pulse, and hot-spot distortion is reduced by a factor of 2 when the shell has converged by a factor of ~10. As a result, the neutron yield increases by approximately a factor of 2. This set of laser spot shapes is a combination of circular and elliptical spots, along with elliptical spot shapes modulated by an additional higher-intensity ellipse offset from the center of the beam. This combination is motivated in this paper. It is also found that this improved implosion uniformity is obtained independent of the heat conduction model. This work indicates that significant improvement in performance can be obtained robustly with the proposed spot shapes.
APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula
Hwang, M.; Bae, S.; Chung, B. D.
2012-07-01
An analysis of the uncertainty quantification for the PWR LBLOCA by Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined from the PIRT results of the BEMUSE project. The Monte Carlo method shows that the 95th-percentile PCT value can be bounded reliably at a 95% confidence level using Wilks' formula. However, the extra margin given by Wilks' formula over the true 95th-percentile PCT from the Monte Carlo method was rather large: even using the 3rd-order formula, the value calculated with Wilks' formula is nearly 100 K above the true value. It is shown that, with ever-increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame. (authors)
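The run counts behind Wilks' formula come from elementary order statistics: the p-th largest of N Monte Carlo runs bounds the 95th percentile with 95% confidence once a binomial tail condition is satisfied. A minimal sketch of this standard calculation (not code from the paper):

```python
from math import comb

def wilks_sample_size(gamma, beta, order):
    # Smallest N such that the order-th largest of N i.i.d. runs bounds the
    # gamma-quantile (e.g. the 95th-percentile PCT) with confidence beta.
    # Confidence = P(Binomial(N, gamma) <= N - order).
    n = order
    while True:
        conf = sum(comb(n, k) * gamma**k * (1 - gamma)**(n - k)
                   for k in range(n - order + 1))
        if conf >= beta:
            return n
        n += 1

sizes = [wilks_sample_size(0.95, 0.95, p) for p in (1, 2, 3)]  # [59, 93, 124]
```

This reproduces the familiar 59/93/124 runs for the 1st-, 2nd-, and 3rd-order one-sided 95/95 bounds.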
On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems
Walsh, Jon
2015-08-31
The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
On-the-Fly Doppler Broadening for Monte Carlo Codes (Journal Article)
Office of Scientific and Technical Information (OSTI)
Authors: Yesilyurt, G.; Martin, W. (Univ. of Michigan); Brown, F. (Los Alamos National Laboratory)
MODELING OF HIGH SPEED FRICTION STIR SPOT WELDING USING A LAGRANGIAN FINITE ELEMENT APPROACH
Miles, Michael; Karki, U.; Woodward, C.; Hovanski, Yuri
2013-09-03
Friction stir spot welding (FSSW) has been shown to be capable of joining steels of very high strength, while also being very flexible in terms of controlling the heat of welding and the resulting microstructure of the joint. This makes FSSW a potential alternative to resistance spot welding (RSW) if tool life is sufficiently high, and if machine spindle loads are sufficiently low that the process can be implemented on an industrial robot. Robots for spot welding can typically sustain vertical loads of about 8 kN, but FSSW at tool speeds of less than 3000 rpm causes loads that are too high, in the range of 11-14 kN. Therefore, in the current work tool speeds of 3000 rpm and higher were employed, in order to generate heat more quickly and to reduce welding loads to acceptable levels. The FSSW process was modeled using a finite element approach with the Forge® software package. An updated Lagrangian scheme with explicit time integration was employed to model the flow of the sheet material, subjected to boundary conditions of a rotating tool and a fixed backing plate [3]. The modeling approach is two-dimensional and axisymmetric, but with an aspect of three dimensions in the thermal boundary conditions: material flow was calculated from a two-dimensional velocity field, while heat generated by friction was computed using a virtual rotational velocity component from the tool surface. An isotropic, viscoplastic Norton-Hoff law was used to model the evolution of material flow stress as a function of strain, strain rate, and temperature. The model predicted welding temperatures and the movement of the joint interface with reasonable accuracy for the welding of a dual phase 980 steel.
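As a rough illustration of the constitutive model named above, a Norton-Hoff viscoplastic law makes the flow stress a power-law function of strain rate, with a consistency term carrying the strain hardening and thermal softening. The functional form below is a common textbook variant, and every coefficient is an invented placeholder, not a value from the paper:

```python
import math

# Illustrative Norton-Hoff flow stress (generic form; coefficients invented):
#   sigma_bar = sqrt(3) * K * (sqrt(3) * strain_rate)**m
#   K = K0 * (e0 + strain)**n * exp(beta / T)
def norton_hoff_stress(strain, strain_rate, T_kelvin,
                       K0=50.0, e0=0.01, n=0.2, m=0.12, beta=3000.0):
    K = K0 * (e0 + strain)**n * math.exp(beta / T_kelvin)  # consistency
    return math.sqrt(3) * K * (math.sqrt(3) * strain_rate)**m

# flow stress softens with temperature and hardens with strain rate:
s_cold = norton_hoff_stress(0.5, 10.0, 900.0)
s_hot = norton_hoff_stress(0.5, 10.0, 1400.0)
```

The qualitative behavior (thermal softening under the tool, rate hardening in the stir zone) is what the finite element model exploits.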
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Numerical studies of third-harmonic generation in laser filament in air perturbed by plasma spot
Feng Liubin; Lu Xin; Liu Xiaolong; Li Yutong; Chen Liming; Ma Jinglong; Dong Quanli; Wang Weimin; Xi Tingting; Sheng Zhengming; Zhang Jie; He Duanwei
2012-07-15
Third-harmonic emission from a laser filament intercepted by a plasma spot is studied by numerical simulations. Significant enhancement of the third-harmonic generation is obtained due to the disturbance by the additional plasma. The contributions of the pure plasma effect and of a possible plasma-enhanced third-order susceptibility to the third-harmonic enhancement are compared. It is shown that the plasma-induced cancellation of the destructive interference [Y. Liu et al., Opt. Commun. 284, 4706 (2011)] in the two-color filament is the dominant mechanism of the enhancement of third-harmonic generation.
Utility of Monte Carlo Modelling for Holdup Measurements.
Belian, Anthony P.; Russo, P. A.; Weier, Dennis R.
2005-01-01
Non-destructive assay (NDA) measurements performed to locate and quantify holdup in the Oak Ridge K-25 enrichment cascade used neutron totals counting and low-resolution gamma-ray spectroscopy. This facility housed the gaseous diffusion process for enrichment of uranium, in the form of UF₆ gas, from ~20% to 93%. The ²³⁵U inventory in K-25 is all holdup. These buildings have been slated for decontamination and decommissioning. The NDA measurements establish the inventory quantities and will be used to assure criticality safety and meet criteria for waste analysis and transportation. The tendency to err on the side of conservatism for the sake of criticality safety in specifying total NDA uncertainty argues, in the interests of safety and costs, for obtaining the best possible value of uncertainty at the conservative confidence level for each item of process equipment. Variable deposit distribution is a complex systematic effect (i.e., determined by multiple independent variables) on the portable NDA results for very large and bulk converters that contributes greatly to total uncertainty for holdup in converters measured by gamma or neutron NDA methods. Because the magnitudes of complex systematic effects are difficult to estimate, computational tools are important for evaluating those that are large. Motivated by very large discrepancies between gamma and neutron measurements of high-mass converters, with gamma results tending to dominate, the Monte Carlo code MCNP has been used to determine the systematic effects of deposit distribution on gamma and neutron results for ²³⁵U holdup mass in converters. This paper details the numerical methodology used to evaluate large systematic effects unique to each measurement type, validates the methodology by comparison with measurements, and discusses how modeling tools can supplement the calibration of instruments used for holdup measurements by providing realistic values at well
MONTE CARLO SIMULATION OF METASTABLE OXYGEN PHOTOCHEMISTRY IN COMETARY ATMOSPHERES
Bisikalo, D. V.; Shematovich, V. I. [Institute of Astronomy of the Russian Academy of Sciences, Moscow (Russian Federation)]; Gérard, J.-C.; Hubert, B. [Laboratory for Planetary and Atmospheric Physics (LPAP), University of Liège, Liège (Belgium)]; Jehin, E.; Decock, A. [Origines Cosmologiques et Astrophysiques (ORCA), University of Liège (Belgium)]; Hutsemékers, D. [Extragalactic Astrophysics and Space Observations (EASO), University of Liège (Belgium)]; Manfroid, J. [High Energy Astrophysics Group (GAPHE), University of Liège (Belgium)], E-mail: B.Hubert@ulg.ac.be
2015-01-01
Cometary atmospheres are produced by the outgassing of material, mainly H₂O, CO, and CO₂, from the nucleus of the comet under the energy input from the Sun. Subsequent photochemical processes lead to the production of other species generally absent from the nucleus, such as OH. Although all comets are different, they all have a highly rarefied atmosphere, which is an ideal environment for nonthermal photochemical processes to take place and influence the detailed state of the atmosphere. We develop a Monte Carlo model of the coma photochemistry. We compute the energy distribution functions (EDF) of the metastable O(¹D) and O(¹S) species and obtain the red (630 nm) and green (557.7 nm) spectral line shapes of the full coma, consistent with the computed EDFs and the expansion velocity. We show that both species have a severely non-Maxwellian EDF, which results in broad spectral lines; the suprathermal broadening dominates over that due to the expansion motion. We apply our model to the atmospheres of comets C/1996 B2 (Hyakutake) and 103P/Hartley 2. The computed width of the green line, expressed in terms of speed, is lower than that of the red line. This result is comparable to previous theoretical analyses, but in disagreement with observations. We explain that the spectral line shape depends not only on the exothermicity of the photochemical production mechanisms, but also on thermalization by elastic collisions, which reduces the width of the emission line coming from the O(¹D) level, owing to its longer lifetime.
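The line-broadening argument can be illustrated schematically: the sketch below (not the authors' model) Monte Carlo samples line-of-sight velocities for a thermal oxygen population with and without an isotropic suprathermal ejection component, and compares the resulting rms line widths. The temperature and ejection speed are invented for illustration:

```python
import math, random

random.seed(1)

# Illustrative sketch only: line-of-sight (LOS) velocities of O(1D) atoms as
# a thermal (Maxwellian) component plus an isotropic suprathermal ejection
# speed u acquired in photodissociation.
def los_velocities(n, t_kelvin, u_suprathermal):
    kb = 1.380649e-23                       # J/K
    m_o = 16 * 1.66054e-27                  # oxygen atom mass, kg
    sigma = math.sqrt(kb * t_kelvin / m_o)  # 1-D thermal velocity spread
    vs = []
    for _ in range(n):
        v_th = random.gauss(0.0, sigma)
        # isotropic ejection: the LOS projection u*cos(theta) of a fixed
        # speed u is uniformly distributed on [-u, u]
        v_sup = random.uniform(-u_suprathermal, u_suprathermal)
        vs.append(v_th + v_sup)
    return vs

def rms_width(vs):                          # proxy for the spectral line width
    mean = sum(vs) / len(vs)
    return math.sqrt(sum((v - mean) ** 2 for v in vs) / len(vs))

thermal = rms_width(los_velocities(20000, 200.0, 0.0))
suprathermal = rms_width(los_velocities(20000, 200.0, 1500.0))
# the non-Maxwellian population broadens the line well beyond the thermal width
```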
Quantum Monte Carlo methods and lithium cluster properties. [Atomic clusters
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, and is shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and the bias is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity to the anisotropic harmonic oscillator model shape for the given number of valence electrons.
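Because the time-step bias is at least linear in the time step τ, D-QMC energies are commonly extrapolated to τ → 0 with a linear least-squares fit. A minimal sketch with invented numbers (the energy and bias slope below are synthetic, not results from this work):

```python
# Linear least-squares fit E(tau) = E0 + b*tau, then read off the
# tau -> 0 intercept E0 as the extrapolated D-QMC energy.
def linear_fit(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept E0
    return a, b

taus = [0.01, 0.02, 0.04, 0.08]                    # time steps (made up)
energies = [-7.478 + 0.05 * t for t in taus]       # synthetic linear bias
e0, slope = linear_fit(taus, energies)             # e0 -> -7.478
```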
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
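The mixed-language pattern described above can be illustrated schematically: a thin interactive Python layer drives a low-level compute kernel. Everything below is hypothetical (a pure-Python stand-in for the compiled core; none of these class or method names come from PyMercury or Mercury):

```python
import random

class TransportKernel:                  # stand-in for a compiled C/C++ core
    def __init__(self, seed=0):
        self._rng = random.Random(seed)
        self.tally = 0.0

    def run_batch(self, n_particles):
        # toy "transport": tally exponentially distributed path lengths
        self.tally += sum(self._rng.expovariate(1.0)
                          for _ in range(n_particles))
        return self.tally

class Simulation:                       # the interactive Python-facing wrapper
    def __init__(self, seed=0):
        self._kernel = TransportKernel(seed)
        self.history = []

    def step(self, n=1000):
        self.history.append(self._kernel.run_batch(n))
        return self.history[-1]

    def mean_path_length(self, n_total):
        return self._kernel.tally / n_total

sim = Simulation(seed=42)
for _ in range(5):
    sim.step(1000)
# from an interactive session one can now inspect sim.history or query
# sim.mean_path_length(5000), which should be near 1.0 for unit mean free path
```

The point of the pattern is exactly this kind of incremental driving and inspection: the hot loop stays in the low-level kernel, while steering, testing, and validation happen interactively in Python.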
Khodabakhshi, F.; Kazeminezhad, M., E-mail: mkazemi@sharif.edu; Kokabi, A.H.
2012-07-15
Constrained groove pressing as a severe plastic deformation method is utilized to produce ultra-fine grained low carbon steel sheets. The ultra-fine grained sheets are joined via the resistance spot welding process and the characteristics of the spot welds are investigated. The resistance spot welding process is optimized for welding of sheets with different severe deformations, and the results are compared with those of as-received samples. The effects of failure mode and expulsion on the performance of ultra-fine grained sheet spot welds are investigated in the present paper, and the welding current and time of the resistance spot welding process are optimized accordingly. Failure mode and failure load obtained in the tensile-shear test, microhardness, X-ray diffraction, and transmission and scanning electron microscope images have been used to describe the performance of the spot welds. The region between the interfacial-to-pullout mode transition and the expulsion limit is defined as the optimum welding condition. The results show that the optimum welding parameters (welding current and welding time) for ultra-fine grained sheets are shifted to lower values with respect to those for as-received specimens. In ultra-fine grained sheets, a new region, termed the recrystallized zone, is formed in addition to the fusion zone, heat affected zone, and base metal. It is shown that the microstructures of the different zones in ultra-fine grained sheets are finer than those of as-received sheets. Highlights: • The resistance spot welding process is optimized for joining of UFG steel sheets. • Optimum welding current and time decrease with increasing CGP pass number. • Microhardness at the BM, HAZ, FZ, and recrystallized zone is enhanced due to CGP.
Monitoring seasonal and annual wetland changes in a freshwater marsh with SPOT HRV data
Mackey, H.E. Jr.
1989-12-31
Eleven dates of SPOT HRV data along with near-concurrent vertical aerial photographic and phenological data for 1987, 1988, and 1989 were evaluated to determine seasonal and annual changes in a 400-hectare, southeastern freshwater marsh. Early April through mid-May was the best time to discriminate among the cypress (Taxodium distichum)/water tupelo (Nyssa aquatica) swamp forest and the non-persistent (Ludwigia spp.) and persistent (Typha spp.) stands in this wetland. Furthermore, a ten-fold decrease in flow rate, from 11 cubic meters per sec (cms) in 1987 to one cms in 1988, was recorded in the marsh, followed by a shift to drier wetland communities. The Savannah River Site (SRS), maintained by the US Department of Energy, is a 777 km² area located in south central South Carolina. Five tributaries of the Savannah River run southwest through the SRS and into the floodplain swamp of the Savannah River. This paper describes the use of SPOT HRV data to monitor seasonal and annual trends in one of these swamp deltas, Pen Branch Delta, during a three-year period, 1987-1989.
Sensitivity of inertial confinement fusion hot spot properties to the deuterium-tritium fuel adiabat
Melvin, J.; Lim, H.; Rana, V.; Glimm, J.; Cheng, B.; Sharp, D. H.; Wilson, D. C.
2015-02-15
We determine the dependence of key Inertial Confinement Fusion (ICF) hot spot simulation properties on the deuterium-tritium fuel adiabat, here modified by addition of energy to the cold shell. Variation of this parameter reduces the simulation-to-experiment discrepancy in some, but not all, experimentally inferred quantities. Using simulations with radiation drives tuned to match experimental shots N120321 and N120405 from the National Ignition Campaign (NIC), we carry out sets of simulations with varying amounts of added entropy and examine the sensitivities of important experimental quantities. Neutron yields, burn widths, hot spot densities, and pressures follow a trend approaching their experimentally inferred values. Ion temperatures and areal densities are sensitive to the adiabat changes, but do not necessarily converge to their experimental values with the added entropy. This suggests that a modification of the simulation adiabat is one, but not the only, explanation of the observed simulation-to-experiment discrepancies. In addition, we use a theoretical model to predict 3D mix and observe a slight trend toward less mixing as the entropy is enhanced. Instantaneous quantities are assessed at the time of maximum neutron production, determined dynamically within each simulation. These trends contribute to ICF science as an effort to understand the NIC simulation-to-experiment discrepancy, and in their relation to the high-foot experiments, which feature a higher adiabat in the experimental design and an improved neutron yield in the experimental results.
Joint strength in high speed friction stir spot welded DP 980 steel
Saunders, Nathan; Miles, Michael; Hartman, Trent; Hovanski, Yuri; Hong, Sung Tae; Steel, Russell
2014-05-01
High speed friction stir spot welding was applied to 1.2 mm thick DP 980 steel sheets under different welding conditions, using PCBN tools. The range of vertical feed rates used during welding was 2.5-102 mm per minute, while the range of spindle speeds was 2500-6000 rpm. Extended testing was carried out for five different sets of welding conditions, until tool failure. These welding conditions resulted in vertical welding loads of 3.6-8.2 kN and lap shear tension failure loads of 8.9-11.1 kN. PCBN tools were shown, in the best case, to provide lap shear tension fracture loads at or above 9 kN for 900 spot welds, after which tool failure caused a rapid drop in joint strength. Joint strength was shown to be strongly correlated to bond area, which was measured from weld cross sections. Failure modes of the tested joints were a function of bond area and softening that occurred in the heat-affected zone.
Wear testing of friction stir spot welding tools for joining of DP 980 Steel
Ridges, Chris; Miles, Michael; Hovanski, Yuri; Peterson, Jeremy; Steel, Russell
2011-06-06
Friction stir spot welding has been shown to be a viable method of joining ultra high strength steel (UHSS), both in terms of joint strength and process cycle time. However, the cost of tooling must be reasonable in order for this method to be adopted as an industrial process. Several tooling materials have been evaluated in prior studies, including silicon nitride and polycrystalline cubic boron nitride (PCBN). Recently a new tool alloy has been developed, where a blend of PCBN and tungsten rhenium (W-Re) was used in order to improve the toughness of the tool. Wear testing results are presented for two of these alloys: one with a composition of 60% PCBN and 40% W-Re (designated as Q60), and one with 70% PCBN and 30% W-Re (designated as Q70). The sheet material used for all wear testing was DP 980. Tool profiles were measured periodically during the testing process in order to show the progression of wear as a function of the number of spots produced. Lap shear testing was done each time a tool profile was taken in order to show the relationship between tool wear and joint strength. For the welding parameters chosen for this study the Q70 tool provided the best combination of wear resistance and joint strength.
SMART II : the spot market agent research tool version 2.0.
North, M. J. N.
2000-12-14
Argonne National Laboratory (ANL) has worked closely with Western Area Power Administration (Western) over many years to develop a variety of electric power marketing and transmission system models that are being used for ongoing system planning and operation as well as analytic studies. Western markets and delivers reliable, cost-based electric power from 56 power plants to millions of consumers in 15 states. The Spot Market Agent Research Tool Version 2.0 (SMART II) is an investigative system that partially implements some important components of several existing ANL linear programming models, including some used by Western. SMART II does not implement a complete model of the Western utility system but it does include several salient features of this network for exploratory purposes. SMART II uses a Swarm agent-based framework. SMART II agents model bulk electric power transaction dynamics with recognition for marginal costs as well as transmission and generation constraints. SMART II uses a sparse graph of nodes and links to model the electric power spot market. The nodes represent power generators and consumers with distinct marginal decision curves and varying investment capital as well as individual learning parameters. The links represent transmission lines with individual capacities taken from a range of central distribution, outlying distribution, and feeder line types. The application of SMART II to electric power systems studies has produced useful results different from those often found using more traditional techniques. Use of the advanced features offered by the Swarm modeling environment simplified the creation of the SMART II model.
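The marginal-cost trading at the heart of such agent models can be sketched as a toy block-clearing problem under a single transmission capacity limit: generation blocks are matched to demand blocks while the marginal value exceeds the marginal cost. All prices, quantities, and the clearing rule below are invented for illustration and are far simpler than SMART II itself:

```python
def clear_market(offers, bids, line_capacity):
    """offers/bids: lists of (price, quantity_mw) blocks; returns MW traded."""
    offers = sorted(offers)                    # cheapest generation first
    bids = sorted(bids, reverse=True)          # highest-value demand first
    traded, i, j = 0.0, 0, 0
    while i < len(offers) and j < len(bids) and traded < line_capacity:
        cost, q_supply = offers[i]
        value, q_demand = bids[j]
        if value <= cost:                      # no more profitable trades
            break
        q = min(q_supply, q_demand, line_capacity - traded)
        traded += q
        offers[i] = (cost, q_supply - q)
        bids[j] = (value, q_demand - q)
        if offers[i][1] == 0:
            i += 1
        if bids[j][1] == 0:
            j += 1
    return traded

mw = clear_market(offers=[(20, 50), (35, 50), (60, 50)],
                  bids=[(70, 40), (40, 40), (25, 40)],
                  line_capacity=90)            # 80 MW clears; the 25 $/MW bid
                                               # is below remaining costs
```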
DEFINING THE 'BLIND SPOT' OF HINODE EIS AND XRT TEMPERATURE MEASUREMENTS
Winebarger, Amy R.; Cirtain, Jonathan; Mulu-Moore, Fana [NASA Marshall Space Flight Center, VP 62, Huntsville, AL 35812 (United States); Warren, Harry P. [Space Science Division, Naval Research Laboratory, Washington, DC 20375 (United States); Schmelz, Joan T. [Physics Department, University of Memphis, Memphis, TN 38152 (United States); Golub, Leon [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Kobayashi, Ken, E-mail: amy.r.winebarger@nasa.gov [Center for Space Plasma and Aeronomic Research, 320 Sparkman Dr, Huntsville, AL 35805 (United States)
2012-02-20
Observing high-temperature, low emission measure plasma is key to unlocking the coronal heating problem. With current instrumentation, a combination of EUV spectral data from the Hinode Extreme-ultraviolet Imaging Spectrometer (EIS; sensitive to temperatures up to 4 MK) and broadband filter data from the Hinode X-ray Telescope (XRT; sensitive to higher temperatures) is typically used to diagnose the temperature structure of the observed plasma. In this Letter, we demonstrate that a 'blind spot' exists in temperature-emission measure space for combined Hinode EIS and XRT observations. For a typical active region core with significant emission at 3-4 MK, Hinode EIS and XRT are insensitive to plasma with temperatures greater than ~6 MK and emission measures less than ~10²⁷ cm⁻⁵. We then demonstrate that the temperature and emission measure limits of this blind spot depend upon the temperature distribution of the plasma along the line of sight by considering a hypothetical emission measure distribution sharply peaked at 1 MK. For this emission measure distribution, we find that EIS and XRT are insensitive to plasma with emission measures less than ~10²⁶ cm⁻⁵. We suggest that a spatially and spectrally resolved 6-24 Å spectrum would improve the sensitivity to this high-temperature, low emission measure plasma.
Sun, Xin; Stephens, Elizabeth V.; Khaleel, Mohammad A.
2006-04-28
This paper examines the effects of fusion zone size on the failure modes, static strength, and energy absorption of resistance spot welds (RSW) of advanced high strength steels (AHSS). DP800 and TRIP800 spot welds are considered. The main failure modes for spot welds are nugget pullout and interfacial fracture; partial interfacial fracture is also observed. The critical fusion zone sizes to ensure the nugget pullout failure mode are developed for both DP800 and TRIP800 using a limit-load-based analytical model and micro-hardness measurements of the weld cross sections. Static weld strength tests using cross tension samples were performed on joint populations with controlled fusion zone sizes. The resulting peak load and energy absorption levels associated with each failure mode were studied using statistical data analysis tools. The results of this study show that the conventional weld size of 4√t cannot produce the nugget pullout mode for either the DP800 or TRIP800 material. The results also suggest that performance-based spot weld acceptance criteria should be developed for different AHSS spot welds.
Multiparticle Monte Carlo Code System for Shielding and Criticality Use.
Energy Science and Technology Software Center (OSTI)
2015-06-01
Version 00. COG is a modern, full-featured Monte Carlo radiation transport code that provides accurate answers to complex shielding, criticality, and activation problems. COG was written to be state-of-the-art and free of the physics approximations and compromises found in earlier codes. COG is fully 3-D, uses point-wise cross sections and exact angular scattering, and allows a full range of biasing options to speed up solutions for deep penetration problems. Additionally, a criticality option is available for computing Keff for assemblies of fissile materials. ENDL or ENDFB cross section libraries may be used. COG home page: http://cog.llnl.gov. Cross section libraries are included in the package; COG can use either the LLNL ENDL-90 cross section set or the ENDFB/VI set. Analytic surfaces are used to describe geometric boundaries. Parts (volumes) are described by a method of Constructive Solid Geometry. Surface types include surfaces of up to fourth order, and pseudo-surfaces such as boxes, finite cylinders, and figures of revolution. Repeated assemblies need be defined only once. Parts are visualized in cross-section and perspective picture views. A lattice feature simplifies the specification of regular arrays of parts. Parallel processing under MPI is supported for multi-CPU systems. Source and random-walk biasing techniques may be selected to improve solution statistics. These include source angular biasing, importance weighting, particle splitting and Russian roulette, pathlength stretching, point detectors, scattered direction biasing, and forced collisions. Criticality: for a fissioning system, COG will compute Keff by transporting batches of neutrons through the system. Activation: COG can compute gamma-ray doses due to neutron-activated materials, starting with just a neutron source. Coupled problems: COG can solve coupled problems involving neutrons, photons, and electrons. COG 11.1 is an updated version of COG11.1 BETA 2 (RSICC C00777MNYCP02
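The batch-wise Keff estimate mentioned above can be illustrated with a toy one-group, infinite-medium model. The cross sections and ν below are invented, and a real code like COG tracks full 3-D geometry with point-wise data; here k simply equals ν·σf/(σf + σc):

```python
import random

random.seed(7)

# Toy batch-wise k estimation in a one-group infinite medium (invented data):
SIGMA_F, SIGMA_C, NU = 0.06, 0.04, 2.5
K_ANALYTIC = NU * SIGMA_F / (SIGMA_F + SIGMA_C)    # = 1.5

def run_batch(n_neutrons):
    produced = 0
    for _ in range(n_neutrons):
        # each neutron is absorbed: fission with prob sigma_f/(sigma_f+sigma_c)
        if random.random() < SIGMA_F / (SIGMA_F + SIGMA_C):
            # integer fission multiplicity with mean NU (2 or 3 here)
            produced += 2 + (random.random() < NU - 2)
    return produced / n_neutrons                    # batch k estimate

k_estimates = [run_batch(20000) for _ in range(10)]
k_mean = sum(k_estimates) / len(k_estimates)        # close to K_ANALYTIC
```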
Cranmer-Sargison, G.; Weston, S.; Evans, J. A.; Sidhu, N. P.; Thwaites, D. I.
2011-12-15
Purpose: The goal of this work was to implement a recently proposed small field dosimetry formalism [Alfonso et al., Med. Phys. 35(12), 5179-5186 (2008)] for a comprehensive set of diode detectors and provide the required Monte Carlo generated factors to correct measurements. Methods: Jaw collimated square small field sizes of side 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, and 3.0 cm, normalized to a reference field of 5.0 cm × 5.0 cm, were used throughout this study. Initial linac modeling was performed with electron source parameters at 6.0, 6.1, and 6.2 MeV with the Gaussian FWHM decreased in steps of 0.010 cm from 0.150 to 0.100 cm. DOSRZnrc was used to develop models of the IBA stereotactic field diode (SFD) as well as the PTW T60008, T60012, T60016, and T60017 field diodes. Simulations were run and isocentric, detector specific, output ratios (OR_det) calculated at depths of 1.5, 5.0, and 10.0 cm. This was performed using the following source parameter subset: 6.1 and 6.2 MeV with a FWHM = 0.100, 0.110, and 0.120 cm. The source parameters were finalized by comparing experimental detector specific output ratios with simulation. Simulations were then run with the active volume and surrounding materials set to water and the replacement correction factors calculated according to the newly proposed formalism. Results: In all cases, the experimental field size widths (at the 50% level) were found to be smaller than the nominal, and therefore, the simulated field sizes were adjusted accordingly. At a FWHM = 0.150 cm simulation produced penumbral widths that were too broad. The fit improved as the FWHM was decreased, yet for all but the smallest field size worsened again at a FWHM = 0.100 cm. The simulated OR_det were found to be greater than, equivalent to, and less than experiment for spot size FWHM = 0.100, 0.110, and 0.120 cm, respectively. This is due to the change in source occlusion as a function of FWHM and field size. The corrections required for the 0.5 cm field
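The correction in the Alfonso-style formalism amounts to simple arithmetic: the output correction factor is the Monte Carlo water-dose ratio for the clinical and machine-specific reference fields divided by the measured detector output ratio OR_det. A sketch with invented numbers (not values from this study):

```python
# Small-field output correction factor: true (in-water) field output factor
# divided by the detector-measured output ratio. Numbers are illustrative.
def output_correction_factor(dose_clin_w, dose_msr_w, m_clin, m_msr):
    omega = dose_clin_w / dose_msr_w   # Monte Carlo field output factor
    or_det = m_clin / m_msr            # detector output ratio OR_det
    return omega / or_det

# e.g. a diode over-responding by ~3% in a 0.5 cm field needs k just below 1:
k = output_correction_factor(dose_clin_w=0.62, dose_msr_w=1.00,
                             m_clin=0.639, m_msr=1.00)
```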
Nguyen, Vanthan; Yan, Lihe; Si, Jinhai; Hou, Xun
2015-02-28
Photoluminescent carbon nanodots (C-dots) with size tunability and uniformity were fabricated in polyethylene glycol (PEG{sub 200N}) solution using a femtosecond laser ablation method. The size distributions and photoluminescence (PL) properties of the C-dots are well controlled by adjusting the combined parameters of laser fluence, spot size, and irradiation time. The size reduction efficiency of the C-dots progressively increases with decreasing laser fluence and spot size. The PL spectra are red-shifted and the quantum yields decrease with increasing C-dot size, which could be attributed to the more complex surface functional groups attached to the C-dots induced at higher laser fluence and larger spot size. Moreover, an increase in irradiation time leads to a decrease in the size of the C-dots, but long-time irradiation results in the generation of complex functional groups on the C-dots, so that the PL spectra are again red-shifted.
Hyer, Daniel E.; Hill, Patrick M.; Wang, Dongxu; Smith, Blake R.; Flynn, Ryan T.
2014-09-15
Purpose: In the absence of a collimation system the lateral penumbra of spot scanning (SS) dose distributions delivered by low energy proton beams is highly dependent on the spot size. For current commercial equipment, spot size increases with decreasing proton energy thereby reducing the benefit of the SS technique. This paper presents a dynamic collimation system (DCS) for sharpening the lateral penumbra of proton therapy dose distributions delivered by SS. Methods: The collimation system presented here exploits the property that a proton pencil beam used for SS requires collimation only when it is near the target edge, enabling the use of trimmers that are in motion at times when the pencil beam is away from the target edge. The device consists of two pairs of parallel nickel trimmer blades of 2 cm thickness and dimensions of 2 cm × 18 cm in the beam's eye view. The two pairs of trimmer blades are rotated 90° relative to each other to form a rectangular shape. Each trimmer blade is capable of rapid motion in the direction perpendicular to the central beam axis by means of a linear motor, with maximum velocity and acceleration of 2.5 m/s and 19.6 m/s{sup 2}, respectively. The blades travel on curved tracks to match the divergence of the proton source. An algorithm for selecting blade positions is developed to minimize the dose delivered outside of the target, and treatment plans are created both with and without the DCS. Results: The snout of the DCS has outer dimensions of 22.6 × 22.6 cm{sup 2} and is capable of delivering a minimum treatment field size of 15 × 15 cm{sup 2}. Using currently available components, the constructed system would weigh less than 20 kg. For irregularly shaped fields, the use of the DCS reduces the mean dose outside of a 2D target of 46.6 cm{sup 2} by approximately 40% as compared to an identical plan without collimation. The use of the DCS increased treatment time by 1–3 s per energy layer. Conclusions: The spread of the
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry
Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; Mueller, Jonathon W.; Cody, Dianna D.; DeMarco, John J.
2015-02-15
Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
Farlotti, M.; Larsen, E. W.
2013-07-01
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
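The eigenvalue step at the heart of the fission matrix method can be sketched with a simple power iteration; the matrix values, tolerance, and two-region setup below are illustrative, not taken from the paper:

```python
import numpy as np

def fission_matrix_eigen(F, tol=1e-10, max_iter=1000):
    """Power iteration on a fission matrix F, where F[i, j] is the
    expected number of fission neutrons born in region i per fission
    neutron born in region j (the entries a Monte Carlo run tallies).
    Returns the dominant eigenvalue (k-eff) and the normalized
    fission source (the critical eigenfunction on the region mesh)."""
    n = F.shape[0]
    s = np.ones(n) / n              # flat initial source guess
    k = 1.0
    for _ in range(max_iter):
        s_new = F @ s
        k_new = s_new.sum()         # eigenvalue estimate (s sums to 1)
        s_new /= k_new              # renormalize the source
        if abs(k_new - k) < tol and np.allclose(s_new, s, atol=tol):
            return k_new, s_new
        k, s = k_new, s_new
    return k, s

# Toy 2-"assembly" pool: strong self-multiplication, weak coupling
# through the absorbing gap between assemblies.
F = np.array([[0.90, 0.05],
              [0.05, 0.90]])
k_eff, source = fission_matrix_eigen(F)   # k_eff = 0.95, source = [0.5, 0.5]
```

Because the matrix is built once from tallies, the iteration above is cheap, which is what lets the method run enough cycles to control the eigenfunction error.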
Komarov, V M; Charukhchev, A V; Andreev, A A; Platonov, K Yu
2014-12-31
We have investigated the effect of the laser spot shape on the spatial distribution of accelerated ions on the front and back sides of a thin target irradiated by a picosecond laser pulse with an intensity of (3 – 4) × 10{sup 18} W cm{sup -2}. Experimental data are compared with numerical calculations. It is shown that the spatial structure of the ion bunch on the front side of the target resembles the laser spot structure rotated by 90°. (Interaction of laser radiation with matter; laser plasma)
Reciprocal space mapping by spot profile analyzing low energy electron diffraction
Meyer zu Heringdorf, Frank-J.; Horn-von Hoegen, Michael
2005-08-15
We present an experimental approach for the recording of two-dimensional reciprocal space maps using spot profile analyzing low energy electron diffraction (SPA-LEED). A specialized alignment procedure eliminates the shifting of LEED patterns on the screen which is commonly observed upon variation of the electron energy. After the alignment, a set of one-dimensional sections through the diffraction pattern is recorded at different energies. A freely available software tool is used to assemble the sections into a reciprocal space map. The necessary modifications of the Burr-Brown computer interface of the two Leybold and Omicron type SPA-LEED instruments are discussed and step-by-step instructions are given to adapt the SPA 4.1d software to the changed hardware. Au induced faceting of 4 deg. vicinal Si(001) is used as an example to demonstrate the technique.
Kury, P.; Zahl, P.; Horn-von Hoegen, M.; Voges, C.; Frischat, H.; Guenter, H.-L.; Pfnuer, H.; Henzler, M.
2004-11-01
Spot profile analysis low energy electron diffraction (SPA-LEED) is one of the most versatile and powerful methods for the determination of the structure and morphology of surfaces even at elevated temperatures. In setups where the sample is heated directly by an electric current, the resolution of the diffraction images at higher temperatures can be heavily degraded due to the inhomogeneous electric and magnetic fields around the sample. Here we present an easily applicable modification of the common data acquisition hardware of the SPA-LEED, which enables the system to work in a pulsed heating mode: Instead of heating the sample with a constant current, a square wave is used and electron counting is only performed when the current through the sample vanishes. Thus, undistorted diffraction images can be acquired at high temperatures.
On the development of nugget growth model for resistance spot welding
Zhou, Kang, E-mail: zhoukang326@126.com; Cai, Lilong, E-mail: melcai@ust.hk [Department of Mechanical and Aerospace Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon (Hong Kong)]
2014-04-28
In this paper, we developed a general mathematical model to estimate the nugget growth process based on the heat energy delivered into the welds by resistance spot welding. According to the principles of thermodynamics and heat transfer, and accounting for the effect of electrode force during the welding process, the shape of the nugget can be estimated. A mathematical model relating the heat energy absorbed to the nugget diameter can then be obtained theoretically. It is shown in this paper that the nugget diameter can be precisely described by piecewise fractal polynomial functions. Experiments were conducted under different welding operation conditions, such as welding current, workpiece thickness, and width, to validate the model and the theoretical analysis. All the experiments confirmed that the proposed model can predict nugget diameters with high accuracy based on the input heat energy to the welds.
Impact of tool wear on joint strength in friction stir spot welding of DP 980 steel
Miles, Michael; Ridges, Chris; Hovanski, Yuri; Peterson, Jeremy; Santella, M. L.; Steel, Russel
2011-09-14
Friction stir spot welding has been shown to be a viable method of joining ultra high strength steel (UHSS), both in terms of joint strength and process cycle time. However, the cost of tooling must be reasonable in order for this method to be adopted as an industrial process. Recently a new tool alloy has been developed, using a blend of PCBN and tungsten rhenium (W-Re) in order to improve the toughness of the tool. Wear testing results are presented for two of these alloys: one with a composition of 60% PCBN and 40% W-Re, and one with 70% PCBN and 30% W-Re. The sheet material used for all wear testing was 1.4 mm DP 980. Lap shear testing was used to show the relationship between tool wear and joint strength. The Q70 tool provided the best combination of wear resistance and joint strength.
Electric rate that shifts hourly may foretell spot-market kWh
Springer, N.
1985-11-25
Four California industrial plants have cut their electricity bills by up to 16% by shifting from traditional time-of-use rates to an experimental real-time pricing (RTP) program that varies prices hourly. The users receive a price schedule reflecting changing generating costs one day in advance, to encourage them to increase power consumption during the cheapest time periods. Savings during the pilot program ranged between $11,000 and $32,000 per customer. The hourly cost breakdown encourages consumption during the night and early morning. The signalling system could be expanded to cogenerators and independent small power producers. If an electricity spot market develops, forecasters think exchanges could eventually list contracts for future delivery of electricity.
Ultrasonic Spot Welding of AZ31B to Galvanized Mild Steel
Pan, Dr. Tsung-Yu; Franklin, Teresa; Pan, Professor Jwo; Brown, Elliot; Santella, Michael L
2010-01-01
Ultrasonic spot welds were made between sheets of 0.8-mm-thick hot-dip-galvanized mild steel and 1.6-mm-thick AZ31B-H24. Lap-shear strengths of 3.0-4.2 kN were achieved with weld times of 0.3-1.2 s. The failure to achieve strong bonding of joints where the Zn coating was removed from the steel surface indicates that Zn is essential to the bonding mechanism. Microstructure characterization and microchemical analysis indicated that temperatures at the AZ31-steel interfaces reached at least 344 C in less than 0.3 s. The elevated temperature conditions promoted annealing of the AZ31-H24 metal and chemical reactions between it and the Zn coating.
Crystal structure of Spot 14, a modulator of fatty acid synthesis
Colbert, Christopher L.; Kim, Chai-Wan; Moon, Young-Ah; Henry, Lisa; Palnitkar, Maya; McKean, William B.; Fitzgerald, Kevin; Deisenhofer, Johann; Horton, Jay D.; Kwon, Hyock Joo
2011-09-06
Spot 14 (S14) is a protein that is abundantly expressed in lipogenic tissues and is regulated in a manner similar to other enzymes involved in fatty acid synthesis. Deletion of S14 in mice decreased lipid synthesis in lactating mammary tissue, but the mechanism of S14's action is unknown. Here we present the crystal structure of S14 to 2.65 {angstrom} and biochemical data showing that S14 can form heterodimers with MIG12. MIG12 modulates fatty acid synthesis by inducing the polymerization and activity of acetyl-CoA carboxylase, the first committed enzymatic reaction in the fatty acid synthesis pathway. Coexpression of S14 and MIG12 leads to heterodimers and reduced acetyl-CoA carboxylase polymerization and activity. The structure of S14 suggests a mechanism whereby heterodimer formation with MIG12 attenuates the ability of MIG12 to activate ACC.
Landsat and SPOT data for oil exploration in North-Western China
Nishidai, Takashi
1996-07-01
Satellite remote sensing technology has been employed by Japex for many years to provide information for oil exploration programs. Since the beginning of the 1980s, regional geological interpretation through to advanced studies using satellite imagery with high spectral and spatial resolutions (such as Landsat TM and SPOT HRV) has been carried out, both for exploration programs and for scientific research. Advanced techniques (including analysis of airborne hyper-multispectral imaging sensor data) as well as conventional photogeological techniques were used throughout these programs. The first program using remote sensing technology in China focused on the Tarim Basin, Xinjiang Uygur Autonomous Region, and was carried out using Landsat MSS data. Landsat MSS imagery allows useful preliminary geological information to be gained about an area of interest prior to field studies. About 90 Landsat scenes cover the entire Xinjiang Uygur Autonomous Region, which allowed us to give comprehensive overviews of three hydrocarbon-bearing basins (Tarim, Junggar, and Turpan-Hami) in NW China. The overviews were based on interpretations and assessments of the satellite imagery and on a synthesis of the most up-to-date accessible geological and geophysical data, as well as some field work. Pairs of stereoscopic SPOT HRV images were used to generate digital elevation data with a 40 m grid covering part of the Tarim Basin. Topographic contour maps created from these digital elevation data, at scales of 1:250,000 and 1:100,000 with contour intervals of 100 m and 50 m, allowed us to make precise geological interpretations and to carry out swift and efficient geological field work. Satellite imagery was also utilized to make medium- to large-scale image maps, not only to interpret geological features but also to support field workers and seismic survey field operations.
Heat-affected zone liquation crack on resistance spot welded TWIP steels
Saha, Dulal Chandra [Department of Advanced Materials Engineering, Dong-Eui University, 995 Eomgwangno, Busanjin-gu, Busan 614-714 (Korea, Republic of); Chang, InSung [Automotive Production Development Division, Hyundai Motor Company (Korea, Republic of); Park, Yeong-Do, E-mail: ypark@deu.ac.kr [Department of Advanced Materials Engineering, Dong-Eui University, 995 Eomgwangno, Busanjin-gu, Busan 614-714 (Korea, Republic of)
2014-07-01
In this study, the heat affected zone (HAZ) liquation cracking and segregation behavior of resistance spot welded twinning-induced plasticity (TWIP) steel are reported. Cracks appeared in the post-welded joints, originating at the partially melted zone (PMZ) and propagating from the PMZ through the HAZ to the base metal (BM). The crack length and crack opening width were observed to increase with heat input, and the welding current was identified as the parameter most influencing crack formation. Cracks appeared at the PMZ when the nugget diameter reached 4.50 mm or above, and the liquation cracks were found to occur along two sides of the notch tip in the sheet direction rather than in the electrode direction. Cracks were backfilled with liquid films, which have a lamellar structure and are presumed to be the eutectic constituent. Co-segregation of alloy elements such as C and Mn was detected in the liquid films by electron-probe microanalysis (EPMA) line scanning and element mapping, which suggests that the liquid film was enriched in Mn and C. The eutectic constituent was identified by analyzing the calculated phase diagram along with the thermal history from finite element simulation. Preliminary experimental results showed that the cracks have little or no significant effect on the static cross-tensile strength (CTS) and the tensile-shear strength (TSS). In addition, possible ways to avoid cracking are discussed. - Highlights: The HAZ liquation crack during resistance spot welding of TWIP steel was examined. Cracks were completely backfilled and healed with divorced eutectic secondary phase. Co-segregation of C and Mn was detected in the cracked zone. Heat input was the most influencing factor in initiating liquation cracks. Cracks have little or no significant effect on static tensile properties.
Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo.
Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
2009-10-01
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
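The core of a rejection-free kinetic Monte Carlo loop (the event-selection and clock-advance scheme that codes in this family build on) can be sketched in a few lines; this is a generic illustration, not SPPARKS code, and the rates are hypothetical:

```python
import math
import random

def kmc_step(rates, rng=random):
    """One rejection-free kinetic Monte Carlo step (BKL/Gillespie style).

    rates: list of event rates r_i >= 0 for the currently possible events.
    Returns (chosen_event_index, time_increment)."""
    total = sum(rates)
    # Pick event i with probability r_i / total.
    u = rng.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if u < acc:
            break
    # Advance the simulation clock by an exponentially distributed
    # waiting time with mean 1 / total.
    dt = -math.log(1.0 - rng.random()) / total
    return i, dt
```

In a parallel setting the hard part, which SPPARKS addresses, is that events on neighboring processors can conflict, so domains must be partitioned and synchronized rather than stepped independently as above.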
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.
1999-02-09
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.
A Proposal for a Standard Interface Between Monte Carlo Tools And One-Loop Programs
Binoth, T.; Boudjema, F.; Dissertori, G.; Lazopoulos, A.; Denner, A.; Dittmaier, S.; Frederix, R.; Greiner, N.; Hoeche, Stefan; Giele, W.; Skands, P.; Winter, J.; Gleisberg, T.; Archibald, J.; Heinrich, G.; Krauss, F.; Maitre, D.; Huber, M.; Huston, J.; Kauer, N.; Maltoni, F.
2011-11-11
Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently been undergoing enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarises the discussion of the next-to-leading order multi-leg (NLM) working group on this issue which has been taking place during the workshop on Physics at TeV Colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs.
Zhang, Y; Giebeler, A; Mascia, A; Piskulich, F; Perles, L; Lepage, R; Dong, L
2014-06-01
Purpose: To quantitatively evaluate the dosimetric consequences of spot size variations and validate beam-matching criteria for commissioning a pencil beam model for multiple treatment rooms. Methods: A planning study was first conducted by simulating spot size variations to systematically evaluate their dosimetric impact in selected cases, which was used to establish the in-air spot size tolerance for the beam-matching specifications. A beam model in the treatment planning system was created using in-air spot profiles acquired in one treatment room. These spot profiles were also acquired from another treatment room for assessing the actual spot size variations between the two treatment rooms. We created twenty-five test plans with targets of different sizes at different depths, and performed dose measurements along the entrance, proximal, and distal target regions. The absolute doses at those locations were measured using ionization chambers in both treatment rooms and were compared against the doses calculated by the beam model. Fifteen additional patient plans were also measured and included in our validation. Results: The beam model is relatively insensitive to spot size variations. With an average of less than 15% measured in-air spot size variation between the two treatment rooms, the average dose difference was −0.15% with a standard deviation of 0.40% for 55 measurement points within the target region; but the differences increased to 1.4% ± 1.1% in the entrance regions, which are more affected by in-air spot size variations. Overall, our single-room based beam model in the treatment planning system agreed with measurements in both rooms to within 0.5% in the target region. For the fifteen patient cases, the agreement was within 1%. Conclusion: We have demonstrated that dosimetrically equivalent machines can be established when in-air spot size variations are within 15% between the two treatment rooms.
Advanced Mesh-Enabled Monte Carlo Capability for Multi-Physics Reactor Analysis
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24
This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well
An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis
William R. Martin; John C. Lee
2009-12-30
Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.
Alcouffe, R.E.
1985-01-01
A difficult class of problems for the discrete-ordinates neutral particle transport method is to accurately compute the flux due to a spatially localized source. Because the transport equation is solved for discrete directions, the so-called ray effect causes the flux at space points far from the source to be inaccurate. Thus, in general, discrete ordinates would not be the method of choice to solve such problems; it is better suited to calculating problems with significant scattering. The Monte Carlo method is suited to localized source problems, particularly if the amount of collisional interaction is minimal. However, if there are many scattering collisions and the flux at all space points is desired, then the Monte Carlo method becomes expensive. To take advantage of the attributes of both approaches, we have devised a first collision source method to combine the Monte Carlo and discrete-ordinates solutions. That is, particles are tracked from the source to their first scattering collision and tallied to produce a source for the discrete-ordinates calculation. A scattered flux is then computed by discrete ordinates, and the total flux is the sum of the Monte Carlo and discrete-ordinates calculated fluxes. In this paper, we present calculational results using the MCNP and TWODANT codes for selected two-dimensional problems that show the effectiveness of this method.
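The Monte Carlo half of the hybrid scheme, tallying first-collision sites to build a fixed source for the deterministic sweep, can be illustrated with a toy 1D slab; the geometry, cross section, and particle counts here are illustrative assumptions, not taken from the paper:

```python
import math
import random

def first_collision_source(sigma_t, width, n_cells, n_particles, rng=random):
    """Tally the first-collision source in a 1D slab [0, width].

    Particles start at x = 0 moving in the +x direction through a uniform
    medium with total cross section sigma_t (1/cm).  Each particle's first
    collision site is sampled from the exponential free-flight distribution.
    The returned tally (first collisions per cell, per source particle)
    is what would feed a discrete-ordinates solver as its fixed source,
    while the uncollided flux is kept analytically/from Monte Carlo."""
    dx = width / n_cells
    tally = [0.0] * n_cells
    for _ in range(n_particles):
        # Distance to first collision ~ Exp(sigma_t).
        x = -math.log(1.0 - rng.random()) / sigma_t
        if x < width:                       # otherwise the particle leaks out
            tally[int(x // dx)] += 1.0 / n_particles
    return tally
```

Because the source has collided at least once before the deterministic calculation sees it, it is spatially distributed, which is what suppresses the ray effect in the discrete-ordinates step.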
Use of single scatter electron monte carlo transport for medical radiation sciences
Svatos, Michelle M.
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
3D Direct Simulation Monte Carlo Code Which Solves for Geometries
Energy Science and Technology Software Center (OSTI)
1998-01-13
Pegasus is a 3D Direct Simulation Monte Carlo Code which solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.
MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation
Meyer, Arnd
2010-02-10
A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.
K-effective of the world: and other concerns for Monte Carlo Eigenvalue calculations
Brown, Forrest B
2010-01-01
Monte Carlo methods have been used to compute k{sub eff} and the fundamental mode eigenfunction of critical systems since the 1950s. Despite the sophistication of today's Monte Carlo codes for representing realistic geometry and physics interactions, correct results can be obtained in criticality problems only if users pay attention to source convergence in the Monte Carlo iterations and to running a sufficient number of neutron histories to adequately sample all significant regions of the problem. Recommended best practices for criticality calculations are reviewed and applied to several practical problems for nuclear reactors and criticality safety, including the 'K-effective of the World' problem. Numerical results illustrate the concerns about convergence and bias. The general conclusion is that with today's high-performance computers, improved understanding of the theory, new tools for diagnosing convergence (e.g., Shannon entropy of the fission distribution), and clear practical guidance for performing calculations, practitioners will have a greater degree of confidence than ever of obtaining correct results for Monte Carlo criticality calculations.
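The Shannon entropy diagnostic mentioned above is simple to compute from the binned fission-source distribution; the sketch below uses an arbitrary spatial binning for illustration:

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a binned fission-source distribution.

    counts: fission-site counts per spatial mesh bin for one cycle.
    Plotting H over successive cycles and waiting for a plateau is the
    standard check that the source has converged before active cycles."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                     # 0 * log(0) -> 0 by convention
    return float(-(p * np.log2(p)).sum())
```

A source concentrated in one bin gives H = 0; a source spread uniformly over N bins gives the maximum H = log2(N), so drifting entropy signals an unconverged source.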
Self-evolving atomistic kinetic Monte Carlo simulations of defects in materials
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Xu, Haixuan; Beland, Laurent K.; Stoller, Roger E.; Osetskiy, Yury N.
2015-01-29
The recent development of on-the-fly atomistic kinetic Monte Carlo methods has led to increased attention on these methods and their corresponding capabilities and applications. In this review, the framework and current status of Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC) are discussed. SEAKMC focuses particularly on defect interaction and evolution with atomistic detail, without assuming potential defect migration/interaction mechanisms and energies. The strengths and limitations of using an active volume, the key concept introduced in SEAKMC, are discussed. Potential criteria for characterizing an active volume are discussed, and the influence of active volume size on saddle point energies is illustrated. A procedure starting with a small active volume followed by larger active volumes was found to possess higher efficiency. Applications of SEAKMC, ranging from point defect diffusion, to complex interstitial cluster evolution, to helium interaction with tungsten surfaces, are summarized. A comparison of SEAKMC with molecular dynamics and conventional object kinetic Monte Carlo is demonstrated. Overall, SEAKMC is found to be complementary to conventional molecular dynamics, especially when the harmonic approximation of transition state theory is accurate; however, it is capable of reaching longer time scales than molecular dynamics, and it can be used to systematically increase the accuracy of other methods such as object kinetic Monte Carlo. Furthermore, the challenges and potential development directions are also outlined.
Green's function Monte Carlo calculation for the ground state of helium trimers
Cabral, F.; Kalos, M.H.
1981-02-01
The ground state energy of weakly bound boson trimers interacting via Lennard-Jones (12,6) pair potentials is calculated using a Monte Carlo Green's Function Method. Threshold coupling constants for self binding are obtained by extrapolation to zero binding.
The effects of mapping CT images to Monte Carlo materials on GEANT4 proton simulation accuracy
Barnes, Samuel; McAuley, Grant; Slater, James; Wroe, Andrew
2013-04-15
Purpose: Monte Carlo simulations of radiation therapy require conversion from Hounsfield units (HU) in CT images to an exact tissue composition and density. The number of discrete densities (or density bins) used in this mapping affects the simulation accuracy, execution time, and memory usage in GEANT4 and other Monte Carlo codes. The relationship between the number of density bins and CT noise was examined in general for all simulations that use HU conversion to density. Additionally, the effect of this on simulation accuracy was examined for proton radiation. Methods: Relative uncertainty from CT noise was compared with uncertainty from density binning to determine an upper limit on the number of density bins required in the presence of CT noise. Error propagation analysis was also performed on continuously slowing down approximation range calculations to determine the proton range uncertainty caused by density binning. These results were verified with Monte Carlo simulations. Results: In the presence of even modest CT noise (5 HU or 0.5%), 450 density bins were found to cause only a 5% increase in the density uncertainty (i.e., 95% of density uncertainty from CT noise, 5% from binning). Larger numbers of density bins are not required, as CT noise will prevent increased density accuracy; this applies across all types of Monte Carlo simulations. Examining uncertainty in proton range, only 127 density bins are required for a proton range error of <0.1 mm in most tissue and <0.5 mm in low density tissue (e.g., lung). Conclusions: By considering CT noise and actual range uncertainty, the number of required density bins can be restricted to a very modest 127, depending on the application. Reducing the number of density bins provides large memory and execution time savings in GEANT4 and other Monte Carlo packages.
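As an illustration of the density-binning step the abstract analyzes, here is a hedged Python sketch of an HU-to-binned-density lookup. The single linear calibration segment and its endpoints are hypothetical placeholders, not the paper's calibration curve:

```python
import numpy as np

def hu_to_binned_density(hu, n_bins=127, hu_range=(-1000.0, 1600.0),
                         rho_range=(0.001, 2.0)):
    """Map Hounsfield units to mass density (g/cm^3) and quantize into
    n_bins discrete densities, as discussed above. The linear calibration
    and its endpoints are illustrative assumptions only."""
    hu = np.clip(np.asarray(hu, dtype=float), *hu_range)
    frac = (hu - hu_range[0]) / (hu_range[1] - hu_range[0])
    rho = rho_range[0] + frac * (rho_range[1] - rho_range[0])
    # quantize to the centers of n_bins equal-width density bins
    edges = np.linspace(rho_range[0], rho_range[1], n_bins + 1)
    idx = np.clip(np.digitize(rho, edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[idx]
```

Each distinct returned value would correspond to one material definition in the transport code, which is where the memory saving from fewer bins comes from.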
High explosive spot test analyses of samples from Operable Unit (OU) 1111
McRae, D.; Haywood, W.; Powell, J.; Harris, B.
1995-01-01
A preliminary evaluation has been completed of environmental contaminants at selected sites within the Group DX-10 (formerly Group M-7) area. Soil samples taken from specific locations at this detonator facility were analyzed for harmful metals and screened for explosives. A sanitary outflow, a burn pit, a pentaerythritol tetranitrate (PETN) production outflow field, an active firing chamber, an inactive firing chamber, and a leach field were sampled. Energy dispersive x-ray fluorescence (EDXRF) was used to obtain semi-quantitative concentrations of metals in the soil. Two field spot-test kits for explosives were used to assess the presence of energetic materials in the soil and in items found at the areas tested. PETN is the major explosive in detonators manufactured and destroyed at Los Alamos. No measurable amounts of PETN or other explosives were detected in the soil, but items taken from the burn area and a high-energy explosive (HE)/chemical sump were contaminated. The concentrations of lead, mercury, and uranium are given.
Statistical Analysis of Microarray Data with Replicated Spots: A Case Study with Synechococcus WH8102
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Thomas, E. V.; Phillippy, K. H.; Brahamsha, B.; Haaland, D. M.; Timlin, J. A.; Elbourne, L. D. H.; Palenik, B.; Paulsen, I. T.
2009-01-01
Until recently microarray experiments often involved relatively few arrays with only a single representation of each gene on each array. A complete genome microarray with multiple spots per gene (spread out spatially across the array) was developed in order to compare the gene expression of a marine cyanobacterium and a knockout mutant strain in a defined artificial seawater medium. Statistical methods were developed for analysis in the special situation of this case study where there is gene replication within an array and where relatively few arrays are used, which can be the case with current array technology. Due in part to the replication within an array, it was possible to detect very small changes in the levels of expression between the wild type and mutant strains. One interesting biological outcome of this experiment is the indication of the extent to which the phosphorus regulatory system of this cyanobacterium affects the expression of multiple genes beyond those strictly involved in phosphorus acquisition.
DIESEL TRUCK IDLING EMISSIONS - MEASUREMENTS AT A PM2.5 HOT SPOT
Parks, James E., II; Miller, Terry L.; Storey, John Morse; Fu, Joshua S.; Hromis, Boris
2007-01-01
The University of Tennessee and Oak Ridge National Laboratory conducted a 5-month long air monitoring study at the Watt Road interchange on I-40 in Knoxville, Tennessee, where 20,000 heavy-duty trucks per day travel the interstate. In addition, there are 3 large truck stops at this interchange where as many as 400 trucks idle their engines at night. As a result, high levels of PM2.5 were measured near the interchange, often exceeding National Ambient Air Quality Standards. This paper presents the results of the air monitoring study, illustrating the hourly, day-of-week, and seasonal patterns of PM2.5 resulting from diesel truck emissions on the interstate and at the truck stops. Surprisingly, the highest PM2.5 concentrations occurred at night, when the largest contribution of emissions was from idling trucks rather than trucks on the interstate. A nearby background air monitoring site was used to identify the contribution of regional PM2.5 emissions, which also contribute significantly to the concentrations measured at the site. The relative contributions of regional background, local truck idling, and trucks on the interstate to local PM2.5 concentrations are presented and discussed in the paper. The results indicate the potential significance of diesel truck idling emissions to the occurrence of hot-spots of high PM2.5 concentrations near large truck stops, ports, or border crossings.
A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems
Keady, K P; Brantley, P
2010-03-04
Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model).
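The implicit-capture (survival-biasing) technique named in the abstract above can be illustrated in a few lines. This sketch, with hypothetical cross sections and a standard Russian-roulette cutoff, is not taken from any of the cited codes:

```python
import random

def implicit_capture_step(weight, sigma_a, sigma_t, w_cutoff=0.01, rng=random):
    """One collision under survival biasing: instead of killing the particle
    with probability sigma_a/sigma_t, scale its weight by the survival
    probability; then play Russian roulette on low-weight particles so the
    population stays unbiased while avoiding long low-weight histories."""
    weight *= 1.0 - sigma_a / sigma_t  # expected surviving weight
    if weight < w_cutoff:
        if rng.random() < 0.5:
            return 0.0             # rouletted away
        weight *= 2.0              # survivor carries the lost weight
    return weight
```

Weight windows generalize this idea: the deterministic flux estimate sets position-dependent bounds within which particle weights are split or rouletted.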
Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics.
Seker, V.; Thomas, J. W.; Downar, T. J.; Purdue Univ.
2007-01-01
A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in the FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density, and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code DeCART. Good agreement in the k{sub eff} and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high fidelity modeling of nuclear reactor cores during the last several years. High fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport methods to CFD and their application to practical reactor analysis problems. One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic neutron transport calculations.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Dorrer, C.; Consentino, A.; Irwin, D.
2016-05-18
Characterizing the prepulse temporal contrast of optical pulses is required to understand their interaction with matter. Light with relatively low intensity can interact with the target before the main high-intensity pulse. Estimating the intensity contrast, instead of the spatially averaged power contrast, is important to understand intensity-dependent laser–matter interactions. A direct optical approach to determining the on-shot intensity of the incoherent pedestal on an aberrated high-intensity laser system is presented. The spatially resolved focal spot of the incoherent pedestal preceding the main coherent pulse and the intensity contrast are calculated using experimental data. Furthermore, this technique is experimentally validated on one of the chirped pulse amplification beamlines of the OMEGA EP Laser System. The intensity contrast of a 1-kJ, 10-ps laser pulse is shown to be ~10× higher than the power contrast because of the larger spatial extent of the incoherent focal spot relative to the coherent focal spot.
MacGregor, P.R.
1989-01-01
The National Energy Act, in general, and Section 210 of the Public Utilities Regulatory Policies Act (PURPA) of 1978 in particular, have dramatically stimulated increasing levels of independent non-utility power generation. As these levels of independent non-utility power generation increase, the electric utility is subjected to new and significant operational and financial impacts. One important concern is the net revenue impact on the utility, which is the focus of the research discussed in this thesis and which is inextricably intertwined with the operational functions of the utility system. In general, non-utility generation, and specifically cogeneration, impacts utility revenues by affecting the structure and magnitude of the system load, the scheduling of utility generation, and the reliability of the composite system. These effects are examined by developing a comprehensive model of non-utility independent power producing facilities, referred to as Small Power Producing Facilities; a cash-flow-based corporate model of the electric utility; a thermal-plant-based generation scheduling algorithm; and a system reliability evaluation. All of these components are integrated into an iterative closed-loop solution algorithm to both assess and enhance the net revenue. In this solution algorithm, the spot pricing policy of the utility is the principal control mechanism in the process and the system reliability is the primary procedural constraint. A key issue in reducing the negative financial impact of non-utility generation is the possibility of shutting down utility generation units given sufficient magnitudes of non-utility generation in the system. A case study simulating the financial and system operations of the Georgia Power Company with representative cogeneration capacity and individual plant characteristics is analyzed in order to demonstrate the solution process.
Bauge, E.
2015-01-15
The “Full model” evaluation process, which is used in CEA DAM DIF to evaluate nuclear data in the continuum region, makes extended use of nuclear models implemented in the TALYS code to account for experimental data (both differential and integral) by varying the parameters of these models until a satisfactory description of these experimental data is reached. For the evaluation of the covariance data associated with this evaluated data, the Backward-forward Monte Carlo (BFMC) method was devised in such a way that it mirrors the process of the “Full model” evaluation method. When coupled with the Total Monte Carlo method via the T6 system developed by NRG Petten, the BFMC method makes it possible to use integral experiments to constrain the distribution of model parameters, and hence the distribution of derived observables and their covariance matrix. Together, TALYS, TMC, BFMC, and T6 constitute a powerful integrated tool for nuclear data evaluation that allows for evaluation of nuclear data and the associated covariance matrix, all at once, making good use of all the available experimental information to drive the distribution of the model parameters and the derived observables.
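As a rough illustration of the backward (weighting) step of a BFMC-style scheme, the fragment below weights sampled model-parameter sets by their chi-squared agreement with an experiment. It sketches the idea only; it is not the CEA/NRG implementation, and the Gaussian likelihood is an assumption:

```python
import numpy as np

def bfmc_weights(model_obs, exp_val, exp_unc):
    """Weight each sampled parameter set by exp(-chi2/2) against experimental
    values, so that weighted averages over the samples (and the resulting
    covariances of derived observables) are driven by the experimental
    constraint, as described above."""
    model_obs = np.atleast_2d(np.asarray(model_obs, dtype=float))
    chi2 = (((model_obs - exp_val) / exp_unc) ** 2).sum(axis=1)
    w = np.exp(-0.5 * (chi2 - chi2.min()))  # shift by the minimum for stability
    return w / w.sum()
```

Weighted means and covariances computed with these weights then play the role of the evaluated central values and covariance matrix.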
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Beland, Laurent Karim; Osetskiy, Yury N.; Stoller, Roger E.; Xu, Haixuan
2015-02-07
Here, we present a comparison of the Kinetic Activation–Relaxation Technique (k-ART) and the Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC), two off-lattice, on-the-fly Kinetic Monte Carlo (KMC) techniques that were recently used to solve several materials science problems. We show that if the initial displacements are localized, the dimer method and the Activation–Relaxation Technique nouveau provide similar performance. We also show that k-ART and SEAKMC, although based on different approximations, are in agreement with each other, as demonstrated by the examples of 50 vacancies in a 1950-atom Fe box and of interstitial loops in 16,000-atom boxes. Generally speaking, k-ART's treatment of geometry and flickers is more flexible (e.g., it can handle amorphous systems) and more rigorous than SEAKMC's, while the latter's concept of active volumes permits a significant speedup of simulations for the systems under consideration and therefore allows investigation of processes requiring large systems that are not accessible without localizing calculations.
Numerical thermalization in particle-in-cell simulations with Monte-Carlo collisions
Lai, P. Y.; Lin, T. Y.; Lin-Liu, Y. R.; Chen, S. H.
2014-12-15
Numerical thermalization in collisional one-dimensional (1D) electrostatic (ES) particle-in-cell (PIC) simulations was investigated. Two collision models, the pitch-angle scattering of electrons by the stationary ion background and large-angle collisions between the electrons and the neutral background, were included in the PIC simulation using Monte-Carlo methods. The numerical results show that the thermalization times in both models were considerably reduced by the additional Monte-Carlo collisions, as demonstrated by comparisons with Turner's previous simulation results based on a head-on collision model [M. M. Turner, Phys. Plasmas 13, 033506 (2006)]. However, the breakdown of Dawson's scaling law in the collisional 1D ES PIC simulation is more complicated than that observed by Turner, and a revised scaling law for the numerical thermalization time in terms of the numerical parameters is derived on the basis of the simulation results obtained in this study.
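A minimal sketch of a Monte-Carlo pitch-angle scattering kick of the kind used in such PIC simulations follows; the Gaussian step model and reflecting boundary are illustrative assumptions, not the specific collision operators of this paper:

```python
import math
import random

def scatter_mu(mu, dmu_rms, rng=random):
    """Perturb a direction cosine mu by a small Gaussian kick and reflect at
    the mu = +/-1 boundaries so the result remains a valid cosine. Applied
    once per particle per collision step in a PIC loop."""
    mu += rng.gauss(0.0, dmu_rms)
    while abs(mu) > 1.0:
        mu = math.copysign(2.0, mu) - mu  # reflect off the boundary
    return mu
```

Because the kick preserves the particle's speed and only redirects it, the operator heats no energy into the plasma by construction.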
Turrell, A.E.; Sherlock, M.; Rose, S.J.
2015-10-15
Large-angle Coulomb collisions allow for the exchange of a significant proportion of the energy of a particle in a single collision, but are not included in models of plasmas based on fluids, the Vlasov–Fokker–Planck equation, or currently available plasma Monte Carlo techniques. Their unique effects include the creation of fast ‘knock-on’ ions, which may be more likely to undergo certain reactions, and distortions to ion distribution functions relative to what is predicted by small-angle collision only theories. We present a computational method which uses Monte Carlo techniques to include the effects of large-angle Coulomb collisions in plasmas and which self-consistently evolves distribution functions according to the creation of knock-on ions of any generation. The method is used to demonstrate ion distribution function distortions in an inertial confinement fusion (ICF) relevant scenario of the slowing of fusion products.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
Tringe, J. W.; Ileri, N.; Levie, H. W.; Stroeve, P.; Ustach, V.; Faller, R.; Renaud, P.
2015-08-01
We use Molecular Dynamics and Monte Carlo simulations to examine molecular transport phenomena in nanochannels, explaining the four orders of magnitude difference in wheat germ agglutinin (WGA) protein diffusion rates observed by fluorescence correlation spectroscopy (FCS) and by direct imaging of fluorescently-labeled proteins. We first use the ESPResSo Molecular Dynamics code to estimate the surface transport distance for neutral and charged proteins. We then employ a Monte Carlo model to calculate the paths of protein molecules on surfaces and in the bulk liquid transport medium. Our results show that the transport characteristics depend strongly on the degree of molecular surface coverage. Atomic force microscope characterization of surfaces exposed to WGA proteins for 1000 s shows large protein aggregates consistent with the predicted coverage. These calculations and experiments provide useful insight into the details of molecular motion in confined geometries.
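A toy Monte Carlo of the surface/bulk transport picture described above can be sketched as follows; every parameter is an illustrative placeholder and the forward-only hop is a deliberate simplification, not the authors' model:

```python
import random

def channel_transit_steps(channel_len=50.0, p_adsorb=0.3, p_desorb=0.05,
                          bulk_step=1.0, surface_step=0.1, seed=0):
    """Toy Monte Carlo of a single molecule traversing a nanochannel,
    switching between fast bulk diffusion and slow surface-bound hops.
    Returns the number of steps needed to exit the channel."""
    rng = random.Random(seed)
    x, steps, on_surface = 0.0, 0, False
    while x < channel_len:
        steps += 1
        if on_surface and rng.random() < p_desorb:
            on_surface = False           # desorb back into the bulk
        elif not on_surface and rng.random() < p_adsorb:
            on_surface = True            # stick to the channel wall
        step = surface_step if on_surface else bulk_step
        x += step * abs(rng.gauss(0.0, 1.0))  # forward-biased hop (toy)
    return steps
```

Raising the adsorption probability lengthens the transit dramatically, which is the qualitative mechanism behind the orders-of-magnitude spread in apparent diffusion rates.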
Miura, Shinichi [Institute for Molecular Science, 38 Myodaiji, Okazaki 444-8585 (Japan)
2007-03-21
In this paper, we present a path integral hybrid Monte Carlo (PIHMC) method for rotating molecules in quantum fluids. This is an extension of our PIHMC for correlated Bose fluids [S. Miura and J. Tanaka, J. Chem. Phys. 120, 2160 (2004)] to handle the molecular rotation quantum mechanically. A novel technique referred to be an effective potential of quantum rotation is introduced to incorporate the rotational degree of freedom in the path integral molecular dynamics or hybrid Monte Carlo algorithm. For a permutation move to satisfy Bose statistics, we devise a multilevel Metropolis method combined with a configurational-bias technique for efficiently sampling the permutation and the associated atomic coordinates. Then, we have applied the PIHMC to a helium-4 cluster doped with a carbonyl sulfide molecule. The effects of the quantum rotation on the solvation structure and energetics were examined. Translational and rotational fluctuations of the dopant in the superfluid cluster were also analyzed.
Willert, Jeffrey; Park, H.
2014-11-01
In this article we explore the possibility of replacing Standard Monte Carlo (SMC) transport sweeps within a Moment-Based Accelerated Thermal Radiative Transfer (TRT) algorithm with a Residual Monte Carlo (RMC) formulation. Previous Moment-Based Accelerated TRT implementations have encountered trouble when stochastic noise from SMC transport sweeps accumulates over several iterations and pollutes the low-order system. With RMC we hope to significantly lower the build-up of statistical error at a much lower cost. First, we display encouraging results for a zero-dimensional test problem. Then, we demonstrate that we can achieve a lower degree of error in two one-dimensional test problems by employing an RMC transport sweep with multiple orders of magnitude fewer particles per sweep. We find that by reformulating the high-order problem, we can compute more accurate solutions at a fraction of the cost.
Non-covalent Bonding in Complex Molecular Systems with Quantum Monte Carlo
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Argonne Leadership Computing Facility. A snapshot of a liquid water simulation performed with DFT, in a box including 64 molecules. By performing benchmark QMC calculations on snapshots of this type, researchers are able to ascertain DFT errors. Credit: Dario Alfè, University College London. PI Name: Dario Alfè; PI Email: d.alfe@ucl.ac.uk; Institution: University College London; Allocation Program: INCITE
Energetic Aspects of CO2 Absorption by Ionic Liquids from Quantum Monte Carlo
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Argonne Leadership Computing Facility. The main image shows a projection of the energetic landscape of a CO2 molecule (oriented along the X-axis) from a high-dimensional space of QMC random walks into real space. Inset: a matching two-dimensional slice containing the linear CO2 molecule; blue colors correspond to the nuclear regions where electrons experience a strong attractive potential. PI: William Lester, Jr.
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
Fully Differential Monte-Carlo Generator Dedicated to TMDs and Bessel-Weighted Asymmetries
Aghasyan, Mher M.; Avakian, Harut A.
2013-10-01
We present studies of double longitudinal spin asymmetries in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator, which includes quark intrinsic transverse momentum within the generalized parton model based on the fully differential cross section for the process. Additionally, we apply Bessel-weighting to the simulated events to extract transverse momentum dependent parton distribution functions and also discuss possible uncertainties due to kinematic correlation effects.
Pérez-Andújar, Angélica [Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 (United States)]; Zhang, Rui; Newhauser, Wayne [Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and The University of Texas Graduate School of Biomedical Sciences at Houston, 6767 Bertner Avenue, Houston, Texas 77030 (United States)]
2013-12-15
Purpose: Stray neutron radiation is of concern after radiation therapy, especially in children, because of the high risk it might carry for secondary cancers. Several previous studies predicted the stray neutron exposure from proton therapy, mostly using Monte Carlo simulations. Promising attempts to develop analytical models have also been reported, but these were limited to only a few proton beam energies. The purpose of this study was to develop an analytical model to predict leakage neutron equivalent dose from passively scattered proton beams in the 100-250 MeV interval. Methods: To develop and validate the analytical model, the authors used values of equivalent dose per therapeutic absorbed dose (H/D) predicted with Monte Carlo simulations. The authors also characterized the behavior of the mean neutron radiation-weighting factor, w{sub R}, as a function of depth in a water phantom and distance from the beam central axis. Results: The simulated and analytical predictions agreed well. On average, the percentage difference between the analytical model and the Monte Carlo simulations was 10% for the energies and positions studied. The authors found that w{sub R} was highest at the shallowest depth and decreased with depth until around 10 cm, where it started to increase slowly with depth. This was consistent among all energies. Conclusion: Simple analytical methods are promising alternatives to complex and slow Monte Carlo simulations for predicting H/D values. The authors' results also provide improved understanding of the behavior of w{sub R}, which strongly depends on depth but is nearly independent of lateral distance from the beam central axis.
Particle-In-Cell/Monte Carlo Simulation of Ion Back Bombardment in Photoinjectors
Qiang, Ji; Corlett, John; Staples, John
2009-03-02
In this paper, we report on studies of ion back bombardment in high average current dc and rf photoinjectors using a particle-in-cell/Monte Carlo method. Using the H{sub 2} ion as an example, we observed that the ion density and energy deposition on the photocathode in rf guns are an order of magnitude lower than those in a dc gun. A higher rf frequency helps mitigate ion back bombardment of the cathode in rf guns.
Fullrmc, A Rigid Body Reverse Monte Carlo Modeling Package Enabled With
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Machine Learning And Artificial Intelligence - Joint Center for Energy Storage Research, January 22, 2016, Research Highlights. Fullrmc, A Rigid Body Reverse Monte Carlo Modeling Package Enabled With Machine Learning And Artificial Intelligence. Scientific Achievement: a novel approach to reverse modeling of atomic and molecular systems from a set of experimental data and constraints, introducing new fitting concepts such as 'Group'; demonstrated on liquid sulfur, where Sx≤8 molecules are recognized and built upon during modeling.
Application of Diffusion Monte Carlo to Materials Dominated by van der Waals Interactions
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Benali, Anouar; Shulenburger, Luke; Romero, Nichols A.; Kim, Jeongnim; von Lilienfeld, O. Anatole
2014-06-12
Van der Waals forces are notoriously difficult to account for from first principles. We perform extensive calculations to assess the usefulness and validity of diffusion quantum Monte Carlo when applied to van der Waals forces. We present results for noble gas solids and clusters, archetypical van der Waals dominated assemblies, as well as a relevant pi-pi stacking supramolecular complex: DNA with the intercalating anti-cancer drug Ellipticine.
SU-E-J-144: Low Activity Studies of Carbon 11 Activation Via GATE Monte Carlo
Elmekawy, A; Ewell, L; Butuceanu, C; Qu, L
2015-06-15
Purpose: To investigate the behavior of a Monte Carlo simulation code with low levels of activity (∼1,000 Bq). Such activity levels are expected from phantoms and patients activated via a proton therapy beam. Methods: Three different ranges for a therapeutic proton radiation beam were examined in a Monte Carlo simulation code: 13.5, 17.0, and 21.0 cm. For each range, the decay of an equivalent-length {sup 11}C source and additional sources of length plus or minus one cm was studied in a benchmark PET simulation for activities of 1000, 2000, and 3000 Bq. The ranges were chosen to coincide with a previous activation study, and the activities were chosen to coincide with the approximate level of isotope creation expected in a phantom or patient irradiated by a therapeutic proton beam. The GATE 7.0 simulation was completed on a cluster node running Scientific Linux 6 (Carbon, Red Hat©). The resulting Monte Carlo data were investigated with the ROOT (CERN) analysis tool. The half-life of {sup 11}C was extracted via a histogram fit to the number of simulated PET events vs. time. Results: The average slope of the deviation of the extracted carbon half-life from the expected/nominal value vs. activity was generally positive. This was unexpected, as the deviation should, in principle, decrease with increased activity and lower statistical uncertainty. Conclusion: For activity levels on the order of 1,000 Bq, the behavior of a benchmark PET test was somewhat unexpected. It is important to be aware of the limitations of low-activity PET images and low-activity Monte Carlo simulations. This work was funded in part by the Philips Corporation.
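The half-life extraction step can be illustrated without GATE or ROOT: the sketch below draws exponential decay times directly and recovers the half-life by maximum likelihood rather than a histogram fit. The nominal {sup 11}C half-life value and all names are illustrative assumptions.

```python
import math
import random

C11_HALF_LIFE_S = 20.36 * 60.0          # nominal 11C half-life, seconds (assumed)

def simulate_decay_times(n_events, half_life_s, seed=7):
    """Draw one exponential decay time per simulated 11C nucleus."""
    rng = random.Random(seed)
    tau = half_life_s / math.log(2)     # mean lifetime from half-life
    return [rng.expovariate(1.0 / tau) for _ in range(n_events)]

def extract_half_life(decay_times):
    """Maximum-likelihood estimate: the sample mean of the decay
    times is the lifetime tau-hat; convert back to a half-life."""
    tau_hat = sum(decay_times) / len(decay_times)
    return tau_hat * math.log(2)

# ~1,000 decays (the activity regime studied above) vs. high statistics:
low_stats = extract_half_life(simulate_decay_times(1000, C11_HALF_LIFE_S))
high_stats = extract_half_life(simulate_decay_times(100000, C11_HALF_LIFE_S))
```

At ~1,000 events the relative statistical error scales like 1/sqrt(N), a few percent, which is one reason low-activity extractions behave noisily.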
Posters Monte Carlo Simulation of Longwave Fluxes Through Broken Scattering Cloud Fields
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Posters: Monte Carlo Simulation of Longwave Fluxes Through Broken Scattering Cloud Fields. E. E. Takara and R. G. Ellingson, University of Maryland, College Park, Maryland. To simplify the analysis, we made several assumptions: the clouds were cuboidal; they were all identically sized and shaped; and they had constant optical properties. Results and Discussion: The model was run for a set of cloud fields with clouds of varying optical thickness and scattering albedo. The predicted effective cloud
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
Hall, Clifford; School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030 ; Ji, Weixiao; Blaisten-Barojas, Estela; School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030
2014-02-01
We present a CPU-GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as a container for simulation data stored on the graphics card and as a floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables, for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU-GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU-GPU duets. Highlights: • We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU-GPU duet. • The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU-GPU implementation. • Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. • The testbed involves a polymeric system of oligopyrroles in the condensed phase. • The CPU-GPU parallelization includes dipole-dipole and Mie-Jones classic potentials.
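A plain CPU-only sketch of the MMC kernel that the paper offloads to the GPU, applied to a 1D harmonic test potential (the memory-object and threading machinery is omitted; all names are illustrative):

```python
import math
import random

def metropolis_step(x, beta, energy, width, rng):
    """One MMC trial: symmetric random displacement, accepted with
    probability min(1, exp(-beta * dE))."""
    x_trial = x + rng.uniform(-width, width)
    dE = energy(x_trial) - energy(x)
    if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
        return x_trial
    return x

def mean_x2(n_steps=200000, beta=1.0, seed=3):
    """Estimate <x^2> for a harmonic well U = x^2/2; exact answer is 1/beta."""
    rng = random.Random(seed)
    energy = lambda x: 0.5 * x * x
    x, acc = 0.0, 0.0
    for step in range(n_steps):
        x = metropolis_step(x, beta, energy, 1.0, rng)
        if step >= 1000:                 # discard burn-in
            acc += x * x
    return acc / (n_steps - 1000)
```

The GPU version in the paper parallelizes exactly this propose/accept loop across many molecules while keeping all state resident in device memory.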
Hart, S. W. D.; Maldonado, G. Ivan; Celik, Cihangir; Leal, Luiz C
2014-01-01
For many Monte Carlo codes, cross sections are generally only created at a set of predetermined temperatures. This causes an increase in error as one moves further and further away from these temperatures in the Monte Carlo model. This paper discusses recent progress in the Scale Monte Carlo module KENO to create problem-dependent, Doppler broadened, cross sections. Currently only broadening the 1D cross sections and probability tables is addressed. The approach uses a finite difference method to calculate the temperature-dependent cross sections for the 1D data, and a simple linear-logarithmic interpolation in the square root of temperature for the probability tables. Work is also ongoing to address broadening the S(alpha, beta) tables. With the current approach the temperature-dependent cross sections are Doppler broadened before transport starts, and, for all but a few isotopes, the impact on cross section loading is negligible. Results can be compared with those obtained by using multigroup libraries, as KENO currently does interpolation on the multigroup cross sections to determine temperature-dependent cross sections. Current results compare favorably with these expected results.
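The stated rule for the probability tables, linear-logarithmic interpolation in the square root of temperature, can be written down directly. A sketch under the assumption that "linear-logarithmic" means the log of the table value is taken linear in sqrt(T); KENO's exact convention may differ:

```python
import math

def interp_prob_table(T, T1, v1, T2, v2):
    """Interpolate a (positive) probability-table value between two
    library temperatures T1 < T2: log(v) is taken linear in sqrt(T)."""
    f = (math.sqrt(T) - math.sqrt(T1)) / (math.sqrt(T2) - math.sqrt(T1))
    return math.exp((1.0 - f) * math.log(v1) + f * math.log(v2))
```

The scheme reproduces the library values exactly at T1 and T2 and varies smoothly and monotonically in between, which is all the abstract claims for it.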
Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando; Hernandez-Ortiz, Juan P.; de Pablo, Juan J.
2015-07-27
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
Nonequilibrium candidate Monte Carlo: A new tool for efficient equilibrium simulation
Nilmeier, Jerome P.; Crooks, Gavin E.; Minh, David D. L.; Chodera, John D.
2011-11-08
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
Howard, M; Beltran, C; Herman, M
2014-06-01
Purpose: To investigate the influence of the minimum monitor unit (MU) on the quality of clinical treatment plans for scanned proton therapy. Methods: Delivery system characteristics limit the minimum number of protons that can be delivered per spot, resulting in a min-MU limit. Plan quality can be impacted by the min-MU limit. Two sites were used to investigate the impact of min-MU on treatment plans: a pediatric brain tumor at a depth of 5-10 cm and a head and neck tumor at a depth of 1-20 cm. Three-field intensity modulated spot scanning proton plans were created for each site with the following parameter variations: min-MU limit range of 0.0000-0.0060; and spot spacing range of 0.5-2.0σ of the nominal spot size at isocenter in water (σ=4 mm in this work). Comparisons were based on target homogeneity and normal tissue sparing. Results: Increasing the min-MU with a fixed spot spacing decreases plan quality, both in homogeneous target coverage and in the avoidance of critical structures. Both head and neck and pediatric brain plans show a 20% increase in relative dose for the hot spot in the CTV and a 10% increase in key critical structures when comparing min-MU limits of 0.0000 and 0.0060 with a fixed spot spacing of 1σ. The DVHs of CTVs show that min-MU limits of 0.0000 and 0.0010 produce similar plan quality, and quality decreases as the min-MU limit increases beyond 0.0020. As spot spacing approaches 2σ, degradation in plan quality is observed even when no min-MU limit is imposed. Conclusion: Given a fixed spot spacing of ≤ 1σ of the spot size in water, plan quality decreases as min-MU increases beyond 0.0020. The effect of min-MU should be taken into consideration while planning spot scanning proton therapy treatments to realize its full potential.
Radiation doses in cone-beam breast computed tomography: A Monte Carlo simulation study
Yi, Ying; Lai, Chao-Jen; Han, Tao; Zhong, Yuncheng; Shen, Youtao; Liu, Xinming; Ge, Shuaiping; You, Zhicheng; Wang, Tianpeng; Shaw, Chris C.
2011-02-15
Purpose: In this article, we describe a method to estimate the spatial dose variation, average dose and mean glandular dose (MGD) for a real breast using Monte Carlo simulation based on cone beam breast computed tomography (CBBCT) images. We present and discuss the dose estimation results for 19 mastectomy breast specimens, 4 homogeneous breast models, 6 ellipsoidal phantoms, and 6 cylindrical phantoms. Methods: To validate the Monte Carlo method for dose estimation in CBBCT, we compared the Monte Carlo dose estimates with the thermoluminescent dosimeter measurements at various radial positions in two polycarbonate cylinders (11- and 15-cm in diameter). Cone-beam computed tomography (CBCT) images of 19 mastectomy breast specimens, obtained with a bench-top experimental scanner, were segmented and used to construct 19 structured breast models. Monte Carlo simulation of CBBCT with these models was performed and used to estimate the point doses, average doses, and mean glandular doses for unit open air exposure at the iso-center. Mass based glandularity values were computed and used to investigate their effects on the average doses as well as the mean glandular doses. Average doses for 4 homogeneous breast models were estimated and compared to those of the corresponding structured breast models to investigate the effect of tissue structures. Average doses for ellipsoidal and cylindrical digital phantoms of identical diameter and height were also estimated for various glandularity values and compared with those for the structured breast models. Results: The absorbed dose maps for structured breast models show that doses in the glandular tissue were higher than those in the nearby adipose tissue. Estimated average doses for the homogeneous breast models were almost identical to those for the structured breast models (p=1). Normalized average doses estimated for the ellipsoidal phantoms were similar to those for the structured breast models (root mean square (rms
Radiation doses in volume-of-interest breast computed tomography—A Monte Carlo simulation study
Lai, Chao-Jen; Zhong, Yuncheng; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.
2015-06-15
Purpose: Cone beam breast computed tomography (breast CT) with true three-dimensional, nearly isotropic spatial resolution has been developed and investigated over the past decade to overcome the problem of lesions overlapping with breast anatomical structures on two-dimensional mammographic images. However, the ability of breast CT to detect small objects, such as tissue structure edges and small calcifications, is limited. To resolve this problem, the authors proposed and developed a volume-of-interest (VOI) breast CT technique to image a small VOI using a higher radiation dose to improve that region’s visibility. In this study, the authors performed Monte Carlo simulations to estimate average breast dose and average glandular dose (AGD) for the VOI breast CT technique. Methods: Electron–Gamma-Shower system code-based Monte Carlo codes were used to simulate breast CT. The Monte Carlo codes estimated were validated using physical measurements of air kerma ratios and point doses in phantoms with an ion chamber and optically stimulated luminescence dosimeters. The validated full cone x-ray source was then collimated to simulate half cone beam x-rays to image digital pendant-geometry, hemi-ellipsoidal, homogeneous breast phantoms and to estimate breast doses with full field scans. 13-cm in diameter, 10-cm long hemi-ellipsoidal homogeneous phantoms were used to simulate median breasts. Breast compositions of 25% and 50% volumetric glandular fractions (VGFs) were used to investigate the influence on breast dose. The simulated half cone beam x-rays were then collimated to a narrow x-ray beam with an area of 2.5 × 2.5 cm{sup 2} field of view at the isocenter plane and to perform VOI field scans. The Monte Carlo results for the full field scans and the VOI field scans were then used to estimate the AGD for the VOI breast CT technique. Results: The ratios of air kerma ratios and dose measurement results from the Monte Carlo simulation to those from the physical
Miles, Michael; Karki, U.; Hovanski, Yuri
2014-10-01
Friction-stir spot welding (FSSW) has been shown to be capable of joining advanced high-strength steel, with its flexibility in controlling the heat of welding and the resulting microstructure of the joint. This makes FSSW a potential alternative to resistance spot welding if tool life is sufficiently high, and if machine spindle loads are sufficiently low that the process can be implemented on an industrial robot. Robots for spot welding can typically sustain vertical loads of about 8 kN, but FSSW at tool speeds of less than 3000 rpm causes loads that are too high, in the range of 11-14 kN. Therefore, in the current work, tool speeds of 5000 rpm were employed to generate heat more quickly and to reduce welding loads to acceptable levels. Si3N4 tools were used for the welding experiments on 1.2-mm DP 980 steel. The FSSW process was modeled with a finite element approach using the Forge software. An updated Lagrangian scheme with explicit time integration was employed to predict the flow of the sheet material, subjected to boundary conditions of a rotating tool and a fixed backing plate. Material flow was calculated from a velocity field that is two-dimensional, but heat generated by friction was computed by a novel approach, where the rotational velocity component imparted to the sheet by the tool surface was included in the thermal boundary conditions. An isotropic, viscoplastic Norton-Hoff law was used to compute the material flow stress as a function of strain, strain rate, and temperature. The model predicted welding temperatures to within percent, and the position of the joint interface to within 10 percent, of the experimental results.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Kellogg, Christina A.; Piceno, Yvette M.; Tom, Lauren M.; DeSantis, Todd Z.; Gray, Michael A.; Andersen, Gary L.; Mormile, Melanie R.
2014-10-07
Coral disease is one of the major causes of reef degradation. Dark Spot Syndrome (DSS) was described in the early 1990s as brown or purple amorphous areas of tissue on a coral and has since become one of the most prevalent diseases reported on Caribbean reefs. It has been identified in a number of coral species, but there is debate as to whether it is in fact the same disease in different corals. Further, it is questioned whether these macroscopic signs are in fact diagnostic of an infectious disease at all. The most commonly affected species in the Caribbean is the massive starlet coral Siderastrea siderea. We sampled this species in two locations, Dry Tortugas National Park and Virgin Islands National Park. Tissue biopsies were collected from both healthy colonies and those with dark spot lesions. Microbial-community DNA was extracted from coral samples (mucus, tissue, and skeleton), amplified using bacterial-specific primers, and applied to PhyloChip G3 microarrays to examine the bacterial diversity associated with this coral. Samples were also screened for the presence of a fungal ribotype that has recently been implicated as a causative agent of DSS in another coral species, but the amplifications were unsuccessful. S. siderea samples did not cluster consistently based on health state (i.e., normal versus dark spot). Various bacteria, including Cyanobacteria and Vibrios, were observed to have increased relative abundance in the discolored tissue, but the patterns were not consistent across all DSS samples. Overall, our findings do not support the hypothesis that DSS in S. siderea is linked to a bacterial pathogen or pathogens. This dataset provides the most comprehensive overview to date of the bacterial community associated with the scleractinian coral S. siderea.
Oxygen chemisorption on Cu(19 19 1) studied by spot profile analysis low-energy electron diffraction
Brandstetter, T.; Draxler, M.; Hohage, M.; Zeppenfeld, P.
2007-12-15
Cu(110) and the vicinal Cu(19 19 1) surfaces were characterized by recording maps of the reciprocal space by means of spot profile analysis low-energy electron diffraction (SPA-LEED). For both surfaces, kinematic simulations were performed to get insight into the main features of the experimental data. Furthermore, it is shown that chemisorption of oxygen and subsequent annealing lead to the formation of a Cu-CuO stripe phase and induce faceting of the Cu(19 19 1) surface. The evolution from the clean Cu(19 19 1) surface to the coexistence of the (110) and (111) facets with increasing oxygen exposure was characterized by SPA-LEED.
NONE
1998-01-01
The Bear Creek Valley Floodplain Hot Spot Removal Action Project Plan, Oak Ridge Y-12 Plant, Oak Ridge, Tennessee (Y/ER-301) was prepared (1) to safely, cost-effectively, and efficiently evaluate the environmental impact of solid material in the two debris areas in the context of industrial land uses (as defined in the Bear Creek Valley Feasibility Study) to support the Engineering Evaluation/Cost Assessment and (2) to evaluate, define, and implement the actions to mitigate these impacts. This work was performed under Work Breakdown Structure 1.x.01.20.01.08.
Lillaney, Prasheel; Shin, Mihye; Conolly, Steven M.; Fahrig, Rebecca
2012-09-15
Purpose: Combining x-ray fluoroscopy and MR imaging systems for guidance of interventional procedures has become more commonplace. By designing an x-ray tube that is immune to the magnetic fields outside of the MR bore, the two systems can be placed in close proximity to each other. A major obstacle to robust x-ray tube design is correcting for the effects of the magnetic fields on the x-ray tube focal spot. A potential solution is to design active shielding that locally cancels the magnetic fields near the focal spot. Methods: An iterative optimization algorithm is implemented to design resistive active shielding coils that will be placed outside the x-ray tube insert. The optimization procedure attempts to minimize the power consumption of the shielding coils while satisfying magnetic field homogeneity constraints. The algorithm is composed of a linear programming step and a nonlinear programming step that are interleaved with each other. The coil results are verified using a finite element space charge simulation of the electron beam inside the x-ray tube. To alleviate heating concerns, an optimized coil solution is derived that includes a neodymium permanent magnet. Any demagnetization of the permanent magnet is calculated prior to solving for the optimized coils. The temperature dynamics of the coil solutions are calculated using a lumped parameter model, which is used to estimate operation times of the coils before temperature failure. Results: For a magnetic field strength of 88 mT, the algorithm solves for coils that carry a current density of 588 A/cm{sup 2}. This specific coil geometry can operate for 15 min continuously before reaching temperature failure. By including a neodymium magnet in the design, the current density drops to 337 A/cm{sup 2}, which increases the operation time to 59 min. Space charge simulations verify that the coil designs are effective, but for oblique x-ray tube geometries there is still distortion of the focal spot shape along with deflections of
Grosshans, David R.; Zhu, X. Ronald; Melancon, Adam; Allen, Pamela K.; Poenisch, Falk; Palmer, Matthew; McAleer, Mary Frances; McGovern, Susan L.; Gillin, Michael; DeMonte, Franco; Chang, Eric L.; Brown, Paul D.; Mahajan, Anita
2014-11-01
Purpose: To describe treatment planning techniques and early clinical outcomes in patients treated with spot scanning proton therapy for chordoma or chondrosarcoma of the skull base. Methods and Materials: From June 2010 through August 2011, 15 patients were treated with spot scanning proton therapy for chordoma (n=10) or chondrosarcoma (n=5) at a single institution. Toxicity was prospectively evaluated and scored weekly and at all follow-up visits according to Common Terminology Criteria for Adverse Events, version 3.0. Treatment planning techniques and dosimetric data were recorded and compared with those of passive scattering plans created with clinically applicable dose constraints. Results: Ten patients were treated with single-field-optimized scanning beam plans and 5 with multifield-optimized intensity modulated proton therapy. All but 2 patients received a simultaneous integrated boost as well. The mean prescribed radiation doses were 69.8 Gy (relative biological effectiveness [RBE]; range, 68-70 Gy [RBE]) for chordoma and 68.4 Gy (RBE) (range, 66-70) for chondrosarcoma. In comparison with passive scattering plans, spot scanning plans demonstrated improved high-dose conformality and sparing of temporal lobes and brainstem. Clinically, the most common acute toxicities included fatigue (grade 2 for 2 patients, grade 1 for 8 patients) and nausea (grade 2 for 2 patients, grade 1 for 6 patients). No toxicities of grades 3 to 5 were recorded. At a median follow-up time of 27 months (range, 13-42 months), 1 patient had experienced local recurrence and a second developed distant metastatic disease. Two patients had magnetic resonance imaging-documented temporal lobe changes, and a third patient developed facial numbness. No other subacute or late effects were recorded. Conclusions: In comparison to passive scattering, treatment plans for spot scanning proton therapy displayed improved high-dose conformality. Clinically, the treatment was well tolerated, and
Signal processing Model/Method for Recovering Acoustic Reflectivity of Spot Weld
Energy Science and Technology Software Center (OSTI)
2005-09-08
Until recently, U.S. auto manufacturers have inspected the veracity of welds in the auto bodies they build by using destructive tear-down, which typically results in more than $1M of scrappage per plant per year. Much of this expense could possibly be avoided with a nondestructive technique (and 100% instead of 1% inspection could be achieved). Recent advances in ultrasound probes promise to provide a sufficiently accurate nondestructive evaluation technique, but the necessary signal processing has not yet been developed. This disclosure describes a signal processing model and method useful for diagnosing the veracity of spot welds between two sheets of the same thickness from ultrasound signals. Standard systems theory describes a signal as a convolution of a transducer function, h(t), and an impulse train (beta(t), tau(t)) [1] (see Eq. (1) attached). With a Gaussian wavelet as a transducer function, this model describes the signal from an ultrasound probe quite well, and the literature provides many methods for "deconvolution," for recovery of the impulse train from the signal [see e.g., 2-3]. What is novel about the technique disclosed is the model that describes the impulse train as a function of reflectivity, the share of energy incident on the interface that is reflected, and that allows the recovery of its estimated value. The reflectivity estimate provides an ideal indicator of weld veracity, compressing each signal into a single value between 0 and 1, which can then be displayed as a 2D greyscale or colormap of the weld. The model describing the system is attached as Eqs. (2). These equations account for the energy in the probe-side and opposite sheets. In each period, this energy is a sum of that reflected from the same sheet plus that transmitted from the opposite (dampened by material attenuation at rate a). This model is consistent with physical first principles (in particular the First and Second Laws of Thermodynamics) and has been verified
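The convolution model of Eq. (1) can be sketched by synthesizing an A-scan as a superposition of Gaussian wavelets placed at the echo times. The echo times, amplitudes, and wavelet width below are illustrative values, not taken from the disclosure:

```python
import math

def gaussian_wavelet(t, sigma=0.5):
    """Transducer impulse response h(t): a Gaussian wavelet (assumed shape)."""
    return math.exp(-0.5 * (t / sigma) ** 2)

def synthesize_signal(ts, impulses, sigma=0.5):
    """Eq. (1)-style model: the A-scan is a superposition of the wavelet
    h(t) placed at each echo time tau_k and scaled by amplitude beta_k."""
    return [sum(b * gaussian_wavelet(t - tau, sigma) for tau, b in impulses)
            for t in ts]

# Two echoes: a front-wall echo at t=2 and a weaker back-wall echo at t=5,
# the second reduced by reflectivity/attenuation (illustrative values).
ts = [i * 0.1 for i in range(100)]
sig = synthesize_signal(ts, [(2.0, 1.0), (5.0, 0.4)])
```

Deconvolution then amounts to recovering the (tau_k, beta_k) pairs from sig; the disclosed method goes one step further and models the beta_k in terms of interface reflectivity.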
A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System
C. O. Slater; J. M. Barnes; J. O. Johnson; J. D. Drischler
1998-10-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.
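The folding step the manual describes, coupling-surface fluence combined with adjoint dose importance, is at heart a weighted inner product over (surface element, energy group) pairs. A schematic sketch with hypothetical flattened arrays, not MASH's actual coupling-code interface:

```python
def fold_dose_response(fluence, importance, area_weights):
    """Fold forward fluence with adjoint dose importance over the
    coupling surface: D = sum_i A_i * phi_i * phi*_i, with i running
    over (surface element, energy group) pairs flattened together."""
    if not (len(fluence) == len(importance) == len(area_weights)):
        raise ValueError("mismatched coupling-surface grids")
    return sum(a * f * imp
               for a, f, imp in zip(area_weights, fluence, importance))
```

Because the fold is linear in the fluence, the same importance map can be reused for different source orientations and distances, which is exactly the flexibility the manual attributes to the coupling code.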
Barrera, C A; Moran, M J
2007-08-21
The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long-lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and downscattered images and then subtracting a decayed version of the former from the latter. The difference images must be deconvolved to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified using two surrogate sources. The simulations show that all LOS
Shafer, J.D.; Shepard, J.R.
1997-04-01
We derive an approximate renormalization group (RG) flow equation for the local effective potential of single-component {phi}{sup 4} field theory at finite temperature. Previous zero-temperature RG equations are recovered in the low- and high-temperature limits, in the latter case via the phenomenon of dimensional reduction. We numerically solve our RG equations to obtain local effective potentials at finite temperature. These are found to be in excellent agreement with Monte Carlo results, especially when lattice artifacts are accounted for in the RG treatment.
Neutron matter with Quantum Monte Carlo: chiral 3N forces and static response
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Buraczynski, M.; Gandolfi, S.; Gezerlis, A.; Schwenk, A.; Tews, I.
2016-03-01
Neutron matter is related to the physics of neutron stars and that of neutron-rich nuclei. Moreover, Quantum Monte Carlo (QMC) methods offer a unique way of solving the many-body problem non-perturbatively, providing feedback on features of nuclear interactions and addressing scenarios that are inaccessible to other approaches. Our contribution goes over two recent accomplishments in the theory of neutron matter: a) the fusing of QMC with chiral effective field theory interactions, focusing on local chiral 3N forces, and b) the first attempt to find an ab initio solution to the problem of static response.
Photons, Electrons and Positrons Transport in 3D by Monte Carlo Techniques
Energy Science and Technology Software Center (OSTI)
2014-12-01
Version 04 FOTELP-2014 is a new compact general-purpose version of the previous FOTELP-2K6 code, designed to simulate the transport of photons, electrons and positrons through three-dimensional material and source geometries by Monte Carlo techniques, using the subroutine package PENGEOM from the PENELOPE code under Linux-based and Windows OS. This new version includes the routine ELMAG for electron and positron transport simulation in electric and magnetic fields, a RESUME option, and the routine TIMER for obtaining the starting random number and for measuring the simulation time.
Refinement of overlapping local/global iteration method based on Monte Carlo/p-CMFD calculations
Jo, Y.; Yun, S.; Cho, N. Z.
2013-07-01
In this paper, the overlapping local/global (OLG) iteration method based on Monte Carlo/p-CMFD calculations is refined in two aspects. One is the consistent use of estimators to generate homogenized scattering cross sections. The other is the division of the incident or exiting angular interval into multiple angular bins to modulate albedo boundary conditions for local problems. Numerical tests show that, compared to the one-angle-bin case in a previous study, the four-angle-bin case gives significantly improved results. (authors)
Study of DCX reaction on medium nuclei with Monte-Carlo Shell Model
Wu, H. C.; Gibbs, W. R.
2010-08-04
In this work a method is introduced to calculate the DCX reaction in the framework of the Monte Carlo Shell Model (MCSM). To facilitate the use of the zero-temperature formalism of the MCSM, the double-isobaric-analog state (DIAS) is derived from the ground state by using the isospin shifting operator. The validity of this method is tested by comparing the MCSM results to those of the SU(3) symmetry case. Application of this method to DCX on {sup 56}Fe and {sup 93}Nb is discussed.
Quantized vortices in {sup 4}He droplets: A quantum Monte Carlo study
Sola, E.; Casulleras, J.; Boronat, J.
2007-08-01
We present a diffusion Monte Carlo study of a vortex line excitation attached to the center of a {sup 4}He droplet at zero temperature. The vortex energy is estimated for droplets of increasing numbers of atoms, from N=70 up to 300, showing a monotonic increase with N. The evolution of the core radius and its associated energy, the core energy, is also studied as a function of N. The core radius is {approx}1 A in the center and increases when approaching the droplet surface; the core energy per unit volume stabilizes at a value 2.8 K{sigma}{sup -3} ({sigma}=2.556 A) for N{>=}200.
Quantum Monte Carlo simulation of a two-dimensional Bose gas
Pilati, S.; Boronat, J.; Casulleras, J.; Giorgini, S.
2005-02-01
The equation of state of a homogeneous two-dimensional Bose gas is calculated using quantum Monte Carlo methods. The low-density universal behavior is investigated using different interatomic model potentials: both finite-range, strictly repulsive potentials and a zero-range potential supporting a bound state. The condensate fraction and the pair distribution function are calculated as a function of the gas parameter, ranging from the dilute to the strongly correlated regime. In the case of the zero-range pseudopotential we discuss the stability of the gaslike state for large values of the two-dimensional scattering length, and we calculate the critical density at which the system becomes unstable against cluster formation.
W/Z + b bbar/Jets at NLO Using the Monte Carlo MCFM
John M. Campbell
2001-05-29
We summarize recent progress in next-to-leading order QCD calculations made using the Monte Carlo MCFM. In particular, we focus on the calculations of p{bar p} {r_arrow} Wb{bar b}, Zb{bar b} and highlight the significant corrections to background estimates for Higgs searches in the WH and ZH channels at the Tevatron. We also report on the current progress of, and strategies for, the calculation of the process p{bar p} {r_arrow} W/Z + 2 jets.
Simulation of atomic diffusion in the Fcc NiAl system: A kinetic Monte Carlo study
Alfonso, Dominic R.; Tafen, De Nyago
2015-04-28
Atomic diffusion in fcc NiAl binary alloys was studied by kinetic Monte Carlo simulation. The environment-dependent hopping barriers were computed using a pair interaction model whose parameters were fitted to relevant data derived from electronic structure calculations. Long-time diffusivities were calculated and the effect of composition change on the tracer diffusion coefficients was analyzed. The results indicate that composition change has a noticeable impact on the atomic diffusivities: a reduction in the mobility of both Ni and Al is demonstrated with increasing Al content. Accordingly, the pair interactions between atoms were examined to understand the predicted trends.
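The rejection-free kinetic Monte Carlo step underlying such simulations can be sketched as follows. The barriers, attempt frequency, and temperature below are hypothetical placeholders, not the fitted pair-interaction values from the paper:

```python
import math
import random

random.seed(1)
kB = 8.617e-5   # Boltzmann constant, eV/K
T = 1000.0      # temperature, K (illustrative)
nu0 = 1e13      # attempt frequency, 1/s (assumed)

def kmc_step(barriers):
    """One rejection-free KMC step: choose a hop with probability
    proportional to its Arrhenius rate, then advance the clock."""
    rates = [nu0 * math.exp(-Eb / (kB * T)) for Eb in barriers]
    total = sum(rates)
    x = random.random() * total
    acc = 0.0
    chosen = len(rates) - 1
    for i, r in enumerate(rates):
        acc += r
        if x < acc:
            chosen = i
            break
    # exponentially distributed waiting time with mean 1/total
    dt = -math.log(1.0 - random.random()) / total
    return chosen, dt

# Hypothetical environment-dependent barriers (eV) for four candidate hops
event, dt = kmc_step([0.8, 0.9, 1.0, 1.1])
```

Because hops with lower barriers get exponentially larger rates, the environment dependence of the barriers directly controls which diffusion pathways dominate.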
Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus; Vogel, Thomas; Landau, David P
2015-01-01
We describe the study of the thermodynamics of materials using replica-exchange Wang-Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang-Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first-principles calculations. We demonstrate that our framework leads to a significant speedup without compromising accuracy or precision and facilitates the study of much larger systems than is possible with its serial counterpart.
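The serial Wang-Landau iteration that REWL parallelizes can be sketched on a toy system of N independent two-state sites, whose exact density of states is the binomial coefficient C(N, E). The flatness threshold, block length, and modification-factor schedule below are illustrative choices, not those of the paper:

```python
import math
import random

random.seed(2)
N = 10                       # ten independent two-state sites
state = [0] * N
E = 0                        # energy = number of "up" sites
lng = [0.0] * (N + 1)        # running estimate of ln g(E)
lnf = 1.0                    # Wang-Landau modification factor
blocks = 0
while lnf > 1e-4 and blocks < 500:
    hist = [0] * (N + 1)
    for _ in range(20000):
        i = random.randrange(N)
        E_new = E + (1 - 2 * state[i])   # flipping site i changes E by +/-1
        # accept with probability min(1, g(E) / g(E_new))
        if math.log(1.0 - random.random()) < lng[E] - lng[E_new]:
            state[i] ^= 1
            E = E_new
        lng[E] += lnf                    # penalize the visited level
        hist[E] += 1
    if min(hist) > 0.8 * sum(hist) / len(hist):   # histogram "flat enough"?
        lnf /= 2.0                       # refine the modification factor
    blocks += 1

# Relative density of states: lng[E] - lng[0] should approach ln C(N, E)
```

REWL splits the energy range into overlapping windows, runs one such walker per window, and exchanges configurations between neighboring windows, which is what makes the method massively parallel.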
Exponentially-convergent Monte Carlo for the 1-D transport equation
Peterson, J. R.; Morel, J. E.; Ragusa, J. C.
2013-07-01
We define a new exponentially-convergent Monte Carlo method for solving the one-speed 1-D slab-geometry transport equation. This method is based upon the use of a linear discontinuous finite-element trial space in space and direction to represent the transport solution. A space-direction h-adaptive algorithm is employed to restore exponential convergence after stagnation occurs due to inadequate trial-space resolution. This method uses jumps in the solution at cell interfaces as an error indicator. Computational results are presented demonstrating the efficacy of the new approach. (authors)
Monte Carlo generators for studies of the 3D structure of the nucleon
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Avakian, Harut; D'Alesio, U.; Murgia, F.
2015-01-23
In this study, extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self-consistent analysis framework, accounting for evolution effects, and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.
Monte Carlo simulations of channeling spectra recorded for samples containing complex defects
Jagielski, Jacek; Turos, Prof. Andrzej; Nowicki, Lech; Jozwik, P.; Shutthanandan, Vaithiyalingam; Zhang, Yanwen; Sathish, N.; Thome, Lionel; Stonert, A.; Jozwik-Biala, Iwona
2012-01-01
The aim of the present paper is to describe the current status of the development of McChasy, a Monte Carlo simulation code, to make it suitable for the analysis of dislocations and dislocation loops in crystals. Factors such as the shape of the bent channel and geometrical distortions of the crystalline structure in the vicinity of dislocations are discussed. The results obtained demonstrate that the new procedure, applied to spectra recorded on crystals containing dislocations, yields damage profiles that are independent of the energy of the analyzing beam.
Monte Carlo simulations of channeling spectra recorded for samples containing complex defects
Jagielski, Jacek K.; Turos, Andrzej W.; Nowicki, L.; Jozwik, Przemyslaw A.; Shutthanandan, V.; Zhang, Yanwen; Sathish, N.; Thome, Lionel; Stonert, A.; Jozwik Biala, Iwona
2012-02-15
The main aim of the present paper is to describe the current status of the development of McChasy, a Monte Carlo simulation code, to make it suitable for the analysis of dislocations and dislocation loops in crystals. Factors such as the shape of the bent channel and geometrical distortions of the crystalline structure in the vicinity of dislocations are discussed. Several examples of the analysis performed at different energies of the analyzing ions are presented. The results obtained demonstrate that the new procedure, applied to spectra recorded on crystals containing dislocations, yields damage profiles that are independent of the energy of the analyzing beam.
Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo
Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.
2014-10-01
We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and indicate that the high-pressure melt curve is well described by Lindemann behavior up to at least 80 GPa, a finding in stark contrast to DAC data.
TH-C-18A-10: The Influence of Tube Current on X-Ray Focal Spot Size for 70 kV CT Imaging
Duan, X; Grimes, J; Yu, L; Leng, S; McCollough, C
2014-06-15
Purpose: Focal spot blooming is an increase in the focal spot size at increased tube current and/or decreased tube potential. In this work, we evaluated the influence of tube current on the focal spot size at low kV for two CT systems, one of which used a tube designed to reduce blooming effects. Methods: A slit camera (10 micron slit) was used to measure focal spot size on two CT scanners from the same manufacturer (Siemens Somatom Force and Definition Flash) at 70 kV and low, medium and maximum tube currents, according to the capabilities of each system (Force: 100, 800 and 1300 mA; Flash: 100, 200 and 500 mA). Exposures were made with a stationary tube in service mode using a raised stand, without table movement or the flying focal spot technique. Focal spot size, nominally 0.8 and 1.2 mm, respectively, was measured parallel and perpendicular to the cathode-anode axis by calculating the full-width-at-half-maximum of the slit profile recorded using computed radiography plates. Results: Focal spot sizes perpendicular to the anode-cathode axis increased at the maximum mA by 5.7% on the Force and 39.1% on the Flash relative to those at the minimum mA, even though the mA was increased 13-fold on the Force and only 5-fold on the Flash. Focal spot size increased parallel to the anode-cathode axis by 70.4% on the Force and 40.9% on the Flash. Conclusion: CT protocols using low kV typically require high mA. These protocols are relevant for children and smaller adults, and for dual-energy scanning. Technical measures to limit focal spot blooming are important in these settings to avoid reduced spatial resolution. The x-ray tube on a recently introduced scanner appears to greatly reduce blooming effects, even at very high mA values. CHM has research support from Siemens Healthcare.
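The full-width-at-half-maximum (FWHM) measurement described in the Methods can be sketched as linear interpolation of the half-maximum crossings of a sampled slit profile. The Gaussian test profile below is synthetic, with sigma chosen so the FWHM is near the 0.8 mm nominal spot size:

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum via linear interpolation of the
    two half-maximum crossings of a sampled profile."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(i, j):
        # x-position where the segment (x[i],y[i])-(x[j],y[j]) hits `half`
        return x[i] + (half - y[i]) * (x[j] - x[i]) / (y[j] - y[i])

    return cross(i1, i1 + 1) - cross(i0 - 1, i0)

# Synthetic slit profile: Gaussian with sigma = 0.34 mm,
# so FWHM = 2*sqrt(2*ln 2)*sigma ~ 0.80 mm
x = np.linspace(-3.0, 3.0, 601)        # position, mm
y = np.exp(-0.5 * (x / 0.34) ** 2)     # relative intensity
width = fwhm(x, y)
```

On real slit-camera data the profile is noisy and asymmetric, so a measured profile would typically be smoothed or fitted before the half-maximum crossings are located.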
Sun, Xin; Stephens, Elizabeth V.; Khaleel, Mohammad A.
2007-03-01
This paper examines the effects of fusion zone size on the failure modes, static strength and energy absorption of resistance spot welds (RSW) of advanced high-strength steels (AHSS). DP800 and TRIP800 spot welds are considered. The main failure modes for spot welds are nugget pullout and interfacial fracture; partial interfacial fracture is also observed. The critical fusion zone sizes to ensure the nugget pullout failure mode are developed for both DP800 and TRIP800 using a limit-load-based analytical model and micro-hardness measurements of the weld cross sections. Static weld strength tests using cross-tension samples were performed on joint populations with controlled fusion zone sizes. The resulting peak load and energy absorption levels associated with each failure mode were studied using statistical data analysis tools. The results show that the conventional weld size of 4t{sup 1/2} cannot produce the nugget pullout mode for either the DP800 or TRIP800 material. The results also suggest that performance-based spot weld acceptance criteria should be developed for different AHSS spot welds.
Surface Structures of Cubo-octahedral Pt-Mo Catalyst Nanoparticles from Monte Carlo Simulations
Wang, Guofeng; Van Hove, M.A.; Ross, P.N.; Baskes, M.I.
2005-03-31
The surface structures of cubo-octahedral Pt-Mo nanoparticles have been investigated using the Monte Carlo method and modified embedded atom method potentials that we developed for Pt-Mo alloys. The cubo-octahedral Pt-Mo nanoparticles are constructed with disordered fcc configurations, with sizes from 2.5 to 5.0 nm, and with Pt concentrations from 60 to 90 at. percent. The equilibrium Pt-Mo nanoparticle configurations were generated through Monte Carlo simulations allowing both atomic displacements and element exchanges at 600 K. We predict that the Pt atoms weakly segregate to the surfaces of such nanoparticles. The Pt concentrations in the surface are calculated to be 5 to 14 at. percent higher than the Pt concentrations of the nanoparticles. Moreover, the Pt atoms preferentially segregate to the facet sites of the surface, while the Pt and Mo atoms tend to alternate along the edges and vertices of these nanoparticles. We found that decreasing the size or increasing the Pt concentration leads to higher Pt concentrations but fewer Pt-Mo pairs in the Pt-Mo nanoparticle surfaces.
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^{21} = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load-imbalanced calculations where each domain’s replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
An Evaluation of Monte Carlo Simulations of Neutron Multiplicity Measurements of Plutonium Metal
Mattingly, John; Miller, Eric; Solomon, Clell J. Jr.; Dennis, Ben; Meldrum, Amy; Clarke, Shaun; Pozzi, Sara
2012-06-21
In January 2009, Sandia National Laboratories conducted neutron multiplicity measurements of a polyethylene-reflected plutonium metal sphere. Over the past 3 years, those experiments have been collaboratively analyzed using Monte Carlo simulations conducted by University of Michigan (UM), Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and North Carolina State University (NCSU). Monte Carlo simulations of the experiments consistently overpredict the mean and variance of the measured neutron multiplicity distribution. This paper presents a sensitivity study conducted to evaluate the potential sources of the observed errors. MCNPX-PoliMi simulations of plutonium neutron multiplicity measurements exhibited systematic over-prediction of the neutron multiplicity distribution. The over-prediction tended to increase with increasing multiplication. MCNPX-PoliMi had previously been validated against only very low multiplication benchmarks. We conducted sensitivity studies to try to identify the cause(s) of the simulation errors; we eliminated the potential causes we identified, except for Pu-239 {bar {nu}}. A very small change (-1.1%) in the Pu-239 {bar {nu}} dramatically improved the accuracy of the MCNPX-PoliMi simulation for all 6 measurements. This observation is consistent with the trend observed in the bias exhibited by the MCNPX-PoliMi simulations: a very small error in {bar {nu}} is 'magnified' by increasing multiplication. We applied a scalar adjustment to Pu-239 {bar {nu}} (independent of neutron energy); an adjustment that depends on energy is probably more appropriate.
Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.
2015-11-19
Kernel density estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or a particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications; however, little work has been done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies into the solution; an ad hoc remedy for these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
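A histogram-free tally of the kind described can be sketched with a collision-style KDE: every sampled event contributes to every tally point through a kernel. The sketch below uses an Epanechnikov kernel and standard-normal "collision sites" purely for illustration; it is not the MFP KDE of the paper.

```python
import numpy as np

def kde_tally(events, weights, points, h):
    """Score every event at every tally point with an Epanechnikov kernel
    of bandwidth h; returns the density estimate at each tally point."""
    u = (points[:, None] - events[None, :]) / h
    kern = np.where(np.abs(u) < 1.0, 0.75 * (1.0 - u ** 2), 0.0) / h
    return (kern * weights[None, :]).sum(axis=1) / len(events)

rng = np.random.default_rng(3)
events = rng.normal(0.0, 1.0, 20000)   # stand-in "collision sites"
weights = np.ones_like(events)         # unit particle weights
points = np.linspace(-2.0, 2.0, 5)     # tally points
flux = kde_tally(events, weights, points, h=0.2)
# flux approximates the standard normal density at the tally points
```

Note that the tally points can be placed at any resolution without changing the per-event statistics, which is the variance advantage over a histogram tally.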
A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code
Energy Science and Technology Software Center (OSTI)
1998-06-12
TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also very fast compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
Tsvetkov, Pavel V.; Ames II, David E.; Alajo, Ayodeji B.; Pritchard, Megan L.
2006-07-01
Partitioning and transmutation of minor actinides are expected to have a positive impact on the future of nuclear technology. Their deployment would lead to incineration of hazardous nuclides and could potentially provide an additional fuel supply. The U.S. DOE NERI project assesses the possibility, advantages and limitations of using minor actinides as a fuel component. The analysis considers and compares the capabilities of actinide-fueled VHTRs with pebble-bed and prismatic cores to approach reactor-lifetime-long operation without intermediate refueling. A hybrid Monte Carlo-deterministic methodology has been adopted for coupled neutronics-thermal-hydraulics design studies of VHTRs. Within the computational scheme, the key technical issues are being addressed and resolved by implementing efficient automated modeling procedures and sequences, combining Monte Carlo and deterministic approaches, developing and applying realistic 3D coupled neutronics-thermal-hydraulics models with multi-heterogeneity treatments, developing and performing experimental/computational benchmarks for model verification and validation, and analyzing uncertainty effects and error propagation. This paper introduces the suggested modeling approach, discusses benchmark results and presents a preliminary analysis of actinide-fueled VHTRs. The up-to-date results presented are in agreement with the available experimental data. Studies of VHTRs with minor actinides suggest promising performance. (authors)
Massively parallel Monte Carlo for many-particle simulations on GPUs
Anderson, Joshua A.; Jankowski, Eric; Grubb, Thomas L.; Engel, Michael; Glotzer, Sharon C.
2013-12-01
Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
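The serial baseline that the GPU method is checked against, Metropolis Monte Carlo of hard disks, accepts any trial displacement that creates no overlap. A minimal periodic-box sketch (box size, density, and step size are arbitrary choices, not those of the paper):

```python
import random

random.seed(4)
L = 10.0   # periodic box side length
d = 1.0    # hard-disk diameter
# start from a non-overlapping 8 x 8 square lattice (spacing 1.25 > d)
disks = [((i + 0.5) * 1.25, (j + 0.5) * 1.25)
         for i in range(8) for j in range(8)]

def overlaps(p, k):
    """True if disk k placed at p would overlap any other disk
    (minimum-image convention in the periodic box)."""
    for m, q in enumerate(disks):
        if m == k:
            continue
        dx = (p[0] - q[0] + L / 2) % L - L / 2
        dy = (p[1] - q[1] + L / 2) % L - L / 2
        if dx * dx + dy * dy < d * d:
            return True
    return False

accepted = 0
for _ in range(5000):
    k = random.randrange(len(disks))
    x, y = disks[k]
    trial = ((x + random.uniform(-0.1, 0.1)) % L,
             (y + random.uniform(-0.1, 0.1)) % L)
    if not overlaps(trial, k):   # hard disks: accept iff no overlap
        disks[k] = trial
        accepted += 1
```

The parallel method's difficulty is visible here: moving two nearby disks simultaneously can violate detailed balance, which is why the paper's checkerboard-style decomposition and its verification against serial runs matter.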
Monte Carlo analysis of neutron slowing-down-time spectrometer for fast reactor spent fuel assay
Chen, Jianwei; Lineberry, Michael
2007-07-01
Using the neutron slowing-down-time method as a nondestructive assay tool to improve input material accountancy for fast reactor spent fuel reprocessing is under investigation at Idaho State University. Monte Carlo analyses were performed to simulate the neutron slowing-down process in different slowing-down spectrometers, namely lead and graphite, and to determine their main parameters. The {sup 238}U threshold fission chamber response was simulated in the Monte Carlo model to represent the spent fuel assay signals; the signature (fission/time) signals of {sup 235}U, {sup 239}Pu, and {sup 241}Pu were simulated as a convolution of fission cross sections and the neutron flux inside the spent fuel. The {sup 238}U detector signals were analyzed using a linear regression model based on the signatures of fissile materials in the spent fuel to determine the weight fractions of fissile materials in Advanced Burner Test Reactor spent fuel. The preliminary results show that even though the lead spectrometer showed better assay performance than graphite, the graphite spectrometer could accurately determine the weight fractions of {sup 239}Pu and {sup 241}Pu provided a proper assay energy range is chosen. (authors)
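The linear-regression unfolding described can be sketched as least squares on simulated signals: the detector response is modeled as a linear combination of per-nuclide signature templates, and the weight fractions are recovered with `np.linalg.lstsq`. The templates and fractions below are made-up numbers, not ABTR data.

```python
import numpy as np

# Hypothetical signature (fission/time) templates for three fissile
# nuclides, sampled at six slowing-down-time bins (illustrative values)
S = np.array([
    [5.0, 4.0, 2.0],
    [3.0, 4.5, 2.5],
    [2.0, 3.0, 3.5],
    [1.0, 1.5, 2.0],
    [0.5, 0.7, 1.0],
    [0.2, 0.3, 0.4],
])                                      # columns: U-235, Pu-239, Pu-241

w_true = np.array([0.10, 0.70, 0.20])   # assumed weight fractions
signal = S @ w_true                     # simulated U-238 chamber response

# Least-squares unfolding of the weight fractions from the signal
w_hat, *_ = np.linalg.lstsq(S, signal, rcond=None)
```

In practice the measured signal carries counting noise, so the recovered fractions come with regression uncertainties, and the choice of assay energy (time) range determines how well-conditioned the template matrix is.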
MCViNE - An object-oriented Monte Carlo neutron ray tracing simulation package
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; Abernathy, Douglas L.; Lumsden, Mark D.; Winn, Barry L.; Aczel, Adam A.; Aivazis, Michael; Fultz, Brent
2015-11-28
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example we used object oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features together in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages which facilitates porting instrument models from those codes. Furthermore it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.
Energy density matrix formalism for interacting quantum systems: a quantum Monte Carlo study
Krogel, Jaron T; Kim, Jeongnim; Reboredo, Fernando A
2014-01-01
We develop an energy density matrix that parallels the one-body reduced density matrix (1RDM) for many-body quantum systems. Just as the density matrix gives access to the number density and occupation numbers, the energy density matrix yields the energy density and orbital occupation energies. The eigenvectors of the matrix provide a natural orbital partitioning of the energy density while the eigenvalues comprise a single particle energy spectrum obeying a total energy sum rule. For mean-field systems the energy density matrix recovers the exact spectrum. When correlation becomes important, the occupation energies resemble quasiparticle energies in some respects. We explore the occupation energy spectrum for the finite 3D homogeneous electron gas in the metallic regime and an isolated oxygen atom with ground state quantum Monte Carlo techniques implemented in the QMCPACK simulation code. The occupation energy spectrum for the homogeneous electron gas can be described by an effective mass below the Fermi level. Above the Fermi level evanescent behavior in the occupation energies is observed in similar fashion to the occupation numbers of the 1RDM. A direct comparison with total energy differences demonstrates a quantitative connection between the occupation energies and electron addition and removal energies for the electron gas. For the oxygen atom, the association between the ground state occupation energies and particle addition and removal energies becomes only qualitative. The energy density matrix provides a new avenue for describing energetics with quantum Monte Carlo methods which have traditionally been limited to total energies.
Ibrahim, Ahmad M; Wilson, P.; Sawan, M.; Mosher, Scott W; Peplow, Douglas E.; Grove, Robert E
2013-01-01
Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class supercomputer.
MCViNE - An object oriented Monte Carlo neutron ray tracing simulation package
Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; Abernathy, Douglas L.; Lumsden, Mark D.; Winn, Barry L.; Aczel, Adam A.; Aivazis, Michael; Fultz, Brent
2015-11-28
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example, we used object-oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages, which facilitates porting instrument models from those codes. Furthermore, it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.
Berg, John M.; Veirs, D. Kirk; Vaughn, Randolph B.; Cisneros, Michael R.; Smith, Coleman A.
2000-06-01
Standard modeling approaches can produce the most likely values of the formation constants of metal-ligand complexes if a particular set of species containing the metal ion is known or assumed to exist in solution equilibrium with complexing ligands. Identifying the most likely set of species when more than one set is plausible is a more difficult problem to address quantitatively. A Monte Carlo method of data analysis is described that measures the relative abilities of different speciation models to fit optical spectra of open-shell actinide ions. The best model(s) can be identified from among a larger group of models initially judged to be plausible. The method is demonstrated by analyzing the absorption spectra of aqueous Pu(IV) titrated with nitrate ion at constant 2 molal ionic strength in aqueous perchloric acid. The best speciation model supported by the data is shown to include three Pu(IV) species with nitrate coordination numbers 0, 1, and 2. Formation constants are {beta}{sub 1}=3.2{+-}0.5 and {beta}{sub 2}=11.2{+-}1.2, where the uncertainties are 95% confidence limits estimated by propagating raw data uncertainties using Monte Carlo methods. Principal component analysis independently indicates three Pu(IV) complexes in equilibrium. (c) 2000 Society for Applied Spectroscopy.
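The Monte Carlo propagation of raw-data uncertainties described above can be sketched in toy form: perturb synthetic titration data within an assumed noise level, refit a single formation constant each time, and read a 95% confidence interval off the sorted estimates. The binding model, ligand concentrations, and noise level below are hypothetical illustrations, not values or code from the study; only the reported β₁ ≈ 3.2 is borrowed as the toy "truth".

```python
import random

random.seed(1)

def model(x, beta):
    # fraction of metal complexed for a hypothetical 1:1 complex with formation constant beta
    return beta * x / (1.0 + beta * x)

beta_true = 3.2                           # borrowed from the abstract as a toy truth value
xs = [0.05 * i for i in range(1, 11)]     # hypothetical ligand concentrations
y0 = [model(x, beta_true) for x in xs]    # noise-free synthetic "measurement"
sigma = 0.01                              # assumed raw-data uncertainty

def fit_beta(ys):
    # crude one-parameter grid-search least-squares fit
    grid = [1.5 + 0.02 * k for k in range(300)]
    return min(grid, key=lambda b: sum((model(x, b) - y) ** 2 for x, y in zip(xs, ys)))

# Monte Carlo propagation: perturb the data within its uncertainty, refit, collect estimates
estimates = sorted(
    fit_beta([y + random.gauss(0.0, sigma) for y in y0]) for _ in range(200)
)
lo95, hi95 = estimates[5], estimates[194]  # approximate 95% confidence limits
```

The same resample-and-refit loop generalizes directly to multi-species models, where the spread of each fitted constant gives its confidence limits.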
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Regan, S. P.; Goncharov, V. N.; Igumenshchev, I. V.; Sangster, T. C.; Betti, R.; Bose, A.; Boehly, T. R.; Bonino, M. J.; Campbell, E. M.; Cao, D.; et al
2016-07-07
A record fuel hot-spot pressure Phs = 56±7 Gbar was inferred from x-ray and nuclear diagnostics for direct-drive inertial confinement fusion cryogenic, layered deuterium–tritium implosions on the 60-beam, 30-kJ, 351-nm OMEGA Laser System. When hydrodynamically scaled to the energy of the National Ignition Facility (NIF), these implosions achieved a Lawson parameter ~60% of the value required for ignition [A. Bose et al., Phys. Rev. E (in press)], similar to indirect-drive implosions [R. Betti et al., Phys. Rev. Lett. 114, 255003 (2015)], and nearly half of the direct-drive ignition-threshold pressure. Relative to symmetric, one-dimensional simulations, the inferred hot-spot pressure is ~40% lower. Furthermore, three-dimensional simulations suggest that low-mode distortion of the hot spot seeded by laser-drive nonuniformity and target-positioning error reduces target performance.
Stoker, J; Summers, P; Li, X; Gomez, D; Sahoo, N; Zhu, X; Gillin, M
2014-06-01
Purpose: This study seeks to evaluate the dosimetric effects of intra-fraction motion during spot scanning proton beam therapy as a function of beam-scan orientation and target motion amplitude. Method: Multiple 4DCT scans were collected of a dynamic anthropomorphic phantom mimicking respiration amplitudes of 0 (static), 0.5, 1.0, and 1.5 cm. A spot-scanning treatment plan was developed on the maximum intensity projection image set, using an inverse-planning approach. Dynamic phantom motion was continuous throughout treatment plan delivery. The target nodule was designed to accommodate film and thermoluminescent dosimeters (TLD). Film and TLDs were uniquely labeled by location within the target. The phantom was localized on the treatment table using the clinically available orthogonal kV on-board imaging device. Film inserts provided data for dose uniformity; TLDs provided a 3% precision estimate of absolute dose. An in-house script was developed to modify the delivery order of the beam spots, to orient the scanning direction parallel or perpendicular to target motion. TLD detector characterization and analysis was performed by the Imaging and Radiation Oncology Core group (IROC)-Houston. Film inserts, exhibiting a spatial resolution of 1 mm, were analyzed to determine dose homogeneity within the radiation target. Results: Parallel scanning and target motion exhibited reduced target dose heterogeneity relative to the perpendicular scanning orientation. The average percent deviation in absolute dose for the motion deliveries relative to the static delivery was 4.9±1.1% for parallel scanning, and 11.7±3.5% (p<<0.05) for perpendicularly oriented scanning. Individual delivery dose deviations were not necessarily correlated to amplitude of motion for either scan orientation. Conclusions: Results demonstrate a quantifiable difference in dose heterogeneity as a function of scan orientation, more so than target amplitude. Comparison to the analyzed planar dose of a single
Nakano, Y.; Yamazaki, A.; Watanabe, K.; Uritani, A.; Ogawa, K.; Isobe, M.
2014-11-15
Neutron monitoring is important for managing the safety of fusion experiment facilities because neutrons are generated in fusion reactions. Monte Carlo simulations play an important role in evaluating the influence of neutron scattering from various structures and in correcting differences between deuterium plasma experiments and in situ calibration experiments. We evaluated these influences based on differences between the two experiments at the Large Helical Device using the Monte Carlo simulation code MCNP5. The difference between the two experiments in the absolute detection efficiency of the fission chamber between O-ports is estimated to be the largest of all monitors. We additionally evaluated correction coefficients for some neutron monitors.
Pilati, S.; Giorgini, S.; Sakkos, K.; Boronat, J.; Casulleras, J.
2006-10-15
By using exact path-integral Monte Carlo methods we calculate the equation of state of an interacting Bose gas as a function of temperature both below and above the superfluid transition. The universal character of the equation of state for dilute systems and low temperatures is investigated by modeling the interatomic interactions using different repulsive potentials corresponding to the same s-wave scattering length. The results obtained for the energy and the pressure are compared to the virial expansion for temperatures larger than the critical temperature. At very low temperatures we find agreement with the ground-state energy calculated using the diffusion Monte Carlo method.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Mayers, Matthew Z.; Berkelbach, Timothy C.; Hybertsen, Mark S.; Reichman, David R.
2015-10-09
Ground-state diffusion Monte Carlo is used to investigate the binding energies and intercarrier radial probability distributions of excitons, trions, and biexcitons in a variety of two-dimensional transition-metal dichalcogenide materials. We compare these results to approximate variational calculations, as well as to analogous Monte Carlo calculations performed with simplified carrier interaction potentials. Our results highlight the successes and failures of approximate approaches as well as the physical features that determine the stability of small carrier complexes in monolayer transition-metal dichalcogenide materials. In conclusion, we discuss points of agreement and disagreement with recent experiments.
SHIFT: A New Monte Carlo Package (CASL-U-2015-0170-000-a)
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Johnson, Seth R.; Pandya, Tara M.; Davidson, Gregory G.; Evans, Thomas M.; Hamilton, Steven P.; Celik, Cihangir; Isotalo, Aarno; Peretti, Chris (Oak Ridge National Laboratory)
2015-04-19
White, Glen; Seryi, Andrei; Woodley, Mark; Bai, Sha; Bambade, Philip; Renier, Yves; Bolzon, Benoit; Kamiya, Yoshio; Komamiya, Sachio; Oroku, Masahiro; Yamaguchi, Yohei; Yamanaka, Takashi; Kubo, Kiyoshi; Kuroda, Shigeru; Okugi, Toshiyuki; Tauchi, Toshiaki; Marin, Eduardo (CERN)
2012-07-06
The primary aim of the ATF2 research accelerator is to test a scaled version of the final focus optics planned for use in next-generation linear lepton colliders. ATF2 consists of a 1.3 GeV linac, a damping ring providing low-emittance electron beams (<12 pm in the vertical plane), an extraction line and final focus optics. The design details of the final focus optics and their implementation at ATF2 are presented elsewhere. The ATF2 accelerator is currently being commissioned, with a staged approach to achieving the design IP spot size. It is expected that as we implement more demanding optics and reduce the vertical beta function at the IP, the tuning will become more difficult and take longer. We present here a description of the implementation of the tuning procedures and describe operational experience and performance.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Orth, Charles D.
2016-02-23
We suggest that a potentially dominant but previously neglected source of pusher-fuel and hot-spot “mix” may have been the main degradation mechanism for fusion energy yields of modern inertial confinement fusion (ICF) capsules designed and fielded to achieve high yields, rather than hydrodynamic instabilities. This potentially dominant mix source is the spallation of small chunks or “grains” of pusher material into the fuel regions whenever (1) the solid material adjacent to the fuel changes its phase by nucleation, and (2) this solid material spalls under shock loading and sudden decompression. We describe this mix mechanism, support it with simulations and experimental evidence, and explain how to eliminate it and thereby allow higher yields for ICF capsules and possibly ignition at the National Ignition Facility.
Monte Carlo Code System for High-Energy Radiation Transport Calculations.
Energy Science and Technology Software Center (OSTI)
2000-02-16
Version 00 HERMES-KFA consists of a set of Monte Carlo codes used to simulate particle radiation and interaction with matter. The main codes are HETC, MORSE, and EGS. They are supported by a common geometry package, common random-number routines, a command interpreter, and auxiliary codes like NDEM, which is used to generate a gamma-ray source from nuclear de-excitation after spallation processes. The codes have been modified so that any particle history falling outside the domain of the physical theory of one program can be submitted to another program in the suite to complete the work. Also, response data can be submitted by each program, to be collected and combined by a statistics package included within the command interpreter.
Size and habit evolution of PETN crystals - a lattice Monte Carlo study
Zepeda-Ruiz, L A; Maiti, A; Gee, R; Gilmer, G H; Weeks, B
2006-02-28
Starting from an accurate inter-atomic potential, we develop a simple scheme for generating an "on-lattice" molecular potential of short range, which is then incorporated into a lattice Monte Carlo code for simulating size and shape evolution of nanocrystallites. As a specific example, we test this procedure on the morphological evolution of a molecular crystal of interest to us, Pentaerythritol Tetranitrate (PETN), and obtain realistic faceted structures in excellent agreement with experimental morphologies. We investigate several interesting effects, including the evolution of the initial shape of a "seed" to an equilibrium configuration, and the variation of growth morphology as a function of the rate of particle addition relative to diffusion.
Simulation of atomic diffusion in the Fcc NiAl system: A kinetic Monte Carlo study
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Alfonso, Dominic R.; Tafen, De Nyago
2015-04-28
The atomic diffusion in fcc NiAl binary alloys was studied by kinetic Monte Carlo simulation. The environment-dependent hopping barriers were computed using a pair interaction model whose parameters were fitted to relevant data derived from electronic structure calculations. Long-time diffusivities were calculated and the effect of composition change on the tracer diffusion coefficients was analyzed. These results indicate that this variation has a noticeable impact on the atomic diffusivities. A reduction in the mobility of both Ni and Al is demonstrated with increasing Al content. Examination of the pair interaction between atoms was carried out for the purpose of understanding the predicted trends.
Direct simulation Monte Carlo investigation of the Richtmyer-Meshkov instability.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Gallis, Michail A.; Koehler, Timothy P.; Torczynski, John R.; Plimpton, Steven J.
2015-08-14
The Richtmyer-Meshkov instability (RMI) is investigated using the Direct Simulation Monte Carlo (DSMC) method of molecular gas dynamics. Due to the inherent statistical noise and the significant computational requirements, DSMC is rarely applied to hydrodynamic flows. Here, DSMC RMI simulations are performed to quantify the shock-driven growth of a single-mode perturbation on the interface between two atmospheric-pressure monatomic gases prior to re-shocking, as a function of the Atwood and Mach numbers. The DSMC results qualitatively reproduce all features of the RMI and are in reasonable quantitative agreement with existing theoretical and empirical models. The DSMC simulations indicate that there is a universal behavior that RMI growth follows, consistent with previous work in this field.
A bottom collider vertex detector design, Monte-Carlo simulation and analysis package
Lebrun, P.
1990-10-01
A detailed simulation of the BCD vertex detector is underway. Specifications and global design issues are briefly reviewed. The BCD design based on double-sided strip detectors is described in more detail. The GEANT3-based Monte-Carlo program and the analysis package used to estimate detector performance are discussed in detail. The current status of the expected resolution and signal-to-noise ratio for the "golden" CP-violating mode B_d → π⁺π⁻ is presented. These calculations have been done at FNAL energy (√s = 2.0 TeV). Emphasis is placed on design issues, analysis techniques and related software rather than physics potential. 20 refs., 46 figs.
Use of decision tree analysis and Monte Carlo simulation for downhole material selection
Cheldi, T.; Cavassi, P.; Lazzari, L.; Pezzotta, L.
1997-08-01
The paper describes how corrosion engineers can use decision tree analysis to evaluate and select the best materials for the completion of a new oil field characterized by high CO₂ and H₂S content. The method is based on decision tree analysis and Monte Carlo simulation to obtain the probability distributions of relevant events (for instance, number of workovers, corrosion inhibitor efficiency, coating damage rate). The corrosion study leads to four different technical solutions with different risk and reliability: carbon steel with corrosion allowance and inhibitor injection, coated tubing, and two corrosion-resistant alloys, a superduplex stainless steel and a superaustenitic stainless steel. The cost comparison has been carried out using the Expected Monetary Value criterion applied to the Life Cycle Cost evaluation. The paper presents and discusses the decision tree and the results of the simulations.
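The Expected Monetary Value comparison described above can be sketched in a minimal Monte Carlo form: draw the uncertain events (inhibitor performance, workover count) from assumed distributions, accumulate the life-cycle cost of each option, and compare the mean costs. All costs, probabilities, and the 10-year horizon below are hypothetical placeholders, not figures from the paper.

```python
import random
import statistics

random.seed(7)

# hypothetical inputs, not values from the study
WORKOVER_COST = 2.0e6                    # cost per corrosion-driven workover (USD)
CS_CAPEX, CRA_CAPEX = 1.0e6, 6.0e6       # carbon-steel vs corrosion-resistant-alloy tubing

def carbon_steel_lcc():
    # inhibitor efficiency and yearly workover probability drawn from assumed distributions
    inhibitor_ok = random.random() < 0.8
    p_workover_per_year = 0.1 if inhibitor_ok else 0.4
    n_workovers = sum(random.random() < p_workover_per_year for _ in range(10))
    return CS_CAPEX + n_workovers * WORKOVER_COST

N = 20000
emv_cs = statistics.mean(carbon_steel_lcc() for _ in range(N))  # Expected Monetary Value
emv_cra = CRA_CAPEX                      # CRA assumed workover-free over the field life
best = "carbon steel" if emv_cs < emv_cra else "CRA"
```

With these invented numbers the inhibited carbon-steel option wins on EMV; shifting the inhibitor-availability probability or workover cost can flip the decision, which is exactly what the decision tree makes explicit.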
Markov Chain Monte Carlo Sampling Methods for 1D Seismic and EM Data Inversion
Energy Science and Technology Software Center (OSTI)
2008-09-22
This software provides several Markov chain Monte Carlo sampling methods for the Bayesian model developed for inverting 1D marine seismic and controlled source electromagnetic (CSEM) data. The current software can be used for individual inversion of seismic AVO and CSEM data and for joint inversion of both seismic and EM data sets. The structure of the software is very general and flexible, and it allows users to incorporate their own forward simulation codes and rock physics model codes easily into this software. Although the software was developed using the C and C++ computer languages, the user-supplied codes can be written in C, C++, or various versions of Fortran. The software provides clear interfaces for users to plug in their own codes. The output of this software is in a format that the R free software CODA can directly read to build MCMC objects.
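The core of any such Bayesian inversion is a sampler that explores the posterior of model parameters given data and a forward simulator. A minimal Metropolis random-walk sketch for a one-parameter toy inversion is shown below; the forward model, prior bounds, noise level, and step size are all invented for illustration and have nothing to do with the actual seismic or CSEM physics in the package.

```python
import math
import random
import statistics

random.seed(3)

# toy forward model and synthetic "observed" data
m_true, sigma = 2.0, 0.1
depths = [1.0, 2.0, 3.0, 4.0]
fwd = lambda m: [m * math.log(1.0 + z) for z in depths]  # hypothetical forward simulator
data = [d + random.gauss(0.0, sigma) for d in fwd(m_true)]

def log_post(m):
    # uniform prior on (0, 10) times a Gaussian likelihood
    if not (0.0 < m < 10.0):
        return -math.inf
    misfit = sum((p - o) ** 2 for p, o in zip(fwd(m), data))
    return -misfit / (2.0 * sigma ** 2)

# Metropolis random-walk sampler
chain, m = [], 5.0
lp = log_post(m)
for _ in range(20000):
    prop = m + random.gauss(0.0, 0.2)
    lpp = log_post(prop)
    if math.log(random.random()) < lpp - lp:   # accept/reject step
        m, lp = prop, lpp
    chain.append(m)

post_mean = statistics.mean(chain[5000:])      # discard burn-in
```

A user-supplied forward code slots in exactly where `fwd` sits here, and the chain output is what a diagnostics package such as CODA would consume.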
Direct Monte Carlo simulation of the chemical equilibrium composition of detonation products
Shaw, M.S.
1993-06-01
A new Monte Carlo simulation method has been developed by the author which gives the equilibrium chemical composition of a molecular fluid directly. The usual NPT ensemble (isothermal-isobaric) is implemented with N being the number of atoms instead of molecules. Changes in chemical composition are treated as correlated spatial moves of atoms. Given the interaction potentials between molecular products, "exact" EOS points including the equilibrium chemical composition can be determined from the simulations. This method is applied to detonation products at conditions in the region near the Chapman-Jouguet state. For the example of NO, it is shown that the CJ detonation velocity can be determined to within a few meters per second. A rather small change in cross potentials is shown to shift the chemical equilibrium and the CJ conditions significantly.
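The NPT (isothermal-isobaric) machinery underlying this method can be illustrated in its simplest possible form: Metropolis volume moves for an ideal gas, where the acceptance rule contains the characteristic -PΔV/kT + N ln(Vnew/Vold) term. This is only the volume-move ingredient under an assumed zero interaction energy; the paper's atom-based composition moves are not reproduced here. For an ideal gas the NPT average volume is (N+1)kT/P, which makes the sketch checkable.

```python
import math
import random

random.seed(8)

# NPT Metropolis volume moves for an ideal gas (interaction energy U = 0)
N, P, kT = 50, 1.0, 1.0
V = 60.0
samples = []
for it in range(200000):
    Vnew = V + random.uniform(-5.0, 5.0)
    if Vnew > 0.0:
        # acceptance argument for a volume move in the NPT ensemble with dU = 0
        arg = -P * (Vnew - V) / kT + N * math.log(Vnew / V)
        if math.log(random.random()) < arg:
            V = Vnew
    if it > 20000:                 # discard equilibration
        samples.append(V)

mean_V = sum(samples) / len(samples)   # should approach (N + 1) * kT / P = 51
```

In the full method, particle displacement and correlated composition-change moves would be interleaved with these volume moves, each with its own Metropolis acceptance rule.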
Iterative Monte Carlo analysis of spin-dependent parton distributions
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; Ethier, Jacob J.; Accardi, Alberto
2016-04-05
We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.
Incorporating Experimental Information in the Total Monte Carlo Methodology Using File Weights
Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rochman, D.; Alhassan, E.; Pomp, S.
2015-01-15
Some criticism has been directed towards the Total Monte Carlo method because experimental information has not been taken into account in a statistically well-founded manner. In this work, a Bayesian calibration method is implemented by assigning weights to the random nuclear data files and the method is illustratively applied to a few applications. In some considered cases, the estimated nuclear data uncertainties are significantly reduced and the central values are significantly shifted. The study suggests that the method can be applied both to estimate uncertainties in a more justified way and in the search for better central values. Some improvements are however necessary; for example, the treatment of outliers and cross-experimental correlations should be more rigorous and random files that are intended to be prior files should be generated.
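The file-weighting idea above is essentially importance weighting: each random nuclear-data file gets a likelihood weight from how well its predicted observable matches experiment, and posterior moments are taken over the weighted ensemble. The sketch below uses an invented prior ensemble and an invented experimental value; it shows only the weighting arithmetic, not the generation of random files.

```python
import math
import random

random.seed(11)

# prior ensemble: predictions of some integral quantity from 2000 random files
# (distribution and values are hypothetical placeholders)
prior = [random.gauss(1.00, 0.05) for _ in range(2000)]

exp_val, exp_unc = 1.03, 0.01       # hypothetical experimental value and uncertainty

# file weight proportional to the likelihood of each file's prediction
weights = [math.exp(-0.5 * ((p - exp_val) / exp_unc) ** 2) for p in prior]
wsum = sum(weights)

# weighted (posterior) moments vs unweighted (prior) moments
post_mean = sum(w * p for w, p in zip(weights, prior)) / wsum
post_var = sum(w * (p - post_mean) ** 2 for w, p in zip(weights, prior)) / wsum
prior_mean = sum(prior) / len(prior)
prior_var = sum((p - prior_mean) ** 2 for p in prior) / len(prior)
```

The weighted ensemble shows exactly the two effects the abstract reports: the variance shrinks and the central value shifts toward the experiment. Outliers and cross-experiment correlations, which the authors flag as open issues, would enter through the form of the weight function.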
Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Velizhanin, Kirill A.; Saxena, Avadh
2015-11-11
One of the most striking features of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is the strong Coulomb interaction between charge carriers, resulting in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which can remain stable at near-room temperatures and contribute significantly to the optical properties of such materials. In this work we have used the path integral Monte Carlo methodology to numerically study properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and benchmarking of theoretical and computational models.
Monte Carlo simulation of elongating metallic nanowires in the presence of surfactants
Gimenez, M. Cecilia; Reinaudi, Luis; Leiva, Ezequiel P. M.
2015-12-28
Nanowires of different metals undergoing elongation were studied by means of canonical Monte Carlo simulations and the embedded atom method representing the interatomic potentials. The presence of a surfactant medium was emulated by the introduction of an additional stabilization energy, represented by a parameter Q. Several values of the parameter Q and temperatures were analyzed. In general, it was observed for all studied metals that, as Q increases, there is a greater elongation before the nanowire breaks. In the case of silver, linear monatomic chains several atoms long formed at intermediate values of Q and low temperatures. Similar observations were made for the case of silver-gold alloys when the medium interacted selectively with Ag.
penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE
Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.
2015-01-01
The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.
Kinetic Monte Carlo simulations of scintillation processes in NaI(Tl)
Kerisit, Sebastien N.; Wang, Zhiguo; Williams, Richard; Grim, Joel; Gao, Fei
2014-04-26
Developing a comprehensive understanding of the processes that govern the scintillation behavior of inorganic scintillators provides a pathway to optimize current scintillators and allows for the science-driven search for new scintillator materials. Recent experimental data on the excitation density dependence of the light yield of inorganic scintillators presents an opportunity to incorporate parameterized interactions between excitations in scintillation models and thus enable more realistic simulations of the nonproportionality of inorganic scintillators. Therefore, a kinetic Monte Carlo (KMC) model of elementary scintillation processes in NaI(Tl) is developed in this work to simulate the kinetics of scintillation for a range of temperatures and Tl concentrations as well as the scintillation efficiency as a function of excitation density. The ability of the KMC model to reproduce available experimental data allows for elucidating the elementary processes that give rise to the kinetics and efficiency of scintillation observed experimentally for a range of conditions.
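The elementary-process bookkeeping in a kinetic Monte Carlo scintillation model reduces, for a single carrier, to the classic residence-time (Gillespie) step: draw an exponential waiting time from the total rate, then pick one competing process with probability proportional to its rate. The process names and rate values below are hypothetical placeholders, not the NaI(Tl) parameters of the study.

```python
import math
import random

random.seed(5)

# competing elementary processes for a single excitation (hypothetical rates, 1/ns)
rates = {"radiative": 0.02, "transfer_to_Tl": 0.3, "nonradiative": 0.05}

def kmc_step():
    # residence-time algorithm: exponential waiting time at the total rate,
    # then event selection with probability rate / total
    total = sum(rates.values())
    dt = -math.log(random.random()) / total
    r, acc = random.random() * total, 0.0
    for name, k in rates.items():
        acc += k
        if r < acc:
            return name, dt
    return name, dt  # guard against floating-point edge at r ~= total

events = [kmc_step()[0] for _ in range(10000)]
frac_tl = events.count("transfer_to_Tl") / len(events)  # ~ 0.3 / 0.37
```

A full scintillation model chains such steps per excitation, makes the rates depend on temperature, Tl concentration, and local excitation density, and tallies photon-emission events versus time to build the light-yield curves.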
Monte Carlo Simulation of Electron Transport in 4H- and 6H-SiC
Sun, C. C.; You, A. H.; Wong, E. K.
2010-07-07
The Monte Carlo (MC) simulation of electron transport properties in the high electric field region of 4H- and 6H-SiC is presented. This MC model includes two non-parabolic conduction bands. Based on the material parameters, the electron scattering rates, including polar optical phonon scattering, optical phonon scattering and acoustic phonon scattering, are evaluated. The electron drift velocity, energy and free flight time are simulated as a function of applied electric field at an impurity concentration of 1×10¹⁸ cm⁻³ at room temperature. The simulated dependence of drift velocity on electric field is in good agreement with experimental results found in the literature. The saturation velocities for both polytypes are close, but the scattering rates are much more pronounced for 6H-SiC. Our simulation model clearly shows the complete electron transport properties of 4H- and 6H-SiC.
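The free-flight times such a transport simulator needs are commonly sampled with the constant-total-rate (self-scattering) trick: pad the energy-dependent scattering rate up to a constant Γ, draw exponential flight durations from Γ, and treat the padding as do-nothing "self-scattering" events. The sketch below shows only this sampling step with an assumed Γ; the band structure and the actual phonon scattering rates of SiC are not modeled.

```python
import math
import random

random.seed(2)

GAMMA = 1.0e14   # assumed constant total scattering rate, 1/s (self-scattering trick)

def free_flight():
    # flight duration between scattering events: exponential with rate GAMMA,
    # via inverse-transform sampling of -ln(u) / GAMMA
    return -math.log(random.random()) / GAMMA

flights = [free_flight() for _ in range(50000)]
mean_flight = sum(flights) / len(flights)   # should approach 1 / GAMMA
```

In a full ensemble MC transport code, each flight is followed by a choice among the real scattering mechanisms (or self-scattering) in proportion to their rates at the electron's current energy, and the field accelerates the electron during the flight.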
Density-functional Monte-Carlo simulation of CuZn order-disorder transition
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Khan, Suffian N.; Eisenbach, Markus
2016-01-25
We perform a Wang-Landau Monte Carlo simulation of a Cu0.5Zn0.5 order-disorder transition using 250 atoms and pairwise atom swaps inside a 5 x 5 x 5 BCC supercell. Each time step uses energies calculated from density functional theory (DFT) via the all-electron Korringa-Kohn-Rostoker method and self-consistent potentials. Here we find CuZn undergoes a transition from a disordered A2 to an ordered B2 structure, as observed in experiment. Our calculated transition temperature is near 870 K, comparing favorably to the known experimental peak at 750 K. We also plot the entropy, temperature, specific heat, and short-range order as a function of internal energy.
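The Wang-Landau algorithm used here estimates the density of states g(E) by a random walk that penalizes already-visited energies: moves are accepted with probability min(1, g(E)/g(E')), and ln g of the current energy is incremented by a shrinking modification factor. A toy version on 8 binary "spins" (where the exact g(E) is the binomial coefficient C(8, E)) shows the mechanics; the DFT energies, swap moves, and flat-histogram checks of the real study are replaced by a crude fixed halving schedule.

```python
import math
import random

random.seed(9)

N = 8                       # 8 binary spins; "energy" E = number of up spins
state = [0] * N
E = 0
lng = [0.0] * (N + 1)       # log density of states, learned on the fly
f = 1.0                     # modification increment for ln g

for sweep in range(200000):
    i = random.randrange(N)
    Enew = E + (1 - 2 * state[i])           # flip spin i: E changes by +-1
    # Wang-Landau acceptance: favor energies with smaller estimated g(E)
    if math.log(random.random()) < lng[E] - lng[Enew]:
        state[i] ^= 1
        E = Enew
    lng[E] += f                              # always update the current energy level
    if sweep % 20000 == 19999:
        f /= 2.0                             # crude schedule instead of histogram-flatness tests

# normalize so g(0) = 1 (exactly one all-down state); g[E] should approach C(8, E)
g = [math.exp(x - lng[0]) for x in lng]
```

Once g(E) is known, the entropy, temperature, and specific heat as functions of internal energy follow directly from S(E) = k ln g(E), which is exactly how the abstract's thermodynamic plots are obtained.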
Silica separation from reinjection brines at Monte Amiata geothermal plants, Italy
Vitolo, S.; Cialdella, M.L. (Dipartimento di Ingegneria Chimica)
1994-06-01
A process for the separation of silica from geothermal reinjection brines is reported, involving coagulation, sedimentation and filtration of the silica. The effectiveness of lime and calcium chloride as coagulating agents has been investigated and the separation operations have been set out. Attention has been focused on Monte Amiata reinjection geothermal brines, whose scaling causes serious problems in the operation and maintenance of reinjection facilities. The study has been conducted using different amounts of added coagulants and at different temperatures, to determine optimal operating conditions. While calcium chloride proved effective as a coagulant of the polymeric silica fraction, lime at high dosages has also proved capable of removing monomeric dissolved silica. Investigation of the behavior of the coagulated brine has revealed the feasibility of separating the coagulated silica by sedimentation and filtration.
Billion-atom synchronous parallel kinetic Monte Carlo simulations of critical 3D Ising systems
Martinez, E.; Monasterio, P.R.; Marian, J.
2011-02-20
An extension of the synchronous parallel kinetic Monte Carlo (spkMC) algorithm developed by Martinez et al. [J. Comp. Phys. 227 (2008) 3804] to discrete lattices is presented. The method solves the master equation synchronously by recourse to null events that keep all processors' time clocks current in a global sense. Boundary conflicts are resolved by adopting a chessboard decomposition into non-interacting sublattices. We find that the bias introduced by the spatial correlations attendant to the sublattice decomposition is within the standard deviation of serial calculations, which confirms the statistical validity of our algorithm. We have analyzed the parallel efficiency of spkMC and find that it scales consistently with problem size and sublattice partition. We apply the method to the calculation of scale-dependent critical exponents in billion-atom 3D Ising systems, with very good agreement with state-of-the-art multispin simulations.
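The null-event device that keeps all processors' clocks synchronous can be shown in a stripped-down serial form: every domain draws waiting times from a common rate cap R_max, and a drawn event is "real" with probability r_local/R_max, otherwise a null event that only advances the clock. The two domain rates below are arbitrary illustrative numbers; the chessboard sublattice decomposition and boundary-conflict handling of the actual algorithm are not reproduced.

```python
import math
import random

random.seed(4)

# two domains with different local total rates; a common cap keeps clocks in lockstep
rates = [0.4, 1.0]
R_MAX = max(rates)

def step(domain_rate):
    # every domain draws the SAME waiting-time distribution (rate R_MAX);
    # the event is real with probability domain_rate / R_MAX, otherwise null
    dt = -math.log(random.random()) / R_MAX
    is_real = random.random() < domain_rate / R_MAX
    return dt, is_real

n_steps = 50000
real_counts = [sum(step(r)[1] for _ in range(n_steps)) for r in rates]
frac_real_slow = real_counts[0] / n_steps   # ~ 0.4 / 1.0 for the slower domain
```

Because every domain advances by identically distributed time increments, no processor ever runs ahead of another; the price is the wasted null events on low-rate domains, which is the main efficiency trade-off the parallel-scaling analysis quantifies.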
Report on International Collaboration Involving the FE Heater and HG-A Tests at Mont Terri
Houseworth, Jim; Rutqvist, Jonny; Asahina, Daisuke; Chen, Fei; Vilarrasa, Victor; Liu, Hui-Hai; Birkholzer, Jens
2013-11-06
Nuclear waste programs outside of the US have focused on different host rock types for geological disposal of high-level radioactive waste. Several countries, including France, Switzerland, Belgium, and Japan, are exploring the possibility of waste disposal in shale and other clay-rich rock that falls within the general classification of argillaceous rock. This rock type is also of interest for the US program because the US has extensive sedimentary basins containing large deposits of argillaceous rock. LBNL, as part of the DOE-NE Used Fuel Disposition Campaign, is collaborating on some of the underground research laboratory (URL) activities at the Mont Terri URL near Saint-Ursanne, Switzerland. The Mont Terri project, which began in 1995, has developed a URL at a depth of about 300 m in a stiff clay formation called the Opalinus Clay. Our current collaboration efforts include two test modeling activities, for the FE heater test and the HG-A leak-off test. This report documents results concerning our current modeling of these field tests. The overall objectives of these activities include an improved understanding of, and advanced modeling capabilities for, EDZ evolution in clay repositories and the associated coupled processes, and development of a technical basis for the maximum allowable temperature for a clay repository. The R&D activities documented in this report are part of the work package of natural system evaluation and tool development that directly supports the following Used Fuel Disposition Campaign (UFDC) objectives:
- Develop a fundamental understanding of disposal-system performance in a range of environments for potential wastes that could arise from future nuclear-fuel-cycle alternatives through theory, simulation, testing, and experimentation.
- Develop a computational modeling capability for the performance of storage and disposal options for a range of fuel-cycle alternatives, evolving from generic models to more robust models of performance
Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method
Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin
2015-12-31
The predictor-corrector quasi-static (PCQS) method is applied to Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used, and a linear approximation of the fission source distribution during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, results on a continuous-energy problem are presented.
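As an illustrative sketch of the amplitude-level time integration that quasi-static schemes build on (not the paper's Monte Carlo shape solver), the toy below integrates one-delayed-group point kinetics over many fine steps; all parameter values are assumed for illustration only.

```python
# Toy sketch of the amplitude equations underlying a quasi-static scheme:
# point kinetics with one delayed-neutron group, integrated with fine
# explicit-Euler steps. Parameters (beta, lam, Lam) are illustrative,
# not taken from the paper.

def point_kinetics(rho, n0=1.0, beta=0.0065, lam=0.08, Lam=1e-4,
                   t_end=1.0, dt=1e-5):
    """Integrate dn/dt = ((rho-beta)/Lam) n + lam C,
                 dC/dt = (beta/Lam) n - lam C."""
    n = n0
    C = beta * n0 / (lam * Lam)   # equilibrium precursor concentration
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lam) * n + lam * C
        dC = (beta / Lam) * n - lam * C
        n += dt * dn
        C += dt * dC
    return n

# Zero reactivity with equilibrium precursors: amplitude is stationary.
print(point_kinetics(rho=0.0))
# Small positive reactivity: amplitude grows above its initial value.
print(point_kinetics(rho=0.001) > 1.0)
```

In a real PCQS calculation the shape function would be re-solved by MC at macro-step boundaries and the kinetics parameters updated from it; here only the amplitude integration is shown.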
Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.
2014-10-01
We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid in which van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and indicate that the high-pressure melt curve is well described by Lindemann behavior up to at least 80 GPa, a finding in stark contrast to the DAC data.
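Thermodynamic integration, the technique named in the abstract, can be sketched on a toy problem with a known answer: switching between two 1-D harmonic potentials, where the free-energy difference is analytically (kT/2) ln(k1/k0). The potentials and parameters below are stand-ins for illustration, not the paper's DFT/QMC energy surfaces.

```python
import math, random

# Thermodynamic integration between U0 = k0 x^2/2 and U1 = k1 x^2/2:
# dF = integral over lambda of <dU/dlambda>_lambda, with
# U_lambda = (1-lambda) U0 + lambda U1. Exact answer: kT/2 * ln(k1/k0).

def delta_F(k0=1.0, k1=4.0, kT=1.0, n_lambda=21, n_samples=20000, seed=1):
    rng = random.Random(seed)
    lambdas = [i / (n_lambda - 1) for i in range(n_lambda)]
    means = []
    for lam in lambdas:
        k_lam = (1 - lam) * k0 + lam * k1
        sigma = math.sqrt(kT / k_lam)       # Boltzmann sampling of U_lambda
        acc = 0.0
        for _ in range(n_samples):
            x = rng.gauss(0.0, sigma)
            acc += 0.5 * (k1 - k0) * x * x  # dU/dlambda = U1 - U0
        means.append(acc / n_samples)
    dF = 0.0                                # trapezoidal quadrature in lambda
    for a, b, m0, m1 in zip(lambdas, lambdas[1:], means, means[1:]):
        dF += 0.5 * (b - a) * (m0 + m1)
    return dF

print(delta_F())   # close to 0.5 * ln(4) = 0.693
```

The paper's version replaces the analytic sampling step with DFT and QMC evaluations of the many-body energy; the quadrature structure is the same.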
Clay, Raymond C.; Mcminis, Jeremy; McMahon, Jeffrey M.; Pierleoni, Carlo; Ceperley, David M.; Morales, Miguel A.
2014-05-01
The ab initio phase diagram of dense hydrogen is very sensitive to errors in the treatment of electronic correlation. Recently, it has been shown that the choice of the density functional has a large effect on the predicted location of both the liquid-liquid phase transition and the solid insulator-to-metal transition in dense hydrogen. To identify the most accurate functional for dense hydrogen applications, we systematically benchmark some of the most commonly used functionals using quantum Monte Carlo. By considering several measures of functional accuracy, we conclude that the van der Waals and hybrid functionals significantly outperform local density approximation and Perdew-Burke-Ernzerhof. We support these conclusions by analyzing the impact of functional choice on structural optimization in the molecular solid, and on the location of the liquid-liquid phase transition.
Ab initio molecular dynamics simulation of liquid water by quantum Monte Carlo
Zen, Andrea; Luo, Ye; Mazzola, Guglielmo; Sorella, Sandro; Guidoni, Leonardo
2015-04-14
Although liquid water is ubiquitous in chemical reactions at the roots of life and climate on Earth, the prediction of its properties by high-level ab initio molecular dynamics simulations still represents a formidable task for quantum chemistry. In this article, we present a room-temperature simulation of liquid water based on the potential energy surface obtained from a many-body wave function through quantum Monte Carlo (QMC) methods. The simulated properties are in good agreement with recent neutron scattering and X-ray experiments, particularly concerning the position of the oxygen-oxygen peak in the radial distribution function, at variance with previous density functional theory attempts. Given the excellent performance of QMC on large-scale supercomputers, this work opens new perspectives for predictive and reliable ab initio simulations of complex chemical systems.
Direct simulation Monte Carlo investigation of the Richtmyer-Meshkov instability.
Gallis, Michail A.; Koehler, Timothy P.; Torczynski, John R.; Plimpton, Steven J.
2015-08-14
The Richtmyer-Meshkov instability (RMI) is investigated using the Direct Simulation Monte Carlo (DSMC) method of molecular gas dynamics. Due to the inherent statistical noise and the significant computational requirements, DSMC is hardly ever applied to hydrodynamic flows. Here, DSMC RMI simulations are performed to quantify the shock-driven growth of a single-mode perturbation on the interface between two atmospheric-pressure monatomic gases, prior to re-shocking, as a function of the Atwood and Mach numbers. The DSMC results qualitatively reproduce all features of the RMI and are in reasonable quantitative agreement with existing theoretical and empirical models. The DSMC simulations indicate that RMI growth follows a universal behavior, consistent with previous work in this field.
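One of the classical models such DSMC results are compared against is Richtmyer's impulsive model for the initial linear growth of the perturbation, da/dt = k Δv A⁺ a₀⁺. The sketch below evaluates it for illustrative inputs; none of the numbers are taken from the paper.

```python
import math

# Richtmyer's impulsive model for pre-reshock RMI growth:
#   da/dt = k * dv * A_plus * a0_plus,
# with k the perturbation wavenumber, dv the interface velocity jump,
# A_plus the post-shock Atwood number, a0_plus the post-shock amplitude.
# All numerical inputs are illustrative.

def rmi_growth_rate(wavelength, dv, atwood_post, a0_post):
    k = 2.0 * math.pi / wavelength      # wavenumber of the single mode
    return k * dv * atwood_post * a0_post

def amplitude(t, a0_post, rate):
    return a0_post + rate * t           # linear growth before reshock

rate = rmi_growth_rate(wavelength=1.0e-3, dv=100.0,
                       atwood_post=0.5, a0_post=1.0e-5)
print(rate)                              # growth rate in m/s
print(amplitude(1.0e-5, 1.0e-5, rate))   # amplitude shortly after shock
```

The DSMC simulations in the abstract resolve this growth from molecular collisions directly, rather than assuming the impulsive form.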
Code System to Perform Monte Carlo Simulation of Electron Gamma-Ray Showers in Arbitrary Materials.
Energy Science and Technology Software Center (OSTI)
2002-10-15
Version 00. PENELOPE performs Monte Carlo simulation of electron-photon showers in arbitrary materials. Initially, it was devised to simulate the PENetration and Energy LOss of Positrons and Electrons in matter; photons were introduced later. The adopted scattering model gives a reliable description of radiation transport in the energy range from a few hundred eV to about 1 GeV. PENELOPE generates random electron-photon showers in complex material structures consisting of any number of distinct homogeneous regions (bodies) with different compositions. The Penelope Forum list archives and other information can be accessed at http://www.nea.fr/lists/penelope.html. PENELOPE-MPI extends the capabilities of PENELOPE-2001 (RSICC C00682MNYCP02; NEA-1525/05) by providing for the use of MPI-type parallel drivers and extends the original version's ability to read different types of input data sets, such as voxel data. The motivation is to increase the efficiency of Monte Carlo simulations for medical applications. The physics of the calculations has not been changed, and the original description of PENELOPE-2001 (which follows) is still valid. PENELOPE-2001 contains substantial changes and improvements over the previous versions, 1996 and 2000. As for the physics, the model for electron/positron elastic scattering has been revised. Bremsstrahlung emission is now simulated using partial-wave data instead of approximate analytical formulae. Photoelectric absorption in K and L shells is described from the corresponding partial cross sections. Fluorescence radiation from vacancies in K and L shells is followed. Refinements were also introduced in electron/positron transport mechanics, mostly to account for the energy dependence of the mean free paths for hard events. The simulation routines were re-programmed in a more structured way, and new example MAIN programs were written with more flexible input and expanded output.
SU-E-T-578: MCEBRT, A Monte Carlo Code for External Beam Treatment Plan Verifications
Chibani, O; Ma, C; Eldib, A
2014-06-01
Purpose: To present a new Monte Carlo code (MCEBRT) for patient-specific dose calculations in external beam radiotherapy. The code's MLC model is benchmarked, and real patient plans are re-calculated using MCEBRT and compared with a commercial TPS. Methods: MCEBRT is based on the GEPTS system (Med. Phys. 29 (2002) 835-846). Phase space data generated for Varian linac photon beams (6-15 MV) are used as the source term. MCEBRT uses a realistic MLC model (tongue and groove, rounded ends). Patient CT and DICOM RT files are used to generate a 3D patient phantom and simulate the treatment configuration (gantry, collimator, and couch angles; jaw positions; MLC sequences; MUs). MCEBRT dose distributions and DVHs are compared with those from the TPS in absolute terms (Gy). Results: Calculations based on the developed MLC model closely match transmission measurements (pin-point ionization chamber at selected positions and film for lateral dose profiles). See Fig. 1. Dose calculations for two clinical cases (whole-brain irradiation with opposed beams and a lung case with eight fields) were carried out, and the outcomes were compared with the Eclipse AAA algorithm. Good agreement is observed for the brain case (Figs. 2-3) except at the surface, where the MCEBRT dose can be higher by 20%. This is due to better modeling of electron contamination by MCEBRT. For the lung case, overall good agreement (91% gamma-index passing rate with a 3%/3 mm DTA criterion) is observed (Fig. 4), but the dose in lung can be over-estimated by up to 10% by AAA (Fig. 5). CTV and PTV DVHs from the TPS and MCEBRT are nevertheless close (Fig. 6). Conclusion: A new Monte Carlo code was developed for plan verification. Contrary to phantom-based QA measurements, MCEBRT simulates the exact patient geometry and tissue composition. MCEBRT can be used as an extra verification layer for plans where surface dose and tissue heterogeneity are an issue.
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
Arampatzis, Georgios; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 ; Katsoulakis, Markos A.
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems, and in particular for lattice kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by constructing a strongly correlated (coupled) stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled continuous-time Markov chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by minimizing the corresponding variance functional. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC, such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in supplementary MATLAB source code.
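The baseline the abstract compares against, Common Random Number (CRN) coupling, can be demonstrated on a toy birth-death process: estimate the sensitivity of E[X(T)] to the birth rate by finite differences, once with independent runs and once with a shared random stream. The process and parameters below are illustrative, not from the paper.

```python
import random

# Finite-difference sensitivity d/db of E[X(T)] for a birth-death chain
# (birth rate b, death rate d*X), comparing independent samples with
# Common Random Number (CRN) coupling -- the baseline approach that the
# paper's goal-oriented coupling improves upon.

def ssa(b, d, T, rng):
    """Gillespie (SSA) simulation; returns the state X at time T."""
    t, x = 0.0, 0
    while True:
        total = b + d * x
        t += rng.expovariate(total)
        if t > T:
            return x
        if rng.random() < b / total:
            x += 1            # birth event
        else:
            x -= 1            # death event

def fd_estimates(db=0.1, b=5.0, d=1.0, T=5.0, reps=400):
    indep, crn = [], []
    for i in range(reps):
        r1, r2 = random.Random(2 * i), random.Random(2 * i + 1)
        indep.append((ssa(b + db, d, T, r2) - ssa(b, d, T, r1)) / db)
        ra, rb = random.Random(i), random.Random(i)  # shared seed: CRN
        crn.append((ssa(b + db, d, T, rb) - ssa(b, d, T, ra)) / db)
    var = lambda v: sum((u - sum(v) / len(v)) ** 2 for u in v) / (len(v) - 1)
    return var(indep), var(crn)

v_indep, v_crn = fd_estimates()
print(v_indep, v_crn)   # coupled estimator has much smaller variance
```

The goal-oriented method of the paper goes further by choosing the coupled rates per observable; this sketch only shows why coupling helps at all.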
Neutrinos from WIMP annihilations obtained using a full three-flavor Monte Carlo approach
Blennow, Mattias; Ohlsson, Tommy; Edsjoe, Joakim E-mail: edsjo@physto.se
2008-01-15
Weakly interacting massive particles (WIMPs) are one of the main candidates for making up the dark matter in the Universe. If these particles make up the dark matter, then they can be captured by the Sun or the Earth, sink to the respective cores, annihilate, and produce neutrinos. Thus, these neutrinos can be a striking dark matter signature at neutrino telescopes looking towards the Sun and/or the Earth. Here, we improve previous analyses on computing the neutrino yields from WIMP annihilations in several respects. We include neutrino oscillations in a full three-flavor framework as well as all effects from neutrino interactions on the way through the Sun (absorption, energy loss, and regeneration from tau decays). In addition, we study the effects of non-zero values of the mixing angle θ13 as well as the normal and inverted neutrino mass hierarchies. Our study is performed in an event-based setting which makes these results very useful both for theoretical analyses and for building a neutrino telescope Monte Carlo code. All our results for the neutrino yields, as well as our Monte Carlo code, are publicly available. We find that the yield of muon-type neutrinos from WIMP annihilations in the Sun is enhanced or suppressed, depending on the dominant WIMP annihilation channel. This effect is due to an effective flavor mixing caused by neutrino oscillations. For WIMP annihilations inside the Earth, the distance from source to detector is too small to allow for any significant amount of oscillations at the neutrino energies relevant for neutrino telescopes.
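The three-flavor vacuum oscillation framework the abstract builds on (before adding matter effects in the Sun) can be sketched directly from the PMNS matrix. The mixing angles and mass splittings below are typical best-fit values assumed for illustration, with the CP phase set to zero.

```python
import numpy as np

# Three-flavor vacuum oscillation probabilities P(nu_alpha -> nu_beta)
# from the PMNS matrix. Angles (radians) and mass-squared splittings
# (eV^2) are illustrative best-fit-like values; delta_CP = 0 assumed.

def pmns(t12, t13, t23):
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    U23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]])
    U13 = np.array([[c13, 0, s13], [0, 1, 0], [-s13, 0, c13]])
    U12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]])
    return U23 @ U13 @ U12

def prob(alpha, beta, L_km, E_GeV, U, dm2):
    # mass-state phases: 2 * 1.267 * dm^2[eV^2] * L[km] / E[GeV]
    phases = np.exp(-2j * 1.267 * np.array(dm2) * L_km / E_GeV)
    amp = np.sum(np.conj(U[alpha, :]) * U[beta, :] * phases)
    return abs(amp) ** 2

U = pmns(0.59, 0.15, 0.84)
dm2 = (0.0, 7.4e-5, 2.5e-3)   # relative to mass state 1
row = [prob(1, b, 1300.0, 2.0, U, dm2) for b in range(3)]
print(row, sum(row))           # the three probabilities sum to 1
```

The full calculation in the paper propagates individual events through solar matter with absorption and regeneration; this sketch covers only the vacuum mixing layer.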
Cluster expansion modeling and Monte Carlo simulation of alnico 5–7 permanent magnets
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Nguyen, Manh Cuong; Zhao, Xin; Wang, Cai -Zhuang; Ho, Kai -Ming
2015-03-05
Concerns about the supply of rare-earth (RE) metals have generated a lot of interest in searching for high-performance RE-free permanent magnets. Alnico alloys are traditional non-RE permanent magnets and have received much attention recently due to their good performance at high temperature. In this paper, we develop an accurate and efficient cluster expansion energy model for alnico 5–7. Monte Carlo simulations using the cluster expansion method are performed to investigate the structure of alnico 5–7 at the atomistic and nano scales. The alnico 5–7 master alloy is found to decompose into FeCo-rich and NiAl-rich phases at low temperature. The boundary between these two phases is quite sharp (~2 nm) over a wide range of temperature. The compositions of the main constituents of these two phases become higher as the temperature is lowered. Both the FeCo-rich and NiAl-rich phases exhibit B2 ordering, with Fe and Al on the α-site and Ni and Co on the β-site. The degree of order of the NiAl-rich phase is much higher than that of the FeCo-rich phase. In addition, a small magnetic moment is also observed in the NiAl-rich phase, but this moment is reduced as the temperature is lowered, implying that the magnetic properties of alnico 5–7 could be improved by lowering the annealing temperature to diminish the magnetism in the NiAl-rich phase. Furthermore, the results from our Monte Carlo simulations are consistent with available experimental results.
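The machinery of cluster expansion Monte Carlo can be illustrated on a deliberately tiny analog: binary occupations on a 2-D square lattice with a single nearest-neighbor effective cluster interaction favoring unlike neighbors, so the ordered (checkerboard) state plays the role of B2 ordering. This is a toy assumption; the paper's model uses the bcc alnico lattice and many more cluster terms.

```python
import math, random

# Toy cluster-expansion Metropolis Monte Carlo: occupations s = +/-1 on
# an L x L periodic square lattice, energy E = +J * sum over bonds of
# s_i s_j with J > 0 (unlike neighbors favored). The checkerboard state
# is a 2-D stand-in for B2 ordering; all parameters are illustrative.

def staggered_order(s, L):
    m = sum((-1) ** (i + j) * s[i][j] for i in range(L) for j in range(L))
    return abs(m) / (L * L)

def metropolis(L=8, J=1.0, T=0.5, sweeps=200, seed=3):
    rng = random.Random(seed)
    s = [[(-1) ** (i + j) for j in range(L)] for i in range(L)]  # ordered
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = -2.0 * J * s[i][j] * nn        # energy change of flipping s_ij
        if dE <= 0.0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]
    return staggered_order(s, L)

print(metropolis())   # order parameter stays near 1 at low temperature
```

Starting from the ordered ground state and observing that the order parameter persists at low temperature mirrors, in miniature, the degree-of-order measurements reported in the abstract.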
SciThur AM: YIS - 04: Gold Nanoparticle Enhanced Arc Radiotherapy: A Monte Carlo Feasibility Study
Koger, B; Kirkby, C
2014-08-15
Introduction: The use of gold nanoparticles (GNPs) in radiotherapy has shown promise for therapeutic enhancement. In this study, we explore the feasibility of enhancing radiotherapy with GNPs in an arc-therapy context. We use Monte Carlo simulations to quantify the macroscopic dose-enhancement ratio (DER) and tumour-to-normal-tissue ratio (TNTR) as functions of photon energy over various tumour and body geometries. Methods: GNP-enhanced arc radiotherapy (GEART) was simulated using the PENELOPE Monte Carlo code and the penEasy main program. We simulated 360° arc therapy with monoenergetic photon energies from 50 to 1000 keV, as well as several clinical spectra, used to treat a spherical tumour containing uniformly distributed GNPs in a cylindrical tissue phantom. Various geometries were used to simulate different tumour sizes and depths. Voxel dose was used to calculate DERs and TNTRs. Inhomogeneity effects were examined through skull dose in brain-tumour treatment simulations. Results: Below 100 keV, DERs greater than 2.0 were observed. Compared to 6 MV, tumour dose at low energies was more conformal, with lower normal-tissue dose and higher TNTRs. Both the DER and TNTR increased with increasing cylinder radius and decreasing tumour radius. The inclusion of bone showed excellent tumour conformality at low energies, though with an increase in skull dose (40% of tumour dose with 100 keV compared to 25% with 6 MV). Conclusions: Even in the presence of inhomogeneities, our results show promise for the treatment of deep-seated tumours with low-energy GEART, with greater tumour dose conformality and lower normal-tissue dose than 6 MV.
Sunny, E. E.; Martin, W. R. [University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor MI 48109 (United States)
2013-07-01
Current Monte Carlo codes use one of three models to treat neutron scattering in the epithermal energy range: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. The free gas scattering model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei but not for heavy nuclei, whose scattering cross sections may have several resonances in the epithermal region. Several researchers in the field have shown that using the free gas scattering model in the vicinity of the resonances in the lower epithermal range can under-predict resonance absorption due to the up-scattering phenomenon. Existing methods all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame. In this paper, we present a new sampling methodology that (1) accounts for the energy-dependent scattering cross sections in the collision analysis and (2) acts in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on-the-fly during the random walk of the neutron. Results of criticality studies on fuel pin and fuel assembly calculations using these methods showed very close agreement with results using the reference Doppler-broadened rejection correction (DBRC) scheme. (authors)
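The conventional free gas model that this paper improves upon is commonly implemented with a rejection scheme: draw a candidate target velocity from a Maxwellian and accept it with probability proportional to the neutron-target relative speed. A minimal sketch, with illustrative units and parameters, follows; it shows the constant-cross-section baseline, not the paper's energy-dependent method.

```python
import math, random

# Conventional free-gas target-velocity sampling (constant scattering
# cross section). A Maxwellian candidate is accepted with probability
# |v_rel| / (v_n + V), which weights the Maxwellian by relative speed.
# Units and parameter values are illustrative.

def sample_target_velocity(v_n, kT_over_m, rng):
    """v_n: neutron speed; kT_over_m: target temperature over mass."""
    sigma = math.sqrt(kT_over_m)
    while True:
        comps = [rng.gauss(0.0, sigma) for _ in range(3)]   # Maxwellian
        speed = math.sqrt(sum(c * c for c in comps))
        mu = 2.0 * rng.random() - 1.0    # cosine between neutron, target
        v_rel = math.sqrt(v_n * v_n + speed * speed
                          - 2.0 * v_n * speed * mu)
        if rng.random() * (v_n + speed) < v_rel:   # rejection step
            return speed, mu, v_rel

rng = random.Random(7)
# Fast-neutron limit: the relative speed approaches the neutron speed.
rels = [sample_target_velocity(100.0, 1.0, rng)[2] for _ in range(2000)]
print(sum(rels) / len(rels))   # close to 100
```

The up-scattering deficiency discussed in the abstract arises because this scheme ignores how the scattering cross section varies over the thermal spread of relative speeds near a resonance.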
SU-E-T-277: Raystation Electron Monte Carlo Commissioning and Clinical Implementation
Allen, C; Sansourekidou, P; Pavord, D
2014-06-01
Purpose: To evaluate the Raystation v4.0 Electron Monte Carlo algorithm for an Elekta Infinity linear accelerator and commission for clinical use. Methods: A total of 199 tests were performed (75 Export and Documentation, 20 PDD, 30 Profiles, 4 Obliquity, 10 Inhomogeneity, 55 MU Accuracy, and 5 Grid and Particle History). Export and documentation tests were performed with respect to MOSAIQ (Elekta AB) and RadCalc (Lifeline Software Inc). Mechanical jaw parameters and cutout magnifications were verified. PDD and profiles for open cones and cutouts were extracted and compared with water tank measurements. Obliquity and inhomogeneity for bone and air calculations were compared to film dosimetry. MU calculations for open cones and cutouts were performed and compared to both RadCalc and simple hand calculations. Grid size and particle histories were evaluated per energy for statistical uncertainty performance. Acceptability was categorized as follows: performs as expected, negligible impact on workflow, marginal impact, critical impact or safety concern, and catastrophic impact of safety concern. Results: Overall results are: 88.8% perform as expected, 10.2% negligible, 2.0% marginal, 0% critical and 0% catastrophic. Results per test category are as follows: Export and Documentation: 100% perform as expected, PDD: 100% perform as expected, Profiles: 66.7% perform as expected, 33.3% negligible, Obliquity: 100% marginal, Inhomogeneity 50% perform as expected, 50% negligible, MU Accuracy: 100% perform as expected, Grid and particle histories: 100% negligible. To achieve distributions with satisfactory smoothness level, 5,000,000 particle histories were used. Calculation time was approximately 1 hour. Conclusion: Raystation electron Monte Carlo is acceptable for clinical use. All of the issues encountered have acceptable workarounds. Known issues were reported to Raysearch and will be resolved in upcoming releases.
Radaev, A. I.; Schurovskaya, M. V.
2015-12-15
The choice of the spatial nodalization for calculating the power density and burnup distribution in a research reactor core with fuel assemblies of the IRT-3M and VVR-KN types, using a program based on a Monte Carlo code, is described. The influence of the spatial nodalization on the results of calculating basic neutronic characteristics and on the calculation time is investigated.
Sun, Xin; Stephens, Elizabeth V.; Khaleel, Mohammad A.
2007-01-01
This paper examines the effects of fusion zone size on the failure modes, static strength, and energy absorption of resistance spot welds (RSW) of advanced high-strength steels (AHSS). DP800 and TRIP800 spot welds are considered. The main failure modes for spot welds are nugget pullout and interfacial fracture; partial interfacial fracture is also observed. The critical fusion zone sizes needed to ensure the nugget pull-out failure mode are developed for both DP800 and TRIP800 using a limit-load-based analytical model and micro-hardness measurements of the weld cross sections. Static weld strength tests using cross-tension samples were performed on joint populations with controlled fusion zone sizes. The resulting peak load and energy absorption levels associated with each failure mode were studied for all the weld populations using statistical data analysis tools. The results of this study show that AHSS spot welds below the critical fusion zone size cannot produce the nugget pullout mode for either of the DP800 and TRIP800 materials examined. The critical fusion zone size for nugget pullout should be derived for individual materials based on different base metal properties as well as the different heat-affected zone (HAZ) and weld properties resulting from different welding parameters.
Lin, L; Huang, S; Kang, M; Solberg, T; McDonough, J; Ainsley, C
2015-06-15
Purpose: The purpose of this manuscript is to demonstrate the utility of a comprehensive test pattern in validating calculation models of the low-dose tails of proton pencil beam scanning (PBS) spots. Such a pattern has been used previously for quality assurance purposes to assess spot shape and location, and for determining monitor units. Methods: In this study, a scintillation detector was used to measure the test pattern in air at isocenter for two proton beam energies (115 and 225 MeV) of two IBA universal nozzles (UN). Planar measurements were compared with calculated dose distribution based on the weighted superposition of spot profiles previously measured using a pair-magnification method. Results: Including the halo component below 1% of the central dose is shown to improve the gamma-map comparison between calculation and measurement from 94.9% to 98.4% using 2 mm/2% criteria for the 115 MeV proton beam of UN #1. In contrast, including the halo component below 1% of the central dose does not improve the gamma agreement for the 115 MeV proton beam of UN #2, due to the cutoff of the halo component at off-axis locations. When location-dependent spot profiles are used for calculation instead of spot profiles at central axis, the gamma agreement is improved from 98.0% to 99.5% using 2 mm/2% criteria. The cutoff of the halo component is smaller at higher energies, and is not observable for the 225 MeV proton beam for UN #2. Conclusion: In conclusion, the use of a comprehensive test pattern can facilitate the validation of the halo component of proton PBS spots at off axis locations. The cutoff of the halo component should be taken into consideration for large fields or PBS systems that intend to trim spot profiles using apertures. This work was supported by the US Army Medical Research and Materiel Command under Contract Agreement No. DAMD17-W81XWH-07-2-0121 and W81XWH-09-2-0174.
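The gamma-map comparisons quoted in this abstract (e.g., 2 mm/2% criteria) follow the standard gamma-index construction. A minimal 1-D sketch with synthetic profiles is shown below; real evaluations are 2-D or 3-D, and the choice of global versus local dose normalization is an assumption here.

```python
import math

# Minimal 1-D gamma-index comparison with 2 mm / 2% (global) criteria.
# ref and evald are dose profiles on a common grid; dx_mm is the grid
# spacing. Synthetic data for illustration only.

def gamma_pass_rate(ref, evald, dx_mm, dta_mm=2.0, dd_frac=0.02):
    d_max = max(ref)                    # global normalization assumed
    passed = 0
    for i, dr in enumerate(ref):
        best = float("inf")             # minimize over evaluated points
        for j, de in enumerate(evald):
            dist = (i - j) * dx_mm
            ddose = (de - dr) / (dd_frac * d_max)
            best = min(best, (dist / dta_mm) ** 2 + ddose ** 2)
        if math.sqrt(best) <= 1.0:
            passed += 1
    return 100.0 * passed / len(ref)

ref = [math.exp(-((x - 25) / 8.0) ** 2) for x in range(51)]  # Gaussian spot
same = gamma_pass_rate(ref, ref, dx_mm=1.0)
shifted = gamma_pass_rate(ref, ref[1:] + [0.0], dx_mm=1.0)   # 1 mm shift
print(same, shifted)   # identical profiles pass 100%
```

A 1 mm shift still passes under a 2 mm distance-to-agreement criterion, which is exactly the tolerance behavior the gamma metric is designed to capture.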
Sun, Xin; Stephens, Elizabeth V.; Khaleel, Mohammad A.
2008-06-01
This paper examines the effects of fusion zone size on the failure modes, static strength, and energy absorption of resistance spot welds (RSW) of advanced high-strength steels (AHSS) under lap shear loading. DP800 and TRIP800 spot welds are considered. The main failure modes for spot welds are nugget pullout and interfacial fracture; partial interfacial fracture is also observed. Static weld strength tests using lap shear samples were performed on joint populations with various fusion zone sizes. The resulting peak load and energy absorption levels associated with each failure mode were studied for all the weld populations using statistical data analysis tools. The results of this study show that AHSS spot welds with the conventionally required fusion zone size cannot produce the nugget pullout mode for either the DP800 or TRIP800 welds under lap shear loading. Moreover, the failure mode has a strong influence on weld peak load and energy absorption for all the DP800 welds and the small TRIP800 welds: welds that failed in the pullout mode have statistically higher strength and energy absorption than those that failed in the interfacial fracture mode. For TRIP800 welds above the critical fusion zone level, the influence of weld failure mode on peak load and energy absorption diminishes. Scatter plots of peak load and energy absorption versus weld fusion zone size were then constructed, and the results indicate that fusion zone size is the most critical factor in weld quality in terms of peak load and energy absorption for both DP800 and TRIP800 spot welds.
Wayne Chuko; Jerry Gould
2002-07-08
This report describes work accomplished in the project titled ''Development of Appropriate Resistance Spot Welding Practice for Transformation-Hardened Steels.'' Phase 1 of the program involved development of in-situ temper diagrams for two gauges of representative dual-phase and martensitic grades of steel. The results showed that tempering is an effective way of reducing hold-time sensitivity (HTS) in hardenable high-strength sheet steels. In Phase 2, post-weld cooling rate techniques incorporating tempering were evaluated to reduce HTS for the same four steels. Three alternative methods, viz., post-heating, downsloping, and spike tempering, for HTS reduction were investigated. Downsloping was selected for detailed additional study, as it appeared to be the most promising of the cooling rate control methods. The downsloping maps for each of the candidate steels were used to locate the conditions necessary for the peak response. Three specific downslope conditions (at a fixed final current for each material, timed for a zero-, medium-, and full-softening response) were chosen for further metallurgical and mechanical testing. Representative samples were inspected metallographically, examining both local hardness variations and microstructures. The resulting downslope diagrams were found to consist largely of a C-curve. The softening observed in these curves, however, was not supported by subsequent metallography, which showed that all welds made, regardless of material and downslope condition, were essentially martensitic. CCT/TTT diagrams, generated based on microstructural modeling done at Oak Ridge National Laboratory, showed that minimum downslope times of 2 and 10 s for the martensitic and dual-phase grades of steel, respectively, were required to avoid martensite formation. These times, however, were beyond those examined in this study. These results show that downsloping is not an effective means of reducing HTS for production resistance spot welding.
Wagner, John C; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Turner, John A
2011-01-01
This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method, which
Wagner, John C; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Turner, John A
2010-01-01
This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform ''real'' commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the ''gold standard'' for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method
MO-G-BRF-09: Investigating Magnetic Field Dose Effects in Mice: A Monte Carlo Study
Rubinstein, A; Guindani, M; Followill, D; Melancon, A; Hazle, J; Court, L
2014-06-15
Purpose: In MRI-linac treatments, radiation dose distributions are affected by magnetic fields, especially at high-density/low-density interfaces. Radiobiological consequences of magnetic field dose effects are presently unknown; therefore, preclinical studies are needed to ensure the safe clinical use of MRI-linacs. This study investigates the optimal combination of beam energy and magnetic field strength needed for preclinical murine studies. Methods: The Monte Carlo code MCNP6 was used to simulate the effects of a magnetic field when irradiating a mouse-sized lung phantom with a 1.0 cm × 1.0 cm photon beam. Magnetic field effects were examined using various beam energies (225kVp, 662keV[Cs-137], and 1.25MeV[Co-60]) and magnetic field strengths (0.75T, 1.5T, and 3T). The resulting dose distributions were compared to Monte Carlo results for humans with various field sizes and patient geometries using a 6MV/1.5T MRI-linac. Results: In human simulations, the addition of a 1.5T magnetic field caused an average dose increase of 49% (range: 36%–60%) to lung at the soft tissue-to-lung interface and an average dose decrease of 30% (range: 25%–36%) at the lung-to-soft tissue interface. In mouse simulations, the magnetic fields had no effect on the 225kVp dose distribution. The dose increases for the Cs-137 beam were 12%, 33%, and 49% for 0.75T, 1.5T, and 3.0T magnetic fields, respectively, while the dose decreases were 7%, 23%, and 33%. For the Co-60 beam, the dose increases were 14%, 45%, and 41%, and the dose decreases were 18%, 35%, and 35%. Conclusion: The magnetic field dose effects observed in mouse phantoms using a Co-60 beam with 1.5T or 3T fields and a Cs-137 beam with a 3T field compare well with those seen in simulated human treatments with an MRI-linac. These irradiator/magnet combinations are suitable for preclinical studies investigating potential biological effects of delivering radiation therapy in the presence of a magnetic field. Partially funded by Elekta.
Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A.; Siepmann, J. Ilja
2015-09-21
Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields T{sub c} = 1.3128 ± 0.0016, ρ{sub c} = 0.316 ± 0.004, and p{sub c} = 0.1274 ± 0.0013 in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρ{sub t} ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using r{sub cut} = 3.5σ yield T{sub c} and p{sub c} that are higher by 0.2% and 1.4% than simulations with r{sub cut} = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that r{sub cut} = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard
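The extrapolation scheme the abstract relies on (the order-parameter scaling law combined with the law of rectilinear diameters) can be sketched in a few lines of Python. The coexistence densities below are invented for illustration, not the paper's data; only the fitting procedure is the point.

```python
import numpy as np

# Hypothetical coexistence data in reduced LJ units. These values are
# illustrative stand-ins, NOT the GEMC results quoted in the abstract.
T = np.array([1.27, 1.28, 1.29, 1.30, 1.305])
rho_liq = np.array([0.435, 0.424, 0.411, 0.396, 0.387])
rho_vap = np.array([0.205, 0.214, 0.226, 0.241, 0.249])

def fit_critical_point(T, rl, rv, beta=0.325):
    """Estimate (Tc, rho_c) from subcritical coexistence densities."""
    # Scaling law: rho_l - rho_v = B (Tc - T)^beta.  Raising the density
    # difference to 1/beta makes it linear in T; Tc is the root of the fit.
    slope, intercept = np.polyfit(T, (rl - rv) ** (1.0 / beta), 1)
    Tc = -intercept / slope
    # Law of rectilinear diameters: (rho_l + rho_v)/2 = rho_c + A (Tc - T);
    # extrapolating the diameter to Tc gives rho_c.
    d_slope, d_inter = np.polyfit(T, 0.5 * (rl + rv), 1)
    rho_c = d_slope * Tc + d_inter
    return Tc, rho_c

Tc, rho_c = fit_critical_point(T, rho_liq, rho_vap)
```

With 3D Ising exponent β ≈ 0.325 this reproduces the generic structure of the analysis: Tc sits slightly above the highest simulated temperature, and ρ{sub c} near the diameter of the binodal.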
Energy Science and Technology Software Center (OSTI)
2013-06-24
Version 07 TART2012 is a coupled neutron-photon Monte Carlo transport code designed to use three-dimensional (3-D) combinatorial geometry. Neutron and/or photon sources as well as neutron induced photon production can be tracked. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2012 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2012 extends the general utility of the code to even more areas of application than available in previous releases by concentrating on improving the physics, particularly with regard to improved treatment of neutron fission, resonance self-shielding, molecular binding, and extending input options used by the code. Several utilities are included for creating input files and displaying TART results and data. TART2012 uses the latest ENDF/B-VI, Release 8, data. New for TART2012 is the use of continuous energy neutron cross sections, in addition to its traditional multigroup cross sections. For neutron interaction, the data are derived using ENDF-ENDL2005 and include both continuous energy cross sections and 700 group neutron data derived using a combination of ENDF/B-VI, Release 8, and ENDL data. The 700 group structure extends from 10{sup -5} eV up to 1 GeV. Presently nuclear data are only available up to 20 MeV, so that only 616 of the groups are currently used. For photon interaction, 701 point photon data were derived using the Livermore EPDL97 file. The new 701 point structure extends from 100 eV up to 1 GeV, and is currently used over this entire energy range. TART2012 completely supersedes all older versions of TART, and it is strongly recommended that one use only the most recent version of TART2012 and its data files. Check the author's homepage for related information: http
Monte Carlo simulation based study of a proposed multileaf collimator for a telecobalt machine
Sahani, G.; Dash Sharma, P. K.; Hussain, S. A.; Dutt Sharma, Sunil; Sharma, D. N.
2013-02-15
Purpose: The objective of the present work was to propose a design of a secondary multileaf collimator (MLC) for a telecobalt machine and optimize its design features through Monte Carlo simulation. Methods: The proposed MLC design consists of 72 leaves (36 leaf pairs) with additional jaws perpendicular to leaf motion having the capability of shaping a maximum square field size of 35 × 35 cm{sup 2}. The projected widths at isocenter of each of the central 34 leaf pairs and 2 peripheral leaf pairs are 10 and 5 mm, respectively. The ends of the leaves and the x-jaws were optimized to obtain acceptable values of dosimetric and leakage parameters. Monte Carlo N-Particle code was used for generating beam profiles and depth dose curves and estimating the leakage radiation through the MLC. A water phantom of dimension 50 × 50 × 40 cm{sup 3} with an array of voxels (4 cm × 0.3 cm × 0.6 cm = 0.72 cm{sup 3}) was used for the study of dosimetric and leakage characteristics of the MLC. Output files generated for beam profiles were exported to the PTW radiation field analyzer software through locally developed software for analysis of beam profiles in order to evaluate radiation field width, beam flatness, symmetry, and beam penumbra. Results: The optimized version of the MLC can define radiation fields of up to 35 × 35 cm{sup 2} within the prescribed tolerance values of 2 mm. The flatness and symmetry were found to be well within the acceptable tolerance value of 3%. The penumbra for a 10 × 10 cm{sup 2} field size is 10.7 mm which is less than the generally acceptable value of 12 mm for a telecobalt machine. The maximum and average radiation leakage through the MLC were found to be 0.74% and 0.41% which are well below the International Electrotechnical Commission recommended tolerance values of 2% and 0.75%, respectively. The maximum leakage through the
Bao, Chen; Wu, Hongfei; Li, Li; Newcomer, Darrell R.; Long, Philip E.; Williams, Kenneth H.
2014-09-02
We aim to understand the scale-dependent evolution of uranium bioreduction during a field experiment at a former uranium mill site near Rifle, Colorado. Acetate was injected to stimulate Fe-reducing bacteria (FeRB) and to immobilize uranium by reducing aqueous U(VI) to insoluble U(IV). Bicarbonate was coinjected in half of the domain to mobilize sorbed U(VI). We used reactive transport modeling to integrate hydraulic and geochemical data and to quantify rates at the grid-block scale (0.25 m) and the experimental field scale (tens of meters). Although local rates varied by orders of magnitude in conjunction with biostimulation fronts propagating downstream, field-scale rates were dominated by the orders-of-magnitude-higher rates at a few hot spots where Fe(III), U(VI), and FeRB were at their maxima in the vicinity of the injection wells. At particular locations, the timing of the hot moments with maximum rates correlated negatively with distance from the injection wells. Although bicarbonate injection enhanced local rates near the injection wells by a maximum of 39.4%, its effect at the field scale was limited to a maximum of 10.0%. We propose a rate-versus-measurement-length relationship (log R' = -0.63
Umegaki, K; Matsuura, T.; Takao, S.; Nihongi, H.; Yamada, T.; Miyamoto, N.; Shimizu, S.; Shirato, H.; Matsuda, K.; Nakamura, F.; Umezawa, M.; Hiramoto, K.
2014-06-01
Purpose: A novel Proton Beam Therapy system has been developed by integrating Real-Time Tumor-Tracking (RTRT) and discrete spot scanning techniques. The system, dedicated to spot scanning, offers significant advantages from both clinical and economic points of view. The system has the ability to control dose distribution with spot scanning beams and to gate the beams from the synchrotron so that moving tumors are irradiated only when their actual positions are within the planned position. Methods: The newly designed system consists of a synchrotron, beam transport systems, and a compact rotating gantry system with a robotic couch and two orthogonal sets of X-ray fluoroscopes. The fully compact design of the system has been realized by reducing the maximum energy of the beam to 220 MeV, corresponding to a 30 g/cm{sup 2} range, and by reducing the number of circulating protons per synchrotron operation cycle, owing to the higher beam utilization efficiency of spot scanning. To improve the irradiation efficiency of the integrated RTRT and spot scanning, a new control system has been developed to enable multiple gated irradiations per operation cycle according to the gating signals. After completion of the equipment installation, beam tests and commissioning have been successfully performed. Results: The basic performance and beam characteristics through the synchrotron accelerator to isocenter have been confirmed, and the performance tests of the irradiation nozzle and the whole system have been appropriately completed. CBCT images have been checked and sufficient quality was obtained. The RTRT system has been demonstrated and achieved accurate dose distributions for moving targets. Conclusion: The gated spot scanning Proton Beam Therapy system with Real-Time Tumor-Tracking has been developed, successfully installed, and tested. The new system enables us to deliver higher dose to moving target tumors while sparing surrounding normal tissues and to realize a compact design of the system and facility
Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Hehr, Brian Douglas
2014-11-25
The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.
Cascade annealing simulations of bcc iron using object kinetic Monte Carlo
Xu, Haixuan; Osetskiy, Yury N; Stoller, Roger E
2012-01-01
Simulations of displacement cascade annealing were carried out using object kinetic Monte Carlo based on an extensive MD database including various primary knock-on atom energies and directions. The sensitivity of the results to a broad range of material and model parameters was examined. The diffusion mechanism of interstitial clusters has been identified to have the most significant impact on the fraction of stable interstitials that escape the cascade region. The maximum level of recombination was observed for the limiting case in which all interstitial clusters exhibit 3D random walk diffusion. The OKMC model was parameterized using two alternative sets of defect migration and binding energies, one from ab initio calculations and the second from an empirical potential. The two sets of data predict essentially the same fraction of surviving defects but different times associated with the defect escape processes. This study provides a comprehensive picture of the first phase of long-term defect evolution in bcc iron and generates information that can be used as input data for mean field rate theory (MFRT) to predict the microstructure evolution of materials under irradiation. In addition, the limitations of the current OKMC model are discussed and a potential way to overcome these limitations is outlined.
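A minimal sketch of the residence-time (object) kinetic Monte Carlo step that underlies such annealing simulations, here for a single Frenkel pair hopping on a small periodic lattice until recombination. The attempt frequencies and migration barriers are illustrative assumptions, not the ab initio or empirical-potential parameter sets compared in the paper.

```python
import math
import random

def arrhenius(nu0, Em, T, kB=8.617e-5):
    # hop frequency (1/s) from attempt frequency nu0 (1/s), barrier Em (eV)
    return nu0 * math.exp(-Em / (kB * T))

def okmc_recombination_time(T=600.0, L=8, seed=1, max_steps=1_000_000):
    """Time for one interstitial-vacancy pair to recombine on an L^3 torus."""
    random.seed(seed)
    pos = {"i": [0, 0, 0], "v": [L // 2, 0, 0]}   # Frenkel pair, L/2 apart
    # assumed barriers: interstitials migrate much faster than vacancies
    rate = {"i": arrhenius(1e13, 0.34, T), "v": arrhenius(1e13, 0.67, T)}
    total = rate["i"] + rate["v"]
    t = 0.0
    for _ in range(max_steps):
        t += -math.log(random.random()) / total    # residence-time increment
        # pick which defect hops, with probability proportional to its rate
        key = "i" if random.random() < rate["i"] / total else "v"
        axis = random.randrange(3)
        pos[key][axis] = (pos[key][axis] + random.choice((-1, 1))) % L
        if pos["i"] == pos["v"]:
            return t                               # pair recombined
    return math.inf

t = okmc_recombination_time()
```

The same two ingredients (event rates from an Arrhenius catalog, stochastic event selection plus a residence-time clock) generalize to full cascade debris with clustered defects.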
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.
2015-09-08
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
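For readers unfamiliar with Neumann-Ulam solvers, a minimal forward-walk sketch shows how random-walk weights estimate one component of x in A x = b when A = I - H and the row sums of |H| are below one. The paper analyzes the adjoint variant, which runs walks from the source term instead; this forward version is the simpler illustration, and the 2×2 system is an invented example.

```python
import numpy as np

def neumann_ulam_component(A, b, i, n_walks=20000, seed=0):
    """Estimate x[i] of A x = b via Neumann-Ulam random walks (A = I - H)."""
    rng = np.random.default_rng(seed)
    H = np.eye(len(b)) - A
    P = np.abs(H)                       # direct transition probabilities
    absorb = 1.0 - P.sum(axis=1)        # leftover probability kills the walk
    assert (absorb > 0).all(), "this simple scheme needs row sums of |H| < 1"
    total = 0.0
    for _ in range(n_walks):
        state, w = i, 1.0
        while True:
            total += w * b[state]       # collision estimator tally
            u = rng.random()
            if u < absorb[state]:
                break                   # walk absorbed; start the next one
            # sample the next state k with probability |H[state, k]|
            k = int(np.searchsorted(np.cumsum(P[state]), u - absorb[state]))
            w *= np.sign(H[state, k])   # weight carries the sign of H
            state = k
    return total / n_walks

A = np.array([[1.0, -0.3], [-0.2, 1.0]])
b = np.array([1.0, 1.0])
x0 = neumann_ulam_component(A, b, 0)    # exact answer: 1.3/0.94 ≈ 1.383
```

The expected walk length here is set by the spectral radius of |H|, which is the connection the abstract exploits: eigenvalues of the operator control both convergence speed and how often walks leak across domain boundaries.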
Da, B.; Li, Z. Y.; Chang, H. C.; Ding, Z. J.; Mao, S. F.
2014-09-28
It has been experimentally found that carbon surface contamination strongly influences the spectral signals in reflection electron energy loss spectroscopy (REELS), especially at low primary electron energy. However, there is still little theoretical work dealing with the carbon contamination effect in REELS. Such work is required to predict REELS spectra for layered structural samples, providing an understanding of the experimental phenomena observed. In this study, we present a numerical calculation of the spatially varying differential inelastic mean free path for a sample made of a carbon contamination layer of varied thickness on a SrTiO{sub 3} substrate. A Monte Carlo simulation model for electron interaction with a layered structural sample is built by combining this inelastic scattering cross-section with the Mott cross-section for electron elastic scattering. The simulation results clearly show that the contribution of electron energy loss from carbon surface contamination increases with decreasing primary energy, due to the increased number of individual scattering processes along the trajectory segments within the carbon contamination layer. Comparison of the simulated spectra for different thicknesses of the carbon contamination layer and for different primary electron energies with experimental spectra clearly identifies that the carbon contamination in the measured sample was in the form of discontinuous islands rather than a uniform film.
Collapse transitions in thermosensitive multi-block copolymers: A Monte Carlo study
Rissanou, Anastassia N.; Tzeli, Despoina S.; Anastasiadis, Spiros H.; Bitsanis, Ioannis A.
2014-05-28
Monte Carlo simulations are performed on a simple cubic lattice to investigate the behavior of a single linear multiblock copolymer chain of various lengths N. The chain of type (A{sub n}B{sub n}){sub m} consists of alternating A and B blocks, where A are solvophilic and B are solvophobic and N = 2nm. The conformations are classified into five cases of globule formation by the solvophobic blocks of the chain. The dependence of globule characteristics on the molecular weight and on the number of blocks which participate in their formation is examined. The focus is on relatively high molecular weight blocks (i.e., N in the range of 500–5000 units) and very different energetic conditions for the two blocks (a very good, almost athermal, solvent for A and a bad solvent for B). A rich phase behavior is observed as a result of the alternating architecture of the multiblock copolymer chain. We trust that thermodynamic equilibrium has been reached for chains of N up to 2000 units; however, for longer chains kinetic entrapments are observed. The comparison among equivalent globules consisting of different numbers of B-blocks shows that the more solvophobic blocks constitute the globule, the bigger its radius of gyration and the looser its structure. Comparisons between globules formed by the solvophobic blocks of the multiblock copolymer chain and their homopolymer analogs highlight the important role of the solvophilic A-blocks.
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y.; Yun, S.
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross-section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross-section data generation module was added to the MCNP5 code. Cross-section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and both were applied to the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model. The cross-section data were generated using MCNP5. The k{sub eff} and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k{sub eff}, the 9-group MCNP5/DIF3D analysis has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
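The cross-section generation step described above amounts to a flux-weighted group collapse. A generic sketch follows, with an invented cross-section shape and flux spectrum standing in for MCNP5 collision-estimator tallies; only the condensation arithmetic is the point.

```python
import numpy as np

def collapse(E, sigma, phi, edges):
    """Condense sigma(E) to group constants using phi(E) as the weight."""
    sig_g = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (E >= lo) & (E < hi)
        # group constant preserves reaction rate: <sigma*phi>_g / <phi>_g
        sig_g.append(np.sum(sigma[m] * phi[m]) / np.sum(phi[m]))
    return np.array(sig_g)

E = np.linspace(1e-5, 20.0, 4000)        # MeV, uniform grid (illustrative)
sigma = 5.0 / np.sqrt(E)                 # crude 1/v-like cross section
phi = np.exp(-((np.log(E) + 2) ** 2))    # made-up flux spectrum
edges = [1e-5, 0.1, 1.0, 20.0]           # three coarse groups
sig_g = collapse(E, sigma, phi, edges)
```

A real hybrid scheme tallies phi(E) (and sigma*phi) by Monte Carlo within each region and group, then hands the condensed constants to the deterministic solver.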
Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams
Vandervoort, Eric J.; Cygler, Joanna E. (The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5; Department of Physics, Carleton University, Ottawa, Ontario K1S 5B6); Tchistiakova, Ekaterina (Department of Medical Biophysics, University of Toronto, Ontario M5G 2M9; Heart and Stroke Foundation Centre for Stroke Recovery, Sunnybrook Research Institute, University of Toronto, Ontario M4N 3M5); La Russa, Daniel J. (The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5)
2014-02-15
Purpose: In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Methods: Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measurements and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Results: Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators, and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 × 5 cm{sup 2}. Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (the minimum fraction of measurements passing a 2%/2 mm 2D gamma index criterion for any applicator or energy was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a 3-dimensional 3%/2 mm γ-criterion) provided that the steep dose gradient in the depth direction is considered. Conclusions: Clinically acceptable agreement (at the 2%/2 mm level) between the measurements and calculated data for measurements in water is obtained for this dose calculation algorithm. Radiochromic film is a useful tool to evaluate the accuracy of electron MC treatment planning systems in heterogeneous media.
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10{sup 3}-10{sup 5} times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained here, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Monte Carlo modeling of transport in PbSe nanocrystal films
Carbone, I. Carter, S. A.; Zimanyi, G. T.
2013-11-21
A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance and that site energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using a nearly identical set of parameter values as the electron simulations, hole mobility simulations reproduce measured mobilities, which increase monotonically with particle size over two orders of magnitude.
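The thermally activated hopping step described above can be sketched with a one-dimensional Miller-Abrahams-type rate: downhill hops occur at the attempt frequency, uphill hops are Boltzmann-suppressed. The attempt frequency, disorder width, and field tilt below are illustrative assumptions, not the paper's fitted PbSe parameters.

```python
import math
import random

def hop_rate(dE, nu0=1e12, kT=0.025):
    # Miller-Abrahams-type rate (energies in eV): downhill at the attempt
    # frequency nu0, uphill suppressed by a Boltzmann factor
    return nu0 * math.exp(-dE / kT) if dE > 0 else nu0

def drift_velocity(field=0.01, sigma=0.05, n_sites=200, n_hops=50000, seed=2):
    """Field-driven drift (sites/s) of one carrier on a disordered 1D chain."""
    random.seed(seed)
    site_E = [random.gauss(0.0, sigma) for _ in range(n_sites)]  # disorder
    x, t = 0, 0.0
    for _ in range(n_hops):
        i = x % n_sites                  # periodic chain of nanocrystal sites
        # energy change for a hop with the field (right) or against it (left)
        r_fwd = hop_rate(site_E[(i + 1) % n_sites] - site_E[i] - field)
        r_bwd = hop_rate(site_E[(i - 1) % n_sites] - site_E[i] + field)
        total = r_fwd + r_bwd
        t += -math.log(random.random()) / total   # kinetic Monte Carlo clock
        x += 1 if random.random() < r_fwd / total else -1
    return x / t

v = drift_velocity()
```

Widening the disorder sigma relative to kT suppresses the drift, which is the qualitative mechanism the abstract invokes for small-nanoparticle films.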
Code System for Monte Carlo Simulation of Electron and Photon Transport.
2015-07-01
Version 01 PENELOPE performs Monte Carlo simulation of coupled electron-photon transport in arbitrary materials and complex quadric geometries. A mixed procedure is used for the simulation of electron and positron interactions (elastic scattering, inelastic scattering and bremsstrahlung emission), in which hard events (i.e. those with deflection angle and/or energy loss larger than pre-selected cutoffs) are simulated in a detailed way, while soft interactions are calculated from multiple scattering approaches. Photon interactions (Rayleigh scattering, Compton scattering, photoelectric effect and electron-positron pair production) and positron annihilation are simulated in a detailed way. PENELOPE reads the required physical information about each material (which includes tables of physical properties, interaction cross sections, relaxation data, etc.) from the input material data file. The material data file is created by means of the auxiliary program MATERIAL, which extracts atomic interaction data from the database of ASCII files. PENELOPE mailing list archives and additional information about the code can be found at http://www.nea.fr/lists/penelope.html. See Abstract for additional features.
Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC): Fundamentals and Applications
Xu, Haixuan; Osetskiy, Yury N; Stoller, Roger E
2012-01-01
The fundamentals of the framework and the details of each component of the self-evolving atomistic kinetic Monte Carlo (SEAKMC) are presented. The strength of this new technique is the ability to simulate dynamic processes with atomistic fidelity that is comparable to molecular dynamics (MD) but on a much longer time scale. The observation that the dimer method preferentially finds the saddle point (SP) with the lowest energy is investigated and found to be true only for defects with high symmetry. In order to estimate the fidelity of dynamics and accuracy of the simulation time, a general criterion is proposed and applied to two representative problems. Applications of SEAKMC for investigating the diffusion of interstitials and vacancies in bcc iron are presented and compared directly with MD simulations, demonstrating that SEAKMC provides results that formerly could be obtained only through MD. The correlation factor for interstitial diffusion in the dumbbell configuration, which is extremely difficult to obtain using MD, is predicted using SEAKMC. The limitations of SEAKMC are also discussed. The paper presents a comprehensive picture of the SEAKMC method in both its unique predictive capabilities and technically important details.
Krueger, Rachel A.; Haibach, Frederick G.; Fry, Dana L.; Gomez, Maria A.
2015-04-21
A centrality measure based on the time of first returns rather than the number of steps is developed and applied to finding proton traps and access points to proton highways in the doped perovskite oxides: AZr{sub 0.875}D{sub 0.125}O{sub 3}, where A is Ba or Sr and the dopant D is Y or Al. The high centrality region near the dopant is wider in the SrZrO{sub 3} systems than the BaZrO{sub 3} systems. In the aluminum-doped systems, a region of intermediate centrality (secondary region) is found in a plane away from the dopant. Kinetic Monte Carlo (kMC) trajectories show that this secondary region is an entry to fast conduction planes in the aluminum-doped systems in contrast to the highest centrality area near the dopant trap. The yttrium-doped systems do not show this secondary region because the fast conduction routes are in the same plane as the dopant and hence already in the high centrality trapped area. This centrality measure complements kMC by highlighting key areas in trajectories. The limiting activation barriers found via kMC are in very good agreement with experiments and related to the barriers to escape dopant traps.
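The first-return idea can be illustrated on a toy graph: sites that a random walker revisits quickly are "central" (trap-like), while sites with long first-return times sit off the main conduction routes. The graph below is an invented stand-in for the proton-hop network, not the perovskite lattice itself.

```python
import random

def mean_first_return(adj, start, n_returns=20000, seed=3):
    """Average steps for an unweighted random walk to revisit its start node."""
    random.seed(seed)
    node, steps, total, returns = start, 0, 0, 0
    while returns < n_returns:
        node = random.choice(adj[node])   # uniform hop to a neighbor
        steps += 1
        if node == start:
            total += steps
            steps = 0
            returns += 1
    return total / n_returns

# toy network: node 0 is well connected, node 3 is a dead end
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
centrality = {v: 1.0 / mean_first_return(adj, v) for v in adj}
```

For an unweighted walk the stationary theory gives a mean return time of 2|E|/deg(v); here that is 8/3 steps for node 0 versus 8 for node 3, so the sampler ranks node 0 as most central. Rate-weighted hops, as in the kMC trajectories of the abstract, replace the uniform neighbor choice with rate-proportional sampling.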
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.; Morales, Miguel A.
2016-01-19
An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT-based first-principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC-based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP, and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.
von Wittenau, A; Aufderheide, M B; Henderson, G L
2010-05-07
Given the cost and lead-times involved in high-energy proton radiography, it is prudent to model proposed radiographic experiments to see if the images predicted would return useful information. We recently modified our raytracing transmission radiography modeling code HADES to perform simplified Monte Carlo simulations of the transport of protons in a proton radiography beamline. Beamline objects include the initial diffuser, vacuum magnetic fields, windows, angle-selecting collimators, and objects described as distorted 2D (planar or cylindrical) meshes or as distorted 3D hexahedral meshes. We present an overview of the algorithms used for the modeling and code timings for simulations through typical 2D and 3D meshes. We next calculate expected changes in image blur as scattering materials are placed upstream and downstream of a resolution test object (a 3 mm thick sheet of tantalum, into which 0.4 mm wide slits have been cut), and as the current supplied to the focusing magnets is varied. We compare and contrast the resulting simulations with the results of measurements obtained at the 800 MeV Los Alamos LANSCE Line-C proton radiography facility.
Boscoboinik, A. M.; Manzi, S. J.; Tysoe, W. T.; Pereyra, V. D.; Boscoboinik, J. A.
2015-09-10
The influence of directing agents in the self-assembly of molecular wires to produce two-dimensional electronic nanoarchitectures is studied here using a Monte Carlo approach to simulate the effect of arbitrarily locating nodal points on a surface, from which the growth of self-assembled molecular wires can be nucleated. This is compared to experimental results reported for the self-assembly of molecular wires when 1,4-phenylenediisocyanide (PDI) is adsorbed on Au(111). The latter results in the formation of (Au-PDI)_{n} organometallic chains, which were shown to be conductive when linked between gold nanoparticles on an insulating substrate. The present study analyzes, by means of stochastic methods, the influence of variables that affect the growth and design of self-assembled conductive nanoarchitectures, such as the distance between nodes, coverage of the monomeric units that leads to the formation of the desired architectures, and the interaction between the monomeric units. As a result, this study proposes an approach and sets the stage for the production of complex 2D nanoarchitectures using a bottom-up strategy but including the use of current state-of-the-art top-down technology as an integral part of the self-assembly strategy.
Characterizing the three-orbital Hubbard model with determinant quantum Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Kung, Y. F.; Chen, C. -C.; Wang, Yao; Huang, E. W.; Nowadnick, E. A.; Moritz, B.; Scalettar, R. T.; Johnston, S.; Devereaux, T. P.
2016-04-29
Here, we characterize the three-orbital Hubbard model using state-of-the-art determinant quantum Monte Carlo (DQMC) simulations with parameters relevant to the cuprate high-temperature superconductors. The simulations find that doped holes preferentially reside on oxygen orbitals and that the (π,π) antiferromagnetic ordering vector dominates in the vicinity of the undoped system, as known from experiments. The orbitally-resolved spectral functions agree well with photoemission spectroscopy studies and enable identification of orbital content in the bands. A comparison of DQMC results with exact diagonalization and cluster perturbation theory studies elucidates how these different numerical techniques complement one another to produce a more complete understanding of the model and the cuprates. Interestingly, our DQMC simulations predict a charge-transfer gap that is significantly smaller than the direct (optical) gap measured in experiment. Most likely, it corresponds to the indirect gap that has recently been suggested to be on the order of 0.8 eV, and demonstrates the subtlety in identifying charge gaps.
Electrolyte pore/solution partitioning by expanded grand canonical ensemble Monte Carlo simulation
Moucka, Filip; Bratko, Dusan; Luzar, Alenka
2015-03-28
Using a newly developed grand canonical Monte Carlo approach based on fractional exchanges of dissolved ions and water molecules, we studied equilibrium partitioning of both components between laterally extended apolar confinements and surrounding electrolyte solution. Accurate calculations of the Hamiltonian and tensorial pressure components at anisotropic conditions in the pore required the development of a novel algorithm for a self-consistent correction of nonelectrostatic cut-off effects. At pore widths above the kinetic threshold to capillary evaporation, the molality of the salt inside the confinement grows in parallel with that of the bulk phase, but presents a nonuniform width-dependence, being depleted at some and elevated at other separations. The presence of the salt enhances the layered structure in the slit and lengthens the range of inter-wall pressure exerted by the metastable liquid. Solvation pressure becomes increasingly repulsive with growing salt molality in the surrounding bath. Depending on the sign of the excess molality in the pore, the wetting free energy of pore walls is either increased or decreased by the presence of the salt. Because of simultaneous rise in the solution surface tension, which increases the free-energy cost of vapor nucleation, the rise in the apparent hydrophobicity of the walls has not been shown to enhance the volatility of the metastable liquid in the pores.
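The grand canonical insertion/deletion machinery referred to above can be sketched in its simplest textbook form. For an ideal gas the acceptance rules collapse to closed form and the exact answer ⟨N⟩ = zV provides a check (the activity z and volume V here are arbitrary toy values, and the paper's fractional-exchange refinement is not included):

```python
import random

def gcmc_ideal_gas(z=2.0, V=10.0, steps=200000, seed=3):
    """Grand canonical MC for an ideal gas (no interactions), where the
    standard GCMC acceptance probabilities reduce to:
      insert: min(1, z*V/(N+1))    delete: min(1, N/(z*V))
    The stationary distribution is Poisson with mean z*V, so the exact
    answer <N> = z*V lets us check the sampler.
    """
    rng = random.Random(seed)
    N, total = 0, 0
    for _ in range(steps):
        if rng.random() < 0.5:                       # attempt an insertion
            if rng.random() < min(1.0, z * V / (N + 1)):
                N += 1
        elif N > 0:                                  # attempt a deletion
            if rng.random() < min(1.0, N / (z * V)):
                N -= 1
        total += N
    return total / steps

print(round(gcmc_ideal_gas(), 1))
```

With interactions, each acceptance ratio picks up a Boltzmann factor of the energy change; the fractional-exchange scheme of the paper further smooths insertions in dense phases by growing particles gradually.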
A Monte Carlo Analysis of Gas Centrifuge Enrichment Plant Process Load Cell Data
Garner, James R; Whitaker, J Michael
2013-01-01
As uranium enrichment plants increase in number, capacity, and types of separative technology deployed (e.g., gas centrifuge, laser, etc.), more automated safeguards measures are needed to enable the IAEA to maintain safeguards effectiveness in a fiscally constrained environment. Monitoring load cell data can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards objective of confirming operations as declared (i.e., no undeclared activities), but care must be taken to fully protect the operator's proprietary and classified information related to operations. Staff at ORNL, LANL, JRC/ISPRA, and the University of Glasgow are investigating monitoring the process load cells at feed and withdrawal (F/W) stations to improve international safeguards at enrichment plants. A key question that must be resolved is the necessary frequency of recording data from the process F/W stations. Several studies have analyzed data collected at a fixed frequency. This paper contributes to load cell process monitoring research by presenting an analysis of Monte Carlo simulations to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.
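The effect of low-frequency sampling on a material balance can be illustrated with a toy feed-station signal (an idealized saw-tooth cylinder mass, not real plant data): drains between samples are tallied, and any drain that straddles an unobserved cylinder swap is lost, so the throughput estimate degrades as the sampling interval grows.

```python
def mass_signal(t):
    """Toy feed-cylinder mass: drains at 1 kg/h and is swapped for a full
    100 kg cylinder every 100 h. Purely illustrative, not F/W station data."""
    return 100.0 - (t % 100.0)

def throughput_estimate(dt, t_end=1000.0):
    """Estimate total material fed by summing only the observed decreases
    between consecutive samples taken every `dt` hours."""
    fed, m_prev = 0.0, mass_signal(0.0)
    t = dt
    while t <= t_end:
        m = mass_signal(t)
        if m < m_prev:          # a drain was observed between samples
            fed += m_prev - m
        # an increase signals a cylinder swap; the drain inside that
        # sampling interval is missed entirely
        m_prev, t = m, t + dt
    return fed

true_fed = 1000.0  # 1 kg/h for 1000 h
for dt in (1.0, 10.0, 60.0):
    est = throughput_estimate(dt)
    print(dt, est, round(true_fed - est, 1))
```

The bias grows roughly linearly with the sampling interval here because each swap hides up to one interval's worth of feed; real declarations-verification studies quantify this against the uncertainty targets of the material balance.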
Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem
Du, X.; Liu, T.; Ji, W.; Xu, X. G.; Brown, F. B.
2013-07-01
Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', which means that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely affects the performance of GPUs. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - was used for this study. ARCHER facilitates the development and testing of a MC code based on the vectorized MC algorithm implemented on GPUs by using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER{sub GPU} code was designed to solve a neutron eigenvalue problem and was tested on a NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances the warp execution efficiency, the overall simulation speed is ten times slower than the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large amount of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed. (authors)
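The contrast between history-based and event-based tracking can be sketched with a toy absorption/scattering process (a hypothetical collision model, not the ARCHER neutron physics): the history-based loop follows one particle to completion, while the event-based loop advances all live particles one event per sweep, which is the structure that maps onto GPU vector lanes without thread divergence.

```python
import random

def history_based(n, p_absorb=0.3, seed=7):
    """Track one particle at a time until absorption; tally collisions.
    The inner while-loop length varies per particle, which is the source
    of thread divergence on SIMD/GPU hardware."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(n):
        alive = True
        while alive:
            collisions += 1
            alive = rng.random() >= p_absorb
    return collisions

def event_based(n, p_absorb=0.3, seed=7):
    """Advance *all* live particles by one event per sweep: every lane does
    the same work each sweep, so sweeps map naturally onto vector lanes."""
    rng = random.Random(seed)
    alive, collisions = n, 0
    while alive:
        collisions += alive      # every live particle collides once
        alive = sum(1 for _ in range(alive) if rng.random() >= p_absorb)
    return collisions

# both estimate n * E[collisions per history] = n / p_absorb
print(history_based(10000), event_based(10000))
```

Both loops compute the same expectation; the abstract's finding is that the memory-access pattern of the event-based layout, not its arithmetic, dominated GPU performance.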
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
Harrisson, G.; Marleau, G.
2012-07-01
The Canadian SCWR has the potential to achieve the goals that the Generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the results most consistent with those of SERPENT. (authors)
The hydrophobic effect in a simple isotropic water-like model: Monte Carlo study
Huš, Matej; Urbic, Tomaz
2014-04-14
Using Monte Carlo computer simulations, we show that a simple isotropic water-like model with two characteristic lengths can reproduce the hydrophobic effect and the solvation properties of small and large non-polar solutes. Influence of temperature, pressure, and solute size on the thermodynamic properties of apolar solute solvation in a water model was systematically studied, showing two different solvation regimes. Small particles can fit into the cavities around the solvent particles, inducing additional order in the system and lowering the overall entropy. Large particles force the solvent to disrupt their network, increasing the entropy of the system. At low temperatures, the ordering effect of small solutes is very pronounced. Above the cross-over temperature, which strongly depends on the solute size, the entropy change becomes strictly positive. Pressure dependence was also investigated, showing a “cross-over pressure” where the entropy and enthalpy of solvation are the lowest. These results suggest two fundamentally different solvation mechanisms, as observed experimentally in water and computationally in various water-like models.
Saha, Krishnendu; Straus, Kenneth J.; Glick, Stephen J.; Chen, Yu.
2014-08-28
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated to imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at a 45% reduced noise level and a 1.5- to 3-fold improvement in resolution when compared to MLEM reconstruction using a simple line-integral model. The GATE-based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at the FOV periphery compared to line-integral-based system matrix reconstruction.
Byun, H. S.; Pirbadian, S.; Nakano, Aiichiro; Shi, Liang; El-Naggar, Mohamed Y.
2014-09-05
Microorganisms overcome the considerable hurdle of respiring extracellular solid substrates by deploying large multiheme cytochrome complexes that form 20 nanometer conduits to traffic electrons through the periplasm and across the cellular outer membrane. Here we report the first kinetic Monte Carlo simulations and single-molecule scanning tunneling microscopy (STM) measurements of the Shewanella oneidensis MR-1 outer membrane decaheme cytochrome MtrF, which can perform the final electron transfer step from cells to minerals and microbial fuel cell anodes. We find that the calculated electron transport rate through MtrF is consistent with previously reported in vitro measurements of the Shewanella Mtr complex, as well as in vivo respiration rates on electrode surfaces assuming a reasonable (experimentally verified) coverage of cytochromes on the cell surface. The simulations also reveal a rich phase diagram in the overall electron occupation density of the hemes as a function of electron injection and ejection rates. Single molecule tunneling spectroscopy confirms MtrF's ability to mediate electron transport between an STM tip and an underlying Au(111) surface, but at rates higher than expected from previously calculated heme-heme electron transfer rates for solvated molecules.
Uribe, R. M.; Salvat, F.; Cleland, M. R.; Berejka, A.
2009-03-10
The Monte Carlo code PENELOPE was used to simulate the irradiation of alanine coated film dosimeters with electron beams of energies from 1 to 5 MeV being produced by a high-current industrial electron accelerator. This code includes a geometry package that defines complex quadratic geometries, such as those of the irradiation of products in an irradiation processing facility. In the present case the energy deposited on a water film at the surface of a wood parallelepiped was calculated using the program PENMAIN, which is a generic main program included in the PENELOPE distribution package. The results from the simulation were then compared with measurements performed by irradiating alanine film dosimeters with electrons using a 150 kW Dynamitron electron accelerator. The alanine films were placed on top of a set of wooden planks using the same geometrical arrangement as the one used for the simulation. The way the results from the simulation can be correlated with the actual measurements, taking into account the irradiation parameters, is described. An estimation of the percentage difference between measurements and calculations is also presented.
Code System for Monte Carlo Simulation of Electron and Photon Transport.
Energy Science and Technology Software Center (OSTI)
2015-07-01
Version 01 PENELOPE performs Monte Carlo simulation of coupled electron-photon transport in arbitrary materials and complex quadric geometries. A mixed procedure is used for the simulation of electron and positron interactions (elastic scattering, inelastic scattering and bremsstrahlung emission), in which hard events (i.e. those with deflection angle and/or energy loss larger than pre-selected cutoffs) are simulated in a detailed way, while soft interactions are calculated from multiple scattering approaches. Photon interactions (Rayleigh scattering, Compton scattering, photoelectric effect and electron-positron pair production) and positron annihilation are simulated in a detailed way. PENELOPE reads the required physical information about each material (which includes tables of physical properties, interaction cross sections, relaxation data, etc.) from the input material data file. The material data file is created by means of the auxiliary program MATERIAL, which extracts atomic interaction data from the database of ASCII files. PENELOPE mailing list archives and additional information about the code can be found at http://www.nea.fr/lists/penelope.html. See Abstract for additional features.
Monte Carlo modeling of neutron and gamma-ray imaging systems
Hall, J.
1996-04-01
Detailed numerical prototypes are essential to the design of efficient and cost-effective neutron and gamma-ray imaging systems. We have exploited the unique capabilities of an LLNL-developed radiation transport code (COG) to develop code modules capable of simulating the performance of neutron and gamma-ray imaging systems over a wide range of source energies. COG allows us to simulate complex, energy-, angle-, and time-dependent radiation sources, model 3-dimensional system geometries with "real world" complexity, specify detailed elemental and isotopic distributions, and predict the responses of various types of imaging detectors with full Monte Carlo accuracy. COG references detailed, evaluated nuclear interaction databases, allowing users to account for multiple scattering, energy straggling, and secondary particle production phenomena which may significantly affect the performance of an imaging system but may be difficult or even impossible to estimate using simple analytical models. This work presents examples illustrating the use of these routines in the analysis of industrial radiographic systems for thick target inspection, nonintrusive luggage and cargo scanning systems, and international treaty verification.
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
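The core DREAM move, differential-evolution proposals between parallel chains followed by a Metropolis accept/reject, can be sketched as follows (a bare DE-MC skeleton with toy settings; the randomized-subspace updates and adaptive crossover of the full DREAM algorithm are omitted):

```python
import math
import random

def de_mc(logpost, nchains=8, ndim=2, iters=3000, seed=5):
    """Differential-Evolution MCMC sketch (the core move behind DREAM):
    chain i proposes x_i + gamma*(x_a - x_b) + small noise, using two other
    randomly chosen chains a, b, then applies a Metropolis accept/reject.
    The population's own spread sets the proposal scale and orientation."""
    rng = random.Random(seed)
    gamma = 2.38 / math.sqrt(2 * ndim)          # standard DE-MC scale factor
    chains = [[rng.gauss(0, 5) for _ in range(ndim)] for _ in range(nchains)]
    logp = [logpost(x) for x in chains]
    samples = []
    for it in range(iters):
        for i in range(nchains):
            a, b = rng.sample([j for j in range(nchains) if j != i], 2)
            prop = [chains[i][d] + gamma * (chains[a][d] - chains[b][d])
                    + rng.gauss(0, 1e-4) for d in range(ndim)]
            lp = logpost(prop)
            if math.log(rng.random()) < lp - logp[i]:
                chains[i], logp[i] = prop, lp
        if it >= iters // 2:                    # keep the second half
            samples.extend([c[:] for c in chains])
    return samples

# target: standard 2-D Gaussian, so each coordinate should have mean 0, var 1
logpost = lambda x: -0.5 * sum(v * v for v in x)
s = de_mc(logpost)
mean0 = sum(x[0] for x in s) / len(s)
var0 = sum(x[0] ** 2 for x in s) / len(s)
print(round(mean0, 2), round(var0, 2))
```

Because proposals are differences of chain states, the move self-adapts to the target's covariance, which is exactly the failure mode of a fixed proposal distribution that the abstract describes.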
Quantum Monte Carlo calculation of the binding energy of the beryllium dimer
Deible, Michael J.; Kessler, Melody; Gasperich, Kevin E.; Jordan, Kenneth D.
2015-08-28
The accurate calculation of the binding energy of the beryllium dimer is a challenging theoretical problem. In this study, the binding energy of Be{sub 2} is calculated using the diffusion Monte Carlo (DMC) method, using single Slater determinant and multiconfigurational trial functions. DMC calculations using single-determinant trial wave functions of orbitals obtained from density functional theory calculations overestimate the binding energy, while DMC calculations using Hartree-Fock or CAS(4,8), complete active space trial functions significantly underestimate the binding energy. In order to obtain an accurate value of the binding energy of Be{sub 2} from DMC calculations, it is necessary to employ trial functions that include excitations outside the valence space. Our best estimate DMC result for the binding energy of Be{sub 2}, obtained by using configuration interaction trial functions and extrapolating in the threshold for the configurations retained in the trial function, is 908 cm{sup −1}, only slightly below the 935 cm{sup −1} value derived from experiment.
MONTE CARLO SIMULATIONS OF PERIODIC PULSED REACTOR WITH MOVING GEOMETRY PARTS
Cao, Yan; Gohar, Yousry
2015-11-01
In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical to produce periodic power pulses. Such periodic state change is accomplished by a periodic movement of specific reactor parts, such as control rods or reflector sections. The analysis of such a reactor is difficult to perform with current reactor physics computer programs. Based on past experience, the point kinetics approximation gives considerable errors in predicting the magnitude and the shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the TRCL/TR transformation feature of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during a neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the reactor dynamics with movable geometry parts.
Calculation of complete fusion cross sections of heavy ion reactions using the Monte Carlo method
Ghodsi, O. N.; Mahmoodi, M.; Ariai, J.
2007-03-15
The nucleus-nucleus potential for the fusion reactions {sup 40}Ca+{sup 48}Ca, {sup 16}O+{sup 208}Pb, and {sup 48}Ca+{sup 48}Ca has been calculated using the Monte Carlo method. The results obtained indicate that the technique employed for the calculation of the nucleus-nucleus potential is an efficient one. The effects of the spin and isospin terms have also been studied using the same technique. The analysis of the results obtained for the {sup 48}Ca+{sup 48}Ca reaction reveals that the isospin-dependent term in the nucleon-nucleon potential causes the nuclear potential to drop by 0.5 MeV. The analytical calculations of the fusion cross section, particularly those at energies below the fusion barrier, are in good agreement with the experimental data. In these calculations the effective nucleon-nucleon potential chosen is of the M3Y-Paris form and no adjustable parameter has been used.
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations, and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley-Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
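The telescoping MLMC estimator can be sketched on a toy Ornstein-Uhlenbeck-type SDE (an invented test problem, not the two-phase transport solver above): coupled coarse/fine Euler paths share Brownian increments, and finer levels need fewer samples because the fine-minus-coarse corrections have small variance.

```python
import math
import random

def euler_pair(level, rng, T=1.0, sigma=0.5):
    """Simulate coupled coarse/fine Euler paths of dX = -X dt + sigma dW,
    sharing Brownian increments. Level l uses 2**l fine steps; the coarse
    path takes one step of size 2*dt per two fine steps."""
    n = 2 ** level
    dt = T / n
    xf = xc = 1.0
    dw_sum = 0.0
    for k in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        xf += -xf * dt + sigma * dw
        dw_sum += dw
        if level > 0 and k % 2 == 1:
            xc += -xc * (2 * dt) + sigma * dw_sum
            dw_sum = 0.0
    return xf, xc

def mlmc(levels=4, n0=4000, seed=11):
    """Telescoping MLMC estimate of E[X_T]: coarse-level mean plus mean
    fine-minus-coarse corrections, halving the sample count per level."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(levels + 1):
        n = max(n0 // 2 ** l, 50)
        if l == 0:
            est += sum(euler_pair(0, rng)[0] for _ in range(n)) / n
        else:
            pairs = [euler_pair(l, rng) for _ in range(n)]
            est += sum(f - c for f, c in pairs) / n
    return est

print(round(mlmc(), 3))  # exact continuous-time answer is E[X_T] = exp(-1)
```

Most of the cost sits on the cheap coarse level, while the expensive fine levels only estimate small corrections; this is the mechanism by which MLMC accelerates plain MC.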
Cho, S; Shin, E H; Kim, J; Ahn, S H; Chung, K; Kim, D-H; Han, Y; Choi, D H
2015-06-15
Purpose: To evaluate the shielding wall design that protects patients, staff, and members of the general public from secondary neutrons, using a simple analytic solution and the Monte Carlo codes MCNPX, ANISN, and FLUKA. Methods: Analytical and multi-code Monte Carlo calculations were performed for the proton facility (Sumitomo Heavy Industries, Ltd.) at Samsung Medical Center in Korea. The NCRP-144 analytical evaluation methods, which produce conservative estimates of the dose equivalent values for the shielding, were used for the analytical evaluations. The radiation transport was then simulated with the Monte Carlo codes. The neutron dose at each evaluation point is obtained as the product of the simulated value and the neutron dose coefficient introduced in ICRP-74. Results: The evaluation points at the accelerator control room and the control room entrance are mainly influenced by the location of the proton beam loss. The neutron dose equivalent at the accelerator control room evaluation point is 0.651, 1.530, 0.912, and 0.943 mSv/yr, and at the entrance of the cyclotron room it is 0.465, 0.790, 0.522, and 0.453 mSv/yr, as calculated by the NCRP-144 formalism, ANISN, FLUKA, and MCNP, respectively. Most of the MCNPX and FLUKA results, which use the complicated geometry, were smaller than the ANISN results. Conclusion: The neutron shielding for a proton therapy facility has been evaluated by the analytic model and Monte Carlo methods. We confirmed the adequacy of the shielding for areas accessible to people when the proton facility is operated.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Application of Distribution Transformer Thermal Life Models to Electrified Vehicle Charging Loads Using Monte-Carlo Method: Preprint. Michael Kuss, Tony Markel, and William Kramer. Presented at the 25th World Battery, Hybrid and Fuel Cell Electric Vehicle Symposium & Exhibition, Shenzhen, China, November 5-9, 2010. Conference Paper NREL/CP-5400-48827, January 2011.
Çatlı, Serap; Tanır, Güneş
2013-10-01
The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS), at 6 and 18 MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and ionization chamber measurements. The present findings show that if a high-Z material is used in a prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased, backscattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method. The findings show that for accurate dose calculation a Monte Carlo-based TPS should be used in patients with hip prostheses.
Sadeghi, Mahdi; Raisali, Gholamreza; Hosseini, S. Hamed; Shavar, Arzhang
2008-04-15
This article presents a brachytherapy source having {sup 103}Pd adsorbed onto a cylindrical silver rod that has been developed by the Agricultural, Medical, and Industrial Research School for permanent implant applications. Dosimetric characteristics (radial dose function, anisotropy function, and anisotropy factor) of this source were experimentally and theoretically determined in terms of the updated AAPM Task group 43 (TG-43U1) recommendations. Monte Carlo simulations were used to calculate the dose rate constant. Measurements were performed using TLD-GR200A circular chip dosimeters using standard methods employing thermoluminescent dosimeters in a Perspex phantom. Precision machined bores in the phantom located the dosimeters and the source in a reproducible fixed geometry, providing for transverse-axis and angular dose profiles over a range of distances from 0.5 to 5 cm. The Monte Carlo N-particle (MCNP) code, version 4C simulation techniques have been used to evaluate the dose-rate distributions around this model {sup 103}Pd source in water and Perspex phantoms. The Monte Carlo calculated dose rate constant of the IRA-{sup 103}Pd source in water was found to be 0.678 cGy h{sup -1} U{sup -1} with an approximate uncertainty of {+-}0.1%. The anisotropy function, F(r,{theta}), and the radial dose function, g(r), of the IRA-{sup 103}Pd source were also measured in a Perspex phantom and calculated in both Perspex and liquid water phantoms.
Camden, Jon P
2013-07-16
A major component of this proposal is to elucidate the connection between optical and electron excitation of plasmon modes in metallic nanostructures. These accomplishments are reported: developed a routine protocol for obtaining spatially resolved, low-energy EELS spectra and resonance Rayleigh scattering spectra from the same nanostructures; correlated optical scattering spectra and plasmon maps obtained using STEM/EELS; and imaged electromagnetic hot spots responsible for single-molecule surface-enhanced Raman scattering (SMSERS).
SU-E-T-584: Commissioning of the MC2 Monte Carlo Dose Computation Engine
Titt, U; Mirkovic, D; Liu, A; Ciangaru, G; Mohan, R; Anand, A; Perles, L
2014-06-01
Purpose: An automated system, MC2, was developed to convert DICOM proton therapy treatment plans into a sequence of MCNPX input files and submit these to a computing cluster. MC2 converts the results into DICOM format, and any treatment planning system can import the data for comparison against conventional dose predictions. This work describes the data and the efforts made to validate the MC2 system against measured dose profiles, and how the system was calibrated to predict the correct number of monitor units (MUs) to deliver the prescribed dose. Methods: A set of simulated lateral and longitudinal profiles was compared to data measured for commissioning purposes and during annual quality assurance efforts. Acceptance criteria were relative dose differences smaller than 3% and differences in range (in water) of less than 2 mm. For two out of three double scattering beam lines, validation results were already published; spot checks were performed to assure proper performance. For the small snout, all available measurements were used for validation against simulated data. To calibrate the dose per MU, the energy deposition per source proton at the center of the spread-out Bragg peaks (SOBPs) was recorded for a set of SOBPs from each option. These were then scaled to the results of dose-per-MU determination based on published methods. The simulations of the doses in the magnetically scanned beam line were also validated against measured longitudinal and lateral profiles. The source parameters were fine-tuned to achieve maximum agreement with measured data. The dosimetric calibration was performed by scoring energy deposition per proton and scaling the results to a standard dose measurement of a 10 × 10 × 10 cm{sup 3} volume irradiation using 100 MU. Results: All simulated data passed the acceptance criteria. Conclusion: MC2 is fully validated and ready for clinical application.
Dupuis, Paul
2014-03-14
This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.
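The inefficiency described above, where standard Monte Carlo wastes almost all samples away from the rare event, is commonly addressed by importance sampling: draw from a distribution concentrated on the rare set and reweight by the likelihood ratio. A sketch for a Gaussian tail probability (an illustrative textbook case, not a method claimed by this proposal):

```python
import math
import random

def naive_tail_estimate(n, t, rng):
    # Direct Monte Carlo estimate of P(X > t), X ~ N(0, 1): almost every
    # sample misses the rare set, so the estimator is very noisy.
    return sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > t) / n

def importance_tail_estimate(n, t, rng):
    # Sample from the shifted law N(t, 1) so the rare set is hit about half
    # the time, then reweight each hit by the density ratio N(0,1)/N(t,1),
    # which works out to exp(-t*x + t^2/2).
    total = 0.0
    for _ in range(n):
        x = rng.gauss(t, 1.0)
        if x > t:
            total += math.exp(-t * x + 0.5 * t * t)  # likelihood ratio
    return total / n

rng = random.Random(1)
est = importance_tail_estimate(20000, 4.0, rng)  # true value ~3.17e-5
```

With a 4-sigma threshold, the naive estimator would need on the order of 10^7 samples for a usable answer; the reweighted estimator reaches sub-percent relative error with 2 × 10^4.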
Yousef, Adel K. M.; Taha, Ziad A.; Shehab, Abeer A.
2011-01-17
This paper describes the development of a computer model used to analyze the heat flow during pulsed Nd:YAG laser spot welding of dissimilar metals: low carbon steel (1020) to aluminum alloy (6061). The model is built using ANSYS FLUENT 3.6 software, with the simulated environments chosen to match the experimental conditions as closely as possible. A simulation analysis was implemented based on conduction heat transfer outside the keyhole, where no melting occurs. The effects of laser power and pulse duration were studied. Three peak powers of 1, 1.66, and 2.5 kW were applied during pulsed laser spot welding (keeping the energy constant), and the effect of two pulse durations, 4 and 8 ms (with constant peak power), on the transient temperature distribution and weld pool dimensions was predicted using the present simulation. It was found that the present simulation model can give an indication for choosing suitable laser parameters (i.e., pulse duration, peak power, and interaction time) during pulsed laser spot welding of dissimilar metals.
Both, Stefan; Shen, Jiajian; Kirk, Maura; Lin, Liyong; Tang, Shikui; Alonso-Basanta, Michelle; Lustig, Robert; Lin, Haibo; Deville, Curtiland; Hill-Kayser, Christine; Tochner, Zelig; McDonough, James
2014-09-01
Purpose: To report on a universal bolus (UB) designed to replace the range shifter (RS); the UB allows the treatment of shallow tumors while keeping the pencil beam scanning (PBS) spot size small. Methods and Materials: Ten patients with brain cancers treated from 2010 to 2011 were planned using the PBS technique with bolus and the RS. In-air spot sizes of the pencil beam were measured and compared for 4 conditions (open field, with RS, and with UB at 2- and 8-cm air gap) in isocentric geometry. The UB was applied in our clinic to treat brain tumors, and the plans with UB were compared with the plans with RS. Results: A UB of 5.5 cm water equivalent thickness was found to meet the needs of the majority of patients. By using the UB, the PBS spot sizes are similar to those of the open beam (P>.1). The heterogeneity index was found to be approximately 10% lower for the UB plans than for the RS plans. The coverage for plans with UB is more conformal than for plans with RS; the largest increase in sparing is usually for peripheral organs at risk. Conclusions: The integrity of the physical properties of the PBS beam can be maintained using a UB that allows for highly conformal PBS treatment design, even in the simple geometry of a fixed beam line when noncoplanar beams are used.
Pugh, Thomas J. [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Munsell, Mark F. [Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Choi, Seungtaek; Nguyen, Quynh Nhu; Mathai, Benson [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Zhu, X. Ron; Sahoo, Narayan; Gillin, Michael; Johnson, Jennifer L.; Amos, Richard A. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Dong, Lei [Scripps Proton Therapy Center, San Diego, California (United States); Mahmood, Usama; Kuban, Deborah A.; Frank, Steven J.; Hoffman, Karen E.; McGuire, Sean E. [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Lee, Andrew K., E-mail: aklee@mdanderson.org [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)
2013-12-01
Purpose: To report quality of life (QOL)/toxicity in men treated with proton beam therapy for localized prostate cancer and to compare outcomes between passively scattered proton therapy (PSPT) and spot-scanning proton therapy (SSPT). Methods and Materials: Men with localized prostate cancer enrolled on a prospective QOL protocol with a minimum of 2 years' follow-up were reviewed. Comparative groups were defined by technique (PSPT vs SSPT). Patients completed Expanded Prostate Cancer Index Composite questionnaires at baseline and every 3-6 months after proton beam therapy. Clinically meaningful differences in QOL were defined as ≥0.5 baseline standard deviation. The cumulative incidence of modified Radiation Therapy Oncology Group grade ≥2 gastrointestinal (GI) or genitourinary (GU) toxicity and argon plasma coagulation were determined by the Kaplan-Meier method. Results: A total of 226 men received PSPT, and 65 received SSPT. Both PSPT and SSPT resulted in statistically significant changes in sexual, urinary, and bowel Expanded Prostate Cancer Index Composite summary scores. Only bowel summary, function, and bother resulted in clinically meaningful decrements beyond treatment completion. The decrement in bowel QOL persisted through 24-month follow-up. Cumulative grade ≥2 GU and GI toxicity at 24 months were 13.4% and 9.6%, respectively. There was 1 grade 3 GI toxicity (PSPT group) and no other grade ≥3 GI or GU toxicity. Argon plasma coagulation application was infrequent (PSPT 4.4% vs SSPT 1.5%; P=.21). No statistically significant differences were appreciated between PSPT and SSPT regarding toxicity or QOL. Conclusion: Both PSPT and SSPT confer low rates of grade ≥2 GI or GU toxicity, with preservation of meaningful sexual and urinary QOL at 24 months. A modest, yet clinically meaningful, decrement in bowel QOL was seen throughout follow-up. No toxicity or QOL differences between PSPT and SSPT were identified. Long-term comparative results in a larger patient
Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-03-15
Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multileaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in the beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and to treatment with a bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single segment electron treatment plan in terms of organ at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V{sub 95%} increased from 90% to 96% and V{sub 107%} decreased from 8% to nearly 0%. For a clinical breast boost irradiation, the MERT plan
Integrated TIGER Series of Coupled Electron/Photon Monte Carlo Transport Codes System.
Energy Science and Technology Software Center (OSTI)
2012-11-30
Version: 00 Distribution is restricted to US Government Agencies and Their Contractors Only. The Integrated Tiger Series (ITS) is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. The goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 95. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.
Energy Science and Technology Software Center (OSTI)
2012-04-15
Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated particles and their subsequent interactions as close as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however, subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0, which currently is distributed in the RSICC PACKAGE BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.
BENCHMARK TESTS FOR MARKOV CHAIN MONTE CARLO FITTING OF EXOPLANET ECLIPSE OBSERVATIONS
Rogers, Justin; Lopez-Morales, Mercedes; Apai, Daniel; Adams, Elisabeth
2013-04-10
Ground-based observations of exoplanet eclipses provide important clues to the planets' atmospheric physics, yet systematics in light curve analyses are not fully understood. It is unknown if measurements suggesting near-infrared flux densities brighter than models predict are real, or artifacts of the analysis processes. We created a large suite of model light curves, using both synthetic and real noise, and tested the common process of light curve modeling and parameter optimization with a Markov Chain Monte Carlo algorithm. With synthetic white noise models, we find that input eclipse signals are generally recovered within 10% accuracy for eclipse depths greater than the noise amplitude, and to smaller depths for higher sampling rates and longer baselines. Red noise models see greater discrepancies between input and measured eclipse signals, often biased in one direction. Further, we find that in real data, systematic biases result even with a complex model to account for trends, and significant false eclipse signals may appear in a non-Gaussian distribution. To quantify the bias and validate an eclipse measurement, we compare both the planet-hosting star and several of its neighbors to a separately chosen control sample of field stars. Re-examining the Rogers et al. Ks-band measurement of CoRoT-1b finds an eclipse 3190{sup +370}{sub -440} ppm deep centered at {phi}{sub me} = 0.50418{sup +0.00197}{sub -0.00203}. Finally, we provide and recommend the use of selected data sets we generated as a benchmark test for eclipse modeling and analysis routines, and propose criteria to verify eclipse detections.
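The light-curve fitting procedure exercised above can be sketched with a one-parameter Metropolis sampler on a synthetic box-shaped eclipse with white noise. The model shape, noise level, and proposal width below are illustrative placeholders, not the paper's pipeline:

```python
import math
import random

def eclipse_model(t, depth, t0=0.5, half_dur=0.05):
    # Box-shaped secondary eclipse: flux drops by `depth` during occultation.
    return 1.0 - depth if abs(t - t0) < half_dur else 1.0

def log_like(depth, times, flux, sigma):
    # Gaussian (white-noise) log-likelihood, up to an additive constant.
    return -0.5 * sum((f - eclipse_model(t, depth)) ** 2
                      for t, f in zip(times, flux)) / sigma ** 2

# Synthetic light curve with white noise (illustrative numbers only).
rng = random.Random(0)
sigma, true_depth = 1e-3, 3e-3
times = [i / 300.0 for i in range(300)]
flux = [eclipse_model(t, true_depth) + rng.gauss(0.0, sigma) for t in times]

# One-parameter Metropolis sampler for the eclipse depth.
depth = 1e-3
ll = log_like(depth, times, flux, sigma)
chain = []
for _ in range(3000):
    prop = depth + rng.gauss(0.0, 5e-4)          # random-walk proposal
    ll_prop = log_like(prop, times, flux, sigma)
    if ll_prop - ll > math.log(rng.random()):    # Metropolis accept/reject
        depth, ll = prop, ll_prop
    chain.append(depth)

posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

With white noise and a depth well above the per-point scatter, the posterior mean lands close to the injected depth, consistent with the recovery behavior described above; the interesting failure modes appear once red noise is substituted.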
Silva-Rodríguez, Jesús; Aguiar, Pablo; Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela, 15782, Galicia; Grupo de Imaxe Molecular, Instituto de Investigación Sanitaria, Santiago de Compostela, 15706, Galicia; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor; Cortés, Julia; Garrido, Miguel; Pombar, Miguel; Ruibal, Álvaro; Grupo de Imaxe Molecular, Instituto de Investigación Sanitaria, Santiago de Compostela, 15706, Galicia; Fundación Tejerina, 28003, Madrid
2014-05-15
Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) values for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI-based method. In addition, the 50 patients with the highest extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses make it possible to compensate for the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%), with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify the fraction of patients that might be corrected for the paravenous injection effect. Conclusions: The authors propose the use of a manual ROI-based method for estimating the effectively administered FDG dose and then correcting SUV quantification in those patients fulfilling the proposed criterion.
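The proposed correction amounts to replacing the nominal injected dose in the SUV denominator with the effectively administered dose. A minimal sketch with made-up patient numbers (only the 22% worst-case extravasated fraction comes from the abstract):

```python
def suv(conc_kbq_per_ml, injected_dose_mbq, weight_kg):
    """Body-weight SUV: tissue concentration divided by injected dose per
    gram of body weight (1 mL of tissue taken as 1 g)."""
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = weight_kg * 1000.0
    return conc_kbq_per_ml / (dose_kbq / weight_g)

def suv_corrected(conc_kbq_per_ml, nominal_dose_mbq, weight_kg,
                  extravasated_fraction):
    # Replace the nominal injected dose with the effectively administered
    # dose, as proposed for studies with visible paravenous injection.
    effective_dose = nominal_dose_mbq * (1.0 - extravasated_fraction)
    return suv(conc_kbq_per_ml, effective_dose, weight_kg)

# Illustrative patient: 370 MBq nominal injection, 70 kg, and the worst-case
# 22% extravasated fraction reported in the study.
suv_nominal = suv(5.0, 370.0, 70.0)
suv_true = suv_corrected(5.0, 370.0, 70.0, 0.22)  # higher: less dose on board
```

Since less activity actually reached circulation, the corrected SUV is systematically higher than the nominal one; ignoring extravasation therefore underestimates uptake.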
Monte Carlo studies of medium-size telescope designs for the Cherenkov Telescope Array
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Wood, M. D.; Jogler, T.; Dumm, J.; Funk, S.
2015-06-07
In this paper, we present studies for optimizing the next generation of ground-based imaging atmospheric Cherenkov telescopes (IACTs). Results focus on mid-sized telescopes (MSTs) for CTA, detecting very high energy gamma rays in the energy range from a few hundred GeV to a few tens of TeV. We describe a novel, flexible detector Monte Carlo package, FAST (FAst Simulation for imaging air cherenkov Telescopes), that we use to simulate different array and telescope designs. The simulation is somewhat simplified to allow for efficient exploration over a large telescope design parameter space. We investigate a wide range of telescope performance parameters including optical resolution, camera pixel size, and light collection area. In order to ensure a comparison of the arrays at their maximum sensitivity, we analyze the simulations with the most sensitive techniques used in the field, such as maximum likelihood template reconstruction and boosted decision trees for background rejection. Choosing telescope design parameters representative of the proposed Davies–Cotton (DC) and Schwarzschild–Couder (SC) MST designs, we compare the performance of the arrays by examining the gamma-ray angular resolution and differential point-source sensitivity. We further investigate the array performance under a wide range of conditions, determining the impact of the number of telescopes, telescope separation, night sky background, and geomagnetic field. We find a 30–40% improvement in the gamma-ray angular resolution at all energies when comparing arrays with an equal number of SC and DC telescopes, significantly enhancing point-source sensitivity in the MST energy range. Finally, we attribute the increase in point-source sensitivity to the improved optical point-spread function and smaller pixel size of the SC telescope design.
Structural Stability and Defect Energetics of ZnO from Diffusion Quantum Monte Carlo
Santana Palacio, Juan A.; Krogel, Jaron T.; Kim, Jeongnim; Kent, Paul R.; Reboredo, Fernando A.
2015-04-28
We have applied the many-body ab-initio diffusion quantum Monte Carlo (DMC) method to study Zn and ZnO crystals under pressure, and the energetics of the oxygen vacancy, zinc interstitial and hydrogen impurities in ZnO. We show that DMC is an accurate and practical method that can be used to characterize multiple properties of materials that are challenging for density functional theory approximations. DMC agrees with experimental measurements to within 0.3 eV, including the band-gap of ZnO, the ionization potential of O and Zn, and the atomization energy of O2, ZnO dimer, and wurtzite ZnO. DMC predicts the oxygen vacancy as a deep donor with a formation energy of 5.0(2) eV under O-rich conditions and thermodynamic transition levels located between 1.8 and 2.5 eV from the valence band maximum. Our DMC results indicate that the concentration of zinc interstitial and hydrogen impurities in ZnO should be low under n-type, and Zn- and H-rich conditions because these defects have formation energies above 1.4 eV under these conditions. Comparison of DMC and hybrid functionals shows that these DFT approximations can be parameterized to yield a generally correct qualitative description of ZnO. However, the formation energy of defects in ZnO evaluated with DMC and hybrid functionals can differ by more than 0.5 eV.
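The defect formation energies quoted above follow the standard expression E_f = E_defect - E_host + sum_i(n_i mu_i) + q(E_VBM + E_F). A sketch of that bookkeeping; the total energies below are made-up placeholders chosen only so the result reproduces the 5.0 eV oxygen-vacancy value, and are not the DMC data:

```python
def formation_energy(e_defect, e_host, mu_removed, q=0, e_vbm=0.0, e_fermi=0.0):
    """Standard defect formation energy (eV):
    E_f = E_defect - E_host + sum(mu_i of removed atoms) + q * (E_VBM + E_F).
    `mu_removed` lists the chemical potentials of atoms taken out of the host.
    """
    return e_defect - e_host + sum(mu_removed) + q * (e_vbm + e_fermi)

# Neutral oxygen vacancy under O-rich conditions, with mu_O = E(O2)/2.
# Placeholder supercell total energies (eV), NOT DMC results:
e_f = formation_energy(e_defect=-432.0, e_host=-441.5, mu_removed=[-4.5])
```

For a charged vacancy the q(E_VBM + E_F) term makes E_f a linear function of the Fermi level; the crossing points of those lines are the thermodynamic transition levels the abstract places between 1.8 and 2.5 eV.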
Minibeam radiation therapy for the management of osteosarcomas: A Monte Carlo study
Martínez-Rovira, I.; Prezado, Y.
2014-06-15
Purpose: Minibeam radiation therapy (MBRT) exploits the well-established tissue-sparing effect provided by the combination of submillimetric field sizes and a spatial fractionation of the dose. The aim of this work is to evaluate the feasibility and potential therapeutic gain of MBRT, in comparison with conventional radiotherapy, for osteosarcoma treatments. Methods: Monte Carlo simulations (PENELOPE/PENEASY code) were used to study the dose distributions resulting from MBRT irradiations of rat femur and realistic human femur phantoms. As figures of merit, peak and valley doses and peak-to-valley dose ratios (PVDR) were assessed. Conversion of absorbed dose to normalized total dose (NTD) was performed in the human case. Several field sizes and irradiation geometries were evaluated. Results: It is feasible to deliver a uniform dose distribution in the target while the healthy tissue benefits from a spatial fractionation of the dose. Very high PVDR values (≥20) were achieved in the entrance beam path in the rat case. PVDR values ranged from 2 to 9 in the human phantom. NTD{sub 2.0} of 87 Gy might be reached in the tumor in the human femur, while the healthy tissues might receive valley NTD{sub 2.0} lower than 20 Gy. The doses in the tumor and healthy tissues might thus be significantly higher and lower, respectively, than those commonly delivered in conventional radiotherapy. Conclusions: The obtained dose distributions indicate that a gain in normal tissue sparing might be expected. This would allow the use of higher (and potentially curative) doses in the tumor. Biological experiments are warranted.
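The PVDR figure of merit used above is simply the ratio of the mean peak dose to the mean valley dose across the spatially fractionated lateral profile. A sketch on a toy minibeam profile (the dose values are illustrative, not simulation output):

```python
def pvdr(profile):
    """Peak-to-valley dose ratio: mean of the local maxima (peak doses)
    divided by the mean of the local minima (valley doses)."""
    triples = list(zip(profile, profile[1:], profile[2:]))
    peaks = [mid for lo, mid, hi in triples if mid >= lo and mid >= hi]
    valleys = [mid for lo, mid, hi in triples if mid <= lo and mid <= hi]
    return (sum(peaks) / len(peaks)) / (sum(valleys) / len(valleys))

# Toy spatially fractionated profile (percent dose): four minibeams with
# 100% peaks and 5% valleys between them.
profile = [5, 30, 100, 30, 5, 5, 30, 100, 30, 5,
           5, 30, 100, 30, 5, 5, 30, 100, 30, 5]
ratio = pvdr(profile)  # 100 / 5 = 20
```

A high PVDR means the inter-beam tissue sits in a deep dose trough, which is the condition credited for the tissue-sparing effect.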
SU-E-T-323: The FLUKA Monte Carlo Code in Ion Beam Therapy
Rinaldi, I
2014-06-01
Purpose: Monte Carlo (MC) codes are increasingly used in the ion beam therapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code demands accurate and reliable physical models for the transport and interaction of all components of the mixed radiation field. This contribution presents an overview of the recent developments in the FLUKA code oriented to its application in ion beam therapy. Methods: FLUKA is a general purpose MC code which allows the calculation of particle transport and interactions with matter, covering an extended range of applications. The user can manage the code through a graphical interface (FLAIR) developed using the Python programming language. Results: This contribution presents recent refinements in the description of the ionization processes and comparisons between FLUKA results and experimental data from ion beam therapy facilities. Moreover, several validations of the largely improved FLUKA nuclear models for imaging applications in treatment monitoring are shown. The complex calculation of prompt gamma ray emission compares favorably with experimental data and can be considered adequate for the intended applications. New features in the modeling of proton induced nuclear interactions also provide reliable cross section predictions for the production of radionuclides. Of great interest for the community are the developments introduced in FLAIR. The most recent efforts concern the capability of importing computed-tomography images in order to build patient geometries automatically, and the implementation of different types of existing positron-emission-tomography scanner devices for imaging applications. Conclusion: The FLUKA code has already been chosen as the reference MC code in many ion beam therapy centers, and is being continuously improved in order to match the needs of ion beam therapy applications. Parts of this work have been supported by the European
SU-E-T-238: Monte Carlo Estimation of Cerenkov Dose for Photo-Dynamic Radiotherapy
Chibani, O; Price, R; Ma, C; Eldib, A; Mora, G
2014-06-01
Purpose: Estimation of the Cerenkov dose from high-energy megavoltage photon and electron beams in tissue and its impact on radiosensitization using Protoporphyrin IX (PpIX) for tumor targeting enhancement in radiotherapy. Methods: The GEPTS Monte Carlo code is used to generate dose distributions from an 18 MV Varian photon beam and generic high-energy (45-MV) photon and (45-MeV) electron beams in a voxel-based tissue-equivalent phantom. In addition to calculating the ionization dose, the code scores the Cerenkov energy released in the wavelength range 375-425 nm, corresponding to the peak of the PpIX absorption spectrum (Fig. 1), using the Frank-Tamm formula. Results: The simulations show that the produced Cerenkov dose suitable for activating PpIX is 4000 to 5500 times lower than the overall radiation dose for all considered beams (18 MV, 45 MV, and 45 MeV). These results contradict the recent experimental studies by Axelsson et al. (Med. Phys. 38 (2011) p 4127), where the Cerenkov dose was reported to be only two orders of magnitude lower than the radiation dose. Note that our simulation results can be corroborated by a simple model in which the Frank-Tamm formula is applied for electrons with 2 MeV/cm stopping power generating Cerenkov photons in the 375-425 nm range, assuming these photons have less than 1 mm penetration in tissue. Conclusion: The Cerenkov dose generated by high-energy photon and electron beams may produce minimal clinical effect in comparison with the photon fluence (or dose) commonly used for photo-dynamic therapy. At the present time, it is unclear whether Cerenkov radiation is a significant contributor to the recently observed tumor regression for patients receiving radiotherapy and PpIX versus patients receiving radiotherapy only. The ongoing study will include animal experimentation and investigation of dose rate effects on PpIX response.
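The Frank-Tamm yield used above integrates analytically over a wavelength band: dN/dx = 2*pi*alpha*(1 - 1/(beta^2 n^2))*(1/lambda1 - 1/lambda2). A sketch for a relativistic electron in water; beta and the refractive index are assumed textbook values, not the GEPTS inputs:

```python
import math

ALPHA = 1.0 / 137.036  # fine-structure constant

def cerenkov_photons_per_cm(beta, n, lam1_nm, lam2_nm):
    """Frank-Tamm photon yield integrated over [lam1, lam2] (nm):
    dN/dx = 2*pi*alpha*(1 - 1/(beta^2 n^2)) * (1/lam1 - 1/lam2)."""
    if beta * n <= 1.0:
        return 0.0  # below the Cerenkov threshold, no emission
    band_factor = 2.0 * math.pi * ALPHA * (1.0 - 1.0 / (beta ** 2 * n ** 2))
    return band_factor * (1.0 / lam1_nm - 1.0 / lam2_nm) * 1.0e7  # nm^-1 -> cm^-1

# Relativistic electron (beta ~ 0.999) in water (n ~ 1.33), over the
# 375-425 nm band that overlaps the PpIX absorption peak:
n_photons = cerenkov_photons_per_cm(0.999, 1.33, 375.0, 425.0)
```

This gives on the order of tens of photons per cm of track in that narrow band, which makes plausible the abstract's conclusion that the band-limited Cerenkov energy is a tiny fraction of the ionization dose.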
Energy Science and Technology Software Center (OSTI)
2013-05-06
Set of scripts (Python and Bash) to help users configure, run, and benchmark Hadoop clusters on ORNL computing infrastructure.
Wu, May; Zhang, Zhonglong
2015-09-01
clearly attributable to the conversion of a large amount of land to switchgrass. The Middle Lower Missouri River and Lower Missouri River were identified as hot regions. Further analysis identified four subbasins (10240002, 10230007, 10290402, and 10300200) as being the most vulnerable in terms of sediment, nitrogen, and phosphorus loadings. Overall, results suggest that increasing the amount of switchgrass acreage in the hot spots should be considered to mitigate the nutrient loads. The study provides an analytical method to support stakeholders in making informed decisions that balance biofuel production and water sustainability.
SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations
Ono, T; Araki, F
2014-06-01
Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using GMctdospp (IMPS, Germany) based on the EGSnrc user code. The X-ray spectra and a bowtie filter for the MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profiles in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images, and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for the brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for the brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for the lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), the mean doses for the kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for the kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses differed considerably from CTDIvol values. Conclusion: MC dose distributions calculated using patient CT images are useful for evaluating the organ doses absorbed by individual patients.
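Evaluating a mean organ dose from a dose volume histogram, as done above, reduces to a volume-weighted average over dose bins. A minimal sketch with a made-up differential DVH (the bin values are illustrative, not the study's data):

```python
def mean_dose_from_dvh(diff_dvh):
    """Mean organ dose from a differential DVH given as
    (dose_bin_center_mGy, volume_fraction) pairs."""
    total_volume = sum(v for _, v in diff_dvh)
    return sum(d * v for d, v in diff_dvh) / total_volume

# Made-up differential DVH for one organ (bin centers in mGy): 20% of the
# volume near 10 mGy, 50% near 20 mGy, 30% near 30 mGy.
dvh = [(10.0, 0.2), (20.0, 0.5), (30.0, 0.3)]
mean_dose = mean_dose_from_dvh(dvh)  # 0.2*10 + 0.5*20 + 0.3*30 = 21 mGy
```

Because the organ-specific weighting can differ sharply from the phantom-averaged CTDIvol geometry, numbers computed this way can diverge from CTDIvol, as the head-scan results above illustrate.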
Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy
Henzen, D. Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-02-15
Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm{sup 2} fields, differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies. For the two
A novel approach in electron beam radiation therapy of lips carcinoma: A Monte Carlo study
Shokrani, Parvaneh; Baradaran-Ghahfarokhi, Milad; Zadeh, Maryam Khorami
2013-04-15
Purpose: Squamous cell carcinoma (SCC) is commonly treated by electron beam radiotherapy (EBRT) followed by a boost via brachytherapy. Considering the limitations associated with brachytherapy, in this study, a novel boosting technique in EBRT of lip carcinoma using an internal shield as an internal dose enhancer tool (IDET) was evaluated. An IDET refers to a partially covered internal shield located behind the lip. It was intended to show that while the backscattered electrons are absorbed in the portion covered with a low atomic number material, they will enhance the target dose in the uncovered area. Methods: Monte-Carlo models of 6 and 8 MeV electron beams were developed using the BEAMnrc code and were validated against experimental measurements. Using the developed models, dose distributions in a lip phantom were calculated and the effect of an IDET on target dose enhancement was evaluated. Typical lip thicknesses of 1.5 and 2.0 cm were considered. A 5 × 5 cm{sup 2} sheet of lead covered by 0.5 cm of polystyrene was used as an internal shield, while a 4 × 4 cm{sup 2} uncovered area of the shield was used as the dose enhancer. Results: Using the IDET, the maximum dose enhancement as a percentage of dose at d{sub max} of the unshielded field was 157.6% and 136.1% for the 6 and 8 MeV beams, respectively. The best outcome was achieved for a lip thickness of 1.5 cm and a target thickness of less than 0.8 cm. For lateral dose coverage of the planning target volume, the 80% isodose curve at the lip-IDET interface showed a 1.2 cm expansion compared to the unshielded field. Conclusions: This study showed that a concomitant boost in EBRT of the lip is possible by modifying an internal shield into an IDET. This boosting method is especially applicable to cases in which brachytherapy faces limitations, such as small lip thicknesses and targets located at the buccal surface of the lip.
Long, Daniel J.; Lee, Choonsik; Tien, Christopher; Fisher, Ryan; Hoerner, Matthew R.; Hintenlang, David; Bolch, Wesley E.
2013-01-15
Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code, MCNPX version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomographic (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of the University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of the University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at the University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated based upon these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans, and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms. Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or helical CT
Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S.
2012-08-15
This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 {sup 125}I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte-Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published {sup 125}I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water material in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGyh{sup -1} U{sup -1} ({+-}1.73%) and 0.965 cGyh{sup -1} U{sup -1} ({+-}1.68%), respectively. Overall, the MCNP5 derived radial dose and 2D anisotropy functions results were generally closer to the measured data (within {+-}4%) than MCNP4c and the published data for PTRAN code (Version 7.43), while the opposite was seen for dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
Qin, Z.; Shoesmith, D.W.
2007-07-01
Based on a previously proposed probabilistic model, a Monte Carlo simulation code (EBSPA) has been developed to predict the lifetime of the engineered barrier system within the Yucca Mountain nuclear waste repository. The degradation modes considered in the EBSPA are general passive corrosion and hydrogen-induced cracking for the drip shield; and general passive corrosion, crevice corrosion, and stress corrosion cracking for the waste package. Two scenarios have been simulated using the EBSPA code: (a) a conservative scenario for the conditions thought likely to prevail in the repository, and (b) an aggressive scenario in which the impact of the degradation processes is overstated. (authors)
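The Monte Carlo lifetime idea described here, sampling degradation-rate uncertainties and propagating each realization to a failure time, can be illustrated with a deliberately simplified sketch. The wall thickness, lognormal parameters, and single degradation mode below are invented placeholders, not values or logic from the EBSPA code:

```python
import random

def sample_lifetime(thickness_mm, sample_rate):
    """One Monte Carlo realization: time until general passive corrosion
    consumes the wall, i.e., thickness / sampled corrosion rate (mm/yr)."""
    return thickness_mm / sample_rate()

random.seed(0)
# hypothetical lognormal corrosion-rate uncertainty, median ~1e-4 mm/yr
rate = lambda: random.lognormvariate(-9.2, 0.5)
lifetimes = sorted(sample_lifetime(20.0, rate) for _ in range(10_000))

median_life = lifetimes[len(lifetimes) // 2]
fraction_failed_by_100k = sum(t <= 1.0e5 for t in lifetimes) / len(lifetimes)
```

A fuller model would treat crevice corrosion, stress corrosion cracking, and hydrogen-induced cracking as competing modes, sampling a failure time for each and taking the minimum per realization.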
Rodrigues, Anna; Yin, Fang-Fang; Wu, Qiuwen; Sawkey, Daren
2015-05-15
Purpose: To develop a framework for accurate electron Monte Carlo dose calculation. In this study, comprehensive validations of vendor provided electron beam phase space files for Varian TrueBeam Linacs against measurement data are presented. Methods: In this framework, the Monte Carlo generated phase space files were provided by the vendor and used as input to the downstream plan-specific simulations including jaws, electron applicators, and water phantom computed in the EGSnrc environment. The phase space files were generated based on open field commissioning data. A subset of electron energies of 6, 9, 12, 16, and 20 MeV and open and collimated field sizes 3 × 3, 4 × 4, 5 × 5, 6 × 6, 10 × 10, 15 × 15, 20 × 20, and 25 × 25 cm{sup 2} were evaluated. Measurements acquired with a CC13 cylindrical ionization chamber and electron diode detector and simulations from this framework were compared for a water phantom geometry. The evaluation metrics include percent depth dose, orthogonal and diagonal profiles at depths R{sub 100}, R{sub 50}, R{sub p}, and R{sub p+} for standard and extended source-to-surface distances (SSD), as well as cone and cut-out output factors. Results: Agreement for the percent depth dose and orthogonal profiles between measurement and Monte Carlo was generally within 2% or 1 mm. The largest discrepancies were observed within depths of 5 mm from phantom surface. Differences in field size, penumbra, and flatness for the orthogonal profiles at depths R{sub 100}, R{sub 50}, and R{sub p} were within 1 mm, 1 mm, and 2%, respectively. Orthogonal profiles at SSDs of 100 and 120 cm showed the same level of agreement. Cone and cut-out output factors agreed well with maximum differences within 2.5% for 6 MeV and 1% for all other energies. Cone output factors at extended SSDs of 105, 110, 115, and 120 cm exhibited similar levels of agreement. Conclusions: We have presented a Monte Carlo simulation framework for electron beam dose calculations for
Avila, Olga; Brandan, Maria-Ester
1998-08-28
A theoretical investigation of thermoluminescence response of Lithium Fluoride after heavy ion irradiation has been performed through Monte Carlo simulation of the energy deposition process. Efficiencies for the total TL signal of LiF irradiated with 0.7, 1.5 and 3 MeV protons and 3, 5.3 and 7.5 MeV helium ions have been calculated using the radial dose distribution profiles obtained from the MC procedure and applying Track Structure Theory and Modified Track Structure Theory. Results were compared with recent experimental data. The models correctly describe the observed decrease in efficiency as a function of the ion LET.
Betzler, Benjamin R.; Kiedrowski, Brian C.; Brown, Forrest B.; Martin, William R.
2015-08-28
The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α eigenvalues and eigenvectors in an infinite medium. In this study, a research Monte Carlo code called “TORTE” (To Obtain Real Time Eigenvalues) was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum.
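Once a transition rate matrix has been estimated (by TORTE or otherwise), the α eigenvalues are simply the eigenvalues of that matrix. A toy two-group illustration with an invented matrix, not TORTE data:

```python
import numpy as np

# Invented 2-group transition-rate matrix: diagonal entries are net loss
# rates, off-diagonals are group-to-group transfer (scatter/fission) rates.
A = np.array([[-3.0,  0.5],
              [ 2.0, -1.0]])

alphas, modes = np.linalg.eig(A)              # alpha eigenvalues / eigenvectors
fundamental = alphas[np.argmax(alphas.real)]  # mode with largest real part
# the fundamental mode dominates the asymptotic (long-time) energy spectrum
```

For this matrix the eigenvalues are −2 ± √2, so the fundamental α is −2 + √2 ≈ −0.586; an eigenfunction expansion in `modes` gives the transient behavior, as in the study.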
Al-Subeihi, Ala' A.A.; Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert; Bladeren, Peter J. van; Rietjens, Ivonne M.C.M.; Punt, Ans
2015-03-01
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment.
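The CSAF derivation itself is a simple percentile-ratio computation over the simulated population distribution. A self-contained sketch, using an invented lognormal spread as a stand-in for the PBK-predicted metabolite levels (the σ below is a placeholder, not a value from the study):

```python
import random

random.seed(1)
# invented lognormal inter-individual spread (geometric SD = e**0.7)
population = sorted(random.lognormvariate(0.0, 0.7) for _ in range(100_000))

def percentile(sorted_vals, p):
    """p-th percentile by nearest rank on an already-sorted list."""
    return sorted_vals[int(p / 100 * (len(sorted_vals) - 1))]

# CSAF = upper percentile divided by the median of the population distribution
csaf_90 = percentile(population, 90) / percentile(population, 50)
csaf_99 = percentile(population, 99) / percentile(population, 50)
```

Comparing `csaf_90` against the default kinetic uncertainty factor of 3.16 mirrors the check the abstract describes; covering a larger fraction of the population (99th percentile) necessarily yields a larger factor.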
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Nazarov, Roman; Shulenburger, Luke; Morales, Miguel A.; Hood, Randolph Q.
2016-03-28
Diffusion Monte Carlo (DMC) calculations of the spectroscopic properties of a large set of molecules were performed, assessing the effect of different approximations. In systems containing elements with large atomic numbers, we show that the errors associated with the use of nonlocal mean-field-based pseudopotentials in DMC calculations can be significant and may surpass the fixed-node error. We suggest practical guidelines for reducing these pseudopotential errors, which allow us to obtain DMC-computed spectroscopic parameters of molecules and equation of state properties of solids in excellent agreement with experiment.
Lopez-Pino, N.; Padilla-Cabal, F.; Garcia-Alvarez, J. A.; Vazquez, L.; D'Alessandro, K.; Correa-Alfonso, C. M.; Godoy, W.; Maidana, N. L.; Vanin, V. R.
2013-05-06
A detailed characterization of an X-ray Si(Li) detector was performed to obtain the energy dependence of its efficiency in the photon energy range of 6.4-59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters for the detector were used in the simulation. A complete computerized tomography (CT) scan of the detector made it possible to find the correct crystal dimensions and position inside the capsule. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.
Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419
Hulett, David T.; Nosbisch, Michael R.
2012-07-01
- Good-quality risk data, usually collected in risk interviews of the project team, management, and others knowledgeable in the risks of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method, which rests on the fundamental principle that identifiable risks drive overall cost and schedule risk.
- A Monte Carlo simulation software program that can simulate schedule risk, burn rate risk, and time-independent resource risk.
The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates, including contingency reserves, are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, decide which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and of time, calculated at a level that represents an acceptable degree of certainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserves of time and cost that are the main results of this analysis apply if that plan is to be followed.
Of course project
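The cost-time pair collection behind the 'football chart' is straightforward to sketch. Everything below is invented for illustration (the budget, deadline, burn rate, and risk-driver magnitudes are placeholders): a shared risk driver lengthens the duration, and cost follows duration through a burn rate, which is what produces the diagonal scatter of the joint distribution.

```python
import random

random.seed(42)
TARGET_COST, TARGET_MONTHS = 110.0, 26.0       # hypothetical budget / deadline

pairs = []
for _ in range(20_000):
    driver = random.random()                    # shared risk driver
    months = 24.0 + 6.0 * driver + random.gauss(0.0, 1.0)
    cost = 100.0 + 1.5 * (months - 24.0) + random.gauss(0.0, 3.0)  # burn-rate link
    pairs.append((cost, months))

# joint probability of finishing both on time and on budget
joint_p = sum(c <= TARGET_COST and m <= TARGET_MONTHS for c, m in pairs) / len(pairs)
# P-80 cost: the value the project has an 80% chance of underrunning
p80_cost = sorted(c for c, _ in pairs)[int(0.8 * len(pairs))]
```

Plotting `pairs` as a scatter diagram reproduces the football chart; the gap between `p80_cost` and the point estimate is the cost contingency reserve at the P-80 confidence level.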
Kellogg, Christina A.; Piceno, Yvette M.; Tom, Lauren M.; DeSantis, Todd Z.; Gray, Michael A.; Andersen, Gary L.; Mormile, Melanie R.
2014-10-07
Coral disease is one of the major causes of reef degradation. Dark Spot Syndrome (DSS) was described in the early 1990s as brown or purple amorphous areas of tissue on a coral and has since become one of the most prevalent diseases reported on Caribbean reefs. It has been identified in a number of coral species, but there is debate as to whether it is in fact the same disease in different corals. Further, it is questioned whether these macroscopic signs are in fact diagnostic of an infectious disease at all. The most commonly affected species in the Caribbean is the massive starlet coral Siderastrea siderea. We sampled this species in two locations, Dry Tortugas National Park and Virgin Islands National Park. Tissue biopsies were collected from both healthy colonies and those with dark spot lesions. Microbial-community DNA was extracted from coral samples (mucus, tissue, and skeleton), amplified using bacterial-specific primers, and applied to PhyloChip G3 microarrays to examine the bacterial diversity associated with this coral. Samples were also screened for the presence of a fungal ribotype that has recently been implicated as a causative agent of DSS in another coral species, but the amplifications were unsuccessful. S. siderea samples did not cluster consistently based on health state (i.e., normal versus dark spot). Various bacteria, including Cyanobacteria and Vibrios, were observed to have increased relative abundance in the discolored tissue, but the patterns were not consistent across all DSS samples. Overall, our findings do not support the hypothesis that DSS in S. siderea is linked to a bacterial pathogen or pathogens. This dataset provides the most comprehensive overview to date of the bacterial community associated with the scleractinian coral S. siderea.
Sorokin, A. A.; Gottwald, A.; Hoehl, A.; Kroth, U.; Schoeppe, H.; Ulm, G.; Richter, M.; Bobashev, S. V.; Domracheva, I. V.; Smirnov, D. N.; Tiedtke, K.; Duesterer, S.; Feldhaus, J.; Hahn, U.; Jastrow, U.; Kuhlmann, M.; Nunez, T.; Ploenjes, E.; Treusch, R.
2006-11-27
A method has been developed and applied to measure the beam waist and spot size of a focused soft x-ray beam at the free-electron laser FLASH of the Deutsches Elektronen-Synchrotron in Hamburg. The method is based on a saturation effect upon atomic photoionization and represents a non-destructive tool for the characterization of powerful beams of ionizing electromagnetic radiation. At the microfocus beamline BL2 at FLASH, a full width at half maximum focus diameter of (15 ± 2) µm was determined.
Hardiansyah, D.; Haryanto, F.; Male, S.
2014-09-30
Prism is a non-commercial radiotherapy treatment planning system (RTPS) developed by Ira J. Kalet at the University of Washington. An inhomogeneity factor is included in the Prism TPS dose calculation. The aim of this study is to investigate the sensitivity of the dose calculation in Prism using Monte Carlo simulation, implemented with a phase space source from the head of a linear accelerator (LINAC). To this end, the Prism dose calculation is compared with an EGSnrc Monte Carlo simulation, and the percentage depth dose (PDD) and R50 from both calculations are examined. BEAMnrc simulates electron transport in the LINAC head and produces a phase space file, which is then used as DOSXYZnrc input to simulate electron transport in the phantom. The study starts with a commissioning process in a water phantom, in which the Monte Carlo simulation is adjusted to match the Prism RTPS using R50 and the PDD, with the practical range (R{sub p}) as references. The commissioning result is then used for the study of inhomogeneous phantoms, in which the density, location, and thickness of the tissue are varied. Commissioning shows that the optimum energy of the Monte Carlo simulation for the 6 MeV electron beam is 6.8 MeV. From the inhomogeneity study, the average deviation for all cases in the region of interest is below 5%. Based on ICRU recommendations, Prism has a good ability to calculate the radiation dose in inhomogeneous tissue.
Harding, R.; Trnková, P.; Lomax, A. J.; Weston, S. J.; Lilley, J.; Thompson, C. M.; Cosgrove, V. P.; Short, S. C.; Loughrey, C.; Thwaites, D. I.
2014-11-01
Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter
2014-09-15
Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO{sub 2})]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to model oxygen consumption. The model was tuned to approximately reproduce the oxygenation status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO{sub 2}), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO{sub 2} were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO{sub 2} distributions simulated with the six variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the
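The trilinear sampling step, looking up an oxygen tension between precomputed settings of the three vascular variables, can be sketched directly. The 3 × 3 × 3 table below is an invented stand-in for the 27-combination dataset, and the sketch assumes sample points strictly inside the grid:

```python
import numpy as np

def trilinear(table, x, y, z):
    """Trilinear interpolation of a 3D lookup table at fractional indices."""
    x0, y0, z0 = int(x), int(y), int(z)
    dx, dy, dz = x - x0, y - y0, z - z0
    c = table[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2].astype(float)
    c = c[0] * (1 - dx) + c[1] * dx      # collapse the x axis
    c = c[0] * (1 - dy) + c[1] * dy      # collapse the y axis
    return c[0] * (1 - dz) + c[1] * dz   # collapse the z axis

# toy table indexed by (blood velocity, vessel proximity, inflowing pO2) settings
table = np.arange(27, dtype=float).reshape(3, 3, 3)
pO2 = trilinear(table, 0.5, 0.5, 0.5)    # midway between 8 neighboring settings
```

Random sampling of `(x, y, z)`, with or without correlations among the three axes, then yields a variable oxygen distribution in the spirit of the study's variable models.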
Monte Carlo calculations of electron beam quality conversion factors for several ion chamber types
Muir, B. R.; Rogers, D. W. O.
2014-11-01
Purpose: To provide a comprehensive investigation of electron beam reference dosimetry using Monte Carlo simulations of the response of 10 plane-parallel and 18 cylindrical ion chamber types. Specific emphasis is placed on the determination of the optimal shift of the chamber's effective point of measurement (EPOM) and beam quality conversion factors. Methods: The EGSnrc system is used for calculations of the absorbed dose to gas in ion chamber models and the absorbed dose to water as a function of depth in a water phantom on which cobalt-60 and several electron beam source models are incident. The optimal EPOM shifts of the ion chambers are determined by comparing calculations of R{sub 50} converted from I{sub 50} (calculated using ion chamber simulations in phantom) to R{sub 50} calculated using simulations of the absorbed dose to water vs depth in water. Beam quality conversion factors are determined as the calculated ratio of the absorbed dose to water to the absorbed dose to air in the ion chamber at the reference depth in a cobalt-60 beam to that in electron beams. Results: For most plane-parallel chambers, the optimal EPOM shift is inside of the active cavity but different from the shift determined with water-equivalent scaling of the front window of the chamber. These optimal shifts for plane-parallel chambers also reduce the scatter of beam quality conversion factors, k{sub Q}, as a function of R{sub 50}. The optimal shift of cylindrical chambers is found to be less than the 0.5 r{sub cav} recommended by current dosimetry protocols. In most cases, the values of the optimal shift are close to 0.3 r{sub cav}. Values of k{sub ecal} are calculated and compared to those from the TG-51 protocol and differences are explained using accurate individual correction factors for a subset of ion chambers investigated. High-precision fits to beam quality conversion factors normalized to unity in a beam with R{sub 50} = 7.5 cm (k{sub Q}{sup ?}) are provided. These factors
Statistical Exploration of Electronic Structure of Molecules from Quantum Monte-Carlo Simulations
Prabhat, Mr; Zubarev, Dmitry; Lester, Jr., William A.
2010-12-22
In this report, we present results from analysis of Quantum Monte Carlo (QMC) simulation data with the goal of determining the internal structure of the 3N-dimensional phase space of an N-electron molecule. We are interested in mining the simulation data for patterns that might be indicative of bond rearrangement as molecules change electronic states. We examined simulation output that tracks the positions of two coupled electrons in the singlet and triplet states of an H2 molecule. The electrons trace out a trajectory, which was analyzed with a number of statistical techniques. This project was intended to address the following scientific questions: (1) Do high-dimensional phase spaces characterizing the electronic structure of molecules tend to cluster in any natural way? Do we see a change in clustering patterns as we explore different electronic states of the same molecule? (2) Since it is hard to understand the high-dimensional space of trajectories, can we project these trajectories to a lower dimensional subspace to gain a better understanding of patterns? (3) Do trajectories inherently lie in a lower-dimensional manifold? Can we recover that manifold? After extensive statistical analysis, we are now in a better position to respond to these questions. (1) We definitely see clustering patterns, and differences between the H2 and H2tri datasets. These are revealed by the pamk method in a fairly reliable manner and can potentially be used to distinguish bonded and non-bonded systems and get insight into the nature of bonding. (2) Projecting to a lower dimensional subspace (~4-5) using PCA or Kernel PCA reveals interesting patterns in the distribution of scalar values, which can be related to the existing descriptors of electronic structure of molecules. Also, these results can be immediately used to develop robust tools for analysis of noisy data obtained during QMC simulations. (3) All dimensionality reduction and estimation techniques that we tried seem to
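The PCA projection used for question (2) is compact to sketch: center the data, take the SVD, and keep the leading right singular vectors. The 6-D toy data below, generated to lie near a 2-D plane, is an invented stand-in for the 3N-dimensional walker trajectories:

```python
import numpy as np

rng = np.random.default_rng(0)
# 500 toy "trajectory" points in 6-D lying near a 2-D plane plus small noise
latent = rng.normal(size=(500, 2))
X = latent @ rng.normal(size=(2, 6)) + 0.01 * rng.normal(size=(500, 6))

Xc = X - X.mean(axis=0)                      # center the cloud
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()              # variance fraction per component
projected = Xc @ Vt[:2].T                    # 2-D projection for inspection
```

On real QMC output, the decay profile of `explained` is what indicates whether the trajectories effectively occupy a low-dimensional (~4-5 component) subspace.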
Perfetti, Christopher M.; Rearden, Bradley T.
2016-03-01
The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and could help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Rota, R.; Casulleras, J.; Mazzanti, F.; Boronat, J.
2015-03-21
We present a method based on the path integral Monte Carlo formalism for the calculation of ground-state time correlation functions in quantum systems. The key point of the method is the consideration of time as a complex variable whose phase δ acts as an adjustable parameter. By using high-order approximations for the quantum propagator, it is possible to obtain Monte Carlo data all the way from purely imaginary time to δ values near the limit of real time. As a consequence, it is possible to infer accurately the spectral functions using simple inversion algorithms. We test this approach in the calculation of the dynamic structure function S(q, ω) of two one-dimensional model systems, harmonic and quartic oscillators, for which S(q, ω) can be exactly calculated. We notice a clear improvement in the calculation of the dynamic response with respect to the common approach based on the inverse Laplace transform of the imaginary-time correlation function.
Jiang, F.-J.; Nyfeler, M.; Kaempfer, F.
2009-07-15
Motivated by the possible mechanism for the pinning of the electronic liquid crystal direction in YBa2Cu3O6.45 proposed by Pardini et al. [Phys. Rev. B 78, 024439 (2008)], we use the first-principles Monte Carlo method to study the spin-1/2 Heisenberg model with antiferromagnetic couplings J1 and J2 on the square lattice. In particular, the low-energy constants, namely the spin stiffness ρs, the staggered magnetization Ms, and the spin wave velocity c, are determined by fitting the Monte Carlo data to the predictions of magnon chiral perturbation theory. Further, the spin stiffnesses ρs1 and ρs2 as functions of the ratio J2/J1 of the couplings are investigated in detail. Although we find good agreement between our results and those obtained by the series expansion method in the weakly anisotropic regime, for strong anisotropy we observe discrepancies.
Zhang, C.Q.; Robson, J.D.; Ciuca, O.; Prangnell, P.B.
2014-11-15
Dissimilar alloys AA6111 (aluminum) and TiAl6V4 (titanium) were successfully welded by high-power ultrasonic spot welding. No visible intermetallic reaction layer was detected in the as-welded AA6111/TiAl6V4 welds, even when transmission electron microscopy was used. The effects of welding time and natural aging on peak load and fracture energy were investigated. The peak load and fracture energy of the welds increased with increasing welding time and then reached a plateau. The lap shear strength (peak load) can reach the same level as that of similar Al-Al joints. After natural aging, the fracture mode of the welds changed from ductile fracture of the softened aluminum to interfacial failure, due to the strength recovery of AA6111. - Highlights: • Dissimilar Al/Ti welds were produced by high-power ultrasonic spot welding. • No visible intermetallic reaction layer was detected at the weld interface. • The lap shear strength can reach the same level as that of similar Al-Al joints. • The fracture mode becomes interfacial failure after natural aging.
Faught, A; Davidson, S; Kry, S; Ibbott, G; Followill, D; Fontenot, J; Etzel, C
2014-06-01
Purpose: To commission a multiple-source Monte Carlo model of Elekta linear accelerator beams of nominal energies 6 MV and 10 MV. Methods: A three-source Monte Carlo model of Elekta 6 and 10 MV therapeutic x-ray beams was developed. Energy spectra of two photon sources, corresponding to primary photons created in the target and scattered photons originating in the linear accelerator head, were determined by an optimization process that fit the relative fluence of 0.25 MeV energy bins to the product of Fatigue-Life and Fermi functions to match calculated percent depth dose (PDD) data with that measured in a water tank for a 10×10 cm2 field. Off-axis effects were modeled by a 3rd-degree polynomial describing the off-axis half-value layer as a function of off-axis angle and by fitting the off-axis fluence to a piecewise linear function to match calculated dose profiles with measured dose profiles for a 40×40 cm2 field. The model was validated by comparing calculated PDDs and dose profiles for field sizes ranging from 3×3 cm2 to 30×30 cm2 to those obtained from measurements. A benchmarking study compared calculated data to measurements for IMRT plans delivered to anthropomorphic phantoms. Results: Along the central axis of the beam, 99.6% and 99.7% of all data passed the 2%/2mm gamma criterion for the 6 and 10 MV models, respectively. Dose profiles at depths from dmax through 25 cm agreed with measured data for 99.4% and 99.6% of data tested for the 6 and 10 MV models, respectively. A comparison of calculated dose to film measurement in a head and neck phantom showed an average of 85.3% and 90.5% of pixels passing a 3%/2mm gamma criterion for the 6 and 10 MV models, respectively. Conclusion: A Monte Carlo multiple-source model for Elekta 6 and 10 MV therapeutic x-ray beams has been developed as a quality
SU-D-19A-03: Monte Carlo Investigation of the Mobetron to Perform Modulated Electron Beam Therapy
Emam, I; Eldib, A; Hosini, M; AlSaeed, E; Ma, C
2014-06-01
Purpose: Modulated electron radiotherapy (MERT) has been proposed as a means of delivering conformal dose to shallow tumors while sparing distal structures and surrounding tissues. In intraoperative radiotherapy (IORT) using the Mobetron, an applicator is placed as closely as possible to the suspected cancerous tissues to be treated. In this study we investigate the characteristics of Mobetron electron beams collimated by an in-house prospective electron multileaf collimator (eMLC) and their feasibility for MERT. Methods: An IntraOp Mobetron, dedicated to performing radiotherapy during surgery, was used in the study. It provides several energies (6, 9, and 12 MeV). Dosimetry measurements were performed to obtain percentage depth dose (PDD) curves and profiles for a 10-cm diameter applicator using the PTW MP3/XS 3D scanning system and a Semiflex ion chamber. The MCBEAM/MCSIM Monte Carlo codes were used for treatment head simulation and phantom dose calculation. The design of electron beam collimation by an eMLC attached to the Mobetron head was also investigated using Monte Carlo simulations. Isodose distributions resulting from eMLC-collimated beams were compared to those collimated using cutouts. The design of our Mobetron eMLC is based on our previous experience with eMLCs designed for clinical linear accelerators. For the Mobetron, the eMLC is attached to the end of a spacer-mounted rectangular applicator at 50 cm SSD. Steel will be used as the leaf material because other materials would be toxic and thus not suitable for intraoperative applications. Results: Good agreement (within 2%) was achieved between measured and calculated PDD curves and profiles for all available energies. Dose distributions provided by the eMLC showed reasonable agreement (within 3%/1 mm) with those obtained with conventional cutouts. Conclusion: Monte Carlo simulations are capable of modeling Mobetron electron beams with reliable accuracy. An eMLC attached to the Mobetron treatment head will allow
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
Pecchia, M.; D'Auria, F.; Mazzantini, O.
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work, a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of an obliquely inserted control rod on the neutron flux in order to validate the RELAP5-3D/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the FSAR of Atucha-2. (authors)
Sarrut, David; Université Lyon 1; Centre Léon Bérard; Bardiès, Manuel; Marcatili, Sara; Mauxion, Thibault; Boussion, Nicolas; Freud, Nicolas; Létang, Jean-Michel; Jan, Sébastien; Maigne, Lydia; Perrot, Yann; Pietrzyk, Uwe; Robert, Charlotte; and others
2014-06-15
In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform, based on the GEANT4 toolkit, for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described, including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. The very practical feature of GATE, making it easy to model both a treatment and an imaging acquisition within the same framework, is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities.
Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C.; Lim, T.-S.; Fann, Wunshain
2009-01-05
The number of negatively charged nitrogen-vacancy centers (N-V)- in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single-particle level. By taking account of the random dipole orientations of the multiple (N-V)- fluorophores and simulating the probability distribution of their effective numbers (Ne), we found that the actual number (Na) of the fluorophores is in linear correlation with Ne, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized light, respectively. We determined Na = 8 ± 1 for 28 nm FND particles prepared by 3 MeV proton irradiation.
Looking for Auger signatures in III-nitride light emitters: A full-band Monte Carlo perspective
Bertazzi, Francesco; Goano, Michele; Zhou, Xiangyu; Calciati, Marco; Ghione, Giovanni; Matsubara, Masahiko; Bellotti, Enrico
2015-02-09
Recent electron emission spectroscopy (EES) experiments on III-nitride light-emitting diodes (LEDs) have shown a correlation between droop onset and hot electron emission at the cesiated surface of the LED p-cap. The observed hot electrons have been interpreted as a direct signature of Auger recombination in the LED active region, as highly energetic Auger-excited electrons would be collected in long-lived satellite valleys of the conduction band and thus would not decay on their journey to the surface across the highly doped p-contact layer. We discuss this interpretation by using a full-band Monte Carlo model based on first-principles electronic structure and lattice dynamics calculations. The results of our analysis suggest that Auger-excited electrons cannot be unambiguously detected in the LED structures used in the EES experiments. Additional experimental and simulation work is necessary to unravel the complex physics of GaN cesiated surfaces.
Conlin, Jeremy Lloyd; Tobin, Stephen J
2010-10-13
There is a great need in the safeguards community to be able to nondestructively quantify the mass of plutonium of a spent nuclear fuel assembly. As part of the Next Generation of Safeguards Initiative, we are investigating several techniques, or detector systems, which, when integrated, will be capable of quantifying the plutonium mass of a spent fuel assembly without dismantling the assembly. This paper reports on the simulation of one of these techniques, the Passive Neutron Albedo Reactivity with Fission Chambers (PNAR-FC) system. The response of this system over a wide range of spent fuel assemblies with different burnup, initial enrichment, and cooling time characteristics is shown. A Monte Carlo method of using these modeled results to estimate the fissile content of a spent fuel assembly has been developed. A few numerical simulations of using this method are shown. Finally, additional developments still needed and being worked on are discussed.
Quantum Monte Carlo Study of the Ground-State Properties of a Fermi Gas in the BCS-BEC Crossover
Giorgini, S.; Astrakharchik, G. E.; Boronat, J.; Casulleras, J.
2006-11-07
The ground-state properties of a two-component Fermi gas with attractive short-range interactions are calculated using the fixed-node diffusion Monte Carlo method. The interaction strength is varied over a wide range by tuning the value of the s-wave scattering length of the two-body potential. We calculate the ground-state energy per particle and we characterize the equation of state of the system. Off-diagonal long-range order is investigated through the asymptotic behavior of the two-body density matrix. The condensate fraction of pairs is calculated in the unitary limit and on both sides of the BCS-BEC crossover.
Wirawan, Rahadi; Waris, Abdul; Djamal, Mitra; Handayani, Gunawan
2015-04-16
The spectrum of gamma energy absorption in the NaI crystal (scintillation detector) is the interaction result of gamma photon with NaI crystal, and it’s associated with the photon gamma energy incoming to the detector. Through a simulation approach, we can perform an early observation of gamma energy absorption spectrum in a scintillator crystal detector (NaI) before the experiment conducted. In this paper, we present a simulation model result of gamma energy absorption spectrum for energy 100-700 keV (i.e. 297 keV, 400 keV and 662 keV). This simulation developed based on the concept of photon beam point source distribution and photon cross section interaction with the Monte Carlo method. Our computational code has been successfully predicting the multiple energy peaks absorption spectrum, which derived from multiple photon energy sources.
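The core idea of building an absorbed-energy spectrum from sampled photon interactions can be sketched with a deliberately crude two-channel model (the photoelectric probability here is an assumed input rather than a computed cross-section ratio, and Compton angles are sampled isotropically instead of from Klein-Nishina; this is not the paper's code):

```python
import random

def mc_spectrum(e_gamma, n_photons, p_photo, bins=64, seed=7):
    """Toy deposited-energy spectrum in keV: each photon is either fully
    absorbed (photoelectric -> full-energy peak) or Compton-scatters once
    with the scattered photon escaping (continuum up to the Compton edge).
    p_photo is an assumed probability, not a computed cross section."""
    random.seed(seed)
    mec2 = 511.0                       # electron rest energy, keV
    hist = [0] * bins
    for _ in range(n_photons):
        if random.random() < p_photo:
            e_dep = e_gamma            # photopeak event
        else:
            cos_t = 2.0 * random.random() - 1.0          # crude sampling
            e_scat = e_gamma / (1.0 + (e_gamma / mec2) * (1.0 - cos_t))
            e_dep = e_gamma - e_scat   # energy transferred to the electron
        b = min(int(e_dep / e_gamma * (bins - 1)), bins - 1)
        hist[b] += 1
    return hist

hist = mc_spectrum(662.0, 5000, p_photo=0.3)
```

Even this toy model reproduces the qualitative features the abstract describes: a full-energy photopeak plus a Compton continuum that cuts off at the Compton edge below the peak.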
Leon, Stephanie M.; Wagner, Louis K.; Brateman, Libby F.
2014-11-01
Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image-processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the SFs ranged from 0.28 to 0.51, with absolute differences averaging 0.01. The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid
Böcklin, Christoph; Baumann, Dirk; Fröhlich, Jürg
2014-02-14
A novel way to obtain three-dimensional fluence rate maps from Monte Carlo simulations of photon propagation is presented in this work. The propagation of light in a turbid medium is described by the radiative transfer equation and formulated in terms of radiance. For many applications, particularly in biomedical optics, the fluence rate is a more useful quantity; it is derived directly from the radiance by integrating over all directions. Contrary to the usual approach, which calculates the fluence rate from the absorbed photon power, the fluence rate in this work is calculated directly from the photon packet trajectories. The voxel-based algorithm works in arbitrary geometries and material distributions. It is shown that the new algorithm is more efficient and also works in materials with a low or even zero absorption coefficient. The capabilities of the new algorithm are demonstrated on a curved layered structure, where a non-scattering, non-absorbing layer is sandwiched between two highly scattering layers.
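The distinction between absorption-based and trajectory-based tallies can be sketched in one dimension (illustrative code only, not the authors' 3-D voxel algorithm; the homogeneous coefficients mu_s and mu_a, the substep scoring, and the interaction cap are all simplifying assumptions):

```python
import math
import random

def fluence_tally(n_photons, n_voxels, length, mu_s, mu_a, seed=1):
    """1-D track-length fluence estimator: every path segment a photon
    packet travels inside a voxel adds (weight x segment length) to that
    voxel's tally, so the estimate stays well defined even for mu_a == 0.
    An absorption-based tally would score nothing in that limit."""
    random.seed(seed)
    dx = length / n_voxels
    tally = [0.0] * n_voxels
    mu_t = mu_s + mu_a
    for _ in range(n_photons):
        x, u, w = 0.0, 1.0, 1.0          # position, direction, packet weight
        for _ in range(200):             # cap on number of interactions
            s = -math.log(1.0 - random.random()) / mu_t   # free path length
            ds = s / 20.0                # score in 20 substeps along the path
            escaped = False
            for _ in range(20):
                x += u * ds
                if not 0.0 <= x < length:
                    escaped = True
                    break
                tally[int(x / dx)] += w * ds
            if escaped:
                break
            w *= mu_s / mu_t             # implicit capture (absorption)
            u = 1.0 if random.random() < 0.5 else -1.0    # isotropic scatter
            if w < 1e-3:
                break
    # normalize per source photon and per unit voxel "volume" (length in 1-D)
    return [t / (dx * n_photons) for t in tally]

phi = fluence_tally(2000, 10, 10.0, mu_s=1.0, mu_a=0.0)
```

Running with mu_a = 0.0, as above, exercises exactly the zero-absorption case where the abstract notes the trajectory-based estimator still works.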
Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.
Zink, K.; Czarnecki, D.; Voigts-Rhetz, P. von; Looe, H. K.; Harder, D.
2014-11-01
Purpose: The electron fluence inside a parallel-plate ionization chamber positioned in a water phantom and exposed to a clinical electron beam deviates from the unperturbed fluence in water in the absence of the chamber. One reason for the fluence perturbation is the well-known inscattering effect, whose physical cause is the lack of electron scattering in the gas-filled cavity. Correction factors to account for this effect have long been recommended. However, more recent Monte Carlo calculations have led to some doubt about the range of validity of these corrections. Therefore, the aim of the present study is to reanalyze the development of the fluence perturbation with depth and to review the function of the guard rings. Methods: Spatially resolved Monte Carlo simulations of the dose profiles within gas-filled cavities of various radii in clinical electron beams have been performed in order to determine the radial variation of the fluence perturbation in a coin-shaped cavity, to study the influences of the radius of the collecting electrode and of the width of the guard ring upon the indicated value of the ionization chamber formed by the cavity, and to investigate the development of the perturbation as a function of the depth in an electron-irradiated phantom. The simulations were performed for a primary electron energy of 6 MeV. Results: The Monte Carlo simulations clearly demonstrated a surprisingly large in- and outward electron transport across the lateral cavity boundary. This results in a strong influence of the depth-dependent development of the electron field in the surrounding medium upon the chamber reading. In the buildup region of the depth-dose curve, the in-out balance of the electron fluence is positive and shows the well-known dose oscillation near the cavity/water boundary. At the depth of the dose maximum the in-out balance is equilibrated, and in the falling part of the depth-dose curve it is negative, as shown here for the first time
Cashmore, Jason; Golubev, Sergey; Dumont, Jose Luis; Sikora, Marcin; Alber, Markus; Ramtohul, Mark
2012-06-15
Purpose: A linac delivering intensity-modulated radiotherapy (IMRT) can benefit from a flattening filter free (FFF) design, which offers higher dose rates and reduced accelerator head scatter compared with conventional (flattened) delivery. This reduction in scatter simplifies beam modeling, and combining a Monte Carlo dose engine with a FFF accelerator could potentially increase dose calculation accuracy. The objective of this work was to model a FFF machine using an adapted version of a previously published virtual source model (VSM) for Monte Carlo calculations and to verify its accuracy. Methods: An Elekta Synergy linear accelerator operating at 6 MV has been modified to enable irradiation both with and without the flattening filter (FF). The VSM has been incorporated into a commercially available treatment planning system (Monaco™ v3.1) as VSM 1.6. Dosimetric data were measured to commission the treatment planning system (TPS), and the VSM was adapted to account for the lack of angular differential absorption and general beam hardening. The model was then tested using standard water phantom measurements and also by creating IMRT plans for a range of clinical cases. Results: The results show that the VSM implementation handles the FFF beams very well, with an uncertainty between measurement and calculation of <1%, which is comparable to conventional flattened beams. All IMRT beams passed standard quality assurance tests with >95% of all points passing gamma analysis (γ < 1) using a 3%/3 mm tolerance. Conclusions: The virtual source model for flattened beams was successfully adapted to flattening filter free beam production. Water phantom and patient-specific QA measurements show excellent results, and comparisons of IMRT plans generated in conventional and FFF mode are underway to assess dosimetric uncertainties and possible improvements in dose calculation and delivery.
TH-A-19A-10: Fast Four Dimensional Monte Carlo Dose Computations for Proton Therapy of Lung Cancer
Mirkovic, D; Titt, U; Mohan, R; Yepes, P
2014-06-15
Purpose: To develop and validate a fast and accurate four-dimensional (4D) Monte Carlo (MC) dose computation system for proton therapy of lung cancer and other thoracic and abdominal malignancies in which the delivered dose distributions can be affected by respiratory motion of the patient. Methods: A 4D computed tomography (CT) scan for a lung cancer patient treated with protons in our clinic was used to create a time-dependent patient model using our in-house, MCNPX-based Monte Carlo system (“MC²”). The beam line configurations for the two passively scattered proton beams used in the actual treatment were extracted from the clinical treatment plan, and a set of input files was created automatically using MC². A full MC simulation of the beam line was computed using MCNPX, and a set of phase space files for each beam was collected at the distal surface of the range compensator. The particles from these phase space files were transported through the 10 voxelized patient models corresponding to the 10 phases of the breathing cycle in the 4DCT, using MCNPX and an accelerated (fast) MC code called “FDC”, developed by us and based on the track repeating algorithm. The accuracy of the fast algorithm was assessed by comparing the two time-dependent dose distributions. Results: An error of less than 1% in 100% of the voxels in all phases of the breathing cycle was achieved with this method, with a speedup of more than 1000 times. Conclusion: The proposed method, which uses full MC to simulate the beam line and the accelerated MC code FDC for the time-consuming particle transport inside the complex, time-dependent geometry of the patient, shows excellent accuracy together with extraordinary speed.
Morton, April M; McManamay, Ryan A; Nagle, Nicholas N; Piburn, Jesse O; Stewart, Robert N; Surendran Nair, Sujithkumar
2016-01-01
As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high-resolution, spatially explicit estimates of energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques that depend on detailed region-specific data sources that are not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte-Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Key words: Energy Modeling, Water Modeling, Monte-Carlo Simulation, Uncertainty Quantification. Acknowledgment: This manuscript has been authored by employees of UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy. Accordingly, the United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
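The general Monte-Carlo uncertainty-propagation idea behind such a model can be sketched as follows (generic code with hypothetical Gaussian input distributions and made-up numbers, not the authors' model or data):

```python
import random
import statistics

def simulate_demand(n_draws, hh_mean, hh_sd, per_hh_mean, per_hh_sd, seed=42):
    """Monte-Carlo propagation of input-data uncertainty into a demand
    estimate: repeatedly draw the uncertain inputs, combine them, and
    summarize the spread of the outcome. Gaussian inputs are an assumed
    illustration; negative draws are clipped to zero."""
    random.seed(seed)
    draws = []
    for _ in range(n_draws):
        households = max(random.gauss(hh_mean, hh_sd), 0.0)
        use_per_hh = max(random.gauss(per_hh_mean, per_hh_sd), 0.0)
        draws.append(households * use_per_hh)   # total demand for this draw
    draws.sort()
    return {"mean": statistics.fmean(draws),
            "p05": draws[int(0.05 * n_draws)],   # 5th percentile
            "p95": draws[int(0.95 * n_draws)]}   # 95th percentile

# e.g. block-level annual electricity demand in MWh (hypothetical numbers)
est = simulate_demand(20000, 1000, 50, 12.0, 2.0)
```

Reporting the percentile band alongside the mean is what lets such a model "reflect inherent uncertainties" in its outputs rather than a single point estimate.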
TU-F-18A-03: Improving Tissue Segmentation for Monte Carlo Dose Calculation Using DECT Data
Di Salvio, A; Bedwani, S; Carrier, J
2014-06-15
Purpose: To develop a new segmentation technique using dual-energy CT (DECT) to overcome limitations of segmentation from a standard Hounsfield unit (HU) to electron density (ED) calibration curve. Both methods are compared with a Monte Carlo analysis of dose distribution. Methods: DECT allows a direct calculation of both ED and effective atomic number (EAN) within a given voxel. The EAN is here defined as a function of the total electron cross-section of a medium. These values can be effectively acquired using a calibrated method from scans at two different energies. A prior stoichiometric calibration on a Gammex RMI phantom allows us to find the parameters to calculate EAN and ED within a voxel. Scans from a Siemens SOMATOM Definition Flash dual-source system provided the data for our study. A Monte Carlo analysis compares dose distributions simulated by DOSXYZnrc, considering a head phantom defined by both segmentation techniques. Results: Results from depth dose and dose profile calculations show that materials with different atomic compositions but similar EAN present differences of less than 1%. Therefore, it is possible to define a short list of basis materials whose density can be adapted to imitate the interaction behavior of any tissue. Comparison of the dose distributions on both segmentations shows a difference of 50% in dose in areas surrounding bone at low energy. Conclusion: The presented segmentation technique allows a more accurate medium definition in each voxel, especially in areas of tissue transition. Since the interaction behavior of human tissues is highly sensitive at low energies, this reduces the errors in the calculated dose distribution. This method could be further developed to optimize tissue characterization based on anatomic site.
Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal
2014-05-12
The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αDB) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αDB. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo mouse model of stroke. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αDB (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for the different tissue models. Although adding random noise to the DCS data resulted in αDB variations, the mean errors in extracting αDB were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αDB using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not depend on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
Tracking in full Monte Carlo detector simulations of 500 GeV e+e- collisions
Ronan, M.T.
2000-03-01
In full Monte Carlo simulation models of future Linear Collider detectors, charged tracks are reconstructed from 3D space points in central tracking detectors. The track reconstruction software is being developed for detailed physics studies that take realistic detector resolution and background modeling into account. At this stage of the analysis, reference tracking efficiencies and resolutions for ideal detector conditions are presented. High-performance detectors are being designed to carry out precision studies of e+e- annihilation events in the energy range of 500 GeV to 1.5 TeV. Physics processes under study include Higgs mass and branching ratio measurements, measurement of possible manifestations of Supersymmetry (SUSY), precision electroweak (EW) studies, and searches for new phenomena beyond current expectations. The relatively low-background machine environment at future Linear Colliders will allow precise measurements if proper consideration is given to the effects of the backgrounds on these studies. In current North American design studies, full Monte Carlo detector simulation and analysis is being used to allow detector optimization taking into account realistic models of machine backgrounds. In this paper, the design of tracking software being developed for full detector reconstruction is discussed. In this study, charged tracks are found from simulated space-point hits, allowing for the straightforward addition of background hits and for the accounting of missing information. The status of the software development effort is quantified by some reference performance measures, which will be modified by future work to include background effects.
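The fitting of space-point hits can be illustrated with a minimal least-squares straight-line fit (purely illustrative: real tracks in a solenoid field are helices, and real pattern recognition must also reject background hits, which this sketch ignores):

```python
def _lsq_line(zs, vs):
    """Ordinary least-squares fit v = slope * z + intercept."""
    n = len(zs)
    zm, vm = sum(zs) / n, sum(vs) / n
    szz = sum((z - zm) ** 2 for z in zs)
    szv = sum((z - zm) * (v - vm) for z, v in zip(zs, vs))
    slope = szv / szz
    return slope, vm - slope * zm

def fit_straight_track(hits):
    """Fit a straight track x(z), y(z) to 3-D space points with two
    independent least-squares lines. A deliberately minimal stand-in
    for full track reconstruction."""
    zs = [h[2] for h in hits]
    return (_lsq_line(zs, [h[0] for h in hits]),
            _lsq_line(zs, [h[1] for h in hits]))

# exact hits on the line x = 0.5 z + 1.0, y = -0.2 z + 3.0 (hypothetical)
hits = [(0.5 * z + 1.0, -0.2 * z + 3.0, float(z)) for z in range(10)]
(xs, xi), (ys, yi) = fit_straight_track(hits)
```

Adding background hits to the `hits` list and measuring how the fitted parameters degrade mirrors, in miniature, the background studies described above.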
Sadeghi, Mahdi; Taghdiri, Fatemeh; Hamed Hosseini, S.; Tenreiro, Claudio
2010-10-15
Purpose: The formalism recommended by Task Group 60 (TG-60) of the American Association of Physicists in Medicine (AAPM) is applicable to β sources. The radioactive, biocompatible, and biodegradable 153Sm glass seed without encapsulation is a β- emitting radionuclide with a short half-life that delivers a high dose rate to the tumor in the millimeter range. This study presents the results of Monte Carlo calculations of the dosimetric parameters for the 153Sm brachytherapy source. Methods: Version 5 of the MCNP Monte Carlo radiation transport code was used to calculate two-dimensional dose distributions around the source. The dosimetric parameters of the AAPM TG-60 recommendations, including the reference dose rate, the radial dose function, the anisotropy function, and the one-dimensional anisotropy function, were obtained. Results: The dose rate at the reference point was estimated to be 9.21 ± 0.6 cGy h⁻¹ μCi⁻¹. Due to the low-energy betas emitted from 153Sm sources, the dose fall-off profile is sharper than for other beta-emitting sources. The dosimetric parameters calculated in this study are compared to those of several beta- and photon-emitting seeds. Conclusions: The results show the advantage of the 153Sm source in comparison with the other sources because of the rapid dose fall-off of the beta rays and the high dose rate at short distances from the seed. The results would be helpful in the development of radioactive implants using 153Sm seeds for brachytherapy treatment.