Chambers County, Texas: Energy Resources | Open Energy Information
County, Texas Reliant Baytown Biomass Facility Places in Chambers County, Texas Anahuac, Texas Baytown, Texas Beach City, Texas Cove, Texas Mont Belvieu, Texas Old...
STATEMENT OF MELANIE KENDERDINE DIRECTOR OF THE OFFICE OF ENERGY...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
2.97gal above the price at Mont Belvieu. This differential sent a strong signal to producers and distributors, and market participants responded by moving additional supplies...
Vilim, R.B.
1985-08-01
The principal methods for performing reactor hot spot analysis are reviewed and examined for potential use in the Applied Physics Division. The semistatistical horizontal method is recommended for future work and is now available as an option in the SE2-ANL core thermal hydraulic code. The semistatistical horizontal method is applied to a small LMR to illustrate the calculation of cladding midwall and fuel centerline hot spot temperatures. The example includes a listing of uncertainties, estimates of their magnitudes, computation of hot spot subfactor values, and calculation of two-sigma temperatures. A review of the uncertainties that affect liquid metal fast reactors is also presented. It was found that hot spot subfactor magnitudes are strongly dependent on the reactor design; reactor-specific details must therefore be carefully studied. 13 refs., 1 fig., 5 tabs.
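The semistatistical combination described above can be sketched as follows. This is an illustrative reading of the method, not code from the report: direct (systematic) subfactors multiply the nominal temperature rise, while statistical subfactors (taken here as 3-sigma values) are combined in quadrature "horizontally" and rescaled to the requested confidence level. All numerical values below are hypothetical.

```python
import math

def two_sigma_temperature(t_coolant, dt_nominal, direct_factors,
                          statistical_subfactors, n_sigma=2.0):
    # direct (systematic) subfactors multiply the nominal temperature rise
    dt_direct = dt_nominal
    for f in direct_factors:
        dt_direct *= f
    # "horizontal" combination: root-sum-square of the 3-sigma statistical
    # deviations, rescaled to the requested sigma level
    rss = math.sqrt(sum((f - 1.0) ** 2 for f in statistical_subfactors))
    return t_coolant + dt_direct * (1.0 + (n_sigma / 3.0) * rss)

# hypothetical cladding midwall case: 400 C coolant, 100 C nominal rise
t_hot = two_sigma_temperature(400.0, 100.0, [1.05], [1.10, 1.08])
```

With these made-up subfactors the two-sigma hot spot temperature comes out a few percent above the nominal 505 C value, which is the qualitative behavior the subfactor formalism is meant to capture.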
Energy Science and Technology Software Center (OSTI)
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
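The create/track/tally/destroy loop such a benchmark exercises can be sketched serially (the MPI particle trading is omitted here); the slab geometry, cross section, and absorption probability below are arbitrary choices for illustration, not MCB's actual model:

```python
import math
import random

def transport(n_particles, sigma_t=1.0, absorb_prob=0.3, thickness=5.0, seed=1):
    rng = random.Random(seed)
    tally = {"absorbed": 0, "leaked": 0}
    for _ in range(n_particles):          # particle creation
        x, mu = 0.0, 1.0                  # born at the left face, moving right
        while True:                       # particle tracking
            # sample flight distance from the exponential free-path distribution
            x += mu * -math.log(1.0 - rng.random()) / sigma_t
            if x < 0.0 or x > thickness:  # escaped the slab: tally and destroy
                tally["leaked"] += 1
                break
            if rng.random() < absorb_prob:  # absorbed: tally and destroy
                tally["absorbed"] += 1
                break
            mu = rng.uniform(-1.0, 1.0)   # isotropic scatter (new direction cosine)
    return tally
```

In a parallel version of this loop, each rank would run its own histories and periodically exchange in-flight particles and partial tallies with its neighbors, which is the communication pattern the benchmark is designed to stress.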
Sensor Placement + Optimization Software (SPOT) | Open Energy...
modeling tools User Interface: Spreadsheet Website: www.archenergy.com/SPOT Cost: Free Language: English References: http://www.archenergy.com/SPOT SPOT(tm) is intended to...
U.S. Energy Information Administration (EIA)
Annual Energy Outlook [U.S. Energy Information Administration (EIA)]
recent price increase may be due to the nearing phase 1 completion of the Targa Galena Park Terminal on the Houston Ship Channel near Mont Belvieu. The terminal's nameplate export...
New construction era reflected in East Texas LPG pipeline
Mittler, T.J.
1990-04-02
Installation of 240 miles of 6, 10, and 12-in. LPG pipelines from Mont Belvieu to Tyler, Tex., has provided greater feedstock-supply flexibility to a petrochemical plant in Longview, Tex. The project, which took place over 18 months, included tie-ins with metering at four Mont Belvieu suppliers. The new 10 and 12-in. pipelines now transport propane while the new and existing parts of a 6-in. pipeline transport propylene.
Energy Science and Technology Software Center (OSTI)
2006-05-09
The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground state energy of the hydrogen atom.
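A minimal variational Monte Carlo program in the same spirit (this is a sketch, not the VARHATOM source): with the trial wavefunction psi = exp(-alpha*r) in atomic units, the local energy is E_L = -alpha^2/2 + (alpha - 1)/r, which equals exactly -0.5 hartree everywhere at alpha = 1.

```python
import math
import random

def vmc_hydrogen(alpha, n_steps=20000, step=0.5, seed=2):
    rng = random.Random(seed)
    pos = [0.5, 0.5, 0.5]                      # electron position (bohr)
    r = math.sqrt(sum(c * c for c in pos))
    energy_sum = 0.0
    for _ in range(n_steps):
        trial = [c + rng.uniform(-step, step) for c in pos]
        r_new = math.sqrt(sum(c * c for c in trial))
        # Metropolis acceptance on |psi|^2 = exp(-2*alpha*r)
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r)):
            pos, r = trial, r_new
        # local energy for psi = exp(-alpha*r)
        energy_sum += -0.5 * alpha * alpha + (alpha - 1.0) / r
    return energy_sum / n_steps
```

Because alpha = 1 is the exact ground state, the sampled local energy has zero variance there; for other alpha the estimator averages to alpha^2/2 - alpha, so minimizing over alpha recovers the ground state energy.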
ARM - Datastreams - aosaeth1spot
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Datastreamsaosaeth1spot Documentation Data Quality Plots Comments? We would love to hear from you! Send us a note below or call us at 1-888-ARM-DATA. Send Datastream : AOSAETH1SPOT Single spot Aethalometer® Active Dates 2014.01.21 - 2016.05.12 Measurement Categories Aerosols, Atmospheric Carbon Originating Instrument Aethalometer (AETH) Measurements Only measurements considered scientifically relevant are shown below by default. Show all measurements Measurement Units Variable Altitude above
Hot Spot | Open Energy Information
definitions:Wikipedia Reegle Tectonic Settings List of tectonic settings known to host modern geothermal systems: Extensional Tectonics Subduction Zone Rift Zone Hot Spot...
SPOT Suite Transforms Beamline Science
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
SPOT Suite Transforms Beamline Science SPOT Suite Transforms Beamline Science SPOT Suite brings advanced algorithms, high performance computing and data management to the masses August 18, 2014 Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov als.jpg Advanced Light Source (ALS) at Berkeley Lab (Photo by Roy Kaltschmidt) Some mysteries of science can only be explained on a nanometer scale -even smaller than a single strand of human DNA, which is about 2.5 nanometers wide. At this scale, scientists
ClearSpot Energy | Open Energy Information
ClearSpot Energy Jump to: navigation, search Name: ClearSpot Energy Sector: Solar Product: US-based solar project developer for rooftop commercial installations. References:...
ARM - Datastreams - aosaeth2spot
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Datastreamsaosaeth2spot Documentation Data Quality Plots ARM Data Discovery Browse Data Datastream : AOSAETH2SPOT Definition needed Active Dates 2015.09.30 - 2016.05.12 Measurement Categories Aerosols, Atmospheric Carbon Originating Instrument Aethalometer (AETH) Measurements Only measurements considered scientifically relevant are shown below by default. Show all measurements Measurement Units
HotSpot | Department of Energy
HotSpot HotSpot Current Central Registry Toolbox Version(s): 2.07.1 Code Owner: Department of Energy, Office of Emergency Operations and Lawrence Livermore National Laboratory (LLNL) Description: The HotSpot Health Physics Code is used for safety-analysis of DOE facilities handling nuclear material. Additionally, HotSpot provides emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. HotSpot
A procedure to determine the planar integral spot dose values of proton pencil beam spots
Anand, Aman; Sahoo, Narayan; Zhu, X. Ronald; Sawakuchi, Gabriel O.; Poenisch, Falk; Amos, Richard A.; Ciangaru, George; Titt, Uwe; Suzuki, Kazumichi; Mohan, Radhe; Gillin, Michael T.
2012-02-15
Purpose: Planar integral spot dose (PISD) of proton pencil beam spots (PPBSs) is a required input parameter for beam modeling in some treatment planning systems used in proton therapy clinics. The measurement of PISD using commercially available large-area ionization chambers, like the PTW Bragg peak chamber (BPC), can have large uncertainties due to the size limitation of these chambers. This paper reports the results of our study of a novel method to determine PISD values from the measured lateral dose profiles and peak dose of the PPBS. Methods: The PISDs of 72.5, 89.6, 146.9, 181.1, and 221.8 MeV PPBSs were determined by area integration of their planar dose distributions at different depths in water. The lateral relative dose profiles of the PPBSs at selected depths were measured using small-volume ion chambers and were investigated for their angular anisotropies using Kodak XV films. The peak spot dose along the beam's central axis (D{sub 0}) was determined by placing a small-volume ion chamber at the center of a broad field created by the superposition of spots at different locations. This method eliminates the positioning uncertainties and the detector size effect that could occur when measuring D{sub 0} in a single PPBS. The PISD was then calculated by integrating the measured lateral relative dose profiles for two different upper limits of integration and multiplying the result by the corresponding D{sub 0}. The first limit of integration was set to the radius of the BPC, namely 4.08 cm, giving PISD{sub RBPC}. The second limit was set to the radial distance at which the profile dose falls below 0.1% of the peak, giving PISD{sub full}. The calculated values of PISD{sub RBPC} obtained from the area integration method were compared with the BPC-measured values. Long tail dose correction factors (LTDCFs) were determined from the ratio PISD{sub full}/PISD{sub RBPC} at different depths for PPBSs of different energies.
Results: The spot profiles were found to have angular anisotropy. This anisotropy in the PPBS dose distribution could be accounted for in a reasonably approximate manner by taking the average of PISD values obtained using the in-line and cross-line profiles. The PISD{sub RBPC} values fall within 3.5% of those measured by the BPC. Given the inherent challenges of PPBS dosimetry, which can lead to large experimental uncertainties, such an agreement is considered satisfactory for validation purposes. The PISD{sub full} values show differences ranging from 1% to 11% from BPC-measured values, mainly due to the size limitation of the BPC in accounting for the dose in the long tail regions of the spots extending beyond its 4.08 cm radius. The dose in the long tail regions occurs both for high energy beams such as the 221.8 MeV PPBS, due to the contributions of nuclear interaction products in the medium, and for low energy PPBSs, because of their larger spot sizes. The calculated LTDCF values agree within 1% with those determined by Monte Carlo (MC) simulations. Conclusions: The area integration method to compute the PISD from PPBS lateral dose profiles is found to be useful both to determine correction factors for the values measured by the BPC and to validate the results of MC simulations.
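The area-integration step can be illustrated numerically. The core-plus-halo Gaussian profile below is an assumed stand-in for the measured lateral profiles, and every parameter value is hypothetical; only the procedure (radial integration to the BPC radius versus the full extent, with the ratio giving a long tail correction factor) mirrors the paper.

```python
import math

def pisd(sigma_core=0.6, halo_amp=0.005, sigma_halo=3.0,
         r_bpc=4.08, r_full=15.0, dr=0.001):
    # assumed lateral profile (cm): narrow Gaussian core plus a faint wide halo
    def profile(r):
        return (math.exp(-r * r / (2.0 * sigma_core ** 2))
                + halo_amp * math.exp(-r * r / (2.0 * sigma_halo ** 2)))

    def integrate(r_max):   # midpoint rule for 2*pi * integral of p(r) * r dr
        total, r = 0.0, 0.5 * dr
        while r < r_max:
            total += profile(r) * r * dr
            r += dr
        return 2.0 * math.pi * total

    pisd_rbpc = integrate(r_bpc)     # chamber-limited integral (4.08 cm radius)
    pisd_full = integrate(r_full)    # integral out to the tail cutoff
    return pisd_rbpc, pisd_full, pisd_full / pisd_rbpc   # last value: LTDCF
```

With this particular halo the correction factor lands around 4 to 5 percent, inside the 1 to 11 percent range of differences the abstract reports between PISD{sub full} and BPC-measured values.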
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Quantum Monte Carlo for the Electronic Structure of Atoms and Molecules Brian Austin Lester Group, U.C. Berkeley BES Requirements Workshop Rockville, MD February 9, 2010 Outline Applying QMC to diverse chemical systems Select systems with high interest and impact Phenol: bond dissociation energy Retinal: excitation energy Algorithmic details Parallel Strategy Wave function evaluation O-H Bond Dissociation Energy of Phenol Ph-OH Ph-O * + H * (36 valence electrons)
HotSpot Software Configuration Management Plan
Walker, H; Homann, S G
2009-03-12
This Software Configuration Management Plan (SCMP) describes the software configuration management procedures used to ensure that the HotSpot dispersion model meets the requirements of its user base, which includes: (1) Users of the PC version of HotSpot for consequence assessment, hazard assessment and safety analysis calculations; and (2) Users of the NARAC Web and iClient software tools, which allow users to run HotSpot for consequence assessment modeling. These users and sponsors of the HotSpot software and the organizations they represent constitute the intended audience for this document. This plan is intended to meet Critical Recommendations 1 and 3 from the Software Evaluation of HotSpot and DOE Safety Software Toolbox Recommendation for inclusion of HotSpot in the Department of Energy (DOE) Safety Software Toolbox. HotSpot software is maintained for the Department of Energy Office of Emergency Operations by the National Atmospheric Release Advisory Center (NARAC) at Lawrence Livermore National Laboratory (LLNL). An overview of HotSpot and NARAC is provided.
Energy Science and Technology Software Center (OSTI)
2007-07-26
The TEVA-SPOT Toolkit (SPOT) supports the design of contaminant warning systems (CWSs) that use real-time sensors to detect contaminants in municipal water distribution networks. Specifically, SPOT provides the capability to select the locations for installing sensors in order to maximize the utility and effectiveness of the CWS. SPOT models the sensor placement process as an optimization problem, and the user can specify a wide range of performance objectives for contaminant warning system design, including population health effects, time to detection, extent of contamination, volume consumed and number of failed detections. For example, a SPOT user can integrate expert knowledge during the design process by specifying required sensor placements or designating network locations as forbidden. Further, cost considerations can be integrated by limiting the design with user-specified installation costs at each location.
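A toy version of the underlying optimization, assuming a greedy heuristic (one of several solvers such a toolkit might employ) and a mean-impact objective; the scenario/impact data structure and all numbers are invented for illustration:

```python
def greedy_placement(impact, n_sensors):
    # impact[s][n]: damage incurred in scenario s if first detection is at
    # node n; a node absent from the dict never detects that scenario.
    # Undetected scenarios incur a fixed penalty (an assumed modeling choice).
    undetected = 2.0 * max(max(row.values()) for row in impact)
    nodes = set().union(*(row.keys() for row in impact))

    def mean_impact(sensors):
        total = 0.0
        for row in impact:
            seen = [row[n] for n in sensors if n in row]
            total += min(seen) if seen else undetected
        return total / len(impact)

    chosen = []
    for _ in range(n_sensors):   # greedily add the sensor that helps most
        best = min(nodes - set(chosen), key=lambda n: mean_impact(chosen + [n]))
        chosen.append(best)
    return chosen, mean_impact(chosen)

# three contamination scenarios over nodes a, b, c (made-up impact numbers)
scenarios = [{"a": 1.0, "b": 5.0}, {"b": 2.0}, {"a": 3.0, "c": 1.0}]
sensors, risk = greedy_placement(scenarios, 2)
```

Required or forbidden locations, as mentioned in the abstract, would simply seed the `chosen` list or shrink the `nodes` set before the greedy loop runs.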
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Eolica Montes de Cierzo | Open Energy Information
Montes de Cierzo Jump to: navigation, search Name: Eolica Montes de Cierzo Place: Navarra, Spain Sector: Wind energy Product: Spanish wind farm developer in the region of Navarra....
Investigations of initiation spot size effects
Clarke, Steven A; Akinci, Adrian A; Leichty, Gary; Schaffer, Timothy; Murphy, Michael J; Munger, Alan; Thomas, Keith A
2010-01-01
As explosive components become smaller, a greater understanding of the effect of initiation spot size on detonation becomes increasingly critical. A series of tests of the effect of initiation spot size will be described. A series of DOI (direct optical initiation) detonators with initiation spot sizes from {approx}50 um to 1000 um have been tested to determine laser parameters for threshold firing of low density PETN pressings. Results will be compared with theoretical predictions. Outputs of the initiation source (DOI ablation) have been characterized by a suite of diagnostics including PDV and schlieren imaging. Outputs of complete detonators have been characterized using PDV, streak, and/or schlieren imaging. At present, we have not found the expected change in the threshold energy to spot size relationship for DOI type detonators that was found in similar earlier work on projectiles, slappers and EBWs. New detonator designs (Type C) are currently being tested that will allow determination of the threshold for spot sizes from 250 um to 105 um, where we hope to see a change in the threshold vs. spot size relationship. Also, one test of an extremely small diameter spot size (50 um) has resulted in preliminary NoGo-only results even at energy densities as much as 8 times the energy density of the threshold results presented here. This gives preliminary evidence that a 50 um spot may be beyond the critical initiation diameter. The constant threshold energy to spot size relationship in the data to date does, however, still give some insight into the initiation mechanism of DOI detonators. If the DOI initiation mechanism were a 1D mechanism similar to a slapper or a flyer impact, the expected inflection point in the graph would have been between 300 um and 500 um diameter spot size, within the range of the data presented here. The lack of that inflection point indicates that the DOI initiation mechanism is more likely a 2D mechanism similar to a sphere or rod projectile.
We expect to see a three region response as the results from the smaller spot size Type C detonators are completed.
Energy Science and Technology Software Center (OSTI)
2010-03-02
The HotSpot Health Physics Codes were created to provide emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. The software is also used for safety-analysis of facilities handling nuclear material. HotSpot provides a fast and usually conservative means for estimating the radiation effects associated with the short-term (less than 24 hours) atmospheric release of radioactive materials.
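The dispersion estimate at the core of codes like this is the Gaussian plume model. The sketch below is the generic textbook form with ground reflection, not HotSpot's actual implementation; HotSpot derives the dispersion sigmas from atmospheric stability class and downwind distance, whereas here they are supplied directly.

```python
import math

def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """Gaussian plume air concentration at crosswind offset y and height z,
    for source term q, wind speed u, and effective release height h."""
    lateral = math.exp(-y * y / (2.0 * sigma_y ** 2))
    # vertical spread plus an image-source term for ground reflection
    vertical = (math.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

On the plume centerline at ground level for a ground-level release (y = z = h = 0), the expression reduces to q / (pi * u * sigma_y * sigma_z), a convenient sanity check.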
Energy Science and Technology Software Center (OSTI)
2013-04-18
The HotSpot Health Physics Codes were created to provide emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. The software is also used for safety-analysis of facilities handling nuclear material. HotSpot provides a fast and usually conservative means for estimating the radiation effects associated with the short-term (less than 24 hours) atmospheric release of radioactive materials.
Isotropic Monte Carlo Grain Growth
Energy Science and Technology Software Center (OSTI)
2013-04-25
IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
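A minimal isotropic version of this kind of simulation can be sketched with zero-temperature Potts dynamics, here on a square rather than hexagonal grid for brevity (the details of IMCGG itself will differ): each site may adopt a random neighbor's orientation when that does not increase the number of unlike-neighbor bonds, so grains coarsen over time.

```python
import random

def potts_sweep(grid, rng):
    # one Monte Carlo sweep over a periodic square lattice: a site flips to a
    # random neighbor's orientation if that does not raise its boundary energy
    # (counted as the number of unlike-neighbor bonds)
    n = len(grid)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nbrs = [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                grid[i][(j - 1) % n], grid[i][(j + 1) % n]]
        new, old = rng.choice(nbrs), grid[i][j]
        if sum(s != new for s in nbrs) <= sum(s != old for s in nbrs):
            grid[i][j] = new
    return grid

rng = random.Random(0)
n = 16
grid = [[rng.randrange(8) for _ in range(n)] for _ in range(n)]
grains_before = len({s for row in grid for s in row})
for _ in range(5):
    potts_sweep(grid, rng)
grains_after = len({s for row in grid for s in row})
```

Because a flip can only adopt an orientation already present among the neighbors, distinct orientations can disappear but never reappear, which is the signature of normal grain growth in this model.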
ATS Spotted MSI Analysis with Matlab
Energy Science and Technology Software Center (OSTI)
2012-02-09
Samples are placed on a surface using an acoustic transfer system (ATS). This results in one or more small droplets on a surface. Typically there are hundreds to thousands of these droplets arrayed in a regular coordinate system. The surface is analyzed using mass spectrometry imaging (MSI) and at each position, one or more mass spectra are recorded. The purpose of the software is to help the user assign locations to the spots and build a report for each spot.
Mont Vista Capital LLC | Open Energy Information
Vista Capital LLC Jump to: navigation, search Name: Mont Vista Capital LLC Place: New York, New York Zip: 10167 Sector: Services Product: Mont Vista Capital is a leading global...
Solar Renewable Energy Credits (SRECs) Spot Market Program
Broader source: Energy.gov [DOE]
NOTE: While interested parties can still trade DE SRECs in the spot market, the spot market in itself is limited since most of the SRECs produced are part of the SREC Purchase Program, or the SREC...
Friction Stir Spot Welding of Advanced High Strength Steels II...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Friction Stir Spot Welding of Advanced High Strength Steels II Friction Stir Spot Welding of Advanced High Strength Steels II 2011 DOE Hydrogen and Fuel Cells Program, and Vehicle ...
IR Spot Weld Inspect () | SciTech Connect
Office of Scientific and Technical Information (OSTI)
Software: IR Spot Weld Inspect Citation Details Software Request Title: IR Spot Weld Inspect In the automotive industry, destructive inspection of spot welds is still the mandatory quality assurance method due to the lack of efficient non-destructive evaluation (NDE) tools. However, it is costly and time-consuming. Recently at ORNL, a new NDE prototype system for spot weld inspection using infrared (IR) thermography has been developed to address this problem. This software contains all the key
Friction Stir Spot Welding of Advanced High Strength Steels
Hovanski, Yuri; Grant, Glenn J.; Santella, M. L.
2009-11-13
Friction stir spot welding techniques were developed to successfully join several advanced high strength steels. Two distinct tool materials were evaluated to determine the effect of tool materials on the process parameters and joint properties. Welds were characterized primarily via lap shear, microhardness, and optical microscopy. Friction stir spot welds were compared to the resistance spot welds in similar strength alloys by using the AWS standard for resistance spot welding high strength steels. As further comparison, a primitive cost comparison between the two joining processes was developed, which included an evaluation of the future cost prospects of friction stir spot welding in advanced high strength steels.
Monte Carlo Simulations of APEX
Xu, G.
1995-10-01
Monte Carlo simulations of the APEX apparatus, a spectrometer designed to measure positron-electron pairs produced in heavy-ion collisions, carried out using GEANT, are reported. The results of these simulations are compared with data from measurements of conversion electron, positron and pair emitting sources as well as with the results of in-beam measurements of positrons and electrons. The overall description of the performance of the apparatus is excellent.
Energy Monte Carlo (EMCEE) | Open Energy Information
with a specific set of distributions. Both programs run as spreadsheet workbooks in Microsoft Excel. EMCEE and Emc2 require Crystal Ball, a commercially available Monte Carlo...
Hot spot-derived shock initiation phenomena in heterogeneous nitromethane
Office of Scientific and Technical Information (OSTI)
(Conference) | SciTech Connect Conference: Hot spot-derived shock initiation phenomena in heterogeneous nitromethane Citation Details In-Document Search Title: Hot spot-derived shock initiation phenomena in heterogeneous nitromethane The addition of solid silica particles to gelled nitromethane offers a tractable model system for interrogating the role of impedance mismatches as one type of hot spot 'seed' on the initiation behaviors of explosive formulations. Gas gun-driven plate impact
Friction Stir Spot Welding of DP780 Carbon Steel
Santella, M. L.; Hovanski, Yuri; Frederick, Alan; Grant, Glenn J.; Dahl, Michael E.
2009-09-15
Friction stir spot welds were made in uncoated and galvannealed DP780 sheets using polycrystalline boron nitride stir tools. The tools were plunged at either a single continuous rate or in two segments consisting of a relatively high rate followed by a slower rate of shorter depth. Welding times ranged from 1-10 s. Increasing tool rotation speed from 800 to 1600 rpm increased strength values. The 2-segment welding procedures also produced higher strength joints. Average lap-shear strengths exceeding 10.3 kN were consistently obtained in 4 s on both the uncoated and the galvannealed DP780. The likelihood of diffusion and mechanical interlocking contributing to bond formation was supported by metallographic examinations. A cost analysis based on spot welding in automobile assembly showed that for friction stir spot welding to be economically competitive with resistance spot welding the cost of stir tools must approach that of resistance spot welding electrode tips.
Jefferson Lab finds its man Mont (Inside Business) | Jefferson...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
https://www.jlab.org/news/articles/jefferson-lab-finds-its-man-mont-inside-business Jefferson Lab finds its man Mont Hugh Montgomery Hugh Montgomery, a British nuclear physicist...
Finite Cosmology and a CMB Cold Spot
Adler, R.J.; Bjorken, J.D.; Overduin, J.M.; /Stanford U., HEPL
2006-03-20
The standard cosmological model posits a spatially flat universe of infinite extent. However, no observation, even in principle, could verify that the matter extends to infinity. In this work we model the universe as a finite spherical ball of dust and dark energy, and obtain a lower limit estimate of its mass and present size: the mass is at least 5 x 10{sup 23}M{sub {circle_dot}} and the present radius is at least 50 Gly. If we are not too far from the dust-ball edge we might expect to see a cold spot in the cosmic microwave background, and there might be suppression of the low multipoles in the angular power spectrum. Thus the model may be testable, at least in principle. We also obtain and discuss the geometry exterior to the dust ball; it is Schwarzschild-de Sitter with a naked singularity, and provides an interesting picture of cosmogenesis. Finally we briefly sketch how radiation and inflation eras may be incorporated into the model.
Hot spot-ridge crest convergence in the northeast Pacific
Karsten, J.L.; Delaney, J.R.
1989-01-10
Evolution of the Juan de Fuca Ridge during the past 7 m.y. has been reconstructed taking into account both the propagating rift history and migration of the spreading center in the 'absolute' (fixed hot spot) reference frame. Northwestward migration of the spreading center (at a rate of 30 km/m.y.) has resulted in progressive encroachment of the ridge axis on the Cobb Hot Spot and westward jumping of the central third of the ridge axis more recently than 0.5 Ma. Seamounts in the Cobb-Eickelberg chain are predicted to display systematic variations in morphology and petrology, and a reduction in the age contrast between the edifice and underlying crust, as a result of the ridge axis approach. Relative seamount volumes also indicate that magmatic output of the hot spot varied during this interval, with a reduction in activity between 2.5 and 4.5 Ma, compared with relatively more robust activity before and after this period. Spatial relationships determined in this reconstruction allow hypotheses relating hot spot activity and rift propagation to be evaluated. In most cases, rift propagation has been directed away from the hot spot during the time period considered. Individual propagators show some reduction in propagation rate as separation between the propagating rift tip and hot spot increases, but cross comparison of multiple propagators does not uniformly display the same relationship. No obvious correlation exists between propagation rate and increasing proximity of the hot spot to the ridge axis or increasing hot spot output. Taken together, these observations do not offer compelling support for the concept of hot spot driven rift propagation. However, short-term reversals in propagation direction at the Cobb Offset coincide with activity of the Heckle melting anomaly, suggesting that local propagation effects may be related to excess magma supply at the ridge axis.
Wall and laser spot motion in cylindrical hohlraums
Huser, G.; Courtois, C.; Monteil, M.-C.
2009-03-15
Wall and laser spot motion measurements in empty, propane-filled and plastic (CH)-lined gold coated cylindrical hohlraums were performed on the Omega laser facility [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)]. Wall motion was measured using axial two-dimensional (2D) x-ray imaging and laser spot motion was perpendicularly observed through a thinned wall using streaked hard x-ray imaging. Experimental results and 2D hydrodynamic simulations show that while empty targets exhibit on-axis plasma collision, CH-lined and propane-filled targets inhibit wall expansion, corroborated with perpendicular streaked imaging showing a slower motion of laser spots.
Forecasting Crude Oil Spot Price Using OECD Petroleum Inventory Levels
Reports and Publications (EIA)
2003-01-01
This paper presents a short-term monthly forecasting model of West Texas Intermediate crude oil spot price using Organization for Economic Cooperation and Development (OECD) petroleum inventory levels.
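The simplest form of such a model is an ordinary least squares fit of the spot price against the deviation of inventories from their normal level; the actual EIA model is monthly and more elaborate, and every number below is invented for illustration:

```python
def fit_line(xs, ys):
    # ordinary least squares for y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# invented data: OECD inventory deviation from normal (million bbl)
# versus WTI spot price (dollars per barrel)
deviation = [-60.0, -30.0, 0.0, 25.0, 50.0]
price = [38.0, 33.5, 29.0, 26.0, 22.5]
a, b = fit_line(deviation, price)
forecast = a + b * 10.0   # predicted price at a 10 million bbl surplus
```

The fitted slope should be negative, reflecting the usual inverse relationship between inventory surpluses and spot prices that motivates inventory-based forecasting models.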
Imager Spots and Samples Tiny Tumors | Jefferson Lab
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Imager Spots and Samples Tiny Tumors Imager Spots and Samples Tiny Tumors NEWPORT NEWS, Va. Feb. 8, 2008 -- The positron emission mammography/tomography breast imaging and biopsy system was designed and constructed by scientists at Jefferson Lab, West Virginia University and the University of Maryland School of Medicine. The PEM/PET system is designed for detecting and guiding the biopsies of suspicious breast cancer lesions. "This is the most-important and most-difficult imager we've
Jefferson Lab Medical Imager Spots Breast Cancer | Jefferson Lab
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Medical Imager Spots Breast Cancer PEM This PEM image shows two cancerous lesions. The one on the right was depicted by conventional mammography, but the one on the left was only identified by the PEM unit. Image courtesy: Eric Rosen, Duke University Medical Center Jefferson Lab Medical Imager Spots Breast Cancer March 3, 2005 Newport News, VA - A study published in the February issue of the journal Radiology shows that a positron emission mammography (PEM) device designed and built by Jefferson
Portsmouth Training Exercise Helps Radiological Trainees Spot Mistakes
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Safely | Department of Energy Portsmouth Training Exercise Helps Radiological Trainees Spot Mistakes Safely Portsmouth Training Exercise Helps Radiological Trainees Spot Mistakes Safely February 11, 2016 - 12:10pm Addthis Connie Martin performs work inside the Error Lab while trainees observe her actions for mistakes. Connie Martin performs work inside the Error Lab while trainees observe her actions for mistakes. Lorrie Graham (left) talks with trainees in a classroom setting before
Wang, Dongxu; Dirksen, Blake; Hyer, Daniel E.; Buatti, John M.; Sheybani, Arshin; Dinges, Eric; Felderman, Nicole; TenNapel, Mindi; Bayouth, John E.; Flynn, Ryan T.
2014-12-15
Purpose: To determine the plan quality of proton spot scanning (SS) radiosurgery as a function of spot size (in-air sigma) in comparison to x-ray radiosurgery for treating peripheral brain lesions. Methods: Single-field optimized (SFO) proton SS plans with sigma ranging from 1 to 8 mm, cone-based x-ray radiosurgery (Cone), and x-ray volumetric modulated arc therapy (VMAT) plans were generated for 11 patients. Plans were evaluated using secondary cancer risk and brain necrosis normal tissue complication probability (NTCP). Results: For all patients, secondary cancer is a negligible risk compared to brain necrosis NTCP. Secondary cancer risk was lower in proton SS plans than in photon plans regardless of spot size (p = 0.001). Brain necrosis NTCP increased monotonically from an average of 2.34/100 (range 0.42/100–4.49/100) to 6.05/100 (range 1.38/100–11.6/100) as sigma increased from 1 to 8 mm, compared to the average of 6.01/100 (range 0.82/100–11.5/100) for Cone and 5.22/100 (range 1.37/100–8.00/100) for VMAT. An in-air sigma less than 4.3 mm was required for proton SS plans to reduce NTCP over photon techniques for the cohort of patients studied with statistical significance (p = 0.0186). Proton SS plans with in-air sigma larger than 7.1 mm had significantly greater brain necrosis NTCP than photon techniques (p = 0.0322). Conclusions: For treating peripheral brain lesions—where proton therapy would be expected to have the greatest depth-dose advantage over photon therapy—the lateral penumbra strongly impacts the SS plan quality relative to photon techniques: proton beamlet sigma at patient surface must be small (<7.1 mm for three-beam single-field optimized SS plans) in order to achieve comparable or smaller brain necrosis NTCP relative to photon radiosurgery techniques. Achieving such small in-air sigma values at low energy (<70 MeV) is a major technological challenge in commercially available proton therapy systems.
Friction Stir Spot Welding of DP780 Carbon Steel
Santella, Michael L [ORNL; Hovanski, Yuri [ORNL; Frederick, David Alan [ORNL; Grant, Glenn J [ORNL; Dahl, Michael E [ORNL
2010-01-01
Friction stir spot welds were made in uncoated and galvannealed DP780 sheets using polycrystalline boron nitride stir tools. The tools were plunged at either a single continuous rate or in two segments consisting of a relatively high rate followed by a slower rate of shorter depth. Welding times ranged from 1 to 10 s. Increasing tool rotation speed from 800 to 1600 rev min{sup -1} increased strength values. The 2-segment welding procedures also produced higher strength joints. Average lap shear strengths exceeding 10.3 kN were consistently obtained in 4 s on both the uncoated and the galvannealed DP780. The likelihood of diffusion and mechanical interlocking contributing to bond formation was supported by metallographic examinations. A cost analysis based on spot welding in automobile assembly showed that for friction stir spot welding to be economically competitive with resistance spot welding the cost of stir tools must approach that of resistance spot welding electrode tips.
FRICTION STIR SPOT WELDING OF 6016 ALUMINUM ALLOY
Mishra, Rajiv S.; Webb, S.; Freeney, T. A.; Chen, Y. L.; Gayden, X.; Grant, Glenn J.; Herling, Darrell R.
2007-01-08
Friction stir spot welding (FSSW) of 6016 aluminum alloy was evaluated with conventional pin tool and new off-center feature tools. The off-center feature tool provides significant control over the joint area. The tool rotation rate was varied between 1000 and 2500 rpm. Maximum failure strength was observed in the tool rotation range of 1200-1500 rpm. The results are interpreted in the context of material flow in the joint and influence of thermal input on microstructural changes. The off-center feature tool concept opens up new possibilities for plunge-type friction stir spot welding.
REAL TIME ULTRASONIC ALUMINUM SPOT WELD MONITORING SYSTEM
Regalado, W. Perez; Chertov, A. M.; Maev, R. Gr. [Institute for Diagnostic Imaging Research, Physics Department, University of Windsor, 292 Essex Hall, 401 Sunset Ave. N9B 3P4 Windsor, Ontario (Canada)
2010-02-22
Aluminum alloys possess several properties that make them among the most popular engineering materials: excellent corrosion resistance and a high strength-to-weight ratio. Resistance spot welding of aluminum alloys is widely used today, but the oxide film and aluminum's thermal and electrical properties make spot welding a difficult task. Electrode degradation due to pitting, alloying, and mushrooming decreases weld quality, so parameters such as current and force must be adjusted. To make these adjustments and ensure weld quality, a tool that measures weld quality in real time is required. In this paper, a real-time ultrasonic non-destructive evaluation system for aluminum spot welds is presented. The system is able to monitor nugget growth while the spot weld is being made. This is achieved by interpreting the echoes of an ultrasound transducer located in one of the welding electrodes. The transducer transmits and receives an ultrasound signal at different times during the welding cycle. Valuable information about the weld quality is embedded in this signal. The system determines the weld nugget diameter by measuring the delays of the ultrasound signals received during the complete welding cycle. The article presents the system's performance on aluminum alloy AA6022.
Dynamic Characterization of Spot Welds for AHSS | Department of Energy
Broader source: Energy.gov (indexed) [DOE]
DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation (lm025_feng_2011_o.pdf). Related publications: Overview of Joining Activities in Lightweighting Materials; FY 2009 Progress Report for Lightweighting Materials - 9. Joining
Quantum Monte Carlo by message passing
Bonca, J.; Gubernatis, J.E.
1993-01-01
We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.
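The "trivial parallelization" described here is embarrassingly parallel: each worker runs an independent Monte Carlo calculation and the master combines the results. A minimal sketch of that pattern using Python's `multiprocessing` in place of PVM, estimating π rather than a Green's function (the π example is purely illustrative, not the paper's algorithm):

```python
import random
from multiprocessing import Pool

def mc_pi_chunk(args):
    """One worker's independent Monte Carlo estimate of pi:
    fraction of random points in the unit square that fall
    inside the quarter circle, times 4."""
    seed, n = args
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

def parallel_pi(workers=4, samples_per_worker=100_000):
    # Trivial parallelism: workers never communicate during the run;
    # the master simply averages their independent estimates,
    # exactly as a PVM master would gather worker results.
    with Pool(workers) as pool:
        estimates = pool.map(mc_pi_chunk,
                             [(seed, samples_per_worker)
                              for seed in range(workers)])
    return sum(estimates) / len(estimates)

if __name__ == "__main__":
    print(parallel_pi())
```

Because each worker holds its own seeded generator, the runs are statistically independent and the variance of the combined estimate falls as 1/(workers × samples_per_worker).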
Avoiding Carbon Bed Hot Spots in Thermal Process Off-Gas Systems
Office of Scientific and Technical Information (OSTI)
The Information Role of Spot Prices and Inventories
U.S. Energy Information Administration (EIA) Indexed Site
The Information Role of Spot Prices and Inventories. James L. Smith, Rex Thompson, and Thomas Lee. June 24, 2014. Independent Statistics & Analysis, www.eia.gov, U.S. Energy Information Administration, Washington, DC 20585. This paper is released to encourage discussion and critical comment. The analysis and conclusions expressed here are those of the authors and not necessarily those of the U.S. Energy Information Administration. WORKING PAPER SERIES, June 2014.
Texas students win regional National Science Bowl competition, secure spot in finals in nation's capital
National Nuclear Security Administration (NNSA)
Monday, March 21, 2016 - 10:22am. NPO's Mark Padilla congratulates the winning Amarillo High School Team Black on their victory at the Pantex Science Bowl 2016. More than 200 students from 37 high schools across the Texas Panhandle gathered together with a few hundred volunteers for a meeting and competition.
X-ray focal spot locating apparatus and method
Gilbert, Hubert W.
1985-07-30
An X-ray beam finder for locating a focal spot of an X-ray tube includes a mass of X-ray opaque material having first and second axially-aligned, parallel-opposed faces connected by a plurality of substantially identical parallel holes perpendicular to the faces and a film holder for holding X-ray sensitive film tightly against one face while the other face is placed in contact with the window of an X-ray head.
Status of Monte-Carlo Event Generators
Hoeche, Stefan; /SLAC
2011-08-11
Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state partons and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E.; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi-Dirac distribution function, and scattering is via a Pauli-blocked binary collision approximation. The algorithm is tested against degenerate electron-ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
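The two ingredients named in the abstract can be sketched as follows: particle energies are initialised by rejection sampling from a Fermi-Dirac weight, and a proposed collision is Pauli-blocked with probability equal to the occupation of the final state. The units, chemical potential `mu`, and temperature `theta` below are dimensionless illustrations, not the paper's actual scheme:

```python
import math
import random

def fd_weight(x, mu, theta):
    """Unnormalized Fermi-Dirac energy density: density of states
    sqrt(E) times occupation 1/(exp((E - mu)/theta) + 1)."""
    return math.sqrt(x) / (math.exp((x - mu) / theta) + 1.0)

def sample_fd_energies(n, mu=1.0, theta=0.2, xmax=5.0, seed=1):
    """Rejection sampling with a uniform proposal on [0, xmax],
    bounded by the numerically located peak of the weight."""
    rng = random.Random(seed)
    peak = max(fd_weight(0.001 + i * xmax / 2000.0, mu, theta)
               for i in range(2000))
    out = []
    while len(out) < n:
        x = rng.uniform(0.0, xmax)
        if rng.uniform(0.0, peak) < fd_weight(x, mu, theta):
            out.append(x)
    return out

def pauli_blocked(e_final, mu, theta, rng):
    """Pauli blocking: the collision is rejected with probability
    f(E_final), the occupation of the proposed final state."""
    occupation = 1.0 / (math.exp((e_final - mu) / theta) + 1.0)
    return rng.random() < occupation  # True -> collision blocked
```

For energies well above the chemical potential the occupation is nearly zero, so collisions into those states are almost never blocked, recovering classical behaviour in the non-degenerate limit.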
Monte Carlo simulation for the transport beamline
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning system in order to optimize the number of shots and the dose delivery.
A Fast Monte Carlo Simulation for the International Linear Collider Detector
Office of Scientific and Technical Information (OSTI)
The following paper contains details concerning the motivation for, implementation, and performance of a Java-based fast Monte Carlo simulation for a detector designed to be used in the International Linear Collider. This simulation, presently included...
Correlated electron dynamics with time-dependent quantum Monte Carlo: Three-dimensional helium
Office of Scientific and Technical Information (OSTI)
Here the recently proposed time-dependent quantum Monte Carlo method is applied to three-dimensional para- and ortho-helium atoms subjected to an external electromagnetic field with amplitude sufficient...
Tests of Monte Carlo Independent Column Approximation in the...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Jarvinen, Heikki (Finnish Meteorological Institute). Category: Modeling. The Monte Carlo Independent Column Approximation (McICA) was recently introduced...
Monte-Carlo particle dynamics in a variable specific impulse...
Office of Scientific and Technical Information (OSTI)
... accuracy without compromising the speed of the simulation. ... simulations for systems of hundreds of thousands of ...
Cluster expansion modeling and Monte Carlo simulation of alnico...
Office of Scientific and Technical Information (OSTI)
Accepted Manuscript: Cluster expansion modeling and Monte Carlo simulation of alnico 5-7 permanent magnets. This content will become publicly available on March 5, 2016.
Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator...
Office of Scientific and Technical Information (OSTI)
FLUKA is a general purpose ...
Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations
Office of Scientific and Technical Information (OSTI)
Molecular Monte Carlo Simulations Using Graphics Processing Units...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors....
HILO: Quasi Diffusion Accelerated Monte Carlo on Hybrid Architectures
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
...fidelity simulation of a diverse range of kinetic systems.
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.
2014-10-01
We present a multilevel Monte Carlo numerical method, new to plasma physics and highly efficient, for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau-Fokker-Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on whether the underlying discretization is Milstein or Euler-Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. We discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
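The multilevel structure described above can be sketched on a toy Langevin problem: couple coarse and fine Euler-Maruyama paths through shared Brownian increments, and combine the levels in a telescoping sum so that most samples are spent on the cheap coarse levels. This is a generic illustration of the MLMC pattern on geometric Brownian motion, not the authors' Coulomb-collision discretization:

```python
import math
import random

def coupled_endpoints(rng, level, m=2, t_end=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Euler-Maruyama endpoints of dX = mu*X dt + sigma*X dW on the
    fine grid (m**level steps) and coarse grid (m**(level-1) steps),
    driven by the SAME Brownian increments -- the MLMC coupling."""
    nf = m ** level
    dtf = t_end / nf
    xf = x0
    if level == 0:
        dw = math.sqrt(dtf) * rng.gauss(0.0, 1.0)
        xf += mu * xf * dtf + sigma * xf * dw
        return xf, None
    xc = x0
    nc = m ** (level - 1)
    dtc = t_end / nc
    for _ in range(nc):
        dwc = 0.0
        for _ in range(m):           # m fine steps per coarse step
            dw = math.sqrt(dtf) * rng.gauss(0.0, 1.0)
            xf += mu * xf * dtf + sigma * xf * dw
            dwc += dw                # coarse path reuses the sum
        xc += mu * xc * dtc + sigma * xc * dwc
    return xf, xc

def mlmc_mean(levels=4, samples_per_level=(4000, 2000, 1000, 500), seed=3):
    """Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]:
    many cheap samples on coarse levels, few on fine levels."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(levels):
        n = samples_per_level[level]
        acc = 0.0
        for _ in range(n):
            fine, coarse = coupled_endpoints(rng, level)
            acc += fine - (coarse if coarse is not None else 0.0)
        total += acc / n
    return total
```

Because the coupled correction terms have small variance, the fine levels need far fewer samples than a single-level estimator at the same accuracy, which is the source of the O(ε⁻²) versus O(ε⁻³) cost gap quoted in the abstract.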
Quantum Monte Carlo methods for nuclear physics
Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.
2015-09-09
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E.; Wiringa, Robert B.
2014-10-19
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Friction Stir Spot Welding of Advanced High Strength Steels
Hovanski, Yuri; Santella, M. L.; Grant, Glenn J.
2009-12-28
Friction stir spot welding was used to join two advanced high-strength steels using polycrystalline cubic boron nitride tooling. Numerous tool designs were employed to study the influence of tool geometry on weld joints produced in both DP780 and a hot-stamp boron steel. Tool designs included conventional, concave shouldered pin tools with several pin configurations; a number of shoulderless designs; and a convex, scrolled shoulder tool. Weld quality was assessed based on lap shear strength, microstructure, microhardness, and bonded area. Mechanical properties were functionally related to bonded area and joint microstructure, demonstrating the necessity to characterize processing windows based on tool geometry.
Unique Bioreactor Finds Algae's Sweet Spot - News Feature | NREL
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Unique Bioreactor Finds Algae's Sweet Spot February 18, 2014 Close-up photo of a vial of green algae. Enlarge image Aeration helps algae grow and helps replicate real-life conditions in the Simulated Algal Growth Environment (SAGE) reactor at NREL. The reactor controls light and temperature, helping scientists determine not just what strain will grow the best, but where in the United States it may do so. Ideal strains can be harvested for their lipids, proteins, and sugars for use in biofuels.
Exploring theory space with Monte Carlo reweighting
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun
2014-10-13
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. In particular, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
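The core of event reweighting is that a sample generated under one benchmark model can serve another parameter point by weighting each event with the ratio of probability densities evaluated at that event. A toy one-dimensional sketch, with an exponential "model" standing in for a full matrix-element calculation:

```python
import math
import random

def benchmark_pdf(x, lam):
    """Toy 'model': exponential density with rate parameter lam,
    standing in for a matrix-element-derived event density."""
    return lam * math.exp(-lam * x)

def reweight_mean(samples, new_lam, old_lam=1.0):
    """Estimate an observable's mean under a new parameter point by
    reusing events generated under the benchmark: each event gets
    weight p_new(x) / p_old(x), then a self-normalized average."""
    weights = [benchmark_pdf(x, new_lam) / benchmark_pdf(x, old_lam)
               for x in samples]
    total_w = sum(weights)
    return sum(w * x for w, x in zip(weights, samples)) / total_w

# Generate events ONCE under the benchmark model (lam = 1)...
rng = random.Random(7)
events = [rng.expovariate(1.0) for _ in range(100_000)]
# ...then probe a different parameter point with no new generation.
est = reweight_mean(events, new_lam=2.0)
```

Under `lam = 2` the true mean is 0.5, and the reweighted estimate recovers it from the `lam = 1` sample; in practice the method degrades when the two models have little overlap, since a few events then carry very large weights.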
March market review. [Spot market prices for uranium (1993)]
Not Available
1993-04-01
The spot market price for uranium in unrestricted markets weakened further during March, and at month end, the NUEXCO Exchange Value had fallen $0.15, to $7.45 per pound U3O8. The Restricted American Market Penalty (RAMP) for concentrates increased $0.15, to $2.55 per pound U3O8. Ample UF6 supplies and limited demand led to a $0.50 decrease in the UF6 Value, to $25.00 per kgU as UF6, while the RAMP for UF6 increased $0.75, to $5.25 per kgU. Nine near-term uranium transactions were reported, totalling almost 3.3 million pounds equivalent U3O8. This is the largest monthly spot market volume since October 1992, and is double the volume reported in January and February. The March 31 Conversion Value was $4.25 per kgU as UF6. Beginning with the March 31 Value, NUEXCO now reports its Conversion Value in US dollars per kilogram of uranium (US$/kgU), reflecting current industry practice. The March loan market was inactive with no transactions reported. The Loan Rate remained unchanged at 3.0 percent per annum. Low demand and increased competition among sellers led to a one-dollar decrease in the SWU Value, to $65 per SWU, and the RAMP for SWU declined one dollar, to $9 per SWU.
SU-E-T-239: Monte Carlo Modelling of SMC Proton Nozzles Using TOPAS
Chung, K; Kim, J; Shin, J; Han, Y; Ju, S; Hong, C; Kim, D; Kim, H; Shin, E; Ahn, S; Chung, S; Choi, D
2014-06-01
Purpose: To expedite and cross-check the commissioning of the proton therapy nozzles at Samsung Medical Center using TOPAS. Methods: We have two different types of nozzles at Samsung Medical Center (SMC), a multi-purpose nozzle and a pencil beam scanning dedicated nozzle. Both nozzles have been modelled in Monte Carlo simulation by using TOPAS based on the vendor-provided geometry. The multi-purpose nozzle is mainly composed of wobbling magnets, scatterers, ridge filters and multi-leaf collimators (MLC). Including patient-specific apertures and compensators, all the parts of the nozzle have been implemented in TOPAS following the geometry information from the vendor. The dedicated scanning nozzle has a simpler structure than the multi-purpose nozzle, with a vacuum pipe at the downstream end of the nozzle. A simple water tank volume has been implemented to measure the dosimetric characteristics of proton beams from the nozzles. Results: We have simulated the two proton beam nozzles at SMC. Two different ridge filters have been tested for the spread-out Bragg peak (SOBP) generation of wobbling mode in the multi-purpose nozzle. The spot sizes and lateral penumbra in the two nozzles have been simulated and analyzed using a double Gaussian model. Using parallel geometry, both the depth dose curve and dose profile have been measured simultaneously. Conclusion: The proton therapy nozzles at SMC have been successfully modelled in Monte Carlo simulation using TOPAS. We will perform a validation with measured base data and then use the MC simulation to interpolate/extrapolate the measured data. We believe it will expedite the commissioning process of the proton therapy nozzles at SMC.
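The double Gaussian spot model mentioned in the abstract represents the lateral beam profile as a narrow core plus a wide, low-amplitude halo. A minimal sketch of the model function and its RMS width follows; the weights and widths are illustrative assumptions, not fitted values from the commissioning data:

```python
import math

def double_gaussian(x, w, sigma1, sigma2):
    """Lateral dose profile modelled as a weighted sum of a narrow
    core Gaussian (sigma1) and a wide halo Gaussian (sigma2), both
    centred on the beam axis and normalized to unit area."""
    g1 = math.exp(-0.5 * (x / sigma1) ** 2) / (sigma1 * math.sqrt(2 * math.pi))
    g2 = math.exp(-0.5 * (x / sigma2) ** 2) / (sigma2 * math.sqrt(2 * math.pi))
    return w * g1 + (1.0 - w) * g2

def effective_sigma(w, sigma1, sigma2):
    """RMS width of the mixture: var = w*s1^2 + (1-w)*s2^2."""
    return math.sqrt(w * sigma1 ** 2 + (1.0 - w) * sigma2 ** 2)
```

In commissioning, the two sigmas and the weight would be fitted to measured profiles at each energy; the halo term captures the low-dose tails that a single Gaussian underestimates.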
Diode magnetic-field influence on radiographic spot size
Ekdahl, Carl A. Jr.
2012-09-04
Flash radiography of hydrodynamic experiments driven by high explosives is a well-known diagnostic technique in use at many laboratories. The Dual-Axis Radiography for Hydrodynamic Testing (DARHT) facility at Los Alamos was developed for flash radiography of large hydrodynamic experiments. Two linear induction accelerators (LIAs) produce the bremsstrahlung radiographic source spots for orthogonal views of each experiment ('hydrotest'). The 2-kA, 20-MeV Axis-I LIA creates a single 60-ns radiography pulse. For time resolution of the hydrotest dynamics, the 1.7-kA, 16.5-MeV Axis-II LIA creates up to four radiography pulses by slicing them out of a longer pulse that has a 1.6-µs flattop. Both axes now routinely produce radiographic source spot sizes having full-width at half-maximum (FWHM) less than 1 mm. To further improve on the radiographic resolution, one must consider the major factors influencing the spot size: (1) beam convergence at the final focus; (2) beam emittance; (3) beam canonical angular momentum; (4) beam-motion blur; and (5) beam-target interactions. Beam emittance growth and motion in the accelerators have been addressed by careful tuning. Defocusing by beam-target interactions has been minimized through tuning of the final focus solenoid for optimum convergence and other means. Finally, the beam canonical angular momentum is minimized by using a 'shielded source' of electrons. An ideal shielded source creates the beam in a region where the axial magnetic field is zero, and thus the canonical momentum is zero, since the beam is born with no mechanical angular momentum. It then follows from Busch's conservation theorem that the canonical angular momentum is minimized at the target, at least in principle. In the DARHT accelerators, the axial magnetic field at the cathode is minimized by using a 'bucking coil' solenoid with reverse polarity to cancel out whatever solenoidal beam transport field exists there.
This is imperfect in practice, because of radial variation of the total field across the cathode surface, solenoid misalignments, and long-term variability of solenoid fields for given currents. Therefore, it is useful to quantify the relative importance of canonical momentum in determining the focal spot, and to establish a systematic methodology for tuning the bucking coils for minimum spot size. That is the purpose of this article. Section II provides a theoretical foundation for understanding the relative importance of the canonical momentum. Section III describes the results of simulations used to quantify beam parameters, including the momentum, for each of the accelerators. Section IV compares the two accelerators, especially with respect to mis-tuned bucking coils. Finally, Section V concludes with a methodology for optimizing the bucking coil settings.
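For reference, the standard statement of Busch's theorem invoked above is that, in an axisymmetric magnetic field, a particle's canonical angular momentum is conserved:

```latex
P_\theta \;=\; \gamma m\, r^{2}\dot{\theta} \;+\; \frac{q}{2\pi}\,\psi(r,z)
\;=\; \text{const},
\qquad
\psi(r,z) \;=\; \int_{0}^{r} B_z(r',z)\, 2\pi r'\, dr' ,
```

where ψ is the magnetic flux enclosed by the circle the particle traces. A beam born at a cathode with no mechanical rotation and zero enclosed flux (ψ = 0) therefore has P_θ = 0 everywhere downstream, which is why the bucking coil aims to null the axial field at the cathode.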
Seven federally protected Mexican spotted owl chicks hatch on Los Alamos National Laboratory property
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Biologists located a record seven federally threatened Mexican spotted owl chicks on Los Alamos National Laboratory property during nest surveys last month. July 13, 2015. A parent owl sits with two chicks. Contact: Los Alamos National Laboratory, Lorrie
Modeling Hot-Spot Contributions in Shocked High Explosives at the Mesoscale
Harrier, Danielle
2015-08-12
When looking at the performance of high explosives, the defects within the explosive become very important. Plastic bonded explosives, or PBXs, contain voids of air and binder between the particles of explosive material that aid in the ignition of the explosive. These voids collapse under high-pressure shock conditions, which leads to the formation of hot spots. Hot spots are localized high-temperature, high-pressure regions that cause significant changes in the way the explosive material detonates. Previously, hot spots were overlooked in modeling, but scientists are now realizing their importance, and new modeling systems that can accurately model hot spots are under development.
Monte Carlo Implementation Of Up- Or Down-Scattering Due To Collisions With Material At Finite Temperature
Office of Scientific and Technical Information (OSTI)
Efficient Monte Carlo Simulations of Gas Molecules Inside Porous Materials
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
J. Kim and B. Smit, J. Chem. Theory Comput. 8 (7), 2336 (2012). DOI: 10.1021/ct3003699
Monte Carlo Hauser-Feshbach Calculations of Prompt Fission Neutrons and Gamma Rays: Application to Thermal Neutron-Induced Fission Reactions on U-235 and Pu-239 (Technical Report)
Office of Scientific and Technical Information (OSTI)
Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: A path for the optimization of low-energy many-body bases
Office of Scientific and Technical Information (OSTI)
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Quantum Monte Carlo Calculations of Light Nuclei Using Chiral Potentials
Office of Scientific and Technical Information (OSTI)
Journal Article. Authors: Lynn, J. E.; Carlson, J.; Epelbaum, E.; Gandolfi, S.; Gezerlis, A.; Schwenk, A. Publication Date: 2014-11-04. OSTI Identifier: 1181024. Grant/Contract Number: AC02-05CH11231. Journal Name: Physical Review Letters.
Fast Monte Carlo for radiation therapy: the PEREGRINE Project (Conference)
Office of Scientific and Technical Information (OSTI)
Monte Carlo Hybrid Applied to Binary Stochastic Mixtures
Energy Science and Technology Software Center (OSTI)
2008-08-11
The purpose of this set of codes is to use an inexpensive, approximate deterministic flux distribution to generate weight windows, which will then be used to bound particle weights for the Monte Carlo code run. The process is not automated; the user must run the deterministic code and use the output file as a command-line argument for the Monte Carlo code. Two sets of text input files are included as test problems/templates.
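The deterministic-to-Monte-Carlo hand-off described above can be sketched generically. This is an illustration of the standard weight-window technique under assumed conventions (window centers inversely proportional to the deterministic flux, a fixed upper-to-lower window ratio); the function names and parameters are illustrative, not the actual interface of these codes:

```python
import random

def weight_windows(flux, k=1.0, ratio=4.0):
    """Window centers inversely proportional to the deterministic flux
    estimate; each window spans [center/sqrt(ratio), center*sqrt(ratio)]."""
    centers = [k / max(f, 1e-12) for f in flux]
    half = ratio ** 0.5
    return [(c / half, c * half) for c in centers]

def apply_window(weight, lo, hi, rng=random.random):
    """Split heavy particles, roulette light ones; returns surviving weights."""
    if weight > hi:
        n = int(weight / hi) + 1          # split into n lighter copies
        return [weight / n] * n
    if weight < lo:
        surv = weight / lo                # survival probability
        return [lo] if rng() < surv else []
    return [weight]
```

Splitting conserves total weight exactly and Russian roulette conserves it in expectation, so tally means are unchanged while variance in important regions is reduced.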
Morphological changes in ultrafast laser ablation plumes with varying spot size
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Harilal, S. S.; Diwakar, P. K.; Polek, M. P.; Phillips, M. C.
2015-06-04
We investigated the role of spot size on plume morphology during ultrafast laser ablation of metal targets. Our results show that the spatial features of fs LA plumes are strongly dependent on the focal spot size. Two-dimensional self-emission images showed that the shape of the ultrafast laser ablation plumes changes from spherical to cylindrical with increasing spot size from 100 to 600 μm. The changes in plume morphology and internal structures are related to ion emission dynamics from the plasma, where a broader angular ion distribution and faster ions are noticed for the smallest spot size used. The present results clearly show that the morphological changes in the plume with spot size are independent of laser pulse width.
SU-E-J-72: Geant4 Simulations of Spot-Scanned Proton Beam Treatment Plans
Kanehira, T; Sutherland, K; Matsuura, T; Umegaki, K; Shirato, H [Hokkaido University, Sapporo, Hokkaido (Japan)
2014-06-01
Purpose: To evaluate density inhomogeneities that can affect dose distributions for real-time image gated spot-scanning proton therapy (RGPT), a dose calculation system, using treatment planning system VQA (Hitachi Ltd., Tokyo) spot position data, was developed based on Geant4. Methods: A Geant4 application was developed to simulate spot-scanned proton beams at Hokkaido University Hospital. A CT scan (0.98 × 0.98 × 1.25 mm) was performed for prostate cancer treatment with three or four inserted gold markers (diameter 1.5 mm, volume 1.77 mm3) in or near the target tumor. The CT data was read into VQA. A spot scanning plan was generated and exported to text files, specifying the beam energy and position of each spot. The text files were converted and read into our Geant4-based software. The spot position was converted into steering magnet field strength (in Tesla) for our beam nozzle. Individual protons were tracked from the vacuum chamber, through the helium chamber, steering magnets, dose monitors, etc., in a straight, horizontal line. The patient CT data was converted into materials with variable density and placed in a parametrized volume at the isocenter. Gold fiducial markers were represented in the CT data by two adjacent voxels (volume 2.38 mm3). 600,000 proton histories were tracked for each target spot. As one beam contained about 1,000 spots, approximately 600 million histories were recorded for each beam on a blade server. Two plans were considered: two-beam horizontal opposed (90 and 270 degrees) and three-beam (0, 90, and 270 degrees). Results: We are able to convert spot scanning plans from VQA and simulate them with our Geant4-based code. Our system can be used to evaluate the effect of dose reduction caused by gold markers used for RGPT. Conclusion: Our Geant4 application is able to calculate dose distributions for spot-scanned proton therapy.
May market review. [Spot market prices for uranium (1993)]
Not Available
1993-06-01
Seven uranium transactions totalling nearly three million pounds equivalent U3O8 were reported during May, but only two, totalling less than 200 thousand pounds equivalent U3O8, involved concentrates. As no discretionary buying occurred during the month, and as near-term supply and demand were in relative balance, prices were steady, while both buyers and sellers appeared to be awaiting some new market development to signal the direction of future spot-market prices. The May 31, 1993, Exchange Value and the Restricted American market Penalty (RAMP) for concentrates were both unchanged at $7.10, and $2.95 per pound U3O8, respectively. NUEXCO's judgement was that transactions for significant quantities of uranium concentrates that were both deliverable in and intended for consumption in the USA could have been concluded on May 31 at $10.05 per pound U3O8. Two near-term concentrate transactions were reported in which one US utility purchased less than 200 thousand pounds equivalent U3O8 from two separate sellers. These sales occurred at price levels at or near the May 31 Exchange Value plus RAMP. No long-term uranium transactions were reported during May. Consequently, the UF6 Value decreased $0.20 to $24.30 per kgU as UF6, reflecting some weakening of the UF6 market outside the USA.
DRAMATIC CHANGE IN JUPITER'S GREAT RED SPOT FROM SPACECRAFT OBSERVATIONS
Simon, Amy A.; Wong, Michael H.; De Pater, Imke; Rogers, John H.; Orton, Glenn S.; Carlson, Robert W.; Asay-Davis, Xylar; Marcus, Philip S.
2014-12-20
Jupiter's Great Red Spot (GRS) is one of its most distinct and enduring features. Since the advent of modern telescopes, keen observers have noted its appearance and documented a change in shape from very oblong to oval, confirmed in measurements from spacecraft data. It currently spans the smallest latitude and longitude size ever recorded. Here we show that this change has been accompanied by an increase in cloud/haze reflectance as sensed in methane gas absorption bands, increased absorption at wavelengths shorter than 500 nm, and increased spectral slope between 500 and 630 nm. These changes occurred between 2012 and 2014, without a significant change in internal tangential wind speeds; the decreased size results in a 3.2-day horizontal cloud circulation period, shorter than previously observed. As the GRS has narrowed in latitude, it interacts less with the jets flanking its north and south edges, perhaps allowing for less cloud mixing and longer UV irradiation of cloud and aerosol particles. Given its long life and observational record, we expect that future modeling of the GRS's changes, in concert with laboratory flow experiments, will drive our understanding of vortex evolution and stability in a confined flow field crucial for comparison with other planetary atmospheres.
Electrophoretic extraction of proteins from two-dimensional electrophoresis gel spots
Zhang, Jian-Shi; Giometti, C.S.; Tollaksen, S.L.
1987-09-04
After two-dimensional electrophoresis of proteins or the like, resulting in a polyacrylamide gel slab having a pattern of protein gel spots thereon, an individual protein gel spot is cored out from the slab, to form a gel spot core which is placed in an extraction tube, with a dialysis membrane across the lower end of the tube. Replicate gel spots can be cored out from replicate gel slabs and placed in the extraction tube. Molten agarose gel is poured into the extraction tube where the agarose gel hardens to form an immobilizing gel, covering the gel spot cores. The upper end portion of the extraction tube is filled with a volume of buffer solution, and the upper end is closed by another dialysis membrane. Upper and lower bodies of a buffer solution are brought into contact with the upper and lower membranes and are provided with electrodes connected to the positive and negative terminals of a dc power supply, thereby producing an electrical current which flows through the upper membrane, the volume of buffer solution, the agarose, the gel spot cores and the lower membrane. The current causes the proteins to be extracted electrophoretically from the gel spot cores, so that the extracted proteins accumulate and are contained in the space between the agarose gel and the upper membrane. 8 figs.
Hot spot generation in energetic materials created by long-wavelength infrared radiation (Journal Article)
Office of Scientific and Technical Information (OSTI)
Hot spots produced by long-wavelength infrared (LWIR) radiation in an energetic material, crystalline RDX (1,3,5-trinitroperhydro-1,3,5-triazine), were studied by thermal-imaging microscopy.
WE-E-BRE-04: Dual Focal Spot Dose Painting for Precision Preclinical Radiobiological Investigations
Stewart, J; Lindsay, P; Jaffray, D
2014-06-15
Purpose: Recent progress in small animal radiotherapy systems has provided the foundation for delivering the heterogeneous, millimeter scale dose distributions demanded by preclinical radiobiology investigations. Despite advances in preclinical dose planning, delivery of highly heterogeneous dose distributions is constrained by the fixed collimation systems and large x-ray focal spot common in small animal radiotherapy systems. This work proposes a dual focal spot dose optimization and delivery method with a large x-ray focal spot used to deliver homogeneous dose regions and a small focal spot to paint spatially heterogeneous dose regions. Methods: Two-dimensional dose kernels were measured for a 1 mm circular collimator with radiochromic film at 10 mm depth in a solid water phantom for the small and large x-ray focal spots on a recently developed small animal microirradiator. These kernels were used in an optimization framework which segmented a desired dose distribution into low- and high-spatial frequency regions for delivery by the large and small focal spot, respectively. For each region, the method determined an optimal set of stage positions and beam-on times. The method was demonstrated by optimizing a bullseye pattern consisting of 0.75 mm radius circular target and 0.5 and 1.0 mm wide rings alternating between 0 and 2 Gy. Results: Compared to a large focal spot technique, the dual focal spot technique improved the optimized dose distribution: 69.2% of the optimized dose was within 0.5 Gy of the intended dose for the large focal spot, compared to 80.6% for the dual focal spot method. The dual focal spot design required 14.0 minutes of optimization, and will require 178.3 minutes for automated delivery. Conclusion: The dual focal spot optimization and delivery framework is a novel option for delivering conformal and heterogeneous dose distributions at the preclinical level and provides a new experimental option for unique radiobiological investigations. 
Funding Support: this work is supported by funding from the Natural Sciences and Engineering Research Council of Canada, and a Mitacs-Accelerate fellowship. Conflict of Interest: Dr. Lindsay and Dr. Jaffray are listed as inventors of the small animal microirradiator described herein. This system has been licensed for commercial development.
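The segmentation-plus-optimization step described in the abstract can be illustrated in one dimension. This is a hedged sketch, not the authors' implementation: the dose kernels are idealized Gaussians, the low/high-spatial-frequency split is a simple moving average, and nonnegativity of beam-on times is imposed by clipping a least-squares solution rather than by a constrained solver.

```python
import numpy as np

def gaussian_kernel(x, sigma):
    """Normalized 1D dose kernel (idealized stand-in for the measured film kernels)."""
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def segment_and_solve(target, x, sigma_large, sigma_small, cutoff):
    """Split the target dose into smooth and detailed parts, then solve for
    nonnegative beam-on times per stage position for each focal spot."""
    # Low-pass the target to get the 'large focal spot' component.
    smooth = np.convolve(target, np.ones(cutoff) / cutoff, mode="same")
    detail = target - smooth
    times = []
    for part, sigma in ((smooth, sigma_large), (detail, sigma_small)):
        # Dose = A @ t, where each column of A is the kernel shifted to a stage position.
        A = np.array([np.roll(gaussian_kernel(x, sigma), i)
                      for i in range(len(x))]).T
        t, *_ = np.linalg.lstsq(A, part, rcond=None)
        times.append(np.clip(t, 0.0, None))   # beam-on times cannot be negative
    return times  # [large-spot times, small-spot times]
```

The split mirrors the paper's rationale: the large spot delivers the homogeneous background efficiently, while the small spot paints only the residual high-frequency structure.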
April market review. [Spot market prices for uranium (1993)]
Not Available
1993-05-01
The spot market price for uranium outside the USA weakened further during April, and at month end, the NUEXCO Exchange Value had fallen $0.35, to $7.10 per pound U3O8. This is the lowest Exchange Value observed in nearly twenty years, comparable to Values recorded during the low price levels of the early 1970s. The Restricted American Market Penalty (RAMP) for concentrates increased $0.40, to $2.95 per pound U3O8. Transactions for significant quantities of uranium concentrates that are both deliverable in and intended for consumption in the USA could have been concluded on April 30 at $10.05 per pound U3O8, up $0.05 from the sum of corresponding March Values. Four near-term concentrates transactions were reported, totalling nearly 1.5 million pounds equivalent U3O8. One long-term sale was reported. The UF6 Value also declined, as increased competition among sellers led to a $0.50 decrease, to $24.50 per kgU as UF6. However, the RAMP for UF6 increased $0.65, to $5.90 per kgU as UF6, reflecting an effective US market level of $30.40 per kgU. Two near term transactions were reported totalling approximately 1.1 million pounds equivalent U3O8. In total, eight uranium transactions totalling 28 million pounds equivalent U3O8 were reported, which is about average for April market activity.
Effects of High Shock Pressures and Pore Morphology on Hot Spot Mechanisms in HMX
Office of Scientific and Technical Information (OSTI)
Authors: Springer, H. K.; Tarver, C. M.; Bastea, S. Publication Date: 2015-08-20.
Calculation of the fast ion tail distribution for a spherically symmetric hot spot
McDevitt, C. J.; Tang, X.-Z.; Guo, Z.; Berk, H. L.
2014-10-15
The fast ion tail for a spherically symmetric hot spot is computed via the solution of a simplified Fokker-Planck collision operator. Emphasis is placed on describing the energy scaling of the fast ion distribution function in the hot spot as well as the surrounding cold plasma throughout a broad range of collisionalities and temperatures. It is found that while the fast ion tail inside the hot spot is significantly depleted, leading to a reduction of the fusion yield in this region, a surplus of fast ions is observed in the neighboring cold plasma region. The presence of this surplus of fast ions in the neighboring cold region is shown to result in a partial recovery of the fusion yield lost in the hot spot.
An Assessment of Prices of Natural Gas Futures Contracts as a Predictor of Realized Spot Prices
Reports and Publications (EIA)
2005-01-01
This article compares realized Henry Hub spot market prices for natural gas during the three most recent winters with futures prices as they evolve from April through the following February, when trading for the March contract ends.
Duo at Santa Fe's Monte del Sol Charter School takes top award in 25th New Mexico Supercomputing Challenge
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
April 21, 2015. Using nanotechnology robots to kill cancer cells. LOS ALAMOS, N.M., April 21, 2015: Meghan Hill and Katelynn James of Santa Fe's Monte del Sol Charter School took the top prize in the 25th New Mexico Supercomputing Challenge Tuesday at Los Alamos National Laboratory for their research project, "Using Concentrated Heat Systems to Shock the P53 Protein to Direct Cancer into
IR-based Spot Weld NDT in Automotive Applications (Conference)
Office of Scientific and Technical Information (OSTI)
Authors: Chen, Jian; Feng, Zhili (ORNL). Publication Date: 2015-01-01. OSTI Identifier: 1185972. DOE Contract Number: DE-AC05-00OR22725. Conference: Thermosense XXXVII, Baltimore, MD, USA, April 20-24, 2015. Research Org: Oak Ridge National Laboratory (ORNL).
Electrophoretic extraction of proteins from two-dimensional electrophoresis gel spots
Zhang, Jian-Shi; Giometti, Carol S.; Tollaksen, Sandra L.
1989-01-01
After two-dimensional electrophoresis of proteins or the like, resulting in a polyacrylamide gel slab having a pattern of protein gel spots thereon, an individual protein gel spot is cored out from the slab, to form a gel spot core which is placed in an extraction tube, with a dialysis membrane across the lower end of the tube. Replicate gel spots can be cored out from replicate gel slabs and placed in the extraction tube. Molten agarose gel is poured into the extraction tube where the agarose gel hardens to form an immobilizing gel, covering the gel spot cores. The upper end portion of the extraction tube is filled with a volume of buffer solution, and the upper end is closed by another dialysis membrane. Upper and lower bodies of a buffer solution are brought into contact with the upper and lower membranes and are provided with electrodes connected to the positive and negative terminals of a DC power supply, thereby producing an electrical current which flows through the upper membrane, the volume of buffer solution, the agarose, the gel spot cores and the lower membrane. The current causes the proteins to be extracted electrophoretically from the gel spot cores, so that the extracted proteins accumulate and are contained in the space between the agarose gel and the upper membrane. A high percentage extraction of proteins is achieved. The extracted proteins can be removed and subjected to partial digestion by trypsin or the like, followed by two-dimensional electrophoresis, resulting in a gel slab having a pattern of peptide gel spots which can be cored out and subjected to electrophoretic extraction to extract individual peptides.
Friction Stir Spot Welding of Advanced High Strength Steels | Department of Energy
Broader source: Energy.gov (indexed) [DOE]
09 DOE Hydrogen Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting, May 18-22, 2009, Washington D.C. (lm_14_grant.pdf)
Monte Carlo event generators for hadron-hadron collisions
Knowles, I.G.; Protopopescu, S.D.
1993-06-01
A brief review of Monte Carlo event generators for simulating hadron-hadron collisions is presented. Particular emphasis is placed on comparisons of the approaches used to describe physics elements and identifying their relative merits and weaknesses. This review summarizes a more detailed report.
Société d'exploitation du parc éolien de Mont d'Hézecques SARL
Name: Société d'exploitation du parc éolien de Mont d'Hézecques SARL. Place: ...
Monte-Carlo simulation of noise in hard X-ray Transmission Crystal Spectrometers
Office of Scientific and Technical Information (OSTI)
Olson, R. E.; Leeper, R. J.
2013-09-15
The baseline DT ice layer inertial confinement fusion (ICF) ignition capsule design requires a hot spot convergence ratio of ∼34 with a hot spot that is formed from DT mass originally residing in a very thin layer at the inner DT ice surface. In the present paper, we propose alternative ICF capsule designs in which the hot spot is formed mostly or entirely from mass originating within a spherical volume of DT vapor. Simulations of the implosion and hot spot formation in two DT liquid layer ICF capsule concepts—the DT wetted hydrocarbon (CH) foam concept and the “fast formed liquid” (FFL) concept—are described and compared to simulations of standard DT ice layer capsules. 1D simulations are used to compare the drive requirements, the optimal shock timing, the radial dependence of hot spot specific energy gain, and the hot spot convergence ratio in low vapor pressure (DT ice) and high vapor pressure (DT liquid) capsules. 2D simulations are used to compare the relative sensitivities to low-mode x-ray flux asymmetries in the DT ice and DT liquid capsules. It is found that the overall thermonuclear yields predicted for DT liquid layer capsules are less than yields predicted for DT ice layer capsules in simulations using comparable capsule size and absorbed energy. However, the wetted foam and FFL designs allow for flexibility in hot spot convergence ratio through the adjustment of the initial cryogenic capsule temperature and, hence, DT vapor density, with a potentially improved robustness to low-mode x-ray flux asymmetry.
Wang, Z; Gao, M
2014-06-01
Purpose: Monte Carlo simulation plays an important role in the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to a few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4-based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bit, Amazon EC2). Single-spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of StarCluster software developed at MIT, a Linux cluster with 2100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10 × 10 cm{sup 2}, 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and worker nodes were created as spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.
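The two cluster prices quoted in the abstract ($0.63/hour for 40 instances, $1.41/hour for 100) are consistent with a linear cost model: one on-demand master node plus spot-priced workers. The sketch below uses rates back-solved from those two data points ($0.123/hr for the master, $0.013/hr per worker); these are historical figures implied by the abstract, not current cloud prices.

```python
def cluster_cost(n_workers, hours=1.0, master_rate=0.123, worker_rate=0.013):
    """Cost of one on-demand master plus n_workers spot instances, scaled by
    wall-clock hours. Rates are illustrative, back-solved from the abstract's
    $0.63/hr (40-instance) and $1.41/hr (100-instance) figures."""
    return hours * (master_rate + n_workers * worker_rate)
```

With 39 workers (40 instances total) this reproduces the quoted $0.63/hour; with 99 workers, the projected $1.41/hour.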
DOE Science Showcase - Monte Carlo Methods | OSTI, US Dept of Energy,
Office of Scientific and Technical Information (OSTI)
Monte Carlo calculation methods are algorithms for solving various kinds of computational problems by using (pseudo)random numbers. Developed in the 1940s during the Manhattan Project, the Monte Carlo method signified a radical change in how scientists solved problems. Learn about the ways these methods are used in DOE's research endeavors today in "Monte Carlo Methods" by Dr. William Watson, Physicist, OSTI staff.
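As a minimal, self-contained illustration of the method described above (a generic textbook example, not code from the OSTI showcase), here is the classic pseudorandom estimate of π:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples
```

The statistical error shrinks as 1/sqrt(n_samples), the characteristic Monte Carlo convergence rate, independent of the dimensionality of the problem.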
An investigation of the dynamic separation of spot welds under plane tensile pulses
Ma, Bohan; Fan, Chunlei; Chen, Danian; Wang, Huanran; Zhou, Fenghua
2014-08-07
We performed ultra-high-speed tests for purely opening spot welds using plane tensile pulses. A gun system generated a parallel impact of a projectile plate onto a welded plate. Induced by the interactions of the release waves, the welded plate opened purely under the plane tensile pulses. We used the laser velocity interferometer system for any reflector (VISAR) to measure the velocity histories of the free surfaces of the free part and the spot weld of the welded plate. We then used a scanning electron microscope to investigate the recovered welded plates. We found that the interfacial failure mode was mainly a brittle fracture and the cracks propagated through the spot nugget, while the partial interfacial failure mode was a mixed fracture comprising both ductile and brittle fracture. We used the measured velocity histories to evaluate the tension stresses in the free part and the spot weld of the welded plate by applying the characteristic theory. We also discussed the different constitutive behaviors of the metals under plane shock loading and under uniaxial split Hopkinson pressure bar tests. We then compared the numerically simulated velocity histories of the free surfaces of the free part and the spot weld of the welded plate with the measured results. The numerical simulations made use of the fracture stress criteria, and the computed fracture modes of the tests were compared with the recovered results.
Real-time spot size camera for pulsed high-energy radiographic machines
Watson, S.A.
1993-06-01
The focal spot size of an x-ray source is a critical parameter which degrades resolution in a flash radiograph. For best results, a small round focal spot is required. Therefore, a fast and accurate measurement of the spot size is highly desirable to facilitate machine tuning. This paper describes two systems developed for Los Alamos National Laboratory's Pulsed High-Energy Radiographic Machine Emitting X-rays (PHERMEX) facility. The first uses a CCD camera combined with high-brightness fluors, while the second utilizes phosphor storage screens. Other techniques typically record only the line spread function on radiographic film, while the systems in this paper measure the more general two-dimensional point-spread function and associated modulation transfer function in real time for shot-to-shot comparison.
Calculations of pair production by Monte Carlo methods
Bottcher, C.; Strayer, M.R.
1991-01-01
We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.
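One classic portable approach for parallel machines, in the spirit of the algorithms the authors describe (though the specific generator and partitioning here are illustrative, not necessarily theirs), is a linear congruential generator with logarithmic-time skip-ahead, so each processor draws a disjoint, reproducible subsequence of one global stream:

```python
# Minimal-standard LCG (Park-Miller constants, chosen for illustration) with
# O(log k) skip-ahead via modular exponentiation, and leapfrog partitioning:
# rank r of nprocs takes every nprocs-th value, starting at offset r.
M = 2**31 - 1
A = 16807

def lcg_next(x):
    """One step of the sequential stream."""
    return (A * x) % M

def skip_ahead(x, k):
    """Advance the stream by k steps in O(log k)."""
    return (pow(A, k, M) * x) % M

def leapfrog_stream(seed, rank, nprocs, n):
    """Generate n uniforms in (0, 1) for this rank's leapfrogged substream."""
    a_jump = pow(A, nprocs, M)      # one 'leap' advances nprocs steps at once
    x = skip_ahead(seed, rank)
    out = []
    for _ in range(n):
        out.append(x / M)
        x = (a_jump * x) % M
    return out
```

Because the leap is just an exponentiated multiplier, the per-rank streams interleave exactly into the sequential stream, giving results independent of the processor count.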
Fermi surface topology and hot spot distribution in the Kondo lattice system CeB6 (Journal Article)
Office of Scientific and Technical Information (OSTI)
Authors: Neupane, Madhab; Alidoust, Nasser; Belopolski, Ilya; Bian, Guang; Xu, Su-Yang; Kim, Dae-Jeong; Shibayev, Pavel P.; Sanchez, Daniel S.; Zheng, Hao; Chang,
Six years of monitoring annual changes in a freshwater marsh with SPOT HRV data
Mackey, H.E. Jr.
1992-12-01
Fifteen dates of spring-time SPOT HRV data along with near-concurrent vertical aerial photographic and phenological data from spring 1987 through spring 1992 were analyzed to monitor annual changes in a 150-hectare, southeastern floodplain marsh. The marsh underwent rapid changes during the six years from a swamp dominated by non-persistent, thermally tolerant macrophytes to persistent macrophyte and shrub-scrub communities as reactor discharges declined to Pen Branch. Savannah River flooding was also important in the timing of the shift of these wetland communities. SPOT HRV data proved to be an efficient and effective method to monitor trends in these wetland community changes.
Effects of minimum monitor unit threshold on spot scanning proton plan quality
Howard, Michelle; Beltran, Chris; Mayo, Charles S.; Herman, Michael G.
2014-09-15
Purpose: To investigate the influence of the minimum monitor unit (MU) limit on the quality of clinical treatment plans for scanned proton therapy. Methods: Delivery system characteristics limit the minimum number of protons that can be delivered per spot, resulting in a min-MU limit, which can impact plan quality. Two sites were used to investigate the impact of min-MU on treatment plans: a pediatric brain tumor at a depth of 5–10 cm and a head and neck tumor at a depth of 1–20 cm. Three-field, intensity modulated spot scanning proton plans were created for each site with the following parameter variations: min-MU limit range of 0.0000–0.0060 and spot spacing range of 2–8 mm. Comparisons were based on target homogeneity and normal tissue sparing. For the pediatric brain, two versions of the treatment planning system were also compared to judge the effects of the min-MU limit based on when it is accounted for in the optimization process (Eclipse v.10 and v.13, Varian Medical Systems, Palo Alto, CA). Results: Increasing the min-MU limit at a fixed spot spacing decreases plan quality, both in homogeneous target coverage and in the avoidance of critical structures. Both the head and neck and pediatric brain plans show a 20% increase in relative dose for the hot spot in the CTV and a 10% increase in key critical structures when comparing min-MU limits of 0.0000 and 0.0060 at a fixed spot spacing of 4 mm. The DVHs of the CTVs show that min-MU limits of 0.0000 and 0.0010 produce similar plan quality, and quality decreases as the min-MU limit increases beyond 0.0020. As spot spacing approaches 8 mm, degradation in plan quality is observed even when no min-MU limit is imposed. Conclusions: Given a fixed spot spacing of ≤4 mm, plan quality decreases as min-MU increases beyond 0.0020. The effect of min-MU needs to be taken into consideration when planning proton therapy treatments.
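One common way a delivery system or planning system can enforce such a limit after optimization is to round spot weights against the deliverable floor. The sketch below is illustrative only; the rounding rule actually used by a clinical TPS may differ.

```python
def apply_min_mu(spot_weights, min_mu):
    """Post-process optimized spot weights to respect a deliverable
    minimum-MU limit: spots below half the limit are deleted, spots
    between half the limit and the limit are raised to it.
    (One common rounding heuristic; clinical systems may differ.)"""
    out = []
    for w in spot_weights:
        if w < 0.5 * min_mu:
            out.append(0.0)        # undeliverable: drop the spot entirely
        elif w < min_mu:
            out.append(min_mu)     # round up to the deliverable floor
        else:
            out.append(w)          # already deliverable: keep as optimized
    return out

weights = [0.0005, 0.0015, 0.0030]
rounded = apply_min_mu(weights, min_mu=0.0020)  # [0.0, 0.0020, 0.0030]
```

Rounding after optimization perturbs the optimized fluence, which is consistent with the paper's observation that plan quality degrades as the limit grows and that accounting for min-MU inside the optimizer matters.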
Avoiding Carbon Bed Hot Spots in Thermal Process Off-Gas Systems
Office of Scientific and Technical Information (OSTI)
Mercury has had various uses in nuclear fuel reprocessing and other nuclear processes, and so is often present in radioactive and mixed (radioactive and hazardous) wastes. Test programs performed in recent years have shown that mercury in off-gas streams from processes that treat
Effects of High Shock Pressures and Pore Morphology on Hot Spot Mechanisms in HMX
Office of Scientific and Technical Information (OSTI)
Springer, H. K.; Tarver, C. M.; Bastea, S.
2015-08-20. Report Number: LLNL-CONF-676480; DOE Contract Number: AC52-07NA27344. Presented at the 19th
Electron depletion via cathode spot dispersion of dielectric powder into an overhead plasma
Gillman, Eric D. [Naval Research Laboratory, 4555 Overlook Ave SW, Washington, District of Columbia 20375 (United States)]; Foster, John E. [Department of Nuclear Engineering and Radiological Sciences (NERS), University of Michigan, 2355 Bonisteel Blvd., Ann Arbor, Michigan 48109 (United States)]
2013-11-15
The effectiveness of cathode spot delivered dielectric particles for the purpose of plasma depletion is investigated. Here, cathode spot flows kinetically entrain and accelerate dielectric particles, originally at rest, into a background plasma. The time variation of the background plasma density is tracked using a cylindrical Langmuir probe biased at approximately electron saturation. As inferred from changes in the electron saturation current, depletion fractions of up to 95% are observed. This method could be exploited as a means of communications blackout mitigation for manned and unmanned reentering spacecraft, as well as any high speed vehicle enveloped by a dense plasma layer.
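The 95% depletion figure follows directly from the proportionality of electron saturation current to electron density: the depletion fraction is one minus the ratio of the currents. A minimal sketch of that inference (the example currents are hypothetical, not the paper's data):

```python
def depletion_fraction(i_sat_before, i_sat_after):
    """Fractional plasma-density depletion inferred from a Langmuir probe's
    electron saturation current, using n_e proportional to I_sat."""
    return 1.0 - i_sat_after / i_sat_before

# e.g. a saturation current falling from 2.0 mA to 0.1 mA implies 95% depletion
frac = depletion_fraction(2.0e-3, 0.1e-3)
```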
Weekly Henry Hub Natural Gas Spot Price (Dollars per Million Btu)
U.S. Energy Information Administration (EIA) Indexed Site
Weekly data series; latest data for 5/20/2016; release date 5/25/2016; next release date 6/2/2016.
Henry Hub Gulf Coast Natural Gas Spot Price ($/MMBTU)
U.S. Energy Information Administration (EIA) Indexed Site
Daily data series; latest data for 9/16/2013; release date 9/18/2013; next release date 9/25/2013.
On the mechanism of operation of a cathode spot cell in a vacuum arc
Mesyats, G. A.; Petrov, A. A.; Bochkarev, M. B.; Barengolts, S. A.
2014-05-05
The erosive structures formed on a tungsten cathode as a result of the motion of the cathode spot of a vacuum arc over the cathode surface have been examined. It has been found that the average mass of a cathode microprotrusion having the shape of a solidified jet is approximately equal to the mass of ions removed from the cathode within the lifetime of a cathode spot cell carrying a current of several amperes. The time of formation of a new liquid-metal jet under the action of the reactive force of the plasma ejected by the cathode spot is about 10 ns, which is comparable to the lifetime of a cell. The growth rate of a liquid-metal jet is {approx}10{sup 4} cm/s. The geometric shape and size of a solidified jet are such that a new explosive emission center (spot cell) can be initiated within several nanoseconds during the interaction of the jet with the dense cathode plasma. This is the underlying mechanism of the self-sustained operation of a vacuum arc.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Casper, Katya M.; Beresh, Steven J.; Schneider, Steven P.
2014-09-09
To investigate the pressure-fluctuation field beneath turbulent spots in a hypersonic boundary layer, a study was conducted on the nozzle wall of the Boeing/AFOSR Mach-6 Quiet Tunnel. Controlled disturbances were created by pulsed-glow perturbations based on the electrical breakdown of air. Under quiet-flow conditions, the nozzle-wall boundary layer remains laminar and grows very thick over the long nozzle length. This allows the development of large disturbances that can be well-resolved with high-frequency pressure transducers. A disturbance first grows into a second-mode instability wavepacket that is concentrated near its own centreline. Weaker disturbances are seen spreading from the centre. The waves grow and become nonlinear before breaking down to turbulence. The breakdown begins in the core of the packets where the wave amplitudes are largest. Second-mode waves are still evident in front of and behind the breakdown point and can be seen propagating in the spanwise direction. The turbulent core grows downstream, resulting in a spot with a classical arrowhead shape. Behind the spot, a low-pressure calmed region develops. However, the spot is not merely a localized patch of turbulence; instability waves remain an integral part. Limited measurements of naturally occurring disturbances show many similar characteristics. From the controlled disturbance measurements, the convection velocity, spanwise spreading angle, and typical pressure-fluctuation field were obtained.
Friction Stir Spot Welding of DP780 and Hot-Stamp Boron Steels
Santella, Michael L.; Frederick, Alan; Hovanski, Yuri; Grant, Glenn J.
2008-05-16
Friction stir spot welds were made in two high-strength steels: DP780, and a hot-stamp-boron steel with a tensile strength of 1500 MPa. The spot welds were made at either 800 or 1600 rpm using either of two polycrystalline boron nitride tools. One stir tool, BN77, had the relatively common pin-tool shape. The second tool, BN46, had a convex rather than a concave shoulder profile and a much wider and shorter pin. The tools were plunged to preprogrammed depths either at a continuous rate (1-step schedule) or in two segments consisting of a relatively high rate followed by a slower rate (2-step schedule). In all cases, the welds were completed in 4 s. The range of lap-shear values was compared to the values required for resistance spot welds on the same steels. The minimum value of 10.3 kN was exceeded for friction stir spot welding of DP780 using a 2-step schedule and either the BN77- or the BN46-type stir tool. The respective minimum value of 12 kN was also exceeded for the HSB steel using the 2-step process and the BN46 stir tool.
Srinivasan, Bhuvana; Tang, Xian-Zhu
2014-10-15
In an inertial confinement fusion target, energy loss due to thermal conduction from the hot-spot will inevitably ablate fuel ice into the hot-spot, resulting in a more massive but cooler hot-spot, which negatively impacts fusion yield. Hydrodynamic mix due to Rayleigh-Taylor instability at the gas-ice interface can aggravate the problem via an increased gas-ice interfacial area across which energy transfer between the hot-spot and the ice is enhanced. Here, this mix-enhanced transport effect on hot-spot fusion-performance degradation is quantified using contrasting 1D and 2D hydrodynamic simulations, and its dependence on effective acceleration, Atwood number, and ablation speed is identified.
Properties of reactive oxygen species by quantum Monte Carlo
Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo
2014-07-07
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N{sup 3}–N{sup 4}, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species by first principles.
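As a toy illustration of the variational Monte Carlo machinery the abstract refers to (Metropolis sampling of |psi|^2 and averaging of a local energy), here is a minimal one-dimensional harmonic-oscillator example. It is emphatically not the JAGP ansatz or the molecular calculation of the paper, just the bare VMC loop.

```python
import math
import random

def vmc_harmonic(alpha, n_steps=50_000, step=1.0, seed=1):
    """Minimal variational Monte Carlo for a 1D harmonic oscillator
    (m = hbar = omega = 1) with trial wavefunction psi = exp(-alpha x^2 / 2).
    Metropolis sampling of |psi|^2; the local energy is
    E_L(x) = alpha/2 + x^2 (1 - alpha^2) / 2."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance on |psi|^2 = exp(-alpha x^2)
        if rng.random() < math.exp(-alpha * (x_new ** 2 - x ** 2)):
            x = x_new
        e_sum += 0.5 * alpha + 0.5 * x * x * (1.0 - alpha * alpha)
    return e_sum / n_steps

# At alpha = 1 the trial function is exact, so E_L = 0.5 at every sample.
energy = vmc_harmonic(alpha=1.0)
```

Minimizing the averaged energy over alpha is the variational step; the zero-variance property at the exact wavefunction (every sample gives exactly 0.5 here) is what makes well-optimized trial functions so effective.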
Coupled Monte Carlo neutronics and thermal hydraulics for power reactors
Bernnat, W.; Buck, M.; Mattes, M.; Zwermann, W.; Pasichnyk, I.; Velkov, K.
2012-07-01
The availability of high performance computing resources increasingly enables the use of detailed Monte Carlo models even for full core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures, e.g., in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions and fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. A second problem arises with the preparation of corresponding temperature dependent cross sections and thermal scattering laws. Only if these problems are solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g., water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications will be shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic Modular High Temperature Reactor using THERMIX for thermal-hydraulics. (authors)
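The temperature-interpolation step can be sketched as follows. The sqrt(T) weighting (often used for Doppler-broadened data) and the example library temperatures are illustrative assumptions on my part, not the paper's exact scheme.

```python
import bisect
import math

def interp_xs(temps, xs_vals, t):
    """Interpolate a cross section between tabulated library temperatures.
    Uses sqrt(T) interpolation, a common choice for Doppler-broadened data
    (a modeling assumption; production codes differ in the exact scheme)."""
    if t <= temps[0]:
        return xs_vals[0]
    if t >= temps[-1]:
        return xs_vals[-1]
    i = bisect.bisect_right(temps, t) - 1
    s0, s1, s = math.sqrt(temps[i]), math.sqrt(temps[i + 1]), math.sqrt(t)
    w = (s - s0) / (s1 - s0)
    return (1.0 - w) * xs_vals[i] + w * xs_vals[i + 1]

# Library generated at 300 K, 600 K, 900 K; a query at 600 K hits a node exactly.
sigma = interp_xs([300.0, 600.0, 900.0], [10.0, 8.0, 7.0], 600.0)
```

This is what lets a coupled calculation use thermal-hydraulics feedback temperatures without generating a cross-section library for every cell temperature.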
Quantum Monte Carlo Simulation of Overpressurized Liquid {sup 4}He
Vranjes, L.; Boronat, J.; Casulleras, J.; Cazorla, C.
2005-09-30
A diffusion Monte Carlo simulation of superfluid {sup 4}He at zero temperature and pressures up to 275 bar is presented. Increasing the pressure beyond freezing ({approx}25 bar), the liquid enters the overpressurized phase in a metastable state. In this regime, we report results of the equation of state and the pressure dependence of the static structure factor, the condensate fraction, and the excited-state energy corresponding to the roton. Along this large pressure range, both the condensate fraction and the roton energy decrease but do not become zero. The roton energies obtained are compared with recent experimental data in the overpressurized regime.
Message from Mont: Call for Open House Volunteers | Jefferson Lab
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Message from Hugh Montgomery: Call for Open House Volunteers. Dear Colleagues, The Open House - Jefferson Lab's most important and largest public outreach event - is April 30, and I am writing to ask for your help. The key to the success of the Open House is our volunteers. In 2014, about 6,000 people attended the Open House, and we are expecting a similar turnout this year. The visitors were excited to see many of the lab's facilities and were interested to
Communication: Water on hexagonal boron nitride from diffusion Monte Carlo
Al-Hamdani, Yasmine S.; Ma, Ming; Michaelides, Angelos; Alfè, Dario; Lilienfeld, O. Anatole von
2015-05-14
Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of −84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange-correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are, however, better reproduced by DFT.
A Post-Monte-Carlo Sensitivity Analysis Code
Energy Science and Technology Software Center (OSTI)
2000-04-04
SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e., with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
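A minimal sketch of this kind of variance-based ranking, assuming a near-linear input-output relationship: for a linear model, the squared Pearson correlation of an input with the output approximates the fraction of output variance that input explains. The scoring rule here is ordinary squared correlation, not necessarily SATOOL's exact measure.

```python
def rank_sensitive_inputs(samples, outputs):
    """Rank input variables by squared Pearson correlation with the output.
    For a near-linear model, r^2 approximates the fraction of output
    variance attributable to each input (a simplifying assumption)."""
    n = len(outputs)
    mean_y = sum(outputs) / n
    var_y = sum((y - mean_y) ** 2 for y in outputs)
    scores = {}
    for name, xs in samples.items():
        mean_x = sum(xs) / n
        var_x = sum((x - mean_x) ** 2 for x in xs)
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, outputs))
        scores[name] = cov * cov / (var_x * var_y)
    # Highest variance contribution first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy model y = 10*a + b: variable "a" should dominate the ranking.
a = [0.1, 0.4, 0.2, 0.9, 0.5, 0.7]
b = [0.3, 0.1, 0.8, 0.2, 0.9, 0.4]
y = [10 * ai + bi for ai, bi in zip(a, b)]
ranking = rank_sensitive_inputs({"a": a, "b": b}, y)
```

Refining only the top-ranked inputs is then the cheapest route to reducing output variance, as the abstract describes.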
Element Agglomeration Algebraic Multilevel Monte-Carlo Library
Energy Science and Technology Software Center (OSTI)
2015-02-19
ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski and currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.
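The multilevel idea can be sketched with the standard MLMC telescoping estimator, E[P_L] = E[P_0] + sum over l of E[P_l − P_{l−1}], where fine and coarse approximations at each level share the same random input so the corrections have small variance. The toy "discretization" below is purely illustrative and unrelated to ElagMC's finite element setting.

```python
import random

def mlmc_estimate(sampler, n_per_level):
    """Multilevel Monte Carlo telescoping estimator.
    sampler(level, rng) must return (P_l, P_{l-1}) computed from the SAME
    random input, with P_{-1} taken as 0. Coarse levels get many cheap
    samples; fine levels need only a few because the corrections are small."""
    total = 0.0
    for level, n in enumerate(n_per_level):
        rng = random.Random(level)
        diff_sum = 0.0
        for _ in range(n):
            fine, coarse = sampler(level, rng)
            diff_sum += fine - coarse
        total += diff_sum / n
    return total

# Toy problem: estimate E[u^2] = 1/3 for u ~ U(0,1), "discretized" at level l
# by a midpoint rule on 2^(l+1) grid cells; corrections shrink geometrically.
def sampler(level, rng):
    u = rng.random()
    def approx(l):
        if l < 0:
            return 0.0
        m = 2 ** (l + 1)
        return ((int(u * m) + 0.5) / m) ** 2  # midpoint approximation of u^2
    return approx(level), approx(level - 1)

estimate = mlmc_estimate(sampler, n_per_level=[20000, 10000, 5000, 5000])
```

The variance reduction comes entirely from coupling fine and coarse samples; algebraically agglomerated coarse spaces play the role the grid coarsening plays in this toy.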
Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator Physics
Office of Scientific and Technical Information (OSTI)
FLUKA is a general purpose Monte Carlo code capable of handling all radiation components from thermal energies (for neutrons) or 1 keV (for all other particles) to cosmic ray energies and can be applied in many different fields. Presently the code is maintained on
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Hybrid Deterministic/Monte Carlo Solutions to the Neutron Transport k-Eigenvalue Problem with a Comparison to Pure Monte Carlo Solutions. Jeffrey A. Willert, Los Alamos National Laboratory, September 16, 2013. Joint work with Dana Knoll (LANL), Ryosuke Park (LANL), and C. T. Kelley (NCSU). Report CASL-U-2013-0309-000. Outline: Introduction; Nonlinear Diffusion Acceleration for k-Eigenvalue Problems; Hybrid Methods; Classic Monte Carlo
Influence of hot spot features on the initiation characteristics of heterogeneous nitromethane
Dattelbaum, Dana M; Sheffield, Stephen A; Stahl, David B; Dattelbaum, Andrew M; Engelke, Ray
2010-01-01
To gain insights into the critical hot spot features influencing the initiation characteristics of energetic materials, well-defined micron-scale particles have been intentionally introduced into the homogeneous explosive nitromethane (NM). Two types of potential hot spot origins have been examined - shock impedance mismatches using solid silica beads, and porosity using hollow microballoons - as well as their sizes and inter-particle separations. Here, we present the results of several series of gas gun-driven plate impact experiments on NM/particle mixtures with well-controlled shock inputs. Detailed insights into the nature of the reactive flow during the build-up to detonation have been obtained from the response of in-situ electromagnetic gauges, and the data have been used to establish Pop-plots (run-distance-to-detonation vs. shock input pressure) for the mixtures. Comparisons of sensitization effects and energy release characteristics relative to the initial shock front between the solid and hollow beads are presented.
Projectile containing metastable intermolecular composites and spot fire method of use
Asay, Blaine W.; Son, Steven F.; Sanders, V. Eric; Foley, Timothy; Novak, Alan M.; Busse, James R.
2012-07-31
A method for altering the course of a conflagration involving firing a projectile comprising a powder mixture of oxidant powder and nanosized reductant powder at a velocity sufficient for a violent reaction between the oxidant powder and the nanosized reductant powder upon impact of the projectile, and causing impact of the projectile at a location chosen to draw a main fire to a spot fire at that location and thereby change the course of the conflagration, whereby the air near the chosen location is heated to a temperature sufficient to cause a spot fire at that location. The invention also includes a projectile useful for such a method, and said mixture preferably comprises a metastable intermolecular composite.
Vertically-tapered optical waveguide and optical spot transformer formed therefrom
Bakke, Thor; Sullivan, Charles T.
2004-07-27
An optical waveguide is disclosed in which a section of the waveguide core is vertically tapered during formation by spin coating by controlling the width of an underlying mesa structure. The optical waveguide can be formed from spin-coatable materials such as polymers, sol-gels and spin-on glasses. The vertically-tapered waveguide section can be used to provide a vertical expansion of an optical mode of light within the optical waveguide. A laterally-tapered section can be added adjacent to the vertically-tapered section to provide for a lateral expansion of the optical mode, thereby forming an optical spot-size transformer for efficient coupling of light between the optical waveguide and a single-mode optical fiber. Such a spot-size transformer can also be added to a III-V semiconductor device by post processing.
Eutectic structures in friction spot welding joint of aluminum alloy to copper
Shen, Junjun; Suhuddin, Uceu F. H.; Cardillo, Maria E. B.; Santos, Jorge F. dos
2014-05-12
A dissimilar joint of AA5083 Al alloy and copper was produced by friction spot welding. The Al-MgCuAl{sub 2} eutectic, in both coupled and divorced forms, was found in the weld. At a relatively high temperature, mass transport of Cu due to plastic deformation, material flow, and atomic diffusion, combined with the AA5083 alloy system, is responsible for the ternary eutectic melting.
Macrophyte mapping in ten lakes of South Carolina with multispectral SPOT HRV data
Mackey, H.E. Jr.
1989-01-01
Fall and spring multispectral SPOT HRV data for 1987 and 1988 were used to evaluate the macrophyte distributions in ten freshwater reservoirs of South Carolina. The types of macrophyte and wetland communities present along the shoreline of the lakes varied depending on the age, water level fluctuations, water quality, and basin morphology. Seasonal satellite data were important for evaluating the extent of persistent versus non-persistent macrophyte communities in the lakes. This paper contains only the viewgraphs of the presentation.
X marks the spot: Researchers confirm novel method for controlling plasma
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
rotation to improve fusion performance | Princeton Plasma Physics Lab. By Raphael Rosen, June 23, 2015. Representative plasma geometries, with the X-point location circled in red. Reprinted from T. Stoltzfus-Dueck et al., Phys. Rev. Lett. 114, 245001 (2015); copyright 2015 by the American Physical Society.
Spot test for 1,3,5-triamino-2,4,6-trinitrobenzene, TATB
Harris, B.W.
1984-11-29
A simple, sensitive and specific spot test for 1,3,5-triamino-2,4,6-trinitrobenzene, TATB, is described. Upon the application of the composition of matter of the subject invention to samples containing in excess of 0.1 mg of this explosive, a bright orange color results. Interfering species such as TNT and Tetryl can be removed by first treating the sample with a solvent which does not dissolve the TATB, but readily dissolves these interfering explosives.
Spot test for 1,3,5-triamino-2,4,6-trinitrobenzene, TATB
Harris, Betty W.
1986-01-01
A simple, sensitive and specific spot test for 1,3,5-triamino-2,4,6-trinitrobenzene, TATB, is described. Upon the application of the composition of matter of the present invention to samples containing in excess of 0.1 mg of this explosive, a bright orange color results. Interfering species such as TNT and Tetryl can be removed by first treating the sample with a solvent which does not dissolve much of the TATB, but readily dissolves these explosives.
SU-E-T-73: Commissioning of a Treatment Planning System for Proton Spot Scanning
Saini, J; Kang, Y; Schultz, L; Nicewonger, D; Herrera, M; Wong, T; Bowen, S; Bloch, C
2014-06-01
Purpose: A treatment planning system (TPS) was commissioned for clinical use with a fixed beam line proton delivery system. An outline of the data collection, modeling, and verification is provided. Methods: Beam data modeling for proton spot scanning in the CMS XiO TPS requires the following measurements: (i) integral depth dose curves (IDDCs); (ii) absolute dose calibration; and (iii) beam spot characteristics. The IDDCs for 18 proton energies were measured using an integrating detector in a single spot field in a water phantom. Absolute scaling of the IDDCs was performed based on ion chamber measurements in mono-energetic 10×10 cm{sup 2} fields in water. Beam spot shapes were measured in air using a flat panel scintillator detector at multiple planes. For beam model verification, more than 45 uniform dose phantom and patient plans were generated. These plans were used to measure range, point dose, and longitudinal and lateral profiles. Tolerances employed for verification were: point dose and longitudinal profiles, 2%; range, 1 mm; FWHM for lateral profiles, 2 mm; and patient plan dose distribution, gamma index of >90% at 3%/3 mm criteria. Results: More than 97% of the 115 point dose measurements were within ±2%, with a maximum deviation of 3%. 98% of the measured ranges were within 1 mm, with a maximum deviation of 1.4 mm. The normalized depth doses were within 2% at all depths. The maximum error in the FWHM of the lateral profiles was less than 2 mm. For 5 patient plans representing different anatomic sites, a total of 38 planes for 12 beams were analyzed for gamma index, with an average value of 99% and a minimum of 94%. Conclusions: The planning system was successfully commissioned and can be safely deployed for clinical use. Measurement of IDDCs on the user beam is highly recommended instead of using standard beam IDDCs.
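A simplified, same-grid, one-dimensional version of the gamma-index criterion used for such plan comparisons might look like the sketch below. Real gamma evaluations interpolate the reference distribution and work in 2D or 3D; the profiles here are made up for illustration.

```python
import math

def gamma_pass_rate(ref, meas, spacing_mm, dd_percent=3.0, dta_mm=3.0):
    """1D global gamma index at dd%/dta mm (simplified same-grid version):
    for each measured point, take the minimum over reference points of
    sqrt((dx/dta)^2 + (ddose/dd)^2); the point passes if that minimum <= 1.
    Returns the percentage of points passing."""
    d_max = max(ref)
    dd = dd_percent / 100.0 * d_max      # global dose-difference criterion
    passed = 0
    for i, dm in enumerate(meas):
        best = float("inf")
        for j, dr in enumerate(ref):
            dx = (i - j) * spacing_mm
            g2 = (dx / dta_mm) ** 2 + ((dm - dr) / dd) ** 2
            best = min(best, g2)
        if math.sqrt(best) <= 1.0:
            passed += 1
    return 100.0 * passed / len(meas)

# Hypothetical reference and measured profiles on a 2 mm grid:
ref = [0.0, 50.0, 100.0, 100.0, 50.0, 0.0]
meas = [0.0, 51.0, 99.0, 100.0, 52.0, 0.0]
rate = gamma_pass_rate(ref, meas, spacing_mm=2.0)
```

Combining a dose tolerance with a distance-to-agreement tolerance is what keeps the metric meaningful in steep dose gradients, where a pure point-dose comparison would fail.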
Friction Stir Spot Welding (FSSW) of Advanced High Strength Steel (AHSS)
Santella, M. L.; Hovanski, Yuri; Pan, Tsung-Yu
2012-04-16
Friction stir spot welding (FSSW) is applied to join advanced high strength steels (AHSS): galvannealed dual phase 780 MPa steel (DP780GA), transformation induced plasticity 780 MPa steel (TRIP780), and hot-stamped boron steel (HSBS). A low-cost Si{sub 3}N{sub 4} ceramic tool was developed and used for making the welds in this study instead of the polycrystalline cubic boron nitride (PCBN) material used in earlier studies. FSSW has the advantages of being a solid-state, low-temperature process and of being able to join dissimilar grades and thicknesses of steel. Two different tool shoulder geometries, concave with a smooth surface and convex with a spiral pattern, were used in the study. Welds were made by a 2-step displacement control process with weld times of 4, 6, and 10 seconds. Static tensile lap-shear strength reached 16.4 kN for DP780GA-HSBS and 13.2 kN for TRIP780-HSBS, above the spot weld strength requirements of AWS. Nugget pull-out was the failure mode of the joint. The joining mechanism is illustrated by the cross-section micrographs. Microhardness measurements showed hardening in the upper sheet steel (DP780GA or TRIP780) in the weld, but softening of the HSBS in the heat-affected zone (HAZ). The study demonstrated the feasibility of making high-strength AHSS spot welds with low-cost tools.
Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data
Zourari, K.; Peppa, V.; Papagiannis, P.; Ballester, Facundo; Siebert, Frank-André
2014-04-15
Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20-1090 keV through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thicknesses of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20-1090 keV, in 10 keV intervals). The three-parameter empirical model introduced by Archer et al. ["Diagnostic x-ray shielding design based on an empirical model of photon attenuation," Health Phys. 44, 507-517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. ["Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities," Med. Phys. 34, 1398-1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of the material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. Good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed.
Conclusions: The data of this work allow for the accurate calculation of structural shielding thickness, taking into account the spectral variation with shield thickness, and broad beam conditions, in a realistic geometry. The simplicity of calculations also obviates the need for the use of crude transmission data estimates such as the half and tenth value layer indices. Although this study was primarily designed for brachytherapy, results might also be useful for radiology and nuclear medicine facility design, provided broad beam conditions apply.
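The superposition scheme described above is straightforward to implement. The sketch below is illustrative only, not the authors' program; the Archer model form is standard, but the parameter values and spectrum weights in the example are made up:

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Archer et al. three-parameter broad-beam transmission model:
    B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]^(-1/gamma)."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

def polyenergetic_transmission(x, weights, params):
    """Superpose monoenergetic transmission curves for a known emission spectrum:
    weights are per-energy-bin emission probabilities, and params holds one
    fitted (alpha, beta, gamma) triple per bin."""
    total_w = sum(weights)
    return sum(w / total_w * archer_transmission(x, *p)
               for w, p in zip(weights, params))
```

Inverting the model for the thickness that achieves a target transmission, as the graphical tool described above does, can then be done numerically, for example by bisection on x.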
Optimized nested Markov chain Monte Carlo sampling: theory
Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D
2009-01-01
Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
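The composite-move idea can be sketched in a few lines. The code below is a minimal illustration under stated assumptions (a 1-D coordinate, Boltzmann acceptance, toy potentials), not the authors' isothermal-isobaric implementation:

```python
import math
import random

def accept(delta_e, beta, rng):
    """Metropolis acceptance for an energy change delta_e at inverse temperature beta."""
    return delta_e <= 0.0 or rng.random() < math.exp(-beta * delta_e)

def nested_mcmc(x0, e_full, e_ref, beta, n_inner=20, n_outer=200, step=0.5, seed=1):
    """Nested Markov chain Monte Carlo (sketch): inner Metropolis steps sample a
    cheap reference potential; each composite move is then accepted against the
    expensive 'full' potential with a modified criterion, so e_full is evaluated
    only once per n_inner inner steps."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_outer):
        y = x
        for _ in range(n_inner):  # inner chain driven only by the reference potential
            trial = y + rng.uniform(-step, step)
            if accept(e_ref(trial) - e_ref(y), beta, rng):
                y = trial
        # composite move: correct the reference samples toward the full potential
        delta = (e_full(y) - e_full(x)) - (e_ref(y) - e_ref(x))
        if accept(delta, beta, rng):
            x = y
        samples.append(x)
    return samples
```

The closer e_ref tracks e_full, the closer the composite acceptance probability is to one, which is exactly the quantity the paper maximizes by tuning the reference system's thermodynamic variables.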
Monte Carlo Simulation Tool Installation and Operation Guide
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software, and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation, and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
Monte Carlo prompt dose calculations for the National Ignition Facility
Latkowski, J.F.; Phillips, T.W.
1997-01-01
During peak operation, the National Ignition Facility (NIF) will conduct as many as 600 experiments per year and attain deuterium-tritium fusion yields as high as 1200 MJ/yr. The radiation effective dose equivalent (EDE) to workers is limited to an average of 0.3 mSv/yr (30 mrem/yr) in occupied areas of the facility. Laboratory personnel located outside the facility will receive EDEs <= 0.5 mSv/yr (<= 50 mrem/yr). The total annual occupational EDE for the facility will be maintained at <= 0.1 person-Sv/yr (<= 10 person-rem/yr). To ensure that prompt EDEs meet these limits, three-dimensional Monte Carlo calculations have been completed.
Quantum Monte Carlo simulation of spin-polarized H
Markic, L. Vranjes; Boronat, J.; Casulleras, J.
2007-02-01
The ground-state properties of spin-polarized hydrogen H{down_arrow} are obtained by means of diffusion Monte Carlo calculations. Using the most accurate ab initio H{down_arrow}-H{down_arrow} interatomic potential available to date, we have studied its gas phase, from the very dilute regime up to densities above its freezing point. At very small densities, the equation of state of the gas is very well described in terms of the gas parameter {rho}a{sup 3}, with a the s-wave scattering length. The solid phase has also been studied up to high pressures. The gas-solid phase transition occurs at a pressure of 173 bar, a much higher value than suggested by previous approximate descriptions.
Improved version of the PHOBOS Glauber Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Loizides, C.; Nagle, J.; Steinberg, P.
2015-09-01
“Glauber” models are used to calculate geometric quantities in the initial state of heavy ion collisions, such as impact parameter, number of participating nucleons, and initial eccentricity. Experimental heavy-ion collaborations, in particular at RHIC and LHC, use Glauber Model calculations for various geometric observables for determination of the collision centrality. In this document, we describe the assumptions inherent to the approach, and provide an updated implementation (v2) of the Monte Carlo based Glauber Model calculation, which originally was used by the PHOBOS collaboration. The main improvement with respect to the earlier version (v1) (Alver et al. 2008) is the inclusion of Tritium, Helium-3, and Uranium, as well as the treatment of deformed nuclei and Glauber–Gribov fluctuations of the proton in p+A collisions. A users’ guide (updated to reflect changes in v2) is provided for running various calculations.
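The core of a Glauber Monte Carlo is simple enough to sketch. The toy below is not the PHOBOS v2 code: it uses uniform-sphere nuclei instead of Woods-Saxon profiles, and the Pb-like parameter values are illustrative. It samples nucleon positions and counts participants and binary collisions with the usual black-disk criterion, d^2 < sigma_NN / pi:

```python
import math
import random

def sample_nucleus(A, R, rng):
    """Transverse (x, y) positions of A nucleons, uniform in a sphere of radius R (fm)."""
    pts = []
    while len(pts) < A:
        x, y, z = (rng.uniform(-R, R) for _ in range(3))
        if x * x + y * y + z * z <= R * R:
            pts.append((x, y))
    return pts

def glauber_event(A, B, b, sigma_nn, R, rng):
    """One Glauber MC event at impact parameter b (fm): two nucleons collide
    when their transverse separation squared is below sigma_nn / pi."""
    d2_max = sigma_nn / math.pi
    proj = [(x + b / 2.0, y) for x, y in sample_nucleus(A, R, rng)]
    targ = [(x - b / 2.0, y) for x, y in sample_nucleus(B, R, rng)]
    wounded_p, wounded_t, ncoll = set(), set(), 0
    for i, (px, py) in enumerate(proj):
        for j, (tx, ty) in enumerate(targ):
            if (px - tx) ** 2 + (py - ty) ** 2 < d2_max:
                ncoll += 1
                wounded_p.add(i)
                wounded_t.add(j)
    npart = len(wounded_p) + len(wounded_t)
    return npart, ncoll

# A central Pb+Pb-like event: sigma_nn ~ 4.2 fm^2 (42 mb), R ~ 6.6 fm (illustrative)
npart, ncoll = glauber_event(208, 208, 0.0, 4.2, 6.6, random.Random(7))
```

Averaging npart and ncoll over many events at each impact parameter is what connects the model to the centrality observables mentioned above.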
Modeling granular phosphor screens by Monte Carlo methods
Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.
2006-12-15
The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previously published experimental data on Gd{sub 2}O{sub 2}S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd{sub 2}O{sub 2}S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than conventional Gd{sub 2}O{sub 2}S:Tb screens under similar conditions (x-ray incident energy, screen thickness).
SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans
Enright, S; Asprinio, A; Lu, L
2014-06-01
Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm will improve dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse radiotherapy treatment planning system. These measurements were obtained for 6 MeV, 9 MeV, and 12 MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid water template with holes for bone-like and lung-like plugs was used. Different configurations were used with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. The accuracy of the Electron Monte Carlo algorithm was also compared to that of Pencil Beam. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film. The pass rate for Monte Carlo was in the 80%-99% range, whereas the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.
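For reference, the gamma pass rates quoted in abstracts like the one above can be computed as follows. This is a simplified 1-D, global-normalization sketch, not the clinical analysis software:

```python
import math

def gamma_values(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
    """Simplified 1-D gamma index: dd is the dose-difference criterion as a
    fraction of the reference maximum (global normalization); dta is the
    distance-to-agreement criterion in mm. Positions are in mm."""
    d_max = max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        # minimize the combined distance/dose metric over all evaluated points
        g2 = min(((rp - ep) / dta) ** 2 + ((rd - ed) / (dd * d_max)) ** 2
                 for ep, ed in zip(eval_pos, eval_dose))
        gammas.append(math.sqrt(g2))
    return gammas

def pass_rate(gammas):
    """Percentage of reference points with gamma <= 1."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

A point passes when some evaluated point lies within the combined 3%/3 mm ellipse around it, which is exactly the 3%/3 mm criterion cited in the abstract.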
The effect of laser spot shapes on polar-direct-drive implosions on the National Ignition Facility
Weilacher, F.; Radha, P. B. Collins, T. J. B.; Marozas, J. A.
2015-03-15
Ongoing polar-direct-drive (PDD) implosions on the National Ignition Facility (NIF) [J. D. Lindl and E. I. Moses, Phys. Plasmas 18, 050901 (2011)] use existing NIF hardware, including indirect-drive phase plates. This limits the performance achievable in these implosions. Spot shapes are identified that significantly improve the uniformity of PDD NIF implosions; outer surface deviation is reduced by a factor of 7 at the end of the laser pulse, and hot-spot distortion is reduced by a factor of 2 when the shell has converged by a factor of ~10. As a result, the neutron yield increases by approximately a factor of 2. This set of laser spot shapes is a combination of circular and elliptical spots, along with elliptical spot shapes modulated by an additional higher-intensity ellipse offset from the center of the beam. This combination is motivated in this paper. It is also found that this improved implosion uniformity is obtained independent of the heat conduction model. This work indicates that significant improvement in performance can be obtained robustly with the proposed spot shapes.
MODELING OF HIGH SPEED FRICTION STIR SPOT WELDING USING A LAGRANGIAN FINITE ELEMENT APPROACH
Miles, Michael; Karki, U.; Woodward, C.; Hovanski, Yuri
2013-09-03
Friction stir spot welding (FSSW) has been shown to be capable of joining steels of very high strength, while also being very flexible in terms of controlling the heat of welding and the resulting microstructure of the joint. This makes FSSW a potential alternative to resistance spot welding (RSW) if tool life is sufficiently high, and if machine spindle loads are sufficiently low for the process to be implemented on an industrial robot. Robots for spot welding can typically sustain vertical loads of about 8 kN, but FSSW at tool speeds of less than 3000 rpm causes loads that are too high, in the range of 11-14 kN. Therefore, in the current work tool speeds of 3000 rpm and higher were employed, in order to generate heat more quickly and to reduce welding loads to acceptable levels. The FSSW process was modeled using a finite element approach with the Forge® software package. An updated Lagrangian scheme with explicit time integration was employed to model the flow of the sheet material, subjected to boundary conditions of a rotating tool and a fixed backing plate [3]. The modeling approach is two-dimensional and axisymmetric, but with an aspect of three dimensions in terms of thermal boundary conditions. Material flow was calculated from a two-dimensional velocity field, but the heat generated by friction was computed using a virtual rotational velocity component from the tool surface. An isotropic, viscoplastic Norton-Hoff law was used to model the evolution of material flow stress as a function of strain, strain rate, and temperature. The model predicted welding temperatures and the movement of the joint interface with reasonable accuracy for the welding of a dual phase 980 steel.
,"Henry Hub Natural Gas Spot Price (Dollars per Million Btu)"
U.S. Energy Information Administration (EIA) Indexed Site
Numerical studies of third-harmonic generation in laser filament in air perturbed by plasma spot
Feng Liubin; Lu Xin; Liu Xiaolong; Li Yutong; Chen Liming; Ma Jinglong; Dong Quanli; Wang Weimin; Xi Tingting; Sheng Zhengming; Zhang Jie; He Duanwei
2012-07-15
Third-harmonic emission from a laser filament intercepted by a plasma spot is studied by numerical simulations. Significant enhancement of the third-harmonic generation is obtained due to the disturbance of the additional plasma. The contributions of the pure plasma effect and of a possible plasma-enhanced third-order susceptibility to the third-harmonic generation enhancement are compared. It is shown that the plasma-induced cancellation of destructive interference [Y. Liu et al., Opt. Commun. 284, 4706 (2011)] of the two-colored filament is the dominant mechanism of the enhancement of third-harmonic generation.
Quantum Monte Carlo for electronic structure: Recent developments and applications
Rodriquez, M. M.S.
1995-04-01
Quantum Monte Carlo (QMC) methods have been found to give excellent results when applied to chemical systems. The main goal of the present work is to use QMC to perform electronic structure calculations. In QMC, a Monte Carlo simulation is used to solve the Schroedinger equation, taking advantage of its analogy to a classical diffusion process with branching. In the present work the author focuses on how to extend the usefulness of QMC to more meaningful molecular systems. This study is aimed at questions concerning polyatomic and large atomic number systems. The accuracy of the solution obtained is determined by the accuracy of the trial wave function's nodal structure. Efforts in the group have given great emphasis to finding optimized wave functions for the QMC calculations. Little work had been done to systematically examine a family of systems and see how the best wave functions evolve with system size. In this work the author presents a study of trial wave functions for C, CH, C{sub 2}H and C{sub 2}H{sub 2}. The goal is to study how to build wave functions for larger systems by accumulating knowledge from the wave functions of their fragments, as well as gaining some knowledge on the usefulness of multi-reference wave functions. In an MC calculation of a heavy atom, for reasonable time steps most moves for core electrons are rejected. For this reason true equilibration is rarely achieved. A method proposed by Batrouni and Reynolds modifies the way the simulation is performed without altering the final steady-state solution. It introduces an acceleration matrix chosen so that all coordinates (i.e., of core and valence electrons) propagate at comparable speeds. A study of the results obtained using their proposed matrix suggests that it may not be the optimum choice. In this work the author has found that the desired mixing of coordinates between core and valence electrons is not achieved when using this matrix. A bibliography of 175 references is included.
Complete Monte Carlo Simulation of Neutron Scattering Experiments
Drosg, M.
2011-12-13
In the far past, it was not possible to accurately correct for the finite geometry and the finite sample size of a neutron scattering set-up. The limited calculation power of the ancient computers, the lack of powerful Monte Carlo codes, and the limitations of the data bases available then prevented a complete simulation of the actual experiment. Using, e.g., the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can now be simulated almost completely, with a high degree of precision, on a modern PC, which has a computing power ten thousand times that of a supercomputer of the early 1970s. Thus, (better) corrections can also be obtained easily for previously published data, provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g., atomic mass, relativistic correction, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments from liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two thirds of all differential scattering cross section data of {sup 3}He(n,n){sup 3}He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interaction in the air, the interaction with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are at least a factor of five smaller, making this work relevant.
Even more important are the corrections to the Karlsruhe data due to the inclusion of the missing outgoing self-attenuation that amounts to up to 15%.
Khodabakhshi, F.; Kazeminezhad, M., E-mail: mkazemi@sharif.edu; Kokabi, A.H.
2012-07-15
Constrained groove pressing, a severe plastic deformation method, is utilized to produce ultra-fine grained low carbon steel sheets. The ultra-fine grained sheets are joined via the resistance spot welding process and the characteristics of the spot welds are investigated. The resistance spot welding process is optimized for welding of sheets with different severe deformations, and the results are compared with those of as-received samples. The effects of failure mode and expulsion on the performance of ultra-fine grained sheet spot welds are investigated in the present paper, and the welding current and time of the resistance spot welding process are optimized accordingly. Failure mode and failure load obtained in tensile-shear tests, microhardness, X-ray diffraction, and transmission and scanning electron microscope images are used to describe the performance of the spot welds. The region between the interfacial-to-pullout mode transition and the expulsion limit is defined as the optimum welding condition. The results show that the optimum welding parameters (welding current and welding time) for ultra-fine grained sheets are shifted to lower values with respect to those for as-received specimens. In ultra-fine grained sheets, a new region, termed the recrystallized zone, forms in addition to the fusion zone, heat affected zone, and base metal. It is shown that the microstructures of the different zones in ultra-fine grained sheets are finer than those of as-received sheets. Highlights: • Resistance spot welding process is optimized for joining of UFG steel sheets. • Optimum welding current and time are decreased with increasing CGP pass number. • Microhardness at BM, HAZ, FZ and the recrystallized zone is enhanced due to CGP.
Simulation of atomic diffusion in the Fcc NiAl system: A kinetic Monte Carlo study
Office of Scientific and Technical Information (OSTI)
The atomic diffusion in fcc NiAl binary alloys was studied by kinetic Monte Carlo simulation. The environment-dependent hopping barriers were computed using a pair interaction model whose parameters were fitted to relevant...
Duo at Santa Fe's Monte del Sol Charter School takes top award in 25th New Mexico Supercomputing Challenge
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Meghan Hill and Katelynn James took the top prize for their research project, April 21, 2015. Photo caption: Katelynn James, left, and Meghan Hill of Monte del Sol Charter School in Santa Fe. Contact: Steve Sandoval, Los Alamos National Laboratory.
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo
White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; Mozyrsky, Dmitry
2015-07-07
Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet efficient enough for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm by up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms will be practical numerical tools for simulating non-adiabatic dynamics in realistic molecular systems.
Monte Carlo analysis of localization errors in magnetoencephalography
Medvick, P.A.; Lewis, P.S.; Aine, C.; Flynn, E.R.
1989-01-01
In magnetoencephalography (MEG), the magnetic fields created by electrical activity in the brain are measured on the surface of the skull. To determine the location of the activity, the measured field is fit to an assumed source generator model, such as a current dipole, by minimizing chi-square. For current dipoles and other nonlinear source models, the fit is performed by an iterative least squares procedure such as the Levenberg-Marquardt algorithm. Once the fit has been computed, analysis of the resulting value of chi-square can determine whether the assumed source model is adequate to account for the measurements. If the source model is adequate, then the effect of measurement error on the fitted model parameters must be analyzed. Although generic simulation studies can provide a rough idea of the effect that measurement error can be expected to have on source localization, they cannot provide detailed enough information to determine the effects that the errors in a particular measurement situation will produce. In this work, we introduce and describe the use of Monte Carlo-based techniques to analyze model fitting errors for real data. Given the details of the measurement setup and a statistical description of the measurement errors, these techniques determine the effects the errors have on the fitted model parameters. The effects can then be summarized in various ways, such as parameter variances/covariances or multidimensional confidence regions. 8 refs., 3 figs.
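The Monte Carlo error-analysis procedure generalizes well beyond MEG. A minimal generic sketch, assuming a simple linear model in place of the nonlinear dipole fit (the function names and data are illustrative, not from the paper):

```python
import math
import random

def linear_fit(xs, ys):
    """Closed-form least-squares (slope, intercept); this stands in for the
    iterative Levenberg-Marquardt dipole fit of the full problem."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

def monte_carlo_fit_errors(xs, ys, sigma, fit, n_trials=400, seed=0):
    """Monte Carlo analysis of fitting errors: perturb the measurements with the
    assumed noise model, refit each synthetic data set, and summarize the spread
    of the fitted parameters."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        noisy = [y + rng.gauss(0.0, sigma) for y in ys]
        trials.append(fit(xs, noisy))
    k = len(trials[0])
    means = [sum(t[i] for t in trials) / n_trials for i in range(k)]
    stds = [math.sqrt(sum((t[i] - means[i]) ** 2 for t in trials) / (n_trials - 1))
            for i in range(k)]
    return means, stds
```

The per-parameter standard deviations returned here correspond to the parameter variances/covariances mentioned in the abstract; keeping the full list of fitted parameters would similarly allow multidimensional confidence regions.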
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon. Weather Rev. 133:1155-1174, 2005), Raftery et al. recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
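The BMA machinery being trained here is compact enough to sketch: the predictive density is a weighted Gaussian mixture, and EM alternates responsibilities and weight/variance updates. This is an illustrative sketch with Gaussian kernels and made-up data, not the DREAM implementation:

```python
import math

def gauss_pdf(y, mu, var):
    """Normal density used as each model's predictive kernel."""
    return math.exp(-(y - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def bma_pdf(y, forecasts, weights, variances):
    """BMA predictive density: a weighted mixture of Gaussians, one centered
    on each competing model's forecast."""
    return sum(w * gauss_pdf(y, f, v)
               for f, w, v in zip(forecasts, weights, variances))

def em_step(obs, forecasts, weights, variances):
    """One EM iteration for the BMA weights and variances.
    obs[t] is the verifying observation; forecasts[t][m] is model m's forecast."""
    n, k = len(obs), len(weights)
    # E-step: responsibility of model m for observation t
    z = []
    for t in range(n):
        dens = [weights[m] * gauss_pdf(obs[t], forecasts[t][m], variances[m])
                for m in range(k)]
        s = sum(dens)
        z.append([d / s for d in dens])
    # M-step: re-estimate weights and variances from the responsibilities
    new_w = [sum(z[t][m] for t in range(n)) / n for m in range(k)]
    new_v = [sum(z[t][m] * (obs[t] - forecasts[t][m]) ** 2 for t in range(n))
             / max(sum(z[t][m] for t in range(n)), 1e-12) for m in range(k)]
    return new_w, new_v
```

The MCMC alternative compared in the paper samples the same weights and variances from their posterior instead of iterating these point updates, which is what yields the uncertainty information the abstract mentions.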
Status of the MORSE multigroup Monte Carlo radiation transport code
Emmett, M.B.
1993-06-01
There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the better known of the two and has undergone extensive use for many years. MORSE-SGC was originally developed around 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.
Monte Carlo Simulations of Cosmic Rays Hadronic Interactions
Aguayo Navarrete, Estanislao; Orrell, John L.; Kouzes, Richard T.
2011-04-01
This document describes the construction and results of the MaCoR software tool, developed to model the hadronic interactions of cosmic rays with different geometries of materials. The ubiquity of cosmic radiation in the environment results in the activation of stable isotopes, referred to as cosmogenic activation. The objective is to use this application in conjunction with a model of the MAJORANA DEMONSTRATOR components, from extraction to deployment, to evaluate the cosmogenic activation of such components before and after deployment. Cosmic ray showers include several types of particles with a wide range of energies (MeV to GeV). It is infeasible to compute an exact result with a deterministic algorithm for this problem; Monte Carlo simulations are a more suitable approach to modeling cosmic ray hadronic interactions. In order to validate the results generated by the application, a test comparing experimental muon flux measurements with those predicted by the application is presented. The experimental and simulated results agree to within 3%.
High order Chin actions in path integral Monte Carlo
Sakkos, K.; Casulleras, J.; Boronat, J.
2009-05-28
High order actions proposed by Chin have been used for the first time in path integral Monte Carlo simulations. Contrary to the Takahashi-Imada action, which is accurate to the fourth order only for the trace, the Chin action is fully fourth order, with the additional advantage that the leading fourth-order error coefficients are finely tunable. By optimizing two free parameters entering in the new action, we show that the time step error dependence achieved is best fitted with a sixth order law. The computational effort per bead is increased but the total number of beads is greatly reduced and the efficiency improvement with respect to the primitive approximation is approximately a factor of 10. The Chin action is tested in a one-dimensional harmonic oscillator, a H{sub 2} drop, and bulk liquid {sup 4}He. In all cases a sixth-order law is obtained with values of the number of beads that compare well with the pair action approximation in the stringent test of superfluid {sup 4}He.
Pseudopotentials for quantum Monte Carlo studies of transition metal oxides
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Krogel, Jaron T.; Santana Palacio, Juan A.; Reboredo, Fernando A.
2016-02-22
Quantum Monte Carlo (QMC) calculations of transition metal oxides are partially limited by the availability of high-quality pseudopotentials that are both accurate in QMC and compatible with major plane-wave electronic structure codes. We have generated a set of neon-core pseudopotentials with small cutoff radii for the early transition metal elements Sc to Zn within the local density approximation of density functional theory. The pseudopotentials have been directly tested for accuracy within QMC by calculating the first through fourth ionization potentials of the isolated transition metal (M) atoms and the binding curve of each M-O dimer. We find the ionization potentials to be accurate to 0.16(1) eV, on average, relative to experiment. The equilibrium bond lengths of the dimers are within 0.5(1)% of experimental values, on average, and the binding energies are also typically accurate to 0.18(3) eV. The level of accuracy we find for atoms and dimers is comparable to what has recently been observed for bulk metals and oxides using the same pseudopotentials. Our QMC pseudopotential results compare well with the findings of previous QMC studies and benchmark quantum chemical calculations.
Reduced Variance for Material Sources in Implicit Monte Carlo
Urbatsch, Todd J.
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
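The proposed cutoff logic can be sketched as follows. This is a hedged illustration of the idea, not Jayenne code; the function and parameter names are hypothetical.

```python
def update_cell(temp, material_energy, source_energy, temp_cutoff, emit_particles):
    """Sketch of the small-source fix described above (hypothetical names).

    Below the user-specified temperature cutoff, the material source is
    deposited deterministically into the material state, creating no Monte
    Carlo particles and hence no sampling variance. Above the cutoff, the
    source is handed to the usual IMC particle-emission routine.
    """
    if temp < temp_cutoff:
        # Deterministic update: no particles, zero variance from this source.
        return material_energy + source_energy, []
    # Otherwise fall back to stochastic IMC emission of the source energy.
    return material_energy, emit_particles(source_energy)
```

A caller would supply `emit_particles` as whatever routine splits the source energy among census particles; the point of the modification is that the deterministic branch bypasses that sampling entirely for small sources.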
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
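A minimal sketch of the weight-window splitting and rouletting that drives the long-history effect described above (illustrative only, not MCNP's implementation; the window bounds and survival weight are our own conventions):

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Return the list of surviving particle weights after one window check.

    Particles above the window are split so each daughter lies inside it;
    particles below it play Russian roulette against the window centre,
    which preserves the expected weight.
    """
    w_mid = 0.5 * (w_low + w_high)            # survival weight for roulette
    if weight > w_high:
        # Split into n daughters of equal weight inside the window.
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        # Roulette: survive with probability weight / w_mid, at weight w_mid.
        return [w_mid] if rng.random() < weight / w_mid else []
    return [weight]
```

The pathology the abstract describes is visible here: a particle arriving with weight far above `w_high` produces a large number of daughters, each of which must be tracked to completion, so one history balloons while other processes sit idle. The CCFE modification dynamically relaxes the window when such a deviation is detected.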
Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; Mozyrsky, Dmitry
2015-07-07
Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet efficient enough to be used for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms will provide practical numerical solutions for simulating non-adiabatic dynamics in realistic molecular systems.
MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD
K. HANSON
2001-02-01
The Markov Chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy {var_phi}, where {var_phi} is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of {var_phi} and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of {var_phi}, is proposed to measure the convergence of the MCMC sequence.
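The alternation between momentum refreshment and constant-H trajectories can be sketched for a one-dimensional Gaussian target. This is a minimal illustration of the Hamiltonian method, not the code behind the paper; the step size and trajectory length are arbitrary choices.

```python
import math
import random

def hmc_sample(grad_u, u, x0, n_samples, step=0.2, n_leap=10, seed=0):
    """Minimal Hamiltonian Monte Carlo for a 1D target (illustrative sketch).

    u(x) is minus the logarithm of the target pdf and grad_u its derivative.
    Each proposal follows a leapfrog trajectory of approximately constant
    H = p^2/2 + u(x), then passes through a Metropolis accept/reject test.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)               # fresh momentum each iteration
        x_new, p_new = x, p
        # Leapfrog integration: half-step momentum, full-step position, ...
        p_new -= 0.5 * step * grad_u(x_new)
        for i in range(n_leap):
            x_new += step * p_new
            if i != n_leap - 1:
                p_new -= step * grad_u(x_new)
        p_new -= 0.5 * step * grad_u(x_new)
        # Metropolis test on the (small) change in H along the trajectory.
        h_old = 0.5 * p * p + u(x)
        h_new = 0.5 * p_new * p_new + u(x_new)
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x)
    return samples

# Standard normal target: u(x) = x^2/2, grad_u(x) = x.
xs = hmc_sample(lambda x: x, lambda x: 0.5 * x * x, 0.0, 5000)
```

Because H is nearly conserved along the leapfrog trajectory, acceptance stays high even though each proposal moves far across the parameter space, which is the efficiency advantage over a random-walk Metropolis step.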
Monitoring seasonal and annual wetland changes in a freshwater marsh with SPOT HRV data
Mackey, H.E. Jr.
1989-12-31
Eleven dates of SPOT HRV data along with near-concurrent vertical aerial photographic and phenological data for 1987, 1988, and 1989 were evaluated to determine seasonal and annual changes in a 400-hectare, southeastern freshwater marsh. Early April through mid-May was the best time to discriminate among the cypress (Taxodium distichum)/water tupelo (Nyssa aquatica) swamp forest and the non-persistent (Ludwigia spp.) and persistent (Typha spp.) stands in this wetland. Furthermore, a ten-fold decrease in flow rate, from 11 cubic meters per sec (cms) in 1987 to one cms in 1988, was recorded in the marsh, followed by a shift to drier wetland communities. The Savannah River Site (SRS), maintained by the US Department of Energy, is a 777 km{sup 2} area located in south central South Carolina. Five tributaries of the Savannah River run southwest through the SRS and into the floodplain swamp of the Savannah River. This paper describes the use of SPOT HRV data to monitor seasonal and annual trends in one of these swamp deltas, Pen Branch Delta, during a three-year period, 1987--1989.
SMART II : the spot market agent research tool version 2.0.
North, M. J. N.
2000-12-14
Argonne National Laboratory (ANL) has worked closely with Western Area Power Administration (Western) over many years to develop a variety of electric power marketing and transmission system models that are being used for ongoing system planning and operation as well as analytic studies. Western markets and delivers reliable, cost-based electric power from 56 power plants to millions of consumers in 15 states. The Spot Market Agent Research Tool Version 2.0 (SMART II) is an investigative system that partially implements some important components of several existing ANL linear programming models, including some used by Western. SMART II does not implement a complete model of the Western utility system but it does include several salient features of this network for exploratory purposes. SMART II uses a Swarm agent-based framework. SMART II agents model bulk electric power transaction dynamics with recognition for marginal costs as well as transmission and generation constraints. SMART II uses a sparse graph of nodes and links to model the electric power spot market. The nodes represent power generators and consumers with distinct marginal decision curves and varying investment capital as well as individual learning parameters. The links represent transmission lines with individual capacities taken from a range of central distribution, outlying distribution and feeder line types. The application of SMART II to electric power systems studies has produced useful results different from those often found using more traditional techniques. Use of the advanced features offered by the Swarm modeling environment simplified the creation of the SMART II model.
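SMART II itself is a Swarm agent-based model, but the marginal-cost bulk-power transactions its agents negotiate rest on simple merit-order economics, which can be sketched for a single node (hypothetical data layout, not SMART II code):

```python
def clear_spot_market(offers, bids):
    """Merit-order clearing of a single-node spot market (illustrative sketch).

    offers/bids are (quantity, price) tuples: generator offers at marginal
    cost, consumer bids at willingness to pay. Returns (cleared quantity,
    clearing price) at the supply/demand intersection; transmission
    constraints, which SMART II also models, are ignored here.
    """
    supply = sorted(offers, key=lambda o: o[1])               # cheapest first
    demand = sorted(bids, key=lambda b: b[1], reverse=True)   # highest value first
    quantity, price = 0.0, None
    i = j = 0
    sq, sp = supply[0]
    dq, dp = demand[0]
    while sp <= dp:                  # trade while supply is cheaper than demand
        traded = min(sq, dq)
        quantity += traded
        price = sp                   # last accepted (marginal) offer sets price
        sq -= traded
        dq -= traded
        if sq == 0:
            i += 1
            if i == len(supply):
                break
            sq, sp = supply[i]
        if dq == 0:
            j += 1
            if j == len(demand):
                break
            dq, dp = demand[j]
    return quantity, price
```

For example, offers of 10 MWh at $5, $20, and $50 against bids of 12 MWh at $60 and 10 MWh at $30 clear 20 MWh at the $20 marginal offer; the $50 generator stays out of merit.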
Joint strength in high speed friction stir spot welded DP 980 steel
Saunders, Nathan; Miles, Michael; Hartman, Trent; Hovanski, Yuri; Hong, Sung Tae; Steel, Russell
2014-05-01
High speed friction stir spot welding was applied to 1.2 mm thick DP 980 steel sheets under different welding conditions, using PCBN tools. The range of vertical feed rates used during welding was 2.5 mm to 102 mm per minute, while the range of spindle speeds was 2500 to 6000 rpm. Extended testing was carried out for five different sets of welding conditions, until tool failure. These welding conditions resulted in vertical welding loads of 3.6 to 8.2 kN and lap shear tension failure loads of 8.9 to 11.1 kN. PCBN tools were shown, in the best case, to provide lap shear tension fracture loads at or above 9 kN for 900 spot welds, after which tool failure caused a rapid drop in joint strength. Joint strength was shown to be strongly correlated to bond area, which was measured from weld cross sections. Failure modes of the tested joints were a function of bond area and softening that occurred in the heat-affected zone.
Wear testing of friction stir spot welding tools for joining of DP 980 Steel
Ridges, Chris; Miles, Michael; Hovanski, Yuri; Peterson, Jeremy; Steel, Russell
2011-06-06
Friction stir spot welding has been shown to be a viable method of joining ultra high strength steel (UHSS), both in terms of joint strength and process cycle time. However, the cost of tooling must be reasonable in order for this method to be adopted as an industrial process. Several tooling materials have been evaluated in prior studies, including silicon nitride and polycrystalline cubic boron nitride (PCBN). Recently a new tool alloy has been developed, where a blend of PCBN and tungsten rhenium (W-Re) was used in order to improve the toughness of the tool. Wear testing results are presented for two of these alloys: one with a composition of 60% PCBN and 40% W-Re (designated as Q60), and one with 70% PCBN and 30% W-Re (designated as Q70). The sheet material used for all wear testing was DP 980. Tool profiles were measured periodically during the testing process in order to show the progression of wear as a function of the number of spots produced. Lap shear testing was done each time a tool profile was taken in order to show the relationship between tool wear and joint strength. For the welding parameters chosen for this study, the Q70 tool provided the best combination of wear resistance and joint strength.
Sensitivity of inertial confinement fusion hot spot properties to the deuterium-tritium fuel adiabat
Melvin, J.; Lim, H.; Rana, V.; Glimm, J.; Cheng, B.; Sharp, D. H.; Wilson, D. C.
2015-02-15
We determine the dependence of key Inertial Confinement Fusion (ICF) hot spot simulation properties on the deuterium-tritium fuel adiabat, here modified by addition of energy to the cold shell. Variation of this parameter reduces the simulation-to-experiment discrepancy in some, but not all, experimentally inferred quantities. Using simulations with radiation drives tuned to match experimental shots N120321 and N120405 from the National Ignition Campaign (NIC), we carry out sets of simulations with varying amounts of added entropy and examine the sensitivities of important experimental quantities. Neutron yields, burn widths, hot spot densities, and pressures follow a trend approaching their experimentally inferred values. Ion temperatures and areal densities are sensitive to the adiabat changes, but do not necessarily converge to their experimental values with the added entropy. This suggests that a modification of the simulation adiabat is one, but not the only, explanation of the observed simulation-to-experiment discrepancies. In addition, we use a theoretical model to predict 3D mix and observe a slight trend toward less mixing as the entropy is enhanced. Instantaneous quantities are assessed at the time of maximum neutron production, determined dynamically within each simulation. These trends contribute to ICF science as an effort to understand the NIC simulation-to-experiment discrepancy, and in their relation to the high foot experiments, which feature a higher adiabat in the experimental design and an improved neutron yield in the experimental results.
DEFINING THE 'BLIND SPOT' OF HINODE EIS AND XRT TEMPERATURE MEASUREMENTS
Winebarger, Amy R.; Cirtain, Jonathan; Mulu-Moore, Fana [NASA Marshall Space Flight Center, VP 62, Huntsville, AL 35812 (United States); Warren, Harry P. [Space Science Division, Naval Research Laboratory, Washington, DC 20375 (United States); Schmelz, Joan T. [Physics Department, University of Memphis, Memphis, TN 38152 (United States); Golub, Leon [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Kobayashi, Ken, E-mail: amy.r.winebarger@nasa.gov [Center for Space Plasma and Aeronomic Research, 320 Sparkman Dr, Huntsville, AL 35805 (United States)
2012-02-20
Observing high-temperature, low emission measure plasma is key to unlocking the coronal heating problem. With current instrumentation, a combination of EUV spectral data from the Hinode Extreme-ultraviolet Imaging Spectrometer (EIS; sensitive to temperatures up to 4 MK) and broadband filter data from the Hinode X-ray Telescope (XRT; sensitive to higher temperatures) is typically used to diagnose the temperature structure of the observed plasma. In this Letter, we demonstrate that a 'blind spot' exists in temperature-emission measure space for combined Hinode EIS and XRT observations. For a typical active region core with significant emission at 3-4 MK, Hinode EIS and XRT are insensitive to plasma with temperatures greater than {approx}6 MK and emission measures less than {approx}10{sup 27} cm{sup -5}. We then demonstrate that the temperature and emission measure limits of this blind spot depend upon the temperature distribution of the plasma along the line of sight by considering a hypothetical emission measure distribution sharply peaked at 1 MK. For this emission measure distribution, we find that EIS and XRT are insensitive to plasma with emission measures less than {approx}10{sup 26} cm{sup -5}. We suggest that a spatially and spectrally resolved 6-24 Å spectrum would improve the sensitivity to these high-temperature, low emission measure plasmas.
Sun, Xin; Stephens, Elizabeth V.; Khaleel, Mohammad A.
2006-04-28
This paper examines the effects of fusion zone size on failure modes, static strength and energy absorption of resistance spot welds (RSW) of advanced high strength steels (AHSS). DP800 and TRIP800 spot welds are considered. The main failure modes for spot welds are nugget pullout and interfacial fracture. Partial interfacial fracture is also observed. The critical fusion zone sizes to ensure a nugget pullout failure mode are developed for both DP800 and TRIP800 using a limit load based analytical model and micro-hardness measurements of the weld cross sections. Static weld strength tests using cross tension samples were performed on the joint populations with controlled fusion zone sizes. The resulting peak load and energy absorption levels associated with each failure mode were studied using statistical data analysis tools. The results of this study show that the conventional weld size of 4√t cannot produce the nugget pullout mode for either the DP800 or TRIP800 material. The results also suggest that performance-based spot weld acceptance criteria should be developed for different AHSS spot welds.
On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems
Walsh, Jon
2015-08-31
The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
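One way to interpolate probability tables between tabulated temperatures on the fly is stochastic mixing, which reproduces linear interpolation in expectation without ever constructing a merged table. A hedged sketch follows; the actual scheme in the work above may differ, and the data layout here is hypothetical.

```python
import random

def sample_xs_on_the_fly(t, t_lo, t_hi, table_lo, table_hi, rng=random):
    """Stochastic temperature interpolation between two probability tables.

    Each table is a list of (cumulative_probability, cross_section) bands
    tabulated at one temperature. A table is chosen with probability equal
    to its temperature lever arm, then a band is sampled from it; averaged
    over many samples this reproduces linear interpolation in temperature.
    """
    frac_hi = (t - t_lo) / (t_hi - t_lo)      # lever arm toward the hot table
    table = table_hi if rng.random() < frac_hi else table_lo
    xi = rng.random()
    for cum_prob, xs in table:                # invert the cumulative bands
        if xi <= cum_prob:
            return xs
    return table[-1][1]
```

The appeal for fast-spectrum work is that no merged table is stored per temperature; only the two bracketing tables are kept, and the interpolation cost is one extra random number per lookup.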
Duo at Santa Fe's Monte del Sol Charter School takes top award...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Duo at Santa Fe's Monte del Sol Charter School takes top award in 25th New Mexico Supercomputing Challenge. Meghan Hill and Katelynn James...
A Geant4 Implementation of a Novel Single-Event Monte Carlo Method...
Office of Scientific and Technical Information (OSTI)
Monte-Carlo particle dynamics in a variable specific impulse magnetoplasma rocket
Office of Scientific and Technical Information (OSTI)
The self-consistent mathematical model in a Variable Specific Impulse Magnetoplasma Rocket (VASIMR) is examined. Of particular importance is the effect of a magnetic nozzle in enhancing the axial momentum of the exhaust. Also, different...
Monte-Carlo simulation of noise in hard X-ray Transmission Crystal Spectrometers: Identification of contributors to the background noise and shielding optimization
Office of Scientific and Technical Information (OSTI)
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Particle Splitting for Monte-Carlo Simulation of the National Ignition Facility
Office of Scientific and Technical Information (OSTI)
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is scheduled for completion in 2009. Thereafter, experiments will commence in which capsules of DT will be imploded, generating neutrons, gammas, x-rays, and other...
Testing the Monte Carlo-mean field approximation in the one-band Hubbard model
Office of Scientific and Technical Information (OSTI)
Authors: Mukherjee, Anamitra; Patel, Niravkumar D.; Dong, Shuai; Johnston, Steve; Moreo, Adriana; Dagotto, Elbio. Publication Date: 2014-11-21. OSTI Identifier: 1180511. Journal Name: Physical Review B...
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
MONTE CARLO SIMULATION OF METASTABLE OXYGEN PHOTOCHEMISTRY IN COMETARY ATMOSPHERES
Bisikalo, D. V.; Shematovich, V. I. [Institute of Astronomy of the Russian Academy of Sciences, Moscow (Russian Federation); Gérard, J.-C.; Hubert, B. [Laboratory for Planetary and Atmospheric Physics (LPAP), University of Liège, Liège (Belgium); Jehin, E.; Decock, A. [Origines Cosmologiques et Astrophysiques (ORCA), University of Liège (Belgium); Hutsemékers, D. [Extragalactic Astrophysics and Space Observations (EASO), University of Liège (Belgium); Manfroid, J., E-mail: B.Hubert@ulg.ac.be [High Energy Astrophysics Group (GAPHE), University of Liège (Belgium)
2015-01-01
Cometary atmospheres are produced by the outgassing of material, mainly H{sub 2}O, CO, and CO{sub 2}, from the nucleus of the comet under the energy input from the Sun. Subsequent photochemical processes lead to the production of other species generally absent from the nucleus, such as OH. Although all comets are different, they all have a highly rarefied atmosphere, which is an ideal environment for nonthermal photochemical processes to take place and influence the detailed state of the atmosphere. We develop a Monte Carlo model of the coma photochemistry. We compute the energy distribution functions (EDFs) of the metastable O({sup 1}D) and O({sup 1}S) species and obtain the red (630 nm) and green (557.7 nm) spectral line shapes of the full coma, consistent with the computed EDFs and the expansion velocity. We show that both species have severely non-Maxwellian EDFs, which result in broad spectral lines, and that suprathermal broadening dominates over that due to the expansion motion. We apply our model to the atmospheres of comets C/1996 B2 (Hyakutake) and 103P/Hartley 2. The computed width of the green line, expressed in terms of speed, is lower than that of the red line. This result is comparable to previous theoretical analyses, but in disagreement with observations. We explain that the spectral line shape depends not only on the exothermicity of the photochemical production mechanisms, but also on thermalization by elastic collisions, which reduces the width of the emission line coming from the O({sup 1}D) level because of its longer lifetime.
Quantum Monte Carlo methods and lithium cluster properties
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, and is shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with the experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.
Utility of Monte Carlo Modelling for Holdup Measurements.
Belian, Anthony P.; Russo, P. A.; Weier, Dennis R. ,
2005-01-01
Non-destructive assay (NDA) measurements performed to locate and quantify holdup in the Oak Ridge K25 enrichment cascade used neutron totals counting and low-resolution gamma-ray spectroscopy. This facility housed the gaseous diffusion process for enrichment of uranium, in the form of UF{sub 6} gas, from {approx} 20% to 93%. The {sup 235}U inventory in K-25 is all holdup. These buildings have been slated for decontamination and decommissioning. The NDA measurements establish the inventory quantities and will be used to assure criticality safety and meet criteria for waste analysis and transportation. The tendency to err on the side of conservatism for the sake of criticality safety in specifying total NDA uncertainty argues, in the interests of safety and costs, for obtaining the best possible value of uncertainty at the conservative confidence level for each item of process equipment. Variable deposit distribution is a complex systematic effect (i.e., determined by multiple independent variables) on the portable NDA results for very large and bulk converters that contributes greatly to total uncertainty for holdup in converters measured by gamma or neutron NDA methods. Because the magnitudes of complex systematic effects are difficult to estimate, computational tools are important for evaluating those that are large. Motivated by very large discrepancies between gamma and neutron measurements of high-mass converters, with gamma results tending to dominate, the Monte Carlo code MCNP has been used to determine the systematic effects of deposit distribution on gamma and neutron results for {sup 235}U holdup mass in converters.
This paper details the numerical methodology used to evaluate large systematic effects unique to each measurement type, validates the methodology by comparison with measurements, and discusses how modeling tools can supplement the calibration of instruments used for holdup measurements by providing realistic values at well-defined confidence levels for dominating systematic effects.
Nguyen, Vanthan; Yan, Lihe; Si, Jinhai; Hou, Xun
2015-02-28
Photoluminescent carbon nanodots (C-dots) with tunable, uniform sizes were fabricated in polyethylene glycol (PEG{sub 200N}) solution using a femtosecond laser ablation method. The size distributions and photoluminescence (PL) properties of the C-dots are well controlled by adjusting the combined parameters of laser fluence, spot size, and irradiation time. The size-reduction efficiency of the C-dots progressively increases with decreasing laser fluence and spot size. The PL spectra are red-shifted and the quantum yields decrease with increasing C-dot size, which could be attributed to the more complex surface functional groups attached to C-dots at higher laser fluence and larger spot size. Moreover, an increase in irradiation time leads to a decrease in C-dot size, but long irradiation times generate complex functional groups on the C-dots, and consequently the PL spectra are red-shifted.
Ultrasonic Spot Welding of AZ31B to Galvanized Mild Steel
Pan, Dr. Tsung-Yu; Franklin, Teresa; Pan, Professor Jwo; Brown, Elliot; Santella, Michael L
2010-01-01
Ultrasonic spot welds were made between sheets of 0.8-mm-thick hot-dip-galvanized mild steel and 1.6-mm-thick AZ31B-H24. Lap-shear strengths of 3.0-4.2 kN were achieved with weld times of 0.3-1.2 s. The failure to achieve strong bonding in joints where the Zn coating was removed from the steel surface indicates that Zn is essential to the bonding mechanism. Microstructure characterization and microchemical analysis indicated that temperatures at the AZ31-steel interfaces reached at least 344 C in less than 0.3 s. The elevated temperatures promoted annealing of the AZ31B-H24 metal and chemical reactions between it and the Zn coating.
On the development of nugget growth model for resistance spot welding
Zhou, Kang, E-mail: zhoukang326@126.com, E-mail: melcai@ust.hk; Cai, Lilong, E-mail: zhoukang326@126.com, E-mail: melcai@ust.hk [Department of Mechanical and Aerospace Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon (Hong Kong)
2014-04-28
In this paper, we develop a general mathematical model to estimate the nugget growth process based on the heat energy delivered into the welds during resistance spot welding. According to the principles of thermodynamics and heat transfer, and accounting for the effect of electrode force during the welding process, the shape of the nugget can be estimated. A mathematical model relating the absorbed heat energy to the nugget diameter can then be obtained theoretically. It is shown that the nugget diameter can be precisely described by piecewise fractional polynomial functions of the heat energy. Experiments were conducted under different welding conditions, such as welding currents and workpiece thicknesses and widths, to validate the model and the theoretical analysis. All experiments confirmed that the proposed model can predict nugget diameters with high accuracy from the input heat energy delivered to the welds.
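As an illustration of the kind of energy-to-diameter relation such a model captures, the sketch below fits a simple growth law to invented (heat energy, nugget diameter) pairs. The data, the sqrt-energy form, and the function names are assumptions for illustration only, not the paper's actual piecewise model.

```python
import numpy as np

# Hypothetical (heat energy [J], nugget diameter [mm]) pairs -- invented for
# illustration, not data from the paper.
energy = np.array([200.0, 400.0, 600.0, 800.0, 1000.0, 1200.0])
diameter = np.array([2.1, 3.4, 4.3, 5.0, 5.5, 5.9])

# Stand-in growth law: diameter linear in sqrt(energy). The paper's actual
# model is piecewise in the heat energy; this is only a one-segment sketch.
coeffs = np.polyfit(np.sqrt(energy), diameter, 1)

def predict_diameter(e):
    """Predict nugget diameter [mm] from input heat energy [J]."""
    return np.polyval(coeffs, np.sqrt(e))

print(predict_diameter(500.0))
```

A real implementation would fit separate segments over different energy ranges, as the paper's piecewise formulation suggests.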
Impact of tool wear on joint strength in friction stir spot welding of DP 980 steel
Miles, Michael; Ridges, Chris; Hovanski, Yuri; Peterson, Jeremy; Santella, M. L.; Steel, Russel
2011-09-14
Friction stir spot welding has been shown to be a viable method of joining ultra-high-strength steel (UHSS), both in terms of joint strength and process cycle time. However, the cost of tooling must be reasonable for this method to be adopted as an industrial process. Recently, new tool alloys have been developed that blend PCBN with tungsten-rhenium (W-Re) to improve tool toughness. Wear testing results are presented for two of these alloys: one with 60% PCBN and 40% W-Re, and one with 70% PCBN and 30% W-Re. The sheet material used for all wear testing was 1.4 mm DP 980. Lap-shear testing was used to show the relationship between tool wear and joint strength. The Q70 tool provided the best combination of wear resistance and joint strength.
Crystal structure of Spot 14, a modulator of fatty acid synthesis
Colbert, Christopher L.; Kim, Chai-Wan; Moon, Young-Ah; Henry, Lisa; Palnitkar, Maya; McKean, William B.; Fitzgerald, Kevin; Deisenhofer, Johann; Horton, Jay D.; Kwon, Hyock Joo
2011-09-06
Spot 14 (S14) is a protein that is abundantly expressed in lipogenic tissues and is regulated in a manner similar to other enzymes involved in fatty acid synthesis. Deletion of S14 in mice decreased lipid synthesis in lactating mammary tissue, but the mechanism of S14's action is unknown. Here we present the crystal structure of S14 at 2.65 {angstrom} resolution and biochemical data showing that S14 can form heterodimers with MIG12. MIG12 modulates fatty acid synthesis by inducing the polymerization and activity of acetyl-CoA carboxylase (ACC), which catalyzes the first committed reaction in the fatty acid synthesis pathway. Coexpression of S14 and MIG12 leads to heterodimers and reduced ACC polymerization and activity. The structure of S14 suggests a mechanism whereby heterodimer formation with MIG12 attenuates the ability of MIG12 to activate ACC.
Electric rate that shifts hourly may foretell spot-market kWh
Springer, N.
1985-11-25
Four California industrial plants have cut their electricity bills up to 16% by shifting from traditional time-of-use rates to an experimental real-time pricing (RTP) program that varies prices hourly. The users receive a price schedule reflecting changing generating costs one day in advance, to encourage them to increase power consumption during the cheapest time periods. Savings during the pilot program ranged between $11,000 and $32,000 per customer. The hourly cost breakdown encourages consumption during the night and early morning. The signalling system could be expanded to cogenerators and independent small power producers. If an electricity spot market develops, forecasters expect that contracts for future delivery of electricity could eventually be traded on the exchanges.
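To see how hourly pricing rewards load shifting, here is a toy bill calculation comparing a flat rate with an hourly schedule after load is moved into cheap night hours. All prices and loads are invented and do not reproduce the pilot program's tariffs or savings.

```python
# Toy comparison of a flat time-of-use rate with hourly real-time prices when
# a plant shifts load into cheap night hours. All numbers are invented.
hourly_price = [0.05] * 7 + [0.10] * 12 + [0.05] * 5    # $/kWh, 24 hours
flat_price = 0.08                                       # $/kWh baseline

load_flat = [100.0] * 24                                # kWh/h, unshifted
load_shifted = [130.0] * 7 + [70.0] * 12 + [130.0] * 5  # same 2400 kWh total

bill_flat = sum(l * flat_price for l in load_flat)
bill_rtp = sum(l * p for l, p in zip(load_shifted, hourly_price))
savings_pct = 100.0 * (bill_flat - bill_rtp) / bill_flat
print(round(savings_pct, 2))   # percent saved by shifting under RTP
```

With these invented numbers the shifted schedule saves roughly 16% of the flat-rate bill, the same order as the savings the article reports.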
Landsat and SPOT data for oil exploration in North-Western China
Nishidai, Takashi
1996-07-01
Satellite remote sensing technology has been employed by Japex for many years to provide information for oil exploration programs. Since the beginning of the 1980s, studies ranging from regional geological interpretation to advanced analyses of satellite imagery with high spectral and spatial resolutions (such as Landsat TM and SPOT HRV) have been carried out, both for exploration programs and for scientific research. Advanced techniques (including analysis of airborne hyper-multispectral imaging sensor data) as well as conventional photogeological techniques were used throughout these programs. The first program using remote sensing technology in China focused on the Tarim Basin, Xinjiang Uygur Autonomous Region, and was carried out using Landsat MSS data. Landsat MSS imagery provides useful preliminary geological information about an area of interest prior to field studies. About 90 Landsat scenes cover the entire Xinjiang Uygur Autonomous Region, which allowed us to give comprehensive overviews of three hydrocarbon-bearing basins (Tarim, Junggar, and Turpan-Hami) in NW China. The overviews were based on interpretation and assessment of the satellite imagery and on a synthesis of the most up-to-date accessible geological and geophysical data, as well as some field work. Pairs of stereoscopic SPOT HRV images were used to generate digital elevation data on a 40 m grid for part of the Tarim Basin. Topographic contour maps created from this digital elevation data, at scales of 1:250,000 and 1:100,000 with contour intervals of 100 m and 50 m, allowed us to make precise geological interpretations and to carry out swift and efficient geological field work. Satellite imagery was also used to make medium- to large-scale image maps, not only to interpret geological features but also to support field workers and seismic survey field operations.
Heat-affected zone liquation crack on resistance spot welded TWIP steels
Saha, Dulal Chandra [Department of Advanced Materials Engineering, Dong-Eui University, 995 Eomgwangno, Busanjin-gu, Busan 614-714 (Korea, Republic of); Chang, InSung [Automotive Production Development Division, Hyundai Motor Company (Korea, Republic of); Park, Yeong-Do, E-mail: ypark@deu.ac.kr [Department of Advanced Materials Engineering, Dong-Eui University, 995 Eomgwangno, Busanjin-gu, Busan 614-714 (Korea, Republic of)
2014-07-01
In this study, the heat-affected zone (HAZ) liquation cracking and segregation behavior of resistance spot welded twinning-induced plasticity (TWIP) steel are reported. Cracks appeared in the post-welded joints, originating at the partially melted zone (PMZ) and propagating from the PMZ through the HAZ to the base metal (BM). The crack length and crack opening width increased with heat input, and the welding current was identified as the parameter most influencing crack formation. Cracks appeared at the PMZ when the nugget diameter reached 4.50 mm or greater, and the liquation cracks occurred along two sides of the notch tip in the sheet direction rather than in the electrode direction. Cracks were backfilled with liquid films that have a lamellar structure and are presumed to be a eutectic constituent. Co-segregation of alloying elements such as C and Mn was detected on the liquid films by electron-probe microanalysis (EPMA) line scans and element maps, suggesting that the liquid films were enriched in Mn and C. The eutectic constituent was identified by analyzing the calculated phase diagram together with the thermal history from finite element simulation. Preliminary experimental results showed that the cracks have little or no significant effect on the static cross-tension strength (CTS) and tensile-shear strength (TSS). In addition, possible ways to avoid cracking are discussed. - Highlights: The HAZ liquation cracking during resistance spot welding of TWIP steel was examined. Cracks were completely backfilled and healed with a divorced eutectic secondary phase. Co-segregation of C and Mn was detected in the cracked zone. Heat input was the most influential factor in initiating liquation cracks. Cracks have little or no significant effect on static tensile properties.
Cranmer-Sargison, G.; Weston, S.; Evans, J. A.; Sidhu, N. P.; Thwaites, D. I.
2011-12-15
Purpose: The goal of this work was to implement a recently proposed small field dosimetry formalism [Alfonso et al., Med. Phys. 35(12), 5179-5186 (2008)] for a comprehensive set of diode detectors and provide the required Monte Carlo generated factors to correct measurements. Methods: Jaw-collimated square small field sizes of side 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, and 3.0 cm, normalized to a reference field of 5.0 cm x 5.0 cm, were used throughout this study. Initial linac modeling was performed with electron source parameters at 6.0, 6.1, and 6.2 MeV with the Gaussian FWHM decreased in steps of 0.010 cm from 0.150 to 0.100 cm. DOSRZnrc was used to develop models of the IBA stereotactic field diode (SFD) as well as the PTW T60008, T60012, T60016, and T60017 field diodes. Simulations were run and isocentric, detector-specific output ratios (OR{sub det}) were calculated at depths of 1.5, 5.0, and 10.0 cm. This was performed using the following source parameter subset: 6.1 and 6.2 MeV with FWHM = 0.100, 0.110, and 0.120 cm. The source parameters were finalized by comparing experimental detector-specific output ratios with simulation. Simulations were then run with the active volume and surrounding materials set to water, and the replacement correction factors were calculated according to the newly proposed formalism. Results: In all cases, the experimental field size widths (at the 50% level) were found to be smaller than nominal, and the simulated field sizes were therefore adjusted accordingly. At a FWHM of 0.150 cm, simulation produced penumbral widths that were too broad. The fit improved as the FWHM was decreased, yet for all but the smallest field size it worsened again at a FWHM of 0.100 cm. The simulated OR{sub det} were found to be greater than, equivalent to, and less than experiment for spot size FWHM = 0.100, 0.110, and 0.120 cm, respectively. This is due to the change in source occlusion as a function of FWHM and field size.
The corrections required for the 0.5 cm field size were 0.95 ({+-}1.0%) for the SFD, T60012, and T60017 diodes and 0.90 ({+-}1.0%) for the T60008 and T60016 diodes, indicating the measured output ratios to be 5% and 10% high, respectively. Our results also revealed the correction factors to be the same within statistical variation at all depths considered. Conclusions: A number of general conclusions are evident: (1) small field OR{sub det} are very sensitive to the simulated source parameters, and therefore, rigorous Monte Carlo linac model commissioning, with respect to measurement, must be pursued prior to use, (2) backscattered dose to the monitor chamber should be included in simulated OR{sub det} calculations, (3) the corrections required for diode detectors are design dependent and therefore detailed detector modeling is required, and (4) the reported detector specific correction factors may be applied to experimental small field OR{sub det} consistent with those presented here.
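The reported corrections are applied multiplicatively to measured output ratios; a minimal sketch of conclusion (4), assuming a hypothetical measured ratio of 0.62 for the 0.5 cm field:

```python
# Applying the detector-specific correction factors reported above to a
# measured small-field output ratio. The factors are the paper's rounded
# 0.5 cm field corrections; the measured value 0.62 is a made-up example.
k_corr = {"SFD": 0.95, "T60012": 0.95, "T60017": 0.95,
          "T60008": 0.90, "T60016": 0.90}

measured_or = 0.62   # hypothetical measured output ratio, 0.5 cm field

corrected = {det: measured_or * k for det, k in k_corr.items()}
for det in sorted(corrected):
    print(det, round(corrected[det], 3))
```

The correction pulls the over-responding diode readings down by the 5% or 10% the paper reports.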
Zhang, Y; Giebeler, A; Mascia, A; Piskulich, F; Perles, L; Lepage, R; Dong, L
2014-06-01
Purpose: To quantitatively evaluate the dosimetric consequences of spot size variations and validate beam-matching criteria for commissioning a pencil beam model for multiple treatment rooms. Methods: A planning study was first conducted by simulating spot size variations to systematically evaluate their dosimetric impact in selected cases, which was used to establish the in-air spot size tolerance for beam-matching specifications. A beam model in the treatment planning system was created using in-air spot profiles acquired in one treatment room. These spot profiles were also acquired from another treatment room to assess the actual spot size variations between the two treatment rooms. We created twenty-five test plans with targets of different sizes at different depths, and performed dose measurements along the entrance, proximal, and distal target regions. The absolute doses at those locations were measured using ionization chambers in both treatment rooms and were compared against the doses calculated by the beam model. Fifteen additional patient plans were also measured and included in our validation. Results: The beam model is relatively insensitive to spot size variations. With an average of less than 15% measured in-air spot size variation between the two treatment rooms, the average dose difference was -0.15% with a standard deviation of 0.40% for 55 measurement points within the target region; the differences increased to 1.4% {+-} 1.1% in the entrance regions, which are more affected by in-air spot size variations. Overall, our single-room-based beam model in the treatment planning system agreed with measurements in both rooms to within 0.5% in the target region. For the fifteen patient cases, the agreement was within 1%. Conclusion: We have demonstrated that dosimetrically equivalent machines can be established when in-air spot size variations are within 15% between the two treatment rooms.
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry
Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; Mueller, Jonathon W.; Cody, Dianna D.; DeMarco, John J.
2015-02-15
Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
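The summary statistics quoted above (mean, standard deviation, and range of percent differences) can be formed as follows; the dose values are invented placeholders, not the study's measurements.

```python
import numpy as np

# Invented TLD measurements and simulated doses (mGy) for ten patients, used
# only to show how the reported percent-difference statistics are computed.
tld = np.array([12.0, 15.5, 9.8, 11.2, 14.0, 10.5, 13.3, 12.7, 16.1, 9.0])
sim = np.array([11.5, 15.0, 10.1, 10.9, 13.2, 10.0, 13.5, 12.0, 15.4, 8.7])

# Percent difference of simulation relative to measurement, per patient.
pct_diff = 100.0 * (sim - tld) / tld
print(pct_diff.mean(), pct_diff.std(ddof=1), pct_diff.min(), pct_diff.max())
```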
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
Farlotti, M.; Larsen, E. W.
2013-07-01
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
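The heart of the fission matrix method is a power iteration on the tallied matrix: the dominant eigenvalue is k-effective and the corresponding eigenvector is the assembly-wise fission source. A minimal sketch with an invented four-assembly matrix:

```python
import numpy as np

# Toy fission matrix for four loosely coupled assemblies: F[i, j] is the
# expected number of fission neutrons born in assembly i per fission neutron
# born in assembly j. In practice these entries are tallied during the Monte
# Carlo transport run; the values here are invented.
F = np.array([[0.90, 0.05, 0.00, 0.00],
              [0.05, 0.80, 0.04, 0.00],
              [0.00, 0.04, 0.80, 0.05],
              [0.00, 0.00, 0.05, 0.90]])

# Power iteration: the dominant eigenvalue is k-effective; the corresponding
# eigenvector is the discrete fission source distribution.
s = np.ones(4)
k = 1.0
for _ in range(200):
    q = F @ s
    k = q.sum() / s.sum()   # eigenvalue estimate from successive iterates
    s = q / k               # renormalize the source each cycle
print(k, s / s.sum())       # k-effective of the toy system is about 0.93
```

Because the matrix is small and explicit, the eigenpair converges in a handful of cycles, which is exactly the acceleration the method offers over slowly converging source iteration in a weakly coupled pool.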
A Proposal for a Standard Interface Between Monte Carlo Tools And One-Loop Programs
Binoth, T.; Boudjema, F.; Dissertori, G.; Lazopoulos, A.; Denner, A.; Dittmaier, S.; Frederix, R.; Greiner, N.; Hoeche, Stefan; Giele, W.; Skands, P.; Winter, J.; Gleisberg, T.; Archibald, J.; Heinrich, G.; Krauss, F.; Maitre, D.; Huber, M.; Huston, J.; Kauer, N.; Maltoni, F.
2011-11-11
Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree-level matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently undergone enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarises the discussion of the next-to-leading order multi-leg (NLM) working group on this issue, which took place during the workshop on Physics at TeV Colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs.
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.
1999-02-09
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.
Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo.
Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
2009-10-01
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
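The basic rejection-free kinetic Monte Carlo step underlying such codes selects an event with probability proportional to its rate and advances the clock by an exponentially distributed waiting time. A minimal sketch, with event names and rates invented for illustration:

```python
import math
import random

# Minimal rejection-free (Gillespie/BKL-style) kinetic Monte Carlo step.
# The event catalog and rates below are invented placeholders.
random.seed(42)

rates = {"adatom_hop": 5.0, "desorption": 0.5, "island_attach": 2.0}

def kmc_step(rates):
    """Pick one event with probability proportional to its rate and
    advance the clock by an exponentially distributed waiting time."""
    total = sum(rates.values())
    r = random.random() * total
    acc = 0.0
    for event, rate in rates.items():
        acc += rate
        if r < acc:
            chosen = event
            break
    dt = -math.log(random.random()) / total   # exponential time increment
    return chosen, dt

event, dt = kmc_step(rates)
print(event, dt)
```

A full simulator like SPPARKS repeats this step while updating the rate catalog after each event; the parallel versions partition the lattice so that independent regions can execute such steps concurrently.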
Effects of self-seeding and crystal post-selection on the quality of Monte Carlo-integrated SFX data
Barends, Thomas; White, Thomas A.; Barty, Anton; Foucar, Lutz; Messerschmidt, Marc; Alonso-Mori, Roberto; Botha, Sabine; Chapman, Henry; Doak,
A Geant4 Implementation of a Novel Single-Event Monte Carlo Method for Electron Dose Calculations (Report SAND2013-9631C)
Franke, Brian Claude; Dixon, David A.; Prinja, Anil K.
2013-11-01
The OpenMC Monte Carlo Particle Transport Code (CASL-U-2015-0247-000)
Ducru, Pablo; Walsh, Jon; Boyd, Will; Shaner, Sam; Harper, Sterling; Josey, Colin; Ellis, Matthew; Horelik, Nich; Forget, Benoit; Smith, Kord (Massachusetts Institute of Technology); Herman, Bryan (Knolls Atomic Power Laboratory); Romano, Paul (Argonne National Laboratory)
2015-07-07
Zori 1.0: A Parallel Quantum Monte Carlo Electronic Structure Package
Aspuru-Guzik, Alan; Salomon-Ferrer, Romelia; Austin, Brian; Perusquia-Flores, Raul; Griffin, Mary A.; Oliva, Ricardo A.; Skinner, David; Domin, Dominik; Lester Jr., William A.
2004-12-17
High explosive spot test analyses of samples from Operable Unit (OU) 1111
McRae, D.; Haywood, W.; Powell, J.; Harris, B.
1995-01-01
A preliminary evaluation has been completed of environmental contaminants at selected sites within the Group DX-10 (formerly Group M-7) area. Soil samples taken from specific locations at this detonator facility were analyzed for harmful metals and screened for explosives. A sanitary outflow, a burn pit, a pentaerythritol tetranitrate (PETN) production outflow field, an active firing chamber, an inactive firing chamber, and a leach field were sampled. Energy dispersive x-ray fluorescence (EDXRF) was used to obtain semi-quantitative concentrations of metals in the soil. Two field spot-test kits for explosives were used to assess the presence of energetic materials in the soil and in items found at the areas tested. PETN is the major explosive in detonators manufactured and destroyed at Los Alamos. No measurable amounts of PETN or other explosives were detected in the soil, but items taken from the burn area and a high-energy explosive (HE)/chemical sump were contaminated. The concentrations of lead, mercury, and uranium are given.
Statistical Analysis of Microarray Data with Replicated Spots: A Case Study withSynechococcusWH8102
Thomas, E. V.; Phillippy, K. H.; Brahamsha, B.; Haaland, D. M.; Timlin, J. A.; Elbourne, L. D. H.; Palenik, B.; Paulsen, I. T.
2009-01-01
Until recently, microarray experiments often involved relatively few arrays with only a single representation of each gene on each array. A complete genome microarray with multiple spots per gene (spread out spatially across the array) was developed in order to compare the gene expression of a marine cyanobacterium and a knockout mutant strain in a defined artificial seawater medium. Statistical methods were developed for analysis in the special situation of this case study, where there is gene replication within an array and relatively few arrays are used, which can be the case with current array technology. Due in part to the replication within an array, it was possible to detect very small changes in the levels of expression between the wild type and mutant strains. One interesting biological outcome of this experiment is the indication of the extent to which the phosphorus regulatory system of this cyanobacterium affects the expression of multiple genes beyond those strictly involved in phosphorus acquisition.
DIESEL TRUCK IDLING EMISSIONS - MEASUREMENTS AT A PM2.5 HOT SPOT
Parks, II, James E; Miller, Terry L.; Storey, John Morse; Fu, Joshua S.; Hromis, Boris
2007-01-01
The University of Tennessee and Oak Ridge National Laboratory conducted a 5-month air monitoring study at the Watt Road interchange on I-40 in Knoxville, Tennessee, where 20,000 heavy-duty trucks per day travel the interstate. In addition, there are 3 large truck stops at this interchange, where as many as 400 trucks idle their engines at night. As a result, high levels of PM2.5 were measured near the interchange, often exceeding National Ambient Air Quality Standards. This paper presents the results of the air monitoring study, illustrating the hourly, day-of-week, and seasonal patterns of PM2.5 resulting from diesel truck emissions on the interstate and at the truck stops. Surprisingly, the highest PM2.5 concentrations occurred at night, when the largest contribution of emissions was from idling trucks rather than trucks on the interstate. A nearby background air monitoring site was used to identify the contribution of regional PM2.5 emissions, which also contribute significantly to the concentrations measured at the site. The relative contributions of regional background, local truck idling, and interstate traffic to local PM2.5 concentrations are presented and discussed. The results indicate the potential significance of diesel truck idling emissions to the occurrence of hot spots of high PM2.5 concentrations near large truck stops, ports, or border crossings.
Green's function Monte Carlo calculation for the ground state of helium trimers
Cabral, F.; Kalos, M.H.
1981-02-01
The ground-state energy of weakly bound boson trimers interacting via Lennard-Jones (12,6) pair potentials is calculated using a Green's function Monte Carlo method. Threshold coupling constants for self-binding are obtained by extrapolation to zero binding.
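The threshold extrapolation can be sketched numerically: fit the binding energy against the coupling constant near threshold and solve for the coupling at which the binding vanishes. The (coupling, binding energy) pairs below are invented, not the paper's results.

```python
import numpy as np

# Invented near-threshold data: binding energy (arbitrary units) of a trimer
# at several coupling constants, decreasing toward zero at the threshold.
lam = np.array([1.05, 1.10, 1.15, 1.20])
e_bind = np.array([0.010, 0.032, 0.055, 0.079])

# Assume approximately linear behavior near threshold; a linear fit
# extrapolated to E = 0 estimates the threshold coupling constant.
slope, intercept = np.polyfit(lam, e_bind, 1)
lam_threshold = -intercept / slope
print(lam_threshold)
```

A production calculation would use the Monte Carlo energies with their statistical errors and possibly a higher-order fit; the linear form here is only the simplest assumption.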
Alcouffe, R.E.
1985-01-01
A difficult class of problems for the discrete-ordinates neutral-particle transport method is accurately computing the flux due to a spatially localized source. Because the transport equation is solved for discrete directions, the so-called ray effect causes the flux at space points far from the source to be inaccurate. Thus, in general, discrete ordinates would not be the method of choice for such problems; it is better suited to problems with significant scattering. The Monte Carlo method is suited to localized-source problems, particularly if the amount of collisional interaction is minimal. However, if there are many scattering collisions and the flux at all space points is desired, then the Monte Carlo method becomes expensive. To take advantage of the attributes of both approaches, we have devised a first collision source method that combines the Monte Carlo and discrete-ordinates solutions. That is, particles are tracked from the source to their first scattering collision and tallied to produce a source for the discrete-ordinates calculation. A scattered flux is then computed by discrete ordinates, and the total flux is the sum of the Monte Carlo and discrete-ordinates fluxes. In this paper, we present calculational results using the MCNP and TWODANT codes for selected two-dimensional problems that show the effectiveness of this method.
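The first-collision-source idea can be sketched in one dimension: sample each source particle's first collision site from the exponential free-flight distribution and tally it into spatial bins, which would then drive the deterministic scattered-flux solve. The geometry and cross section below are invented for the sketch.

```python
import math
import random

# Toy 1-D first-collision source: particles leave a point source at x = 0,
# fly to their first collision site sampled from the exponential free-flight
# distribution, and are tallied into spatial bins. The binned source would
# then feed a discrete-ordinates scattered-flux calculation.
random.seed(1)
sigma_t = 0.5             # total macroscopic cross section (1/cm), invented
n_bins, width = 10, 1.0   # ten 1-cm bins spanning 0-10 cm
source = [0.0] * n_bins

n_particles = 100000
for _ in range(n_particles):
    x = -math.log(random.random()) / sigma_t   # distance to first collision
    b = int(x / width)
    if b < n_bins:
        source[b] += 1.0 / n_particles         # per-particle weight

print(source[0])   # should approach 1 - exp(-sigma_t * width)
```

The bin-0 tally converges to the analytic first-collision probability 1 - exp(-0.5) ≈ 0.39, which is the kind of check one would run before handing the source to the deterministic sweep.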
K-effective of the world, and other concerns for Monte Carlo eigenvalue calculations
Brown, Forrest B
2010-01-01
Monte Carlo methods have been used to compute k{sub eff} and the fundamental mode eigenfunction of critical systems since the 1950s. Despite the sophistication of today's Monte Carlo codes for representing realistic geometry and physics interactions, correct results can be obtained in criticality problems only if users pay attention to source convergence in the Monte Carlo iterations and to running a sufficient number of neutron histories to adequately sample all significant regions of the problem. Recommended best practices for criticality calculations are reviewed and applied to several practical problems for nuclear reactors and criticality safety, including the 'K-effective of the World' problem. Numerical results illustrate the concerns about convergence and bias. The general conclusion is that with today's high-performance computers, improved understanding of the theory, new tools for diagnosing convergence (e.g., Shannon entropy of the fission distribution), and clear practical guidance for performing calculations, practitioners will have a greater degree of confidence than ever in obtaining correct results for Monte Carlo criticality calculations.
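The Shannon-entropy convergence diagnostic mentioned above is computed per power iteration from a binned fission-source distribution; a minimal sketch, with the binning assumed for illustration:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a binned fission-source distribution.

    H = -sum p_i * log2(p_i) over mesh bins.  During Monte Carlo power
    iteration, H stabilizes once the fission source has converged to the
    fundamental mode, so plotting H versus cycle number is a practical
    convergence diagnostic.
    """
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h

# A source clumped in one bin has zero entropy; a uniform source over
# 8 bins has the maximum, log2(8) = 3 bits.
print(shannon_entropy([100, 0, 0, 0]))                    # → 0.0
print(shannon_entropy([25, 25, 25, 25, 25, 25, 25, 25]))  # → 3.0
```

In practice, tallies from cycles before H plateaus are discarded as inactive (unconverged) cycles.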
MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation
Meyer, Arnd
2010-02-10
A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.
Use of single scatter electron Monte Carlo transport for medical radiation sciences
Svatos, Michelle M.
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
3D Direct Simulation Monte Carlo Code Which Solves for Geometries
Energy Science and Technology Software Center (OSTI)
1998-01-13
Pegasus is a 3D Direct Simulation Monte Carlo Code which solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.
MacGregor, P.R.
1989-01-01
The National Energy Act, in general, and Section 210 of the Public Utilities Regulatory Policies Act (PURPA) of 1978 in particular, have dramatically stimulated increasing levels of independent non-utility power generation. As these levels of independent non-utility power generation increase, the electric utility is subjected to new and significant operational and financial impacts. One important concern is the net revenue impact on the utility, which is the focus of the research discussed in this thesis and which is inextricably intertwined with the operational functions of the utility system. In general, non-utility generation, and specifically cogeneration, impacts utility revenues by affecting the structure and magnitude of the system load, the scheduling of utility generation, and the reliability of the composite system. These effects are examined by developing a comprehensive model of non-utility independent power-producing facilities, referenced as Small Power Producing Facilities, a cash-flow-based corporate model of the electric utility, a thermal-plant-based generation scheduling algorithm, and a system reliability evaluation. All of these components are integrated into an iterative closed loop solution algorithm to both assess and enhance the net revenue. In this solution algorithm, the spot pricing policy of the utility is the principal control mechanism in the process and the system reliability is the primary procedural constraint. A key issue in reducing the negative financial impact of non-utility generation is the possibility of shutting down utility generation units given sufficient magnitudes of non-utility generation in the system. A case study simulating the financial and system operations of the Georgia Power Company with representative cogeneration capacity and individual plant characteristics is analyzed in order to demonstrate the solution process.
The effects of mapping CT images to Monte Carlo materials on GEANT4 proton simulation accuracy
Barnes, Samuel; McAuley, Grant; Slater, James; Wroe, Andrew
2013-04-15
Purpose: Monte Carlo simulations of radiation therapy require conversion from Hounsfield units (HU) in CT images to an exact tissue composition and density. The number of discrete densities (or density bins) used in this mapping affects the simulation accuracy, execution time, and memory usage in GEANT4 and other Monte Carlo codes. The relationship between the number of density bins and CT noise was examined in general for all simulations that use HU conversion to density. Additionally, the effect of this on simulation accuracy was examined for proton radiation. Methods: Relative uncertainty from CT noise was compared with uncertainty from density binning to determine an upper limit on the number of density bins required in the presence of CT noise. Error propagation analysis was also performed on continuously slowing down approximation range calculations to determine the proton range uncertainty caused by density binning. These results were verified with Monte Carlo simulations. Results: In the presence of even modest CT noise (5 HU or 0.5%), 450 density bins were found to cause only a 5% increase in the density uncertainty (i.e., 95% of density uncertainty from CT noise, 5% from binning). Larger numbers of density bins are not required, as CT noise will prevent increased density accuracy; this applies across all types of Monte Carlo simulations. Examining uncertainty in proton range, only 127 density bins are required for a proton range error of <0.1 mm in most tissue and <0.5 mm in low density tissue (e.g., lung). Conclusions: By considering CT noise and actual range uncertainty, the number of required density bins can be restricted to a very modest 127 depending on the application. Reducing the number of density bins provides large memory and execution time savings in GEANT4 and other Monte Carlo packages.
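A minimal sketch of the HU-to-density binning the study examines; the linear HU-to-density ramp and HU range here are hypothetical placeholders for illustration, not the calibration used by the authors:

```python
def hu_to_density_bins(hu_values, n_bins=127, hu_min=-1000, hu_max=3000):
    """Map Hounsfield units to a small set of discrete mass densities.

    Converts HU to density with a simple (hypothetical) linear ramp,
    then quantizes the result into n_bins density levels, mimicking how
    Monte Carlo material tables are built from CT images.
    """
    def hu_to_density(hu):
        # illustrative ramp: near-vacuum (~0.001 g/cc) floor, water = 1.0 at 0 HU
        return max(0.001, 1.0 + hu / 1000.0)

    d_min, d_max = hu_to_density(hu_min), hu_to_density(hu_max)
    width = (d_max - d_min) / n_bins
    binned = []
    for hu in hu_values:
        d = hu_to_density(hu)
        i = min(int((d - d_min) / width), n_bins - 1)
        binned.append(d_min + (i + 0.5) * width)  # assign bin-center density
    return binned

# Two HU values closer together than one bin width map to the same density,
# which is harmless once CT noise exceeds the bin width.
print(hu_to_density_bins([0, 5]))
```

The paper's point is that beyond roughly this many bins, the quantization error is buried under the CT noise, so finer binning only costs memory and time.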
Howard, M; Beltran, C; Herman, M
2014-06-01
Purpose: To investigate the influence of the minimum monitor unit (MU) on the quality of clinical treatment plans for scanned proton therapy. Methods: Delivery system characteristics limit the minimum number of protons that can be delivered per spot, resulting in a min-MU limit that can impact plan quality. Two sites were used to investigate the impact of min-MU on treatment plans: a pediatric brain tumor at a depth of 5-10 cm and a head and neck tumor at a depth of 1-20 cm. Three-field intensity-modulated spot-scanning proton plans were created for each site with the following parameter variations: a min-MU limit range of 0.0000-0.0060 and a spot spacing range of 0.5-2.0σ of the nominal spot size at isocenter in water (σ=4 mm in this work). Comparisons were based on target homogeneity and normal tissue sparing. Results: Increasing the min-MU with a fixed spot spacing decreases plan quality, both in homogeneous target coverage and in the avoidance of critical structures. Both head and neck and pediatric brain plans show a 20% increase in relative dose for the hot spot in the CTV and a 10% increase in key critical structures when comparing min-MU limits of 0.0000 and 0.0060 with a fixed spot spacing of 1σ. The DVHs of CTVs show that min-MU limits of 0.0000 and 0.0010 produce similar plan quality, and quality decreases as the min-MU limit increases beyond 0.0020. As spot spacing approaches 2σ, degradation in plan quality is observed when no min-MU limit is imposed. Conclusion: Given a fixed spot spacing of ≤1σ of the spot size in water, plan quality decreases as min-MU increases beyond 0.0020. The effect of min-MU should be taken into consideration while planning spot scanning proton therapy treatments to realize its full potential.
Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics.
Seker, V.; Thomas, J. W.; Downar, T. J.; Purdue Univ.
2007-01-01
A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in the FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code DeCART. Good agreement in the k{sub eff} and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high fidelity modeling of nuclear reactor cores during the last several years. High fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport methods to CFD and their application to practical reactor analysis problems.
One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic neutron transport and CFD solutions. Previous researchers have successfully performed Monte Carlo calculations with limited thermal feedback. In fact, much of the validation of the deterministic neutron transport code DeCART was performed using the Monte Carlo code McCARD, which employs a limited thermal feedback model. However, for a broader range of temperature/fluid applications it was desirable to couple Monte Carlo to a more sophisticated temperature-fluid solution such as CFD. This paper focuses on the methods used to couple Monte Carlo to CFD and their application to a series of simple test problems.
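The coupling strategy described above alternates neutronics and thermal-fluid solutions until the exchanged fields stop changing. A toy Picard iteration in Python shows the control flow; the scalar stand-ins and feedback coefficients are invented for illustration, whereas real McSTAR iterations exchange full spatial power and temperature fields between MCNP5 and STAR-CD:

```python
def coupled_iteration(tol=1e-6, max_iter=50):
    """Toy Picard iteration mimicking a neutronics/thermal coupling loop.

    A stand-in 'neutronics' solve returns a power level that falls as
    fuel temperature rises (negative Doppler-like feedback), and a
    stand-in 'thermal' solve returns a temperature that rises with
    power.  Iterate until the temperature stops changing.
    """
    def neutronics(temperature):
        # illustrative feedback: power drops as temperature climbs
        return 100.0 / (1.0 + 0.001 * (temperature - 300.0))

    def thermal(power):
        # illustrative: coolant at 300 K plus a conductance-limited rise
        return 300.0 + 2.0 * power

    temperature = 300.0
    for i in range(max_iter):
        power = neutronics(temperature)
        t_new = thermal(power)
        if abs(t_new - temperature) < tol:
            return power, t_new, i + 1
        temperature = t_new
    return power, temperature, max_iter

power, temp, iters = coupled_iteration()
print(round(power, 3), round(temp, 3), iters)
```

The fixed point balances the two feedbacks; under-relaxation is often added in real couplings when the plain iteration oscillates or diverges.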
Bauge, E.
2015-01-15
The “Full model” evaluation process, which is used at CEA DAM DIF to evaluate nuclear data in the continuum region, makes extensive use of nuclear models implemented in the TALYS code to account for experimental data (both differential and integral) by varying the parameters of these models until a satisfactory description of the experimental data is reached. For the evaluation of the covariance data associated with the evaluated data, the Backward-Forward Monte Carlo (BFMC) method was devised in such a way that it mirrors the “Full model” evaluation process. When coupled with the Total Monte Carlo (TMC) method via the T6 system developed by NRG Petten, the BFMC method makes it possible to use integral experiments to constrain the distribution of model parameters, and hence the distribution of derived observables and their covariance matrix. Together, TALYS, TMC, BFMC, and T6 constitute a powerful integrated tool for nuclear data evaluation that produces evaluated nuclear data and the associated covariance matrix all at once, making good use of all the available experimental information to drive the distribution of the model parameters and the derived observables.
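The backward step of a BFMC-style evaluation (sample model parameters, weight each sample by its agreement with experiment, then take weighted moments to get the constrained parameter distribution and covariance) can be sketched with a toy model. The observable, priors, and measurement below are invented for illustration and have nothing to do with TALYS:

```python
import random, math

def weighted_parameter_posterior(n_samples=50_000, seed=3):
    """BFMC-flavored sketch: weight sampled parameters by data agreement.

    A toy 'model' observable is y = a + b; the synthetic 'experiment'
    says y_exp = 3.0 +/- 0.1.  Parameters a, b are drawn from broad
    priors and each sample gets weight exp(-chi2/2); weighted moments
    give the constrained parameter means and their covariance.
    """
    rng = random.Random(seed)
    y_exp, sigma = 3.0, 0.1
    samples, weights = [], []
    for _ in range(n_samples):
        a = rng.gauss(1.0, 0.5)
        b = rng.gauss(2.0, 0.5)
        chi2 = ((a + b - y_exp) / sigma) ** 2
        samples.append((a, b))
        weights.append(math.exp(-0.5 * chi2))
    w_sum = sum(weights)
    mean_a = sum(w * a for w, (a, b) in zip(weights, samples)) / w_sum
    mean_b = sum(w * b for w, (a, b) in zip(weights, samples)) / w_sum
    cov_ab = sum(w * (a - mean_a) * (b - mean_b)
                 for w, (a, b) in zip(weights, samples)) / w_sum
    return mean_a, mean_b, cov_ab

ma, mb, cab = weighted_parameter_posterior()
# The constraint a + b ≈ 3 induces a strong negative a-b correlation.
print(round(ma, 2), round(mb, 2), cab < 0)
```

The negative off-diagonal covariance is exactly the kind of parameter correlation that integral experiments impose and that then propagates into the covariance matrix of derived observables.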
Miura, Shinichi [Institute for Molecular Science, 38 Myodaiji, Okazaki 444-8585 (Japan)
2007-03-21
In this paper, we present a path integral hybrid Monte Carlo (PIHMC) method for rotating molecules in quantum fluids. This is an extension of our PIHMC for correlated Bose fluids [S. Miura and J. Tanaka, J. Chem. Phys. 120, 2160 (2004)] that handles the molecular rotation quantum mechanically. A novel technique, referred to as an effective potential of quantum rotation, is introduced to incorporate the rotational degree of freedom into the path integral molecular dynamics or hybrid Monte Carlo algorithm. For a permutation move to satisfy Bose statistics, we devise a multilevel Metropolis method combined with a configurational-bias technique for efficiently sampling the permutation and the associated atomic coordinates. We have applied the PIHMC to a helium-4 cluster doped with a carbonyl sulfide molecule. The effects of the quantum rotation on the solvation structure and energetics were examined. Translational and rotational fluctuations of the dopant in the superfluid cluster were also analyzed.
Turrell, A.E.; Sherlock, M.; Rose, S.J.
2015-10-15
Large-angle Coulomb collisions allow for the exchange of a significant proportion of the energy of a particle in a single collision, but are not included in models of plasmas based on fluids, the Vlasov–Fokker–Planck equation, or currently available plasma Monte Carlo techniques. Their unique effects include the creation of fast ‘knock-on’ ions, which may be more likely to undergo certain reactions, and distortions to ion distribution functions relative to what is predicted by small-angle collision only theories. We present a computational method which uses Monte Carlo techniques to include the effects of large-angle Coulomb collisions in plasmas and which self-consistently evolves distribution functions according to the creation of knock-on ions of any generation. The method is used to demonstrate ion distribution function distortions in an inertial confinement fusion (ICF) relevant scenario of the slowing of fusion products.
Numerical thermalization in particle-in-cell simulations with Monte-Carlo collisions
Lai, P. Y.; Lin, T. Y.; Lin-Liu, Y. R.; Chen, S. H.
2014-12-15
Numerical thermalization in collisional one-dimensional (1D) electrostatic (ES) particle-in-cell (PIC) simulations was investigated. Two collision models, the pitch-angle scattering of electrons by the stationary ion background and large-angle collisions between the electrons and the neutral background, were included in the PIC simulation using Monte-Carlo methods. The numerical results show that the thermalization times in both models were considerably reduced by the additional Monte-Carlo collisions, as demonstrated by comparisons with Turner's previous simulation results based on a head-on collision model [M. M. Turner, Phys. Plasmas 13, 033506 (2006)]. However, the breakdown of Dawson's scaling law in the collisional 1D ES PIC simulation is more complicated than that observed by Turner, and a revised scaling law for the numerical thermalization time in terms of the numerical parameters is derived on the basis of the simulation results obtained in this study.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
Willert, Jeffrey; Park, H.
2014-11-01
In this article we explore the possibility of replacing Standard Monte Carlo (SMC) transport sweeps within a Moment-Based Accelerated Thermal Radiative Transfer (TRT) algorithm with a Residual Monte Carlo (RMC) formulation. Previous Moment-Based Accelerated TRT implementations have encountered trouble when stochastic noise from SMC transport sweeps accumulates over several iterations and pollutes the low-order system. With RMC we hope to significantly lower the build-up of statistical error at a much lower cost. First, we display encouraging results for a zero-dimensional test problem. Then, we demonstrate that we can achieve a lower degree of error in two one-dimensional test problems by employing an RMC transport sweep with multiple orders of magnitude fewer particles per sweep. We find that by reformulating the high-order problem, we can compute more accurate solutions at a fraction of the cost.
Tringe, J. W.; Ileri, N.; Levie, H. W.; Stroeve, P.; Ustach, V.; Faller, R.; Renaud, P.
2015-08-01
We use Molecular Dynamics and Monte Carlo simulations to examine molecular transport phenomena in nanochannels, explaining a four-order-of-magnitude difference in wheat germ agglutinin (WGA) protein diffusion rates observed by fluorescence correlation spectroscopy (FCS) and by direct imaging of fluorescently-labeled proteins. We first use the ESPResSo Molecular Dynamics code to estimate the surface transport distance for neutral and charged proteins. We then employ a Monte Carlo model to calculate the paths of protein molecules on surfaces and in the bulk liquid transport medium. Our results show that the transport characteristics depend strongly on the degree of molecular surface coverage. Atomic force microscope characterization of surfaces exposed to WGA proteins for 1000 s shows large protein aggregates consistent with the predicted coverage. These calculations and experiments provide useful insight into the details of molecular motion in confined geometries.
In the OSTI Collections: Monte Carlo Methods | OSTI, US Dept of Energy,
Office of Scientific and Technical Information (OSTI)
Office of Scientific and Technical Information Monte Carlo Methods "The first thoughts and attempts I made ... were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than
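Ulam's insight, estimating a probability by playing many randomized games and counting successes, is easy to reproduce for a simpler card question. The sketch below estimates the chance that a shuffled deck leaves no card in its original position (a derangement, whose exact probability tends to 1/e); this is a stand-in for the harder Canfield solitaire odds, not a solitaire simulator:

```python
import random, math

def estimate_derangement_prob(n_cards=52, n_trials=100_000, seed=2):
    """Estimate by simulation the chance that a shuffled deck leaves no
    card in its original position.

    The exact answer approaches 1/e ~ 0.3679; combinatorics gives it in
    closed form, but the Monte Carlo estimate needs only random
    shuffles and a counter, which is Ulam's point.
    """
    rng = random.Random(seed)
    hits = 0
    deck = list(range(n_cards))
    for _ in range(n_trials):
        rng.shuffle(deck)
        if all(card != pos for pos, card in enumerate(deck)):
            hits += 1
    return hits / n_trials

p = estimate_derangement_prob()
print(p)
```

With 100,000 trials the statistical error is around 0.0015, comfortably resolving the 1/e limit.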
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
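The weight-window maps that CADIS and FW-CADIS generate come down to a simple per-particle rule during tracking: split particles whose weight is above the window, roulette those below it. A minimal sketch of that rule, with window bounds chosen arbitrarily for illustration:

```python
import random

_rng = random.Random(11)  # fixed seed for reproducibility

def apply_weight_window(weight, w_low=0.25, w_high=1.0):
    """Apply a weight-window check to one particle.

    Returns a list of surviving particle weights (possibly empty):
    particles above the window are split into near-equal copies, and
    particles below it play Russian roulette with survival weight at
    the window center.  Both operations preserve weight on average.
    """
    w_survive = 0.5 * (w_low + w_high)
    if weight > w_high:
        # split into n copies of roughly equal weight
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        # roulette: survive with probability weight / w_survive
        if _rng.random() < weight / w_survive:
            return [w_survive]
        return []
    return [weight]

print(apply_weight_window(3.0))   # → [0.75, 0.75, 0.75, 0.75]
print(apply_weight_window(0.5))   # → [0.5]
```

The memory burden the paper addresses comes from storing such (w_low, w_high) pairs for every space-energy cell of a fine mesh, which is why decoupling the weight-window mesh from the deterministic mesh pays off.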
Application of Diffusion Monte Carlo to Materials Dominated by van der Waals Interactions
Benali, Anouar; Shulenburger, Luke; Romero, Nichols A.; Kim, Jeongnim; von Lilienfeld, O. Anatole
2014-06-12
Van der Waals forces are notoriously difficult to account for from first principles. We perform extensive calculations to assess the usefulness and validity of diffusion quantum Monte Carlo when applied to van der Waals forces. We present results for noble gas solids and clusters - archetypical van der Waals dominated assemblies - as well as a relevant pi-pi stacking supramolecular complex: DNA with the intercalating anti-cancer drug Ellipticine.
Fully Differential Monte-Carlo Generator Dedicated to TMDs and Bessel-Weighted Asymmetries
Aghasyan, Mher M.; Avakian, Harut A.
2013-10-01
We present studies of double longitudinal spin asymmetries in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator, which includes quark intrinsic transverse momentum within the generalized parton model based on the fully differential cross section for the process. Additionally, we apply Bessel-weighting to the simulated events to extract transverse momentum dependent parton distribution functions and also discuss possible uncertainties due to kinematic correlation effects.
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela (School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030)
2014-02-01
We present a CPU-GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as a container for simulation data stored on the graphics card and as a floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To take full advantage of this mechanism, efficient GPU realizations of the algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables, for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU-GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed-phase system of oligopyrrole chains. A benchmark shows a size-scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect several CPU-GPU duets in parallel. Highlights: We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU-GPU duet. The Adaptive Tempering Monte Carlo method employs MMC and profits from this CPU-GPU implementation. Our benchmark shows a size-scaling speedup of 62 for systems with 225,000 particles. The testbed involves a polymeric system of oligopyrroles in the condensed phase. The CPU-GPU parallelization includes dipole-dipole and Mie-Jones classic potentials.
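The accept/reject kernel at the heart of an MMC engine can be shown with a single serial chain. This is a minimal sketch with a one-particle harmonic 'energy', not the oligopyrrole force field or the GPU code from the paper; the GPU engine parallelizes essentially this kernel over many molecules:

```python
import random, math

def metropolis_chain(n_steps=200_000, step=1.0, seed=5):
    """Minimal Metropolis Monte Carlo: sample x ~ exp(-E(x)) with
    E(x) = x**2 / 2 (a single particle with a harmonic energy, kT = 1).

    Propose a uniform displacement; accept with probability
    min(1, exp(-dE)).  Running averages estimate the mean and variance
    of the stationary distribution.
    """
    rng = random.Random(seed)
    x, accepted = 0.0, 0
    sum_x = sum_x2 = 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        d_e = 0.5 * (x_new**2 - x**2)
        if d_e <= 0 or rng.random() < math.exp(-d_e):
            x, accepted = x_new, accepted + 1
        sum_x += x
        sum_x2 += x * x
    mean = sum_x / n_steps
    var = sum_x2 / n_steps - mean**2
    return mean, var, accepted / n_steps

mean, var, acc = metropolis_chain()
# The stationary distribution is the standard normal: mean ~ 0, var ~ 1.
print(round(mean, 2), round(var, 2))
```

Note that rejected moves still contribute the current state to the averages; dropping them would bias the estimates.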
Particle-In-Cell/Monte Carlo Simulation of Ion Back Bombardment in Photoinjectors
Qiang, Ji; Corlett, John; Staples, John
2009-03-02
In this paper, we report on studies of ion back bombardment in high-average-current dc and rf photoinjectors using a particle-in-cell/Monte Carlo method. Using the H{sub 2} ion as an example, we observed that the ion density and energy deposition on the photocathode in rf guns are an order of magnitude lower than those in a dc gun. A higher rf frequency helps mitigate ion back bombardment of the cathode in rf guns.
Fullrmc, A Rigid Body Reverse Monte Carlo Modeling Package Enabled With
Machine Learning And Artificial Intelligence - Joint Center for Energy Storage Research, January 22, 2016, Research Highlights. Fullrmc is a rigid-body reverse Monte Carlo modeling package enabled with machine learning and artificial intelligence. Liquid sulfur: Sx≤8 molecules recognized and built upon modelling. Scientific Achievement: a novel approach to reverse modelling of atomic and molecular systems from a set of experimental data and constraints, with new fitting concepts such as 'Group',
Pérez-Andújar, Angélica (Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030, United States); Zhang, Rui; Newhauser, Wayne (Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and The University of Texas Graduate School of Biomedical Sciences at Houston, 6767 Bertner Avenue, Houston, Texas 77030, United States)
2013-12-15
Purpose: Stray neutron radiation is of concern after radiation therapy, especially in children, because of the high risk it might carry for secondary cancers. Several previous studies predicted the stray neutron exposure from proton therapy, mostly using Monte Carlo simulations. Promising attempts to develop analytical models have also been reported, but these were limited to only a few proton beam energies. The purpose of this study was to develop an analytical model to predict leakage neutron equivalent dose from passively scattered proton beams in the 100-250 MeV interval. Methods: To develop and validate the analytical model, the authors used values of equivalent dose per therapeutic absorbed dose (H/D) predicted with Monte Carlo simulations. The authors also characterized the behavior of the mean neutron radiation-weighting factor, w{sub R}, as a function of depth in a water phantom and distance from the beam central axis. Results: The simulated and analytical predictions agreed well. On average, the percentage difference between the analytical model and the Monte Carlo simulations was 10% for the energies and positions studied. The authors found that w{sub R} was highest at the shallowest depth and decreased with depth until around 10 cm, where it started to increase slowly with depth. This was consistent among all energies. Conclusion: Simple analytical methods are promising alternatives to complex and slow Monte Carlo simulations to predict H/D values. The authors' results also provide improved understanding of the behavior of w{sub R}, which strongly depends on depth but is nearly independent of lateral distance from the beam central axis.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
Miles, Michael; Karki, U.; Hovanski, Yuri
2014-10-01
Friction-stir spot welding (FSSW) has been shown to be capable of joining advanced high-strength steel, with its flexibility in controlling the heat of welding and the resulting microstructure of the joint. This makes FSSW a potential alternative to resistance spot welding if tool life is sufficiently high, and if machine spindle loads are sufficiently low that the process can be implemented on an industrial robot. Robots for spot welding can typically sustain vertical loads of about 8 kN, but FSSW at tool speeds of less than 3000 rpm causes loads that are too high, in the range of 11-14 kN. Therefore, in the current work, tool speeds of 5000 rpm were employed to generate heat more quickly and to reduce welding loads to acceptable levels. Si3N4 tools were used for the welding experiments on 1.2-mm DP 980 steel. The FSSW process was modeled with a finite element approach using the Forge* software. An updated Lagrangian scheme with explicit time integration was employed to predict the flow of the sheet material, subjected to boundary conditions of a rotating tool and a fixed backing plate. Material flow was calculated from a velocity field that is two-dimensional, but heat generated by friction was computed by a novel approach, in which the rotational velocity component imparted to the sheet by the tool surface was included in the thermal boundary conditions. An isotropic, viscoplastic Norton-Hoff law was used to compute the material flow stress as a function of strain, strain rate, and temperature. The model predicted welding temperatures to within percent, and the position of the joint interface to within 10 percent, of the experimental results.
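The Norton-Hoff viscoplastic law mentioned above can be sketched in code. The functional form below (power-law strain hardening and strain-rate sensitivity with an Arrhenius-type thermal term) and all parameter names are illustrative assumptions; the paper's actual constitutive constants for DP 980 are not reproduced here.

```python
import math

def norton_hoff_flow_stress(K, m, n, beta, strain, strain_rate, T):
    """Illustrative Norton-Hoff-type flow stress:
    sigma = K * strain**n * strain_rate**m * exp(beta / T).
    K, m, n, beta are hypothetical material constants; T is absolute
    temperature. This is a sketch of the functional dependence on
    strain, strain rate, and temperature, not the authors' model."""
    return K * strain ** n * strain_rate ** m * math.exp(beta / T)
```

With n = m = 1 and beta = 0 the stress reduces to K * strain * strain_rate, which makes the rate-sensitivity role of m easy to check.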
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Kellogg, Christina A.; Piceno, Yvette M.; Tom, Lauren M.; DeSantis, Todd Z.; Gray, Michael A.; Andersen, Gary L.; Mormile, Melanie R.
2014-10-07
Coral disease is one of the major causes of reef degradation. Dark Spot Syndrome (DSS) was described in the early 1990s as brown or purple amorphous areas of tissue on a coral and has since become one of the most prevalent diseases reported on Caribbean reefs. It has been identified in a number of coral species, but there is debate as to whether it is in fact the same disease in different corals. Further, it is questioned whether these macroscopic signs are in fact diagnostic of an infectious disease at all. The most commonly affected species in the Caribbean is the massive starlet coral Siderastrea siderea. We sampled this species in two locations, Dry Tortugas National Park and Virgin Islands National Park. Tissue biopsies were collected from both healthy colonies and those with dark spot lesions. Microbial-community DNA was extracted from coral samples (mucus, tissue, and skeleton), amplified using bacterial-specific primers, and applied to PhyloChip G3 microarrays to examine the bacterial diversity associated with this coral. Samples were also screened for the presence of a fungal ribotype that has recently been implicated as a causative agent of DSS in another coral species, but the amplifications were unsuccessful. S. siderea samples did not cluster consistently based on health state (i.e., normal versus dark spot). Various bacteria, including Cyanobacteria and Vibrios, were observed to have increased relative abundance in the discolored tissue, but the patterns were not consistent across all DSS samples. Overall, our findings do not support the hypothesis that DSS in S. siderea is linked to a bacterial pathogen or pathogens. This dataset provides the most comprehensive overview to date of the bacterial community associated with the scleractinian coral S. siderea.
Hart, S. W. D.; Maldonado, G. Ivan; Celik, Cihangir; Leal, Luiz C
2014-01-01
For many Monte Carlo codes, cross sections are generated only at a set of predetermined temperatures. This causes increasing error as one moves further and further away from these temperatures in the Monte Carlo model. This paper discusses recent progress in the SCALE Monte Carlo module KENO toward creating problem-dependent, Doppler-broadened cross sections. Currently, only broadening of the 1D cross sections and probability tables is addressed. The approach uses a finite difference method to calculate the temperature-dependent cross sections for the 1D data, and a simple linear-logarithmic interpolation in the square root of temperature for the probability tables. Work is also ongoing to address broadening of the S(α,β) tables. With the current approach the temperature-dependent cross sections are Doppler broadened before transport starts, and, for all but a few isotopes, the impact on cross section loading is negligible. Results can be compared with those obtained by using multigroup libraries, as KENO currently interpolates on the multigroup cross sections to determine temperature-dependent cross sections. Current results compare favorably with these expected results.
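The probability-table interpolation described above might look like the sketch below. Reading "linear-logarithmic interpolation in the square root of temperature" as log-linear in cross section versus sqrt(T) is an assumption for illustration, not the KENO implementation; the function name and arguments are hypothetical.

```python
import math

def interp_prob_table(sigma_lo, T_lo, sigma_hi, T_hi, T):
    """Interpolate a probability-table cross section between two library
    temperatures T_lo and T_hi (kelvin): linear in sqrt(T), logarithmic
    in the cross section. An assumed reading of the scheme in the text."""
    s_lo, s_hi, s = math.sqrt(T_lo), math.sqrt(T_hi), math.sqrt(T)
    f = (s - s_lo) / (s_hi - s_lo)  # fractional position on the sqrt(T) axis
    return math.exp((1.0 - f) * math.log(sigma_lo) + f * math.log(sigma_hi))
```

At the library temperatures the interpolant reproduces the tabulated values exactly, and at the sqrt(T) midpoint it returns the geometric mean of the two cross sections.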
Nonequilibrium candidate Monte Carlo: A new tool for efficient equilibrium simulation
Nilmeier, Jerome P.; Crooks, Gavin E.; Minh, David D. L.; Chodera, John D.
2011-11-08
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
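The work-based acceptance rule described above can be illustrated with a minimal sketch. The min(1, exp(-beta*W)) form shown here is the standard Metropolis-like criterion applied to protocol work for a symmetric protocol; it is an illustration of the idea, not the authors' exact implementation.

```python
import math

def ncmc_accept(work, beta, u):
    """Accept a nonequilibrium candidate move with probability
    min(1, exp(-beta * work)), where `work` is the protocol work
    accumulated while driving the system and `u` is a uniform random
    number in [0, 1). Replaces the instantaneous energy difference of
    plain Metropolis with the nonequilibrium work."""
    if work <= 0.0:
        return True  # moves with non-positive work are always accepted
    return u < math.exp(-beta * work)
```

Note that deterministic tests are possible by supplying `u` explicitly instead of drawing it inside the function.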
Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando; Hernandez-Ortiz, Juan P.; de Pablo, Juan J.
2015-07-27
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
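A toy version of the strategy described above: a 1-D scalar order parameter on a grid, finite-difference gradients, and stochastic sampling that keeps energy-lowering single-site moves. The double-well functional is a stand-in for the Landau-de Gennes free energy; all names and parameters here are illustrative, not the authors' code.

```python
import random

def fd_energy(phi, h=1.0):
    """Discrete Landau-type free energy: a double-well bulk term plus a
    finite-difference gradient penalty (1-D stand-in for the
    Landau-de Gennes functional)."""
    bulk = sum((p * p - 1.0) ** 2 for p in phi)
    grad = sum(((phi[i + 1] - phi[i]) / h) ** 2 for i in range(len(phi) - 1))
    return bulk + grad

def mc_minimize(phi, steps=20000, delta=0.1, seed=1):
    """Greedy stochastic sampling: propose random single-site moves and
    keep those that lower the discretized free energy. Started from a
    random configuration, no educated initial guess is needed."""
    rng = random.Random(seed)
    phi = list(phi)
    e = fd_energy(phi)
    for _ in range(steps):
        i = rng.randrange(len(phi))
        old = phi[i]
        phi[i] += rng.uniform(-delta, delta)
        e_new = fd_energy(phi)
        if e_new < e:
            e = e_new          # keep the move
        else:
            phi[i] = old       # reject the move
    return phi, e
```

A full Metropolis scheme with a temperature schedule would sample metastable morphologies rather than only descending, but the descent version keeps the sketch short.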
NONE
1998-01-01
The Bear Creek Valley Floodplain Hot Spot Removal Action Project Plan, Oak Ridge Y-12 Plant, Oak Ridge, Tennessee (Y/ER-301) was prepared (1) to safely, cost-effectively, and efficiently evaluate the environmental impact of solid material in the two debris areas in the context of industrial land uses (as defined in the Bear Creek Valley Feasibility Study) to support the Engineering Evaluation/Cost Assessment and (2) to evaluate, define, and implement the actions to mitigate these impacts. This work was performed under Work Breakdown Structure 1.x.01.20.01.08.
Grosshans, David R.; Zhu, X. Ronald; Melancon, Adam; Allen, Pamela K.; Poenisch, Falk; Palmer, Matthew; McAleer, Mary Frances; McGovern, Susan L.; Gillin, Michael; DeMonte, Franco; Chang, Eric L.; Brown, Paul D.; Mahajan, Anita
2014-11-01
Purpose: To describe treatment planning techniques and early clinical outcomes in patients treated with spot scanning proton therapy for chordoma or chondrosarcoma of the skull base. Methods and Materials: From June 2010 through August 2011, 15 patients were treated with spot scanning proton therapy for chordoma (n=10) or chondrosarcoma (n=5) at a single institution. Toxicity was prospectively evaluated and scored weekly and at all follow-up visits according to Common Terminology Criteria for Adverse Events, version 3.0. Treatment planning techniques and dosimetric data were recorded and compared with those of passive scattering plans created with clinically applicable dose constraints. Results: Ten patients were treated with single-field-optimized scanning beam plans and 5 with multifield-optimized intensity modulated proton therapy. All but 2 patients received a simultaneous integrated boost as well. The mean prescribed radiation doses were 69.8 Gy (relative biological effectiveness [RBE]; range, 68-70 Gy [RBE]) for chordoma and 68.4 Gy (RBE) (range, 66-70) for chondrosarcoma. In comparison with passive scattering plans, spot scanning plans demonstrated improved high-dose conformality and sparing of temporal lobes and brainstem. Clinically, the most common acute toxicities included fatigue (grade 2 for 2 patients, grade 1 for 8 patients) and nausea (grade 2 for 2 patients, grade 1 for 6 patients). No toxicities of grades 3 to 5 were recorded. At a median follow-up time of 27 months (range, 13-42 months), 1 patient had experienced local recurrence and a second developed distant metastatic disease. Two patients had magnetic resonance imaging-documented temporal lobe changes, and a third patient developed facial numbness. No other subacute or late effects were recorded. Conclusions: In comparison to passive scattering, treatment plans for spot scanning proton therapy displayed improved high-dose conformality. 
Clinically, the treatment was well tolerated, and with short-term follow-up, disease control rates and toxicity profiles were favorable.
Lillaney, Prasheel; Shin, Mihye; Conolly, Steven M.; Fahrig, Rebecca
2012-09-15
Purpose: Combining x-ray fluoroscopy and MR imaging systems for guidance of interventional procedures has become more commonplace. By designing an x-ray tube that is immune to the magnetic fields outside of the MR bore, the two systems can be placed in close proximity to each other. A major obstacle to robust x-ray tube design is correcting for the effects of the magnetic fields on the x-ray tube focal spot. A potential solution is to design active shielding that locally cancels the magnetic fields near the focal spot. Methods: An iterative optimization algorithm is implemented to design resistive active shielding coils that will be placed outside the x-ray tube insert. The optimization procedure attempts to minimize the power consumption of the shielding coils while satisfying magnetic field homogeneity constraints. The algorithm is composed of a linear programming step and a nonlinear programming step that are interleaved with each other. The coil results are verified using a finite element space charge simulation of the electron beam inside the x-ray tube. To alleviate heating concerns, an optimized coil solution is derived that includes a neodymium permanent magnet. Any demagnetization of the permanent magnet is calculated prior to solving for the optimized coils. The temperature dynamics of the coil solutions are calculated using a lumped parameter model, which is used to estimate operation times of the coils before temperature failure. Results: For a magnetic field strength of 88 mT, the algorithm produces coils that require a current density of 588 A/cm². This specific coil geometry can operate for 15 min continuously before reaching temperature failure. By including a neodymium magnet in the design, the current density drops to 337 A/cm², which increases the operation time to 59 min.
Space charge simulations verify that the coil designs are effective, but for oblique x-ray tube geometries there is still distortion of the focal spot shape along with deflections of approximately 3 mm in the radial and circumferential directions on the anode. Conclusions: Active shielding is an attractive solution for correcting the effects of magnetic fields on the x-ray focal spot. If extremely long fluoroscopic exposure times are required, longer operation times can be achieved by including a permanent magnet with the active shielding design.
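The lumped-parameter temperature model used above to estimate operation times can be sketched as a single thermal node with Joule heating and convective loss; the forward-Euler integration and all parameter names below are illustrative assumptions, not the authors' model.

```python
def time_to_failure(P, m_c, hA, T0, T_fail, dt=0.1, t_max=1e5):
    """Integrate the lumped-parameter model
        dT/dt = (P - hA * (T - T0)) / (m*c)
    with forward Euler, returning the time (s) at which the coil
    temperature T reaches T_fail, or None if the steady-state
    temperature T0 + P/hA stays below the failure limit.
    P: heating power (W); m_c: thermal mass m*c (J/K);
    hA: loss coefficient (W/K); T0: ambient temperature."""
    if T0 + P / hA <= T_fail:
        return None  # coil equilibrates below the failure temperature
    T, t = T0, 0.0
    while T < T_fail and t < t_max:
        T += dt * (P - hA * (T - T0)) / m_c
        t += dt
    return t
```

The closed-form solution T(t) = T0 + (P/hA)(1 - exp(-hA*t/(m*c))) provides a check on the numerical result.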
Signal processing Model/Method for Recovering Acoustic Reflectivity of Spot Weld
Energy Science and Technology Software Center (OSTI)
2005-09-08
Until recently, U.S. auto manufacturers have inspected the veracity of welds in the auto bodies they build by using destructive tear-down, which typically results in more than $1 M of scrappage per plant per year. Much of this expense could possibly be avoided with a nondestructive technique (and 100% instead of 1% inspection could be achieved). Recent advances in ultrasound probes promise to provide a sufficiently accurate non-destructive evaluation technique, but the necessary signal processing has not yet been developed. This disclosure describes a signal processing model and method useful for diagnosing the veracity of spot welds between two sheets of the same thickness from ultrasound signals. Standard systems theory describes a signal as a convolution of a transducer function, h(t), and an impulse train (beta(t), tau(t)) [1] (see Eq. (1) attached). With a Gaussian wavelet as a transducer function, this model describes the signal from an ultrasound probe quite well, and the literature provides many methods for "deconvolution," for recovery of the impulse train from the signal [see e.g., 2-3]. What is novel about the technique disclosed is the model that describes the impulse train as a function of reflectivity, the share of energy incident on the interface that is reflected, and that allows the recovery of its estimated value. The reflectivity estimate provides an ideal indicator of weld veracity, compressing each signal into a single value between 0 and 1, which can then be displayed as a 2D greyscale or colormap of the weld. The model describing the system is attached as Eqs. (2). These equations account for the energy in the probe-side and opposite sheets. In each period, this energy is a sum of that reflected from the same sheet plus that transmitted from the opposite (dampened by material attenuation at rate a).
This model is consistent with physical first principles (in particular the First and Second Laws of Thermodynamics) and has been verified empirically. For fast estimation of R using only observations beta(1, ..., T), a receiver state equation has been derived, and is attached as Eq. (3). This equation has the further advantage that the initial impulse S need not be known; rather, it is estimated simultaneously. This is necessary because element failure and coupling can cause large variations in S. Constrained nonlinear least squares techniques can be applied to this equation to recover reflectivity (and initial impulse) [4]. In particular, the Gauss-Newton algorithm on the log of the sum of squared errors based on the receiver state equation is recommended. To summarize, it is the model described in Eqs. (2) and (3) that is novel, and that enables the recovery of acoustic reflectivity from the ultrasound signals. It has been verified that this reflectivity estimate provides a better indicator of weld veracity than other features previously derived from such signals.
Radiation doses in cone-beam breast computed tomography: A Monte Carlo simulation study
Yi Ying; Lai, Chao-Jen; Han Tao; Zhong Yuncheng; Shen Youtao; Liu Xinming; Ge Shuaiping; You Zhicheng; Wang Tianpeng; Shaw, Chris C.
2011-02-15
Purpose: In this article, we describe a method to estimate the spatial dose variation, average dose and mean glandular dose (MGD) for a real breast using Monte Carlo simulation based on cone beam breast computed tomography (CBBCT) images. We present and discuss the dose estimation results for 19 mastectomy breast specimens, 4 homogeneous breast models, 6 ellipsoidal phantoms, and 6 cylindrical phantoms. Methods: To validate the Monte Carlo method for dose estimation in CBBCT, we compared the Monte Carlo dose estimates with the thermoluminescent dosimeter measurements at various radial positions in two polycarbonate cylinders (11- and 15-cm in diameter). Cone-beam computed tomography (CBCT) images of 19 mastectomy breast specimens, obtained with a bench-top experimental scanner, were segmented and used to construct 19 structured breast models. Monte Carlo simulation of CBBCT with these models was performed and used to estimate the point doses, average doses, and mean glandular doses for unit open air exposure at the iso-center. Mass based glandularity values were computed and used to investigate their effects on the average doses as well as the mean glandular doses. Average doses for 4 homogeneous breast models were estimated and compared to those of the corresponding structured breast models to investigate the effect of tissue structures. Average doses for ellipsoidal and cylindrical digital phantoms of identical diameter and height were also estimated for various glandularity values and compared with those for the structured breast models. Results: The absorbed dose maps for structured breast models show that doses in the glandular tissue were higher than those in the nearby adipose tissue. Estimated average doses for the homogeneous breast models were almost identical to those for the structured breast models (p=1). 
Normalized average doses estimated for the ellipsoidal phantoms were similar to those for the structured breast models (root mean square (rms) percentage difference=1.7%; p=0.01), whereas those for the cylindrical phantoms were significantly lower (rms percentage difference=7.7%; p<0.01). Normalized MGDs were found to decrease with increasing glandularity. Conclusions: Our results indicate that it is sufficient to use homogeneous breast models derived from CBCT generated structured breast models to estimate the average dose. This investigation also shows that ellipsoidal digital phantoms of similar dimensions (diameter and height) and glandularity to actual breasts may be used to represent a real breast to estimate the average breast dose with Monte Carlo simulation. We have also successfully demonstrated the use of structured breast models to estimate the true MGDs and shown that the normalized MGDs decreased with the glandularity as previously reported by other researchers for CBBCT or mammography.
TH-C-18A-10: The Influence of Tube Current on X-Ray Focal Spot Size for 70 kV CT Imaging
Duan, X; Grimes, J; Yu, L; Leng, S; McCollough, C
2014-06-15
Purpose: Focal spot blooming is an increase in the focal spot size at increased tube current and/or decreased tube potential. In this work, we evaluated the influence of tube current on the focal spot size at low kV for two CT systems, one of which used a tube designed to reduce blooming effects. Methods: A slit camera (10 micron slit) was used to measure focal spot size on two CT scanners from the same manufacturer (Siemens Somatom Force and Definition Flash) at 70 kV and low, medium and maximum tube currents, according to the capabilities of each system (Force: 100, 800 and 1300 mA; Flash: 100, 200 and 500 mA). Exposures were made with a stationary tube in service mode using a raised stand without table movement or flying focal spot technique. Focal spot size, nominally 0.8 and 1.2 mm, respectively, was measured parallel and perpendicular to the anode-cathode axis by calculating the full-width-at-half-maximum of the slit profile recorded using computed radiography plates. Results: Focal spot sizes perpendicular to the anode-cathode axis increased at the maximum mA by 5.7% on the Force and 39.1% on the Flash relative to that at the minimum mA, even though the mA was increased 13-fold on the Force and only 5-fold on the Flash. Focal spot size increased parallel to the anode-cathode axis by 70.4% on the Force and 40.9% on the Flash. Conclusion: For CT protocols using low kV, high mA is typically required. These protocols are relevant in children and smaller adults, and for dual-energy scanning. Technical measures to limit focal spot blooming are important in these settings to avoid reduced spatial resolution. The x-ray tube on a recently-introduced scanner appears to greatly reduce blooming effects, even at very high mA values. CHM has research support from Siemens Healthcare.
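The full-width-at-half-maximum computation used on the slit profiles above reduces to finding the half-maximum crossings of a sampled curve; the linear interpolation at the crossings below is a common convention, assumed here for illustration.

```python
def fwhm(xs, ys):
    """Full width at half maximum of a sampled profile, with linear
    interpolation at the two half-maximum crossings. Assumes a
    single-peaked profile that rises above and falls below half max."""
    half = max(ys) / 2.0

    def crossing(i):
        # linear interpolation of the crossing between samples i and i+1
        return xs[i] + (half - ys[i]) * (xs[i + 1] - xs[i]) / (ys[i + 1] - ys[i])

    left = next(crossing(i) for i in range(len(ys) - 1)
                if ys[i] < half <= ys[i + 1])
    right = next(crossing(i) for i in range(len(ys) - 1)
                 if ys[i] >= half > ys[i + 1])
    return right - left
```

For a symmetric triangular profile the result can be verified by hand, which makes the sketch easy to sanity-check.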
Sun, Xin; Stephens, Elizabeth V.; Khaleel, Mohammad A.
2007-03-01
This paper examines the effects of fusion zone size on failure modes, static strength, and energy absorption of resistance spot welds (RSW) of advanced high strength steels (AHSS). DP800 and TRIP800 spot welds are considered. The main failure modes for spot welds are nugget pullout and interfacial fracture. Partial interfacial fracture is also observed. The critical fusion zone sizes to ensure the nugget pullout failure mode are developed for both DP800 and TRIP800 using the limit load based analytical model and micro-hardness measurements of the weld cross sections. Static weld strength tests using cross-tension samples were performed on the joint populations with controlled fusion zone sizes. The resulting peak load and energy absorption levels associated with each failure mode were studied using statistical data analysis tools. The results in this study show that the conventional weld size of 4√t (where t is the sheet thickness) cannot produce the nugget pullout mode for either the DP800 or TRIP800 material. The results also suggest that performance-based spot weld acceptance criteria should be developed for different AHSS spot welds.
Barrera, C A; Moran, M J
2007-08-21
The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are: EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. 
The simulations show that all LOS configurations have a resolution of 7 microns or better. The 28 m LOS with a 7 x 7 array of 100-micron mini-penumbral apertures or 50-micron square pinholes meets the design requirements and is a very good design alternative.
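The frequency-space modified-regularization step in the image processing above might resemble Tikhonov-regularized deconvolution. The exact regularizer used by the authors is not specified, so the sketch below is a generic assumption, using a naive O(N²) DFT for brevity rather than an FFT library.

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform (fine for small demos).
    The inverse transform carries the 1/N normalization."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def regularized_deconvolve(blurred, psf, eps=1e-3):
    """Frequency-space deconvolution with Tikhonov regularization:
    X = B * conj(H) / (|H|^2 + eps). The eps term damps
    noise-dominated frequencies instead of dividing by near-zero |H|,
    playing the role of the low-pass/regularization step in the text."""
    B, H = dft(blurred), dft(psf)
    X = [b * h.conjugate() / (abs(h) ** 2 + eps) for b, h in zip(B, H)]
    return [v.real for v in dft(X, inverse=True)]
```

With a delta-function point spread function, the recovered signal equals the input scaled by 1/(1 + eps), which exposes the (small) bias the regularizer introduces.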
A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System
C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler
1998-10-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.
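The coupling fold described above reduces numerically to an inner product of the coupling-surface fluence with the adjoint dose importance over surface elements and energy groups; a minimal sketch (array layout assumed, not MASH's actual data structures):

```python
def fold_dose(fluence, importance):
    """MASH-style coupling fold: detector dose response = sum over
    surface elements (rows) and energy groups (columns) of
    fluence * adjoint dose importance. Both arguments are lists of
    lists with matching shapes."""
    return sum(f * w
               for row_f, row_w in zip(fluence, importance)
               for f, w in zip(row_f, row_w))
```

Repeating the fold for rotated fluence arrays gives the dose response as a function of vehicle orientation without rerunning the Monte Carlo calculation, which is the efficiency argument made in the abstract.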
Study of DCX reaction on medium nuclei with Monte-Carlo Shell Model
Wu, H. C.; Gibbs, W. R.
2010-08-04
In this work a method is introduced to calculate the DCX reaction in the framework of the Monte-Carlo Shell Model (MCSM). To facilitate the use of the zero-temperature formalism of the MCSM, the Double-Isobaric-Analog State (DIAS) is derived from the ground state by using an isospin-shifting operator. The validity of this method is tested by comparing the MCSM results to those of the SU(3) symmetry case. Application of this method to DCX on ⁵⁶Fe and ⁹³Nb is discussed.
Shafer, J.D.; Shepard, J.R.
1997-04-01
We derive an approximate renormalization group (RG) flow equation for the local effective potential of single-component φ⁴ field theory at finite temperature. Previous zero-temperature RG equations are recovered in the low- and high-temperature limits, in the latter case via the phenomenon of dimensional reduction. We numerically solve our RG equations to obtain local effective potentials at finite temperature. These are found to be in excellent agreement with Monte Carlo results, especially when lattice artifacts are accounted for in the RG treatment. © 1997 The American Physical Society
Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus; Vogel, Thomas; Landau, David P
2015-01-01
We describe the study of thermodynamics of materials using replica-exchange Wang-Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang-Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first-principles calculations. We demonstrate that our framework leads to a significant speedup without compromising accuracy or precision and facilitates the study of much larger systems than is possible with its serial counterpart.
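A single-walker Wang-Landau sketch on a tiny discrete state space illustrates the serial counterpart that REWL parallelizes: a random walk accepted with min(1, g(E_old)/g(E_new)), with ln g and a visit histogram updated each step, and the modification factor halved whenever the histogram is flat. The flatness criterion and schedule below are conventional choices, not the authors' parallel implementation.

```python
import math
import random

def wang_landau(energies, f_final=1e-4, flat=0.8, seed=0):
    """Estimate ln g(E) (log density of states) for a discrete system
    whose states have the given energies, via single-walker
    Wang-Landau sampling with uniform state proposals."""
    rng = random.Random(seed)
    levels = sorted(set(energies))
    idx = {e: i for i, e in enumerate(levels)}
    lng = [0.0] * len(levels)   # running ln g per energy level
    hist = [0] * len(levels)    # visit histogram for the flatness check
    lnf = 1.0                   # ln of the modification factor
    state = 0
    while lnf > f_final:
        new = rng.randrange(len(energies))
        i, j = idx[energies[state]], idx[energies[new]]
        # accept with min(1, g(E_old)/g(E_new))
        if rng.random() < math.exp(min(0.0, lng[i] - lng[j])):
            state = new
        k = idx[energies[state]]
        lng[k] += lnf
        hist[k] += 1
        # halve ln f once every level is visited near the mean rate
        if min(hist) > flat * (sum(hist) / len(hist)):
            hist = [0] * len(hist)
            lnf /= 2.0
    return lng
```

For four states with energies [0, 1, 1, 2], the estimated ln g differences should approach ln 2 between the doubly and singly degenerate levels; REWL splits the energy range over many such walkers with replica exchange.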
Monte Carlo simulations of channeling spectra recorded for samples containing complex defects
Jagielski, Jacek; Turos, Prof. Andrzej; Nowicki, Lech; Jozwik, P.; Shutthanandan, Vaithiyalingam; Zhang, Yanwen; Sathish, N.; Thome, Lionel; Stonert, A.; Jozwik-Biala, Iwona
2012-01-01
The aim of the present paper is to describe the current status of the development of McChasy, a Monte Carlo simulation code, to make it suitable for the analysis of dislocations and dislocation loops in crystals. Factors such as the shape of the bent channel and geometrical distortions of the crystalline structure in the vicinity of dislocations are discussed. The results obtained demonstrate that the new procedure applied to spectra recorded on crystals containing dislocations yields damage profiles that are independent of the energy of the analyzing beam.
Monte Carlo simulations of channeling spectra recorded for samples containing complex defects
Jagielski, Jacek K.; Turos, Andrzej W.; Nowicki, L.; Jozwik, Przemyslaw A.; Shutthanandan, V.; Zhang, Yanwen; Sathish, N.; Thome, Lionel; Stonert, A.; Jozwik Biala, Iwona
2012-02-15
The main aim of the present paper is to describe the current status of the development of McChasy, a Monte Carlo simulation code, to make it suitable for the analysis of dislocations and dislocation loops in crystals. Factors such as the shape of the bent channel and geometrical distortions of the crystalline structure in the vicinity of dislocations are discussed. Several examples of the analysis performed at different energies of analyzing ions are presented. The results obtained demonstrate that the new procedure applied to spectra recorded on crystals containing dislocations yields damage profiles that are independent of the energy of the analyzing beam.
Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo
Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.
2014-10-01
We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing Density Functional Theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and indicate that the high-pressure melt curve is well described by Lindemann behavior up to at least 80 GPa, a finding in stark contrast to the DAC data.
Monte Carlo Fundamentals, F. B. Brown and T. M. Sutton
Office of Scientific and Technical Information (OSTI)
Monte Carlo Fundamentals, F. B. Brown and T. M. Sutton, February 1996. Prepared by Lockheed Martin Company, Knolls Atomic Power Laboratory, Schenectady, New York. Contract No. DE-AC12-76-SN-00052. KAPL-4823, UC-32 (DOE/TIC-4500-R75). Distribution of this document is unlimited.
Quantized vortices in {sup 4}He droplets: A quantum Monte Carlo study
Sola, E.; Casulleras, J.; Boronat, J.
2007-08-01
We present a diffusion Monte Carlo study of a vortex line excitation attached to the center of a {sup 4}He droplet at zero temperature. The vortex energy is estimated for droplets of increasing number of atoms, from N=70 up to 300, showing a monotonic increase with N. The evolution of the core radius and its associated energy, the core energy, is also studied as a function of N. The core radius is {approx}1 A at the center of the droplet and increases when approaching the droplet surface; the core energy per unit volume stabilizes at a value of 2.8 K{sigma}{sup -3} ({sigma}=2.556 A) for N{>=}200.
Quantum Monte Carlo simulation of a two-dimensional Bose gas
Pilati, S.; Boronat, J.; Casulleras, J.; Giorgini, S.
2005-02-01
The equation of state of a homogeneous two-dimensional Bose gas is calculated using quantum Monte Carlo methods. The low-density universal behavior is investigated using different interatomic model potentials: both finite-range, strictly repulsive potentials and a zero-range potential supporting a bound state. The condensate fraction and the pair distribution function are calculated as a function of the gas parameter, ranging from the dilute to the strongly correlated regime. In the case of the zero-range pseudopotential we discuss the stability of the gaslike state for large values of the two-dimensional scattering length, and we calculate the critical density at which the system becomes unstable against cluster formation.
W/Z + b bbar/Jets at NLO Using the Monte Carlo MCFM
John M. Campbell
2001-05-29
We summarize recent progress in next-to-leading QCD calculations made using the Monte Carlo MCFM. In particular, we focus on the calculations of p{bar p} {r_arrow} Wb{bar b}, Zb{bar b} and highlight the significant corrections to background estimates for Higgs searches in the channels WH and ZH at the Tevatron. We also report on the current progress of, and strategies for, the calculation of the process p{bar p} {r_arrow} W/Z + 2 jets.
Simulation of atomic diffusion in the Fcc NiAl system: A kinetic Monte Carlo study
Alfonso, Dominic R.; Tafen, De Nyago
2015-04-28
The atomic diffusion in fcc NiAl binary alloys was studied by kinetic Monte Carlo simulation. The environment-dependent hopping barriers were computed using a pair interaction model whose parameters were fitted to relevant data derived from electronic structure calculations. Long-time diffusivities were calculated and the effect of composition change on the tracer diffusion coefficients was analyzed. The results indicate that this variation has a noticeable impact on the atomic diffusivities: a reduction in the mobility of both Ni and Al is demonstrated with increasing Al content. The pair interactions between atoms were then examined to understand the predicted trends.
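In kinetic Monte Carlo of this kind, hop rates follow an Arrhenius law and the simulation clock advances by exponentially distributed residence times. A minimal sketch of the residence-time (BKL) algorithm for a single tracer on a 1D lattice, with an assumed uniform barrier rather than the paper's fitted pair-interaction model:

```python
import math
import random

def tracer_diffusivity(barrier=0.4, kT=0.1, nu0=1e13,
                       n_walks=200, n_steps=400, seed=2):
    """Residence-time KMC for one tracer hopping on a 1D lattice
    (lattice constant a = 1).  Returns (estimated D, analytic rate);
    for this toy model D should equal the single-direction hop rate."""
    rng = random.Random(seed)
    rate = nu0 * math.exp(-barrier / kT)         # Arrhenius rate per direction
    msd_sum, t_sum = 0.0, 0.0
    for _ in range(n_walks):
        x, t = 0, 0.0
        for _ in range(n_steps):
            R = 2.0 * rate                       # total escape rate (left + right)
            x += -1 if rng.random() < 0.5 else 1       # pick event by its rate share
            t += -math.log(rng.random()) / R           # exponential residence time
        msd_sum += x * x
        t_sum += t
    D = msd_sum / (2.0 * t_sum)                  # <x^2> = 2 D t in 1D
    return D, rate

D, rate = tracer_diffusivity()
```

In a real alloy study the rate table is rebuilt after every hop because each barrier depends on the local environment; here the two rates are constant, so the estimated D converges to the analytic hop rate.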
Monte Carlo generators for studies of the 3D structure of the nucleon
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Avakian, Harut; D'Alesio, U.; Murgia, F.
2015-01-23
In this study, extraction of the transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self-consistent analysis framework, accounting for evolution effects and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations, will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.
White, Glen; Seryi, Andrei; Woodley, Mark; Bai, Sha; Bambade, Philip; Renier, Yves; Bolzon, Benoit; Kamiya, Yoshio; Komamiya, Sachio; Oroku, Masahiro; Yamaguchi, Yohei; Yamanaka, Takashi; Kubo, Kiyoshi; Kuroda, Shigeru; Okugi, Toshiyuki; Tauchi, Toshiaki; Marin, Eduardo; /CERN
2012-07-06
The primary aim of the ATF2 research accelerator is to test a scaled version of the final focus optics planned for use in next-generation linear lepton colliders. ATF2 consists of a 1.3 GeV linac, a damping ring providing low-emittance electron beams (<12 pm in the vertical plane), an extraction line, and the final focus optics. The design details of the final focus optics and their implementation at ATF2 are presented elsewhere. The ATF2 accelerator is currently being commissioned, with a staged approach to achieving the design IP spot size. It is expected that as we implement more demanding optics and reduce the vertical beta function at the IP, the tuning becomes more difficult and takes longer. We present here a description of the implementation of the tuning procedures and describe operational experience and performance.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Orth, Charles D.
2016-02-23
We suggest that a potentially dominant but previously neglected source of pusher-fuel and hot-spot "mix" may have been the main degradation mechanism for fusion energy yields of modern inertial confinement fusion (ICF) capsules designed and fielded to achieve high yields, not hydrodynamic instabilities. This potentially dominant mix source is the spallation of small chunks or "grains" of pusher material into the fuel regions whenever (1) the solid material adjacent to the fuel changes its phase by nucleation, and (2) this solid material spalls under shock loading and sudden decompression. Finally, we describe this mix mechanism, support it with simulations and experimental evidence, and explain how to eliminate it and thereby allow higher yields for ICF capsules and possibly ignition at the National Ignition Facility.
Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.
2015-11-19
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions from Monte Carlo tallies. With KDEs, a single event, either a collision or a particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance compared to a histogram. Previously, KDEs have been applied in neutronics to one-group reactor physics problems and fixed-source shielding applications; however, little work has been done on obtaining reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution; an ad hoc remedy is introduced that produces errors smaller than 4% at material interfaces.
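The core idea of a KDE tally is that each sampled event spreads its score over nearby tally points through a smoothing kernel instead of incrementing a single histogram bin. A generic sketch with a Gaussian kernel (not the MFP form introduced in the paper), scoring standard-normal samples at two fixed tally points:

```python
import math
import random

def kde_tally(samples, points, h):
    """Gaussian KDE at fixed tally points: every sample contributes a
    smooth kernel score to every point, unlike one-bin histogram scoring."""
    norm = 1.0 / (len(samples) * h * math.sqrt(2.0 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((p - s) / h) ** 2) for s in samples)
            for p in points]

rng = random.Random(3)
samples = [rng.gauss(0.0, 1.0) for _ in range(5000)]
density = kde_tally(samples, [0.0, 1.0], h=0.2)
```

Because every sample scores at every tally point, the statistical uncertainty at a point is governed by the bandwidth h rather than by a bin width, which is the variance advantage the abstract describes.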
Surface Structures of Cubo-octahedral Pt-Mo Catalyst Nanoparticles from Monte Carlo Simulations
Wang, Guofeng; Van Hove, M.A.; Ross, P.N.; Baskes, M.I.
2005-03-31
The surface structures of cubo-octahedral Pt-Mo nanoparticles have been investigated using the Monte Carlo method and modified embedded atom method potentials that we developed for Pt-Mo alloys. The cubo-octahedral Pt-Mo nanoparticles are constructed with disordered fcc configurations, with sizes from 2.5 to 5.0 nm, and with Pt concentrations from 60 to 90 at. percent. The equilibrium Pt-Mo nanoparticle configurations were generated through Monte Carlo simulations allowing both atomic displacements and element exchanges at 600 K. We predict that the Pt atoms weakly segregate to the surfaces of such nanoparticles. The Pt concentrations in the surface are calculated to be 5 to 14 at. percent higher than the Pt concentrations of the nanoparticles. Moreover, the Pt atoms preferentially segregate to the facet sites of the surface, while the Pt and Mo atoms tend to alternate along the edges and vertices of these nanoparticles. We found that decreasing the size or increasing the Pt concentration leads to higher Pt concentrations but fewer Pt-Mo pairs in the Pt-Mo nanoparticle surfaces.
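Segregation studies of this kind mix atomic displacements with element-exchange (swap) moves accepted by the Metropolis criterion. A toy sketch using swap moves only, on a 1D ring "alloy" with an assumed attractive A-A interaction (not the modified embedded atom method potentials used in the paper); at low temperature the A atoms cluster, a crude analogue of segregation:

```python
import math
import random

def metropolis_swaps(n=40, n_a=20, eps=1.0, kT=0.2, sweeps=600, seed=4):
    """Toy lattice alloy on a 1D ring: energy is -eps per A-A
    nearest-neighbor bond, sampled with Metropolis element-exchange moves."""
    rng = random.Random(seed)
    cfg = ['A'] * n_a + ['B'] * (n - n_a)
    rng.shuffle(cfg)

    def energy(c):
        # count A-A nearest-neighbor bonds on the ring
        return -eps * sum(c[i] == c[(i + 1) % n] == 'A' for i in range(n))

    e = energy(cfg)
    for _ in range(sweeps * n):
        i, j = rng.randrange(n), rng.randrange(n)
        if cfg[i] == cfg[j]:
            continue                         # swapping identical atoms is a no-op
        cfg[i], cfg[j] = cfg[j], cfg[i]
        e_new = energy(cfg)                  # O(n) recompute, for clarity not speed
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / kT):
            e = e_new                        # accept the swap
        else:
            cfg[i], cfg[j] = cfg[j], cfg[i]  # reject: undo the swap
    return cfg, e

cfg, e = metropolis_swaps()
```

Composition is conserved by construction because swaps only exchange species, which is why such moves (rather than grand-canonical insertions) are the natural choice for fixed-stoichiometry nanoparticles.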
Monte Carlo analysis of neutron slowing-down-time spectrometer for fast reactor spent fuel assay
Chen, Jianwei; Lineberry, Michael
2007-07-01
Using the neutron slowing-down-time method as a nondestructive assay tool to improve input material accountancy for fast reactor spent fuel reprocessing is under investigation at Idaho State University. Monte Carlo analyses were performed to simulate the neutron slowing-down process in different slowing-down spectrometers, namely lead and graphite, and to determine their main parameters. The {sup 238}U threshold fission chamber response was simulated in the Monte Carlo model to represent the spent fuel assay signals; the signature (fission/time) signals of {sup 235}U, {sup 239}Pu, and {sup 241}Pu were simulated as a convolution of the fission cross sections and the neutron flux inside the spent fuel. The {sup 238}U detector signals were analyzed using a linear regression model based on the signatures of the fissile materials in the spent fuel to determine the weight fractions of fissile materials in Advanced Burner Test Reactor spent fuel. The preliminary results show that even though the lead spectrometer gave better assay performance than graphite, the graphite spectrometer could accurately determine the weight fractions of {sup 239}Pu and {sup 241}Pu provided a proper assay energy range was chosen. (authors)
An Evaluation of Monte Carlo Simulations of Neutron Multiplicity Measurements of Plutonium Metal
Mattingly, John; Miller, Eric; Solomon, Clell J. Jr.; Dennis, Ben; Meldrum, Amy; Clarke, Shaun; Pozzi, Sara
2012-06-21
In January 2009, Sandia National Laboratories conducted neutron multiplicity measurements of a polyethylene-reflected plutonium metal sphere. Over the past 3 years, those experiments have been collaboratively analyzed using Monte Carlo simulations conducted by University of Michigan (UM), Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and North Carolina State University (NCSU). Monte Carlo simulations of the experiments consistently overpredict the mean and variance of the measured neutron multiplicity distribution. This paper presents a sensitivity study conducted to evaluate the potential sources of the observed errors. MCNPX-PoliMi simulations of plutonium neutron multiplicity measurements exhibited systematic over-prediction of the neutron multiplicity distribution. The over-prediction tended to increase with increasing multiplication. MCNPX-PoliMi had previously been validated against only very low multiplication benchmarks. We conducted sensitivity studies to try to identify the cause(s) of the simulation errors; we eliminated the potential causes we identified, except for Pu-239 {bar {nu}}. A very small change (-1.1%) in the Pu-239 {bar {nu}} dramatically improved the accuracy of the MCNPX-PoliMi simulation for all 6 measurements. This observation is consistent with the trend observed in the bias exhibited by the MCNPX-PoliMi simulations: a very small error in {bar {nu}} is 'magnified' by increasing multiplication. We applied a scalar adjustment to Pu-239 {bar {nu}} (independent of neutron energy); an adjustment that depends on energy is probably more appropriate.
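The "magnification" of a small {bar {nu}} error by multiplication can be seen in a toy branching-process model: each neutron either leaks (and is counted) or induces fission that emits new neutrons with mean {bar {nu}}, so the expected leakage per source neutron is (1-p)/(1-p{bar {nu}}), whose denominator amplifies small changes in {bar {nu}} as p{bar {nu}} approaches 1. A hedged sketch with illustrative values, not the benchmark's physics:

```python
import random

def mean_leakage(p_fis=0.3, nubar=2.9, n_src=40000, seed=5):
    """Expected leaked neutrons per source neutron in a toy branching
    model.  nu is sampled as 2 or 3 (requires 2 <= nubar <= 3).
    Analytic answer: (1 - p_fis) / (1 - p_fis * nubar)."""
    assert 2.0 <= nubar <= 3.0
    rng = random.Random(seed)
    leaked = 0
    for _ in range(n_src):
        alive = 1                      # one source neutron
        while alive:
            alive -= 1
            if rng.random() < p_fis:   # fission: absorb, emit nu new neutrons
                alive += 3 if rng.random() < nubar - 2.0 else 2
            else:                      # leakage: count and terminate this track
                leaked += 1
    return leaked / n_src

m_nom = mean_leakage()
m_low = mean_leakage(nubar=2.9 * 0.989)   # a 1.1% reduction in nubar
```

With p·nubar near 0.9, the 1.1% reduction in nubar lowers the mean leakage by several percent, illustrating why a tiny {bar {nu}} adjustment can resolve a multiplication-dependent bias.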
Ibrahim, Ahmad M; Wilson, P.; Sawan, M.; Mosher, Scott W; Peplow, Douglas E.; Grove, Robert E
2013-01-01
Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class super computer.
Berg, John M.; Veirs, D. Kirk; Vaughn, Randolph B.; Cisneros, Michael R.; Smith, Coleman A.
2000-06-01
Standard modeling approaches can produce the most likely values of the formation constants of metal-ligand complexes if a particular set of species containing the metal ion is known or assumed to exist in solution equilibrium with complexing ligands. Identifying the most likely set of species when more than one set is plausible is a more difficult problem to address quantitatively. A Monte Carlo method of data analysis is described that measures the relative abilities of different speciation models to fit optical spectra of open-shell actinide ions. The best model(s) can be identified from among a larger group of models initially judged to be plausible. The method is demonstrated by analyzing the absorption spectra of aqueous Pu(IV) titrated with nitrate ion at constant 2 molal ionic strength in aqueous perchloric acid. The best speciation model supported by the data is shown to include three Pu(IV) species with nitrate coordination numbers 0, 1, and 2. Formation constants are {beta}{sub 1}=3.2{+-}0.5 and {beta}{sub 2}=11.2{+-}1.2, where the uncertainties are 95% confidence limits estimated by propagating raw data uncertainties using Monte Carlo methods. Principal component analysis independently indicates three Pu(IV) complexes in equilibrium. (c) 2000 Society for Applied Spectroscopy.
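Monte Carlo propagation of raw-data uncertainties, as used here for the formation-constant confidence limits, amounts to perturbing the measured data within its error bars many times, refitting each synthetic data set, and reading confidence limits off the distribution of fitted parameters. A generic sketch for a one-parameter least-squares fit (illustrative data, not the Pu(IV) spectra):

```python
import random

def mc_confidence(x, y, sigma, n_trials=2000, seed=6):
    """95% confidence limits on the slope of y = beta * x, obtained by
    perturbing the raw data within its (Gaussian) error bars and
    refitting each synthetic data set by least squares."""
    rng = random.Random(seed)
    sxx = sum(v * v for v in x)
    fits = []
    for _ in range(n_trials):
        y_pert = [v + rng.gauss(0.0, sigma) for v in y]
        # closed-form least-squares slope through the origin
        fits.append(sum(a * b for a, b in zip(x, y_pert)) / sxx)
    fits.sort()
    return fits[int(0.025 * n_trials)], fits[int(0.975 * n_trials)]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]        # synthetic data with slope close to 2
lo, hi = mc_confidence(x, y, sigma=0.1)
```

The same resampling loop works for any fit whose parameters are extracted numerically, which is what makes the approach attractive for nonlinear spectral models.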
MCViNE: An object-oriented Monte Carlo neutron ray-tracing simulation package
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; Abernathy, Douglas L.; Lumsden, Mark D.; Winn, Barry L.; Aczel, Adam A.; Aivazis, Michael; Fultz, Brent
2015-11-28
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example, we used object-oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray-tracing packages, which facilitates porting instrument models from those codes. Furthermore, it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.
Massively parallel Monte Carlo for many-particle simulations on GPUs
Anderson, Joshua A.; Jankowski, Eric (Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109); Grubb, Thomas L. (Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109); Engel, Michael (Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109); Glotzer, Sharon C., E-mail: sglotzer@umich.edu (Departments of Chemical Engineering and Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109)
2013-12-01
Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
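Any hard-disk Monte Carlo scheme, serial or massively parallel, rests on the same elementary move: displace one disk and accept only if no overlap results, which satisfies detailed balance for a hard-core potential. A serial sketch in a periodic box (the GPU method in the paper parallelizes such moves over independent spatial cells, which this sketch does not attempt):

```python
import math
import random

def hard_disk_sweeps(n_side=6, phi=0.5, sweeps=200, dmax=0.1, seed=7):
    """Metropolis moves for hard disks (diameter 1) in a periodic square
    box at area fraction phi.  A trial displacement is accepted only if
    it creates no overlap.  Returns (positions, acceptance rate, box L)."""
    rng = random.Random(seed)
    n = n_side * n_side
    L = math.sqrt(n * math.pi * 0.25 / phi)      # box size giving area fraction phi
    a = L / n_side                               # lattice spacing (> 1 here)
    pos = [((i + 0.5) * a, (j + 0.5) * a)
           for i in range(n_side) for j in range(n_side)]

    def overlaps(k, x, y):
        for m, (xm, ym) in enumerate(pos):
            if m == k:
                continue
            dx = (x - xm + L / 2) % L - L / 2    # minimum-image convention
            dy = (y - ym + L / 2) % L - L / 2
            if dx * dx + dy * dy < 1.0:          # closer than one diameter
                return True
        return False

    accepted = 0
    for _ in range(sweeps * n):
        k = rng.randrange(n)
        x = (pos[k][0] + rng.uniform(-dmax, dmax)) % L
        y = (pos[k][1] + rng.uniform(-dmax, dmax)) % L
        if not overlaps(k, x, y):
            pos[k] = (x, y)
            accepted += 1
    return pos, accepted / (sweeps * n), L

pos, acc, L = hard_disk_sweeps()
```

The checkerboard decomposition in the paper works because disks in non-adjacent cells cannot interact, so their trial moves are independent and can be attempted simultaneously without violating the balance conditions this serial loop obeys.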
A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code
Energy Science and Technology Software Center (OSTI)
1998-06-12
TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
Tsvetkov, Pavel V.; Ames II, David E.; Alajo, Ayodeji B.; Pritchard, Megan L.
2006-07-01
Partitioning and transmutation of minor actinides are expected to have a positive impact on the future of nuclear technology. Their deployment would lead to incineration of hazardous nuclides and could potentially provide an additional fuel supply. The U.S. DOE NERI Project assesses the possibility, advantages, and limitations of using minor actinides as a fuel component. The analysis considers and compares the capabilities of actinide-fueled VHTRs with pebble-bed and prismatic cores to approach operation over the entire reactor lifetime without intermediate refueling. A hybrid Monte Carlo-deterministic methodology has been adopted for coupled neutronics-thermal-hydraulics design studies of VHTRs. Within the computational scheme, the key technical issues are being addressed and resolved by implementing efficient automated modeling procedures and sequences, combining Monte Carlo and deterministic approaches, developing and applying realistic 3D coupled neutronics-thermal-hydraulics models with multi-heterogeneity treatments, developing and performing experimental/computational benchmarks for model verification and validation, and analyzing uncertainty effects and error propagation. This paper introduces the suggested modeling approach, discusses benchmark results, and presents a preliminary analysis of actinide-fueled VHTRs. The up-to-date results presented are in agreement with the available experimental data. Studies of VHTRs with minor actinides suggest promising performance. (authors)
O'Brien, M J; Brantley, P S
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^{21} = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e., the calculation is already load balanced. We also examine load-imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
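Making each domain's replication level proportional to its particle workload is essentially an apportionment problem. A hedged sketch of one way to do it (largest-remainder allocation is an assumption for illustration; the paper does not specify its scheme):

```python
def replication_levels(workloads, n_procs):
    """Assign each spatial domain a replica count proportional to its
    particle workload (at least one each), topping up by largest
    fractional remainder.  Assumes n_procs >= number of domains."""
    total = float(sum(workloads))
    ideal = [w * n_procs / total for w in workloads]
    levels = [max(1, int(x)) for x in ideal]
    # hand out leftover processors by largest fractional remainder
    order = sorted(range(len(ideal)),
                   key=lambda i: ideal[i] - int(ideal[i]), reverse=True)
    k = 0
    while sum(levels) < n_procs:
        levels[order[k % len(order)]] += 1
        k += 1
    return levels

levels = replication_levels([100, 300, 500, 100], 10)
```

For the workloads above, the domain holding half the particles receives half the processors, so each replica within a workgroup carries roughly the same particle load.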
Energy density matrix formalism for interacting quantum systems: a quantum Monte Carlo study
Krogel, Jaron T; Kim, Jeongnim; Reboredo, Fernando A
2014-01-01
We develop an energy density matrix that parallels the one-body reduced density matrix (1RDM) for many-body quantum systems. Just as the density matrix gives access to the number density and occupation numbers, the energy density matrix yields the energy density and orbital occupation energies. The eigenvectors of the matrix provide a natural orbital partitioning of the energy density while the eigenvalues comprise a single particle energy spectrum obeying a total energy sum rule. For mean-field systems the energy density matrix recovers the exact spectrum. When correlation becomes important, the occupation energies resemble quasiparticle energies in some respects. We explore the occupation energy spectrum for the finite 3D homogeneous electron gas in the metallic regime and an isolated oxygen atom with ground state quantum Monte Carlo techniques implemented in the QMCPACK simulation code. The occupation energy spectrum for the homogeneous electron gas can be described by an effective mass below the Fermi level. Above the Fermi level evanescent behavior in the occupation energies is observed in similar fashion to the occupation numbers of the 1RDM. A direct comparison with total energy differences demonstrates a quantitative connection between the occupation energies and electron addition and removal energies for the electron gas. For the oxygen atom, the association between the ground state occupation energies and particle addition and removal energies becomes only qualitative. The energy density matrix provides a new avenue for describing energetics with quantum Monte Carlo methods which have traditionally been limited to total energies.
Sun, Xin; Stephens, Elizabeth V.; Khaleel, Mohammad A.
2007-01-01
This paper examines the effects of fusion zone size on the failure modes, static strength, and energy absorption of resistance spot welds (RSW) of advanced high strength steels (AHSS). DP800 and TRIP800 spot welds are considered. The main failure modes for spot welds are nugget pullout and interfacial fracture; partial interfacial fracture is also observed. The critical fusion zone sizes to ensure the nugget pullout failure mode are developed for both DP800 and TRIP800 using a limit-load-based analytical model and micro-hardness measurements of the weld cross sections. Static weld strength tests using cross-tension samples were performed on joint populations with controlled fusion zone sizes. The resulting peak load and energy absorption levels associated with each failure mode were studied for all the weld populations using statistical data analysis tools. The results in this study show that AHSS spot welds whose fusion zone is below a critical size cannot produce the nugget pullout mode for either the DP800 or TRIP800 materials examined. The critical fusion zone size for nugget pullout should be derived for individual materials based on the different base metal properties as well as the different heat affected zone (HAZ) and weld properties resulting from different welding parameters.
Nakano, Y. Yamazaki, A.; Watanabe, K.; Uritani, A.; Ogawa, K.; Isobe, M.
2014-11-15
Neutron monitoring is important for managing the safety of fusion experiment facilities because neutrons are generated in fusion reactions. Monte Carlo simulations play an important role in evaluating the influence of neutron scattering from various structures and in correcting differences between deuterium plasma experiments and in situ calibration experiments. We evaluated these influences based on differences between the two experiments at the Large Helical Device using the Monte Carlo simulation code MCNP5. The difference between the two experiments in the absolute detection efficiency of the fission chamber between O-ports is estimated to be the largest of all the monitors. We additionally evaluated correction coefficients for some neutron monitors.
SHIFT: A New Monte Carlo Package (CASL-U-2015-0170-000-a)
Seth R. Johnson, Tara M. Pandya, Gregory G. Davidson, Thomas M. Evans, and Steven P. Hamilton, with Cihangir Celik, Aarno Isotalo, and Chris Peretti; Oak Ridge National Laboratory, April 19, 2015. ORNL is managed by UT-Battelle for the U.S. Department of Energy.
Pilati, S.; Giorgini, S.; Sakkos, K.; Boronat, J.; Casulleras, J.
2006-10-15
By using exact path-integral Monte Carlo methods we calculate the equation of state of an interacting Bose gas as a function of temperature both below and above the superfluid transition. The universal character of the equation of state for dilute systems and low temperatures is investigated by modeling the interatomic interactions using different repulsive potentials corresponding to the same s-wave scattering length. The results obtained for the energy and the pressure are compared to the virial expansion for temperatures larger than the critical temperature. At very low temperatures we find agreement with the ground-state energy calculated using the diffusion Monte Carlo method.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Mayers, Matthew Z.; Berkelbach, Timothy C.; Hybertsen, Mark S.; Reichman, David R.
2015-10-09
Ground-state diffusion Monte Carlo is used to investigate the binding energies and intercarrier radial probability distributions of excitons, trions, and biexcitons in a variety of two-dimensional transition-metal dichalcogenide materials. We compare these results to approximate variational calculations, as well as to analogous Monte Carlo calculations performed with simplified carrier interaction potentials. Our results highlight the successes and failures of approximate approaches as well as the physical features that determine the stability of small carrier complexes in monolayer transition-metal dichalcogenide materials. In conclusion, we discuss points of agreement and disagreement with recent experiments.
Sun, Xin; Stephens, Elizabeth V.; Khaleel, Mohammad A.
2008-06-01
This paper examines the effects of fusion zone size on the failure modes, static strength, and energy absorption of resistance spot welds (RSW) of advanced high strength steels (AHSS) under lap shear loading. DP800 and TRIP800 spot welds are considered. The main failure modes for spot welds are nugget pullout and interfacial fracture; partial interfacial fracture is also observed. Static weld strength tests using lap shear samples were performed on joint populations with various fusion zone sizes. The resulting peak load and energy absorption levels associated with each failure mode were studied for all the weld populations using statistical data analysis tools. The results in this study show that AHSS spot welds with the conventionally required fusion zone size cannot produce the nugget pullout mode for either the DP800 or TRIP800 welds under lap shear loading. Moreover, failure mode has a strong influence on weld peak load and energy absorption for all the DP800 welds and the small TRIP800 welds: welds that failed in the pullout mode have statistically higher strength and energy absorption than those that failed in the interfacial fracture mode. For TRIP800 welds above the critical fusion zone size, the influence of weld failure mode on peak load and energy absorption diminishes. Scatter plots of peak load and energy absorption versus weld fusion zone size were then constructed, and the results indicate that fusion zone size is the most critical factor in weld quality in terms of peak load and energy absorption for both DP800 and TRIP800 spot welds.
Wayne Chuko; Jerry Gould
2002-07-08
This report describes work accomplished in the project titled ''Development of Appropriate Resistance Spot Welding Practice for Transformation-Hardened Steels.'' Phase 1 of the program involved development of in-situ temper diagrams for two gauges of representative dual-phase and martensitic steel grades. The results showed that tempering is an effective way of reducing hold-time sensitivity (HTS) in hardenable high-strength sheet steels. In Phase 2, post-weld cooling-rate techniques incorporating tempering were evaluated to reduce HTS for the same four steels. Three alternative methods, viz., post-heating, downsloping, and spike tempering, were investigated for HTS reduction. Downsloping was selected for detailed additional study, as it appeared to be the most promising of the cooling-rate control methods. The downsloping maps for each of the candidate steels were used to locate the conditions necessary for the peak response. Three specific downslope conditions (at a fixed final current for each material, timed for a zero-, medium-, and full-softening response) were chosen for further metallurgical and mechanical testing. Representative samples were inspected metallographically, examining both local hardness variations and microstructures. The resulting downslope diagrams were found to consist largely of a C-curve. The softening observed in these curves, however, was not supported by subsequent metallography, which showed that all welds made, regardless of material and downslope condition, were essentially martensitic. CCT/TTT diagrams, generated from microstructural modeling done at Oak Ridge National Laboratory, showed that minimum downslope times of 2 and 10 s were required to avoid martensite formation in the martensitic and dual-phase grades, respectively. These times, however, were beyond those examined in this study.
These results show that downsloping is not an effective means of reducing HTS in production resistance spot welding (RSW). The necessary downslope times (2-10 s) are incompatible with the welding rates currently used in production (up to 60 welds/s). Based on the observations made in this study, spike tempering appears to offer the best compromise between microstructural improvement and short cycle time. It is recommended that future work focus on exploring the robustness of this approach and its applicability to a wider range of steels.
Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.
2014-10-01
We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble-gas solid whose van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and that the high-pressure melt curve is well described by Lindemann behavior up to at least 80 GPa, a finding in stark contrast to DAC data.
Direct simulation Monte Carlo investigation of the Richtmyer-Meshkov instability.
Gallis, Michail A.; Koehler, Timothy P.; Torczynski, John R.; Plimpton, Steven J.
2015-08-14
The Richtmyer-Meshkov instability (RMI) is investigated using the Direct Simulation Monte Carlo (DSMC) method of molecular gas dynamics. Due to the inherent statistical noise and the significant computational requirements, DSMC is rarely applied to hydrodynamic flows. Here, DSMC RMI simulations are performed to quantify the shock-driven growth of a single-mode perturbation on the interface between two atmospheric-pressure monatomic gases, prior to re-shocking, as a function of the Atwood and Mach numbers. The DSMC results qualitatively reproduce all features of the RMI and are in reasonable quantitative agreement with existing theoretical and empirical models. The simulations indicate a universal behavior of RMI growth, consistent with previous work in this field.
Size and habit evolution of PETN crystals - a lattice Monte Carlo study
Zepeda-Ruiz, L A; Maiti, A; Gee, R; Gilmer, G H; Weeks, B
2006-02-28
Starting from an accurate interatomic potential, we develop a simple scheme for generating an ''on-lattice'' molecular potential of short range, which is then incorporated into a lattice Monte Carlo code for simulating the size and shape evolution of nanocrystallites. As a specific example, we test the procedure on the morphological evolution of a molecular crystal of interest to us, Pentaerythritol Tetranitrate (PETN), and obtain realistic faceted structures in excellent agreement with experimental morphologies. We investigate several interesting effects, including the evolution of the initial shape of a ''seed'' to an equilibrium configuration and the variation of growth morphology as a function of the rate of particle addition relative to diffusion.
A bottom collider vertex detector design, Monte-Carlo simulation and analysis package
Lebrun, P.
1990-10-01
A detailed simulation of the BCD vertex detector is underway. Specifications and global design issues are briefly reviewed. The BCD design based on double-sided strip detectors is described in more detail. The GEANT3-based Monte Carlo program and the analysis package used to estimate detector performance are discussed in detail. The current status of the expected resolution and signal-to-noise ratio for the ''golden'' CP-violating mode B_d → π⁺π⁻ is presented. These calculations have been done at FNAL energy (√s = 2.0 TeV). Emphasis is placed on design issues, analysis techniques, and related software rather than physics potential. 20 refs., 46 figs.
Report on International Collaboration Involving the FE Heater and HG-A Tests at Mont Terri
Houseworth, Jim; Rutqvist, Jonny; Asahina, Daisuke; Chen, Fei; Vilarrasa, Victor; Liu, Hui-Hai; Birkholzer, Jens
2013-11-06
Nuclear waste programs outside the US have focused on different host rock types for geological disposal of high-level radioactive waste. Several countries, including France, Switzerland, Belgium, and Japan, are exploring the possibility of waste disposal in shale and other clay-rich rock that falls within the general classification of argillaceous rock. This rock type is also of interest for the US program because the US has extensive sedimentary basins containing large deposits of argillaceous rock. LBNL, as part of the DOE-NE Used Fuel Disposition Campaign, is collaborating on some of the underground research laboratory (URL) activities at the Mont Terri URL near Saint-Ursanne, Switzerland. The Mont Terri project, which began in 1995, has developed a URL at a depth of about 300 m in a stiff clay formation called the Opalinus Clay. Our current collaboration efforts include two test-modeling activities, for the FE heater test and the HG-A leak-off test. This report documents results from our current modeling of these field tests. The overall objectives of these activities are to develop an improved understanding of, and advanced modeling capabilities for, excavation damaged zone (EDZ) evolution in clay repositories and the associated coupled processes, and to develop a technical basis for the maximum allowable temperature for a clay repository. The R&D activities documented in this report are part of the work package of natural system evaluation and tool development that directly supports the following Used Fuel Disposition Campaign (UFDC) objectives: (1) develop a fundamental understanding of disposal-system performance in a range of environments for potential wastes that could arise from future nuclear-fuel-cycle alternatives, through theory, simulation, testing, and experimentation; and (2) develop a computational modeling capability for the performance of storage and disposal options for a range of fuel-cycle alternatives, evolving from generic models to more robust models of performance assessment.
For the purpose of validating modeling capabilities for thermal-hydro-mechanical (THM) processes, we developed a suite of simulation models for the planned full-scale FE experiment to be conducted in the Mont Terri URL, including a full three-dimensional model that will be used for direct comparison to experimental data once available. We performed, for the first time, a THM analysis involving the Barcelona Basic Model (BBM) in a full three-dimensional field setting for modeling the geomechanical behavior of the buffer material and its interaction with the argillaceous host rock. We have simulated a well-defined benchmark that will be used for code-to-code verification against modeling results from other international modeling teams. The analysis highlights the complex coupled geomechanical behavior in the buffer and its interaction with the surrounding rock, and the importance of a well-characterized buffer material in terms of THM properties. A new geomechanical fracture-damage model, TOUGH-RBSN, was applied to investigate damage behavior in the ongoing HG-A test at the Mont Terri URL. Two model modifications have been implemented so that the Rigid-Body-Spring-Network (RBSN) model can be used for analysis of fracturing around the HG-A microtunnel: (1) a methodology to compute fracture generation under compressive stress conditions, and (2) a method to represent anisotropic elastic and strength properties. The method for computing fracture generation under compressive load produces results that roughly follow the trends expected for homogeneous and layered systems. Anisotropic properties for the bulk rock were represented in the RBSN model using layered heterogeneity and gave bulk material responses in line with expectations.
These model improvements were implemented for an initial model of fracture damage at the HG-A test. While the HG-A test model results show some similarities with the test observations, differences between the model results and observations remain.
Monte Carlo Simulation of Electron Transport in 4H- and 6H-SiC
Sun, C. C.; You, A. H.; Wong, E. K.
2010-07-07
The Monte Carlo (MC) simulation of electron transport properties in the high electric field region of 4H- and 6H-SiC is presented. The MC model includes two non-parabolic conduction bands. Based on the material parameters, electron scattering rates including polar optical phonon, optical phonon, and acoustic phonon scattering are evaluated. The electron drift velocity, energy, and free flight time are simulated as functions of the applied electric field at an impurity concentration of 1×10¹⁸ cm⁻³ at room temperature. The simulated dependence of drift velocity on electric field is in good agreement with experimental results found in the literature. The saturation velocities for both polytypes are close, but the scattering rates are much more pronounced for 6H-SiC. Our simulation model clearly shows the complete electron transport properties of 4H- and 6H-SiC.
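The free-flight and scattering-selection steps described in this abstract are common to most single-particle MC transport codes. A generic sketch (with illustrative rate values, not the paper's SiC parameters) shows the two standard ingredients: exponentially distributed free-flight times at the total scattering rate, and mechanism selection proportional to the partial rates.

```python
import math
import random

def free_flight_time(total_rate, rng):
    """Time to the next scattering event for a Poisson process with the
    given total rate (any energy dependence is typically absorbed by a
    fictitious 'self-scattering' rate so the total stays constant)."""
    return -math.log(1.0 - rng.random()) / total_rate

def choose_mechanism(rates, rng):
    """Select a scattering mechanism with probability proportional to its rate."""
    u = rng.random() * sum(rates.values())
    acc = 0.0
    for name, rate in rates.items():
        acc += rate
        if u <= acc:
            return name
    return name  # floating-point round-off guard

rng = random.Random(0)
# illustrative partial rates (1/s); a real code evaluates these vs. carrier energy
rates = {"acoustic": 1.0e13, "polar_optical": 5.0e12, "optical": 2.0e12}
flight = free_flight_time(sum(rates.values()), rng)
mechanism = choose_mechanism(rates, rng)
```

Drift velocity and energy statistics then come from accumulating the carrier state over many such flight-scatter cycles.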
Clay, Raymond C.; Mcminis, Jeremy; McMahon, Jeffrey M.; Pierleoni, Carlo; Ceperley, David M.; Morales, Miguel A.
2014-05-01
The ab initio phase diagram of dense hydrogen is very sensitive to errors in the treatment of electronic correlation. Recently, it has been shown that the choice of the density functional has a large effect on the predicted location of both the liquid-liquid phase transition and the solid insulator-to-metal transition in dense hydrogen. To identify the most accurate functional for dense hydrogen applications, we systematically benchmark some of the most commonly used functionals using quantum Monte Carlo. By considering several measures of functional accuracy, we conclude that the van der Waals and hybrid functionals significantly outperform local density approximation and Perdew-Burke-Ernzerhof. We support these conclusions by analyzing the impact of functional choice on structural optimization in the molecular solid, and on the location of the liquid-liquid phase transition.
Silica separation from reinjection brines at Monte Amiata geothermal plants, Italy
Vitolo, S.; Cialdella, M.L. (Dipartimento di Ingegneria Chimica)
1994-06-01
A process for the separation of silica from geothermal reinjection brines is reported, involving coagulation, sedimentation, and filtration of the silica. The effectiveness of lime and calcium chloride as coagulating agents has been investigated and the separation operations have been defined. Attention has been focused on the Monte Amiata reinjection geothermal brines, whose scaling causes serious problems in the operation and maintenance of reinjection facilities. The study was conducted with different amounts of added coagulants and at different temperatures to determine optimal operating conditions. Though calcium chloride proved effective as a coagulant of the polymeric silica fraction, lime at high dosages also proved capable of removing monomeric dissolved silica. Investigation of the behavior of the coagulated brine has demonstrated the feasibility of separating the coagulated silica by sedimentation and filtration.
Ab initio molecular dynamics simulation of liquid water by quantum Monte Carlo
Zen, Andrea; Luo, Ye; Mazzola, Guglielmo; Sorella, Sandro; Guidoni, Leonardo
2015-04-14
Although liquid water is ubiquitous in the chemical reactions at the roots of life and climate on Earth, the prediction of its properties by high-level ab initio molecular dynamics simulations still represents a formidable task for quantum chemistry. In this article, we present a room-temperature simulation of liquid water based on the potential energy surface obtained from a many-body wave function through quantum Monte Carlo (QMC) methods. The simulated properties are in good agreement with recent neutron scattering and X-ray experiments, particularly concerning the position of the oxygen-oxygen peak in the radial distribution function, at variance with previous density functional theory attempts. Given the excellent performance of QMC on large-scale supercomputers, this work opens new perspectives for predictive and reliable ab initio simulations of complex chemical systems.
penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE
Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.
2015-01-01
The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high performance computing systems. This paper discusses the implementation, capabilities, and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features, including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction-forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.
Density-functional Monte-Carlo simulation of CuZn order-disorder transition
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Khan, Suffian N.; Eisenbach, Markus
2016-01-25
We perform a Wang-Landau Monte Carlo simulation of a Cu0.5Zn0.5 order-disorder transition using 250 atoms and pairwise atom swaps inside a 5 x 5 x 5 BCC supercell. Each time step uses energies calculated from density functional theory (DFT) via the all-electron Korringa-Kohn-Rostoker method and self-consistent potentials. We find that CuZn undergoes a transition from a disordered A2 to an ordered B2 structure, as observed in experiment. Our calculated transition temperature is near 870 K, comparing favorably to the known experimental peak at 750 K. We also plot the entropy, temperature, specific heat, and short-range order as a function of internal energy.
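Wang-Landau sampling, as used in the study above, builds the density of states by penalizing already-visited energies until the energy histogram is flat. A toy flat-histogram run on a small 2D Ising lattice (illustrative parameters, simple pair energies rather than DFT) shows the core loop:

```python
import math
import random

def wang_landau_ising(L=4, log_f_final=1e-3, flatness=0.5, seed=2):
    """Wang-Landau estimate of log g(E) for the 2D Ising model on an
    L x L periodic lattice.  Each visit adds log_f to log g(E); when the
    energy histogram is roughly flat, log_f is halved."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def total_energy():
        e = 0
        for i in range(L):
            for j in range(L):
                e -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
        return e

    e = total_energy()
    log_g, hist = {}, {}
    log_f = 1.0
    while log_f > log_f_final:
        for _ in range(1000):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            e_new = e + 2 * spins[i][j] * nb
            diff = log_g.get(e, 0.0) - log_g.get(e_new, 0.0)
            if diff >= 0 or rng.random() < math.exp(diff):  # accept toward rarer E
                spins[i][j] *= -1
                e = e_new
            log_g[e] = log_g.get(e, 0.0) + log_f
            hist[e] = hist.get(e, 0) + 1
        if min(hist.values()) > flatness * (sum(hist.values()) / len(hist)):
            log_f /= 2.0          # refine and reset the histogram
            hist = {}
    return log_g
```

Once log g(E) is known, thermodynamic quantities such as the specific heat follow from reweighting at any temperature, which is what makes the method attractive when each energy evaluation (here trivial, in the paper a DFT call) is expensive.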
Billion-atom synchronous parallel kinetic Monte Carlo simulations of critical 3D Ising systems
Martinez, E.; Monasterio, P.R.; Marian, J.
2011-02-20
An extension of the synchronous parallel kinetic Monte Carlo (spkMC) algorithm developed by Martinez et al. [J. Comp. Phys. 227 (2008) 3804] to discrete lattices is presented. The method solves the master equation synchronously by recourse to null events that keep all processors' time clocks current in a global sense. Boundary conflicts are resolved by adopting a chessboard decomposition into non-interacting sublattices. We find that the bias introduced by the spatial correlations attendant to the sublattice decomposition is within the standard deviation of serial calculations, which confirms the statistical validity of our algorithm. We have analyzed the parallel efficiency of spkMC and find that it scales consistently with problem size and sublattice partition. We apply the method to the calculation of scale-dependent critical exponents in billion-atom 3D Ising systems, with very good agreement with state-of-the-art multispin simulations.
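The null-event device at the heart of spkMC can be illustrated without any lattice at all: every domain draws events from a shared maximal rate, and domains with lower real rates fire "null" events that advance the clock but change nothing (a Poisson-thinning argument guarantees each domain still sees its correct event rate). A toy sketch, with made-up rates and no boundary-conflict handling:

```python
import math
import random

def synchronous_kmc(domain_rates, n_steps, seed=4):
    """Null-event synchronization sketch: all domains share one clock that
    ticks with the maximal total rate R_max; each domain executes a real
    event with probability (its rate)/R_max, otherwise a null event."""
    rng = random.Random(seed)
    r_max = max(domain_rates)
    t = 0.0
    counts = [0] * len(domain_rates)
    for _ in range(n_steps):
        t += -math.log(1.0 - rng.random()) / r_max   # shared time increment
        for k, r in enumerate(domain_rates):
            if rng.random() < r / r_max:              # real event fires
                counts[k] += 1
            # else: null event, clock advances but state is unchanged
    return t, counts

t, counts = synchronous_kmc([1.0, 0.25], n_steps=20000)
# counts[k] / t approximates domain k's event rate
```

The real algorithm additionally resolves boundary events between processors (e.g., via the chessboard sublattice decomposition described above), which this scalar sketch omits.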
Iterative Monte Carlo analysis of spin-dependent parton distributions
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; Ethier, Jacob J.; Accardi, Alberto
2016-04-05
We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. The study also provides the first determination of the flavor-separated twist-3 PDFs and the d₂ moment of the nucleon within a global PDF analysis.
Markov Chain Monte Carlo Sampling Methods for 1D Seismic and EM Data Inversion
Energy Science and Technology Software Center (OSTI)
2008-09-22
This software provides several Markov chain Monte Carlo sampling methods for the Bayesian model developed for inverting 1D marine seismic and controlled-source electromagnetic (CSEM) data. The current software can be used for individual inversion of seismic AVO and CSEM data and for joint inversion of both seismic and EM data sets. The structure of the software is very general and flexible, and it allows users to incorporate their own forward simulation codes and rock physics model codes easily. Although the software was developed in C and C++, the user-supplied codes can be written in C, C++, or various versions of Fortran. The software provides clear interfaces for users to plug in their own codes. The output of this software is in a format that the free R package CODA can read directly to build MCMC objects.
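As a generic illustration of the MCMC machinery such a package provides (the real forward models are seismic AVO and CSEM simulators; this sketch substitutes a hypothetical one-parameter linear forward model and a flat prior), a random-walk Metropolis inversion looks like:

```python
import math
import random

def mcmc_invert(data, forward, sigma, n_samples=20000, step=0.1, seed=3):
    """Random-walk Metropolis sampling of a one-parameter posterior
    proportional to exp(-misfit / (2 sigma^2)) under a flat prior."""
    rng = random.Random(seed)

    def log_post(m):
        misfit = sum((d - f) ** 2 for d, f in zip(data, forward(m)))
        return -0.5 * misfit / sigma ** 2

    m, lp = 0.0, log_post(0.0)
    chain = []
    for _ in range(n_samples):
        m_new = m + rng.gauss(0.0, step)              # symmetric proposal
        lp_new = log_post(m_new)
        if lp_new >= lp or rng.random() < math.exp(lp_new - lp):
            m, lp = m_new, lp_new                     # accept
        chain.append(m)                               # (repeat state on rejection)
    return chain

# toy "forward model": datum_i = m * x_i  (a hypothetical stand-in)
xs = [1.0, 2.0, 3.0]
data = [1.5 * x for x in xs]                          # synthetic, noise-free
chain = mcmc_invert(data, lambda m: [m * x for x in xs], sigma=0.5)
posterior_mean = sum(chain[5000:]) / len(chain[5000:])  # close to the true 1.5
```

Swapping `log_post` for one that calls a real seismic or CSEM forward code, and `m` for a vector of layer parameters, recovers the structure the abstract describes; convergence diagnostics (e.g., via CODA) then operate on the chain.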
Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Velizhanin, Kirill A.; Saxena, Avadh
2015-11-11
One of the most striking features of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is the strong Coulomb interaction between charge carriers, resulting in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which can remain stable near room temperature and contribute significantly to the optical properties of such materials. In our work we have used the path integral Monte Carlo methodology to numerically study the properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and the benchmarking of theoretical and computational models.
SU-E-T-578: MCEBRT, A Monte Carlo Code for External Beam Treatment Plan Verifications
Chibani, O; Ma, C; Eldib, A
2014-06-01
Purpose: To present a new Monte Carlo code (MCEBRT) for patient-specific dose calculations in external beam radiotherapy. The code's MLC model is benchmarked and real patient plans are re-calculated using MCEBRT and compared with a commercial TPS. Methods: MCEBRT is based on the GEPTS system (Med. Phys. 29 (2002) 835-846). Phase space data generated for Varian linac photon beams (6-15 MV) are used as the source term. MCEBRT uses a realistic MLC model (tongue and groove, rounded leaf ends). Patient CT and DICOM RT files are used to generate a 3D patient phantom and simulate the treatment configuration (gantry, collimator, and couch angles; jaw positions; MLC sequences; MUs). MCEBRT dose distributions and DVHs are compared with those from the TPS in absolute dose (Gy). Results: Calculations based on the developed MLC model closely match transmission measurements (pin-point ionization chamber at selected positions and film for lateral dose profiles); see Fig. 1. Dose calculations for two clinical cases (whole-brain irradiation with opposed beams and a lung case with eight fields) were carried out and compared with the Eclipse AAA algorithm. Good agreement is observed for the brain case (Figs. 2-3) except at the surface, where the MCEBRT dose can be higher by 20%. This is due to better modeling of electron contamination by MCEBRT. For the lung case an overall good agreement (91% gamma index passing rate with a 3%/3mm DTA criterion) is observed (Fig. 4), but dose in lung can be over-estimated by up to 10% by AAA (Fig. 5). CTV and PTV DVHs from the TPS and MCEBRT are nevertheless close (Fig. 6). Conclusion: A new Monte Carlo code is developed for plan verification. Contrary to phantom-based QA measurements, MCEBRT simulates the exact patient geometry and tissue composition. MCEBRT can be used as an extra verification layer for plans where surface dose and tissue heterogeneity are an issue.
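The 3%/3mm gamma passing rate quoted above is a standard plan-comparison metric. A minimal 1D version (the clinical computation is 3D with interpolation; this is only the defining formula) can be sketched as:

```python
import math

def gamma_1d(ref, evl, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Global 1D gamma index: for each reference point, the minimum over
    evaluated points of the combined dose-difference / distance-to-agreement
    metric, with the dose criterion taken relative to the reference maximum."""
    d_max = max(ref)
    gammas = []
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_evl in enumerate(evl):
            dose_term = (d_evl - d_ref) / (dose_tol * d_max)
            dist_term = (j - i) * spacing_mm / dta_mm
            best = min(best, math.hypot(dose_term, dist_term))
        gammas.append(best)
    return gammas

def passing_rate(gammas):
    """Fraction of points with gamma <= 1 (the '3%/3mm passing rate')."""
    return sum(1 for g in gammas if g <= 1.0) / len(gammas)
```

A point passes when some nearby evaluated dose agrees within 3% of the maximum dose or lies within 3 mm of a matching dose, which is exactly the combined criterion the hypotenuse encodes.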
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
Arampatzis, Georgios (Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003); Katsoulakis, Markos A.
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems, and in particular lattice kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by constructing a strongly correlated, coupled stochastic process for the perturbed and unperturbed stochastic processes, defined on a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled continuous-time Markov chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by minimizing the corresponding variance functional. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC, such as the common random number approach.
We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
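The variance-reduction idea behind such coupled estimators is easy to demonstrate on a scalar toy model (a Bernoulli variable instead of a lattice KMC process; all parameter values are arbitrary): reusing the same random numbers for the nominal and perturbed runs correlates them and shrinks the finite-difference variance.

```python
import random

def fd_sensitivity(coupled, p=0.3, eps=0.01, n=20000, seed=7):
    """Finite-difference estimate of d E[X] / dp for X ~ Bernoulli(p)
    (true value 1).  With coupled=True the perturbed run reuses the same
    uniforms (common random numbers); otherwise it draws fresh ones."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n):
        u_base = rng.random()
        u_pert = u_base if coupled else rng.random()
        x_base = 1.0 if u_base < p else 0.0
        x_pert = 1.0 if u_pert < p + eps else 0.0
        diffs.append((x_pert - x_base) / eps)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean, var

mean_crn, var_crn = fd_sensitivity(coupled=True)
mean_ind, var_ind = fd_sensitivity(coupled=False)
# var_crn is far smaller than var_ind at the same sample size
```

The goal-oriented couplings of the paper go further: rather than sharing raw random numbers, they tailor the joint rates of the two processes to minimize the variance of a chosen observable.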
Sunny, E. E.; Martin, W. R. [University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor MI 48109 (United States)
2013-07-01
Current Monte Carlo codes use one of three models for neutron scattering in the epithermal energy range: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. The free gas scattering model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei but not for heavy nuclei, where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that using the free gas scattering model in the vicinity of the resonances in the lower epithermal range can under-predict resonance absorption due to the up-scattering phenomenon. Existing methods all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame. In this paper, we present a new sampling methodology that (1) accounts for the energy-dependent scattering cross sections in the collision analysis and (2) acts in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on the fly during the random walk of the neutron. Criticality studies on fuel pin and fuel assembly calculations using these methods showed very close agreement with results using the reference Doppler-broadened rejection correction (DBRC) scheme. (authors)
Cluster expansion modeling and Monte Carlo simulation of alnico 5–7 permanent magnets
Nguyen, Manh Cuong; Zhao, Xin; Wang, Cai -Zhuang; Ho, Kai -Ming
2015-03-05
Concerns about the supply of rare earth (RE) metals have generated much interest in the search for high-performance RE-free permanent magnets. Alnico alloys are traditional non-RE permanent magnets and have received much attention recently due to their good performance at high temperature. In this paper, we develop an accurate and efficient cluster expansion energy model for alnico 5–7. Monte Carlo simulations using the cluster expansion method are performed to investigate the structure of alnico 5–7 at the atomistic and nanometer scales. The alnico 5–7 master alloy is found to decompose into FeCo-rich and NiAl-rich phases at low temperature. The boundary between these two phases is quite sharp (~2 nm) over a wide range of temperature. The concentrations of the main constituents in the two phases increase as the temperature is lowered. Both the FeCo-rich and NiAl-rich phases show B2 ordering, with Fe and Al on the α-site and Ni and Co on the β-site. The degree of order of the NiAl-rich phase is much higher than that of the FeCo-rich phase. In addition, a small magnetic moment is observed in the NiAl-rich phase, but the moment decreases as the temperature is lowered, implying that the magnetic properties of alnico 5–7 could be improved by lowering the annealing temperature to diminish the magnetism in the NiAl-rich phase. The results from our Monte Carlo simulations are consistent with available experimental results.
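The swap-move Metropolis loop that drives such fixed-composition alloy simulations can be sketched with a generic pair interaction in place of the cluster expansion energies (the coupling J, lattice size, temperature, and 2D geometry here are all illustrative; the paper's model is fit to DFT):

```python
import math
import random

def like_bonds(lat):
    """Number of nearest-neighbor pairs occupied by the same species."""
    L = len(lat)
    return sum(1 for i in range(L) for j in range(L)
               for di, dj in ((1, 0), (0, 1))
               if lat[i][j] == lat[(i + di) % L][(j + dj) % L])

def swap_mc(L=8, n_steps=20000, T=0.5, J=1.0, seed=5):
    """Fixed-composition Metropolis simulation of a binary alloy on an
    L x L periodic lattice: energy -J per like-neighbor bond (J > 0 favors
    decomposition into single-species regions); moves swap two sites."""
    rng = random.Random(seed)
    lat = [[(i + j) % 2 for j in range(L)] for i in range(L)]  # mixed start

    def site_energy(i, j):
        s = lat[i][j]
        return -J * sum(1 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if lat[(i + di) % L][(j + dj) % L] == s)

    for _ in range(n_steps):
        i1, j1 = rng.randrange(L), rng.randrange(L)
        i2, j2 = rng.randrange(L), rng.randrange(L)
        if lat[i1][j1] == lat[i2][j2]:
            continue                                   # swap would do nothing
        e_old = site_energy(i1, j1) + site_energy(i2, j2)
        lat[i1][j1], lat[i2][j2] = lat[i2][j2], lat[i1][j1]
        de = site_energy(i1, j1) + site_energy(i2, j2) - e_old
        if de > 0 and rng.random() >= math.exp(-de / T):
            lat[i1][j1], lat[i2][j2] = lat[i2][j2], lat[i1][j1]  # reject
    return lat

lat = swap_mc()
# like_bonds(lat) rises well above the mixed-lattice value of 0,
# signaling decomposition into like-species regions
```

Replacing the pair energy with a cluster expansion evaluated on the BCC lattice, and tracking composition profiles instead of bond counts, recovers the structure of the simulations described in the abstract.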
SciThur AM: YIS - 04: Gold Nanoparticle Enhanced Arc Radiotherapy: A Monte Carlo Feasibility Study
Koger, B; Kirkby, C
2014-08-15
Introduction: The use of gold nanoparticles (GNPs) in radiotherapy has shown promise for therapeutic enhancement. In this study, we explore the feasibility of enhancing radiotherapy with GNPs in an arc-therapy context. We use Monte Carlo simulations to quantify the macroscopic dose-enhancement ratio (DER) and tumour-to-normal-tissue ratio (TNTR) as functions of photon energy over various tumour and body geometries. Methods: GNP-enhanced arc radiotherapy (GEART) was simulated using the PENELOPE Monte Carlo code and the penEasy main program. We simulated 360° arc therapy with monoenergetic photon beams (50-1000 keV) and several clinical spectra, used to treat a spherical tumour containing uniformly distributed GNPs in a cylindrical tissue phantom. Various geometries were used to simulate different tumour sizes and depths. Voxel dose was used to calculate DERs and TNTRs. Inhomogeneity effects were examined through skull dose in brain-tumour treatment simulations. Results: Below 100 keV, DERs greater than 2.0 were observed. Compared to 6 MV, tumour dose at low energies was more conformal, with lower normal tissue dose and higher TNTRs. Both the DER and TNTR increased with increasing cylinder radius and decreasing tumour radius. The inclusion of bone showed excellent tumour conformality at low energies, though with an increase in skull dose (40% of tumour dose at 100 keV compared to 25% at 6 MV). Conclusions: Even in the presence of inhomogeneities, our results show promise for the treatment of deep-seated tumours with low-energy GEART, with greater tumour dose conformality and lower normal tissue dose than 6 MV.
SU-E-T-277: RayStation Electron Monte Carlo Commissioning and Clinical Implementation
Allen, C; Sansourekidou, P; Pavord, D
2014-06-01
Purpose: To evaluate the RayStation v4.0 electron Monte Carlo algorithm for an Elekta Infinity linear accelerator and commission it for clinical use. Methods: A total of 199 tests were performed (75 export and documentation, 20 PDD, 30 profiles, 4 obliquity, 10 inhomogeneity, 55 MU accuracy, and 5 grid and particle history). Export and documentation tests were performed with respect to MOSAIQ (Elekta AB) and RadCalc (Lifeline Software Inc). Mechanical jaw parameters and cutout magnifications were verified. PDDs and profiles for open cones and cutouts were extracted and compared with water tank measurements. Obliquity and inhomogeneity calculations for bone and air were compared to film dosimetry. MU calculations for open cones and cutouts were performed and compared to both RadCalc and simple hand calculations. Grid size and particle histories were evaluated per energy for statistical-uncertainty performance. Acceptability was categorized as follows: performs as expected, negligible impact on workflow, marginal impact, critical impact or safety concern, and catastrophic impact or safety concern. Results: Overall results are: 88.8% perform as expected, 10.2% negligible, 2.0% marginal, 0% critical, and 0% catastrophic. Results per test category are as follows: export and documentation: 100% perform as expected; PDD: 100% perform as expected; profiles: 66.7% perform as expected, 33.3% negligible; obliquity: 100% marginal; inhomogeneity: 50% perform as expected, 50% negligible; MU accuracy: 100% perform as expected; grid and particle histories: 100% negligible. To achieve distributions with a satisfactory smoothness level, 5,000,000 particle histories were used; calculation time was approximately 1 hour. Conclusion: RayStation electron Monte Carlo is acceptable for clinical use. All of the issues encountered have acceptable workarounds. Known issues were reported to RaySearch and will be resolved in upcoming releases.
Neutrinos from WIMP annihilations obtained using a full three-flavor Monte Carlo approach
Blennow, Mattias; Ohlsson, Tommy; Edsjoe, Joakim E-mail: edsjo@physto.se
2008-01-15
Weakly interacting massive particles (WIMPs) are one of the main candidates for making up the dark matter in the Universe. If these particles make up the dark matter, then they can be captured by the Sun or the Earth, sink to the respective cores, annihilate, and produce neutrinos. Thus, these neutrinos can be a striking dark matter signature at neutrino telescopes looking towards the Sun and/or the Earth. Here, we improve previous analyses on computing the neutrino yields from WIMP annihilations in several respects. We include neutrino oscillations in a full three-flavor framework as well as all effects from neutrino interactions on the way through the Sun (absorption, energy loss, and regeneration from tau decays). In addition, we study the effects of non-zero values of the mixing angle θ13 as well as the normal and inverted neutrino mass hierarchies. Our study is performed in an event-based setting which makes these results very useful both for theoretical analyses and for building a neutrino telescope Monte Carlo code. All our results for the neutrino yields, as well as our Monte Carlo code, are publicly available. We find that the yield of muon-type neutrinos from WIMP annihilations in the Sun is enhanced or suppressed, depending on the dominant WIMP annihilation channel. This effect is due to an effective flavor mixing caused by neutrino oscillations. For WIMP annihilations inside the Earth, the distance from source to detector is too small to allow for any significant amount of oscillations at the neutrino energies relevant for neutrino telescopes.
Cluster expansion modeling and Monte Carlo simulation of alnico 5–7 permanent magnets
Nguyen, Manh Cuong; Zhao, Xin; Wang, Cai -Zhuang; Ho, Kai -Ming
2015-03-05
Concerns about the supply of rare earth (RE) metals have generated a great deal of interest in the search for high-performance RE-free permanent magnets. Alnico alloys are traditional non-RE permanent magnets and have received much attention recently due to their good performance at high temperature. In this paper, we develop an accurate and efficient cluster expansion energy model for alnico 5–7. Monte Carlo simulations using the cluster expansion method are performed to investigate the structure of alnico 5–7 at atomistic and nano scales. The alnico 5–7 master alloy is found to decompose into FeCo-rich and NiAl-rich phases at low temperature. The boundary between these two phases is quite sharp (~2 nm) for a wide range of temperatures. The compositions of the main constituents in these two phases become higher as the temperature is lowered. Both FeCo-rich and NiAl-rich phases are in B2 ordering with Fe and Al on the α-site and Ni and Co on the β-site. The degree of order of the NiAl-rich phase is much higher than that of the FeCo-rich phase. In addition, a small magnetic moment is also observed in the NiAl-rich phase, but the moment reduces as the temperature is lowered, implying that the magnetic properties of alnico 5–7 could be improved by lowering the annealing temperature to diminish the magnetism in the NiAl-rich phase. Furthermore, the results from our Monte Carlo simulations are consistent with available experimental results.
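The B2 ordering described above can be illustrated with something far simpler than the paper's fitted cluster expansion: a toy Metropolis simulation on a periodic square lattice in which a single pair energy J (an illustrative assumption, not the alnico Hamiltonian) favors unlike nearest neighbors, so an equiatomic mixture anneals from a random solid solution toward a checkerboard analog of B2 order under composition-conserving swap moves:

```python
import math
import random

def anneal_b2_toy(L=8, steps=50_000, T=0.5, seed=2):
    """Toy Metropolis annealing of a binary alloy on an L x L periodic
    square lattice with swap (composition-conserving) moves.

    A single coupling J > 0 penalizes like nearest neighbors, a crude
    stand-in for a cluster expansion Hamiltonian; the equiatomic mixture
    then orders into a checkerboard (B2-like) pattern at low T.
    """
    rng = random.Random(seed)
    sites = [1] * (L * L // 2) + [-1] * (L * L // 2)  # A = +1, B = -1
    rng.shuffle(sites)
    spin = {(i, j): sites[i * L + j] for i in range(L) for j in range(L)}
    J = 1.0

    def neighbors(i, j):
        return [((i + 1) % L, j), ((i - 1) % L, j),
                (i, (j + 1) % L), (i, (j - 1) % L)]

    def local_e(p):
        return J * sum(spin[p] * spin[q] for q in neighbors(*p))

    def total_e():
        return 0.5 * sum(local_e(p) for p in spin)  # halve double-counted bonds

    e_init = total_e()
    for _ in range(steps):
        a = (rng.randrange(L), rng.randrange(L))
        b = (rng.randrange(L), rng.randrange(L))
        if spin[a] == spin[b]:
            continue
        e0 = local_e(a) + local_e(b)
        spin[a], spin[b] = spin[b], spin[a]
        e1 = local_e(a) + local_e(b)
        if e1 > e0 and rng.random() >= math.exp(-(e1 - e0) / T):
            spin[a], spin[b] = spin[b], spin[a]  # reject: undo the swap
    # staggered order parameter: 1.0 for a perfect checkerboard
    order = abs(sum(spin[(i, j)] * (-1) ** (i + j)
                    for i in range(L) for j in range(L))) / (L * L)
    return e_init, total_e(), order

e_init, e_final, order = anneal_b2_toy()
```

The same move set with a realistic multi-body cluster expansion in place of J is, in outline, how atomistic MC studies of ordering alloys proceed.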
Bao, Chen; Wu, Hongfei; Li, Li; Newcomer, Darrell R.; Long, Philip E.; Williams, Kenneth H.
2014-09-02
We aim to understand the scale-dependent evolution of uranium bioreduction during a field experiment at a former uranium mill site near Rifle, Colorado. Acetate was injected to stimulate Fe-reducing bacteria (FeRB) and to immobilize uranium by reducing aqueous U(VI) to insoluble U(IV). Bicarbonate was co-injected in half of the domain to mobilize sorbed U(VI). We used reactive transport modeling to integrate hydraulic and geochemical data and to quantify rates at the grid-block (0.25 m) and experimental field scale (tens of meters). Although local rates varied by orders of magnitude in conjunction with biostimulation fronts propagating downstream, field-scale rates were dominated by orders-of-magnitude-higher rates at a few hot spots where Fe(III), U(VI), and FeRB were at their maxima in the vicinity of the injection wells. At particular locations, the hot moments with maximum rates corresponded negatively to their distance from the injection wells. Although bicarbonate injection enhanced local rates near the injection wells by a maximum of 39.4%, its effect at the field scale was limited to a maximum of 10.0%. We propose a rate-versus-measurement-length relationship (log R' = -0.63
Umegaki, K; Matsuura, T.; Takao, S.; Nihongi, H.; Yamada, T.; Miyamoto, N.; Shimizu, S.; Shirato, H.; Matsuda, K.; Nakamura, F.; Umezawa, M.; Hiramoto, K.
2014-06-01
Purpose: A novel Proton Beam Therapy system has been developed by integrating Real-Time Tumor-Tracking (RTRT) and discrete spot scanning techniques. The system, dedicated to spot scanning, offers significant advantages from both clinical and economic points of view. It has the ability to control the dose distribution with spot scanning beams and to gate the beams from the synchrotron so that moving tumors are irradiated only when their actual positions are within the planned position. Methods: The newly designed system consists of a synchrotron, beam transport systems, and a compact rotating gantry system with a robotic couch and two orthogonal sets of X-ray fluoroscopes. The fully compact design of the system has been realized by reducing the maximum energy of the beam to 220 MeV, corresponding to a 30 g/cm2 range, and the number of circulating protons per synchrotron operation cycle, owing to the higher beam utilization efficiency of spot scanning. To improve the irradiation efficiency in the integration of RTRT and spot scanning, a new control system has been developed to enable multiple gated irradiations per operation cycle according to the gating signals. After completion of the equipment installation, beam tests and commissioning were successfully performed. Results: The basic performance and beam characteristics from the synchrotron accelerator to the isocenter have been confirmed, and the performance tests of the irradiation nozzle and the whole system have been appropriately completed. CBCT images were checked and sufficient quality was obtained. The RTRT system has been demonstrated and realized accurate dose distributions for moving targets. Conclusion: The gated spot scanning Proton Beam Therapy system with Real-Time Tumor-Tracking has been developed, successfully installed, and tested.
The new system enables delivery of higher doses to moving target tumors while sparing surrounding normal tissue, and realizes a compact design of the system and facility by maximizing the efficiency of proton beam utilization. This research is supported by the Japan Society for the Promotion of Science (JSPS) through the Funding Program for World-Leading Innovative R&D on Science and Technology (FIRST Program), initiated by the Council for Science and Technology Policy (CSTP).
Camden, Jon P
2013-07-16
A major component of this proposal is to elucidate the connection between optical and electron excitation of plasmon modes in metallic nanostructures. These accomplishments are reported: developed a routine protocol for obtaining spatially resolved, low-energy EELS spectra and resonance Rayleigh scattering spectra from the same nanostructures; correlated optical scattering spectra and plasmon maps obtained using STEM/EELS; and imaged electromagnetic hot spots responsible for single-molecule surface-enhanced Raman scattering (SMSERS).
Wagner, John C; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Turner, John A
2011-01-01
This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. 
The hybrid method development is based on an extension of the FW-CADIS method, which attempts to achieve uniform statistical uncertainty throughout a designated problem space. The MC DD development is being implemented in conjunction with the Denovo deterministic radiation transport package to have direct access to the 3-D, massively parallel discrete-ordinates solver (to support the hybrid method) and the associated parallel routines and structure. This paper describes the hybrid method, its implementation, and initial testing results for a realistic 2-D quarter core pressurized-water reactor model and also describes the MC DD algorithm and its implementation.
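FW-CADIS builds space- and energy-dependent weight windows from a deterministic adjoint solution; the sketch below shows only the underlying principle that replacing analog absorption with carried statistical weight can drive tally variance toward zero. It uses simple survival biasing in a toy multi-cell deep-penetration problem, with all parameters illustrative assumptions rather than anything from the paper:

```python
import math
import random

def deep_penetration(n_histories, ncells=15, p=0.6, seed=3, biased=False):
    """Estimate the probability that a particle crosses ncells cells,
    each crossed with probability p.

    Analog mode absorbs the particle outright; biased mode (survival
    biasing) never absorbs but multiplies the statistical weight by p
    per cell, so every history scores exactly p**ncells and the tally
    variance vanishes. Toy illustration only; FW-CADIS instead derives
    weight windows from a deterministic adjoint solution.
    """
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n_histories):
        w, alive = 1.0, True
        for _ in range(ncells):
            if biased:
                w *= p                # carry reduced weight onward
            elif rng.random() >= p:
                alive = False         # analog absorption: history ends
                break
        score = w if alive else 0.0
        total += score
        total_sq += score * score
    mean = total / n_histories
    var = max(total_sq / n_histories - mean * mean, 0.0)
    return mean, math.sqrt(var / n_histories)

analog_mean, analog_err = deep_penetration(50_000)
biased_mean, biased_err = deep_penetration(50_000, biased=True)
```

In the analog run only a tiny fraction of histories reach the deep tally, so its standard error dwarfs the survival-biased one; production schemes add splitting and rouletting to keep weights inside windows rather than letting them shrink monotonically.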
MO-G-BRF-09: Investigating Magnetic Field Dose Effects in Mice: A Monte Carlo Study
Rubinstein, A; Guindani, M; Followill, D; Melancon, A; Hazle, J; Court, L
2014-06-15
Purpose: In MRI-linac treatments, radiation dose distributions are affected by magnetic fields, especially at high-density/low-density interfaces. Radiobiological consequences of magnetic field dose effects are presently unknown; therefore, preclinical studies are needed to ensure the safe clinical use of MRI-linacs. This study investigates the optimal combination of beam energy and magnetic field strength needed for preclinical murine studies. Methods: The Monte Carlo code MCNP6 was used to simulate the effects of a magnetic field when irradiating a mouse-sized lung phantom with a 1.0 cm x 1.0 cm photon beam. Magnetic field effects were examined using various beam energies (225 kVp, 662 keV [Cs-137], and 1.25 MeV [Co-60]) and magnetic field strengths (0.75 T, 1.5 T, and 3 T). The resulting dose distributions were compared to Monte Carlo results for humans with various field sizes and patient geometries using a 6 MV/1.5 T MRI-linac. Results: In human simulations, the addition of a 1.5 T magnetic field caused an average dose increase of 49% (range: 36%-60%) to lung at the soft tissue-to-lung interface and an average dose decrease of 30% (range: 25%-36%) at the lung-to-soft tissue interface. In mouse simulations, the magnetic fields had no effect on the 225 kVp dose distribution. The dose increases for the Cs-137 beam were 12%, 33%, and 49% for 0.75 T, 1.5 T, and 3.0 T magnetic fields, respectively, while the dose decreases were 7%, 23%, and 33%. For the Co-60 beam, the dose increases were 14%, 45%, and 41%, and the dose decreases were 18%, 35%, and 35%. Conclusion: The magnetic field dose effects observed in mouse phantoms using a Co-60 beam with 1.5 T or 3 T fields and a Cs-137 beam with a 3 T field compare well with those seen in simulated human treatments with an MRI-linac. These irradiator/magnet combinations are suitable for preclinical studies investigating potential biological effects of delivering radiation therapy in the presence of a magnetic field. Partially funded by Elekta.
Monte Carlo simulation based study of a proposed multileaf collimator for a telecobalt machine
Sahani, G.; Dash Sharma, P. K.; Hussain, S. A.; Dutt Sharma, Sunil; Sharma, D. N.
2013-02-15
Purpose: The objective of the present work was to propose a design of a secondary multileaf collimator (MLC) for a telecobalt machine and optimize its design features through Monte Carlo simulation. Methods: The proposed MLC design consists of 72 leaves (36 leaf pairs) with additional jaws perpendicular to the leaf motion, capable of shaping a maximum square field size of 35 x 35 cm². The projected widths at isocenter of each of the central 34 leaf pairs and the 2 peripheral leaf pairs are 10 and 5 mm, respectively. The ends of the leaves and the x-jaws were optimized to obtain acceptable values of the dosimetric and leakage parameters. The Monte Carlo N-Particle code was used for generating beam profiles and depth dose curves and for estimating the leakage radiation through the MLC. A water phantom of dimensions 50 x 50 x 40 cm³ with an array of voxels (4 x 0.3 x 0.6 cm³ = 0.72 cm³ each) was used for the study of the dosimetric and leakage characteristics of the MLC. Output files generated for beam profiles were exported to the PTW radiation field analyzer software through locally developed software for analysis of the beam profiles in order to evaluate radiation field width, beam flatness, symmetry, and beam penumbra. Results: The optimized version of the MLC can define radiation fields of up to 35 x 35 cm² within the prescribed tolerance value of 2 mm. The flatness and symmetry were found to be well within the acceptable tolerance value of 3%. The penumbra for a 10 x 10 cm² field size is 10.7 mm, which is less than the generally acceptable value of 12 mm for a telecobalt machine. The maximum and average radiation leakage through the MLC were found to be 0.74% and 0.41%, which are well below the International Electrotechnical Commission recommended tolerance values of 2% and 0.75%, respectively.
The maximum leakage through the leaf ends in closed condition was observed to be 8.6% which is less than the values reported for other MLCs designed for medical linear accelerators. Conclusions: It is concluded that dosimetric parameters and the leakage radiation of the optimized secondary MLC design are well below their recommended tolerance values. The optimized design of the proposed MLC can be integrated into a telecobalt machine by replacing the existing adjustable secondary collimator for conformal radiotherapy treatment of cancer patients.
Pugh, Thomas J. [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Munsell, Mark F. [Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Choi, Seungtaek; Nguyen, Quyhn Nhu; Mathai, Benson [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Zhu, X. Ron; Sahoo, Narayan; Gillin, Michael; Johnson, Jennifer L.; Amos, Richard A. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Dong, Lei [Scripps Proton Therapy Center, San Diego, California (United States); Mahmood, Usama; Kuban, Deborah A.; Frank, Steven J.; Hoffman, Karen E.; McGuire, Sean E. [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Lee, Andrew K., E-mail: aklee@mdanderson.org [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)
2013-12-01
Purpose: To report quality of life (QOL)/toxicity in men treated with proton beam therapy for localized prostate cancer and to compare outcomes between passively scattered proton therapy (PSPT) and spot-scanning proton therapy (SSPT). Methods and Materials: Men with localized prostate cancer enrolled on a prospective QOL protocol with a minimum of 2 years' follow-up were reviewed. Comparative groups were defined by technique (PSPT vs SSPT). Patients completed Expanded Prostate Cancer Index Composite questionnaires at baseline and every 3-6 months after proton beam therapy. Clinically meaningful differences in QOL were defined as ≥0.5 baseline standard deviation. The cumulative incidence of modified Radiation Therapy Oncology Group grade ≥2 gastrointestinal (GI) or genitourinary (GU) toxicity and argon plasma coagulation were determined by the Kaplan-Meier method. Results: A total of 226 men received PSPT, and 65 received SSPT. Both PSPT and SSPT resulted in statistically significant changes in sexual, urinary, and bowel Expanded Prostate Cancer Index Composite summary scores. Only bowel summary, function, and bother resulted in clinically meaningful decrements beyond treatment completion. The decrement in bowel QOL persisted through 24-month follow-up. Cumulative grade ≥2 GU and GI toxicity at 24 months were 13.4% and 9.6%, respectively. There was 1 grade 3 GI toxicity (PSPT group) and no other grade ≥3 GI or GU toxicity. Argon plasma coagulation application was infrequent (PSPT 4.4% vs SSPT 1.5%; P=.21). No statistically significant differences were appreciated between PSPT and SSPT regarding toxicity or QOL. Conclusion: Both PSPT and SSPT confer low rates of grade ≥2 GI or GU toxicity, with preservation of meaningful sexual and urinary QOL at 24 months. A modest, yet clinically meaningful, decrement in bowel QOL was seen throughout follow-up. No toxicity or QOL differences between PSPT and SSPT were identified.
Long-term comparative results in a larger patient cohort are warranted.
Energy Science and Technology Software Center (OSTI)
2013-05-06
Set of scripts (Python and Bash) to help users configure, run, and benchmark Hadoop clusters on ORNL computing infrastructure.
Monte Carlo modeling of neutron and gamma-ray imaging systems
Hall, J.
1996-04-01
Detailed numerical prototypes are essential to the design of efficient and cost-effective neutron and gamma-ray imaging systems. We have exploited the unique capabilities of an LLNL-developed radiation transport code (COG) to develop code modules capable of simulating the performance of neutron and gamma-ray imaging systems over a wide range of source energies. COG allows us to simulate complex, energy-, angle-, and time-dependent radiation sources, model 3-dimensional system geometries with "real world" complexity, specify detailed elemental and isotopic distributions, and predict the responses of various types of imaging detectors with full Monte Carlo accuracy. COG references detailed, evaluated nuclear interaction databases, allowing users to account for multiple scattering, energy straggling, and secondary particle production phenomena which may significantly affect the performance of an imaging system but may be difficult or even impossible to estimate using simple analytical models. This work presents examples illustrating the use of these routines in the analysis of industrial radiographic systems for thick target inspection, nonintrusive luggage and cargo scanning systems, and international treaty verification.
Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams
Vandervoort, Eric J.; Cygler, Joanna E.; The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5; Department of Physics, Carleton University, Ottawa, Ontario K1S 5B6; Tchistiakova, Ekaterina; Department of Medical Biophysics, University of Toronto, Ontario M5G 2M9; Heart and Stroke Foundation Centre for Stroke Recovery, Sunnybrook Research Institute, University of Toronto, Ontario M4N 3M5; La Russa, Daniel J.; The Faculty of Medicine, The University of Ottawa, Ottawa, Ontario K1H 8M5
2014-02-15
Purpose: In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Methods: Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measurements and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Results: Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 x 5 cm². Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (the minimum percentage of measurements passing a 2%/2 mm 2D gamma index criterion, over all applicators and energies, was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a 3D 3%/2 mm γ-criterion) provided that the steep dose gradient in the depth direction is considered. Conclusions: Clinically acceptable agreement (at the 2%/2 mm level) between measurements and calculated data in water is obtained for this dose calculation algorithm. Radiochromic film is a useful tool to evaluate the accuracy of electron MC treatment planning systems in heterogeneous media.
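The gamma index used throughout these comparisons folds a dose difference and a distance-to-agreement into one pass/fail number per point. A minimal 1D, globally normalized version is sketched below; the dose grids and tolerances are illustrative assumptions, and clinical implementations additionally interpolate the evaluated distribution:

```python
import math

def gamma_index_1d(ref, eva, dx, dose_tol=0.02, dist_tol=2.0):
    """Global 1D gamma index (e.g. 2%/2 mm).

    ref and eva are dose samples on a common grid with spacing dx (mm);
    dose_tol is a fraction of the reference maximum (global normalization)
    and dist_tol is the distance-to-agreement criterion in mm. A reference
    point passes when its gamma value is <= 1. No interpolation of the
    evaluated distribution is performed in this sketch.
    """
    dmax = max(ref)
    gammas = []
    for i, r in enumerate(ref):
        best = float("inf")
        for j, e in enumerate(eva):
            dose_term = (e - r) / (dose_tol * dmax)
            dist_term = (j - i) * dx / dist_tol
            best = min(best, math.hypot(dose_term, dist_term))
        gammas.append(best)
    return gammas

# hypothetical depth-dose samples on a 1 mm grid
ref = [0.1, 0.5, 1.0, 0.5, 0.1]
eva = [0.1, 0.51, 0.99, 0.49, 0.1]
g = gamma_index_1d(ref, eva, dx=1.0)
pass_rate = sum(1 for x in g if x <= 1.0) / len(g)
```

The reported pass rates (97%, >98%) are exactly this fraction of points with gamma at or below one, computed in 2D or 3D.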
Cascade annealing simulations of bcc iron using object kinetic Monte Carlo
Xu, Haixuan; Osetskiy, Yury N; Stoller, Roger E
2012-01-01
Simulations of displacement cascade annealing were carried out using object kinetic Monte Carlo based on an extensive MD database including various primary knock-on atom energies and directions. The sensitivity of the results to a broad range of material and model parameters was examined. The diffusion mechanism of interstitial clusters has been identified to have the most significant impact on the fraction of stable interstitials that escape the cascade region. The maximum level of recombination was observed for the limiting case in which all interstitial clusters exhibit 3D random walk diffusion. The OKMC model was parameterized using two alternative sets of defect migration and binding energies, one from ab initio calculations and the second from an empirical potential. The two sets of data predict essentially the same fraction of surviving defects but different times associated with the defect escape processes. This study provides a comprehensive picture of the first phase of long-term defect evolution in bcc iron and generates information that can be used as input data for mean field rate theory (MFRT) to predict the microstructure evolution of materials under irradiation. In addition, the limitations of the current OKMC model are discussed and a potential way to overcome these limitations is outlined.
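The core of an object kinetic Monte Carlo code is the residence-time (BKL/Gillespie) loop: pick an event with probability proportional to its rate, then advance the clock by an interval that is exponentially distributed with mean one over the total rate. A minimal kernel is sketched below; the event names and rate values are illustrative assumptions, not the paper's parameterization of defect migration and binding:

```python
import math
import random

def kmc_residence_time(rates, n_events=1000, seed=4):
    """Residence-time (BKL/Gillespie) kinetic Monte Carlo kernel.

    rates maps event name -> rate (1/s). Each step selects an event with
    probability proportional to its rate and advances the clock by
    -ln(u) / R_total. Returns (elapsed time, event counts). Illustrative
    kernel only; a real OKMC code also updates the rate table as defects
    move, cluster, and dissociate.
    """
    rng = random.Random(seed)
    names = list(rates)
    cum, total = [], 0.0
    for n in names:
        total += rates[n]
        cum.append(total)
    t, counts = 0.0, {n: 0 for n in names}
    for _ in range(n_events):
        u = rng.random() * total
        k = next(i for i, c in enumerate(cum) if u < c)
        counts[names[k]] += 1
        t += -math.log(1.0 - rng.random()) / total  # Exp(R_total) waiting time
    return t, counts

# hypothetical rates: a fast interstitial hop vs a slow vacancy hop
t, counts = kmc_residence_time({"I_hop": 1e6, "V_hop": 1e2})
```

Because fast events dominate the event selection, rate disparities like the 3D-migrating interstitial clusters discussed above directly control how quickly defects escape the cascade region in simulated time.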
Collapse transitions in thermosensitive multi-block copolymers: A Monte Carlo study
Rissanou, Anastassia N.; Tzeli, Despoina S.; Anastasiadis, Spiros H.; Bitsanis, Ioannis A.
2014-05-28
Monte Carlo simulations are performed on a simple cubic lattice to investigate the behavior of a single linear multiblock copolymer chain of various lengths N. The chain, of type (AnBn)m, consists of alternating A and B blocks, where A are solvophilic and B are solvophobic and N = 2nm. The conformations are classified into five cases of globule formation by the solvophobic blocks of the chain. The dependence of the globule characteristics on the molecular weight and on the number of blocks that participate in their formation is examined. The focus is on relatively high molecular weight blocks (i.e., N in the range of 500-5000 units) and very different energetic conditions for the two blocks (a very good, almost athermal, solvent for A and a bad solvent for B). A rich phase behavior is observed as a result of the alternating architecture of the multiblock copolymer chain. We trust that thermodynamic equilibrium has been reached for chains of N up to 2000 units; however, for longer chains kinetic entrapments are observed. The comparison among equivalent globules consisting of different numbers of B-blocks shows that the more solvophobic blocks constitute the globule, the bigger its radius of gyration and the looser its structure. Comparisons between globules formed by the solvophobic blocks of the multiblock copolymer chain and their homopolymer analogs highlight the important role of the solvophilic A-blocks.
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
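The proposal at the heart of DREAM is the Differential Evolution move: each chain jumps along the difference of two other randomly chosen chains, so the proposal scale and orientation adapt automatically to the shape of the posterior. A 1D sketch of that core move follows; DREAM itself adds randomized subspace updates and adaptive crossover on top, and the target and settings here are illustrative:

```python
import math
import random

def de_mc_sample(logpost, n_chains=6, n_iter=4000, seed=5):
    """Differential Evolution Markov chain sampler (DE-MC), the proposal
    mechanism underlying DREAM, for a 1D log-posterior.

    Each chain proposes x_i + gamma * (x_a - x_b) + eps using two other
    randomly chosen chains, then applies a Metropolis accept/reject.
    """
    rng = random.Random(seed)
    gamma = 2.38 / math.sqrt(2.0)   # recommended scaling for dimension d = 1
    x = [rng.uniform(-3.0, 3.0) for _ in range(n_chains)]
    lp = [logpost(v) for v in x]
    samples = []
    for it in range(n_iter):
        for i in range(n_chains):
            a, b = rng.sample([j for j in range(n_chains) if j != i], 2)
            prop = x[i] + gamma * (x[a] - x[b]) + rng.gauss(0.0, 1e-4)
            lp_prop = logpost(prop)
            if rng.random() < math.exp(min(0.0, lp_prop - lp[i])):
                x[i], lp[i] = prop, lp_prop
        if it >= n_iter // 2:        # discard the first half as burn-in
            samples.extend(x)
    return samples

# target: standard normal, log-density up to an additive constant
draws = de_mc_sample(lambda v: -0.5 * v * v)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Because the jump is built from the current population, no hand-tuned proposal covariance is needed, which is precisely the slow-convergence failure mode the abstract describes.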
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte Carlo-deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross sections. These models are typically expensive and need to be executed on the order of 10^3 - 10^5 times to properly characterize the few-group cross sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
The hydrophobic effect in a simple isotropic water-like model: Monte Carlo study
Huš, Matej; Urbic, Tomaz
2014-04-14
Using Monte Carlo computer simulations, we show that a simple isotropic water-like model with two characteristic lengths can reproduce the hydrophobic effect and the solvation properties of small and large non-polar solutes. The influence of temperature, pressure, and solute size on the thermodynamic properties of apolar solute solvation in the water model was systematically studied, showing two different solvation regimes. Small particles can fit into the cavities around the solvent particles, inducing additional order in the system and lowering the overall entropy. Large particles force the solvent to disrupt its network, increasing the entropy of the system. At low temperatures, the ordering effect of small solutes is very pronounced. Above the cross-over temperature, which strongly depends on the solute size, the entropy change becomes strictly positive. Pressure dependence was also investigated, showing a "cross-over pressure" where the entropy and enthalpy of solvation are the lowest. These results suggest two fundamentally different solvation mechanisms, as observed experimentally in water and computationally in various water-like models.
Uribe, R. M.; Salvat, F.; Cleland, M. R.; Berejka, A.
2009-03-10
The Monte Carlo code PENELOPE was used to simulate the irradiation of alanine coated film dosimeters with electron beams of energies from 1 to 5 MeV produced by a high-current industrial electron accelerator. This code includes a geometry package that defines complex quadric geometries, such as those encountered when irradiating products in an irradiation processing facility. In the present case, the energy deposited in a water film at the surface of a wood parallelepiped was calculated using the program PENMAIN, which is a generic main program included in the PENELOPE distribution package. The results from the simulation were then compared with measurements performed by irradiating alanine film dosimeters with electrons using a 150 kW Dynamitron electron accelerator. The alanine films were placed on top of a set of wooden planks using the same geometrical arrangement as the one used for the simulation. The way the results from the simulation can be correlated with the actual measurements, taking into account the irradiation parameters, is described. An estimate of the percentage difference between measurements and calculations is also presented.
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
Harrisson, G.; Marleau, G.
2012-07-01
The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06, together with different microscopic cross-section libraries based on the ENDF/B-VII.0 evaluated nuclear data file, have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option for this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give results most consistent with those of SERPENT. (authors)
Calculation of complete fusion cross sections of heavy ion reactions using the Monte Carlo method
Ghodsi, O. N.; Mahmoodi, M.; Ariai, J.
2007-03-15
The nucleus-nucleus potential for the fusion reactions ⁴⁰Ca+⁴⁸Ca, ¹⁶O+²⁰⁸Pb, and ⁴⁸Ca+⁴⁸Ca has been calculated using the Monte Carlo method. The results obtained indicate that the technique employed for the calculation of the nucleus-nucleus potential is an efficient one. The effects of the spin and isospin terms have also been studied using the same technique. The analysis of the results obtained for the ⁴⁸Ca+⁴⁸Ca reaction reveals that the isospin-dependent term in the nucleon-nucleon potential causes the nuclear potential to drop by 0.5 MeV. The analytical calculations of the fusion cross section, particularly those at energies below the fusion barrier, are in good agreement with the experimental data. In these calculations the effective nucleon-nucleon potential chosen is of the M3Y-Paris form, and no adjustable parameter has been used.
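The Monte Carlo evaluation of a folding-type nucleus-nucleus integral can be illustrated with a toy example. The sketch below assumes Gaussian densities and a Gaussian interaction purely so that the 6-dimensional integral has a closed form to check against; it is not the M3Y-Paris calculation of the paper.

```python
import numpy as np

def folding_mc(n_samples=200_000, shift=(0.0, 0.0, 0.0), sigma=0.5 ** 0.5, seed=1):
    """Monte Carlo estimate of a double-folding-type 6-D integral.

    I(R) = ∫∫ rho1(r1) rho2(r2) v(|r1 - r2 + R|) d³r1 d³r2 is estimated by
    sampling nucleon positions from the two (here Gaussian) densities and
    averaging the interaction v. Gaussian densities and a Gaussian v are
    assumptions chosen so the exact answer is known; the paper uses M3Y-Paris.
    """
    rng = np.random.default_rng(seed)
    r1 = rng.normal(scale=sigma, size=(n_samples, 3))
    r2 = rng.normal(scale=sigma, size=(n_samples, 3))
    s = r1 - r2 + np.asarray(shift)
    v = np.exp(-np.sum(s * s, axis=1))           # toy Gaussian interaction
    return v.mean(), v.std() / np.sqrt(n_samples)

estimate, stderr = folding_mc()
# For these Gaussians at R = 0 the exact value is 3**-1.5 ≈ 0.1925
```

Sampling directly from the densities is what makes the method efficient: no 6-D quadrature grid is ever built, and the statistical error falls as 1/sqrt(N) regardless of dimension.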
Boscoboinik, A. M.; Manzi, S. J.; Tysoe, W. T.; Pereyra, V. D.; Boscoboinik, J. A.
2015-09-10
The influence of directing agents in the self-assembly of molecular wires to produce two-dimensional electronic nanoarchitectures is studied here using a Monte Carlo approach to simulate the effect of arbitrarily locating nodal points on a surface, from which the growth of self-assembled molecular wires can be nucleated. This is compared to experimental results reported for the self-assembly of molecular wires when 1,4-phenylenediisocyanide (PDI) is adsorbed on Au(111). The latter results in the formation of (Au-PDI)_{n} organometallic chains, which were shown to be conductive when linked between gold nanoparticles on an insulating substrate. The present study analyzes, by means of stochastic methods, the influence of variables that affect the growth and design of self-assembled conductive nanoarchitectures, such as the distance between nodes, coverage of the monomeric units that leads to the formation of the desired architectures, and the interaction between the monomeric units. As a result, this study proposes an approach and sets the stage for the production of complex 2D nanoarchitectures using a bottom-up strategy but including the use of current state-of-the-art top-down technology as an integral part of the self-assembly strategy.
Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods
Hehr, Brian Douglas
2014-11-25
The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) Blue Room facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.
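The residence-time stepping at the heart of a kinetic Monte Carlo code can be illustrated with a deliberately minimal event catalogue. The sketch below assumes a single first-order annihilation channel, not the defect-defect and defect-dopant reactions of the actual code.

```python
import random

def kmc_anneal(n_defects=50, rate=2.0, seed=None):
    """Residence-time (Gillespie-type) kinetic Monte Carlo sketch.

    Each surviving defect annihilates independently at a fixed first-order
    rate; at every step the total rate sets an exponentially distributed
    waiting time and one event is executed. A real KMC code would draw from
    a catalogue of defect-defect and defect-dopant reactions instead.
    """
    rng = random.Random(seed)
    t, n = 0.0, n_defects
    while n > 0:
        total_rate = n * rate            # sum of rates of all possible events
        t += rng.expovariate(total_rate) # advance the simulation clock
        n -= 1                           # execute one annihilation event
    return t

# Mean time to anneal everything is H_n / rate (H_n = n-th harmonic number)
times = [kmc_anneal(seed=s) for s in range(200)]
mean_time = sum(times) / len(times)
```

Because the clock advances by the waiting time between events rather than by a fixed MD timestep, the same loop reaches second-to-hour annealing timescales that are inaccessible to molecular dynamics.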
Electrolyte pore/solution partitioning by expanded grand canonical ensemble Monte Carlo simulation
Moucka, Filip; Bratko, Dusan; Luzar, Alenka
2015-03-28
Using a newly developed grand canonical Monte Carlo approach based on fractional exchanges of dissolved ions and water molecules, we studied equilibrium partitioning of both components between laterally extended apolar confinements and surrounding electrolyte solution. Accurate calculations of the Hamiltonian and tensorial pressure components at anisotropic conditions in the pore required the development of a novel algorithm for a self-consistent correction of nonelectrostatic cut-off effects. At pore widths above the kinetic threshold to capillary evaporation, the molality of the salt inside the confinement grows in parallel with that of the bulk phase, but presents a nonuniform width-dependence, being depleted at some and elevated at other separations. The presence of the salt enhances the layered structure in the slit and lengthens the range of inter-wall pressure exerted by the metastable liquid. Solvation pressure becomes increasingly repulsive with growing salt molality in the surrounding bath. Depending on the sign of the excess molality in the pore, the wetting free energy of pore walls is either increased or decreased by the presence of the salt. Because of simultaneous rise in the solution surface tension, which increases the free-energy cost of vapor nucleation, the rise in the apparent hydrophobicity of the walls has not been shown to enhance the volatility of the metastable liquid in the pores.
Krueger, Rachel A.; Haibach, Frederick G.; Fry, Dana L.; Gomez, Maria A.
2015-04-21
A centrality measure based on the time of first returns rather than the number of steps is developed and applied to finding proton traps and access points to proton highways in the doped perovskite oxides AZr0.875D0.125O3, where A is Ba or Sr and the dopant D is Y or Al. The high-centrality region near the dopant is wider in the SrZrO3 systems than in the BaZrO3 systems. In the aluminum-doped systems, a region of intermediate centrality (secondary region) is found in a plane away from the dopant. Kinetic Monte Carlo (kMC) trajectories show that this secondary region is an entry to fast conduction planes in the aluminum-doped systems, in contrast to the highest-centrality area near the dopant trap. The yttrium-doped systems do not show this secondary region because the fast conduction routes are in the same plane as the dopant and hence already in the high-centrality trapped area. This centrality measure complements kMC by highlighting key areas in trajectories. The limiting activation barriers found via kMC are in very good agreement with experiments and are related to the barriers to escape dopant traps.
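The first-return-time quantity underlying such a centrality measure is easy to estimate by direct random-walk simulation. The sketch below uses a toy star graph (not a perovskite proton-site network) so the estimate can be checked against the standard identity that the mean return time of an unbiased walk equals 2|E|/deg(node).

```python
import random

def mean_first_return(adj, node, n_returns=20_000, seed=7):
    """Estimate the mean first-return time of a random walk to `node`.

    The centrality measure ranks sites by first-return times; this sketch
    estimates that quantity on an arbitrary adjacency list. The star graph
    below is a toy stand-in for a proton-site network.
    """
    rng = random.Random(seed)
    total_steps = 0
    for _ in range(n_returns):
        cur, steps = node, 0
        while True:
            cur = rng.choice(adj[cur])   # uniform step to a neighbour
            steps += 1
            if cur == node:
                break
        total_steps += steps
    return total_steps / n_returns

# 5-node star: hub 0 connected to leaves 1..4
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
hub_return = mean_first_return(adj, 0)   # theory: 2|E|/deg = 8/4 = 2
leaf_return = mean_first_return(adj, 1)  # theory: 2|E|/deg = 8/1 = 8
```

Short return times flag well-connected (high-centrality) sites such as the hub; on a real site network the hop rates would be biased by the kMC barriers rather than uniform.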
von Wittenau, A; Aufderheide, M B; Henderson, G L
2010-05-07
Given the cost and lead-times involved in high-energy proton radiography, it is prudent to model proposed radiographic experiments to see if the images predicted would return useful information. We recently modified our raytracing transmission radiography modeling code HADES to perform simplified Monte Carlo simulations of the transport of protons in a proton radiography beamline. Beamline objects include the initial diffuser, vacuum magnetic fields, windows, angle-selecting collimators, and objects described as distorted 2D (planar or cylindrical) meshes or as distorted 3D hexahedral meshes. We present an overview of the algorithms used for the modeling and code timings for simulations through typical 2D and 3D meshes. We next calculate expected changes in image blur as scattering materials are placed upstream and downstream of a resolution test object (a 3 mm thick sheet of tantalum, into which 0.4 mm wide slits have been cut), and as the current supplied to the focusing magnets is varied. We compare and contrast the resulting simulations with the results of measurements obtained at the 800 MeV Los Alamos LANSCE Line-C proton radiography facility.
Da, B.; Li, Z. Y.; Chang, H. C.; Ding, Z. J.; Mao, S. F.
2014-09-28
It has been experimentally found that carbon surface contamination strongly influences the spectrum signals in reflection electron energy loss spectroscopy (REELS), especially at low primary electron energy. However, there is still little theoretical work dealing with the carbon contamination effect in REELS. Such work is required to predict REELS spectra for layered structural samples, providing an understanding of the experimental phenomena observed. In this study, we present a numerical calculation of the spatially varying differential inelastic mean free path for a sample made of a carbon contamination layer of varied thickness on a SrTiO3 substrate. A Monte Carlo simulation model for electron interaction with a layered structural sample is built by combining this inelastic scattering cross-section with the Mott cross-section for electron elastic scattering. The simulation results clearly show that the contribution of the electron energy loss from carbon surface contamination increases with decreasing primary energy, due to the increased number of individual scattering events along the trajectory segments inside the carbon contamination layer. Comparison of the simulated spectra for different thicknesses of the carbon contamination layer and for different primary electron energies with experimental spectra clearly identifies that the carbon contamination in the measured sample was in the form of discontinuous islands rather than a uniform film.
Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.; Morales, Miguel A.
2016-01-19
An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT-based first-principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC-based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP, and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.
A Monte Carlo Analysis of Gas Centrifuge Enrichment Plant Process Load Cell Data
Garner, James R; Whitaker, J Michael
2013-01-01
As uranium enrichment plants increase in number, capacity, and types of separative technology deployed (e.g., gas centrifuge, laser, etc.), more automated safeguards measures are needed to enable the IAEA to maintain safeguards effectiveness in a fiscally constrained environment. Monitoring load cell data can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards objective of confirming operations as declared (i.e., no undeclared activities), but care must be taken to fully protect the operator's proprietary and classified information related to operations. Staff at ORNL, LANL, JRC/ISPRA, and the University of Glasgow are investigating monitoring the process load cells at feed and withdrawal (F/W) stations to improve international safeguards at enrichment plants. A key question that must be resolved is the necessary frequency of recording data from the process F/W stations. Several studies have analyzed data collected at a fixed frequency. This paper contributes to load cell process monitoring research by presenting an analysis of Monte Carlo simulations to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.
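The effect of sampling frequency can be illustrated with a toy load-cell trace: a cylinder mass that fills linearly and drops at random swap times, reconstructed by summing positive increments of the sampled signal. All parameter values below are illustrative assumptions, not real feed/withdrawal station figures.

```python
import random

def mass_at(t, swaps, fill_rate):
    """Cylinder mass at time t: fills linearly, resets to zero at each swap."""
    last_swap = 0.0
    for s in swaps:
        if s <= t:
            last_swap = s
        else:
            break
    return fill_rate * (t - last_swap)

def mean_balance_error(sample_dt, n_trials=100, n_swaps=10, cycle=10.0,
                       fill_rate=1.0, seed=3):
    """Mean feed under-count caused by reading the load cell every sample_dt.

    Total feed is reconstructed by summing positive increments of the sampled
    trace; mass fed between the last reading before a swap and the swap itself
    is lost. All parameter values are illustrative, not real F/W figures.
    """
    rng = random.Random(seed)
    horizon = n_swaps * cycle
    n_steps = int(round(horizon / sample_dt))
    total_err = 0.0
    for _ in range(n_trials):
        # one random cylinder-swap instant per cycle
        swaps = sorted(i * cycle + rng.uniform(0.0, cycle) for i in range(n_swaps))
        est, prev = 0.0, 0.0
        for k in range(1, n_steps + 1):
            m = mass_at(k * sample_dt, swaps, fill_rate)
            est += max(0.0, m - prev)    # only rises in the trace count as feed
            prev = m
        total_err += fill_rate * horizon - est   # true feed minus reconstruction
    return total_err / n_trials

coarse_err = mean_balance_error(sample_dt=1.0)
fine_err = mean_balance_error(sample_dt=0.1)
```

Running the two cases shows the material-balance error shrinking roughly in proportion to the sampling interval, which is the trade-off between data volume and balance accuracy the abstract raises.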
Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem
Du, X.; Liu, T.; Ji, W.; Xu, X. G.; Brown, F. B.
2013-07-01
Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', meaning that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely affects GPU performance. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - was used for this study. ARCHER facilitates the development and testing of an MC code based on the vectorized MC algorithm implemented on GPUs using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER-GPU code was designed to solve a neutron eigenvalue problem and was tested on an NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances warp execution efficiency, the overall simulation speed is ten times slower than that of the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large number of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed. (authors)
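The 'history-based' control flow at issue is the per-particle loop sketched below: each history runs to completion, and the data-dependent branching inside the loop is what diverges across GPU threads. The 1-D slab, cross sections, and pure-absorber test case are illustrative assumptions, not the ARCHER problem.

```python
import math
import random

def transmit_fraction(thickness=2.0, sigma_t=1.0, absorb_prob=1.0,
                      n_histories=100_000, seed=11):
    """History-based Monte Carlo: each neutron is followed birth to death.

    Toy 1-D slab: sample a free flight, then absorb or isotropically scatter,
    until the particle leaks or is absorbed. The data-dependent while-loop is
    exactly the control flow that diverges across GPU threads. Geometry and
    cross sections are illustrative assumptions.
    """
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_histories):
        x, mu = 0.0, 1.0                         # position, direction cosine
        while True:
            x += mu * rng.expovariate(sigma_t)   # free flight to next collision
            if x >= thickness:
                transmitted += 1                 # leaked out the far face
                break
            if x < 0.0:
                break                            # leaked back out the entry face
            if rng.random() < absorb_prob:
                break                            # absorbed: history ends
            mu = rng.uniform(-1.0, 1.0)          # isotropic scatter
    return transmitted / n_histories

frac = transmit_fraction()
# Pure absorber: analytic transmission is exp(-sigma_t * thickness) ≈ 0.1353
```

An event-based reformulation would instead batch all particles awaiting the same event type (flight, scatter, absorption) so that GPU threads in a warp execute the same branch.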
MONTE CARLO SIMULATIONS OF PERIODIC PULSED REACTOR WITH MOVING GEOMETRY PARTS
Cao, Yan; Gohar, Yousry
2015-11-01
In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical to produce periodic power pulses. Such a periodic state change is accomplished by a periodic movement of specific reactor parts, such as control rods or reflector sections. The analysis of such a reactor is difficult to perform with current reactor physics computer programs. Based on past experience, the point kinetics approximation gives considerable errors in predicting the magnitude and shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the TRCL/TR transformation feature of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the dynamics of reactors with movable geometry parts.
Monte Carlo modeling of transport in PbSe nanocrystal films
Carbone, I. Carter, S. A.; Zimanyi, G. T.
2013-11-21
A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance, and that site energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using a nearly identical set of parameter values, the hole mobility simulations reproduce the measured mobilities, which increase monotonically with particle size over two orders of magnitude.
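A thermally activated hopping simulation of this general kind can be sketched in one dimension. The toy model below (Metropolis acceptance, uniform site-energy disorder, illustrative parameters; not the paper's size-dependent PbSe model) reproduces the qualitative finding that energetic disorder inhibits transport.

```python
import math
import random

def hop_drift(n_steps=1500, n_carriers=300, field=0.5, disorder=0.0,
              lattice=200, kT=1.0, seed=5):
    """Thermally activated hopping on a 1-D lattice with Metropolis rates.

    Carriers attempt nearest-neighbour hops, accepted with min(1, exp(-dE/kT)),
    where dE combines static site-energy disorder and a bias `field` pushing
    carriers right. A toy 1-D analogue of the nanocrystal hopping model; all
    parameter values are illustrative.
    """
    rng = random.Random(seed)
    site_e = [rng.uniform(-disorder, disorder) for _ in range(lattice)]
    total = 0.0
    for _ in range(n_carriers):
        x = 0
        for _ in range(n_steps):
            step = rng.choice((-1, 1))
            d_e = site_e[(x + step) % lattice] - site_e[x % lattice] - field * step
            if d_e <= 0.0 or rng.random() < math.exp(-d_e / kT):
                x += step
        total += x
    return total / n_carriers   # mean displacement, a proxy for mobility

drift_ordered = hop_drift(disorder=0.0)
drift_disordered = hop_drift(disorder=3.0)  # site energies spread over ±3 kT
```

Comparing the two runs shows the drift (and hence the mobility proxy) collapsing when the site-energy spread is a few kT, the same trapping mechanism the abstract invokes for small-nanoparticle films.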
Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC): Fundamentals and Applications
Xu, Haixuan; Osetskiy, Yury N; Stoller, Roger E
2012-01-01
The fundamentals of the framework and the details of each component of the self-evolving atomistic kinetic Monte Carlo (SEAKMC) are presented. The strength of this new technique is the ability to simulate dynamic processes with atomistic fidelity that is comparable to molecular dynamics (MD) but on a much longer time scale. The observation that the dimer method preferentially finds the saddle point (SP) with the lowest energy is investigated and found to be true only for defects with high symmetry. In order to estimate the fidelity of dynamics and accuracy of the simulation time, a general criterion is proposed and applied to two representative problems. Applications of SEAKMC for investigating the diffusion of interstitials and vacancies in bcc iron are presented and compared directly with MD simulations, demonstrating that SEAKMC provides results that formerly could be obtained only through MD. The correlation factor for interstitial diffusion in the dumbbell configuration, which is extremely difficult to obtain using MD, is predicted using SEAKMC. The limitations of SEAKMC are also discussed. The paper presents a comprehensive picture of the SEAKMC method in both its unique predictive capabilities and technically important details.
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y.; Yun, S.
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross-section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross-section data generation module was added to the MCNP5 code. Cross-section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied in the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross-section data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model. The cross-section data were generated using MCNP5. The k_eff and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k_eff, the 9-group MCNP5/DIF3D result has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Saha, Krishnendu; Straus, Kenneth J.; Glick, Stephen J.; Chen, Yu.
2014-08-28
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
Byun, H. S.; Pirbadian, S.; Nakano, Aiichiro; Shi, Liang; El-Naggar, Mohamed Y.
2014-09-05
Microorganisms overcome the considerable hurdle of respiring extracellular solid substrates by deploying large multiheme cytochrome complexes that form 20 nanometer conduits to traffic electrons through the periplasm and across the cellular outer membrane. Here we report the first kinetic Monte Carlo simulations and single-molecule scanning tunneling microscopy (STM) measurements of the Shewanella oneidensis MR-1 outer membrane decaheme cytochrome MtrF, which can perform the final electron transfer step from cells to minerals and microbial fuel cell anodes. We find that the calculated electron transport rate through MtrF is consistent with previously reported in vitro measurements of the Shewanella Mtr complex, as well as in vivo respiration rates on electrode surfaces assuming a reasonable (experimentally verified) coverage of cytochromes on the cell surface. The simulations also reveal a rich phase diagram in the overall electron occupation density of the hemes as a function of electron injection and ejection rates. Single molecule tunneling spectroscopy confirms MtrF's ability to mediate electron transport between an STM tip and an underlying Au(111) surface, but at rates higher than expected from previously calculated heme-heme electron transfer rates for solvated molecules.
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations, and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley-Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
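The MLMC telescoping estimator can be sketched on a scalar toy problem. The example below estimates the mean of a geometric Brownian motion with coupled coarse/fine Euler paths; it stands in for the streamline flow solver only to show the level structure, and all parameter values are illustrative.

```python
import math
import random

def mlmc_estimate(levels=4, n0=20_000, mu=0.05, sigma=0.2, t_end=1.0, seed=9):
    """Multilevel Monte Carlo for E[S_T] of a geometric Brownian motion.

    Level l uses 2**l Euler steps; coupled coarse/fine paths share Brownian
    increments, so the correction terms E[P_l - P_{l-1}] have small variance,
    which is the source of the MLMC speed-up. A scalar stand-in for the
    streamline flow solver; all parameter values are illustrative.
    """
    rng = random.Random(seed)

    def euler_pair(level):
        """One fine path on 2**level steps plus the coupled coarse path."""
        n_f = 2 ** level
        dt_f = t_end / n_f
        s_f = s_c = 1.0
        dw_c = 0.0
        for i in range(n_f):
            dw = rng.gauss(0.0, math.sqrt(dt_f))
            s_f += s_f * (mu * dt_f + sigma * dw)
            dw_c += dw
            if i % 2 == 1:               # two fine steps make one coarse step
                s_c += s_c * (mu * 2.0 * dt_f + sigma * dw_c)
                dw_c = 0.0
        return s_f, s_c

    est = 0.0
    for level in range(levels + 1):
        n_l = max(200, n0 // 4 ** level)     # fewer samples on costly levels
        acc = 0.0
        for _ in range(n_l):
            s_f, s_c = euler_pair(level)
            acc += s_f if level == 0 else s_f - s_c   # telescoping sum terms
        est += acc / n_l
    return est

mlmc_mean = mlmc_estimate()
# Exact mean of the GBM is exp(mu * t_end) ≈ 1.0513
```

Most samples are taken on the cheap coarse levels while the expensive fine levels only estimate small corrections, which is exactly how MLMC beats plain MC at equal accuracy.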
Application of Distribution Transformer Thermal Life Models to Electrified Vehicle Charging Loads Using Monte-Carlo Method (Preprint). Michael Kuss, Tony Markel, and William Kramer. Presented at the 25th World Battery, Hybrid and Fuel Cell Electric Vehicle Symposium & Exhibition, Shenzhen, China, November 5-9, 2010. Conference Paper NREL/CP-5400-48827, January 2011.
Sadeghi, Mahdi; Raisali, Gholamreza; Hosseini, S. Hamed; Shavar, Arzhang
2008-04-15
This article presents a brachytherapy source having ¹⁰³Pd adsorbed onto a cylindrical silver rod, developed by the Agricultural, Medical, and Industrial Research School for permanent implant applications. Dosimetric characteristics (radial dose function, anisotropy function, and anisotropy factor) of this source were experimentally and theoretically determined in terms of the updated AAPM Task Group 43 (TG-43U1) recommendations. Monte Carlo simulations were used to calculate the dose rate constant. Measurements were performed with TLD-GR200A circular chip dosimeters using standard thermoluminescent dosimetry methods in a Perspex phantom. Precision-machined bores in the phantom located the dosimeters and the source in a reproducible fixed geometry, providing transverse-axis and angular dose profiles over a range of distances from 0.5 to 5 cm. The Monte Carlo N-Particle (MCNP) code, version 4C, was used to evaluate the dose-rate distributions around this model ¹⁰³Pd source in water and Perspex phantoms. The Monte Carlo calculated dose rate constant of the IRA-¹⁰³Pd source in water was found to be 0.678 cGy h⁻¹ U⁻¹ with an approximate uncertainty of ±0.1%. The anisotropy function, F(r,θ), and the radial dose function, g(r), of the IRA-¹⁰³Pd source were also measured in a Perspex phantom and calculated in both Perspex and liquid water phantoms.
Çatlı, Serap; Tanır, Güneş
2013-10-01
The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution using the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS) at 6 and 18 MV photon energies. The pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and to ionization chamber measurements. The present findings show that if high-Z material is used in a prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study depended on material type, density, and atomic number, as well as photon energy; as photon energy increased, backscattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method. The findings show that for accurate dose calculation, a Monte Carlo-based TPS should be used in patients with hip prostheses.
SU-E-T-584: Commissioning of the MC2 Monte Carlo Dose Computation Engine
Titt, U; Mirkovic, D; Liu, A; Ciangaru, G; Mohan, R; Anand, A; Perles, L
2014-06-01
Purpose: An automated system, MC2, was developed to convert DICOM proton therapy treatment plans into a sequence of MCNPX input files and submit these to a computing cluster. MC2 converts the results into DICOM format, so any treatment planning system can import the data for comparison with conventional dose predictions. This work describes the data and the efforts made to validate the MC2 system against measured dose profiles, and how the system was calibrated to predict the correct number of monitor units (MUs) to deliver the prescribed dose. Methods: A set of simulated lateral and longitudinal profiles was compared to data measured for commissioning purposes and during annual quality assurance efforts. Acceptance criteria were relative dose differences smaller than 3% and differences in range (in water) of less than 2 mm. For two of the three double scattering beam lines, validation results have already been published; spot checks were performed to assure proper performance. For the small snout, all available measurements were used for validation against simulated data. To calibrate the dose per MU, the energy deposition per source proton at the center of the spread-out Bragg peaks (SOBPs) was recorded for a set of SOBPs from each option. These were then scaled to the results of dose-per-MU determination based on published methods. The simulations of the doses in the magnetically scanned beam line were also validated against measured longitudinal and lateral profiles. The source parameters were fine-tuned to achieve maximum agreement with measured data. The dosimetric calibration was performed by scoring energy deposition per proton and scaling the results to a standard dose measurement of a 10 × 10 × 10 cm{sup 3} volume irradiation using 100 MU. Results: All simulated data passed the acceptance criteria. Conclusion: MC2 is fully validated and ready for clinical application.
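The dose-per-MU calibration described above amounts to a simple scaling: the simulated energy deposition per source proton is tied to a measured reference dose to fix the number of protons delivered per MU. A minimal sketch with illustrative numbers, not the beam data of this work:

```python
MEV_PER_GRAM_PER_GY = 1.0 / 1.602e-10  # MeV deposited per gram for 1 Gy (1 Gy = 1 J/kg)

def protons_per_mu(edep_mev_per_g_per_proton, measured_dose_gy, mu):
    """Protons delivered per MU, from a simulated energy deposition per source
    proton (MeV/g) and a measured reference dose (Gy) delivered in `mu` MUs."""
    dose_gy_per_proton = edep_mev_per_g_per_proton / MEV_PER_GRAM_PER_GY
    return measured_dose_gy / (mu * dose_gy_per_proton)

def predicted_dose(edep_mev_per_g_per_proton, n_per_mu, mu):
    """Dose (Gy) predicted for `mu` monitor units once the beam is calibrated."""
    return edep_mev_per_g_per_proton / MEV_PER_GRAM_PER_GY * n_per_mu * mu
```

Calibrating against the standard 100 MU reference irradiation and then predicting the dose of the same setup is a round trip, which makes the scaling easy to sanity-check.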
Dupuis, Paul
2014-03-14
This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.
Wu, May; Zhang, Zhonglong
2015-09-01
Using the Soil and Water Assessment Tool (SWAT) for large-scale watershed modeling can help evaluate water quality in regions dominated by nonpoint sources and identify potential “hot spots” for which mitigation strategies could be developed. An analysis of water quality under future scenarios in which changes in land use would be made to accommodate increased biofuel production was developed for the Missouri River Basin (MoRB) based on a SWAT model application. The analysis covered major agricultural crops and biofuel feedstock in the MoRB, including pasture land, hay, corn, soybeans, wheat, and switchgrass. The analysis examined, at multiple temporal and spatial scales, how nitrate, organic nitrogen, and total nitrogen; phosphorus, organic phosphorus, inorganic phosphorus, and total phosphorus; suspended sediments; and water flow (water yield) would respond to the shifts in land use that would occur under proposed future scenarios. The analysis was conducted at three geospatial scales: (1) large tributary basin scale (two: Upper MoRB and Lower MoRB); (2) regional watershed scale (seven: Upper Missouri River, Middle Missouri River, Middle Lower Missouri River, Lower Missouri River, Yellowstone River, Platte River, and Kansas River); and (3) eight-digit hydrologic unit (HUC-8) subbasin scale (307 subbasins). Results showed that subbasin-level variations were substantial. Nitrogen loadings decreased across the entire Upper MoRB, and they increased in several subbasins in the Lower MoRB. Most nitrate reductions occurred in lateral flow. Also at the subbasin level, phosphorus in organic, sediment, and soluble forms was reduced by 35%, 45%, and 65%, respectively. Suspended sediments increased in 68% of the subbasins. The water yield decreased in 62% of the subbasins. In the Kansas River watershed, the water quality improved significantly with regard to every nitrogen and phosphorus compound.
The improvement was clearly attributable to the conversion of a large amount of land to switchgrass. The Middle Lower Missouri River and Lower Missouri River were identified as hot regions. Further analysis identified four subbasins (10240002, 10230007, 10290402, and 10300200) as being the most vulnerable in terms of sediment, nitrogen, and phosphorus loadings. Overall, results suggest that increasing the amount of switchgrass acreage in the hot spots should be considered to mitigate the nutrient loads. The study provides an analytical method to support stakeholders in making informed decisions that balance biofuel production and water sustainability.
Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-03-15
Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multileaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and to treatment with the bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single segment electron treatment plan in terms of organs at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V{sub 95%} increased from 90% to 96% and V{sub 107%} decreased from 8% to nearly 0%.
For a clinical breast boost irradiation, the MERT plan led to a similar homogeneity in the PTV compared to the standard treatment plan, while the mean body dose was lower for the MERT plan. Regarding the second clinical case, a whole breast treatment, MERT resulted in a reduction of the lung volume receiving more than 45% of the prescribed dose when compared to the standard plan. On the other hand, the MERT plan led to a larger low-dose lung volume and a degraded dose homogeneity in the PTV. For the clinical cases evaluated in this work, treatment plans using the BolusECT technique resulted in more homogeneous PTV and CTV coverage but higher doses to the OARs than the MERT plans. Conclusions: MERT treatments were successfully planned for phantom and clinical cases, applying a newly developed, intuitive, and efficient forward planning strategy that employs a MC-based electron beam model for pMLC shaped electron beams. It is shown that MERT can lead to a dose reduction in OARs compared to other methods. The process of feathering MERT segments results in an improvement of the dose homogeneity in the PTV.
Silva-Rodríguez, Jesús; Aguiar, Pablo; Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela, 15782, Galicia; Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias, Santiago de Compostela, 15706, Galicia; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor; Cortés, Julia; Garrido, Miguel; Pombar, Miguel; Ruibal, Álvaro; Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias, Santiago de Compostela, 15706, Galicia; Fundación Tejerina, 28003, Madrid
2014-05-15
Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) measurements for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI-based method. In addition, the 50 patients with the highest extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate for the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%), with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify the fraction of patients that might be corrected for the paravenous injection effect. Conclusions: The authors propose the use of a manual ROI-based method for estimating the effectively administered FDG dose and then correcting SUV quantification in those patients fulfilling the proposed criterion.
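Once the extravasated activity has been estimated (by either the ROI- or threshold-based method), correcting the SUV is a matter of rescaling by the effectively administered dose. A hedged sketch of that final rescaling step only; the image-based estimation methods themselves are not reproduced here, and the numbers are illustrative:

```python
def corrected_suv(suv, injected_mbq, extravasated_mbq):
    """Rescale an SUV computed with the nominal injected activity so that it
    reflects the effectively administered activity (nominal minus extravasated).

    SUV is inversely proportional to the administered activity, so
    SUV_true = SUV_nominal * injected / (injected - extravasated).
    """
    effective_mbq = injected_mbq - extravasated_mbq
    if effective_mbq <= 0:
        raise ValueError("extravasated activity must be below the injected dose")
    return suv * injected_mbq / effective_mbq
```

For the worst case reported above (22% of the injected dose extravasated), a nominal SUV of 5.0 would rescale by a factor of 1/0.78, i.e. to roughly 6.4.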
Minibeam radiation therapy for the management of osteosarcomas: A Monte Carlo study
Martínez-Rovira, I.; Prezado, Y.
2014-06-15
Purpose: Minibeam radiation therapy (MBRT) exploits the well-established tissue-sparing effect provided by the combination of submillimetric field sizes and a spatial fractionation of the dose. The aim of this work is to evaluate the feasibility and potential therapeutic gain of MBRT, in comparison with conventional radiotherapy, for osteosarcoma treatments. Methods: Monte Carlo simulations (PENELOPE/PENEASY code) were used to study the dose distributions resulting from MBRT irradiations of rat femur and realistic human femur phantoms. As figures of merit, peak and valley doses and peak-to-valley dose ratios (PVDR) were assessed. Conversion of absorbed dose to normalized total dose (NTD) was performed in the human case. Several field sizes and irradiation geometries were evaluated. Results: It is feasible to deliver a uniform dose distribution in the target while the healthy tissue benefits from a spatial fractionation of the dose. Very high PVDR values ({>=}20) were achieved in the entrance beam path in the rat case. PVDR values ranged from 2 to 9 in the human phantom. An NTD{sub 2.0} of 87 Gy might be reached in the tumor in the human femur, while the healthy tissues might receive valley NTD{sub 2.0} lower than 20 Gy. The doses in the tumor and healthy tissues might thus be significantly higher and lower, respectively, than those commonly delivered in conventional radiotherapy. Conclusions: The obtained dose distributions indicate that a gain in normal tissue sparing might be expected. This would allow the use of higher (and potentially curative) doses in the tumor. Biological experiments are warranted.
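The peak-to-valley dose ratio used as a figure of merit above is straightforward to compute from a lateral dose profile once the peak and valley positions are known. A minimal sketch operating on a synthetic profile, not this study's data:

```python
def pvdr(doses, peak_indices, valley_indices):
    """Peak-to-valley dose ratio: mean dose at the minibeam peaks divided by
    the mean dose at the valleys between them."""
    peak = sum(doses[i] for i in peak_indices) / len(peak_indices)
    valley = sum(doses[i] for i in valley_indices) / len(valley_indices)
    return peak / valley
```

For a spatially fractionated profile alternating between 10 (peak) and 1 (valley), this yields a PVDR of 10; the PVDR of 2 to 9 quoted for the human phantom corresponds to much shallower valleys.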
SU-E-T-323: The FLUKA Monte Carlo Code in Ion Beam Therapy
Rinaldi, I
2014-06-01
Purpose: Monte Carlo (MC) codes are increasingly used in the ion beam therapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code demands accurate and reliable physical models for the transport and interaction of all components of the mixed radiation field. This contribution will present an overview of the recent developments in the FLUKA code oriented to its application in ion beam therapy. Methods: FLUKA is a general purpose MC code which allows the calculation of particle transport and interactions with matter, covering a wide range of applications. The user can manage the code through a graphical interface (FLAIR) developed in the Python programming language. Results: This contribution will present recent refinements in the description of ionization processes and comparisons between FLUKA results and experimental data from ion beam therapy facilities. Moreover, several validations of the largely improved FLUKA nuclear models for imaging applications in treatment monitoring will be shown. The complex calculation of prompt gamma ray emission compares favorably with experimental data and can be considered adequate for the intended applications. New features in the modeling of proton induced nuclear interactions also provide reliable cross section predictions for the production of radionuclides. Of great interest for the community are the developments introduced in FLAIR. The most recent efforts concern the capability of importing computed-tomography images in order to automatically build patient geometries, and the implementation of different types of existing positron-emission-tomography scanner devices for imaging applications. Conclusion: The FLUKA code has already been chosen as the reference MC code in many ion beam therapy centers, and is being continuously improved in order to match the needs of ion beam therapy applications.
Parts of this work have been supported by the European FP7 project ENVISION (grant agreement no. 241851)
BENCHMARK TESTS FOR MARKOV CHAIN MONTE CARLO FITTING OF EXOPLANET ECLIPSE OBSERVATIONS
Rogers, Justin; Lopez-Morales, Mercedes; Apai, Daniel; Adams, Elisabeth
2013-04-10
Ground-based observations of exoplanet eclipses provide important clues to the planets' atmospheric physics, yet systematics in light curve analyses are not fully understood. It is unknown whether measurements suggesting near-infrared flux densities brighter than models predict are real, or artifacts of the analysis process. We created a large suite of model light curves, using both synthetic and real noise, and tested the common process of light curve modeling and parameter optimization with a Markov Chain Monte Carlo algorithm. With synthetic white noise models, we find that input eclipse signals are generally recovered within 10% accuracy for eclipse depths greater than the noise amplitude, and to smaller depths for higher sampling rates and longer baselines. Red noise models show greater discrepancies between input and measured eclipse signals, often biased in one direction. In real data, we find that systematic biases result even with a complex model to account for trends, and significant false eclipse signals may appear in a non-Gaussian distribution. To quantify the bias and validate an eclipse measurement, we compare both the planet-hosting star and several of its neighbors to a separately chosen control sample of field stars. Re-examining the Rogers et al. Ks-band measurement of CoRoT-1b finds an eclipse 3190{sup +370}{sub -440} ppm deep centered at {phi}{sub me} = 0.50418{sup +0.00197}{sub -0.00203}. Finally, we provide and recommend the use of selected data sets we generated as a benchmark test for eclipse modeling and analysis routines, and propose criteria to verify eclipse detections.
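The light-curve fitting procedure tested above can be illustrated with a toy Metropolis sampler over a single parameter, the eclipse depth, against a box-shaped eclipse model. Everything here (model shape, flat prior, step size) is a deliberate simplification for illustration, not the paper's pipeline:

```python
import math
import random

def eclipse_model(phase, depth, mid=0.5, duration=0.1):
    """Box-shaped eclipse: unit flux, dropping by `depth` during the eclipse."""
    return 1.0 - depth if abs(phase - mid) < duration / 2 else 1.0

def sample_depth(phases, flux, sigma, n_steps=3000, step=5e-4, seed=0):
    """Metropolis sampling of the eclipse depth (all other parameters fixed)."""
    rng = random.Random(seed)

    def log_like(d):
        return -0.5 * sum((f - eclipse_model(p, d)) ** 2
                          for p, f in zip(phases, flux)) / sigma ** 2

    depth, ll = 0.0, log_like(0.0)
    chain = []
    for _ in range(n_steps):
        prop = depth + rng.gauss(0.0, step)
        if 0.0 <= prop < 1.0:
            ll_prop = log_like(prop)
            if ll_prop >= ll or rng.random() < math.exp(ll_prop - ll):
                depth, ll = prop, ll_prop
        chain.append(depth)
    return chain[n_steps // 2:]  # discard the first half as burn-in
```

On synthetic white-noise data the posterior mean recovers the injected depth to within roughly sigma divided by the square root of the number of in-eclipse points, which mirrors the paper's finding that recovery works for depths above the noise amplitude.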
Structural Stability and Defect Energetics of ZnO from Diffusion Quantum Monte Carlo
Santana Palacio, Juan A; Krogel, Jaron T; Kim, Jeongnim; Kent, Paul R; Reboredo, Fernando A
2015-01-01
We have applied the many-body ab initio diffusion quantum Monte Carlo (DMC) method to study Zn and ZnO crystals under pressure, and the energetics of oxygen vacancy, zinc interstitial, and hydrogen impurities in ZnO. We show that DMC is an accurate and practical method that can be used to characterize multiple properties of materials that are challenging for density functional theory approximations. DMC agrees with experimental measurements to within 0.3 eV, including the band gap of ZnO, the ionization potentials of O and Zn, and the atomization energies of O2, the ZnO dimer, and wurtzite ZnO. DMC predicts the oxygen vacancy as a deep donor with a formation energy of 5.0(2) eV under O-rich conditions and thermodynamic transition levels located between 1.8 and 2.5 eV from the valence band maximum. Our DMC results indicate that the concentration of zinc interstitial and hydrogen impurities in ZnO should be low under n-type, Zn-rich, and H-rich conditions because these defects have formation energies above 1.4 eV under these conditions. Comparison of DMC and hybrid functionals shows that these DFT approximations can be parameterized to yield a generally correct qualitative description of ZnO. However, the formation energies of defects in ZnO evaluated with DMC and hybrid functionals can differ by more than 0.5 eV.
SU-E-T-238: Monte Carlo Estimation of Cerenkov Dose for Photo-Dynamic Radiotherapy
Chibani, O; Price, R; Ma, C; Eldib, A; Mora, G
2014-06-01
Purpose: Estimation of the Cerenkov dose from high-energy megavoltage photon and electron beams in tissue, and of its impact on radiosensitization using protoporphyrin IX (PpIX) for tumor targeting enhancement in radiotherapy. Methods: The GEPTS Monte Carlo code is used to generate dose distributions from an 18 MV Varian photon beam and generic high-energy (45 MV) photon and (45 MeV) electron beams in a voxel-based tissue-equivalent phantom. In addition to calculating the ionization dose, the code scores the Cerenkov energy released in the wavelength range 375-425 nm, corresponding to the peak of the PpIX absorption spectrum (Fig. 1), using the Frank-Tamm formula. Results: The simulations show that the Cerenkov dose suitable for activating PpIX is 4000 to 5500 times lower than the overall radiation dose for all considered beams (18 MV, 45 MV, and 45 MeV). These results contradict the recent experimental studies by Axelsson et al. (Med. Phys. 38 (2011) p 4127), where the Cerenkov dose was reported to be only two orders of magnitude lower than the radiation dose. Note that our simulation results can be corroborated by a simple model in which the Frank-Tamm formula is applied to electrons with a 2 MeV/cm stopping power generating Cerenkov photons in the 375-425 nm range, assuming these photons have less than 1 mm penetration in tissue. Conclusion: The Cerenkov dose generated by high-energy photon and electron beams may produce a minimal clinical effect in comparison with the photon fluence (or dose) commonly used for photo-dynamic therapy. At present, it is unclear whether Cerenkov radiation is a significant contributor to the recently observed tumor regression in patients receiving radiotherapy and PpIX versus patients receiving radiotherapy only. The ongoing study will include animal experimentation and investigation of dose rate effects on PpIX response.
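The Frank-Tamm estimate invoked above can be reproduced in a few lines: the number of Cerenkov photons a unit-charge particle emits per unit path length in a wavelength band [{lambda}{sub 1}, {lambda}{sub 2}] is dN/dx = 2{pi}{alpha}(1/{lambda}{sub 1} - 1/{lambda}{sub 2})(1 - 1/({beta}{sup 2}n{sup 2})). A sketch using water's refractive index for illustration, not the GEPTS implementation:

```python
import math

FINE_STRUCTURE = 1.0 / 137.036  # alpha

def cerenkov_photons_per_cm(beta, n, lam1_nm=375.0, lam2_nm=425.0):
    """Frank-Tamm photon yield per cm of path for a unit-charge particle of
    speed beta (v/c) in a medium of refractive index n, within [lam1, lam2]."""
    if beta * n <= 1.0:
        return 0.0  # below the Cerenkov threshold
    band = 1.0 / (lam1_nm * 1e-7) - 1.0 / (lam2_nm * 1e-7)  # in cm^-1
    return 2.0 * math.pi * FINE_STRUCTURE * band * (1.0 - 1.0 / (beta * beta * n * n))
```

For a relativistic electron ({beta} near 1) in water (n about 1.33) this gives on the order of 60 photons per cm in the 375-425 nm band; combined with the sub-millimeter penetration of these photons in tissue, that scale of yield is consistent with the small Cerenkov-to-ionization dose ratio reported above.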
Kellogg, Christina A.; Piceno, Yvette M.; Tom, Lauren M.; DeSantis, Todd Z.; Gray, Michael A.; Andersen, Gary L.; Mormile, Melanie R.
2014-10-07
Coral disease is one of the major causes of reef degradation. Dark Spot Syndrome (DSS) was described in the early 1990s as brown or purple amorphous areas of tissue on a coral and has since become one of the most prevalent diseases reported on Caribbean reefs. It has been identified in a number of coral species, but there is debate as to whether it is in fact the same disease in different corals. Further, it is questioned whether these macroscopic signs are diagnostic of an infectious disease at all. The most commonly affected species in the Caribbean is the massive starlet coral Siderastrea siderea. We sampled this species in two locations, Dry Tortugas National Park and Virgin Islands National Park. Tissue biopsies were collected from both healthy colonies and those with dark spot lesions. Microbial-community DNA was extracted from coral samples (mucus, tissue, and skeleton), amplified using bacterial-specific primers, and applied to PhyloChip G3 microarrays to examine the bacterial diversity associated with this coral. Samples were also screened for the presence of a fungal ribotype that has recently been implicated as a causative agent of DSS in another coral species, but the amplifications were unsuccessful. S. siderea samples did not cluster consistently based on health state (i.e., normal versus dark spot). Various bacteria, including Cyanobacteria and vibrios, were observed to have increased relative abundance in the discolored tissue, but the patterns were not consistent across all DSS samples. Overall, our findings do not support the hypothesis that DSS in S. siderea is linked to a bacterial pathogen or pathogens. This dataset provides the most comprehensive overview to date of the bacterial community associated with the scleractinian coral S. siderea.
Sorokin, A. A.; Gottwald, A.; Hoehl, A.; Kroth, U.; Schoeppe, H.; Ulm, G.; Richter, M.; Bobashev, S. V.; Domracheva, I. V.; Smirnov, D. N.; Tiedtke, K.; Duesterer, S.; Feldhaus, J.; Hahn, U.; Jastrow, U.; Kuhlmann, M.; Nunez, T.; Ploenjes, E.; Treusch, R.
2006-11-27
A method has been developed and applied to measure the beam waist and spot size of a focused soft x-ray beam at the free-electron laser FLASH of the Deutsches Elektronen-Synchrotron in Hamburg. The method is based on a saturation effect upon atomic photoionization and represents an indestructible tool for the characterization of powerful beams of ionizing electromagnetic radiation. At the microfocus beamline BL2 at FLASH, a full width at half maximum focus diameter of (15{+-}2) {mu}m was determined.
Harding, R.; Trnková, P.; Lomax, A. J.; Weston, S. J.; Lilley, J.; Thompson, C. M.; Cosgrove, V. P.; Short, S. C.; Loughrey, C.; Thwaites, D. I.
2014-11-01
Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-02-15
Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm{sup 2} fields, differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies.
For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. Conclusions: The results of the dose comparisons suggest that the developed beam model is suitable to accurately reconstruct photon MLC shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.
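The acceptance test quoted above, agreement within 2% of the maximum dose or 2 mm distance-to-agreement (DTA), can be sketched for a 1D profile as a per-point dose-difference-or-DTA check. This is a simplified illustration of the criterion, not the clinical QA software:

```python
def dd_or_dta_pass(x_cm, measured, calculated, dd_frac=0.02, dta_cm=0.2):
    """Per-point pass/fail for a 1D profile: a point passes if some calculated
    point within dta_cm of it matches the measured dose to within dd_frac of
    the maximum measured dose. The zero-distance case covers the plain dose-
    difference test; nonzero distances implement a simplified 1D DTA search."""
    tol = dd_frac * max(measured)
    results = []
    for xi, mi in zip(x_cm, measured):
        ok = any(abs(xj - xi) <= dta_cm and abs(cj - mi) <= tol
                 for xj, cj in zip(x_cm, calculated))
        results.append(ok)
    return results
```

A profile shifted laterally by one 1 mm sample still passes everywhere (the DTA branch absorbs the shift), whereas a genuinely wrong dose in the penumbra fails.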
SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations
Ono, T; Araki, F
2014-06-01
Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using GMctdospp (IMPS, Germany), based on the EGSnrc user code. The X-ray spectra and a bowtie filter for the MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profile in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images, and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), the mean doses for kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses were considerably different from the CTDIvol values. Conclusion: MC dose distributions calculated using patient CT images are useful for evaluating the organ doses absorbed by individual patients.
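Half-value layers like those used above to match the simulated spectrum to measurement are typically extracted from a measured transmission curve by log-linear interpolation, since attenuation is close to exponential. A minimal sketch with illustrative data, not this study's measurements:

```python
import math

def hvl_mm(thickness_mm, transmission):
    """Half-value layer: the absorber thickness at which transmission falls to
    0.5, found by log-linear interpolation between bracketing measurements."""
    pairs = sorted(zip(thickness_mm, transmission))
    for (t1, m1), (t2, m2) in zip(pairs, pairs[1:]):
        if m1 >= 0.5 >= m2:
            return t1 + (t2 - t1) * math.log(m1 / 0.5) / math.log(m1 / m2)
    raise ValueError("transmission curve does not cross 0.5")
```

For purely exponential attenuation the log-linear interpolation is exact: feeding it a curve generated with mu = ln 2 / 7.7 per mm (matching the 100 kV HVL of 7.7 mm Al reported above) returns 7.7 mm.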
A novel approach in electron beam radiation therapy of lips carcinoma: A Monte Carlo study
Shokrani, Parvaneh; Baradaran-Ghahfarokhi, Milad; Zadeh, Maryam Khorami
2013-04-15
Purpose: Squamous cell carcinoma (SCC) of the lip is commonly treated by electron beam radiotherapy (EBRT) followed by a boost via brachytherapy. Considering the limitations associated with brachytherapy, in this study a novel boosting technique in EBRT of lip carcinoma, using an internal shield as an internal dose enhancer tool (IDET), was evaluated. An IDET is a partially covered internal shield located behind the lip; the intent is that while backscattered electrons are absorbed in the portion covered with a low atomic number material, they enhance the target dose in the uncovered area. Methods: Monte Carlo models of 6 and 8 MeV electron beams were developed using the BEAMnrc code and validated against experimental measurements. Using the developed models, dose distributions in a lip phantom were calculated and the effect of an IDET on target dose enhancement was evaluated. Typical lip thicknesses of 1.5 and 2.0 cm were considered. A 5 × 5 cm{sup 2} sheet of lead covered by 0.5 cm of polystyrene was used as an internal shield, while a 4 × 4 cm{sup 2} uncovered area of the shield served as the dose enhancer. Results: Using the IDET, the maximum dose enhancement as a percentage of dose at d{sub max} of the unshielded field was 157.6% and 136.1% for the 6 and 8 MeV beams, respectively. The best outcome was achieved for a lip thickness of 1.5 cm and a target thickness of less than 0.8 cm. For lateral dose coverage of the planning target volume, the 80% isodose curve at the lip-IDET interface showed a 1.2 cm expansion compared to the unshielded field. Conclusions: This study showed that a concomitant boost during EBRT of the lip is possible by modifying an internal shield into an IDET. This boosting method is especially applicable to cases in which brachytherapy faces limitations, such as small lip thicknesses and targets located at the buccal surface of the lip.
Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S.
2012-08-15
This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 {sup 125}I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte-Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published {sup 125}I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water material in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGyh{sup -1} U{sup -1} ({+-}1.73%) and 0.965 cGyh{sup -1} U{sup -1} ({+-}1.68%), respectively. Overall, the MCNP5 derived radial dose and 2D anisotropy functions results were generally closer to the measured data (within {+-}4%) than MCNP4c and the published data for PTRAN code (Version 7.43), while the opposite was seen for dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
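The TG-43 quantities reported above (dose rate constant, radial dose function, anisotropy function) are ratios of simulated dose rates and geometry factors. As an illustrative sketch only (not the study's code), the radial dose function g(r) can be computed from transverse-axis dose rates and a line-source geometry function; the dose-rate values and source length below are hypothetical stand-ins for MC tallies:

```python
import numpy as np

def geom_line(r_cm, L_cm=0.3):
    # line-source geometry function at theta = 90 degrees: beta / (L r),
    # where beta is the angle subtended by the source at the point
    beta = 2 * np.arctan(L_cm / (2 * r_cm))
    return beta / (L_cm * r_cm)

r = np.array([0.5, 1.0, 2.0, 3.0, 5.0])              # cm
dose = np.array([4.10, 1.00, 0.227, 0.087, 0.019])   # hypothetical, normalized at r0 = 1 cm
# g(r) = [D(r) G(r0)] / [D(r0) G(r)] per the TG-43 formalism
g = (dose / dose[r == 1.0]) * geom_line(1.0) / geom_line(r)
```

By construction g(1 cm) = 1, and the geometry factor removes the inverse-square-like falloff so that g(r) isolates attenuation and scatter in the medium.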
Long, Daniel J.; Lee, Choonsik; Tien, Christopher; Fisher, Ryan; Hoerner, Matthew R.; Hintenlang, David; Bolch, Wesley E.
2013-01-15
Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX, version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomographic (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of the University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of the University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at the University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated based upon these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms.
Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or helical CT examinations on the Siemens SOMATOM Sensation 16 scanner.
Zhang, C.Q.; Robson, J.D.; Ciuca, O.; Prangnell, P.B.
2014-11-15
Aluminum alloy AA6111 and TiAl6V4 dissimilar alloys were successfully welded by high power ultrasonic spot welding. No visible intermetallic reaction layer was detected in as-welded AA6111/TiAl6V4 welds, even when transmission electron microscopy was used. The effects of welding time and natural aging on peak load and fracture energy were investigated. The peak load and fracture energy of the welds increased with welding time and then reached a plateau. The lap shear strength (peak load) can reach the same level as that of similar Al–Al joints. After natural aging, the fracture mode of the welds changed from ductile fracture of the softened aluminum to interfacial failure, due to the strength recovery of AA6111. - Highlights: • Dissimilar Al/Ti welds were produced by high power ultrasonic spot welding. • No visible intermetallic reaction layer was detected on the weld interface. • The lap shear strength can reach the same level as that of similar Al–Al joints. • The fracture mode becomes interfacial failure after natural aging.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Betzler, Benjamin R.; Kiedrowski, Brian C.; Brown, Forrest B.; Martin, William R.
2015-01-01
The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α eigenvalues and eigenvectors in an infinite medium. In this study, a research Monte Carlo code called “TORTE” (To Obtain Real Time Eigenvalues) was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum.
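In a multigroup infinite medium, the α-eigenvalue formulation described above reduces to an ordinary eigenproblem for the transition rate matrix, dn/dt = A n. A minimal sketch (not TORTE itself), with hypothetical two-group data:

```python
import numpy as np

# Hypothetical two-group infinite-medium data: group speeds (cm/s),
# total macroscopic cross sections (1/cm), and transfer-into-group
# rates (scatter + fission production), all illustrative only.
v = np.array([2.2e9, 2.2e5])
sigma_t = np.array([0.30, 1.20])
transfer = np.array([[0.10, 0.05],
                     [0.15, 1.10]])

# dn_g/dt = v_g * (sum_g' transfer[g, g'] n_g' - sigma_t[g] n_g)
A = v[:, None] * (transfer - np.diag(sigma_t))
alphas = np.linalg.eigvals(A)
# The fundamental alpha eigenvalue is the one with the largest real part;
# negative for this (subcritical) set of numbers.
alpha0 = alphas[np.argmax(alphas.real)]
```

A Monte Carlo code such as the one described estimates the elements of A from particle histories; the eigendecomposition step afterward is the same.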
Avila, Olga; Brandan, Maria-Ester
1998-08-28
A theoretical investigation of the thermoluminescence (TL) response of lithium fluoride after heavy-ion irradiation has been performed through Monte Carlo simulation of the energy deposition process. Efficiencies for the total TL signal of LiF irradiated with 0.7, 1.5, and 3 MeV protons and 3, 5.3, and 7.5 MeV helium ions have been calculated using the radial dose distribution profiles obtained from the MC procedure and applying Track Structure Theory and Modified Track Structure Theory. Results were compared with recent experimental data. The models correctly describe the observed decrease in efficiency as a function of ion LET.
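The Track Structure Theory recipe summarized above folds a radial dose profile with a local TL dose response and integrates over the track cross-section. A minimal numerical sketch, using a hypothetical 1/r^2 profile and a hypothetical saturating response (not the paper's actual parameters):

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoids NumPy-version API differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

r = np.logspace(-3, 0, 400)              # radial distance from ion path, um
D = 1.0e4 / r**2                         # hypothetical radial dose profile, Gy
f = lambda d: d / (1.0 + d / 1.0e3)      # hypothetical saturating local TL response

num = trapz(f(D) * 2 * np.pi * r, r)     # TL signal per unit track length
den = trapz(D * 2 * np.pi * r, r)        # energy deposited per unit track length
efficiency = num / den                   # relative TL efficiency
```

Because the response saturates at the high local doses near the track core, the efficiency comes out below unity, which is the qualitative origin of the efficiency decrease with LET noted above.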
Lopez-Pino, N.; Padilla-Cabal, F.; Garcia-Alvarez, J. A.; Vazquez, L.; D'Alessandro, K.; Correa-Alfonso, C. M.; Godoy, W.; Maidana, N. L.; Vanin, V. R.
2013-05-06
A detailed characterization of an X-ray Si(Li) detector was performed to obtain the energy dependence of its efficiency in the photon energy range of 6.4–59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters for the detector were used in the simulation. A complete Computerized Tomography (CT) scan of the detector made it possible to find the correct crystal dimensions and position inside the capsule. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.
Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419
Hulett, David T.; Nosbisch, Michael R.
2012-07-01
This discussion of the recommended practice (RP) 57R-09 of AACE International defines the integrated analysis of schedule and cost risk to estimate the appropriate level of cost and schedule contingency reserve on projects. The main contribution of this RP is to include the impact of schedule risk on cost risk and hence on the need for cost contingency reserves. Additional benefits include the prioritizing of the risks to cost, some of which are risks to schedule, so that risk mitigation may be conducted in a cost-effective way; scatter diagrams of time-cost pairs for developing joint targets of time and cost; and probabilistic cash flow, which shows cash flow at different levels of certainty. Integrating cost and schedule risk into one analysis, based on the project schedule loaded with costed resources from the cost estimate, provides both (1) more accurate cost estimates than if the schedule risk were ignored or incorporated only partially and (2) an illustration of the importance of schedule risk to cost risk when the durations of activities using labor-type (time-dependent) resources are risky. Many activities such as detailed engineering, construction, or software development are mainly conducted by people who need to be paid even if their work takes longer than scheduled. Level-of-effort resources, such as the project management team, are extreme examples of time-dependent resources, since if the project duration exceeds its planned duration the cost of these resources will increase over their budgeted amount. The integrated cost-schedule risk analysis is based on: - A high-quality CPM schedule with logic tight enough that it will provide the correct dates and critical paths during simulation automatically, without manual intervention. - A contingency-free estimate of project costs that is loaded on the activities of the schedule. - Resolution of inconsistencies between the cost estimate and the schedule that often creep into those documents as project execution proceeds.
- Good-quality risk data that are usually collected in risk interviews of the project team, management, and others knowledgeable in the risk of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method. The risk driver method is based on the fundamental principle that identifiable risks drive overall cost and schedule risk. - A Monte Carlo simulation software program that can simulate schedule risk, burn-rate risk, and time-independent resource risk. The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates including contingency reserves are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, decide which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and of time, calculated at a level that represents an acceptable degree of certainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserves of time and cost that are the main results of this analysis apply only if that plan is followed.
Of course project managers have the option of re-planning and re-scheduling in the face of new facts, in part by m
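The integrated cost-schedule simulation described in this RP can be sketched in a few lines: sample activity durations, drive time-dependent (labor and level-of-effort) costs from the sampled durations, collect the time-cost pairs, and read contingency off the P-80 results. The CPM network, triangular distributions, and daily burn rates below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 20_000
# Hypothetical two-path network: A->C and B->C. Durations in days sampled
# from triangular (low, most-likely, high) distributions per iteration.
dA = rng.triangular(8, 10, 16, N)
dB = rng.triangular(9, 12, 20, N)
dC = rng.triangular(4, 5, 9, N)
finish = np.maximum(dA, dB) + dC              # project duration per iteration

# Time-dependent labor cost: daily burn rate times actual duration;
# a level-of-effort PM cost runs for the whole project. Rates hypothetical.
cost = 3.0e3 * dA + 2.5e3 * dB + 4.0e3 * dC + 1.5e3 * finish

p80_time = np.percentile(finish, 80)
p80_cost = np.percentile(cost, 80)

# Contingency = P-80 result minus the deterministic (most-likely) plan.
plan_time = max(10, 12) + 5
plan_cost = 3.0e3 * 10 + 2.5e3 * 12 + 4.0e3 * 5 + 1.5e3 * plan_time
time_contingency = p80_time - plan_time
cost_contingency = p80_cost - plan_cost
```

The (finish, cost) pairs from the same iterations are exactly what the 'football chart' scatter diagram mentioned above plots, and sorting costs by calendar time gives the probabilistic cash flow.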
Hardiansyah, D.; Haryanto, F.; Male, S.
2014-09-30
Prism is a non-commercial Radiotherapy Treatment Planning System (RTPS) developed by Ira J. Kalet at the University of Washington. An inhomogeneity factor is included in the Prism dose calculation. The aim of this study was to investigate the sensitivity of the Prism dose calculation using Monte Carlo simulation, with a phase space source from the linear accelerator (LINAC) head implemented for the Monte Carlo simulation. To achieve this aim, the Prism dose calculation was compared with EGSnrc Monte Carlo simulation, and the percentage depth dose (PDD) and R50 from both calculations were examined. BEAMnrc simulated electron transport in the LINAC head and produced a phase space file, which was used as DOSXYZnrc input to simulate electron transport in the phantom. The study started with a commissioning process in a water phantom, in which the Monte Carlo simulation was adjusted to match the Prism RTPS; the commissioning result was then used for the inhomogeneity phantom study. The physical parameters of the inhomogeneity phantom varied in this study were the density, location, and thickness of the tissue. Commissioning showed that the optimum energy of the Monte Carlo simulation for the 6 MeV electron beam is 6.8 MeV, using R50 and the PDD with practical range (R{sub p}) as references. In the inhomogeneity study, the average deviation for all cases in the region of interest was below 5%. Based on ICRU recommendations, Prism shows good ability to calculate the radiation dose in inhomogeneous tissue.
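The R50 values used as a commissioning reference above are typically read off a sampled PDD curve by linear interpolation between the depth points bracketing the 50% level. A minimal sketch with a hypothetical PDD curve (not the study's measured data):

```python
import numpy as np

def r50(depth_cm, pdd_percent):
    """Depth beyond d_max where the PDD first crosses 50% (linear interpolation).
    Assumes the curve actually crosses 50% beyond its maximum."""
    i_max = np.argmax(pdd_percent)
    d, p = depth_cm[i_max:], pdd_percent[i_max:]
    j = np.nonzero(p <= 50.0)[0][0]          # first point at or below 50%
    # interpolate between (d[j-1], p[j-1]) and (d[j], p[j])
    return d[j - 1] + (p[j - 1] - 50.0) * (d[j] - d[j - 1]) / (p[j - 1] - p[j])

depths = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])       # cm
pdd    = np.array([85., 95., 100., 90., 60., 40., 15.])      # hypothetical curve
```

For this example the crossing lies between 2.0 cm (60%) and 2.5 cm (40%), giving R50 = 2.25 cm; the same interpolation applied to both the simulated and the planning-system PDD is what the energy tuning step compares.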
Jiang, F.-J.; Nyfeler, M.; Kaempfer, F.
2009-07-15
Motivated by the possible mechanism for the pinning of the electronic liquid crystal direction in YBa{sub 2}Cu{sub 3}O{sub 6.45} as proposed by Pardini et al. [Phys. Rev. B 78, 024439 (2008)], we use the first-principles Monte Carlo method to study the spin-(1/2) Heisenberg model with antiferromagnetic couplings J{sub 1} and J{sub 2} on the square lattice. In particular, the low-energy constants spin stiffness {rho}{sub s}, staggered magnetization M{sub s}, and spin wave velocity c are determined by fitting the Monte Carlo data to the predictions of magnon chiral perturbation theory. Further, the spin stiffnesses {rho}{sub s1} and {rho}{sub s2} as a function of the ratio J{sub 2}/J{sub 1} of the couplings are investigated in detail. Although we find good agreement between our results and those obtained by the series expansion method in the weakly anisotropic regime, for strong anisotropy we observe discrepancies.
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Perfetti, Christopher M.; Rearden, Bradley T.
2016-03-01
The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system, developed over the last decade, have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
Rota, R.; Casulleras, J.; Mazzanti, F.; Boronat, J.
2015-03-21
We present a method based on the path integral Monte Carlo formalism for the calculation of ground-state time correlation functions in quantum systems. The key point of the method is the consideration of time as a complex variable whose phase δ acts as an adjustable parameter. By using high-order approximations for the quantum propagator, it is possible to obtain Monte Carlo data all the way from purely imaginary time to δ values near the limit of real time. As a consequence, it is possible to infer accurately the spectral functions using simple inversion algorithms. We test this approach in the calculation of the dynamic structure function S(q, ω) of two one-dimensional model systems, harmonic and quartic oscillators, for which S(q, ω) can be exactly calculated. We notice a clear improvement in the calculation of the dynamic response with respect to the common approach based on the inverse Laplace transform of the imaginary-time correlation function.
Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter
2014-09-15
Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO{sub 2})]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten's kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO{sub 2}), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO{sub 2} were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO{sub 2} distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. 
For larger tumors, the oxygen distributions became truncated in the lower end, due to anoxia, but smaller tumors showed undisturbed oxygen distributions. The six different models with correlated parameters generated three classes of oxygen distributions. The first was a hypothetical, negative covariance between vessel proximity and pO{sub 2} (VPO-C scenario); the second was a hypothetical positive covariance between vessel proximity and pO{sub 2} (VPO+C scenario); and the third was the hypothesis of no correlation between vessel proximity and pO{sub 2} (UP scenario). The VPO-C scenario produced a distinctly different oxygen distribution than the two other scenarios. The shape of the VPO-C scenario was similar to that of the nonvariable DOC model, and the larger the tumor, the greater the similarity between the two models. For all simulations, the mean oxygen tension decreased and the hypoxic fraction increased with tumor size. The absorbed dose required for definitive tumor control was highest for the VPO+C scenario, followed by the UP and VPO-C scenarios. Conclusions: A novel MC algorithm was presented which simulated oxygen distributions and radiation response for various biological parameter values. The analysis showed that the VPO-C scenario generated a clearly different oxygen distribution from the VPO+C scenario; the former exhibited a lower hypoxic fraction and higher radiosensitivity. In future studies, this modeling approach might be valuable for qualitative analyses of factors that affect oxygen distribution as well as analyses of specific experimental and clinical situations.
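The variable model described above samples oxygen tension by trilinear interpolation in a 3-D dataset indexed by blood velocity, vessel proximity, and inflowing pO{sub 2}, with three settings per variable. A minimal sketch of that lookup, with hypothetical axis settings and grid values:

```python
import numpy as np

def trilinear(grid, axes, point):
    """Trilinear interpolation of `grid` (3-D array) at `point`,
    where `axes` gives the coordinate values along each dimension."""
    idx, frac = [], []
    for ax, p in zip(axes, point):
        j = np.clip(np.searchsorted(ax, p) - 1, 0, len(ax) - 2)
        idx.append(j)
        frac.append((p - ax[j]) / (ax[j + 1] - ax[j]))
    v = 0.0
    for corner in range(8):                  # weight the 8 surrounding corners
        w, g = 1.0, []
        for d in range(3):
            bit = (corner >> d) & 1
            w *= frac[d] if bit else (1.0 - frac[d])
            g.append(idx[d] + bit)
        v += w * grid[tuple(g)]
    return v

axes = [np.array([0.5, 1.0, 2.0])] * 3       # three settings per variable (hypothetical)
grid = np.fromfunction(lambda i, j, k: 10.0 * (i + j + k), (3, 3, 3))
```

Sampling the three input variables (independently, or with the hypothesized covariances) and evaluating this lookup per voxel is what turns the 27-point simulation dataset into a continuous oxygen distribution.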
Statistical Exploration of Electronic Structure of Molecules from Quantum Monte-Carlo Simulations
Prabhat, Mr; Zubarev, Dmitry; Lester, Jr., William A.
2010-12-22
In this report, we present results from analysis of Quantum Monte Carlo (QMC) simulation data with the goal of determining internal structure of a 3N-dimensional phase space of an N-electron molecule. We are interested in mining the simulation data for patterns that might be indicative of the bond rearrangement as molecules change electronic states. We examined simulation output that tracks the positions of two coupled electrons in the singlet and triplet states of an H2 molecule. The electrons trace out a trajectory, which was analyzed with a number of statistical techniques. This project was intended to address the following scientific questions: (1) Do high-dimensional phase spaces characterizing electronic structure of molecules tend to cluster in any natural way? Do we see a change in clustering patterns as we explore different electronic states of the same molecule? (2) Since it is hard to understand the high-dimensional space of trajectories, can we project these trajectories to a lower dimensional subspace to gain a better understanding of patterns? (3) Do trajectories inherently lie in a lower-dimensional manifold? Can we recover that manifold? After extensive statistical analysis, we are now in a better position to respond to these questions. (1) We definitely see clustering patterns, and differences between the H2 and H2tri datasets. These are revealed by the pamk method in a fairly reliable manner and can potentially be used to distinguish bonded and non-bonded systems and get insight into the nature of bonding. (2) Projecting to a lower dimensional subspace ({approx}4-5) using PCA or Kernel PCA reveals interesting patterns in the distribution of scalar values, which can be related to the existing descriptors of electronic structure of molecules. 
Also, these results can be immediately used to develop robust tools for analysis of noisy data obtained during QMC simulations. (3) All dimensionality reduction and estimation techniques that we tried indicate that one needs 4 or 5 components to account for most of the variance in the data; hence this 5D dataset does not necessarily lie on a well-defined, low-dimensional manifold. In terms of specific clustering techniques, K-means was generally useful in exploring the dataset. The partition around medoids (pam) technique produced the most definitive results for our data, showing distinctive patterns for both a sample of the complete data and the time series. The gap statistic with the Tibshirani criterion did not provide any distinction across the two datasets. The gap statistic with the DandF criterion, model-based clustering, and hierarchical modeling simply failed to run on our datasets. The vanilla PCA technique, however, was successful in handling our entire dataset. PCA revealed some interesting patterns in the scalar value distribution. Kernel PCA techniques (vanilladot, RBF, polynomial) and MDS failed to run on the entire dataset, or even a significant fraction of it, so we resorted to creating an explicit feature map followed by conventional PCA. Clustering using K-means and PAM in the new basis set seems to produce promising results. Understanding the new basis set in the scientific context of the problem is challenging, and we are currently working to further examine and interpret the results.
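The PCA step described above can be sketched with a plain SVD: center the data, take the singular value decomposition, and count how many components are needed for most of the variance. The data here are synthetic two-cluster points, not the actual QMC walker trajectories:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for trajectory samples: two Gaussian clusters in 6-D
X = np.concatenate([rng.normal(0, 1, (500, 6)),
                    rng.normal(3, 1, (500, 6))])

Xc = X - X.mean(axis=0)                      # center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)              # variance fraction per component

# number of components needed to reach 90% of the variance
k = int(np.searchsorted(np.cumsum(explained), 0.90) + 1)
scores = Xc @ Vt[:2].T                       # 2-D projection for plotting/clustering
```

With clustered data, the leading component captures the between-cluster separation, and the 2-D `scores` projection is the kind of low-dimensional view on which K-means or PAM can then be run.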
Monte Carlo calculations of electron beam quality conversion factors for several ion chamber types
Muir, B. R.; Rogers, D. W. O.
2014-11-01
Purpose: To provide a comprehensive investigation of electron beam reference dosimetry using Monte Carlo simulations of the response of 10 plane-parallel and 18 cylindrical ion chamber types. Specific emphasis is placed on the determination of the optimal shift of the chambers' effective point of measurement (EPOM) and beam quality conversion factors. Methods: The EGSnrc system is used for calculations of the absorbed dose to gas in ion chamber models and the absorbed dose to water as a function of depth in a water phantom on which cobalt-60 and several electron beam source models are incident. The optimal EPOM shifts of the ion chambers are determined by comparing calculations of R{sub 50} converted from I{sub 50} (calculated using ion chamber simulations in phantom) to R{sub 50} calculated using simulations of the absorbed dose to water vs depth in water. Beam quality conversion factors are determined as the calculated ratio of the absorbed dose to water to the absorbed dose to air in the ion chamber at the reference depth in a cobalt-60 beam to that in electron beams. Results: For most plane-parallel chambers, the optimal EPOM shift is inside of the active cavity but different from the shift determined with water-equivalent scaling of the front window of the chamber. These optimal shifts for plane-parallel chambers also reduce the scatter of beam quality conversion factors, k{sub Q}, as a function of R{sub 50}. The optimal shift of cylindrical chambers is found to be less than the 0.5 r{sub cav} recommended by current dosimetry protocols. In most cases, the values of the optimal shift are close to 0.3 r{sub cav}. Values of k{sub ecal} are calculated and compared to those from the TG-51 protocol, and differences are explained using accurate individual correction factors for a subset of the ion chambers investigated. High-precision fits to beam quality conversion factors normalized to unity in a beam with R{sub 50} = 7.5 cm (k{sub Q}{sup '}) are provided.
These factors avoid the use of gradient correction factors as used in the TG-51 protocol although a chamber dependent optimal shift in the EPOM is required when using plane-parallel chambers while no shift is needed with cylindrical chambers. The sensitivity of these results to parameters used to model the ion chambers is discussed and the uncertainty related to the practical use of these results is evaluated. Conclusions: These results will prove useful as electron beam reference dosimetry protocols are being updated. The analysis of this work indicates that cylindrical ion chambers may be appropriate for use in low-energy electron beams but measurements are required to characterize their use in these beams.
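The normalization of beam quality conversion factors to unity at R{sub 50} = 7.5 cm can be sketched as a smooth fit followed by division at the reference point. The k{sub Q} values below are hypothetical placeholders, not the paper's fitted data:

```python
import numpy as np

# Hypothetical kQ data versus R50 (cm) for one chamber type
R50 = np.array([2.0, 3.0, 4.0, 5.0, 7.5, 10.0])
kQ  = np.array([0.930, 0.922, 0.916, 0.911, 0.902, 0.896])

# Fit a quadratic in ln(R50) (an assumed smooth form, not the paper's fit
# function), then renormalize so the curve equals 1 at R50 = 7.5 cm.
coef = np.polyfit(np.log(R50), kQ, 2)
fit = np.poly1d(coef)
kQ_norm = fit(np.log(R50)) / fit(np.log(7.5))
```

Dividing out the value at the reference beam quality is what removes the chamber-specific absolute scale, leaving a relative curve that can be tabulated against R50 alone.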
Faught, A; Davidson, S; Kry, S; Ibbott, G; Followill, D; Fontenot, J; Etzel, C
2014-06-01
Purpose: To commission a multiple-source Monte Carlo model of Elekta linear accelerator beams of nominal energies 6 MV and 10 MV. Methods: A three-source Monte Carlo model of Elekta 6 and 10 MV therapeutic x-ray beams was developed. Energy spectra of two photon sources, corresponding to primary photons created in the target and scattered photons originating in the linear accelerator head, were determined by an optimization process that fit the relative fluence of 0.25 MeV energy bins to the product of Fatigue-Life and Fermi functions to match calculated percent depth dose (PDD) data with that measured in a water tank for a 10 × 10 cm{sup 2} field. Off-axis effects were modeled by a 3rd-degree polynomial used to describe the off-axis half-value layer as a function of off-axis angle and by fitting the off-axis fluence to a piecewise linear function to match calculated dose profiles with measured dose profiles for a 40 × 40 cm{sup 2} field. The model was validated by comparing calculated PDDs and dose profiles for field sizes ranging from 3 × 3 cm{sup 2} to 30 × 30 cm{sup 2} to those obtained from measurements. A benchmarking study compared calculated data to measurements for IMRT plans delivered to anthropomorphic phantoms. Results: Along the central axis of the beam, 99.6% and 99.7% of all data passed the 2%/2 mm gamma criterion for the 6 and 10 MV models, respectively. Dose profiles at depths from d{sub max} through 25 cm agreed with measured data for 99.4% and 99.6% of data tested for the 6 and 10 MV models, respectively. A comparison of calculated dose to film measurement in a head and neck phantom showed an average of 85.3% and 90.5% of pixels passing a 3%/2 mm gamma criterion for the 6 and 10 MV models, respectively.
Conclusion: A Monte Carlo multiple-source model for Elekta 6 and 10MV therapeutic x-ray beams has been developed as a quality assurance tool for clinical trials.
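The 2%/2 mm and 3%/2 mm pass rates quoted above come from a gamma-index comparison, which combines a dose-difference tolerance with a distance-to-agreement tolerance. A minimal 1-D sketch of the criterion (global normalization; the dose curves are synthetic, not the study's data):

```python
import numpy as np

def gamma_1d(x_mm, ref, evl, dose_tol=0.02, dist_mm=2.0):
    """Global 1-D gamma index: for each reference point, the minimum over
    evaluated points of sqrt((dose diff / tol)^2 + (distance / DTA)^2)."""
    g = np.empty_like(ref)
    norm = ref.max()                             # global dose normalization
    for i, (xi, ri) in enumerate(zip(x_mm, ref)):
        dd = (evl - ri) / (dose_tol * norm)      # dose-difference term
        dx = (x_mm - xi) / dist_mm               # distance-to-agreement term
        g[i] = np.sqrt(dd**2 + dx**2).min()
    return g

x = np.arange(0, 50, 1.0)                        # 1 mm grid
ref = 100 * np.exp(-x / 30)                      # synthetic reference dose
evl = ref * 1.01                                 # evaluated dose: 1% offset
pass_rate = np.mean(gamma_1d(x, ref, evl) <= 1.0)
```

A point passes when gamma <= 1; here a uniform 1% dose offset stays well inside the 2%/2 mm ellipse, so every point passes. Clinical implementations extend the same distance/dose search to 2-D or 3-D grids.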
SU-D-19A-03: Monte Carlo Investigation of the Mobetron to Perform Modulated Electron Beam Therapy
Emam, I; Eldib, A; Hosini, M; AlSaeed, E; Ma, C
2014-06-01
Purpose: Modulated electron radiotherapy (MERT) has been proposed as a means of delivering conformal dose to shallow tumors while sparing distal structures and surrounding tissues. In intraoperative radiotherapy (IORT) utilizing the Mobetron, an applicator is placed as closely as possible to the suspected cancerous tissues to be treated. In this study we investigate the characteristics of Mobetron electron beams collimated by an in-house prototype electron multileaf collimator (eMLC) and its feasibility for MERT. Methods: An IntraOp Mobetron, dedicated to performing radiotherapy during surgery, was used in the study. It provides several energies (6, 9, and 12 MeV). Dosimetry measurements were performed to obtain percentage depth dose (PDD) curves and profiles for a 10-cm diameter applicator using the PTW MP3/XS 3D-scanning system and the Semiflex ion chamber. The MCBEAM/MCSIM Monte Carlo codes were used for the treatment head simulation and phantom dose calculation. The design of electron beam collimation by an eMLC attached to the Mobetron head was also investigated using Monte Carlo simulations. Isodose distributions resulting from eMLC-collimated beams were compared to those collimated using cutouts. The design for our Mobetron eMLC is based on our previous experience with eMLCs designed for clinical linear accelerators. For the Mobetron, the eMLC is attached to the end of a spacer-mounted rectangular applicator at 50 cm SSD. Steel will be used as the leaf material because other materials would be toxic and unsuitable for intraoperative applications. Results: Good agreement (within 2%) was achieved between measured and calculated PDD curves and profiles for all available energies. Dose distributions provided by the eMLC showed reasonable agreement (within 3%/1 mm) with those obtained by conventional cutouts. Conclusion: Monte Carlo simulations are capable of modeling Mobetron electron beams with reliable accuracy.
An eMLC attached to the Mobetron treatment head will allow better treatment options with these machines.
Leon, Stephanie M.; Wagner, Louis K.; Brateman, Libby F.
2014-11-01
Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28 to 0.51, with absolute differences averaging approximately 0.01.
The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the physical measurements published previously and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of a scatter-correction software program can be simplified by ignoring the variations in density among breast tissues.
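The ring-tally MRE fit described in this abstract can be sketched in outline. The following is a minimal illustration assuming the single-exponential radial PSF mentioned above; the MRE value, sample count, and binning are arbitrary stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed radial PSF: p(r) proportional to r*exp(-r/MRE) in 2D.
mre_true = 5.0  # mm, hypothetical value
# A gamma(shape=2, scale=MRE) draw has exactly this radial density.
radii = rng.gamma(shape=2.0, scale=mre_true, size=200_000)

# Ring-shaped tallies around the impulse, as in the MCNPX setup.
edges = np.linspace(0.0, 40.0, 41)
counts, _ = np.histogram(radii, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])

# Ring counts are approximately const*r*exp(-r/MRE)*dr, so log(counts/r)
# is linear in r with slope -1/MRE; a least-squares line recovers the MRE.
mask = counts > 0
slope, _ = np.polyfit(centers[mask], np.log(counts[mask] / centers[mask]), 1)
mre_fit = -1.0 / slope
```

A goodness-of-fit statistic comparing the binned tallies to the fitted PSF would then play the role of the values quoted in the Results.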
Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C.; Lim, T.-S.; Fann, Wunshain
2009-01-05
The number of negatively charged nitrogen-vacancy centers (N-V){sup -} in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single particle level. By taking into account the random dipole orientation of the multiple (N-V){sup -} fluorophores and simulating the probability distribution of their effective numbers (N{sub e}), we found that the actual number (N{sub a}) of the fluorophores is in linear correlation with N{sub e}, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized light, respectively. We determined N{sub a}=8{+-}1 for 28 nm FND particles prepared by 3 MeV proton irradiation.
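The correction factors of 1.8 and 1.2 quoted above follow from averaging dipole photoselection over random orientations. A short Monte Carlo sketch reproduces them, assuming a cos^2(theta) detection weight for linear polarization and sin^2(theta) for circular polarization (a standard simplification, not the paper's full simulation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Random dipole orientations: cos(theta) uniform on [-1, 1] (isotropic sphere)
cos_t = rng.uniform(-1.0, 1.0, n)

# Assumed per-emitter detected intensity for each polarization state
i_lin = cos_t**2          # linear polarization
i_circ = 1.0 - cos_t**2   # circular polarization (sin^2)

# Effective number N_e = (sum I)^2 / sum I^2, so for many emitters the
# correction factor N_a / N_e tends to E[I^2] / E[I]^2.
factor_lin = np.mean(i_lin**2) / np.mean(i_lin) ** 2    # analytic value 9/5
factor_circ = np.mean(i_circ**2) / np.mean(i_circ) ** 2  # analytic value 6/5
```

The analytic moments (E[cos^2] = 1/3, E[cos^4] = 1/5, E[sin^2] = 2/3, E[sin^4] = 8/15) give exactly 1.8 and 1.2, matching the factors reported in the abstract.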
Sarrut, David; Université Lyon 1; Centre Léon Bérard; Bardiès, Manuel; Marcatili, Sara; Mauxion, Thibault; Boussion, Nicolas; Freud, Nicolas; Létang, Jean-Michel; Jan, Sébastien; Maigne, Lydia; Perrot, Yann; Pietrzyk, Uwe; Robert, Charlotte; and others
2014-06-15
In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform, based on the GEANT4 toolkit, for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described, including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. A very practical feature of GATE, namely that it is easy to model both a treatment and an imaging acquisition within the same framework, is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities.
Böcklin, Christoph; Baumann, Dirk; Fröhlich, Jürg
2014-02-14
A novel way to obtain three-dimensional fluence rate maps from Monte Carlo simulations of photon propagation is presented in this work. The propagation of light in a turbid medium is described by the radiative transfer equation and formulated in terms of radiance. For many applications, particularly in biomedical optics, the fluence rate is a more useful quantity; it is directly derived from the radiance by integrating over all directions. Contrary to the usual approach, which calculates the fluence rate from the absorbed photon power, the fluence rate in this work is calculated directly from the photon packet trajectories. The voxel-based algorithm works in arbitrary geometries and material distributions. It is shown that the new algorithm is more efficient and also works in materials with a low or even zero absorption coefficient. The capabilities of the new algorithm are demonstrated on a curved layered structure, where a non-scattering, non-absorbing layer is sandwiched between two highly scattering layers.
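The trajectory-based tally can be sketched as voxelized path-length accumulation divided by voxel volume, which stays well defined even at zero absorption. Geometry, step size, and units below are arbitrary assumptions, not the paper's implementation:

```python
import numpy as np

def accumulate_fluence(p0, direction, step, n_steps, grid_shape, voxel_size):
    """Tally the path length a photon packet travels in each voxel.
    Dividing by voxel volume gives the fluence contribution per packet,
    with no reference to absorbed power."""
    tally = np.zeros(grid_shape)
    pos = np.asarray(p0, dtype=float).copy()
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    for _ in range(n_steps):
        idx = tuple(np.floor(pos / voxel_size).astype(int))
        if all(0 <= i < n for i, n in zip(idx, grid_shape)):
            tally[idx] += step      # path-length tally; absorption not needed
        pos = pos + step * d
    return tally / voxel_size**3    # fluence per packet, arbitrary units

# A single packet crossing a 10x10x10 grid of 0.1-unit voxels along z
tally = accumulate_fluence((0.55, 0.55, 0.05), (0, 0, 1), 0.01, 100,
                           (10, 10, 10), 0.1)
total_path = tally.sum() * 0.1**3   # recovers the in-grid path length
```

A real implementation would split the step exactly at voxel boundaries and weight each segment by the packet weight; the fixed-step loop above only illustrates the accounting.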
Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.
Quantum Monte Carlo Study of the Ground-State Properties of a Fermi Gas in the BCS-BEC Crossover
Giorgini, S.; Astrakharchik, G. E.; Boronat, J.; Casulleras, J.
2006-11-07
The ground-state properties of a two-component Fermi gas with attractive short-range interactions are calculated using the fixed-node diffusion Monte Carlo method. The interaction strength is varied over a wide range by tuning the value of the s-wave scattering length of the two-body potential. We calculate the ground-state energy per particle and we characterize the equation of state of the system. Off-diagonal long-range order is investigated through the asymptotic behavior of the two-body density matrix. The condensate fraction of pairs is calculated in the unitary limit and on both sides of the BCS-BEC crossover.
Looking for Auger signatures in III-nitride light emitters: A full-band Monte Carlo perspective
Bertazzi, Francesco; Goano, Michele; Zhou, Xiangyu; Calciati, Marco; Ghione, Giovanni; Matsubara, Masahiko; Bellotti, Enrico
2015-02-09
Recent experiments of electron emission spectroscopy (EES) on III-nitride light-emitting diodes (LEDs) have shown a correlation between droop onset and hot electron emission at the cesiated surface of the LED p-cap. The observed hot electrons have been interpreted as a direct signature of Auger recombination in the LED active region, as highly energetic Auger-excited electrons would be collected in long-lived satellite valleys of the conduction band so that they would not decay on their journey to the surface across the highly doped p-contact layer. We discuss this interpretation by using a full-band Monte Carlo model based on first-principles electronic structure and lattice dynamics calculations. The results of our analysis suggest that Auger-excited electrons cannot be unambiguously detected in the LED structures used in the EES experiments. Additional experimental and simulation work is necessary to unravel the complex physics of GaN cesiated surfaces.
Wirawan, Rahadi; Waris, Abdul; Djamal, Mitra; Handayani, Gunawan
2015-04-16
The spectrum of gamma energy absorption in a NaI crystal (scintillation detector) is the result of the interaction of gamma photons with the NaI crystal, and it is associated with the energy of the gamma photons incident on the detector. Through a simulation approach, we can perform an early observation of the gamma energy absorption spectrum in a scintillator crystal detector (NaI) before the experiment is conducted. In this paper, we present simulation results of the gamma energy absorption spectrum for energies of 100-700 keV (i.e., 297 keV, 400 keV, and 662 keV). The simulation was developed based on the concept of photon beam point source distribution and photon interaction cross sections with the Monte Carlo method. Our computational code successfully predicted the multiple energy peak absorption spectrum derived from multiple photon energy sources.
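A toy version of such a spectrum model is sketched below for a 662 keV source: each history either photoabsorbs (full-energy deposit) or Compton scatters once, with the scattered photon assumed to escape. The isotropic scattering angle and the 0.3 photoelectric fraction are simplifying assumptions, not the actual NaI cross sections:

```python
import numpy as np

rng = np.random.default_rng(2)
E0 = 662.0    # keV, incident photon energy
MEC2 = 511.0  # keV, electron rest energy

def deposit(e_in):
    """One interaction in the crystal: photoelectric absorption deposits
    the full energy; Compton scattering deposits the recoil-electron energy
    and the scattered photon escapes (single-interaction sketch)."""
    if rng.random() < 0.3:          # assumed photoelectric fraction
        return e_in
    cos_t = rng.uniform(-1.0, 1.0)  # isotropic angle, not Klein-Nishina
    e_scat = e_in / (1.0 + (e_in / MEC2) * (1.0 - cos_t))
    return e_in - e_scat

spectrum = np.array([deposit(E0) for _ in range(50_000)])

# Compton edge: maximum single-scatter deposit (about 478 keV at 662 keV)
compton_edge = E0 * (2 * E0 / MEC2) / (1 + 2 * E0 / MEC2)
```

Histogramming `spectrum` shows the full-energy peak at 662 keV and a Compton continuum ending at the edge; a realistic code would add Klein-Nishina angular sampling, multiple interactions, and detector resolution broadening.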
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
Pecchia, M.; D'Auria, F.; Mazzantini, O.
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of an obliquely inserted control rod on the neutron flux in order to validate the RELAP5-3D{sup C}/NESTLE three-dimensional neutron kinetics coupled thermal-hydraulics model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)
Zink, K.; Czarnecki, D.; Voigts-Rhetz, P. von; Looe, H. K.; Harder, D.
2014-11-01
Purpose: The electron fluence inside a parallel-plate ionization chamber positioned in a water phantom and exposed to a clinical electron beam deviates from the unperturbed fluence in water in the absence of the chamber. One reason for the fluence perturbation is the well-known inscattering effect, whose physical cause is the lack of electron scattering in the gas-filled cavity. Correction factors determined to correct for this effect have long been recommended. However, more recent Monte Carlo calculations have led to some doubt about the range of validity of these corrections. Therefore, the aim of the present study is to reanalyze the development of the fluence perturbation with depth and to review the function of the guard rings. Methods: Spatially resolved Monte Carlo simulations of the dose profiles within gas-filled cavities with various radii in clinical electron beams have been performed in order to determine the radial variation of the fluence perturbation in a coin-shaped cavity, to study the influences of the radius of the collecting electrode and of the width of the guard ring upon the indicated value of the ionization chamber formed by the cavity, and to investigate the development of the perturbation as a function of the depth in an electron-irradiated phantom. The simulations were performed for a primary electron energy of 6 MeV. Results: The Monte Carlo simulations clearly demonstrated a surprisingly large in- and outward electron transport across the lateral cavity boundary. This results in a strong influence of the depth-dependent development of the electron field in the surrounding medium upon the chamber reading. In the buildup region of the depth-dose curve, the in-out balance of the electron fluence is positive and shows the well-known dose oscillation near the cavity/water boundary. At the depth of the dose maximum the in-out balance is equilibrated, and in the falling part of the depth-dose curve it is negative, as shown here for the first time.
The influences of both the collecting electrode radius and the width of the guard ring reflect the deep radial penetration of the electron transport processes into the gas-filled cavities and the need for appropriate corrections of the chamber reading. New values for these corrections have been established in two forms, one converting the indicated value into the absorbed dose to water in the front plane of the chamber, the other converting it into the absorbed dose to water at the depth of the effective point of measurement of the chamber. In the Appendix, the in-out imbalance of electron transport across the lateral cavity boundary is demonstrated in the approximation of classical small-angle multiple scattering theory. Conclusions: The in-out electron transport imbalance at the lateral boundaries of parallel-plate chambers in electron beams has been studied with Monte Carlo simulation over a range of depths in water, and new correction factors, covering all depths and implementing the effective point of measurement concept, have been developed.
Cashmore, Jason; Golubev, Sergey; Dumont, Jose Luis; Sikora, Marcin; Alber, Markus; Ramtohul, Mark
2012-06-15
Purpose: A linac delivering intensity-modulated radiotherapy (IMRT) can benefit from a flattening filter free (FFF) design, which offers higher dose rates and reduced accelerator head scatter compared with conventional (flattened) delivery. This reduction in scatter simplifies beam modeling, and combining a Monte Carlo dose engine with a FFF accelerator could potentially increase dose calculation accuracy. The objective of this work was to model a FFF machine using an adapted version of a previously published virtual source model (VSM) for Monte Carlo calculations and to verify its accuracy. Methods: An Elekta Synergy linear accelerator operating at 6 MV has been modified to enable irradiation both with and without the flattening filter (FF). The VSM has been incorporated into a commercially available treatment planning system (Monaco™ v3.1) as VSM 1.6. Dosimetric data were measured to commission the treatment planning system (TPS), and the VSM was adapted to account for the lack of angular differential absorption and general beam hardening. The model was then tested using standard water phantom measurements and also by creating IMRT plans for a range of clinical cases. Results: The results show that the VSM implementation handles the FFF beams very well, with an uncertainty between measurement and calculation of <1%, which is comparable to conventional flattened beams. All IMRT beams passed standard quality assurance tests with >95% of all points passing gamma analysis ({gamma} < 1) using a 3%/3 mm tolerance. Conclusions: The virtual source model for flattened beams was successfully adapted to flattening filter free beam production. Water phantom and patient-specific QA measurements show excellent results, and comparisons of IMRT plans generated in conventional and FFF mode are underway to assess dosimetric uncertainties and possible improvements in dose calculation and delivery.
Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal
2014-05-12
The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD{sub B}) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD{sub B}. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo mouse model of stroke. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD{sub B} (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noise to the DCS data resulted in αD{sub B} variations, the mean errors in extracting αD{sub B} were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD{sub B} using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those from the simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
Tracking in full Monte Carlo detector simulations of 500 GeV e{sup +}e{sup {minus}} collisions
Ronan, M.T.
2000-03-01
In full Monte Carlo simulation models of future Linear Collider detectors, charged tracks are reconstructed from 3D space points in central tracking detectors. The track reconstruction software is being developed for detailed physics studies that take realistic detector resolution and background modeling into account. At this stage of the analysis, reference tracking efficiencies and resolutions for ideal detector conditions are presented. High performance detectors are being designed to carry out precision studies of e{sup +}e{sup {minus}} annihilation events in the energy range of 500 GeV to 1.5 TeV. Physics processes under study include Higgs mass and branching ratio measurements, measurement of possible manifestations of Supersymmetry (SUSY), precision Electro-Weak (EW) studies, and searches for new phenomena beyond current expectations. The relatively low-background machine environment at future Linear Colliders will allow precise measurements if proper consideration is given to the effects of the backgrounds on these studies. In current North American design studies, full Monte Carlo detector simulation and analysis is being used to allow detector optimization taking into account realistic models of machine backgrounds. In this paper the design of tracking software that is being developed for full detector reconstruction is discussed. In this study, charged tracks are found from simulated space point hits, allowing for the straightforward addition of background hits and for the accounting of missing information. The status of the software development effort is quantified by some reference performance measures, which will be modified by future work to include background effects.
Sadeghi, Mahdi; Taghdiri, Fatemeh; Hamed Hosseini, S.; Tenreiro, Claudio
2010-10-15
Purpose: The formalism recommended by Task Group 60 (TG-60) of the American Association of Physicists in Medicine (AAPM) is applicable to {beta} sources. The radioactive biocompatible and biodegradable {sup 153}Sm glass seed without encapsulation is a {beta}{sup -} emitter radionuclide with a short half-life that delivers a high dose rate to the tumor in the millimeter range. This study presents the results of Monte Carlo calculations of the dosimetric parameters for the {sup 153}Sm brachytherapy source. Methods: Version 5 of the MCNP Monte Carlo radiation transport code was used to calculate two-dimensional dose distributions around the source. The dosimetric parameters of the AAPM TG-60 recommendations, including the reference dose rate, the radial dose function, the anisotropy function, and the one-dimensional anisotropy function, were obtained. Results: The dose rate at the reference point was estimated to be 9.21{+-}0.6 cGy h{sup -1} {mu}Ci{sup -1}. Due to the low energy of the betas emitted from {sup 153}Sm sources, the dose fall-off profile is sharper than those of other beta-emitting sources. The dosimetric parameters calculated in this study are compared to those of several beta- and photon-emitting seeds. Conclusions: The results show the advantage of the {sup 153}Sm source in comparison with the other sources because of the rapid dose fall-off of the beta rays and the high dose rate at short distances from the seed. The results should be helpful in the development of radioactive implants using {sup 153}Sm seeds for brachytherapy treatment.
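For reference, the radial dose function tabulated in such TG-60/TG-43-style studies has the form g(r) = [D(r)/D(r0)]·[G(r0)/G(r)]. With a point-source geometry function G(r) = 1/r² and the TG-60 beta reference distance r0 = 2 mm, it can be computed as below; the dose table here is a toy inverse-square-times-exponential model, not the {sup 153}Sm data:

```python
import numpy as np

R0 = 0.2  # cm, TG-60 reference distance for beta sources

def radial_dose_function(r, dose, r0=R0):
    """g(r) = [D(r)/D(r0)] * [G(r0)/G(r)] with the point-source
    geometry function G(r) = 1/r^2 (TG-60/TG-43 formalism)."""
    d0 = np.interp(r0, r, dose)
    return (dose / d0) * (r / r0) ** 2

# Toy dose table: inverse-square fall-off times exponential attenuation
mu = 10.0  # 1/cm, hypothetical effective attenuation for a beta emitter
r = np.linspace(0.05, 1.0, 96)
dose = np.exp(-mu * r) / r**2

g = radial_dose_function(r, dose)
# By construction g(r) = exp(-mu*(r - R0)), so g(R0) = 1
```

Dividing out the geometry factor isolates the medium's attenuation and scatter, which is why g(r) equals 1 at the reference distance by definition.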
TU-F-18A-03: Improving Tissue Segmentation for Monte Carlo Dose Calculation Using DECT Data
Di Salvio, A; Bedwani, S; Carrier, J
2014-06-15
Purpose: To develop a new segmentation technique using dual energy CT (DECT) to overcome limitations related to segmentation from a standard Hounsfield unit (HU) to electron density (ED) calibration curve. Both methods are compared with a Monte Carlo analysis of dose distribution. Methods: DECT allows a direct calculation of both ED and effective atomic number (EAN) within a given voxel. The EAN is here defined as a function of the total electron cross-section of a medium. These values can be effectively acquired using a calibrated method from scans at two different energies. A prior stoichiometric calibration on a Gammex RMI phantom allows us to find the parameters to calculate EAN and ED within a voxel. Scans from a Siemens SOMATOM Definition Flash dual source system provided the data for our study. A Monte Carlo analysis compares dose distributions simulated by DOSXYZnrc, considering a head phantom defined by both segmentation techniques. Results: Results from depth dose and dose profile calculations show that materials with different atomic compositions but similar EAN present differences of less than 1%. Therefore, it is possible to define a short list of basis materials from which density can be adapted to imitate the interaction behavior of any tissue. Comparison of the dose distributions on both segmentations shows a difference of 50% in dose in areas surrounding bone at low energy. Conclusion: The presented segmentation technique allows a more accurate medium definition in each voxel, especially in areas of tissue transition. Since the behavior of human tissues is highly sensitive at low energies, this reduces the errors on the calculated dose distribution. This method could be further developed to optimize tissue characterization based on anatomic site.
Zheng, Y; Singh, H; Islam, M
2014-06-01
Purpose: Output dependence on field size for uniform scanning beams, and the accuracy of treatment planning system (TPS) calculations, are not well studied. The purpose of this work is to investigate the dependence of output on field size for uniform scanning beams and compare it among TPS calculation, measurements, and Monte Carlo simulations. Methods: Field size dependence was studied using various field sizes from 2.5 cm to 10 cm in diameter. The field size factor was studied for a number of proton range and modulation combinations based on the output at the center of the spread out Bragg peak, normalized to a 10 cm diameter field. Three methods were used and compared in this study: 1) TPS calculation, 2) ionization chamber measurement, and 3) Monte Carlo simulation. The XiO TPS (Elekta, St. Louis) was used to calculate the output factor using a pencil beam algorithm; a pinpoint ionization chamber was used for measurements; and the Fluka code was used for Monte Carlo simulations. Results: The field size factor varied with proton beam parameters, such as range, modulation, and calibration depth, and could decrease by over 10% from a 10 cm to a 3 cm diameter field for a large-range proton beam. The XiO TPS predicted the field size factor relatively well at large field sizes, but could differ from measurements by 5% or more for small-field, large-range beams. Monte Carlo simulations predicted the field size factor within 1.5% of measurements. Conclusion: The output factor can vary significantly with field size and needs to be accounted for to achieve accurate proton beam delivery. This is especially important for small-field beams such as in stereotactic proton therapy, where the field size dependence is large and TPS calculation is inaccurate. Measurements or Monte Carlo simulations are recommended for output determination in such cases.
Retraying and revamp double big LPG fractionator's capacity
Sasson, R. (Friendswood, TX); Pate, R.
1993-08-02
Enterprise operates two LPG fractionation units at Mont Belvieu: the Seminole unit and the West Texas unit. In 1985, Nye Engineering Inc., Friendswood, Texas, designed improvements to expand the Seminole plant from 60,000 b/d of C[sub 2]+ feed to 90,000 b/d. The primary modifications made to increase the West Texas plant's capacity and reduce fuel consumption were the following: retraying the deethanizer and depropanizer columns with new High Capacity Nye Trays; lowering the pressure in the deethanizer and depropanizer to improve the separating efficiency of the columns; replacing the debutanizer with a high-pressure column that rejects its condensing heat as reboil for the deethanizer; adjusting the feed temperature to balance the load in the top and bottom of the depropanizer column to prevent premature flooding in one section of the tower; and installing convection heaters to recover existing stack gas heat into the process. In conjunction with the capacity expansion, there was a strong incentive to improve the fuel efficiency of the unit. The modifications are described.
Chibani, Omar; Ma, Charlie C-M
2014-05-15
Purpose: To present a new accelerated Monte Carlo code for CT-based dose calculations in high dose rate (HDR) brachytherapy. The new code (HDRMC) accounts for both tissue and nontissue heterogeneities (applicator and contrast medium). Methods: HDRMC uses a fast ray-tracing technique and detailed physics algorithms to transport photons through a 3D mesh of voxels representing the patient anatomy with applicator and contrast medium included. A precalculated phase space file for the {sup 192}Ir source is used as the source term. HDRMC is calibrated to calculate absolute dose for real plans. A postprocessing technique is used to include the exact density and composition of nontissue heterogeneities in the 3D phantom. Dwell positions and angular orientations of the source are reconstructed using data from the treatment planning system (TPS). Structure contours are also imported from the TPS to recalculate dose-volume histograms. Results: HDRMC was first benchmarked against the MCNP5 code for a single source in homogeneous water and for a loaded gynecologic applicator in water. The accuracy of the voxel-based applicator model used in HDRMC was also verified by comparing 3D dose distributions and dose-volume parameters obtained using 1-mm{sup 3} versus 2-mm{sup 3} phantom resolutions. HDRMC can calculate the 3D dose distribution for a typical HDR cervix case with 2-mm resolution in 5 min on a single CPU. Examples of heterogeneity effects for two clinical cases (cervix and esophagus) were demonstrated using HDRMC. Neglecting tissue heterogeneity for the esophageal case leads to overestimation of the CTV D90, CTV D100, and spinal cord maximum dose by 3.2%, 3.9%, and 3.6%, respectively. Conclusions: A fast Monte Carlo code for CT-based dose calculations, which does not require a prebuilt applicator model, has been developed for HDR brachytherapy treatments that use CT-compatible applicators.
Tissue and nontissue heterogeneities should be taken into account in modern HDR brachytherapy planning.
Scaillet, S.; Feraud, G.; Lagabrielle, Y.; Ballevre, M.; Ruffet, G.
1990-08-01
{sup 40}Ar/{sup 39}Ar laser-probe dating of phengitic micas has been carried out by step-heating and spot-fusion procedures. These micas represent successive deformation stages in the structural evolution of the internal Dora Maira nappe, western Alps. Single phengites from a gneiss affected by a single ductile strain under retrogressive conditions (sample 99.1) display complete isotopic resetting with nearly homogeneous intracrystalline Ar distribution and yield plateau ages of about 40 Ma. Small clusters of phengites from an earlier foliation were selected from a polydeformed mica schist (sample PTX3). They show a partial isotopic resetting in response to overprinting during the retrogressive deformation stage, with a concentric age zoning from 68 Ma on the rim to 87 Ma in the core along one cleavage plane. This zonation is fully consistent with the laser-derived discordant age spectrum, which ranges from 40 to 90 Ma from low to high temperatures. According to the deformation history of both samples, these preliminary data suggest a deformation control on Ar migration during recrystallization processes, and they are consistent with the timing of the collisional evolution previously reported for southern Dora Maira units. This study shows that the {sup 40}Ar/{sup 39}Ar continuous laser-probe dating technique produces data accurate enough to discriminate several tectonometamorphic episodes recorded in single hand samples.
Prasad, Manish; Conforti, Patrick F.; Garrison, Barbara J.
2007-08-28
The coarse grained chemical reaction model is enhanced to build a molecular dynamics (MD) simulation framework with an embedded Monte Carlo (MC) based reaction scheme. The MC scheme utilizes predetermined reaction chemistry, energetics, and rate kinetics of materials to incorporate chemical reactions occurring in a substrate into the MD simulation. The kinetics information is utilized to set the probabilities for the types of reactions to perform based on radical survival times and reaction rates. Implementing a reaction involves changing the reactant species types, which alters their interaction potentials and thus produces the required energy change. We discuss the application of this method to study the initiation of ultraviolet laser ablation in poly(methyl methacrylate). The use of this scheme enables the modeling of all possible photoexcitation pathways in the polymer. It also permits a direct study of the role of thermal, mechanical, and chemical processes that can set off ablation. We demonstrate that the role of laser induced heating, thermomechanical stresses, pressure wave formation and relaxation, and thermochemical decomposition of the polymer substrate can be investigated directly by suitably choosing the potential energy and chemical reaction energy landscape. The results highlight the usefulness of such a modeling approach by showing that various processes in polymer ablation are intricately linked leading to the transformation of the substrate and its ejection. The method, in principle, can be utilized to study systems where chemical reactions are expected to play a dominant role or interact strongly with other physical processes.
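The rate-weighted reaction selection described above is essentially a kinetic Monte Carlo (Gillespie-type) step. A minimal sketch follows; the channel names and rate constants are hypothetical stand-ins for the predetermined kinetics table, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical reaction channels with rate constants (arbitrary units)
rates = {"channel_A": 4.0, "channel_B": 1.0, "recombination": 5.0}

def pick_reaction(rates, rng):
    """Gillespie-style selection: each channel is chosen with probability
    proportional to its rate, and the waiting time (a stand-in for the
    radical survival time) is exponential in the total rate."""
    names = list(rates)
    k = np.array([rates[n] for n in names])
    total = k.sum()
    choice = names[rng.choice(len(names), p=k / total)]
    dt = rng.exponential(1.0 / total)
    return choice, dt

counts = {n: 0 for n in rates}
for _ in range(100_000):
    name, _ = pick_reaction(rates, rng)
    counts[name] += 1
# Selection frequencies converge to the rate ratios, here 0.4 / 0.1 / 0.5
```

In the embedded MD/MC scheme, the selected reaction would then be implemented by swapping the reactant species types so that the interaction potentials supply the corresponding energy change.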
Lin, J. Y. Y. [California Institute of Technology, Pasadena]; Aczel, Adam A. [ORNL]; Abernathy, Douglas L. [ORNL]; Nagler, Stephen E. [ORNL]; Buyers, W. J. L. [National Research Council of Canada]; Granroth, Garrett E. [ORNL]
2014-01-01
Recently, an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A. A. Aczel et al., Nature Communications 3, 1124 (2012)]. These modes are well described by 3D isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for the nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states (PDOS), and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximately Q-independent background to the spectrum at the oscillator mode positions. Temperature dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T-dependence of the scattering from these modes is strongly influenced by the uranium lattice.
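For a 3D isotropic quantum harmonic oscillator at low temperature, the relative intensities of the equally spaced modes follow a Poisson distribution in Q²⟨u²⟩ — a standard neutron-scattering result, useful for checking such sample kernels. A small sketch; the Q and ⟨u²⟩ values are illustrative only:

```python
import math

def qho_mode_intensity(n, Q, u2):
    """Relative intensity of the n-th oscillator line for an isotropic
    harmonic oscillator at T = 0: Poisson-distributed in n with
    parameter Q^2 * <u^2> (textbook result)."""
    x = Q * Q * u2
    return x**n / math.factorial(n) * math.exp(-x)

# Illustrative numbers: Q in 1/Angstrom, mean-square displacement in Angstrom^2
intensities = [qho_mode_intensity(n, Q=8.0, u2=0.02) for n in range(5)]
```

The intensities sum to unity over all modes, and the most intense line sits near n ≈ Q²⟨u²⟩, which is why higher oscillator modes grow in at larger momentum transfer.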
Many-body ab-initio diffusion quantum Monte Carlo applied to the strongly correlated oxide NiO
Mitra, Chandrima; Krogel, Jaron T.; Santana, Juan A.; Reboredo, Fernando A.
2015-10-28
We present a many-body diffusion quantum Monte Carlo (DMC) study of the bulk and defect properties of NiO. We find excellent agreement with experimental values, within 0.3%, 0.6%, and 3.5% for the lattice constant, cohesive energy, and bulk modulus, respectively. The quasiparticle bandgap was also computed, and the DMC result of 4.72 (0.17) eV compares well with the experimental value of 4.3 eV. Furthermore, DMC calculations of excited states at the L, Z, and the gamma point of the Brillouin zone reveal a flat upper valence band for NiO, in good agreement with Angle Resolved Photoemission Spectroscopy results. To study defect properties, we evaluated the formation energies of the neutral and charged vacancies of oxygen and nickel in NiO. A formation energy of 7.2 (0.15) eV was found for the oxygen vacancy under oxygen rich conditions. For the Ni vacancy, we obtained a formation energy of 3.2 (0.15) eV under Ni rich conditions. These results confirm that NiO occurs as a p-type material with the dominant intrinsic vacancy defect being the Ni vacancy.
Kuss, M.; Markel, T.; Kramer, W.
2011-01-01
Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service-life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis will review distribution transformer load models developed in the IEC 60076 standard, and apply the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy, thus increasing costs to utilities and consumers. A Monte-Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate will be developed, based on the energy needs of plug-in vehicles loading a residential transformer.
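A Monte Carlo sweep of this kind can be sketched with the IEC 60076-7 relative-aging rule (aging rate doubles for every 6 °C of hot-spot temperature above 98 °C); the thermal model and all load numbers below are crude stand-ins invented for illustration, not the study's model:

```python
import random

def aging_factor(hotspot_c):
    """IEC 60076-7 relative aging rate: doubles every 6 C above 98 C."""
    return 2.0 ** ((hotspot_c - 98.0) / 6.0)

def simulate_day(n_vehicles, kva, charge_kw, rng):
    """One Monte Carlo trial: random evening charging start times produce a
    nightly hot-spot profile; return the day's mean relative aging.
    The linear temperature-rise model here is a toy, not the IEC thermal model."""
    base_c = 80.0                      # assumed evening hot-spot temperature
    hours = [base_c] * 24
    for _ in range(n_vehicles):
        start = rng.randint(18, 23)    # charging begins between 18:00 and 23:00
        for h in range(start, start + 4):          # assume a 4-hour charge
            hours[h % 24] += 10.0 * charge_kw / kva  # crude temperature rise
    return sum(aging_factor(t) for t in hours) / 24.0

rng = random.Random(0)
trials = [simulate_day(n_vehicles=4, kva=25, charge_kw=3.3, rng=rng)
          for _ in range(100)]
mean_aging = sum(trials) / len(trials)
```

Sweeping `n_vehicles`, `kva`, and `charge_kw` over grids, as the abstract describes, then yields an expected aging rate per configuration.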
G. S. Chang; R. C. Pederson
2005-07-01
Mixed oxide (MOX) test capsules prepared with weapons-derived plutonium have been irradiated to a burnup of 50 GWd/t. The MOX fuel was fabricated at Los Alamos National Laboratory by a master-mix process and has been irradiated in the Advanced Test Reactor (ATR) at the Idaho National Laboratory (INL). Previous withdrawals of the same fuel have occurred at 9, 21, 30, and 40 GWd/t. Oak Ridge National Laboratory (ORNL) manages this test series for the Department of Energy's Fissile Materials Disposition Program (FMDP). The fuel burnup analyses presented in this study were performed using MCWO, a well-developed tool that couples the Monte Carlo transport code MCNP with the isotope depletion and buildup code ORIGEN-2. MCWO analysis yields time-dependent and neutron-spectrum-dependent minor actinide and Pu concentrations for the ATR small I-irradiation test position. The purpose of this report is to validate both the Weapons-Grade Mixed Oxide (WG-MOX) test assembly model and the new fuel burnup analysis methodology by comparing the computed results against the neutron monitor measurements.
Reverse Monte Carlo simulation of Se{sub 80}Te{sub 20} and Se{sub 80}Te{sub 15}Sb{sub 5} glasses
Abdel-Baset, A. M.; Rashad, M.; Moharram, A. H.
2013-12-16
Two-dimensional Monte Carlo simulation is used to determine the total pair distribution functions g(r) for Se{sub 80}Te{sub 20} and Se{sub 80}Te{sub 15}Sb{sub 5} alloys, which are then used to assemble three-dimensional atomic configurations via reverse Monte Carlo simulation. The partial pair distribution functions g{sub ij}(r) indicate that the basic structural unit in the Se{sub 80}Te{sub 15}Sb{sub 5} glass is di-antimony tri-selenide units connected together through Se-Se and Se-Te chains. The structure of the Se{sub 80}Te{sub 20} alloy is a chain of Se-Te and Se-Se in addition to some rings of Se atoms.
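The reverse Monte Carlo step that assembles such configurations rests on a simple acceptance rule: atomic moves that improve the fit to the experimental g(r) are always accepted, worsening moves only probabilistically. A generic sketch (the paper's exact chi-squared weighting is not specified here):

```python
import math
import random

def rmc_accept(chi2_old, chi2_new, rng=random.random):
    """Reverse Monte Carlo acceptance rule: always accept moves that improve
    the fit between model and experimental g(r); accept worsening moves
    with probability exp(-(chi2_new - chi2_old) / 2)."""
    if chi2_new <= chi2_old:
        return True
    return rng() < math.exp(-(chi2_new - chi2_old) / 2.0)
```

Iterating random single-atom displacements under this rule drives the configuration toward agreement with the measured pair distribution functions without any interatomic potential.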
Kim, Jeongnim; Reboredo, Fernando A
2014-01-01
The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the one spanned by the lower energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
Su, L.; Du, X.; Liu, T.; Xu, X. G.
2013-07-01
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energy considered ranges from a few hundred keV to 30 MeV. Møller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the Continuously Slowing Down Approximation (CSDA). Photon transport was dealt with using the delta tracking method. Photoelectric effect, Compton scattering and pair production were modeled. Voxelised geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6x10{sup 6} histories of electrons were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a single CPU core. (authors)
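Delta (Woodcock) tracking, used here for photon transport, avoids ray-tracing voxel boundaries by sampling flight lengths with a majorant cross section and rejecting "virtual" collisions. A minimal sketch; the two-layer cross sections below are invented:

```python
import math
import random

def delta_track(sigma_of_x, sigma_max, x0=0.0, rng=random.random):
    """Woodcock (delta) tracking: sample flight lengths with the majorant
    cross section sigma_max, then accept a real collision at x with
    probability sigma(x)/sigma_max; rejections are virtual collisions."""
    x = x0
    while True:
        x += -math.log(rng()) / sigma_max   # flight to next tentative collision
        if rng() < sigma_of_x(x) / sigma_max:
            return x                        # real collision site

# Toy two-layer medium: sigma = 0.2/cm for x < 5 cm, 1.0/cm beyond
sigma = lambda x: 0.2 if x < 5.0 else 1.0
random.seed(2)
site = delta_track(sigma, sigma_max=1.0)
```

The appeal for GPU codes is that every particle runs the same branch-free sampling loop regardless of which voxel it is in, at the cost of extra virtual-collision samples in low-density regions.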
Dong, Han; Sharma, Diksha; Badano, Aldo
2014-12-15
Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments.
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
Interpretation of 3D void measurements with Tripoli4.6/JEFF3.1.1 Monte Carlo code
Blaise, P.; Colomba, A.
2012-07-01
The present work details the first analysis of the 3D void phase conducted during the EPICURE/UM17x17/7% mixed UOX/MOX configuration. This configuration is composed of a homogeneous central 17x17 MOX-7% assembly, surrounded by portions of 17x17 UO2 assemblies with guide-tubes. The void bubble is modelled by a small waterproof 5x5 fuel pin parallelepiped box of 11 cm height, placed in the centre of the MOX assembly. This bubble, initially placed at the core mid-plane, is then moved to different axial positions to study the evolution of the axial perturbation in the core. Then, to simulate the growth of this bubble in order to understand the effects of increased void fraction along the fuel pin, 3 and 5 bubbles have been stacked axially from the core mid-plane. The C/E comparisons obtained with the Monte Carlo code Tripoli4 for both radial and axial fission rate distributions are very satisfactory, in particular the reproduction of the very important flux gradients at the void/water interfaces, which change as the bubble is displaced along the z-axis. This demonstrates both the capability of the code and its library to reproduce this kind of situation, as well as the very good quality of the experimental results, confirming UM-17x17 as an excellent experimental benchmark for 3D code validation. This work has been performed within the frame of the V and V program for the future APOLLO3 deterministic code of CEA, starting in 2012, and its V and V benchmarking database. (authors)
Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)
2012-07-01
The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to GPU is usually very straightforward due to the 'embarrassingly parallel' nature of MC code. However, the situation is different for eigenvalue calculations, which are performed on a generation-by-generation basis, so thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations under simple geometries on a multi-GPU system. The specifics of algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of ~2 on a dual GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)
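The generation-by-generation structure that complicates GPU porting shows up even in a toy k-eigenvalue iteration: each generation must finish and its fission offspring be tallied before the next can start. A sketch with an invented infinite-medium model (real codes bank fission sites and renormalize the population; here the batch size is simply held fixed):

```python
import random

def run_generation(n_source, p_fission, nu, rng):
    """One fission generation in a toy infinite medium: each neutron is
    absorbed; with probability p_fission the absorption is a fission
    yielding nu new neutrons on average (analog sampling of the yield)."""
    offspring = 0
    for _ in range(n_source):
        if rng.random() < p_fission:
            offspring += int(nu) + (1 if rng.random() < nu - int(nu) else 0)
    return offspring

rng = random.Random(3)
n, k_estimates = 20000, []
for generation in range(10):
    offspring = run_generation(n, p_fission=0.4, nu=2.43, rng=rng)
    k_estimates.append(offspring / n)   # k estimate for this generation
k_mean = sum(k_estimates) / len(k_estimates)
```

With these invented constants the expected multiplication factor is p_fission × nu ≈ 0.97; on a GPU, the per-generation tally is exactly the synchronization point the abstract refers to.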
Talamo, A.; Gohar, Y. (Nuclear Engineering Division)
2011-05-12
This study investigates the performance of the YALINA Booster subcritical assembly, located in Belarus, during operation with high (90%), medium (36%), and low (21%) enriched uranium fuels in the assembly's fast zone. The YALINA Booster is a zero-power, subcritical assembly driven by a conventional neutron generator. It was constructed for the purpose of investigating the static and dynamic neutronics properties of accelerator driven subcritical systems, and to serve as a fast neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor-actinides. The first part of this study analyzes the assembly's performance with several fuel types. The MCNPX and MONK Monte Carlo codes were used to determine effective and source neutron multiplication factors, effective delayed neutron fraction, prompt neutron lifetime, neutron flux profiles and spectra, and neutron reaction rates produced from the use of three neutron sources: californium, deuterium-deuterium, and deuterium-tritium. In the latter two cases, the external neutron source operates in pulsed mode. The results discussed in the first part of this report show that the use of low enriched fuel in the fast zone of the assembly diminishes neutron multiplication. Therefore, the discussion in the second part of the report focuses on finding alternative fuel loading configurations that enhance neutron multiplication while using low enriched uranium fuel. It was found that arranging the interface absorber between the fast and the thermal zones in a circular rather than a square array is an effective method of operating the YALINA Booster subcritical assembly without downgrading neutron multiplication relative to the original value obtained with the use of the high enriched uranium fuels in the fast zone.
Wang, J.; Biasca, R.; Liewer, P.C.
1996-01-01
Although the existence of the critical ionization velocity (CIV) is known from laboratory experiments, no agreement has been reached as to whether CIV exists in the natural space environment. In this paper the authors move towards more realistic models of CIV and present the first fully three-dimensional, electromagnetic particle-in-cell Monte-Carlo collision (PIC-MCC) simulations of typical space-based CIV experiments. In their model, the released neutral gas is taken to be a spherical cloud traveling across a magnetized ambient plasma. Simulations are performed for neutral clouds with various sizes and densities. The effects of the cloud parameters on ionization yield, wave energy growth, electron heating, momentum coupling, and the three-dimensional structure of the newly ionized plasma are discussed. The simulations suggest that the quantitative characteristics of momentum transfer among the ion beam, neutral cloud, and plasma waves are the key indicator of whether CIV can occur in space. The missing factors in space-based CIV experiments may be the conditions necessary for a continuous enhancement of the beam ion momentum. For a typical shaped charge release experiment, favorable CIV conditions may exist only in a very narrow, intermediate spatial region some distance from the release point due to the effects of the cloud density and size. When CIV does occur, the newly ionized plasma from the cloud forms a very complex structure due to the combined forces from the geomagnetic field, the motion-induced emf, and the polarization. Hence the detection of CIV also critically depends on the sensor location. 32 refs., 8 figs., 2 tabs.
Chorin, Alexandre J.
2007-12-12
A sampling method for spin systems is presented. The spin lattice is written as the union of a nested sequence of sublattices, all but the last with conditionally independent spins, which are sampled in succession using their marginals. The marginals are computed concurrently by a fast algorithm; errors in the evaluation of the marginals are offset by weights. There are no Markov chains and each sample is independent of the previous ones; the cost of a sample is proportional to the number of spins (but the number of samples needed for good statistics may grow with array size). The examples include the Edwards-Anderson spin glass in three dimensions.
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
Xu, Y; Bai, T; Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X; Zhou, L
2014-06-15
Purpose: Scatter artifacts severely degrade image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to complete the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We studied the impact of the number of photon histories and the volume down-sampling factor on the accuracy of scatter estimation. Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds.
This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003)
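Steps 3-5 of the pipeline above exploit the smooth angular variation of scatter: estimates computed at sparse view angles are interpolated to all angles and then subtracted from the raw projections. A 1-D toy sketch with invented numbers, not the authors' code (it assumes the first angle is among the sparse samples):

```python
import math

def interpolate_sparse(values, sparse_idx, n_total):
    """Linearly interpolate scatter estimates computed at sparse view
    angles (step 4) onto all n_total projection angles."""
    out = []
    for i in range(n_total):
        lo = max(j for j in sparse_idx if j <= i)   # bracketing sparse angles
        hi = min(j for j in sparse_idx if j >= i)
        if lo == hi:
            out.append(values[sparse_idx.index(lo)])
        else:
            t = (i - lo) / (hi - lo)
            out.append((1 - t) * values[sparse_idx.index(lo)]
                       + t * values[sparse_idx.index(hi)])
    return out

# Toy smooth angular scatter signal, sampled at every 8th of 32 angles
n = 32
truth = [100 + 20 * math.sin(2 * math.pi * i / n) for i in range(n)]
sparse = list(range(0, n, 8)) + [n - 1]      # include last angle for bracketing
estimates = [truth[j] for j in sparse]       # step 3: MC at sparse angles
scatter_all = interpolate_sparse(estimates, sparse, n)   # step 4
corrected = [truth[i] - scatter_all[i] for i in range(n)]  # step 5: subtract
```

Because the residual after subtraction is just the interpolation error of a smooth signal, a modest number of simulated angles suffices, which is the basis of the 31-angle result quoted above.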
MO-G-BRF-05: Determining Response to Anti-Angiogenic Therapies with Monte Carlo Tumor Modeling
Valentinuzzi, D; Simoncic, U; Jeraj, R; Titz, B
2014-06-15
Purpose: Patient response to anti-angiogenic therapies with vascular endothelial growth factor receptor - tyrosine kinase inhibitors (VEGFR TKIs) is heterogeneous. This study investigates key biological characteristics that drive differences in patient response via Monte Carlo computational modeling capable of simulating tumor response to therapy with VEGFR TKI. Methods: VEGFR TKIs potently block receptors responsible for promoting angiogenesis in tumors. The model incorporates drug pharmacokinetic and pharmacodynamic properties, as well as patient-specific data of cellular proliferation derived from [18F]FLT-PET data. Sensitivity of tumor response was assessed for multiple parameters, including initial partial oxygen tension (pO{sub 2}), cell cycle time, daily vascular growth fraction, and daily vascular regression fraction. Results were benchmarked to clinical data (patients: 2 weeks on VEGFR TKI, followed by a 1-week drug holiday). The tumor pO{sub 2} was assumed to be uniform. Results: Among the investigated parameters, the simulated proliferation was most sensitive to the initial tumor pO{sub 2}. An initial change of 5 mmHg can already result in significantly different levels of proliferation. The model reveals that hypoxic tumors (pO{sub 2} ≤ 20 mmHg) show the highest decrease of proliferation, experiencing a mean FLT standardized uptake value (SUVmean) decrease of at least 50% at the end of the clinical trial (day 21). Oxygenated tumors (pO{sub 2} > 20 mmHg) show a transient SUV decrease (30-50%) at the end of the treatment with VEGFR TKI (day 14) but experience a rapid SUV rebound close to the pre-treatment SUV levels (70-110%) at the time of the drug holiday (day 14-21) - the phenomenon known as a proliferative flare.
Conclusion: The model's high sensitivity to initial pO{sub 2} clearly emphasizes the need for experimental assessment of the pretreatment tumor hypoxia status, as it might be predictive of response to antiangiogenic therapies and the occurrence of a proliferative flare. Experimental assessment of other model parameters would further improve understanding of patient response.
SU-E-T-585: Commissioning of Electron Monte Carlo in Eclipse Treatment Planning System for TrueBeam
Yang, X; Lasio, G; Zhou, J; Lin, M; Yi, B; Guerrero, M
2014-06-01
Purpose: To commission the electron Monte Carlo (eMC) algorithm in the Eclipse Treatment Planning System (TPS) for TrueBeam linacs, including the evaluation of dose calculation accuracy for small fields and oblique beams and comparison with the existing eMC model for Clinacs. Methods: Electron beam percent-depth-doses (PDDs) and profiles with and without applicators, as well as output factors, were measured on two Varian TrueBeam machines. Measured data were compared against the Varian TrueBeam Representative Beam Data (VTBRBD). The selected data set was transferred into Eclipse for beam configuration. Dose calculation accuracy of eMC was evaluated for open fields, small cut-out fields, and oblique beams at different incident angles. The TrueBeam data were compared to the existing Clinac data and eMC model to evaluate the differences among linac types. Results: Our measured data indicated that electron beam PDDs from our TrueBeam machines are well matched to those from our Varian Clinac machines, but in-air profiles, cone factors, and open-field output factors are significantly different. The data from our two TrueBeam machines were well represented by the VTBRBD. Variations of TrueBeam PDDs and profiles were within the 2%/2 mm criteria for all energies, and the output factors for fields with and without applicators all agree within 2%. Obliquity factors for two clinically relevant applicator sizes (10×10 and 15×15 cm{sup 2}) and three oblique angles (15, 30, and 45 degrees) were measured at the nominal R100, R90, and R80 of each electron beam energy. Comparisons of eMC-calculated obliquity factors and cut-out factors versus measurements will be presented. Conclusion: The eMC algorithm in the Eclipse TPS can be configured using the VTBRBD. Significant differences between TrueBeam and Clinacs were found in in-air profiles and open-field output factors. The accuracy of the eMC algorithm was evaluated for a wide range of cut-out factors and oblique incidence.
Wang, L; Fourkal, E; Hayes, S; Jin, L; Ma, C
2014-06-01
Purpose: To study the dosimetric difference resulting from using the pencil beam algorithm instead of Monte Carlo (MC) methods for tumors adjacent to the skull. Methods: We retrospectively calculated the dosimetric differences between the ray-tracing (RT) and MC algorithms for brain tumors treated with CyberKnife located adjacent to the skull for 18 patients (total of 27 tumors). The median tumor size was 0.53-cc (range 0.018-cc to 26.2-cc). The absolute mean distance from the tumor to the skull was 2.11 mm (range -17.0 mm to 9.2 mm). The dosimetric variables examined include the mean, maximum, and minimum doses to the target, the target coverage (TC), and the conformality index. The MC calculation used the same MUs as the RT dose calculation without further normalization, and 1% statistical uncertainty. The differences were analyzed by tumor size and distance from the skull. Results: The TC was generally reduced with the MC calculation (24 out of 27 cases). The average difference in TC between RT and MC was 3.3% (range 0.0% to 23.5%). When the TC was deemed unacceptable, the plans were re-normalized in order to increase the TC to 99%. This resulted in a 6.9% maximum change in the prescription isodose line. The maximum changes in the mean, maximum, and minimum doses were 5.4%, 7.7%, and 8.4%, respectively, before re-normalization. When the TC was analyzed with regard to target size, it was found that the worst coverage occurred with the smaller targets (0.018-cc). When the TC was analyzed with regard to the distance to the skull, there was no correlation between proximity to the skull and TC between the RT and MC plans. Conclusions: For smaller targets (< 4.0-cc), MC should be used to re-evaluate the dose coverage after RT is used for the initial dose calculation in order to ensure target coverage.
TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma
Sisniega, A; Zbijewski, W; Stayman, J; Yorkston, J; Aygun, N; Koliatsos, V; Siewerdsen, J
2014-06-15
Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with a baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding a 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate a uniformity improvement of 18 HU and a contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to “oracle” constant fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution.
The method is undergoing implementation in a novel CBCT dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
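The forced-detection idea named among the variance-reduction techniques above can be illustrated with a toy detection problem: instead of waiting for the rare analog event (photon scatters toward the detector and survives the intervening attenuation), its expectation is scored deterministically at each interaction. All geometry factors below are invented:

```python
import math
import random

def analog_estimate(n, p_dir, transmission, rng):
    """Analog scoring: count the rare photons that actually scatter toward
    the detector and survive attenuation on the way there."""
    hits = sum(1 for _ in range(n)
               if rng.random() < p_dir and rng.random() < transmission)
    return hits / n

def forced_estimate(p_dir, transmission):
    """Forced detection: score the expected detector contribution directly;
    same mean as the analog estimator, but zero variance in this toy case."""
    return p_dir * transmission

rng = random.Random(4)
p_dir, trans = 0.01, math.exp(-2.0)   # invented geometry/attenuation factors
a = analog_estimate(200000, p_dir, trans, rng)
f = forced_estimate(p_dir, trans)
```

In a full MC code the forced score is weighted by the angular scattering distribution and the exponential of the optical depth to each detector pixel; photon splitting plays a complementary role by multiplying low-weight histories.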
Mehranian, A.; Ay, M. R.; Alam, N. Riyahi; Zaidi, H.
2010-02-15
Purpose: The accurate prediction of x-ray spectra under typical conditions encountered in clinical x-ray examination procedures and the assessment of factors influencing them have been a long-standing goal of the diagnostic radiology and medical physics communities. In this work, the influence of anode surface roughness on diagnostic x-ray spectra is evaluated using MCNP4C-based Monte Carlo simulations. Methods: An image-based modeling method was used to create realistic models of surface-cracked anodes. An in-house computer program was written to model the geometric pattern of cracks and irregularities from digital images of the focal track surface in order to define the modeled anodes in the MCNP input file. To incorporate average roughness and mean crack depth into the models, the surface of the anodes was characterized by scanning electron microscopy and surface profilometry. It was found that the average roughness (R{sub a}) in the most aged tube studied is about 50 {mu}m. The correctness of MCNP4C in simulating diagnostic x-ray spectra was thoroughly verified by calling its Gaussian energy broadening card and comparing the simulated spectra with experimentally measured ones. The assessment of anode roughness involved comparing spectra simulated for deteriorated anodes with those simulated for perfectly plain anodes, taken as the reference. From these comparisons, the variations in output intensity, half value layer (HVL), heel effect, and patient dose were studied. Results: Intensity losses of 4.5% and 16.8% were predicted for anodes aged by 5 and 50 {mu}m deep cracks, respectively (50 kVp, 6 deg. target angle, and 2.5 mm Al total filtration). The variations in HVL were not significant, as the spectra were not hardened by more than 2.5%; however, this variation tended to increase with roughness. By deploying several point detector tallies along the anode-cathode direction and averaging exposure over them, it was found that for a 6 deg. anode roughened by 50 {mu}m deep cracks, the reduction in exposure is 14.9% and 13.1% for 70 and 120 kVp tube voltages, respectively. For the evaluation of patient dose, entrance skin dose was calculated for typical chest x-ray examinations. It was shown that as anode roughness increases, patient entrance skin dose decreases, on average by about 15%. Conclusions: Anode surface roughness can have a non-negligible effect on the output spectra of aged x-ray tubes, and its impact should be carefully considered in diagnostic x-ray imaging modalities.
Muir, B. R.; Rogers, D. W. O.
2013-12-15
Purpose: To investigate recommendations for reference dosimetry of electron beams and gradient effects for the NE2571 chamber and to provide beam quality conversion factors using Monte Carlo simulations of the PTW Roos and NE2571 ion chambers. Methods: The EGSnrc code system is used to calculate the absorbed dose-to-water and the dose to the gas in fully modeled ion chambers as a function of depth in water. Electron beams are modeled using realistic accelerator simulations as well as beams modeled as collimated point sources from realistic electron beam spectra or monoenergetic electrons. Beam quality conversion factors are calculated with ratios of the doses to water and to the air in the ion chamber in electron beams and a cobalt-60 reference field. The overall ion chamber correction factor is studied using calculations of water-to-air stopping power ratios. Results: The use of an effective point of measurement shift of 1.55 mm from the front face of the PTW Roos chamber, which places the point of measurement inside the chamber cavity, minimizes the difference between R{sub 50}, the beam quality specifier, calculated from chamber simulations and that obtained using depth-dose calculations in water. A similar shift minimizes the variation of the overall ion chamber correction factor with depth to the practical range and reduces the root-mean-square deviation of a fit to calculated beam quality conversion factors at the reference depth as a function of R{sub 50}. Similarly, an upstream shift of 0.34 r{sub cav} allows a more accurate determination of R{sub 50} from NE2571 chamber calculations and reduces the variation of the overall ion chamber correction factor with depth. The determination of the gradient correction using a shift of 0.22 r{sub cav} optimizes the root-mean-square deviation of a fit to calculated beam quality conversion factors if all beams investigated are considered.
However, if only clinical beams are considered, a good fit to results for beam quality conversion factors is obtained without explicitly correcting for gradient effects. The inadequacy of R{sub 50} to uniquely specify beam quality for the accurate selection of k{sub Q} factors is discussed. Systematic uncertainties in beam quality conversion factors are analyzed for the NE2571 chamber and amount to between 0.4% and 1.2% depending on assumptions used. Conclusions: The calculated beam quality conversion factors for the PTW Roos chamber obtained here are in good agreement with literature data. These results characterize the use of an NE2571 ion chamber for reference dosimetry of electron beams even in low-energy beams.
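As defined in the abstract above, the beam quality conversion factor k{sub Q} is a ratio of water-to-air dose ratios at the user beam quality and at the cobalt-60 reference field. A minimal sketch of that arithmetic in Python; the numeric dose values are hypothetical placeholders, not tallies from the study:

```python
# Sketch of the k_Q arithmetic described above:
#   k_Q = (D_water / D_air)_Q / (D_water / D_air)_Co60,
# i.e., a ratio of water-to-air dose ratios at beam quality Q and at the
# cobalt-60 reference field. The numbers below are hypothetical
# placeholders, not tallies from the study.

def k_q(d_water_q, d_air_q, d_water_co60, d_air_co60):
    """Beam quality conversion factor from Monte Carlo dose tallies."""
    return (d_water_q / d_air_q) / (d_water_co60 / d_air_co60)

if __name__ == "__main__":
    # Hypothetical tallied doses in arbitrary (but consistent) units.
    print(round(k_q(0.95, 1.05, 1.00, 1.08), 4))  # prints 0.9771
```

The same four tallies (dose to water and dose to chamber gas, at quality Q and at cobalt-60) are all that enter the factor, which is why the fit quality as a function of R{sub 50} discussed above matters for selecting it clinically.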
MO-E-18C-02: Hands-On Monte Carlo Project Assignment as a Method to Teach Radiation Physics
Pater, P; Vallieres, M; Seuntjens, J
2014-06-15
Purpose: To present a hands-on project on Monte Carlo (MC) methods recently added to the curriculum and to discuss the students' appreciation of it. Methods: Since 2012, a 1.5 hour lecture dedicated to MC fundamentals follows the detailed presentation of photon and electron interactions. Students also program all sampling steps (interaction length and type, scattering angle, energy deposit) of a MC photon transport code. A handout structured in a step-by-step fashion guides students in conducting consistency checks. For extra points, students can code a fully working MC simulation that computes a dose distribution for 50 keV photons. A kerma approximation to dose deposition is assumed. A survey was conducted to which 10 of the 14 attending students responded. It compared MC knowledge before and after the project, questioned the usefulness of teaching radiation physics through MC, and surveyed possible project improvements. Results: According to the survey, 76% of students had no or only basic knowledge of MC methods before the class, and 65% estimate that they have a good to very good understanding of MC methods after attending it. 80% of students feel that the MC project helped them significantly to understand simulations of dose distributions. On average, students dedicated 12.5 hours to the project and appreciated the balance between hand-holding and open questions/implications. Conclusion: A lecture on MC methods with a hands-on MC programming project requiring about 14 hours has been part of the graduate curriculum since 2012. MC methods produce “gold standard” dose distributions and are slowly entering routine clinical work, so a fundamental understanding of MC methods should be a requirement for future students. Overall, the lecture and project helped students relate cross sections to dose deposition and presented the numerical sampling methods behind the simulation of these dose distributions. Research funding from the governments of Canada and Quebec.
PP acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290)
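The sampling steps the students implement above (interaction length, interaction type, energy deposit under the kerma approximation) can be sketched in a few lines. A minimal illustration in Python; the attenuation coefficients are made-up placeholders, not data from the course:

```python
import math
import random

# Minimal sketch of the sampling steps a student MC photon transport
# code implements: free path length and interaction type.
# The attenuation coefficients are made-up placeholders, not course data.

MU_PHOTO, MU_COMPTON = 0.02, 0.18     # hypothetical coefficients, 1/cm
MU_TOTAL = MU_PHOTO + MU_COMPTON      # 0.20 /cm -> mean free path 5 cm

def sample_free_path(rng):
    """Inverse-transform sampling of the exponential attenuation law."""
    return -math.log(1.0 - rng.random()) / MU_TOTAL

def sample_interaction(rng):
    """Pick the interaction type with probability proportional to mu_i."""
    return "photoelectric" if rng.random() < MU_PHOTO / MU_TOTAL else "compton"

def mean_free_path(n=100_000, seed=1):
    """Consistency check: the sampled mean should approach 1 / MU_TOTAL."""
    rng = random.Random(seed)
    return sum(sample_free_path(rng) for _ in range(n)) / n

if __name__ == "__main__":
    # Should approach 1 / MU_TOTAL = 5 cm as n grows.
    print(round(mean_free_path(), 2))
```

Comparing the sampled mean free path against 1/MU_TOTAL is exactly the kind of step-by-step consistency check the handout described above walks students through.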
Chrissanthopoulos, A.; Jovari, P.; Kaban, I.; Gruner, S.; Kavetskyy, T.; Borc, J.; Wang, W.; Ren, J.; Chen, G.; Yannopoulos, S.N.
2012-08-15
We report an investigation of the structure and vibrational modes of Ge-In-S-AgI bulk glasses using X-ray diffraction, EXAFS spectroscopy, Reverse Monte Carlo (RMC) modelling, Raman spectroscopy, and density functional theory (DFT) calculations. The combination of these techniques made it possible to elucidate the short- and medium-range structural order of these glasses. Data interpretation revealed that the AgI-free glass structure is composed of a network where GeS{sub 4/2} tetrahedra are linked with trigonal InS{sub 3/2} units; S{sub 3/2}Ge-GeS{sub 3/2} ethane-like species linked with InS{sub 4/2}{sup -} tetrahedra form sub-structures which are dispersed in the network structure. The addition of AgI into the Ge-In-S glassy matrix causes appreciable structural changes, enriching the indium species with iodine terminal atoms. The existence of trigonal InS{sub 2/2}I species and tetrahedral InS{sub 3/2}I{sup -} and InS{sub 2/2}I{sub 2}{sup -} units is compatible with the EXAFS and RMC analysis. Their vibrational properties (harmonic frequencies and Raman activities) calculated by DFT are in very good agreement with the experimental values determined by Raman spectroscopy. - Graphical abstract: Experiment (XRD, EXAFS, RMC, Raman scattering) and density functional calculations are employed to study the structure of AgI-doped Ge-In-S glasses. The role of mixed structural units as illustrated in the figure is elucidated. Highlights: • Doping Ge-In-S glasses with AgI causes significant changes in glass structure. • Experiment and DFT are combined to elucidate short- and medium-range structural order. • Indium atoms form both (InS{sub 4/2}){sup -} tetrahedra and InS{sub 3/2} planar triangles. • (InS{sub 4/2}){sup -} tetrahedra bond to (S{sub 3/2}Ge-GeS{sub 3/2}){sup 2+} ethane-like units forming neutral sub-structures. • Mixed chalcohalide (InS{sub 3/2}I){sup -} species offer vulnerable sites for the uptake of Ag{sup +}.
Spadea, Maria Francesca; Verburg, Joost Mathias; Seco, Joao; Baroni, Guido
2014-01-15
Purpose: The aim of the study was to evaluate the dosimetric impact of low-Z and high-Z metallic implants on IMRT plans. Methods: Computed tomography (CT) scans of three patients were analyzed to study effects due to the presence of titanium (low-Z), platinum and gold (high-Z) inserts. To eliminate artifacts in CT images, a sinogram-based metal artifact reduction algorithm was applied. IMRT dose calculations were performed on both the uncorrected and corrected images using a commercial planning system (convolution/superposition algorithm) and an in-house Monte Carlo platform. Dose differences between uncorrected and corrected datasets were computed and analyzed using the gamma index pass rate (P{sub {gamma}<1}), setting 2 mm and 2% as distance to agreement and dose difference criteria, respectively. Beam specific depth dose profiles across the metal were also examined. Results: Dose discrepancies between corrected and uncorrected datasets were not significant for the low-Z material. High-Z materials caused underdosage of 20%-25% in the region surrounding the metal and overdosage of 10%-15% downstream of the hardware. The gamma index test yielded P{sub {gamma}<1} > 99% for all low-Z cases, while for high-Z cases it returned 91% < P{sub {gamma}<1} < 99%. Analysis of the depth dose curve of a single beam for low-Z cases revealed that, although the dose attenuation is altered inside the metal, it does not differ downstream of the insert. However, for high-Z metal implants the dose is increased by up to 10%-12% around the insert. In addition, the Monte Carlo method was more sensitive to the presence of metal inserts than the superposition/convolution algorithm. Conclusions: The reduction of metal artifacts in CT images is dosimetrically relevant for high-Z implants. In this case, dose distributions should be calculated using Monte Carlo algorithms, given their superior accuracy in dose modeling in and around the metal.
In addition, the knowledge of the composition of metal inserts improves the accuracy of the Monte Carlo dose calculation significantly.
Ronan, M.T.
2000-03-03
In full Monte Carlo simulation models of future Linear Collider detectors, reconstructed charged tracks and calorimeter clusters are used to perform a complete reconstruction of exclusive W{sup +}W{sup {minus}} production. The event reconstruction and analysis Java software is being developed for detailed physics studies that take realistic detector resolution and background modeling into account. Studies of track-cluster association and jet energy flow for two detector models are discussed. At this stage of the analysis, reference W-boson mass distributions for ideal detector conditions are presented.
Fang, Yuan; Karim, Karim S.; Badano, Aldo
2014-01-15
Purpose: The authors describe the modifications to a previously developed Monte Carlo model of a semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package ARTEMIS, developed by Fang et al. [Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se, Med. Phys. 39(1), 308-319 (2012)]. The burst model generates a cloud of electron-hole pairs, based on electron velocity, energy deposition, and material parameters, distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). A simple first-hit (FH) and a more detailed but computationally expensive nearest-neighbor (NN) recombination algorithm are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with the Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for the generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/{mu}m, while the recombination fraction decreased for the SSA compared to the SUV model at 30 V/{mu}m. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated against the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature.
For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation of many electron-hole pairs. The SSA model is more sensitive to the effect of the electric field than the SUV model, and the NN and FH recombination algorithms did not significantly affect simulation results.
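The two recombination strategies compared above differ only in how an electron is paired with a hole. A toy sketch of that difference (not the ARTEMIS implementation; the positions and capture radius are hypothetical):

```python
import math

# Toy contrast of the two pairing strategies described above:
# first-hit (FH) takes the first hole found within a capture radius,
# nearest-neighbor (NN) searches all holes for the closest one.
# Positions and the capture radius are hypothetical, not ARTEMIS data.

def first_hit(electron, holes, capture_radius):
    """Return the first hole within the capture radius (order-dependent, cheap)."""
    for h in holes:
        if math.dist(electron, h) <= capture_radius:
            return h
    return None

def nearest_neighbor(electron, holes, capture_radius):
    """Return the globally closest hole if captured (O(N) per electron)."""
    h = min(holes, key=lambda h: math.dist(electron, h))
    return h if math.dist(electron, h) <= capture_radius else None
```

For an electron at the origin with holes at distances 3 and 1 along z (listed in that order), FH returns the farther hole because it appears first in the list, while NN always returns the closer one; this order independence is why NN is more detailed but computationally more expensive, as the abstract notes.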
Mayorga, P. A.; Departamento de Física Atómica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada; Brualla, L.; Sauerwein, W.; Lallena, A. M.
2014-01-15
Purpose: Retinoblastoma is the most common intraocular malignancy of early childhood. Patients treated with external beam radiotherapy respond very well to the treatment. However, owing to the genotype of children suffering from hereditary retinoblastoma, the risk of secondary radio-induced malignancies is high. The University Hospital of Essen has successfully treated these patients on a daily basis for nearly 30 years using a dedicated D-shaped collimator. This collimator, which delivers a highly conformal small radiation field, gives very good results in the control of the primary tumor as well as in preserving visual function, while avoiding the devastating side effect of deformation of the midface bones. The purpose of the present paper is to propose a modified version of the D-shaped collimator that reduces the irradiation field even further, with the aim of also reducing the risk of radio-induced secondary malignancies. Concurrently, the new dedicated D-shaped collimator must be easier to build while producing dose distributions that differ from those of the collimator currently in use only in field size. The aim of the former requirement is to facilitate the adoption of the authors' irradiation technique both at their own and at other hospitals. The fulfillment of the latter allows the authors to continue drawing on the clinical experience gained over more than 30 years. Methods: The Monte Carlo code PENELOPE was used to study the effect that the different structural elements of the dedicated D-shaped collimator have on the absorbed dose distribution. To perform this study, radiation transport through a Varian Clinac 2100 C/D operating at 6 MV was simulated in order to tally phase-space files, which were then used as radiation sources to simulate the considered collimators and the resulting dose distributions.
With the knowledge gained in that study, a new, simpler D-shaped collimator is proposed. Results: The proposed collimator delivers a dose distribution that is 2.4 cm wide along the inferior-superior direction of the eyeball. This width is 0.3 cm narrower than that of the dose distribution obtained with the collimator currently in clinical use. The other relevant characteristics of the dose distribution obtained with the new collimator, namely, depth doses at clinically relevant positions, penumbra widths, and the shape of the lateral profiles, are statistically compatible with the results obtained for the collimator currently in use. Conclusions: The smaller field size delivered by the proposed collimator still fully covers the planning target volume with at least 95% of the maximum dose at a depth of 2 cm and provides a safety margin of 0.2 cm, thus ensuring an adequate treatment while reducing the irradiated volume.
Ganesh, Panchapakesan; Kim, Jeongnim; Park, Changwon; Yoon, Mina; Reboredo, Fernando A; Kent, Paul R
2014-01-01
Highly accurate diffusion quantum Monte Carlo (QMC) studies of the adsorption and diffusion of atomic lithium in AA-stacked graphite are compared with van der Waals-including density functional theory (DFT) calculations. Predicted QMC lattice constants for pure AA graphite agree with experiment. Pure AA-stacked graphite is shown to challenge many van der Waals methods even when they are accurate for conventional AB graphite. The highest overall DFT accuracy, considering pure AA-stacked graphite as well as lithium binding and diffusion, is obtained by the self-consistent van der Waals functional vdW-DF2, although errors in binding energies remain. Empirical approaches based on point charges such as DFT-D are inaccurate unless the local charge transfer is assessed. The results demonstrate that the lithium-carbon system requires a simultaneous, highly accurate description of both charge transfer and van der Waals interactions, favoring self-consistent approaches.
Thfoin, I.; Reverdin, C.; Duval, A.; Leboeuf, X.; Lecherbourg, L.; Ross, B.; Hulin, S.; Batani, D.; Santos, J. J.; Vaisseau, X.; Fourment, C.; Giuffrida, L.; Szabo, C. I.; Bastiani-Ceccotti, S.; Brambrink, E.; Koenig, M.; Nakatsutsumi, M.; Morace, A.
2014-11-15
Transmission crystal spectrometers (TCS) are used on many laser facilities to record hard X-ray spectra. During experiments, the signal recorded on imaging plates is often degraded by background noise. Monte Carlo simulations performed with the code GEANT4 show that this background noise is mainly generated by the scattering of MeV electrons and very hard X-rays. An experiment carried out at LULI2000 confirmed that the use of magnets in front of the diagnostic, which bend the electron trajectories, significantly reduces this background. The new spectrometer SPECTIX (Spectromètre PETAL Cristal en TransmIssion X), built for the LMJ/PETAL facility, will include this optimized shielding.
Ryabtsev, I. I.; Tretyakov, D. B.; Beterov, I. I.; Entin, V. M.; Yakshina, E. A.
2010-11-15
Results of numerical Monte Carlo simulations for the Stark-tuned Förster resonance and dipole blockade between two to five cold rubidium Rydberg atoms in various spatial configurations are presented. The effects of the atoms' spatial uncertainties on the resonance amplitude and spectra are investigated. The feasibility of observing coherent Rabi-like population oscillations at a Förster resonance between two cold Rydberg atoms is analyzed. Spectra and the fidelity of the Rydberg dipole blockade are calculated for various experimental conditions, including nonzero detuning from the Förster resonance and finite laser linewidth. The results are discussed in the context of quantum-information processing with Rydberg atoms.
Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.
2000-03-01
The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.
Li, Wenfang; Du, Jinjin; Wen, Ruijuan; Yang, Pengfei; Li, Gang; Zhang, Tiancai; Liang, Junjun
2014-03-17
We investigate the transmission of single-atom transits based on a strongly coupled cavity quantum electrodynamics system. By superposing the transit transmissions of a considerable number of atoms, we obtain the absorption spectra of the cavity induced by single atoms and determine the temperature of the cold atoms. The number of atoms passing through the microcavity for each release is also counted, and this number varies exponentially with the atom temperature. Monte Carlo simulations agree closely with the experimental results, and the initial temperature of the cold atoms is determined. Compared with the conventional time-of-flight (TOF) method, this approach avoids some uncertainties of the standard TOF and sheds new light on determining the temperature of cold atoms by counting atoms individually in a confined space.
Baba, Justin S; Koju, Vijay; John, Dwayne O
2016-01-01
The modulation of the state of polarization of photons due to scatter generates an associated geometric phase that is being investigated as a means of decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that the polarimetrically detected Berry phase correlates with the mean photon penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and the population distributions of image-forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on a polarized Monte Carlo code implementing Berry phase tracking, indicate that sample absorption plays a significant role in the mean depth attained by the image-forming backscattered detected photons.
Qiang, J.
2009-10-17
In this paper, we report on a study of ion back bombardment in a high average current radio-frequency (RF) photo-gun using a particle-in-cell/Monte Carlo simulation method. Using this method, we systematically studied the effects of gas pressure, RF frequency, RF initial phase, electric field profile, magnetic field, laser repetition rate, and different ion species on the ion particle line density distribution, kinetic energy spectrum, and ion power line density distribution of back bombardment onto the photocathode. These simulation results suggest that the effects of ion back bombardment increase linearly with the background gas pressure and laser repetition rate. The RF frequency strongly affects the ion motion inside the gun, so that the ion power deposition on the photocathode in an RF gun can be several orders of magnitude lower than that in a DC gun. Ion back bombardment can be minimized by appropriately choosing the electric field profile and the initial phase.
Astrakharchik, G. E.; Boronat, J.; Casulleras, J.; Kurbakov, I. L.; Lozovik, Yu. E.
2009-05-15
The equation of state of a weakly interacting two-dimensional Bose gas is studied at zero temperature by means of quantum Monte Carlo methods. Going down to densities as low as na{sup 2} {proportional_to} 10{sup -100} permits us to obtain agreement at the beyond-mean-field level between the predictions of perturbative methods and direct many-body numerical simulation, thus providing an answer to the fundamental question of the equation of state of a two-dimensional dilute Bose gas in the universal regime (i.e., entirely described by the gas parameter na{sup 2}). We also show that a measurement of the frequency of a breathing collective oscillation in a trap at very low densities can be used to test the universal equation of state of a two-dimensional Bose gas.
Hu, Z. M.; Xie, X. F.; Chen, Z. J.; Peng, X. Y.; Du, T. F.; Cui, Z. Q.; Ge, L. J.; Li, T.; Yuan, X.; Zhang, X.; Li, X. Q.; Zhang, G. H.; Chen, J. X.; Fan, T. S.; Hu, L. Q.; Zhong, G. Q.; Lin, S. Y.; Wan, B. N.; Gorini, G.
2014-11-15
To assess the neutron energy spectra and the neutron dose at different positions around the Experimental Advanced Superconducting Tokamak (EAST) device, a Bonner Sphere Spectrometer (BSS) was developed at Peking University, with nine polyethylene spheres in total and an SP9 {sup 3}He counter. The response functions of the BSS were calculated by the Monte Carlo codes MCNP and GEANT4 with dedicated models, and good agreement was found between the two codes. A feasibility study was carried out with a simulated neutron energy spectrum around EAST, and the simulated “experimental” result for each sphere was obtained by calculating the response with MCNP, using the simulated neutron energy spectrum as the input spectrum. By deconvolution of the “experimental” measurements, the neutron energy spectrum was retrieved and compared with the preset one. Good consistency was found, which gives confidence for the application of the BSS system for dose and spectrum measurements around a fusion device.
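The deconvolution step described above, recovering a spectrum from sphere readings and response functions, can be illustrated with a multiplicative iterative update in the spirit of SAND-II/MLEM-type unfolding codes. A toy sketch in Python; the 3×3 response matrix and spectrum are invented for illustration and are not the actual BSS response functions:

```python
# Toy spectrum unfolding: given readings m_i = sum_j R_ij * phi_j,
# iterate  phi_j <- phi_j * (sum_i R_ij * m_i / est_i) / (sum_i R_ij),
# a multiplicative (MLEM/SAND-II-style) update that preserves positivity.
# The response matrix and spectrum below are hypothetical placeholders.

def unfold(resp, readings, n_iter=2000):
    n_bins = len(resp[0])
    phi = [1.0] * n_bins                        # flat starting guess
    for _ in range(n_iter):
        # Forward-fold the current guess through the response matrix.
        est = [sum(row[j] * phi[j] for j in range(n_bins)) for row in resp]
        for j in range(n_bins):
            num = sum(resp[i][j] * readings[i] / est[i] for i in range(len(resp)))
            den = sum(resp[i][j] for i in range(len(resp)))
            phi[j] *= num / den
    return phi

if __name__ == "__main__":
    resp = [[0.9, 0.3, 0.1],    # row = one sphere's response per energy bin
            [0.4, 0.8, 0.3],
            [0.1, 0.4, 0.9]]
    true_phi = [2.0, 1.0, 0.5]
    readings = [sum(row[j] * true_phi[j] for j in range(3)) for row in resp]
    # With noiseless readings the iteration should approach true_phi.
    print([round(x, 2) for x in unfold(resp, readings)])
```

This mirrors the feasibility test in the abstract: fold a preset spectrum through the responses to get synthetic readings, then check that unfolding retrieves the preset spectrum.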
Hardin, M; Elson, H; Lamba, M; Wolf, E; Warnick, R
2014-06-01
Purpose: To quantify the clinically observed dose enhancement adjacent to cranial titanium fixation plates during post-operative radiotherapy. Methods: Irradiation of a titanium burr hole cover was simulated using the Monte Carlo code MCNPX for a 6 MV photon spectrum to investigate backscatter dose enhancement due to increased production of secondary electrons within the titanium plate. The simulated plate was placed 3 mm deep in a water phantom, and dose deposition was tallied in 0.2 mm thick cells adjacent to the entrance and exit sides of the plate. These results were compared to a simulation excluding the titanium plate to calculate the relative dose enhancement on the entrance and exit sides of the plate. To verify the simulated results, two titanium burr hole covers (Synthes, Inc. and Biomet, Inc.) were irradiated with 6 MV photons in a solid water phantom containing GafChromic MD-55 film. The phantom was irradiated on a Varian 21EX linear accelerator at multiple gantry angles (0–180 degrees) to analyze the angular dependence of the backscattered radiation. Relative dose enhancement was quantified from the film using computer software. Results: Monte Carlo simulations indicate a relative difference of 26.4% and 7.1% on the entrance and exit sides of the plate, respectively. Film dosimetry results using a similar geometry indicate a relative difference of 13% and -10% on the entrance and exit sides of the plate, respectively. Relative dose enhancement on the entrance side of the plate decreased with increasing gantry angle from 0 to 180 degrees. Conclusion: Film and simulation results demonstrate an increase in dose to structures immediately adjacent to cranial titanium fixation plates. Increased beam obliquity was shown to alleviate the dose enhancement to some extent. These results are consistent with clinically observed effects.
Souris, K; Lee, J; Sterpin, E
2014-06-15
Purpose: Recent studies have demonstrated the capability of graphics processing units (GPUs) to compute dose distributions using Monte Carlo (MC) methods within clinical time constraints. However, GPUs have a rigid vectorial architecture that favors the implementation of simplified particle transport algorithms, adapted to specific tasks. Our new, fast, and multipurpose MC code, named MCsquare, runs on Intel Xeon Phi coprocessors. This technology offers 60 independent cores, and therefore more flexibility to implement fast and yet generic MC functionalities, such as prompt gamma simulations. Methods: MCsquare implements several models and hence allows users to make their own tradeoff between speed and accuracy. A 200 MeV proton beam is simulated in a heterogeneous phantom using Geant4 and two configurations of MCsquare. The first one is the most conservative and accurate. The method of fictitious interactions handles the interfaces and secondary charged particles emitted in nuclear interactions are fully simulated. The second, faster configuration simplifies interface crossings and simulates only secondary protons after nuclear interaction events. Integral depth-dose and transversal profiles are compared to those of Geant4. Moreover, the production profile of prompt gammas is compared to PENH results. Results: Integral depth dose and transversal profiles computed by MCsquare and Geant4 are within 3%. The production of secondaries from nuclear interactions is slightly inaccurate at interfaces for the fastest configuration of MCsquare but this is unlikely to have any clinical impact. The computation time varies between 90 seconds for the most conservative settings to merely 59 seconds in the fastest configuration. Finally prompt gamma profiles are also in very good agreement with PENH results. 
Conclusion: Our new, fast, and multipurpose Monte Carlo code simulates prompt gammas and calculates dose distributions in less than a minute, which complies with clinical time constraints. It has been successfully validated against Geant4. This work has been financially supported by InVivoIGT, a public/private partnership between UCL and IBA.
Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin
2014-08-21
A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of the distribution is required. This numerical table can be generated a priori from the distribution function. The method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. These test cases show that reasonable approximations can be made, especially for the highly branched molecules, to drastically reduce the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables can still be well accounted for, as evidenced by the nearly perfect acceptance rates achieved. For all cases, the bending angles were shown to be sampled correctly by this method, with an acceptance rate of at least 96% for 2,2-dimethylpropane and more than 99% for propane. Since only one trial needs to be generated for each bending angle (instead of the thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. Profiling of our Monte Carlo simulation code shows that trial generation, which used to be the most time-consuming process, is no longer the dominant component of the simulation time.
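The core idea above, drawing bending-angle trials from a tabulated approximation of the target density instead of a uniform distribution, amounts to inverse-transform sampling from a precomputed table. A minimal sketch in Python; the Gaussian-like density centered at 114 degrees is purely illustrative, not the actual united-atom bending potential:

```python
import bisect
import math
import random

# Build a cumulative table of a target bending-angle density once, then
# draw each trial by inverting the tabulated CDF: one table lookup per
# trial instead of many rejected uniform trials.
# The density used here is an illustrative Gaussian, not a force field.

def build_cdf_table(pdf, lo, hi, n=1000):
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    cdf, total = [0.0], 0.0
    for x in xs[1:]:
        total += pdf(x)
        cdf.append(total)
    return xs, [c / total for c in cdf]   # normalized cumulative table

def sample(xs, cdf, rng):
    u = rng.random()
    i = bisect.bisect_left(cdf, u)        # locate the table bin
    if i == 0:
        return xs[0]
    t = (u - cdf[i - 1]) / (cdf[i] - cdf[i - 1])
    return xs[i - 1] + t * (xs[i] - xs[i - 1])   # interpolate in the bin

def demo_mean(n=20_000, seed=7):
    pdf = lambda x: math.exp(-0.5 * ((x - 114.0) / 5.0) ** 2)
    xs, cdf = build_cdf_table(pdf, 90.0, 140.0)
    rng = random.Random(seed)
    return sum(sample(xs, cdf, rng) for _ in range(n)) / n

if __name__ == "__main__":
    print(round(demo_mean(), 1))   # sample mean should sit near 114
```

Because every draw comes from (an approximation of) the target density, essentially every trial is acceptable, which is the mechanism behind the near-perfect acceptance rates reported above.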
Mohammadyari, P; Faghihi, R; Shirazi, M Mosleh; Lotfi, M; Meigooni, A
2014-06-01
Purpose: the accuboost is the most modern method of breast brachytherapy that is a boost method in compressed tissue by a mammography unit. the dose distribution in uncompressed tissue, as compressed tissue is important that should be characterized. Methods: In this study, the mechanical behavior of breast in mammography loading, the displacement of breast tissue and the dose distribution in compressed and uncompressed tissue, are investigated. Dosimetry was performed by two dosimeter methods of Monte Carlo simulations using MCNP5 code and thermoluminescence dosimeters. For Monte Carlo simulations, the dose values in cubical lattice were calculated using tally F6. The displacement of the breast elements was simulated by Finite element model and calculated using ABAQUS software, from which the 3D dose distribution in uncompressed tissue was determined. The geometry of the model is constructed from MR images of 6 volunteers. Experimental dosimetery was performed by placing the thermoluminescence dosimeters into the polyvinyl alcohol breast equivalent phantom and on the proximal edge of compression plates to the chest. Results: The results indicate that using the cone applicators would deliver more than 95% of dose to the depth of 5 to 17mm, while round applicator will increase the skin dose. Nodal displacement, in presence of gravity and 60N forces, i.e. in mammography compression, was determined with 43% contraction in the loading direction and 37% expansion in orthogonal orientation. Finally, in comparison of the acquired from thermoluminescence dosimeters with MCNP5, they are consistent with each other in breast phantom and in chest's skin with average different percentage of 13.7±5.7 and 7.7±2.3, respectively. Conclusion: The major advantage of this kind of dosimetry is the ability of 3D dose calculation by FE Modeling. 
Finally, polyvinyl alcohol is a reliable breast-tissue-equivalent material for a dosimetric phantom, providing the ability to perform TLD dosimetry for validation.
Sheu, R; Tseng, T; Powers, A; Lo, Y
2014-06-01
Purpose: To provide commissioning and acceptance test data for the Varian Eclipse electron Monte Carlo model (eMC v.11) on a TrueBeam linac. We also investigated the uncertainties in beam model parameters and dose calculation results for different geometric configurations. Methods: For beam commissioning, a PTW CC13 thimble chamber and an IBA Blue Phantom2 were used to collect PDDs and dose profiles in air. Cone factors were measured with a parallel-plate chamber (PTW N23342) in solid water. GafChromic EBT3 films were used for dose calculation verification against parallel-plate chamber results in the following test geometries: oblique incidence, extended distance, small cutouts, elongated cutouts, irregular surface, and heterogeneous layers. Results: Four electron energies (6e, 9e, 12e, and 15e) and five cones (6×6, 10×10, 15×15, 20×20, and 25×25) with standard cutouts were calculated for different grid sizes (1, 1.5, 2, and 2.5 mm) and compared with chamber measurements. The results showed that calculations performed with a coarse grid size underestimated the absolute dose. The underestimation decreased as energy increased. For 6e, the underestimation (max 3.3%) was greater than the statistical uncertainty level (3%) and was systematically observed for all cone sizes. Using a 1 mm grid size, all calculation results agreed with measurements within 5% for all test configurations. The calculations took 21 s and 46 s for 6e and 15e (2.5 mm grid size), respectively, distributed over 4 calculation servers. Conclusion: In general, commissioning the eMC dose calculation model on TrueBeam is straightforward, and the dose calculation is in good agreement with measurements for all test cases. Monte Carlo dose calculation provides more accurate results, which improves treatment planning quality. However, the normally acceptable grid size (2.5 mm) causes systematic underestimation of the absolute dose for lower energies, such as 6e. Users need to be cautious in this situation.
Arabi, Hosein; Asl, Ali Reza Kamali; Ay, Mohammad Reza; Zaidi, Habib
2011-03-15
Purpose: The variable resolution x-ray (VRX) CT scanner provides a substantial improvement in spatial resolution by matching the scanner's field of view (FOV) to the size of the object being imaged. Intercell x-ray cross-talk is one of the most important factors limiting the spatial resolution of the VRX detector. In this work, a new cell arrangement in the VRX detector is suggested to decrease the intercell x-ray cross-talk; the idea is to orient the detector cells toward the opening end of the detector. Methods: Monte Carlo simulations were used for performance assessment of the oriented-cell detector design. Previously published design parameters and simulated x-ray cross-talk results for the VRX detector were used to validate the model built with the GATE Monte Carlo package. In the first step, the intercell x-ray cross-talk of the actual VRX detector model was calculated as a function of the FOV. The results indicated an optimum cell orientation angle of 28 deg. to minimize the x-ray cross-talk in the VRX detector. Thereafter, the intercell x-ray cross-talk in the oriented-cell detector was modeled and quantified. Results: The intercell x-ray cross-talk in the actual detector model was considerably high, reaching up to 12% at FOVs from 24 to 38 cm. The x-ray cross-talk in the oriented-cell detector was less than 5% for all possible FOVs except 40 cm (the maximum FOV). The oriented-cell detector thus provides a considerable decrease in intercell x-ray cross-talk for the VRX detector, leading to a significant improvement in spatial resolution and a reduction in spatial resolution nonuniformity across the detector length. Conclusions: The proposed oriented-cell detector is the first dedicated detector design for VRX CT scanners. Application of this concept to multislice and flat-panel VRX detectors would also result in higher spatial resolution.
Jung, Jae Won; Kim, Jong Oh; Yeo, Inhwan Jason; Cho, Young-Bin; Kim, Sun Mo; DiBiase, Steven
2012-12-15
Purpose: Fast and accurate transit portal dosimetry was investigated by developing a density-scaled layer model of an electronic portal imaging device (EPID) and applying it in a clinical environment. Methods: The model was developed for fast Monte Carlo dose calculation. It was first validated through comparison with doses measured on the EPID using open beams of varying field sizes under a 20-cm-thick flat phantom. After this basic validation, the model was further tested by applying it to transit dosimetry and dose reconstruction employing our predetermined dose-response-based algorithm developed earlier. The application employed clinical intensity-modulated beams irradiated on a Rando phantom. The clinical beams were obtained through planning on pelvic regions of the Rando phantom simulating prostate and large-pelvis intensity-modulated radiation therapy. To enhance agreement between calculated and measured dose near penumbral regions, convolution conversion of the acquired EPID images was alternatively used. In addition, thickness-dependent image-to-dose calibration factors were generated through image measurements and dose calculations in the EPID through flat phantoms of various thicknesses. The factors were used to convert images acquired by the EPID into dose. Results: For open beam measurements, the model agreed with measurements in dose difference to better than 2% across open fields. For tests with the Rando phantom, the transit dosimetry measurements were compared with forwardly calculated doses in the EPID, showing gamma pass rates between 90.8% and 98.8% given 4.5 mm distance-to-agreement (DTA) and 3% dose difference (DD) for all individual beams tried in this study. The reconstructed dose in the phantom was compared with forwardly calculated doses, showing pass rates between 93.3% and 100% in isocentric planes perpendicular to the beam direction given 3 mm DTA and 3% DD for all beams.
On isocentric axial planes, the pass rates varied between 95.8% and 99.9% for all individual beams, and they were 98.2% and 99.9% for the composite beams of the small and large pelvis cases, respectively. Three-dimensional gamma pass rates were 99.0% and 96.4% for the small and large pelvis cases, respectively. Conclusions: The layer model of the EPID built for Monte Carlo calculations offered fast (less than 1 min) and accurate calculation for transit dosimetry and dose reconstruction.
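The thickness-dependent image-to-dose conversion described above can be sketched as a simple lookup. The calibration table below is purely hypothetical; a real table would come from the image measurements and dose calculations through flat phantoms of known thickness described in the abstract.

```python
import numpy as np

# Hypothetical sketch of thickness-dependent image-to-dose conversion for
# transit EPID dosimetry.  The calibration table (phantom thickness ->
# factor) is made up for illustration only.
calib_thickness_cm = np.array([0.0, 10.0, 20.0, 30.0])
calib_factor = np.array([1.00, 1.08, 1.17, 1.27])   # dose per image unit (assumed)

def image_to_dose(pixel_values, thickness_cm):
    """Convert acquired EPID pixel values to dose with a factor interpolated
    at the radiological thickness crossed by the beam."""
    factor = np.interp(thickness_cm, calib_thickness_cm, calib_factor)
    return pixel_values * factor

dose = image_to_dose(np.array([100.0, 200.0]), 15.0)   # 15 cm of phantom
```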
Pastore, S.; Wiringa, Robert B.; Pieper, Steven C.; Schiavilla, Rocco
2014-08-01
We report quantum Monte Carlo calculations of electromagnetic transitions in $^8$Be. The realistic Argonne $v_{18}$ two-nucleon and Illinois-7 three-nucleon potentials are used to generate the ground state and nine excited states, with energies that are in excellent agreement with experiment. A dozen $M1$ and eight $E2$ transition matrix elements between these states are then evaluated. The $E2$ matrix elements are computed only in impulse approximation, with those transitions from broad resonant states requiring special treatment. The $M1$ matrix elements include two-body meson-exchange currents derived from chiral effective field theory, which typically contribute 20-30% of the total expectation value. Many of the transitions are between isospin-mixed states; the calculations are performed for isospin-pure states and then combined with the empirical mixing coefficients to compare to experiment. In general, we find that transitions between states that have the same dominant spatial symmetry are in decent agreement with experiment, but transitions between different spatial symmetries are often significantly underpredicted.
Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad
2014-08-01
Accurate determination of the thermodynamic properties of petroleum reservoir fluids is of great interest for many applications, especially in petroleum and chemical engineering. Molecular simulation has many appealing features, in particular requiring fewer tuned parameters while offering better predictive capability; however, it is well known that molecular simulation is very CPU-expensive compared to equation-of-state approaches. We recently introduced an efficient, thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation by more than a million times, making the regenerated molecular simulation almost as fast as equation-of-state approaches. In this paper, the technique is first briefly reviewed and then numerically investigated for its capability to predict ensemble averages of primary quantities at thermodynamic conditions neighboring the originally simulated MCMCs. Moreover, the extrapolation technique is extended to predict second-derivative properties (e.g., heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. The system's potential energy, pressure, isochoric heat capacity, and isothermal compressibility were extrapolated along isochores, isotherms, and paths of changing temperature and density from the original simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models is proposed for methane, nitrogen, and carbon monoxide.
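The reweighting step at the core of such a technique can be sketched for the canonical ensemble: each stored configuration with potential energy U_i is assigned a weight proportional to exp(-(beta1 - beta0)·U_i), so averages at a neighboring temperature are obtained without generating a new chain. The Gaussian "energies" below are a stand-in for stored MCMC samples, used only to make the sketch self-checking.

```python
import numpy as np

# Sketch of canonical-ensemble reweighting: samples stored at inverse
# temperature beta0 are reused at a neighboring beta1 via the weights
# w_i ~ exp(-(beta1 - beta0) * U_i).  Gaussian "energies" stand in for
# stored MCMC samples here (illustration only).
rng = np.random.default_rng(0)
beta0, beta1 = 1.0, 1.05                              # reduced inverse temperatures
U = rng.normal(loc=-500.0, scale=10.0, size=50_000)   # stand-in energy samples

log_w = -(beta1 - beta0) * U
log_w -= log_w.max()                                  # stabilize the exponentials
w = np.exp(log_w)
U_reweighted = np.sum(w * U) / np.sum(w)
# for Gaussian energies the exact shift of the mean is -(beta1-beta0)*sigma^2
```

The reliability of such an extrapolation degrades as beta1 moves away from beta0, because the weights become dominated by a few samples; that neighborhood of validity is exactly what the paper investigates.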
Chen Zhaoquan [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian, Liaoning 116024 (China); State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Ye Qiubo [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); Communications Research Centre, 3701 Carling Ave., Ottawa K2H 8S2 (Canada); Xia Guangqing [State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian, Liaoning 116024 (China); Hong Lingli; Hu Yelin; Zheng Xiaoliang; Li Ping [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); Zhou Qiyan [College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, Anhui 232001 (China); State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Hu Xiwei; Liu Minghai [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China)
2013-03-15
Although surface-wave plasma (SWP) sources have many industrial applications, the ionization process in SWP discharges is not yet well understood. The resonant excitation of surface plasmon polaritons (SPPs) has recently been proposed to produce SWP efficiently, and this work presents a numerical study of the mechanism. Specifically, SWP resonantly excited by SPPs at low pressure (0.25 Torr) are modeled using a particle-in-cell method, two-dimensional in configuration space and three-dimensional in velocity space (2D3V), with Monte Carlo collisions. Simulation results are sampled at different time steps, giving detailed information about the distributions of electrons and electromagnetic fields. Results show that mode conversion between the surface waves of SPPs and electron plasma waves (EPWs) occurs efficiently where the plasma density exceeds 3.57 × 10{sup 17} m{sup -3}. Due to the locally enhanced electric field of the SPPs, the mode conversion between the surface waves of SPPs and EPWs is very strong, which plays a significant role in efficiently heating the SWP to the overdense state.
Fan, Yu; Zou, Ying; Sun, Jizhong; Wang, Dezhen [Key Laboratory of Materials Modification by Laser, Ion and Electron Beams (Ministry of Education), School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China)]; Stirner, Thomas [Department of Electronic Engineering, University of Applied Sciences Deggendorf, Edlmairstr. 6-8, D-94469 Deggendorf (Germany)]
2013-10-15
The influence of an applied magnetic field on plasma-related devices has a wide range of applications. Its effects on a plasma have been studied for years; however, many issues are still not well understood. This paper reports a detailed kinetic study, using the two-dimensional-in-space and three-dimensional-in-velocity (2D3V) particle-in-cell plus Monte Carlo collision method, of the role of the E×B drift in a capacitive argon discharge similar to the experiment of You et al. [Thin Solid Films 519, 6981 (2011)]. The external magnetic field parameters chosen for the present study are in a range common to many applications. Two basic configurations of the magnetic field are analyzed in detail: the magnetic field directed parallel to the electrode, with or without a gradient. Through an extensive parametric study, we detail the influence of the drift on the collective behavior of the plasma over a two-dimensional domain, which cannot be captured by a 1D3V model. By analyzing the simulation results, the collisionless heating mechanism that occurs is explained.
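A single-particle picture helps fix ideas: in crossed E and B fields a charge drifts at v = E×B/|B|², independent of its sign. The sketch below integrates one electron with the standard Boris pusher under illustrative field values; these are not the paper's parameters, and a real PIC-MCC code evolves the fields self-consistently and adds Monte Carlo collisions.

```python
import numpy as np

# Single-particle sketch of the E×B drift with the standard Boris pusher.
# Field values are illustrative only, not those of the paper.
q, m = -1.602e-19, 9.109e-31          # electron charge (C) and mass (kg)
E = np.array([100.0, 0.0, 0.0])       # V/m
B = np.array([0.0, 0.0, 0.01])        # T
dt, n_steps = 1e-12, 100_000

v = np.zeros(3)
x = np.zeros(3)
half = q * dt / (2.0 * m)
for _ in range(n_steps):
    t = half * B                       # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_minus = v + half * E             # half electric kick
    v_prime = v_minus + np.cross(v_minus, t)   # magnetic rotation
    v = v_minus + np.cross(v_prime, s) + half * E
    x = x + v * dt

v_drift = x / (n_steps * dt)           # time-averaged velocity
# theory: E×B/|B|^2 = (0, -1.0e4, 0) m/s for these fields
```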
Cox, Stephen J.; Michaelides, Angelos [Department of Chemistry, University College London, 20 Gordon Street, London WC1H 0AJ]; Towler, Michael D. [Theory of Condensed Matter Group, Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge CB3 0HE]; Alfè, Dario [Department of Earth Sciences, University College London, Gower Street, London WC1E 6BT]
2014-05-07
High-quality reference data from diffusion Monte Carlo calculations are presented for bulk sI methane hydrate, a complex crystal exhibiting both hydrogen-bond and dispersion dominated interactions. The performance of some commonly used exchange-correlation functionals and all-atom point charge force fields is evaluated. Our results show that none of the exchange-correlation functionals tested is sufficient to describe both the energetics and the structure of methane hydrate accurately, while the point charge force fields perform badly in their description of the cohesive energy but fare well for the dissociation energetics. By comparing to ice I{sub h}, we show that a good prediction of the volume and cohesive energy of the hydrate relies primarily on an accurate description of the hydrogen-bonded water framework, but that to correctly predict the stability of the hydrate with respect to dissociation into ice I{sub h} and methane gas, accuracy in the water-methane interaction is also required. Our results highlight the difficulty that density functional theory faces in describing both the hydrogen-bonded water framework and the dispersion-bound methane.
Heinisch, Howard L.; Singh, Bachu N.
2003-03-01
Within the last decade, molecular dynamics simulations of displacement cascades have revealed that glissile clusters of self-interstitial crowdions are formed directly in cascades. Also, under various conditions, a crowdion cluster can change its Burgers vector and glide along a different close-packed direction. In order to incorporate the migration properties of crowdion clusters into analytical rate theory models, it is necessary to describe the reaction kinetics of defects that migrate one-dimensionally with occasional changes in their Burgers vector. To meet this requirement, atomic-scale kinetic Monte Carlo (KMC) simulations have been used to study the defect reaction kinetics of one-dimensionally migrating crowdion clusters as a function of the frequency of direction changes, specifically to determine the sink strengths for such one-dimensionally migrating defects. The KMC experiments are used to guide the development of analytical expressions for use in reaction rate theories and especially to test their validity. Excellent agreement is found between the results of the KMC experiments and the analytical expressions derived for the transition from one-dimensional to three-dimensional reaction kinetics. Furthermore, KMC simulations have been performed to investigate the significant role of crowdion clusters in the formation and stability of void lattices. The necessity of both one-dimensional migration and Burgers vector changes for achieving a stable void lattice is demonstrated.
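The qualitative difference between one-dimensional and three-dimensional reaction kinetics can be seen with a toy lattice walk: a purely 1D migrating defect only finds sinks lying on its own glide line, while occasional direction changes (Burgers vector changes) let it sweep three dimensions. This is a schematic illustration with made-up sink positions, not the paper's KMC model.

```python
import random

# Toy illustration (not the paper's KMC model) of 1D vs. 3D defect reaction
# kinetics: a cluster hops along its glide direction and, with probability
# p_change per hop, picks a new close-packed direction (a Burgers vector
# change).  Sinks are made-up absorbing sites in a periodic box.
DIRECTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def hops_to_sink(p_change, sinks, box=16, max_hops=10000, rng=random):
    """Number of hops before absorption at a sink (capped at max_hops)."""
    pos = [0, 0, 0]
    d = rng.choice(DIRECTIONS)
    for n in range(1, max_hops + 1):
        if rng.random() < p_change:
            d = rng.choice(DIRECTIONS)        # Burgers vector change
        pos = [(p + s) % box for p, s in zip(pos, d)]
        if tuple(pos) in sinks:
            return n
    return max_hops

random.seed(1)
sinks = {(5, 7, 3), (12, 2, 15), (9, 13, 10)}   # none on the walker's start axes
mean_1d = sum(hops_to_sink(0.0, sinks) for _ in range(100)) / 100
mean_3d = sum(hops_to_sink(0.5, sinks) for _ in range(100)) / 100
# pure 1D migration never reaches the off-axis sinks; direction changes do
```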
McGrath, Matthew; Kuo, I-F W.; Ngouana, Brice F.; Ghogomu, Julius N.; Mundy, Christopher J.; Marenich, Aleksandr; Cramer, Christopher J.; Truhlar, Donald G.; Siepmann, Joern I.
2013-08-28
The free energy of solvation and dissociation of hydrogen chloride in water is calculated through a combined molecular simulation/quantum chemical approach at four temperatures between T = 300 and 450 K. The free energy is first decomposed into the sum of two components: the Gibbs free energy of transfer of molecular HCl from the vapor to the aqueous liquid phase and the standard-state free energy of acid dissociation of HCl in aqueous solution. The former quantity is calculated using Gibbs ensemble Monte Carlo simulations with either Kohn-Sham density functional theory or a molecular mechanics force field to determine the system's potential energy. The latter free energy contribution is computed using a continuum solvation model utilizing either experimental reference data or micro-solvated clusters. The predicted combined solvation and dissociation free energies agree very well with available experimental data. CJM was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.
Kyriakou, Ioanna; Emfietzoglou, Dimitris; Nojeh, Alireza; Moscovitch, Marko
2013-02-28
A systematic study of electron-beam penetration and backscattering in multi-walled carbon nanotube (MWCNT) materials for beam energies of approximately 0.3 to 30 keV is presented, based on event-by-event Monte Carlo simulation of electron trajectories using state-of-the-art scattering cross sections. The importance of different analytic approximations for computing the elastic and inelastic electron-scattering cross sections for MWCNTs is emphasized. We offer a simple parameterization of the total and differential elastic-scattering Mott cross section, using appropriate modifications to the Browning formula and the Thomas-Fermi screening parameter. A discrete-energy-loss approach to inelastic scattering based on dielectric theory is adopted, using different descriptions of the differential cross section. The sensitivity of electron penetration and backscattering parameters to the underlying scattering models is examined. Our simulations confirm the recent experimental backscattering data on MWCNT forests and, in particular, the steep increase of the backscattering yield at sub-keV energies as well as the sidewall escape effect at high beam energies.
Choi, Myunghee; Chan, Vincent S.
2014-02-28
This final report describes the work performed under U.S. Department of Energy Cooperative Agreement DE-FC02-08ER54954 for the period April 1, 2011 through March 31, 2013. The goal of this project was to perform iterated finite-orbit Monte Carlo simulations with full-wave fields for modeling tokamak ICRF wave heating experiments. In year 1, the finite-orbit Monte Carlo code ORBIT-RF and its iteration algorithms with the full-wave code AORSA were improved to enable systematic study of the factors responsible for the discrepancy between the simulated and measured fast-ion FIDA signals in the DIII-D and NSTX ICRF fast-wave (FW) experiments. In year 2, ORBIT-RF was coupled to the TORIC full-wave code for a comparative study of ORBIT-RF/TORIC and ORBIT-RF/AORSA results in FW experiments.
Mei, Donghai; Neurock, Matthew; Smith, C Michael
2009-10-22
The kinetics of the selective hydrogenation of acetylene-ethylene mixtures over model Pd(111) and bimetallic Pd-Ag alloy surfaces were examined using first-principles-based kinetic Monte Carlo (KMC) simulations to elucidate the effects of alloying as well as of process conditions (temperature and hydrogen partial pressure). The mechanisms that control the selective and unselective routes, which include hydrogenation, dehydrogenation, and C-C bond breaking pathways, were analyzed using first-principles density functional theory (DFT) calculations. The results were used to construct an intrinsic kinetic database that was used in a variable-time-step kinetic Monte Carlo simulation to follow the kinetics and molecular transformations in the selective hydrogenation of acetylene-ethylene feeds over Pd and Pd-Ag surfaces. The lateral interactions between coadsorbates that occur through-surface and through-space were estimated using DFT-parameterized bond order conservation and van der Waals interaction models, respectively. The simulation results show that the rate of acetylene hydrogenation as well as the ethylene selectivity increase with temperature over both the Pd(111) and Pd-Ag/Pd(111) alloy surfaces. The selective hydrogenation of acetylene to ethylene proceeds via the formation of a vinyl intermediate. The unselective formation of ethane is the result of the over-hydrogenation of ethylene as well as the over-hydrogenation of vinyl to form ethylidene. Ethylidene further hydrogenates to form ethane and dehydrogenates to form ethylidyne. While ethylidyne is not reactive, it can block adsorption sites, which limits the availability of hydrogen on the surface and thus acts to enhance the selectivity.
Alloying Ag into the Pd surface decreases the overall rate but increases the ethylene selectivity significantly by promoting the selective hydrogenation of vinyl to ethylene while concomitantly suppressing the unselective path involving the hydrogenation of vinyl to ethylidene and the dehydrogenation of ethylidene to ethylidyne. This is consistent with experimental results which suggest that only the predominant hydrogenation path, involving the sequential addition of hydrogen to form vinyl and ethylene, exists over the Pd-Ag alloys. Ag enhances the desorption of ethylene and hydrogen from the surface, thus limiting their ability to undergo subsequent reactions. The simulated apparent activation barriers were calculated to be 32-44 kJ/mol on Pd(111) and 26-31 kJ/mol on Pd-Ag/Pd(111), respectively. The reaction was found to be essentially first order in hydrogen over both the Pd(111) and Pd-Ag/Pd(111) surfaces. The results reveal that increases in the hydrogen partial pressure increase the activity but decrease the ethylene selectivity over both surfaces. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.
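A variable-time-step KMC of the kind used above advances the clock by an exponentially distributed waiting time set by the total rate and selects each event with probability proportional to its rate. The sketch below is generic; the event names and rate values are placeholders, not the DFT-derived database of the study.

```python
import math
import random

# Generic variable-time-step (Gillespie-type) KMC sketch of the kind used to
# follow surface reaction kinetics.  Event names and rates are illustrative
# placeholders, not DFT-derived values for acetylene hydrogenation.
def kmc(rates, steps, seed=42):
    """rates: dict event -> rate (1/s).  Returns (event counts, elapsed time)."""
    rng = random.Random(seed)
    events = list(rates)
    total = sum(rates.values())
    counts = {e: 0 for e in events}
    t = 0.0
    for _ in range(steps):
        # advance the clock by an exponentially distributed waiting time
        t += -math.log(1.0 - rng.random()) / total
        # pick an event with probability proportional to its rate
        r = rng.random() * total
        acc = 0.0
        for e in events:
            acc += rates[e]
            if r < acc:
                counts[e] += 1
                break
    return counts, t

counts, elapsed = kmc({"hydrogenate": 50.0, "dehydrogenate": 5.0, "desorb": 1.0}, 10000)
```

In a real surface KMC the event list and rates are rebuilt after every step because the executed event changes the lattice configuration; the constant-rate loop here only shows the time-stepping and event-selection mechanics.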
Tesfamicael, B; Gueye, P; Lyons, D; Avery, S; Mahesh, M
2014-06-01
Purpose: To monitor the secondary dose distribution originating from a water phantom during proton therapy of prostate cancer using scintillating fibers. Methods: The Geant4 Monte Carlo toolkit version 9.6.p02 was used to simulate prostate cancer proton therapy treatments. Two cases were studied. In the first case, 8 × 8 = 64 equally spaced fibers inside three 4 × 4 × 2.54 cm{sup 3} DuPont™ Delrin blocks were used to monitor the emission of secondary particles in the transverse (left and right) and distal regions relative to the beam direction. In the second case, a scintillating block with a thickness of 2.54 cm and the same vertical and longitudinal dimensions as the water phantom was used. Geometrical cuts were used to extract the energy deposited in each fiber and in the scintillating block. Results: The transverse dose distributions from secondary particles in the two cases agree within <5% and show very good symmetry. The energy deposited not only gradually increases as one moves from the peripheral rows of fibers towards the center of the block (aligned with the center of the prostate) but also decreases as one goes from the frontal to the distal region of the block. The ratio of the dose to the prostate to the dose in the middle two rows of fibers showed a linear relationship with a slope of (−3.55±2.26) × 10{sup −5} MeV per treatment Gy. The distal detectors recorded a very small energy deposit due to water attenuation. Conclusion: With a good calibration and a well-defined correlation between the dose to the external fibers and to the prostate, such fibers can be used for real-time dose verification of the target.
Farah, J; Bonfrate, A; Donadille, L; Dubourg, N; Lacoste, V; Martinetti, F; Sayah, R; Trompier, F; Clairand, I [IRSN - Institute for Radiological Protection and Nuclear Safety, Fontenay-aux-roses (France); Caresana, M [Politecnico di Milano, Milano (Italy); Delacroix, S; Nauraye, C [Institut Curie - Centre de Protontherapie d Orsay, Orsay (France); Herault, J [Centre Antoine Lacassagne, Nice (France); Piau, S; Vabre, I [Institut de Physique Nucleaire d Orsay, Orsay (France)
2014-06-01
Purpose: To measure stray radiation inside a passive scattering proton therapy facility, compare the values to Monte Carlo (MC) simulations, and identify the actual needs and challenges. Methods: Measurements and MC simulations were used to characterize the neutron exposure associated with 75 MeV ocular or 180 MeV intracranial passively scattered proton treatments. First, using a specifically designed high-sensitivity Bonner sphere system, neutron spectra were measured at different positions inside the treatment rooms. Next, measurement-based mapping of the neutron ambient dose equivalent was carried out using several TEPCs and rem-meters. Finally, photon and neutron organ doses were measured using TLDs, RPLs, and PADCs set inside anthropomorphic phantoms (Rando, and 1- and 5-year-old CIRS). All measurements were also simulated with MCNPX to investigate the efficiency of MC models in predicting stray neutrons with different nuclear cross sections and models. Results: Knowledge of the neutron fluence and energy distribution inside a proton therapy room is critical for stray radiation dosimetry. However, as spectrometry unfolding starts from an MC guess spectrum and suffers from algorithmic limits, a 20% spectrometry uncertainty is expected. H*(10) mapping with TEPCs and rem-meters showed good agreement between the detectors. Differences within measurement uncertainty (10-15%) were observed and are inherent to the energy, fluence, and directional response of each detector. For a typical ocular and intracranial treatment, respectively, neutron doses outside the clinical target volume of 0.4 and 11 mGy were measured inside the Rando phantom. Photon doses were 2-10 times lower depending on organ position. High uncertainties (40%) are inherent to the TLD and PADC measurements due to the need for neutron spectra at the detector position.
Finally, stray neutron prediction with MC simulations proved to be extremely dependent on the proton beam energy and on the nuclear models and cross sections used. Conclusion: This work highlights measurement and simulation limits for ion therapy radiation protection applications.
Liu, T; Du, X; Su, L; Gao, Y; Ji, W; Xu, X; Zhang, D; Shi, J; Liu, B; Kalra, M
2014-06-15
Purpose: To compare CT doses derived from experiments and from GPU-based Monte Carlo (MC) simulations, using a human cadaver and the ATOM phantom. Methods: The cadaver of an 88-year-old male and the ATOM phantom were scanned with a GE LightSpeed Pro 16 MDCT. For the cadaver study, thimble chambers (Model 105?0.6CT and 106?0.6CT) were used to measure the absorbed dose in different deep and superficial organs. Whole-body scans were first performed to construct a complete image database for the MC simulations. Abdomen/pelvis helical scans were then conducted using 120/100 kVp, 300 mAs, and a pitch factor of 1.375:1. For the ATOM phantom study, OSL dosimeters were used and helical scans were performed using 120 kVp and x, y, z tube current modulation (TCM). For the MC simulations, sufficient particles were run in both cases such that the statistical errors of the ARCHER-CT results were limited to 1%. Results: For the human cadaver scan, the doses to the stomach, liver, colon, left kidney, pancreas, and urinary bladder were compared. The difference between experiments and simulations was within 19% for 120 kVp and 25% for 100 kVp. For the ATOM phantom scan, the doses to the lung, thyroid, esophagus, heart, stomach, liver, spleen, kidneys, and thymus were compared. The difference was 39.2% for the esophagus and within 16% for all other organs. Conclusion: In this study the experimental and simulated CT doses were compared. Their difference is primarily attributed to systematic errors in the MC simulations, including the accuracy of the bowtie filter modeling and the algorithm used to generate the voxelized phantom from DICOM images. The experimental error is considered small and may arise from the dosimeters. Supported by an R01 grant (R01EB015478) from the National Institute of Biomedical Imaging and Bioengineering.
Vazquez Quino, L; Calvo, O; Huerta, C; DeWeese, M
2014-06-01
Purpose: To study the perturbation due to the use of a novel reference ion chamber designed for small-field dosimetry (KermaX Plus C by IBA). Methods: Using the phase-space files for TrueBeam photon beams provided by Varian in IAEA-compliant format for 6 and 15 MV, Monte Carlo simulations were performed with BEAMnrc and DOSXYZnrc to investigate the perturbation introduced by a reference chamber into the PDDs and profiles measured in a water tank. Field sizes of 1×1, 2×2, 3×3, and 5×5 cm² were simulated for both energies, with and without a 0.5 mm aluminum foil equivalent in attenuation to the reference chamber specifications, in a water phantom of 30×30×30 cm³ with a voxel resolution of 2 mm. The PDDs, profiles, and gamma analyses of the simulations were performed, as well as an energy spectrum analysis of the phase-space files generated during the simulation. Results: Examination of the energy spectra showed a very small increase in the build-up region, but no difference is apparent after dmax. The PDD, profile, and gamma analyses showed very good agreement between the simulations with and without the Al foil; a gamma analysis with a 2% and 2 mm criterion resulted in 99.9% of the points passing. Conclusion: This work indicates the potential benefit of using the KermaX Plus C as a reference chamber in the measurement of PDDs and profiles for small fields, since the perturbation due to the presence of the chamber is minimal and the chamber can be considered transparent to the photon beam.
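The gamma analysis quoted above combines a dose-difference criterion and a distance-to-agreement criterion into a single index. A brute-force one-dimensional version is sketched below with made-up profiles; real analyses are done in 2D or 3D on measured or simulated dose grids.

```python
import numpy as np

# Brute-force 1D gamma analysis sketch (global dose difference plus DTA,
# like the 2%/2 mm criterion quoted above).  Profiles are made up.
def gamma_pass_rate(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta_mm=2.0):
    """Fraction of evaluated points with gamma index <= 1."""
    gammas = []
    for xe, de in zip(x_eval, d_eval):
        g2 = ((x_ref - xe) / dta_mm) ** 2 + ((d_ref - de) / (dd * d_ref.max())) ** 2
        gammas.append(np.sqrt(g2.min()))    # best match over the reference grid
    return float(np.mean(np.array(gammas) <= 1.0))

x = np.linspace(-50.0, 50.0, 201)                  # mm, 0.5 mm grid
ref = np.exp(-((x / 30.0) ** 2))                   # reference profile
meas = 1.01 * np.exp(-(((x - 0.5) / 30.0) ** 2))   # 1% scaled, 0.5 mm shifted
rate = gamma_pass_rate(x, ref, x, meas)
```

Because the small scaling and shift both fall well inside the 2%/2 mm tolerances, this example passes everywhere; tightening `dd` or `dta_mm` makes points near steep gradients fail first.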
Dupuy, Nicolas; Bouaouli, Samira; Mauri, Francesco; Casula, Michele; Sorella, Sandro
2015-06-07
We study the ionization energy, electron affinity, and the π → π{sup *} ({sup 1}L{sub a}) excitation energy of the anthracene molecule by means of variational quantum Monte Carlo (QMC) methods based on a Jastrow-correlated antisymmetrized geminal power (JAGP) wave function developed on molecular orbitals (MOs). The MO-based JAGP ansatz allows one to rigorously treat electron transitions, such as the HOMO → LUMO one, which underlies the {sup 1}L{sub a} excited state. We present a QMC optimization scheme able to preserve the rank of the antisymmetrized geminal power matrix, thanks to a constrained minimization with projectors built upon symmetry-selected MOs. We show that this approach leads to stable energy minimization and geometry relaxation of both ground and excited states, performed consistently within the correlated QMC framework. Geometry optimization of excited states is needed to make a reliable and direct comparison with experimental adiabatic excitation energies. This is particularly important in π-conjugated and polycyclic aromatic hydrocarbons, where there is a strong interplay between low-lying energy excitations and structural modifications, playing a functional role in many photochemical processes. Anthracene is an ideal benchmark to test these effects. Its geometry relaxation energies upon electron excitation are up to 0.3 eV in the neutral {sup 1}L{sub a} excited state, while they are of the order of 0.1 eV in electron addition and removal processes. Significant modifications of the ground-state bond-length alternation are revealed in the QMC excited-state geometry optimizations. Our QMC study yields benchmark results for both geometries and energies, with deviations from experiment below chemical accuracy once zero-point energy effects are taken into account.
EMAM, M; Eldib, A; Lin, M; Li, J; Chibani, O; Ma, C
2014-06-01
Purpose: An in-house Monte Carlo based treatment planning system (MC TPS) has been developed for modulated electron radiation therapy (MERT). Our preliminary MERT planning experience called for a more user-friendly graphical user interface. The current work aimed to design graphical windows and tools to facilitate the contouring and planning process. Methods: Our in-house GUI MC TPS is built on a set of EGS4 user codes, namely MCPLAN and MCBEAM, in addition to an in-house optimization code named MCOPTIM. The patient virtual phantom is constructed using the tomographic images in DICOM format exported from clinical treatment planning systems (TPS). Treatment target volumes and critical structures are usually contoured on the clinical TPS and then sent as a structure-set file. In our GUI program we developed a visualization tool to allow the planner to visualize the DICOM images and delineate the various structures. We implemented an option in our code for automatic contouring of the patient body and lungs. We also created an interface window displaying a three-dimensional representation of the target and a graphical representation of the treatment beams. Results: The new GUI features helped streamline the planning process. The implemented contouring option eliminated the need for performing this step on the clinical TPS. The auto-detection option for contouring the outer patient body and lungs was tested on patient CTs and shown to be as accurate as that of the clinical TPS. The three-dimensional representation of the target and the beams allows better selection of the gantry, collimator, and couch angles. Conclusion: An in-house GUI program has been developed for more efficient MERT planning. The aiding tools implemented in the program are time saving and give better control of the planning process.
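Automatic body/lung contouring of the kind described above is, at its core, HU thresholding plus hole filling. A minimal sketch with illustrative thresholds (the in-house tool's actual thresholds and morphology steps are not given):

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def auto_contour(ct_hu, body_thresh=-300.0, lung_max=-500.0):
    """Threshold-based auto-contouring sketch: the body mask is obtained by
    thresholding out air and filling internal holes (airways, lungs); the
    lung mask is the low-density region inside the filled body mask.
    Thresholds are illustrative, not the in-house tool's values."""
    body = binary_fill_holes(ct_hu > body_thresh)   # tissue + filled cavities
    lungs = body & (ct_hu < lung_max)               # low-HU voxels inside body
    return body, lungs
```

On a real CT the body mask would additionally need couch removal and largest-connected-component selection; this sketch shows only the thresholding idea.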
Fang Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo
2012-01-15
Purpose: The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. Methods: A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to external field and Coulombic interactions, and diffusion due to Brownian motion. Results: Pulse-height spectra (PHS) have been simulated with different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown, one using the entire PHS distribution, and the other using the photopeak. The latter ignores contributions from Compton scattering and K-fluorescence. Comparisons differ by approximately 2% between experimental measurements and simulations. Conclusions: The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. Trapping and recombination models are all shown to affect the Swank factor.
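The full-distribution versus photopeak distinction above matters because the Swank factor is a moment ratio of the pulse-height spectrum. A minimal sketch of the full-distribution calculation (the moment formula is standard; the function name is ours):

```python
import numpy as np

def swank_factor(pulse_heights, counts):
    """Swank (information) factor from a pulse-height spectrum:
    I = M1^2 / (M0 * M2), where Mn is the n-th moment of the spectrum.
    A delta-function response gives I = 1; broadening lowers I."""
    m0 = np.sum(counts)
    m1 = np.sum(counts * pulse_heights)
    m2 = np.sum(counts * pulse_heights ** 2)
    return m1 ** 2 / (m0 * m2)
```

Restricting the input spectrum to the photopeak, as in the second method above, simply means passing only the photopeak bins to the same formula.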
Ahmad, I.; Back, B.B.; Betts, R.R.
1995-08-01
An essential component in the assessment of the significance of the results from APEX is a demonstrated understanding of the acceptance and response of the apparatus. This requires detailed simulations which can be compared to the results of various source and in-beam measurements. These simulations were carried out using the computer codes EGS and GEANT, both specifically designed for this purpose. As far as is possible, all details of the geometry of APEX were included. We compared the results of these simulations with measurements using electron conversion sources, positron sources and pair sources. The overall agreement is quite acceptable and some of the details are still being worked on. The simulation codes were also used to compare the results of measurements of in-beam positron and conversion electrons with expectations based on known physics or other methods. Again, satisfactory agreement is achieved. We are currently working on the simulation of various pair-producing scenarios such as the decay of a neutral object in the mass range 1.5-2.0 MeV and also the emission of internal pairs from nuclear transitions in the colliding ions. These results are essential input to the final results from APEX on cross section limits for various, previously proposed, sharp-line producing scenarios.
Waist parameter determination from measured spot sizes
Hajek, M.
1989-12-15
A simple, novel method for determining the waist parameters of a Gaussian laser beam, derived from a geometric treatment of the problem, is introduced. The method does not require any least-squares process, ordering of experimental data, or estimates of the waist parameters.
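As background on why no least-squares fit is needed: for a Gaussian beam, w(z)² is exactly quadratic in z, so three measured spot sizes determine the waist location and size uniquely. A sketch under that standard beam model (not the paper's specific geometric construction):

```python
import numpy as np

def waist_from_spots(z, w):
    """Recover waist location z0 and waist size w0 from three (or more)
    measured spot sizes w at positions z, using w(z)^2 = a z^2 + b z + c."""
    a, b, c = np.polyfit(z, np.asarray(w) ** 2, 2)  # exact for 3 points
    z0 = -b / (2.0 * a)                 # parabola vertex -> waist location
    w0 = np.sqrt(c - b ** 2 / (4.0 * a))  # vertex value -> waist size
    return z0, w0
```

With exactly three measurements the quadratic passes through all points, so no data ordering or initial parameter estimates are required.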
Dynamic Characterization of Spot Welds for AHSS
Broader source: Energy.gov [DOE]
2010 DOE Vehicle Technologies and Hydrogen Programs Annual Merit Review and Peer Evaluation Meeting, June 7-11, 2010 -- Washington D.C.
Recent Trends in Natural Gas Spot Prices
Reports and Publications (EIA)
1997-01-01
This article focuses primarily on conditions and developments in the East Consuming Region and their connection to prices at the Henry Hub in the Producing Region.
Spot test kit for explosives detection
Pagoria, Philip F; Whipple, Richard E; Nunes, Peter J; Eckels, Joel Del; Reynolds, John G; Miles, Robin R; Chiarappa-Zucca, Marina L
2014-03-11
An explosion tester system comprising a body, a lateral flow membrane swab unit adapted to be removeably connected to the body, a first explosives detecting reagent, a first reagent holder and dispenser operatively connected to the body, the first reagent holder and dispenser containing the first explosives detecting reagent and positioned to deliver the first explosives detecting reagent to the lateral flow membrane swab unit when the lateral flow membrane swab unit is connected to the body, a second explosives detecting reagent, and a second reagent holder and dispenser operatively connected to the body, the second reagent holder and dispenser containing the second explosives detecting reagent and positioned to deliver the second explosives detecting reagent to the lateral flow membrane swab unit when the lateral flow membrane swab unit is connected to the body.
Portsmouth Training Exercise Helps Radiological Trainees Spot...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
At hand is a challenging task by EM contractor Fluor-BWXT Portsmouth's (FBP) ...
Besemer, A; Bednarz, B; Titz, B; Grudzinski, J; Weichert, J; Hall, L
2014-06-01
Purpose: Combination targeted radionuclide therapy (TRT) is appealing because it can potentially exploit different mechanisms of action from multiple radionuclides, as well as the variable dose rates due to the different radionuclide half-lives. This work describes the development of a multi-objective optimization algorithm to calculate the optimal ratio of radionuclide injection activities for delivery of combination TRT. Methods: The diapeutic (diagnostic and therapeutic) agent CLR1404 was used as a proof-of-principle compound in this work. Isosteric iodine substitution in CLR1404 creates a molecular imaging agent when labeled with I-124, or a targeted radiotherapeutic agent when labeled with I-125 or I-131. PET/CT images of high-grade glioma patients were acquired at 4.5, 24, and 48 hours post injection of 124I-CLR1404. The therapeutic 131I-CLR1404 and 125I-CLR1404 absorbed dose (AD) and biological effective dose (BED) were calculated for each patient using a patient-specific Monte Carlo dosimetry platform. The optimal ratio of injection activities for each radionuclide was calculated with a multi-objective optimization algorithm using the weighted sum method. Objective functions such as the tumor dose heterogeneity and the ratio of normal-tissue to tumor doses were minimized, and the relative importance weights of each objective function were varied. Results: For each objective function, the program outputs a Pareto surface map representing all possible combinations of radionuclide injection activities, so that values minimizing the objective function can be visualized. A Pareto surface map of the weighted sum given a set of user-specified importance weights is also displayed. Additionally, the ratio of optimal injection activities as a function of all possible importance weights is generated, so that the user can select the optimal ratio based on the desired weights.
Conclusion: Multi-objective optimization of radionuclide injection activities can provide an invaluable tool for maximizing the dosimetric benefits in multi-radionuclide combination TRT. BT, JG, and JW are affiliated with Cellectar Biosciences which owns the licensing rights to CLR1404 and related compounds.
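The weighted-sum scalarization and Pareto screening described above can be sketched as follows; the function names and the 1-D grid of candidate activity ratios are illustrative, not the authors' code:

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points when every objective is minimized."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some point is <= p everywhere and < p somewhere
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

def optimal_ratio(ratios, objectives, weights):
    """Weighted-sum method: pick the candidate injection-activity ratio
    minimizing the weighted sum of (normalized) objective values.
    'objectives' is an (n_candidates x n_objectives) array."""
    score = np.asarray(objectives, dtype=float) @ np.asarray(weights, dtype=float)
    return ratios[int(np.argmin(score))]
```

Sweeping `weights` over a grid and recording `optimal_ratio` for each reproduces, in miniature, the "optimal ratio as a function of importance weights" map the abstract describes.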
Forbang, R Teboh
2014-06-01
Purpose: MultiPlan, the treatment planning system for the CyberKnife robotic radiosurgery system, offers two approaches to dose computation: Ray-Tracing (RT), the default technique, and Monte Carlo (MC), an option. RT is deterministic but accounts for primary heterogeneity only. MC, on the other hand, has an uncertainty associated with the calculation results; its advantage is that it additionally accounts for heterogeneity effects on the scattered dose. Not all sites will benefit from MC. The goal of this work was to focus on central nervous system (CNS) tumors and compare, dosimetrically, treatment plans computed with RT versus MC. Methods: Treatment plans were computed using both RT and MC for sites covering (a) the brain, (b) C-spine, (c) upper T-spine, (d) lower T-spine, (e) L-spine, and (f) sacrum. RT was first used to compute clinically valid treatment plans. Then the same treatment parameters (monitor units, beam weights, etc.) were used in the MC algorithm to compute the dose distribution. The plans were then compared for tumor coverage to illustrate any difference. All MC calculations were performed at a 1% uncertainty. Results: Using the RT technique, the tumor coverage for the brain, C-spine (C3-C7), upper T-spine (T4-T6), lower T-spine (T10), L-spine (L2), and sacrum was 96.8%, 93.1%, 97.2%, 87.3%, 91.1%, and 95.3%, respectively. The corresponding tumor coverage based on the MC approach was 98.2%, 95.3%, 87.55%, 88.2%, 92.5%, and 95.3%. It should be noted that the acceptable planning target coverage for our clinical practice is >95%. The coverage can be compromised for spine tumors to spare normal tissues such as the spinal cord. Conclusion: For treatment planning involving the CNS, RT and MC appear similar for most sites except the T-spine area, where most of the beams traverse lung tissue. In this case, MC is highly recommended.
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel, and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and to reduce, as far as possible, the number of time loops. Here three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processes by single-looping over all particles, while the mean time step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by multiple cores on a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell).
These accelerating approaches of PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated with benchmark solution of discrete-sectional method. The simulation results show that the comprehensive approach can attain very favorable improvement in cost without sacrificing computational accuracy.
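The acceptance-rejection step with a majorant rate can be illustrated in a few lines. This toy version handles a single coagulation event among equally-weighted particles; the paper's differentially-weighted, GPU-parallel scheme is far more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)

def coagulation_step(volumes, kernel, k_max):
    """One acceptance-rejection coagulation event: draw a random pair and
    accept it with probability kernel(v_i, v_j) / k_max, where k_max is a
    majorant rate bounding the kernel over all pairs. This avoids the
    O(N^2) scan of all pair rates before each event."""
    n = len(volumes)
    while True:
        i, j = rng.choice(n, size=2, replace=False)
        if rng.random() < kernel(volumes[i], volumes[j]) / k_max:
            volumes[i] += volumes[j]        # merge j into i (mass conserved)
            return np.delete(volumes, j)
```

A tighter majorant `k_max` means fewer rejected draws, which is exactly the accuracy/cost trade-off the weighted majorant kernel is built to manage.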
Teymurazyan, A.; Rowlands, J. A.; Thunder Bay Regional Research Institute, Thunder Bay P7A 7T1; Department of Radiation Oncology, University of Toronto, Toronto M5S 3E2; Pang, G.
2014-04-15
Purpose: Electronic Portal Imaging Devices (EPIDs) have been widely used in radiation therapy and are still needed on linear accelerators (Linacs) equipped with kilovoltage cone beam CT (kV-CBCT) or MRI systems. Our aim is to develop a new high quantum efficiency (QE) Čerenkov Portal Imaging Device (CPID) that is quantum noise limited at dose levels corresponding to a single Linac pulse. Methods: Recently a new concept of CPID for MV x-ray imaging in radiation therapy was introduced. It relies on the Čerenkov effect for x-ray detection. The proposed design consisted of a matrix of optical fibers aligned with the incident x-rays and coupled to an active matrix flat panel imager (AMFPI) for image readout. A weakness of such a design is that too few Čerenkov light photons reach the AMFPI for each incident x-ray, and an AMFPI with avalanche gain is required in order to overcome the readout noise for portal imaging applications. In this work the authors propose to replace the optical fibers in the CPID with light guides without a cladding layer that are suspended in air. The air between the light guides takes on the role of the cladding layer found in a regular optical fiber. Since air has a significantly lower refractive index (≈1 versus 1.38 in a typical cladding layer), a much superior light collection efficiency is achieved. Results: A Monte Carlo simulation of the new design has been conducted to investigate its feasibility. Detector quantities such as quantum efficiency (QE), spatial resolution (MTF), and frequency-dependent detective quantum efficiency (DQE) have been evaluated. The detector signal and the quantum noise have been compared to the readout noise. Conclusions: Our studies show that the modified new CPID has a QE and DQE more than an order of magnitude greater than those of current clinical systems, and yet a spatial resolution similar to that of current low-QE flat-panel based EPIDs.
Furthermore it was demonstrated that the new CPID does not require an avalanche gain in the AMFPI and is quantum noise limited at dose levels corresponding to a single Linac pulse.
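The collection-efficiency gain claimed above follows from total internal reflection: a lower cladding index widens the trapping cone. A back-of-the-envelope meridional-ray estimate, assuming a core index of about 1.5 (the actual light-guide material is not specified in the abstract):

```python
def trapped_fraction(n_core, n_clad):
    """Fraction of isotropically emitted light trapped by total internal
    reflection in one direction along a straight light guide (meridional-ray
    estimate): 0.5 * (1 - cos(theta_t)) with cos(theta_t) = n_clad / n_core."""
    return 0.5 * (1.0 - n_clad / n_core)

# Assumed core index ~1.5 (typical glass): air "cladding" (n ~ 1.0)
# versus a conventional cladding layer (n = 1.38).
gain = trapped_fraction(1.5, 1.0) / trapped_fraction(1.5, 1.38)
```

Under these assumptions the air-clad guide traps roughly four times more of the Čerenkov light than a conventionally clad fiber, consistent with the qualitative claim of much superior collection efficiency.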
Glaser, R E; Johannesson, G; Sengupta, S; Kosovic, B; Carle, S; Franz, G A; Aines, R D; Nitao, J J; Hanley, W G; Ramirez, A L; Newmark, R L; Johnson, V M; Dyer, K M; Henderson, K A; Sugiyama, G A; Hickling, T L; Pasyanos, M E; Jones, D A; Grimm, R J; Levine, R A
2004-03-11
Accurate prediction of complex phenomena can be greatly enhanced through the use of data and observations to update simulations. The ability to create these data-driven simulations is limited by error and uncertainty in both the data and the simulation. The stochastic engine project addressed this problem through the development and application of a family of Markov chain Monte Carlo methods utilizing importance sampling driven by forward simulators to minimize time spent searching very large state spaces. The stochastic engine rapidly chooses among a very large number of hypothesized states and selects those that are consistent (within error) with all the information at hand. Predicted measurements from the simulator are used to estimate the likelihood of actual measurements, which in turn reduces the uncertainty in the original sample space via a conditional probability method called Bayesian inferencing. This highly efficient, staged Metropolis-type search algorithm allows us to address extremely complex problems and opens the door to solving many data-driven, nonlinear, multidimensional problems. A key challenge has been developing representation methods that integrate the local details of real data with the global physics of the simulations, enabling supercomputers to efficiently solve the problem. Development focused on large-scale problems, and on examining the mathematical robustness of the approach in diverse applications. Multiple data types were combined with large-scale simulations to evaluate systems with approximately 10^20,000 possible states (detecting underground leaks at the Hanford waste tanks). The probable uses of chemical process facilities were assessed using an evidence-tree representation and in-process updating.
Other applications included contaminant flow paths at the Savannah River Site, locating structural flaws in buildings, improving models for seismic travel times systems used to monitor nuclear proliferation, characterizing the source of indistinct atmospheric plumes, and improving flash radiography. In the course of developing these applications, we also developed new methods to cluster and analyze the results of the state-space searches, as well as a number of algorithms to improve the search speed and efficiency. Our generalized solution contributes both a means to make more informed predictions of the behavior of very complex systems, and to improve those predictions as events unfold, using new data in real time.
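The staged Metropolis-type search described above reduces, at its core, to the standard accept/reject rule on likelihood ratios. A minimal single-parameter sketch (in the stochastic engine the likelihood comes from a forward simulator, not a closed form):

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis(log_likelihood, x0, n_steps, step=0.5):
    """Minimal Metropolis sampler: propose a Gaussian perturbation of the
    current state and accept it with probability min(1, L(new)/L(old)).
    States consistent with the data (high likelihood) are visited often."""
    x, ll = x0, log_likelihood(x0)
    chain = [x]
    for _ in range(n_steps):
        x_new = x + rng.normal(0.0, step)
        ll_new = log_likelihood(x_new)
        if np.log(rng.random()) < ll_new - ll:   # accept/reject rule
            x, ll = x_new, ll_new
        chain.append(x)
    return np.array(chain)
```

Replacing `log_likelihood` with a call into a forward simulator, and staging proposals from coarse to fine representations, gives the flavor of the engine's search over very large state spaces.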
Cai, Zhongli; Chattopadhyay, Niladri; Kwon, Yongkyu Luke; Pignol, Jean-Philippe; Lechtman, Eli; Reilly, Raymond M.; Department of Medical Imaging, University of Toronto, Toronto, Ontario M5S 3E2; Toronto General Research Institute, University Health Network, Toronto, Ontario M5G 2C4
2013-11-15
Purpose: The authors' aims were to model how various factors influence radiation dose enhancement by gold nanoparticles (AuNPs) and to propose a new modeling approach to the dose enhancement factor (DEF). Methods: The authors used the Monte Carlo N-Particle (MCNP 5) computer code to simulate photon and electron transport in cells. The authors modeled human breast cancer cells as a single cell, a monolayer, or a cluster of cells. Different numbers of 5, 30, or 50 nm AuNPs were placed in the extracellular space, on the cell surface, in the cytoplasm, or in the nucleus. Photon sources examined in the simulation included nine monoenergetic x-rays (10-100 keV), an x-ray beam (100 kVp), and 125I and 103Pd brachytherapy seeds. Both nuclear and cellular dose enhancement factors (NDEFs, CDEFs) were calculated. The ability of these metrics to predict the experimental DEF based on the clonogenic survival of MDA-MB-361 human breast cancer cells exposed to AuNPs and x-rays was compared. Results: NDEFs show a strong dependence on photon energy, with peaks at 15, 30/40, and 90 keV. The cell model and subcellular location of AuNPs influence the peak position and value of the NDEF. NDEFs decrease in the order of AuNPs in the nucleus, cytoplasm, cell membrane, and extracellular space. NDEFs also decrease in the order of AuNPs in a cell cluster, monolayer, and single cell if the photon energy is larger than 20 keV. NDEFs depend linearly on the number of AuNPs per cell. Similar trends were observed for CDEFs. NDEFs using the monolayer cell model were more predictive than either the single-cell or cluster models of the DEFs experimentally derived from the clonogenic survival of cells cultured as a monolayer. The amount of AuNPs required to double the prescribed dose in terms of mg Au/g tissue decreases as the size of AuNPs increases, especially when AuNPs are in the nucleus and the cytoplasm.
For 40 keV x-rays and a cluster of cells, doubling the prescribed x-ray dose (NDEF = 2) using 30 nm AuNPs would require 5.1 ± 0.2, 9 ± 1, 10 ± 1, and 10 ± 1 mg Au/g tissue in the nucleus, in the cytoplasm, on the cell surface, or in the extracellular space, respectively. Using 50 nm AuNPs, the required amount decreases to 3.1 ± 0.3, 8 ± 1, 9 ± 1, and 9 ± 1 mg Au/g tissue, respectively. Conclusions: NDEF is a new metric that can predict the radiation enhancement of AuNPs for various experimental conditions. The cell model, the subcellular location and size of AuNPs, and the number of AuNPs per cell, as well as the x-ray photon energy, all affect NDEFs. Larger AuNPs in the nucleus of clustered cells exposed to x-rays of 15 or 40 keV maximize NDEFs.
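The DEF bookkeeping above is a simple dose ratio, and the reported near-linear dependence of NDEF on AuNP loading makes inverting for a required gold mass straightforward. A sketch in which the slope is a hypothetical fitted value, not a number from the study:

```python
def dose_enhancement_factor(dose_with_aunp, dose_without):
    """DEF as the ratio of dose (to nucleus or whole cell) with AuNPs
    present to the dose without them, for the same irradiation."""
    return dose_with_aunp / dose_without

def gold_mass_for_target_def(target_def, def_slope_per_mg):
    """Invert an approximately linear NDEF-vs-loading relation,
    NDEF ~ 1 + slope * loading, to get the loading (mg Au / g tissue)
    needed for a desired NDEF. The slope is a hypothetical fit parameter."""
    return (target_def - 1.0) / def_slope_per_mg
```

For example, a fitted slope of 0.2 per mg Au/g tissue would imply about 5 mg Au/g tissue to reach NDEF = 2, of the same order as the nuclear-loading values quoted above.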
Zhou Hong; Boone, John M.
2008-06-15
Monte Carlo simulations were used to evaluate the radiation dose to infinitely long cylinders of water, polyethylene, and poly(methylmethacrylate) (PMMA) from 10 to 500 mm in diameter. Radiation doses were computed by simulating a 10 mm divergent primary beam striking the cylinder at z=0, and the scattered radiation in the -z and +z directions was integrated out to infinity. Doses were assessed using the total energy deposited divided by the mass of the 10-mm-thick volume of material in the primary beam. This approach is consistent with the notion of the computed tomography dose index (CTDI) integrated over infinite z, which is equivalent to the dose near the center of an infinitely long CT scan. Monoenergetic x-ray beams were studied from 5 to 140 keV, allowing polyenergetic x-ray spectra to be evaluated using a weighted average. The radiation dose for a 10-mm-thick CT slice was assessed at the center, at the edge, and over the entire diameter of the phantom. The geometry of a commercial CT scanner was simulated, and the computed results were in good agreement with measured doses. The absorbed dose in water for a 120 kVp x-ray spectrum with no bow tie filter for a 50 mm cylinder diameter was about 1.2 mGy per mGy air kerma at isocenter for both the peripheral and center regions, and dropped to 0.84 mGy/mGy for a 500-mm-diam water phantom at the periphery, where the corresponding value for the center location was 0.19 mGy/mGy. The influence of phantom composition was studied. For a diameter of 100 mm, the dose coefficients were 1.23 for water, 1.02 for PMMA, and 0.94 for polyethylene (at 120 kVp). For larger diameter phantoms, the order changed: for a 400 mm phantom, the dose coefficient of polyethylene (0.25) was greater than that of water (0.21) and PMMA (0.16). The influence of the head and body bow tie filters was also studied.
For the peripheral location, the dose coefficients when no bow tie filter was used were high (e.g., for a water phantom at 120 kVp at a diameter of 300 mm, the dose coefficient was 0.97). The body bow tie filter reduces this value to 0.62, and the head bow tie filter (which is not actually designed to be used for a 300 mm object) reduces the dose coefficient to 0.42. The dose in CT is delivered both by the absorption of primary and scattered x-ray photons, and at the center of a water cylinder the ratio of scatter to primary (SPR) doses increased steadily with cylinder diameter. For water, a 120 kVp spectrum and a cylinder diameter of 200 mm, the SPR was 4, and this value grew to 9 for a diameter of 350 mm and to over 16 for a 500-mm-diam cylinder. A freely available spreadsheet was developed to allow the computation of radiation dose as a function of object diameter (10-500 mm), composition (water, polyethylene, PMMA), and beam energy (10-140 keV, 40-140 kVp)
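Evaluating polyenergetic spectra as a weighted average of the monoenergetic results, as done above, is a one-liner; since the abstract does not state whether photon fluence or energy fluence is the weight, the sketch takes arbitrary weights:

```python
import numpy as np

def spectrum_weighted_dose(weights, mono_dose_coeffs):
    """Polyenergetic dose coefficient as a weighted average of
    monoenergetic dose coefficients; the weights (e.g. a spectrum binned
    on the same energy grid) need not be normalized."""
    w = np.asarray(weights, dtype=float)
    d = np.asarray(mono_dose_coeffs, dtype=float)
    return float(np.sum(w * d) / np.sum(w))
```

This is how a table of monoenergetic results from 5 to 140 keV yields dose coefficients for any 40-140 kVp spectrum, as in the accompanying spreadsheet.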
Ali, Imad; Ahmad, Salahuddin
2013-10-01
To compare the doses calculated using the BrainLAB pencil beam (PB) and Monte Carlo (MC) algorithms for tumors located in various sites including the lung, and to evaluate the quality assurance procedures required for verifying the accuracy of dose calculation. The dose-calculation accuracy of PB and MC was also assessed quantitatively by measurement, using an ionization chamber and Gafchromic films placed in solid-water and heterogeneous phantoms. The dose was calculated using the PB convolution and MC algorithms in the iPlan treatment planning system from BrainLAB. The dose calculation was performed on the patients' computed tomography images with lesions in various treatment sites, including 5 lung, 5 prostate, 4 brain, 2 head-and-neck, and 2 paraspinal cases. A combination of conventional, conformal, and intensity-modulated radiation therapy plans was used in the dose calculation. The leaf sequences from intensity-modulated radiation therapy plans or beam shapes from conformal plans, the monitor units, and the other planning parameters calculated by PB were identical for calculating dose with MC. Heterogeneity correction was considered in both PB and MC dose calculations. Dose-volume parameters such as V95 (volume covered by 95% of the prescription dose), dose distributions, and gamma analysis were used to evaluate the doses calculated by PB and MC. The doses measured by ionization chamber and EBT Gafchromic film in solid-water and heterogeneous phantoms were used to quantitatively assess the accuracy of the doses calculated by PB and MC. The dose-volume histograms and dose distributions calculated by PB and MC in the brain, prostate, paraspinal, and head-and-neck cases were in good agreement with one another (within 5%) and provided acceptable planning target volume coverage. However, dose distributions of the patients with lung cancer showed large discrepancies.
For a plan optimized with PB, the dose coverage appeared clinically acceptable, whereas in reality MC showed a systematic lack of dose coverage. The dose calculated by PB for lung tumors was overestimated by up to 40%. An interesting observation was that, despite large discrepancies in dose-volume histogram coverage of the planning target volume between PB and MC, the point doses at the isocenter (center of the lesions) calculated by both algorithms were within 7% even for lung cases. The dose distributions measured with EBT Gafchromic films in heterogeneous phantoms showed large discrepancies, nearly 15% lower than PB at interfaces between heterogeneous media, and these lower measured doses were in agreement with those of MC. The doses (V95) calculated by MC and PB agreed within 5% for treatment sites with small tissue heterogeneities, such as the prostate, brain, head-and-neck, and paraspinal tumors. Considerable discrepancies, up to 40%, were observed in the dose-volume coverage between MC and PB for lung tumors, which may affect clinical outcomes. The discrepancies between MC and PB increased for 15 MV compared with 6 MV, indicating the importance of implementing accurate clinical treatment planning algorithms such as MC. The comparison of point doses is not representative of the discrepancies in dose coverage and might be misleading in evaluating the accuracy of dose calculation between PB and MC. Thus, the clinical quality assurance procedures required to verify the accuracy of dose calculation using PB and MC need to consider measurements of 2- and 3-dimensional dose distributions, rather than a single point measurement, using heterogeneous phantoms instead of homogeneous water-equivalent phantoms.
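The V95 metric used for the comparisons above is a simple voxel fraction over the target; a minimal sketch assuming equal-volume voxels:

```python
import numpy as np

def v95(dose_voxels, prescription_dose):
    """V95: fraction of the target volume receiving at least 95% of the
    prescription dose (voxels assumed to have equal volume)."""
    d = np.asarray(dose_voxels, dtype=float)
    return float(np.mean(d >= 0.95 * prescription_dose))
```

Comparing `v95` of the PB and MC dose grids over the same target volume captures the coverage discrepancy that a single isocenter point dose misses.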