Broader source: Energy.gov [DOE]
Variance Fact Sheet. A variance is an exception, granted by the Department of Energy (DOE) to a contractor, to compliance with some part of a safety and health standard.
Moster, Benjamin P.; Rix, Hans-Walter [Max-Planck-Institut fuer Astronomie, Koenigstuhl 17, 69117 Heidelberg (Germany); Somerville, Rachel S. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Newman, Jeffrey A., E-mail: moster@mpia.de, E-mail: rix@mpia.de, E-mail: somerville@stsci.edu, E-mail: janewman@pitt.edu [Department of Physics and Astronomy, University of Pittsburgh, 3941 O'Hara Street, Pittsburgh, PA 15260 (United States)
2011-04-20
Deep pencil-beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance': the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m_*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m_* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m_* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate-mass galaxies, cosmic variance is less serious.
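The linear-regime recipe in this abstract (relative cosmic variance of a galaxy sample = galaxy bias × dark-matter cosmic variance) can be sketched in code. The two fitting functions below are hypothetical stand-ins with roughly the right qualitative behaviour; the paper's actual fitting coefficients and tabulated values are not reproduced here.

```python
# Sketch of the linear-regime recipe: sigma_galaxy = bias * sigma_dm.
# Both fitting functions are HYPOTHETICAL placeholders -- the paper
# provides the real fitting function, coefficients, and tables.

def sigma_dm(mean_z, delta_z, area_deg2):
    """Hypothetical dark-matter relative cosmic variance for a survey field.
    Variance shrinks for wider fields and thicker redshift bins."""
    return 0.06 / (area_deg2 ** 0.5 * (delta_z / 0.5) ** 0.5 * (mean_z / 2.0))

def galaxy_bias(mean_z, log_mstar):
    """Hypothetical bias: more massive galaxies cluster more strongly."""
    return 1.0 + 0.5 * (log_mstar - 10.0) + 0.3 * mean_z

def sigma_galaxy(mean_z, delta_z, area_deg2, log_mstar):
    """Linear-regime galaxy cosmic variance: bias times dark-matter variance."""
    return galaxy_bias(mean_z, log_mstar) * sigma_dm(mean_z, delta_z, area_deg2)

# A massive sample in a small field has larger cosmic variance than an
# intermediate-mass sample in a wide field, mirroring the GOODS/COSMOS trend.
small_massive = sigma_galaxy(2.0, 0.5, 0.05, 11.0)
wide_intermediate = sigma_galaxy(2.0, 0.5, 2.0, 10.0)
```

The point of the sketch is only the multiplicative structure of the recipe; the paper's tabulated values or software tool should be used for real survey design.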
Nuclear Material Variance Calculation
Energy Science and Technology Software Center (OSTI)
1995-01-01
MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet that significantly reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM). The user is required to enter information into one of four data tables depending on the type of term in the materials balance (MB) equation. The four data tables correspond to input transfers, output transfers, and two types of inventory terms: one for nondestructive assay (NDA) measurements and one for measurements made by chemical analysis. Each data entry must contain an identification number and a short description, as well as values for the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements during an accounting period. The user must also specify the type of error model (additive or multiplicative) associated with each measurement, and possible correlations between transfer terms. Predefined spreadsheet macros are used to perform the variance and covariance calculations for each term based on the corresponding set of entries. MAVARIC has been used for sensitivity studies of chemical separation facilities, fuel processing and fabrication facilities, and gas centrifuge and laser isotope enrichment facilities.
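The variance propagation MAVARIC automates can be sketched as follows. The term values, standard deviations, and error-model assignments below are illustrative assumptions, and correlations between transfer terms (which MAVARIC also handles) are omitted for brevity.

```python
# Sketch of materials-balance variance propagation.  For a term with measured
# value x, an additive error model contributes sigma**2 directly, while a
# multiplicative (relative) error model contributes (x * sigma)**2.
# All numbers below are illustrative, not from any real facility.

def term_variance(value, sigma, model):
    """Variance contribution of one materials-balance term."""
    if model == "additive":
        return sigma ** 2
    if model == "multiplicative":
        return (value * sigma) ** 2
    raise ValueError("model must be 'additive' or 'multiplicative'")

# (value in kg SNM, error std. dev., error model) for one accounting period
terms = [
    (12.0, 0.05, "multiplicative"),  # input transfer
    (11.5, 0.05, "multiplicative"),  # output transfer
    (3.0, 0.10, "additive"),         # beginning inventory (NDA)
    (3.2, 0.02, "additive"),         # ending inventory (chemical analysis)
]

# With uncorrelated terms the materials-balance variance is the plain sum;
# correlated transfers would add covariance terms as well.
mb_variance = sum(term_variance(v, s, m) for v, s, m in terms)
mb_sigma = mb_variance ** 0.5   # detection sensitivity scales with this
```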
Cosmology without cosmic variance
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Bernstein, Gary M.; Cai, Yan -Chuan
2011-10-01
The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over a larger volume at high redshift, especially as surveys begin to cover most of the available sky.
Broader source: Energy.gov [DOE]
Approval of a Permanent Variance Regarding Static Magnetic Fields at Brookhaven National Laboratory (Variance 1021)
U.S. Energy Information Administration (EIA) Indexed Site
File 1: Summary File (cb86f01.csv)
Item  Description                    Name      Position  Format
      Building identifier            BLDGID3   1-5
      Adjusted weight                ADJWT3    7-14
      Variance stratum               STRATUM3  16-17
      Pair member                    PAIR3     19-19
      Census region                  REGION3   21-21     $REGION.
      Census division                CENDIV3   23-23     $CENDIV.
      Metropolitan statistical area  MSA3      25-25     $MSA.
      Climate zone                   CLIMATE3  27-27     $CLIMAT.
B-1   Square footage                 SQFT3     29-35     COMMA14.
B-2   Square footage                 SQFTC3    37-38     $SQFTC.
U.S. Energy Information Administration (EIA) Indexed Site
File 2: Building Activity (cb86f02.csv)
Item  Description                  Name      Position  Format
      Building identifier          BLDGID3   1-5
      Adjusted weight              ADJWT3    7-14
      Variance stratum             STRATUM3  16-17
      Pair member                  PAIR3     19-19
      Census region                REGION3   21-21     $REGION.
      Census division              CENDIV3   23-23     $CENDIV.
B-2   Square footage               SQFTC3    25-26     $SQFTC.
B-3   Any residential use          RESUSE3   28-28     $YESNO.
B-4   Percent residential          RESPC3    30-30     $RESPC.
      Principal building activity  PBA3      32-
U.S. Energy Information Administration (EIA) Indexed Site
File 3: Operating Hours (cb86f03.csv)
Item  Description                  Name      Position  Format
      Building identifier          BLDGID3   1-5
      Adjusted weight              ADJWT3    7-14
      Variance stratum             STRATUM3  16-17
      Pair member                  PAIR3     19-19
      Census region                REGION3   21-21     $REGION.
      Census division              CENDIV3   23-23     $CENDIV.
B-2   Square footage               SQFTC3    25-26     $SQFTC.
      Principal building activity  PBA3      28-29     $ACTIVTY.
      Regular operating hours      REGHRS3   31-31     $YESNO.
C-5   Monday thru Friday opening
U.S. Energy Information Administration (EIA) Indexed Site
File 4: Building Shell, Equipment, Energy Audits, and "Other" Conservation Features (cb86f04.csv)
Item  Description                  Name      Position  Format
      Building identifier          BLDGID3   1-5
      Adjusted weight              ADJWT3    7-14
      Variance stratum             STRATUM3  16-17
      Pair member                  PAIR3     19-19
      Census region                REGION3   21-21     $REGION.
      Census division              CENDIV3   23-23     $CENDIV.
B-2   Square footage               SQFTC3    25-26     $SQFTC.
      Principal building activity  PBA3      28-29     $ACTIVTY.
D-2   Year
U.S. Energy Information Administration (EIA) Indexed Site
File 5: End Uses of Major Energy Sources (cb86f05.csv)
Item  Description                      Name      Position  Format
      Building identifier              BLDGID3   1-5
      Adjusted weight                  ADJWT3    7-14
      Variance stratum                 STRATUM3  16-17
      Pair member                      PAIR3     19-19
      Census region                    REGION3   21-21     $REGION.
      Census division                  CENDIV3   23-23     $CENDIV.
B-2   Square footage                   SQFTC3    25-26     $SQFTC.
      Principal building activity      PBA3      28-29     $ACTIVTY.
D-2   Year construction was completed  YRCONC3   31-32     $YRCONC.
U.S. Energy Information Administration (EIA) Indexed Site
File 6: End Uses of Minor Energy Sources (cb86f06.csv)
Item  Description                      Name      Position  Format
      Building identifier              BLDGID3   1-5
      Adjusted weight                  ADJWT3    7-14
      Variance stratum                 STRATUM3  16-17
      Pair member                      PAIR3     19-19
      Census region                    REGION3   21-21     $REGION.
      Census division                  CENDIV3   23-23     $CENDIV.
B-2   Square footage                   SQFTC3    25-26     $SQFTC.
      Principal building activity      PBA3      28-29     $ACTIVTY.
D-2   Year construction was completed  YRCONC3   31-32     $YRCONC.
U.S. Energy Information Administration (EIA) Indexed Site
File 7: HVAC, Lighting, and Building Shell Conservation Features (cb86f07.csv)
Item  Description                  Name      Position  Format
      Building identifier          BLDGID3   1-5
      Adjusted weight              ADJWT3    7-14
      Variance stratum             STRATUM3  16-17
      Pair member                  PAIR3     19-19
      Census region                REGION3   21-21     $REGION.
      Census division              CENDIV3   23-23     $CENDIV.
B-2   Square footage               SQFTC3    25-26     $SQFTC.
      Principal building activity  PBA3      28-29     $ACTIVTY.
D-2   Year construction was completed
U.S. Energy Information Administration (EIA) Indexed Site
File 9: Natural Gas and Fuel Oil (cb86f09.csv)
Item  Description                      Name      Position  Format
      Building identifier              BLDGID3   1-5
      Adjusted weight                  ADJWT3    7-14
      Variance stratum                 STRATUM3  16-17
      Pair member                      PAIR3     19-19
      Census region                    REGION3   21-21     $REGION.
      Census division                  CENDIV3   23-23     $CENDIV.
B-2   Square footage                   SQFTC3    25-26     $SQFTC.
      Principal building activity      PBA3      28-29     $ACTIVTY.
D-2   Year construction was completed  YRCONC3   31-32     $YRCONC.
      Electricity
U.S. Energy Information Administration (EIA) Indexed Site
File 10: District Steam and Hot Water (cb86f10.csv)
Item  Description                      Name      Position  Format
      Building identifier              BLDGID3   1-5
      Adjusted weight                  ADJWT3    7-14
      Variance stratum                 STRATUM3  16-17
      Pair member                      PAIR3     19-19
      Census region                    REGION3   21-21     $REGION.
      Census division                  CENDIV3   23-23     $CENDIV.
B-2   Square footage                   SQFTC3    25-26     $SQFTC.
      Principal building activity      PBA3      28-29     $ACTIVTY.
D-2   Year construction was completed  YRCONC3   31-32     $YRCONC.
U.S. Energy Information Administration (EIA) Indexed Site
File 11: Propane and District Chilled Water (cb86f11.csv)
Item  Description                      Name      Position  Format
      Building identifier              BLDGID3   1-5
      Adjusted weight                  ADJWT3    7-14
      Variance stratum                 STRATUM3  16-17
      Pair member                      PAIR3     19-19
      Census region                    REGION3   21-21     $REGION.
      Census division                  CENDIV3   23-23     $CENDIV.
B-2   Square footage                   SQFTC3    25-26     $SQFTC.
      Principal building activity      PBA3      28-29     $ACTIVTY.
D-2   Year construction was completed  YRCONC3   31-32     $YRCONC.
U.S. Energy Information Administration (EIA) Indexed Site
File 12: Imputation Flags for Summary Data, Building Activity, Operating Hours, Shell and Equipment (cb86f12.csv)
Item  Description                  Name      Position  Format
      Building identifier          BLDGID3   1-5
      Adjusted weight              ADJWT3    7-14
      Variance stratum             STRATUM3  16-17
      Pair member                  PAIR3     19-19
      Census region                REGION3   21-21     $REGION.
      Census division              CENDIV3   23-23     $CENDIV.
B-2   Square footage               SQFTC3    25-26     $SQFTC.
      Principal building activity  PBA3      28-29     $ACTIVTY.
D-2
U.S. Energy Information Administration (EIA) Indexed Site
File 13: Imputation Flags for Energy Audits, "Other" Conservation Features, and End Uses (cb86f13.csv)
Item  Description                  Name      Position  Format
      Building identifier          BLDGID3   1-5
      Adjusted weight              ADJWT3    7-14
      Variance stratum             STRATUM3  16-17
      Pair member                  PAIR3     19-19
      Census region                REGION3   21-21     $REGION.
      Census division              CENDIV3   23-23     $CENDIV.
B-2   Square footage               SQFTC3    25-26     $SQFTC.
      Principal building activity  PBA3      28-29     $ACTIVTY.
D-2   Year
U.S. Energy Information Administration (EIA) Indexed Site
File 14: Imputation Flags for HVAC, Lighting and Shell Conservation Features (cb86f14.csv)
Item  Description                  Name      Position  Format
      Building identifier          BLDGID3   1-5
      Adjusted weight              ADJWT3    7-14
      Variance stratum             STRATUM3  16-17
      Pair member                  PAIR3     19-19
      Census region                REGION3   21-21     $REGION.
      Census division              CENDIV3   23-23     $CENDIV.
B-2   Square footage               SQFTC3    25-26     $SQFTC.
      Principal building activity  PBA3      28-29     $ACTIVTY.
D-2   Year construction was
The Theory of Variances in Equilibrium Reconstruction
Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren
2008-01-14
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.
Hawaii Application for Community Noise Variance (DOH Form) |...
Application for Community Noise Variance. Organization: State of Hawaii Department of Health. Published 07/2013. DOI not provided.
A Hybrid Variance Reduction Method Based on Gaussian Process...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
to accelerate the convergence of Monte Carlo (MC) simulation. Hybrid deterministic-MC methods [1-3] have recently been developed to achieve the goal of global variance...
Hawaii Variance from Pollution Control Permit Packet (Appendix...
Variance from Pollution Control Permit Packet (Appendix S-13). OpenEI Reference Library. Permitting/Regulatory Guidance - Supplemental...
Hawaii Guide for Filing Community Noise Variance Applications...
State of Hawaii. Guide for Filing Community Noise Variance Applications. 4 p. Guide/Handbook. Retrieved from "http://en.openei.org/w/index.php?title=HawaiiGu..."
Variance control in weak-value measurement pointers
Parks, A. D.; Gray, J. E.
2011-07-15
The variance of an arbitrary pointer observable is considered for the general case in which a complex weak value is measured using a complex-valued pointer state. For the typical cases where the pointer observable is either position or momentum, the expression for the pointer's post-measurement variance contains a term proportional to the product of the weak value's imaginary part with the rate of change of the initial pointer state's third central moment of position (when position is the observable), or with the initial pointer state's third central moment of momentum (when momentum is the observable), evaluated just prior to the measurement interaction. These terms provide a means for controlling pointer position and momentum variance, and they identify control conditions which, when satisfied, can yield variances that are smaller after the measurement than they were before it. Measurement sensitivities useful for estimating weak-value measurement accuracies are also briefly discussed.
A Clock Synchronization Strategy for Minimizing Clock Variance at Runtime in High-end Computing Environments (Conference)
Office of Scientific and Technical Information (OSTI)
We present a new software-based clock synchronization scheme designed to provide high-precision time agreement among distributed memory nodes. The technique is designed
ARM - Publications: Science Team Meeting Documents: Variance similarity in shallow cumulus topped mixed layers
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Neggers, Roel (ECMWF); Stevens, Bjorn (Department of Atmospheric and Oceanic Sciences, UCLA); Neelin, David (Department of Atmospheric and Oceanic Sciences, UCLA)
Thermodynamic variance similarity in shallow cumulus topped mixed layers is studied using large-eddy simulation (LES) results. The simulations are based on a range of different shallow cumulus cases, including marine steady-state cumulus as well
Smoothing method aids gas-inventory variance trending
Mason, R.G.
1992-03-23
This paper reports on a method for determining gas-storage inventory and variance in a natural-gas storage field that uses the equations developed to determine gas-in-place in a production field. The calculations use acquired data for shut-in pressures, reservoir pore volume, and storage gas properties. These calculations are then graphed and trends are developed. Evaluating trends in inventory variance can be enhanced by use of a technique, described here, that smooths the peaks and valleys of an inventory-variance curve. Calculations using the acquired data determine inventory for a storage field whose drive mechanism is gas expansion (that is, volumetric). When used for a dry gas, condensate, or gas-condensate reservoir, the formulas require no further modification. Inventory in depleted oil fields can be determined in this same manner as well. Some additional calculations, however, must be made to assess the influence of oil production on the gas-storage process.
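The smoothing technique is described only qualitatively above; a centred moving average is one simple way to damp the peaks and valleys of an inventory-variance curve so the underlying trend is easier to read. The data series and window width below are made up for illustration.

```python
# Sketch of smoothing a noisy inventory-variance series with a centred
# moving average.  Endpoints average over the neighbours that exist.

def moving_average(series, window=3):
    """Centred moving average; window should be odd."""
    smoothed = []
    half = window // 2
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        smoothed.append(sum(series[lo:hi]) / (hi - lo))
    return smoothed

# Hypothetical monthly inventory variance, in percent of stored gas.
variance_pct = [1.2, -0.8, 2.1, -1.5, 1.8, -0.4, 1.1]
trend = moving_average(variance_pct)   # peaks and valleys are damped
```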
Reduction of Emission Variance by Intelligent Air Path Control | Department of Energy
Broader source: Energy.gov (indexed) [DOE]
This poster describes an air path control concept which minimizes NOx and PM emission variance while having the ability to run reliably with many different sensor configurations. PDF: p-17_nanjundaswamy.pdf. More Documents & Publications: Further Improvement of Conventional Diesel NOx Aftertreatment Concepts as Pathway for SULEV; Future Directions in Engines and Fuels; A Novel Approach in Determining Oil Dilution Level on a DPF Equipped Vehicle as a Result of Regeneration
Reduced Variance for Material Sources in Implicit Monte Carlo
Urbatsch, Todd J.
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
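The proposed modification can be sketched schematically: below a user-specified temperature cutoff, the small material source is applied deterministically to the material state instead of being sampled as particles. The function signature and numbers are hypothetical; a real IMC cell update involves the full material energy balance.

```python
# Schematic of the modified IMC source treatment described in the abstract.
# Cold cells (below the cutoff) take a deterministic update of the small
# material source and emit no particles, eliminating the sampling variance;
# hot cells hand the source to the usual Monte Carlo sampling.

def update_cell(temperature, source_energy, cutoff, sample_particles):
    """Return (deterministically deposited energy, particles emitted)."""
    if temperature < cutoff:
        # Deterministic update: deposit the whole small source, emit nothing.
        return source_energy, 0
    # Otherwise sample the source into Monte Carlo particles as usual.
    return 0.0, sample_particles(source_energy)

# Cold cell with a tiny preheat-like source: no particles, no variance.
deposited, n = update_cell(0.1, 1e-6, cutoff=0.5,
                           sample_particles=lambda e: max(1, int(e * 1e7)))
```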
Fringe biasing: A variance reduction technique for optically thick meshes
Smedley-Stevenson, R. P.
2013-07-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
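A minimal sketch of the stratified split follows, assuming a fixed energy fraction in the fringe and a chosen particle allocation (both illustrative choices; selecting the fringe width and particle proportion is exactly what the paper's analysis is meant to guide).

```python
# Sketch of fringe biasing as stratified sampling: the cell's thermal
# emission is split into a fringe stratum (near the boundaries) and an
# interior stratum, with the particle budget biased toward the fringe.
# Per-particle weights are set so the scheme remains unbiased.

def allocate_particles(total_particles, fringe_energy_frac, fringe_particle_frac):
    """Return (count, weight) pairs for the fringe and interior strata."""
    n_fringe = round(total_particles * fringe_particle_frac)
    n_interior = total_particles - n_fringe
    # weight = (stratum energy share) / (stratum particle share), so that
    # count * weight reproduces each stratum's share of the emitted energy.
    w_fringe = fringe_energy_frac / (n_fringe / total_particles)
    w_interior = (1 - fringe_energy_frac) / (n_interior / total_particles)
    return (n_fringe, w_fringe), (n_interior, w_interior)

# Illustrative split: 20% of the energy is in the fringe, but 80% of the
# particles are placed there, where they can contribute to cell-to-cell exchange.
(fn, fw), (inn, iw) = allocate_particles(1000, fringe_energy_frac=0.2,
                                         fringe_particle_frac=0.8)
```

The energy bookkeeping check is that fn*fw + inn*iw equals the total emitted energy (here normalised to the particle budget), which is what makes the biasing variance-reducing rather than biasing the answer.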
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
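The 'long history method' can be sketched as a cap on weight-window splitting. The cap value below is an illustrative assumption, not the rule used in the CCFE adaptation of MCNP.

```python
# Schematic of dynamically relaxing ('de-optimising') a weight window:
# a particle arriving with weight far above the window's upper bound would
# normally be split into weight/ww_upper daughters, producing a long history.
# Capping the split keeps any one process from stalling the parallel job.

def split_count(weight, ww_upper, max_split=10):
    """Number of daughters after weight-window splitting with a cap."""
    n = int(weight / ww_upper)
    if n > max_split:
        # Relax the window: cap the split.  Daughter weights would be
        # rescaled to weight / n so the expected total weight is unbiased.
        n = max_split
    return max(1, n)

n_normal = split_count(2.5, 1.0)    # modest deviation: small split
n_capped = split_count(1e6, 1.0)    # extreme deviation: capped split
```

The cap reduces variance-reduction performance for that history, which is the trade the abstract describes being offset by the gain in parallel efficiency.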
Development of a treatability variance guidance document for US DOE mixed-waste streams
Scheuer, N.; Spikula, R.; Harms, T. (Environmental Guidance Div.); Triplett, M.B.
1990-03-01
In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a treatability variance guidance document was prepared. The guidance manual is for use by DOE facilities and operations offices. The manual was prepared as part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA LDRs. A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. The manual, Guidance For Obtaining Variance From the Treatment Standards of the RCRA Land Disposal Restrictions (1), leads the reader through the process of evaluating whether a variance from the treatment standard is a viable approach and through the data-gathering and data-evaluation processes required to develop a petition requesting a variance. The DOE review and coordination process is also described, and model language for use in petitions for DOE radioactive mixed waste (RMW) is provided. The guidance manual focuses on RMW streams; however, the manual is also applicable to nonmixed, hazardous waste streams. 4 refs.
Verification of the history-score moment equations for weight-window variance reduction
Solomon, Clell J; Sood, Avneet; Booth, Thomas E; Shultis, J. Kenneth
2010-12-06
The history-score moment equations that describe the moments of a Monte Carlo score distribution have been extended to weight-window variance reduction. The resulting equations have been solved deterministically to calculate the population variance of the Monte Carlo score distribution for a single tally. Results for one- and two-dimensional one-group problems are presented that predict the population variances to less than 1% deviation from the Monte Carlo results for one-dimensional problems and between 1-2% for two-dimensional problems.
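The quantity being verified above is the population variance of the per-history score distribution. A toy tally makes the concept concrete: here each history scores 1 with probability p and 0 otherwise, so the analytic population variance is p*(1-p), playing the role that the deterministic moment-equation solution plays in the abstract. The tally model and numbers are illustrative only.

```python
# Toy illustration: empirical population variance of a Monte Carlo
# score distribution, compared against its analytic value for a
# Bernoulli-scoring tally (variance p*(1-p)).
import random

def empirical_score_variance(p, histories, seed=1):
    """Population variance of per-history scores for a Bernoulli tally."""
    rng = random.Random(seed)
    scores = [1.0 if rng.random() < p else 0.0 for _ in range(histories)]
    mean = sum(scores) / histories
    return sum((s - mean) ** 2 for s in scores) / histories

p = 0.3
analytic = p * (1 - p)                          # population variance, 0.21
estimate = empirical_score_variance(p, 100_000)  # should agree closely
```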
Technical criteria for an Area-Of-Review variance methodology. Appendix B
1994-01-01
This guidance was developed by the Underground Injection Practices Research Foundation to assist Underground Injection Control Directors in implementing proposed changes to EPA's Class 2 Injection Well Regulations that will apply the Area-Of-Review (AOR) requirement to previously exempt wells. EPA plans to propose amendments this year consistent with the recommendations in the March 23, 1992, Final Document developed by the Class 2 Injection Well Advisory Committee, that will require AORs to be performed on all Class 2 injection wells except those covered by previously conducted AORs and those located in areas that have been granted a variance. Variances may be granted if the Director determines that there is a sufficiently low risk of upward fluid movement from the injection zone that could endanger underground sources of drinking water. This guidance contains suggested technical criteria for identifying areas eligible for an AOR variance. The suggested criteria were developed in consultation with interested States and representatives from EPA, industry and the academic community. Directors will have six months from the promulgation of the new regulations to provide EPA with either a schedule for performing AORs within five years on all wells not covered by previously conducted AORs, or notice of their intent to establish a variance program. It is believed this document will provide valuable assistance to Directors who are considering whether to establish a variance program or have begun early preparations to develop such a program.
An area-of-review variance study of the East Texas field
Warner, D.L.; Koederitz, L.F.; Laudon, R.C.; Dunn-Norman, S.
1996-12-31
The East Texas oil field, discovered in 1930 and located principally in Gregg and Rusk Counties, is the largest oil field in the conterminous United States. Nearly 33,000 wells are known to have been drilled in the field. The field has been undergoing water injection for pressure maintenance since 1938. As of today, 104 Class II salt-water disposal wells, operated by the East Texas Salt Water Disposal Company, are returning all produced water to the Woodbine producing reservoir. About 69 of the presently existing wells have not been subjected to US Environmental Protection Agency Area-of-Review (AOR) requirements. A study has been carried out of opportunities for variance from AORs for these existing wells and for new wells that will be constructed in the future. The study has been based upon a variance methodology developed at the University of Missouri-Rolla under sponsorship of the American Petroleum Institute and in coordination with the Ground Water Protection Council. The principal technical objective of the study was to determine if reservoir pressure in the Woodbine producing reservoir is sufficiently low so that flow of salt-water from the Woodbine into the Carrizo-Wilcox ground water aquifer is precluded. The study has shown that the Woodbine reservoir is currently underpressured relative to the Carrizo-Wilcox and will remain so over the next 20 years. This information provides a logical basis for a variance for the field from performing AORs.
Evaluation of area of review variance opportunities for the East Texas field. Annual report
Warner, D.L.; Koederitz, L.F.; Laudon, R.C.; Dunn-Norman, S.
1995-05-01
The East Texas oil field, discovered in 1930 and located principally in Gregg and Rusk Counties, is the largest oil field in the conterminous United States. Nearly 33,000 wells are known to have been drilled in the field. The field has been undergoing water injection for pressure maintenance since 1938. As of today, 104 Class II salt-water disposal wells, operated by the East Texas Salt Water Disposal Company, are returning all produced water to the Woodbine producing reservoir. About 69 of the presently existing wells have not been subjected to U.S. Environmental Protection Agency Area-of-Review (AOR) requirements. A study has been carried out of opportunities for variance from AORs for these existing wells and for new wells that will be constructed in the future. The study has been based upon a variance methodology developed at the University of Missouri-Rolla under sponsorship of the American Petroleum Institute and in coordination with the Ground Water Protection Council. The principal technical objective of the study was to determine if reservoir pressure in the Woodbine producing reservoir is sufficiently low so that flow of salt-water from the Woodbine into the Carrizo-Wilcox ground water aquifer is precluded. The study has shown that the Woodbine reservoir is currently underpressured relative to the Carrizo-Wilcox and will remain so over the next 20 years. This information provides a logical basis for a variance for the field from performing AORs.
Scheuer, N.; Spikula, R.; Harms, T. (Environmental Guidance Div.); Triplett, M.B.
1990-02-01
In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a guidance manual was prepared. The guidance manual is for use by DOE facilities and operations offices in obtaining variances from the RCRA LDR treatment standards. The manual was prepared as part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA LDRs. The manual addresses treatability variances and equivalent treatment variances. A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. An equivalent treatment variance, granted by EPA, allows treatment of a restricted waste by a process that differs from the one specified in the standards but achieves a level of performance equivalent to the technology specified in the standard. 4 refs.
Waste Isolation Pilot Plant no-migration variance petition. Executive summary
Not Available
1990-12-31
Section 3004 of RCRA allows EPA to grant a variance from the land disposal restrictions when a demonstration can be made that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the disposal unit for as long as the waste remains hazardous. Specific requirements for making this demonstration are found in 40 CFR 268.6, and EPA has published a draft guidance document to assist petitioners in preparing a variance request. Throughout the course of preparing this petition, technical staff from DOE, EPA, and their contractors have met frequently to discuss and attempt to resolve issues specific to radioactive mixed waste and the WIPP facility. The DOE believes it meets or exceeds all requirements set forth for making a successful "no-migration" demonstration. The petition presents information under five general headings: (1) waste information; (2) site characterization; (3) facility information; (4) assessment of environmental impacts, including the results of waste mobility modeling; and (5) analysis of uncertainties. Additional background and supporting documentation is contained in the 15 appendices to the petition, as well as in an extensive addendum published in October 1989.
Broader source: Energy.gov [DOE]
Request for Concurrence on Three Temporary Variance Applications Regarding Fire Protection and Pressure Safety at the Oak Ridge National Laboratory
No-migration variance petition: Draft. Volume 4, Appendices DIF, GAS, GCR (Volume 1)
1995-05-31
The Department of Energy is responsible for the disposition of transuranic (TRU) waste generated by national defense-related activities. Approximately 2.6 million cubic feet of these wastes have been generated and are stored at various facilities across the country. The Waste Isolation Pilot Plant (WIPP) was sited and constructed to meet stringent disposal requirements. In order to permanently dispose of TRU waste, the DOE has elected to petition the US EPA for a variance from the Land Disposal Restrictions of RCRA. This document fulfills the reporting requirements for the petition. This report is volume 4 of the petition, which presents details about the transport characteristics across drum filter vents and polymer bags; gas generation reactions and rates during long-term WIPP operation; and geological characterization of the WIPP site.
Robertson, Brant E.; Stark, Dan P.; Ellis, Richard S.; Dunlop, James S.; McLure, Ross J.; McLeod, Derek
2014-12-01
Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from {approx}35% at redshift z {approx} 7 to {approx}65% at z {approx} 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.
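The relative cosmic variance discussed in these records is conventionally measured from counts-in-cells as the excess scatter in galaxy counts over the Poisson expectation, {sigma}{sup 2}{sub v} = (<N{sup 2}> - <N> - <N>{sup 2}) / <N>{sup 2}. A toy sketch of that estimator (the gamma density-fluctuation model and all numbers are illustrative assumptions, not survey data):

```python
import math, random

def relative_cosmic_variance(counts):
    """Counts-in-cells estimator: excess of the count variance over the
    Poisson expectation, (<N^2> - <N> - <N>^2) / <N>^2."""
    n = len(counts)
    mean = sum(counts) / n
    second = sum(c * c for c in counts) / n
    return (second - mean - mean * mean) / (mean * mean)

random.seed(1)
# Toy model: counts in many independent fields, N ~ Poisson(50 * x), where
# x is a unit-mean gamma-distributed density fluctuation whose variance
# (0.25) plays the role of sigma_v^2.  Poisson draws use a normal
# approximation, which is adequate for means this large.
sigma_v2_true = 0.25
counts = []
for _ in range(20000):
    x = random.gammavariate(1.0 / sigma_v2_true, sigma_v2_true)
    lam = 50.0 * x
    counts.append(max(0, round(random.gauss(lam, math.sqrt(lam)))))

est = relative_cosmic_variance(counts)
print(round(est, 2))  # recovers roughly 0.25
```

Subtracting the mean inside the estimator is what removes the pure shot-noise (Poisson) contribution, leaving only the clustering term that the surveys above call cosmic variance.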
Waste Isolation Pilot Plant No-Migration Variance Petition. Revision 1, Volume 1
Hunt, Arlen
1990-03-01
The purpose of the WIPP No-Migration Variance Petition is to demonstrate, according to the requirements of RCRA §3004(d) and 40 CFR §268.6, that to a reasonable degree of certainty, there will be no migration of hazardous constituents from the facility for as long as the wastes remain hazardous. The DOE submitted the petition to the EPA in March 1989. Upon completion of its initial review, the EPA provided to DOE a Notice of Deficiencies (NOD). DOE responded to the EPA's NOD and met with the EPA's reviewers of the petition several times during 1989. In August 1989, EPA requested that DOE submit significant additional information addressing a variety of topics including: waste characterization, ground water hydrology, geology and dissolution features, monitoring programs, the gas generation test program, and other aspects of the project. This additional information was provided to EPA in January 1990 when DOE submitted Revision 1 of the Addendum to the petition. For clarity and ease of review, this document includes all of these submittals, and the information has been updated where appropriate. This document is divided into the following sections: Introduction, 1.0; Facility Description, 2.0; Waste Description, 3.0; Site Characterization, 4.0; Environmental Impact Analysis, 5.0; Prediction and Assessment of Infrequent Events, 6.0; and References, 7.0.
Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.
2012-01-01
Does large-scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.
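The variance-reduction argument here is standard two-asset portfolio math: mixing in a generation source whose cost is nearly fixed and uncorrelated with fuel prices lowers the variance of total system cost. A minimal sketch (the standard deviations and zero correlation below are illustrative assumptions, not the report's data):

```python
import math

def cost_variance(w_wind, sd_wind, sd_gas, rho):
    """Variance of blended system cost for a wind share w_wind,
    with the remainder gas-fired: the two-asset portfolio formula."""
    w_g = 1.0 - w_wind
    return (w_wind ** 2 * sd_wind ** 2 + w_g ** 2 * sd_gas ** 2
            + 2.0 * w_wind * w_g * rho * sd_wind * sd_gas)

# Illustrative assumptions: wind cost is nearly fixed (small sd), gas cost
# is fuel-driven and volatile, and the two are essentially uncorrelated.
sd_wind, sd_gas, rho = 2.0, 15.0, 0.0
for w in (0.0, 0.2, 0.4):
    print(w, round(math.sqrt(cost_variance(w, sd_wind, sd_gas, rho)), 2))
```

With zero correlation, every increment of the low-volatility asset strictly reduces the cost standard deviation, which is the hedging effect the report quantifies for consumers.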
Pichugina, Yelena L.; Banta, Robert M.; Kelley, Neil D.; Jonkman, Bonnie J.; Tucker, Sara C.; Newsom, Rob K.; Brewer, W. A.
2008-08-01
Quantitative data on turbulence variables aloft--above the region of the atmosphere conveniently measured from towers--have been an important but difficult measurement need for advancing understanding and modeling of the stable boundary layer (SBL). Vertical profiles of streamwise velocity variances obtained from NOAA's High Resolution Doppler Lidar (HRDL), which have been shown to be numerically equivalent to turbulence kinetic energy (TKE) for stable conditions, are a measure of the turbulence in the SBL. In the present study, the mean horizontal wind component U and variance {sigma}{sup 2}{sub u} were computed from HRDL measurements of the line-of-sight (LOS) velocity using a technique described in Banta et al. (2002). The technique was tested on datasets obtained during the Lamar Low-Level Jet Project (LLLJP) carried out in early September 2003, near the town of Lamar in southeastern Colorado. This paper compares U with mean wind speed obtained from sodar and sonic anemometer measurements. It then describes several series of averaging tests that produced the best correlation between TKE calculated from sonic anemometer data at several tower levels and lidar measurements of horizontal velocity variance {sigma}{sup 2}{sub u}. The results show high correlation (0.71-0.97) of the mean U and average wind speed measured by sodar and in-situ instruments, independent of sampling strategies and averaging procedures. Comparison of estimates of variance, on the other hand, proved sensitive to both the spatial and temporal averaging techniques.
No-migration variance petition for the Waste Isolation Pilot Plant
Carnes, R.G.; Hart, J.S.; Knudtsen, K.
1990-01-01
The Waste Isolation Pilot Plant (WIPP) is a US Department of Energy (DOE) project to provide a research and development facility to demonstrate the safe disposal of radioactive waste resulting from US defense activities and programs. The DOE is developing the WIPP facility as a deep geologic repository in bedded salt for transuranic (TRU) waste currently stored at or generated by DOE defense installations. Approximately 60 percent of the wastes proposed to be emplaced in the WIPP are radioactive mixed wastes. Because such mixed wastes contain a hazardous chemical component, the WIPP is subject to requirements of the Resource Conservation and Recovery Act (RCRA). In 1984 Congress amended the RCRA with passage of the Hazardous and Solid Waste Amendments (HSWA), which established a stringent regulatory program to prohibit the land disposal of hazardous waste unless (1) the waste is treated to meet treatment standards or other requirements established by the Environmental Protection Agency (EPA) under §3004(n), or (2) the EPA determines that compliance with the land disposal restrictions is not required in order to protect human health and the environment. The DOE WIPP Project Office has prepared and submitted to the EPA a no-migration variance petition for the WIPP facility. The purpose of the petition is to demonstrate, according to the requirements of RCRA §3004(d) and 40 CFR §268.6, that to a reasonable degree of certainty, there will be no migration of hazardous constituents from the WIPP facility for as long as the wastes remain hazardous. This paper provides an overview of the petition and describes the EPA review process, including key issues that have emerged during the review. 5 refs.
Broader source: Energy.gov [DOE]
Memorandum Request for Concurrence on Three Temporary Variance Applications Regarding Fire Protection and Pressure Safety at the Oak Ridge National Laboratory
Broader source: Energy.gov [DOE]
CH2M WG Idaho, LLC, Request for Variance to Title 10 Code of Federal Regulations part 851, "Worker Safety and Health"
Broader source: Energy.gov [DOE]
Memorandum CH2M WG Idaho, LLC, Request for Variance to Title 10, Code of Federal Regulations Part 851, "Worker Safety and Health Program"
Broader source: Energy.gov [DOE]
Approval of a Permanent Variance Regarding Sprinklers and Fire Boundaries in Selected Areas of 221-H Canyon at the Savannah River Site
Broader source: Energy.gov [DOE]
Approval of a Permanent Variance Regarding Fire Safety in Selected Areas of 221-H Canyon at the Savannah River Site
Pichugina, Y. L.; Banta, R. M.; Kelley, N. D.; Jonkman, B. J.; Tucker, S. C.; Newsom, R. K.; Brewer, W. A.
2008-08-01
Quantitative data on turbulence variables aloft--above the region of the atmosphere conveniently measured from towers--have been an important but difficult measurement need for advancing understanding and modeling of the stable boundary layer (SBL). Vertical profiles of streamwise velocity variances obtained from NOAA's high-resolution Doppler lidar (HRDL), which have been shown to be approximately equal to turbulence kinetic energy (TKE) for stable conditions, are a measure of the turbulence in the SBL. In the present study, the mean horizontal wind component U and variance {sigma}{sup 2}{sub u} were computed from HRDL measurements of the line-of-sight (LOS) velocity using a method described by Banta et al., which uses an elevation (vertical slice) scanning technique. The method was tested on datasets obtained during the Lamar Low-Level Jet Project (LLLJP) carried out in early September 2003, near the town of Lamar in southeastern Colorado. This paper compares U with mean wind speed obtained from sodar and sonic anemometer measurements. The results for the mean U and mean wind speed measured by sodar and in situ instruments for all nights of LLLJP show high correlation (0.71-0.97), independent of sampling strategies and averaging procedures, and correlation coefficients consistently >0.9 for four high-wind nights, when the low-level jet speeds exceeded 15 m s{sup -1} at some time during the night. Comparison of estimates of variance, on the other hand, proved sensitive to both the spatial and temporal averaging parameters. Several series of averaging tests are described, to find the best correlation between TKE calculated from sonic anemometer data at several tower levels and lidar measurements of horizontal-velocity variance {sigma}{sup 2}{sub u}.
Because of the nonstationarity of the SBL data, the best results were obtained when the velocity data were first averaged over intervals of 1 min, and then further averaged over 3-15 consecutive 1-min intervals, with best results for the 10- and 15-min averaging periods. For these cases, correlation coefficients exceeded 0.9. As a part of the analysis, Eulerian integral time scales ({tau}) were estimated for the four high-wind nights. Time series of {tau} through each night indicated erratic behavior consistent with the nonstationarity. Histograms of {tau} showed a mode at 4-5 s, but frequent occurrences of larger {tau} values, mostly between 10 and 100 s.
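The two-stage averaging idea above can be sketched as follows: for nonstationary data, computing the turbulence variance about short block means and then averaging those block variances over a longer window removes the slow trend that would otherwise inflate a single long-window variance. A toy illustration with a synthetic series (the 1 Hz sampling rate, trend, and noise level are assumptions for illustration, not the HRDL processing code):

```python
import random, statistics

def block_variance(series, block_len):
    """Average of within-block variances: fluctuations are measured
    about each short-block mean, then the block variances are averaged."""
    blocks = [series[i:i + block_len] for i in range(0, len(series), block_len)]
    return statistics.fmean(statistics.pvariance(b) for b in blocks)

random.seed(7)
hz, minutes = 60, 10                  # assumed 1 Hz sampling, 10-min window
n = hz * minutes
# Synthetic LOS velocity: a slow mesoscale trend plus stationary turbulence
# with true variance 0.25 (sd 0.5).
trend = [5.0 + 0.005 * i for i in range(n)]
turb = [random.gauss(0.0, 0.5) for _ in range(n)]
u = [t + e for t, e in zip(trend, turb)]

raw = statistics.pvariance(u)         # the trend leaks into this estimate
two_stage = block_variance(u, hz)     # 1-min blocks, then averaged
print(round(raw, 3), round(two_stage, 3))
```

The single-window variance absorbs the trend, while the two-stage estimate stays near the true turbulence variance, which is why the 1-min-then-10/15-min scheme correlated best with the sonic anemometer TKE.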
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
2014-06-15
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10{sup 4} to 10{sup 6} times, each run with a different set of input parameters, randomly varied according to their assigned distributions. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential input) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations that result in extreme deviations of the result, as well as the input parameters for which uncertainty reduction is most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact.
The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
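The variance-based index described here, S_i = Var(E[Y|X_i]) / Var(Y), is commonly estimated with a Saltelli-style pick-and-freeze scheme. A minimal sketch on a toy linear-quadratic survival model (the α and β distributions, the dose, and the sample size are illustrative assumptions, not the paper's values):

```python
import math, random

def survival(alpha, beta, dose=2.0):
    """Linear-quadratic cell survival for a single fraction of `dose` Gy."""
    return math.exp(-(alpha * dose + beta * dose * dose))

def first_order_indices(n=20000, seed=3):
    """Saltelli pick-and-freeze estimate of S_i = Var(E[Y|X_i]) / Var(Y)."""
    rng = random.Random(seed)
    draw = lambda: (rng.gauss(0.10, 0.04), rng.gauss(0.05, 0.01))  # alpha, beta
    A = [draw() for _ in range(n)]
    B = [draw() for _ in range(n)]
    fA = [survival(*x) for x in A]
    fB = [survival(*x) for x in B]
    mu = sum(fA + fB) / (2 * n)
    var = sum((y - mu) ** 2 for y in fA + fB) / (2 * n)
    S = []
    for i in (0, 1):
        # AB_i: take input i from matrix B, the other input from matrix A
        fABi = [survival(*(B[j][k] if k == i else A[j][k] for k in (0, 1)))
                for j in range(n)]
        S.append(sum(fB[j] * (fABi[j] - fA[j]) for j in range(n)) / (n * var))
    return S

S_alpha, S_beta = first_order_indices()
print(round(S_alpha, 2), round(S_beta, 2))  # alpha's 40% spread dominates
```

Each run of the model uses an independently perturbed input set, exactly as the abstract describes; the pick-and-freeze pairing is simply an efficient way to turn those 10{sup 4}-10{sup 6} runs into the variance decomposition.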
Occupational Medicine Variance Request
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
SWS Variance Request Form | Department of Energy
This announcement contains information on the integration of multifamily content in the SWS Online Tool, and a How-To Webinar on August 27, 2013.
U.S. Energy Information Administration (EIA) Indexed Site
File 11: District Steam and Hot Water (CBECS89.A11)

Questionnaire item   Variable description              Variable name   Position   Format
CASEID               Building identifier               BLDGID4         1-5
                     Census region                     REGION4         7-7        $REGION.
                     Census division                   CENDIV4         9-9        $CENDIV.
B2                   Square footage                    SQFTC4          11-12      $SQFTC.
                     Principal building activity       PBA4            14-15      $ACTIVTY.
F3                   Year construction was completed   YRCONC4         17-18      $YRCONC.
                     Adjusted weight                   ADJWT4          20-27
                     Variance stratum                  STRATUM4        29-30
                     Pair indicator                    PAIR4           32-32
U.S. Energy Information Administration (EIA) Indexed Site
File 12: District Chilled Water (CBECS89.A12)

Questionnaire item   Variable description              Variable name   Position   Format
CASEID               Building identifier               BLDGID4         1-5
                     Census region                     REGION4         7-7        $REGION.
                     Census division                   CENDIV4         9-9        $CENDIV.
B2                   Square footage                    SQFTC4          11-12      $SQFTC.
                     Principal building activity       PBA4            14-15      $ACTIVTY.
F3                   Year construction was completed   YRCONC4         17-18      $YRCONC.
                     Adjusted weight                   ADJWT4          20-27
                     Variance stratum                  STRATUM4        29-30
                     Pair indicator                    PAIR4           32-32
U.S. Energy Information Administration (EIA) Indexed Site
File 9: Natural Gas (CBECS89.A09)

Questionnaire item   Variable description                Variable name   Position   Format
CASEID               Building identifier                 BLDGID4         1-5
                     Census region                       REGION4         7-7        $REGION.
                     Census division                     CENDIV4         9-9        $CENDIV.
B2                   Square footage                      SQFTC4          11-12      $SQFTC.
                     Principal building activity         PBA4            14-15      $ACTIVTY.
F3                   Year construction was completed     YRCONC4         17-18      $YRCONC.
P2                   Interruptible natural gas service   NGINTR4         20-20      $YESNO.
                     Adjusted weight                     ADJWT4          22-29
                     Variance stratum                    STRATUM4
Sample variance in weak lensing: How many simulations are required?
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Petri, Andrea; May, Morgan; Haiman, Zoltan
2016-03-24
Constraining cosmology using weak gravitational lensing consists of comparing a measured feature vector of dimension Nb with its simulated counterpart. An accurate estimate of the Nb × Nb feature covariance matrix C is essential to obtain accurate parameter confidence intervals. When C is measured from a set of simulations, an important question is how large this set should be. To answer this question, we construct different ensembles of Nr realizations of the shear field, using a common randomization procedure that recycles the outputs from a smaller number Ns ≤ Nr of independent ray-tracing N-body simulations. We study parameter confidence intervals as a function of (Ns, Nr) in the range 1 ≤ Ns ≤ 200 and 1 ≤ Nr ≲ 10{sup 5}. Previous work [S. Dodelson and M. D. Schneider, Phys. Rev. D 88, 063537 (2013)] has shown that Gaussian noise in the feature vectors (from which the covariance is estimated) leads, at quadratic order, to an O(1/Nr) degradation of the parameter confidence intervals. Using a variety of lensing features measured in our simulations, including shear-shear power spectra and peak counts, we show that cubic and quartic covariance fluctuations lead to additional O(1/N{sub r}{sup 2}) error degradation that is not negligible when Nr is only a factor of a few larger than Nb. We study the large Nr limit, and find that a single, 240 Mpc/h, 512{sup 3}-particle N-body simulation (Ns = 1) can be repeatedly recycled to produce as many as Nr = a few × 10{sup 4} shear maps whose power spectra and high-significance peak counts can be treated as statistically independent. Lastly, a small number of simulations (Ns = 1 or 2) is sufficient to forecast parameter confidence intervals at percent accuracy.
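The leading finite-Nr effect referenced here is well known in closed form: the inverse of a sample covariance estimated from Nr Gaussian realizations is biased high by the Hartlap factor (Nr - 1)/(Nr - Nb - 2). A toy check with Nb = 2 feature vectors (illustrative Gaussian features, not the paper's lensing statistics):

```python
import random

def sample_cov_2d(xs):
    """Unbiased 2x2 sample covariance of a list of (x, y) pairs."""
    n = len(xs)
    mx = sum(x for x, _ in xs) / n
    my = sum(y for _, y in xs) / n
    sxx = sum((x - mx) ** 2 for x, _ in xs) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in xs) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in xs) / (n - 1)
    return sxx, sxy, syy

random.seed(5)
Nr, Nb, trials = 20, 2, 4000
tr_inv = 0.0
for _ in range(trials):
    xs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(Nr)]
    a, b, c = sample_cov_2d(xs)       # true covariance is the identity
    det = a * c - b * b
    tr_inv += (a + c) / det           # trace of the 2x2 inverse
mean_tr_inv = tr_inv / trials
hartlap = (Nr - 1) / (Nr - Nb - 2)    # expected multiplicative bias
print(round(mean_tr_inv / Nb, 3), round(hartlap, 3))
```

The O(1/N{sub r}{sup 2}) terms the paper studies are the next corrections beyond this first-order bias, which is why they only matter when Nr is not much larger than Nb.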
EVMS Training Snippet: 5.4 PARSII Analysis: Variance Reports
Broader source: Energy.gov [DOE]
This EVMS Training Snippet, sponsored by the Office of Project Management (PM) is one in a series regarding PARS II Analysis reports. PARS II offers direct insight into EVM project data from the...
A Clock Synchronization Strategy for Minimizing Clock Variance...
Office of Scientific and Technical Information (OSTI)
DOE Contract Number: DE-AC05-00OR22725. Resource Type: Conference. Conference: SBAC PAD 2010, Rio de Janeiro, Brazil, October 27-30, 2010. Research Org: Oak Ridge ...
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M. (Cedar Crest, NM); Ma, Tian J. (Albuquerque, NM)
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
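The idea in this abstract is that a pixel should only count as changed when its raw difference exceeds an error budget dominated by the local spatial intensity gradient, so that camera jitter along strong edges is not flagged. A minimal sketch on a synthetic scene (the gradient-based error formula, noise floor, and threshold are illustrative assumptions, not the patented method's exact form):

```python
def spatial_error(ref, i, j, noise_floor=1.0):
    """Per-pixel error estimate: the largest neighbor difference (local
    intensity gradient) plus an assumed fixed noise floor."""
    h, w = len(ref), len(ref[0])
    grads = [abs(ref[i][j] - ref[y][x])
             for y, x in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
             if 0 <= y < h and 0 <= x < w]
    return max(grads) + noise_floor

def changed_pixels(ref, cur, k=3.0):
    """Flag pixels whose raw difference exceeds k times the error estimate."""
    return {(i, j)
            for i in range(len(ref)) for j in range(len(ref[0]))
            if abs(cur[i][j] - ref[i][j]) > k * spatial_error(ref, i, j)}

# Synthetic scene: a strong horizontal ramp (gradient of 10 per pixel).
ref = [[10.0 * j for j in range(8)] for _ in range(8)]
# Current frame: the whole scene jittered by 0.3 px along the ramp,
# plus one genuine change injected at pixel (4, 4).
cur = [[10.0 * (j + 0.3) for j in range(8)] for _ in range(8)]
cur[4][4] += 60.0

print(changed_pixels(ref, cur))  # only the injected change survives
```

The jitter produces a nonzero difference at every pixel, but because the error estimate scales with the gradient, only the injected change clears the threshold.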
Permits and Variances for Solar Panels, Calculation of Impervious...
Broader source: Energy.gov (indexed) [DOE]
construction, or stormwater may only include the foundation or base supporting the solar panel. The law generally applies statewide, including charter counties and Baltimore...
ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1
Mosher, Scott W.; Johnson, Seth R.; Bevill, Aaron M.; Ibrahim, Ahmad M.; Daily, Charles R.; Evans, Thomas M.; Wagner, John C.; Johnson, Jeffrey O.; Grove, Robert E.
2015-08-01
The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.
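The tally figure of merit being improved is the standard Monte Carlo FOM = 1/(R{sup 2}T), where R is the tally's relative error and T the run time. A toy illustration of the principle with importance sampling of a rare deep-penetration event (nothing here is ADVANTG's actual CADIS machinery; the cross sections and biasing parameter are illustrative):

```python
import math, random

def rel_err(scores):
    """Relative error R of the sample-mean tally estimate."""
    n = len(scores)
    m = sum(scores) / n
    var = sum((x - m) ** 2 for x in scores) / (n - 1)
    return math.sqrt(var / n) / m

def analog(n, rng, sigma=1.0, thickness=7.0):
    """Analog transport: score 1 when an exponential free path exceeds
    the slab thickness (true answer exp(-sigma * thickness))."""
    return [1.0 if rng.expovariate(sigma) > thickness else 0.0
            for _ in range(n)]

def importance_sampled(n, rng, sigma=1.0, thickness=7.0, sigma_b=0.1):
    """Biased transport: sample a stretched exponential (rate sigma_b)
    and carry the likelihood-ratio weight, so many more histories score."""
    out = []
    for _ in range(n):
        s = rng.expovariate(sigma_b)
        w = (sigma / sigma_b) * math.exp(-(sigma - sigma_b) * s)
        out.append(w if s > thickness else 0.0)
    return out

rng = random.Random(11)
n = 50000
r_analog = rel_err(analog(n, rng))
r_is = rel_err(importance_sampled(n, rng))
# Equal history counts stand in for equal run time T, so the FOM gain
# is roughly (r_analog / r_is) squared.
print(round(r_analog, 3), round(r_is, 3))
```

In the analog run only a tiny fraction of histories ever reach the tally region; the biased run pushes most histories there and compensates exactly with weights, which is the same variance-reduction principle ADVANTG automates for MCNP.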
A BASIS FOR MODIFYING THE TANK 12 COMPOSITE SAMPLING DESIGN
Shine, G.
2014-11-25
The SRR sampling campaign to obtain residual solids material from the Savannah River Site (SRS) Tank Farm Tank 12 primary vessel resulted in obtaining appreciable material in all 6 planned source samples from the mound strata but only in 5 of the 6 planned source samples from the floor stratum. Consequently, the design of the compositing scheme presented in the Tank 12 Sampling and Analysis Plan, Pavletich (2014a), must be revised. Analytical Development of SRNL statistically evaluated the sampling uncertainty associated with using various compositing arrays and splitting one or more samples for compositing. The variance of the simple mean of composite sample concentrations is a reasonable standard to investigate the impact of the following sampling options.

Composite Sample Design Option (a). Assign only 1 source sample from the floor stratum and 1 source sample from each of the mound strata to each of the composite samples. Each source sample contributes material to only 1 composite sample. Two source samples from the floor stratum would not be used.

Composite Sample Design Option (b). Assign 2 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that one source sample from the floor must be used twice, with 2 composite samples sharing material from this particular source sample. All five source samples from the floor would be used.

Composite Sample Design Option (c). Assign 3 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that several of the source samples from the floor stratum must be assigned to more than one composite sample. All 5 source samples from the floor would be used.

Using fewer than 12 source samples will increase the sampling variability over that of the Basic Composite Sample Design, Pavletich (2013).
Considering the impact to the variance of the simple mean of the composite sample concentrations, the recommendation is to construct each sample composite using four or five source samples. Although the variance using 5 source samples per composite sample (Composite Sample Design Option (c)) was slightly less than the variance using 4 source samples per composite sample (Composite Sample Design Option (b)), there is no practical difference between those variances. This does not consider that the measurement error variance, which is the same for all composite sample design options considered in this report, will further dilute any differences. Composite Sample Design Option (a) had the largest variance for the mean concentration in the three composite samples and should be avoided. These results are consistent with Pavletich (2014b) which utilizes a low elevation and a high elevation mound source sample and two floor source samples for each composite sample. Utilizing the four source samples per composite design, Pavletich (2014b) utilizes aliquots of Floor Sample 4 for two composite samples.
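The option comparison can be reproduced in miniature: treat each composite as the average of its assigned source samples and compare the variance of the mean of the three composite concentrations. The layout below (two mound strata, five floor samples, a common concentration distribution) is a simplified assumption for illustration, not the report's calculation:

```python
import random, statistics

def mean_of_composites(design, values):
    """Each composite is the average of its assigned source samples;
    the estimator is the simple mean of the composite concentrations."""
    comps = [sum(values[s] for s in c) / len(c) for c in design]
    return sum(comps) / len(comps)

# Source samples: floor f0..f4 (indices 0-4), mound m0..m5 (indices 5-10).
option_a = [[0, 5, 6], [1, 7, 8], [2, 9, 10]]           # 3 sources each,
                                                        # two floor unused
option_b = [[0, 1, 5, 6], [2, 3, 7, 8], [4, 0, 9, 10]]  # 4 each, f0 shared

random.seed(2)
results = {}
for name, design in (("a", option_a), ("b", option_b)):
    draws = [mean_of_composites(design,
                                [random.gauss(100.0, 10.0) for _ in range(11)])
             for _ in range(40000)]
    results[name] = statistics.pvariance(draws)
print({k: round(v, 2) for k, v in results.items()})
```

Even though option (b) reuses one floor sample (which induces covariance between two composites), spreading the mean over more source samples still lowers the variance, consistent with the report's conclusion that option (a) is the worst choice.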
Illinois Waiver letter on variances from UL ruling on E85 dispensers
Microsoft PowerPoint - Snippet 5.4 PARS II Analysis-Variance...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
In PARS II under the SSS Reports selection on the left, there are folders to the right. ... warning signs of future problems. To drill down, we need to view other PARS II reports. ...
No-migration variance petition. Appendices C--J: Volume 5, Revision 1
Not Available
1990-03-01
Volume V contains the appendices for: closure and post-closure plans; RCRA ground water monitoring waiver; Waste Isolation Division Quality Program Manual; water quality sampling plan; WIPP Environmental Procedures Manual; sample handling and laboratory procedures; data analysis; and Annual Site Environmental Monitoring Report for the Waste Isolation Pilot Plant.
No-migration variance petition. Appendices A--B: Volume 2, Revision 1
Not Available
1990-03-01
Volume II contains Appendix A, the emergency plan, and Appendix B, the waste analysis plan. The Waste Isolation Pilot Plant (WIPP) Emergency Plan and Procedures (WP 12-9, Rev. 5, 1989) provides an organized plan of action for dealing with emergencies at the WIPP. A contingency plan is included which is in compliance with 40 CFR Part 265, Subpart D. The waste analysis plan provides a description of the chemical and physical characteristics of the wastes to be emplaced in the WIPP underground facility. A detailed discussion of the WIPP Waste Acceptance Criteria and the rationale for its established units is also included.
Orthogonal control of expression mean and variance by epigenetic features at different genomic loci
Dey, Siddharth S.; Foley, Jonathan E.; Limsirichai, Prajit; Schaffer, David V.; Arkin, Adam P.
2015-05-05
While gene expression noise has been shown to drive dramatic phenotypic variations, the molecular basis for this variability in mammalian systems is not well understood. Gene expression has been shown to be regulated by promoter architecture and the associated chromatin environment. However, the exact contribution of these two factors in regulating expression noise has not been explored. Using a dual-reporter lentiviral model system, we deconvolved the influence of the promoter sequence to systematically study the contribution of the chromatin environment at different genomic locations in regulating expression noise. By integrating a large-scale analysis to quantify mRNA levels by smFISH and protein levels by flow cytometry in single cells, we found that mean expression and noise are uncorrelated across genomic locations. Furthermore, we showed that this independence could be explained by the orthogonal control of mean expression by the transcript burst size and noise by the burst frequency. Finally, we showed that genomic locations displaying higher expression noise are associated with more repressed chromatin, thereby indicating the contribution of the chromatin environment in regulating expression noise.
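The burst-size/burst-frequency decomposition above can be checked with a toy Gillespie simulation of bursty expression (geometric bursts, first-order decay; all parameter values are illustrative, not the paper's measurements): doubling the burst frequency scales the mean while leaving the Fano factor (noise) nearly unchanged, whereas the burst size drives both.

```python
import random

def simulate(freq, burst_mean, decay=1.0, T=10000.0, seed=9):
    """Gillespie simulation of bursty expression: bursts arrive at rate
    `freq`, each adds a geometric number of mRNAs (mean ~ burst_mean),
    and each molecule decays independently at rate `decay`.
    Returns the time-averaged mean copy number and Fano factor."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    s0 = s1 = s2 = 0.0                       # time-weighted moments
    while t < T:
        rate = freq + n * decay
        dt = rng.expovariate(rate)
        s0 += dt; s1 += n * dt; s2 += n * n * dt
        t += dt
        if rng.random() < freq / rate:
            # geometric burst on {1, 2, ...} via the floor of an exponential
            n += int(rng.expovariate(1.0 / burst_mean)) + 1
        else:
            n -= 1
    mean = s1 / s0
    fano = (s2 / s0 - mean * mean) / mean    # variance / mean
    return mean, fano

m1, f1 = simulate(freq=1.0, burst_mean=5.0)
m2, f2 = simulate(freq=2.0, burst_mean=5.0)   # double the burst frequency
m3, f3 = simulate(freq=1.0, burst_mean=10.0)  # double the burst size
print(round(m2 / m1, 2), round(f2 / f1, 2))   # mean ~doubles, Fano ~flat
print(round(m3 / m1, 2), round(f3 / f1, 2))   # mean and Fano both grow
```

This is the standard mechanistic reading of the paper's finding: burst frequency tunes the mean without touching the noise strength, so chromatin environments that modulate frequency and size separately can set mean and noise orthogonally.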
Waste Isolation Pilot Plant No-migration variance petition. Addendum: Volume 7, Revision 1
Not Available
1990-03-01
This report describes various aspects of the Waste Isolation Pilot Plant (WIPP) including design data, waste characterization, dissolution features, ground water hydrology, natural resources, monitoring, general geology, and the gas generation/test program.
No-migration variance petition. Appendix B, Attachments E--Q: Volume 4, Revision 1
Not Available
1990-03-01
Volume IV contains the following attachments: TRU mixed waste characterization database; hazardous constituents of Rocky Flats transuranic waste; summary of waste components in TRU waste sampling program at INEL; total volatile organic compounds (VOC) analyses at Rocky Flats Plant; total metals analyses from Rocky Flats Plant; results of toxicity characteristic leaching procedure (TCLP) analyses; results of extraction procedure (EP) toxicity data analyses; summary of headspace gas analysis in Rocky Flats Plant (RFP) -- sampling program FY 1988; waste drum gas generation -- sampling program at Rocky Flats Plant during FY 1988; TRU waste sampling program -- volume one; TRU waste sampling program -- volume two; summary of headspace gas analyses in TRU waste sampling program; and summary of volatile organic compounds (VOC) analyses in TRU waste sampling program.
Fischer, N.T.
1990-03-01
This document reports data collected as part of the Ecological Monitoring Program (EMP) at the Waste Isolation Pilot Plant near Carlsbad, New Mexico, for Calendar Year 1987. Also included are data from the last quarter (October through December) of 1986. This report divides data collection activities into two parts. Part A covers general environmental monitoring which includes meteorology, aerial photography, air quality monitoring, water quality monitoring, and wildlife population surveillance. Part B focuses on the special studies being performed to evaluate the impacts of salt dispersal from the site on the surrounding ecosystem. The fourth year of salt impact monitoring was completed in 1987. These studies involve the monitoring of soil chemistry, soil microbiota, and vegetation in permanent study plots. None of the findings indicate that the WIPP project is adversely impacting environmental quality at the site. As in 1986, breeding bird censuses completed this year indicate changes in the local bird fauna associated with the WIPP site. The decline in small mammal populations noted in the 1986 census is still evident in the 1987 data; however, populations are showing signs of recovery. There is no indication that this decline is related to WIPP activities. Rather, the evidence indicates that natural population fluctuations may be common in this ecosystem. The salt impact studies continue to reveal some short-range transport of salt dust from the saltpiles. This material accumulates at or near the soil surface during the dry seasons in areas near the saltpiles, but is flushed deeper into the soil during the rainy season. Microbial activity does not appear to be affected by this salt importation. Vegetation coverage and density data from 1987 also do not show any detrimental effect associated with aerial dispersal of salt.
Broader source: Energy.gov [DOE]
Letter to Joseph N. Herndon from Bruce M. Diamond, Assistant General Counsel for Environment, dated September 19, 2008.
Worseck, Gabor; Xavier Prochaska, J. [Department of Astronomy and Astrophysics, UCO/Lick Observatory, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); McQuinn, Matthew [Department of Astronomy, University of California, 601 Campbell Hall, Berkeley, CA 94720 (United States); Dall'Aglio, Aldo; Wisotzki, Lutz [Astrophysikalisches Institut Potsdam, An der Sternwarte 16, 14482 Potsdam (Germany); Fechner, Cora; Richter, Philipp [Institut fuer Physik und Astronomie, Universitaet Potsdam, Karl-Liebknecht-Str. 24/25, 14476 Potsdam (Germany); Hennawi, Joseph F. [Max-Planck-Institut fuer Astronomie, Koenigstuhl 17, 69117 Heidelberg (Germany); Reimers, Dieter, E-mail: gworseck@ucolick.org [Hamburger Sternwarte, Universitaet Hamburg, Gojenbergsweg 112, 21029 Hamburg (Germany)
2011-06-01
We report on the detection of strongly varying intergalactic He II absorption in HST/COS spectra of two z_em ≈ 3 quasars. From our homogeneous analysis of the He II absorption in these and three archival sightlines, we find a marked increase in the mean He II effective optical depth from ⟨τ_eff,HeII⟩ ≈ 1 at z ≈ 2.3 to ⟨τ_eff,HeII⟩ ≳ 5 at z ≈ 3.2, but with a large scatter of 2 ≲ τ_eff,HeII ≲ 5 at 2.7 < z < 3 on scales of ~10 proper Mpc. This scatter is primarily due to fluctuations in the He II fraction and the He II-ionizing background, rather than density variations, which are probed by the coeval H I forest. Semianalytic models of He II absorption require a strong decrease in the He II-ionizing background to explain the strong increase of the absorption at z ≳ 2.7, probably indicating that He II reionization was incomplete at z_reion ≳ 2.7. Likewise, recent three-dimensional numerical simulations of He II reionization agree qualitatively with the observed trend only if He II reionization completes at z_reion ≈ 2.7 or even below, as suggested by a large τ_eff,HeII ≳ 3 in two of our five sightlines at z < 2.8. By doubling the sample size at 2.7 ≲ z ≲ 3, our newly discovered He II sightlines probe for the first time the diversity of the second epoch of reionization, when helium became fully ionized.
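As a note on the quantity quoted above: the effective optical depth follows from the mean transmitted flux via the standard definition τ_eff = −ln⟨F⟩, so the jump from τ_eff ≈ 1 to ≳ 5 corresponds to the mean He II transmission dropping from roughly 37% to below 1%. A minimal sketch of that conversion (the definition is standard; the flux values are illustrative):

```python
import math

def tau_eff(mean_flux):
    """Effective optical depth from the mean transmitted flux: tau_eff = -ln<F>."""
    return -math.log(mean_flux)

# A mean transmission of ~37% corresponds to tau_eff ~ 1;
# tau_eff ~ 5 already implies transmission below 1%.
print(tau_eff(0.368))   # ~1.0
print(tau_eff(0.0067))  # ~5.0
```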
Y-12's Training and Technology instructor's story - Terry...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
stories about things that took place at TAT. There were people from every race, every religion, and every social stratum there, so you can imagine. Most of them, however, can't be...
Method for in situ heating of hydrocarbonaceous formations
Little, William E.; McLendon, Thomas R.
1987-01-01
A method for extracting valuable constituents from underground hydrocarbonaceous deposits such as heavy crude tar sands and oil shale is disclosed. Initially, a stratum containing a rich deposit is hydraulically fractured to form a horizontally extending fracture plane. A conducting liquid and proppant are then injected into the fracture plane to form a conducting plane. Electrical excitations are then introduced into the stratum adjacent the conducting plane to retort the rich stratum along the conducting plane. The valuable constituents from the stratum adjacent the conducting plane are then recovered. Subsequently, the remainder of the deposit is also combustion retorted to further recover valuable constituents from the deposit. Various R.F. heating systems are also disclosed for use in the present invention.
U.S. Energy Information Administration (EIA) Indexed Site
File02: (file02cb83.csv) BLDGID2 Building ID STR402 Half-sample stratum PAIR402 Half-sample pair number SQFTC2 Square footage SQFTC17. BCWM2C Principal activity BCWOM25. ...
EVENT TREE ANALYSIS AT THE SAVANNAH RIVER SITE: A CASE HISTORY
Williams, R
2009-05-25
At the Savannah River Site (SRS), a Department of Energy (DOE) installation in west-central South Carolina, there is a unique geologic stratum at depth that has the potential to cause surface settlement resulting from a seismic event. In the past, the stratum in question has been remediated via pressure grouting; however, the benefits of remediation have always been debatable. Recently, the SRS has attempted to frame the issue in terms of risk via an event tree or logic tree analysis. This paper describes that analysis, including the input data required.
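The event tree idea can be sketched as follows. The structure and probabilities below are hypothetical, not the SRS input data; an end state's probability is the product of the conditional branch probabilities along its path (a full cross-product is used for simplicity, whereas a real tree would prune branches conditional on earlier outcomes):

```python
from itertools import product

# Hypothetical three-level event tree: each top event branches into
# conditional outcomes; multiplying branch probabilities along a path
# gives the probability of that end state.
tree = {
    "seismic_event":   {"yes": 1e-3, "no": 0.999},
    "stratum_settles": {"yes": 0.05, "no": 0.95},
    "grouting_holds":  {"yes": 0.90, "no": 0.10},
}

end_states = {}
for path in product(*(branches.items() for branches in tree.values())):
    label = "/".join(outcome for outcome, _ in path)
    p = 1.0
    for _, branch_p in path:
        p *= branch_p
    end_states[label] = p

# Settlement occurs and the grouting fails, given the tree above:
p_fail = end_states["yes/yes/no"]
print(f"{p_fail:.1e}")  # 5.0e-06
```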
RAPID/Roadmap/18-HI-d | Open Energy Information
Variance from Pollution Control (18-HI-d) A variance is required to discharge water pollutants in excess of applicable...
Determination of Dusty Particle Charge Taking into Account Ion Drag
Ramazanov, T. S.; Dosbolayev, M. K.; Jumabekov, A. N.; Amangaliyeva, R. Zh.; Orazbayev, S. A.; Petrov, O. F.; Antipov, S. N.
2008-09-07
This work is devoted to the experimental estimation of the charge of a dust particle that levitates in the stratum of a dc glow discharge. The particle charge is determined from the balance between the ion drag, gravitational, and electric forces. The electric force is obtained from the axial distribution of the light intensity of the strata.
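The force balance can be sketched as follows. All numbers are assumed for illustration (grain size, field strength, and ion drag are not taken from the paper); when the ion drag acts downward together with gravity, the levitating grain satisfies QE = mg + F_ion:

```python
import math

# Hedged sketch of the force-balance charge estimate (hypothetical numbers).
E_CHARGE = 1.602e-19          # C, elementary charge

g = 9.81                      # m/s^2
radius = 1.5e-6               # m, grain radius (assumed)
density = 1500.0              # kg/m^3, melamine-formaldehyde-like grain (assumed)
E_field = 2.0e3               # V/m, e.g., inferred from the axial profile (assumed)
F_ion = 5.0e-14               # N, ion drag force (assumed)

mass = density * (4.0 / 3.0) * math.pi * radius**3
Q = (mass * g + F_ion) / E_field      # balance: Q*E = m*g + F_ion
print(Q / E_CHARGE)                   # grain charge in elementary charges (~800 here)
```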
Impact of federal regulations on the small coal mine in Appalachia. Final report
Davis, B.; Ferrell, R.
1980-11-01
This report contains the results of a study of the total costs of compliance with federal regulations for coal mines in Eastern Kentucky. The mines were stratified by annual tonnage and employment, and mail and personal-interview surveys were conducted for each stratum. The survey results are used to assess the competitive position of small concerns and to form a basis for necessary modifications to the regulations.
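The estimator behind such a stratified design can be sketched with the textbook formulas (the strata and survey results below are hypothetical, not the study's data): the population mean is the stratum-size-weighted mean, and its variance combines per-stratum sampling variances with finite population corrections.

```python
# Each tuple: (N_h: mines in stratum, n_h: sampled, mean cost $/ton, sample variance)
strata = [
    (300, 30, 4.2, 1.1),
    (120, 20, 3.1, 0.7),
    (40, 10, 2.5, 0.4),
]

N = sum(N_h for N_h, _, _, _ in strata)
# Stratified estimate of the population mean: sum of W_h * ybar_h.
mean = sum(N_h / N * ybar for N_h, _, ybar, _ in strata)
# Variance of the stratified mean, with finite population correction per stratum.
var = sum((N_h / N) ** 2 * (1 - n_h / N_h) * s2 / n_h
          for N_h, n_h, _, s2 in strata)
print(round(mean, 3), round(var, 5))
```

The variance term shows why stratification pays off: sampling effort can be allocated to the strata (here, mine-size classes) that contribute most to it.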
Burke, Timothy Patrick; Kiedrowski, Brian; Martin, William R.; Brown, Forrest B.
2015-08-27
KDEs show potential for reducing variance in global solutions (flux, reaction rates) when compared to histogram solutions.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
We are your source for reliable, up-to-date news and information; our scientists and engineers can provide technical insights on our innovations for a secure nation. Artist's rendition of a cross section of skin layers (stratum corneum, epidermis, and dermis) showing topical application of an ionic liquid for combating a skin-borne bacterial infection. The ionic liquid can be formulated with...
Module 6- Metrics, Performance Measurements and Forecasting
Broader source: Energy.gov [DOE]
This module reviews metrics such as cost and schedule variance along with cost and schedule performance indices.
Gauging apparatus and method, particularly for controlling mining by a mining machine
Campbell, J.A.; Moynihan, D.J.
1980-04-29
An apparatus and method are claimed for controlling the mining by a mining machine of a seam of material (e.g., coal) overlying or underlying a stratum of undesired material (e.g., clay), to reduce the quantity of undesired material mined with the desired material; the machine comprises a cutter movable up and down and adapted to cut down into a seam of coal on being lowered. The control apparatus comprises a first electrical signal, constituting a slow-down signal, automatically operated to signal when the cutter has cut down into a seam of desired material generally to a predetermined depth short of the interface between the seam and the underlying stratum, for slowing down the cutting rate as the cutter approaches the interface; and a second electrical signal, automatically operated subsequent to the first signal, for signalling when the cutter has cut down through the seam to the interface, for stopping the cutting operation, thereby avoiding mining undesired material with the desired material. Similar signalling may be provided on an upward cut to avoid cutting into the overlying stratum.
Parameters Covariance in Neutron Time of Flight Analysis Explicit Formulae
Odyniec, M.; Blair, J.
2014-12-01
We present here a method that estimates the parameter variances in a parametric model for neutron time of flight (NToF). The analytical formulae for the parameter variances, obtained independently of the calculation of parameter values from measured data, express the variances in terms of the choice, settings, and placement of the detector and the oscilloscope. Consequently, the method can serve as a tool in planning a measurement setup.
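The general point, that parameter variances can follow from the measurement design alone, can be illustrated with ordinary linear least squares (a generic sketch, not the paper's NToF formulae): for y = Xθ + ε with noise variance σ², the covariance Cov(θ̂) = σ²(XᵀX)⁻¹ depends only on the design matrix, not on the measured y values.

```python
import numpy as np

# Design: sample times (e.g., oscilloscope settings) for a baseline + slope fit.
t = np.linspace(0.0, 1.0, 50)
X = np.column_stack([np.ones_like(t), t])   # columns: intercept, slope
sigma = 0.05                                # per-sample noise level (assumed)

# Parameter covariance from the design alone; no measured y enters.
cov = sigma**2 * np.linalg.inv(X.T @ X)
print(np.sqrt(np.diag(cov)))                # standard errors of the two parameters
```

Changing the sample spacing or count changes `cov` directly, which is exactly what makes such formulae useful for planning a setup before taking data.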
Clock Synchronization in High-end Computing Environments: A Strategy for Minimizing Clock Variance at Runtime
Office of Scientific and Technical Information (OSTI)
We present a new software-based clock synchronization scheme that provides high precision time agreement among distributed memory nodes. The technique is...
Geological reasons for rapid water encroachment in wells at Sutorma oil field
Arkhipov, S.V.; Dvorak, S.V.; Sonich, V.P.; Nikolayeva, Ye.V.
1987-12-01
The Sutorma oil field on the northern Surgut dome is one of the new fields in West Siberia. It came into production in 1982, but already by 1983 it was found that the water contents in the fluids produced were much greater than the design values. The adverse effects are particularly pronounced for the main reservoir at the deposit, the BS/sub 10//sup 2/ stratum. Later, similar problems occurred at other fields in the Noyarbr and Purpey regions. It is therefore particularly important to elucidate the geological reasons for water encroachment.
U.S. Energy Information Administration (EIA) Indexed Site
File02: (file02_cb83.csv) BLDGID2 Building ID STR402 Half-sample stratum PAIR402 Half-sample pair number SQFTC2 Square footage $SQFTC17. BCWM2C Principal activity $BCWOM25. YRCONC2C Year constructed $YRCONC15 REGION2 Census region $REGION13 XSECWT2 Cross-sectional weight ELSUPL2N Supplier reported electricity use $YESNO15. NGSUPL2N Supplier reported natural gas use $YESNO15. FKSUPL2N Supplier reported fuel oil use $YESNO15. STSUPL2N Supplier reported steam use $YESNO15. PRSUPL2N Supplier
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
... Steps 7 - 12 Page 12 OTBOTS Example Using SPA * Eliminate Both Cost and Schedule Variances * Least preferred method * May require retroactive changes to in-process ...
Sub-daily Statistical Downscaling of Meteorological Variables...
Office of Scientific and Technical Information (OSTI)
and variance that was accurate within 1% for all variables except atmospheric pressure, wind speed, and precipitation. Correlations between downscaled output and the expected...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
X Cum CPI 3 Period Moving Average 06302013 07312013 08312013 09302013 1031... format * Advantage of this report is Excel Sort feature to view variances from ...
DOE-SEEO-OE0000223 OFFICE OF ELECTRICITY ...
Office of Scientific and Technical Information (OSTI)
... Actual Finish Date Milestone Variance Narrative 1 High capacity cell packaged - no ... The results validated the master control module's ability to monitor SOC and control the ...
ARM - Publications: Science Team Meeting Documents
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
vertical and horizontal components, variance and vertical flux of the prognostic thermodynamic variables as well as momentum flux are also presented. The most interesting aspect...
National Nuclear Security Administration (NNSA)
... AEC U.S. Atomic Energy Commission AIP Agreement in Principle AIRFA American Indian Religious Freedom Act ANOVA Analysis of Variance APCD Air Pollution Control Division ARLSORD Air ...
Mercury In Soils Of The Long Valley, California, Geothermal System...
Additional samples were collected in an analysis of variance design to evaluate natural variability in soil composition with sampling interval distance. The primary...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
... Variance Proposals, and FY 2013 Waste Sampling and ... the B Reactor as the world's first production reactor. ... Statement - On January 22, 2015, RL approved the Final ...
Fuel cell stack monitoring and system control
Keskula, Donald H.; Doan, Tien M.; Clingerman, Bruce J.
2004-02-17
A control method for monitoring a fuel cell stack in a fuel cell system in which the actual voltage and actual current from the fuel cell stack are monitored. A preestablished relationship between voltage and current over the operating range of the fuel cell is established. A variance value between the actual measured voltage and the expected voltage magnitude for a given actual measured current is calculated and compared with a predetermined allowable variance. An output is generated if the calculated variance value exceeds the predetermined variance. The predetermined voltage-current for the fuel cell is symbolized as a polarization curve at given operating conditions of the fuel cell.
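The monitoring logic can be sketched as follows; the polarization-curve points and the allowable variance below are hypothetical, not taken from the patent:

```python
import numpy as np

# Hypothetical polarization curve for a stack at fixed operating conditions:
# expected stack voltage as a function of stack current.
currents_A = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
voltages_V = np.array([95.0, 82.0, 74.0, 67.0, 58.0])

def check_stack(i_meas, v_meas, allowable=3.0):
    """Return (variance, fault_flag) for one measured voltage/current sample."""
    v_expected = np.interp(i_meas, currents_A, voltages_V)
    variance = v_meas - v_expected          # measured minus expected voltage
    return variance, abs(variance) > allowable

print(check_stack(120.0, 70.0))   # near the curve: no fault
print(check_stack(120.0, 64.0))   # well below the curve: fault flagged
```

In the patent's scheme, additional polarization curves for other pressures and temperatures would replace the single table used here.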
Biomass productivity technology advancement towards a commercially...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
... that confer improved performance and robustness across abiotic, biotic, and nutritional gradients, decrease the annual variance of algal biomass productivity. 6 High-level ...
Gasoline and Diesel Fuel Update (EIA)
Gas Company, for example, on Tuesday, October 21, issued a system overrun limitation (SOL) that allows for penalties on variances between flows and nominations. The SOL is in...
Microsoft PowerPoint - ARMST2007_mp.ppt
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
height resolved vertical velocity, and turbulence derived from the horizontal variance of radar Doppler velocity Method 1) Identify regions containing cloud liquid (see E. Luke et...
Nevada State Environmental Commission | Open Energy Information
variance requests in selected program areas administered by NDEP, as well as ratify air pollution enforcement actions (settlement agreements). Nevada State Environmental...
Broader source: Energy.gov (indexed) [DOE]
CV, VAC, & EAC Trends - Management Reserve (MR) Log - Performance Index trends (WBS Level) - Variance Analysis Cumulative (WBS Level) * EAC Reasonableness - CPI vs. TCPI (PMB ...
Microsoft Word - RIN05110265_06010295_DVP.doc
Office of Legacy Management (LM)
... Field Variance: None. Quality Control Sample Cross Reference: Ticket Number Sample ID ... They were using a generator that was DOE property to power the pump used for water ...
A Post-Monte-Carlo Sensitivity Analysis Code
Energy Science and Technology Software Center (OSTI)
2000-04-04
SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e., with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
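The ranking idea can be sketched as follows (an illustrative stand-in, not SATOOL's actual algorithm): for a near-linear model, the squared correlation between each input and the output approximates that input's share of the output variance, and ranking these shares identifies the sensitive variables.

```python
import numpy as np

# Hypothetical near-linear "model" driven by three independent inputs.
rng = np.random.default_rng(1)
n = 100_000
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(0.0, 1.0, n)
x3 = rng.normal(0.0, 1.0, n)
y = 10.0 * x1 + 2.0 * x2 + 0.1 * x3     # model output

# Squared correlation ~ fraction of output variance explained by each input.
shares = {name: np.corrcoef(x, y)[0, 1] ** 2
          for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]}
ranked = sorted(shares, key=shares.get, reverse=True)
print(ranked, shares)   # x1 dominates; refining x1 reduces output variance most
```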
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
spectrum (exponent -5/3). 2. A WP model, in which the upper boundary is an independent Gaussian process (mean and variance) with exponential correlation function (correlation...
"Lidar Investigations of Aerosol, Cloud, and Boundary Layer Properties...
Office of Scientific and Technical Information (OSTI)
the turbulence within the convective boundary layer and how the turbulence statistics (e.g., variance, skewness) are correlated with larger-scale variables predicted by models. ...
U.S. Energy Information Administration (EIA) Indexed Site
Proceedings from the ACEEE Summer Study on Energy Efficiency in Buildings, 1992 17. Error terms are heteroscedastic when the variance of the error terms is not constant but,...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Research Community Climate Model (CCM2). The CSU RAMS cloud microphysics parameterization predicts mass, parameterized in terms of the grid cell mean and subgrid variance of...
Notes on the Lumped Backward Master Equation for the Neutron...
Office of Scientific and Technical Information (OSTI)
with a knowledge of low order statistical averages (variance, correlation), provides an incomplete and very unsatisfactory description of the state of the neutron population. ...
Evaluation of three lidar scanning strategies for turbulence measurements
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Newman, J. F.; Klein, P. M.; Wharton, S.; Sathe, A.; Bonin, T. A.; Chilson, P. B.; Muschinski, A.
2015-11-24
Several errors occur when a traditional Doppler-beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, a WindCube v2 pulsed lidar, and a ZephIR continuous-wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60% under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20% at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
McKee, Rodney A.; Walker, Frederick J.
2003-11-25
A crystalline oxide-on-semiconductor structure and a process for constructing the structure involves a substrate of silicon, germanium or a silicon-germanium alloy and an epitaxial thin film overlying the surface of the substrate wherein the thin film consists of a first epitaxial stratum of single atomic plane layers of an alkaline earth oxide designated generally as (AO).sub.n and a second stratum of single unit cell layers of an oxide material designated as (A'BO.sub.3).sub.m so that the multilayer film arranged upon the substrate surface is designated (AO).sub.n (A'BO.sub.3).sub.m wherein n is an integer repeat of single atomic plane layers of the alkaline earth oxide AO and m is an integer repeat of single unit cell layers of the A'BO.sub.3 oxide material. Within the multilayer film, the values of n and m have been selected to provide the structure with a desired electrical structure at the substrate/thin film interface that can be optimized to control band offset and alignment.
Hanford Site performance summary -- EM funded programs, July 1995
Schultz, E.A.
1995-07-01
Performance data for July 1995 reflects a 4% unfavorable schedule variance and is an improvement over June 1995. The majority of the behind-schedule condition is attributed to EM-30 (Office of Waste Management). The majority of the EM-30 schedule variance is associated with the Tank Waste Remediation System (TWRS) Program. The TWRS schedule variance is attributed to the delay in obtaining key decision 0 (KD-0) for Project W-314, "Tank Farm Restoration and Safe Operations," and the Multi-Function Waste Tank Facility (MWTF) workscope still being a part of the baseline. Baseline Change Requests (BCRs) are in process rebaselining Project W-314 and deleting the MWTF from the TWRS baseline. Once the BCRs are approved and implemented, the overall schedule variance will be reduced to $15.0 million. Seventy-seven enforceable agreement milestones were scheduled FYTD. Seventy-one (92%) of the seventy-seven were completed on or ahead of schedule, two were completed late, and four are delinquent. Performance data reflects a continued significant favorable cost variance of $124.3 million (10%). The cost variance is attributed to process improvements/efficiencies, elimination of low-value work, and workforce reductions, and is expected to continue for the remainder of this fiscal year. A portion of the cost variance is attributed to a delay in billings, which should self-correct by fiscal year-end.
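The variance figures in such reports follow the standard earned-value definitions: schedule variance SV = BCWP − BCWS and cost variance CV = BCWP − ACWP. A minimal sketch, with hypothetical dollar amounts chosen only to mirror the reported 4% schedule and roughly 10% ($124.3M) cost variances:

```python
def evm_variances(bcws, bcwp, acwp):
    """Schedule/cost variances and percentages from earned-value quantities."""
    sv = bcwp - bcws            # schedule variance ($): earned minus planned
    cv = bcwp - acwp            # cost variance ($): earned minus actual cost
    return sv, cv, sv / bcws, cv / bcwp

# Hypothetical fiscal-year-to-date figures in $M: planned, earned, actual.
sv, cv, sv_pct, cv_pct = evm_variances(bcws=1300.0, bcwp=1248.0, acwp=1123.7)
print(round(sv, 1), round(cv, 1), round(sv_pct, 3), round(cv_pct, 3))
# → -52.0 124.3 -0.04 0.1
```

A negative SV means behind schedule; a positive CV means under cost, matching the "unfavorable schedule, favorable cost" reading of the report.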
Detailed Studies of Hydrocarbon Radicals: C2H Dissociation
Wittig, Curt
2014-10-06
A novel experimental technique was examined whose goal was the ejection of radical species into the gas phase from a platform (film) of cold non-reactive material. The underlying principle was one of photo-initiated heat release in a stratum that lies below a layer of CO2 or a layer of amorphous solid water (ASW) and CO2. A molecular precursor to the radical species of interest is deposited near or on the film's surface, where it can be photo-dissociated. It proved unfeasible to avoid the rampant formation of fissures, as opposed to large "flakes." This led to many interesting results, but resulted in our aborting the scheme as a means of launching cold C2H radical into the gas phase. A journal article resulted that is germane to astrophysics but not combustion chemistry.
Hsu, Bertrand D.; Leonard, Gary L.
1988-01-01
A fuel injection system particularly adapted for injecting coal slurry fuels at high pressures includes an accumulator-type fuel injector which utilizes high-pressure pilot fuel as a purging fluid to prevent hard particles in the fuel from impeding the opening and closing movement of a needle valve, and as a hydraulic medium to hold the needle valve in its closed position. A fluid passage in the injector delivers an appropriately small amount of the ignition-aiding pilot fuel to an appropriate region of a chamber in the injector's nozzle so that at the beginning of each injection interval the first stratum of fuel to be discharged consists essentially of pilot fuel and thereafter mostly slurry fuel is injected.
Microsoft PowerPoint - Snippet 1.4 EVMS Stage 2 Surveillance...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
... standard readable format (e.g., X12, XML format), EVMS monthly reports; EVM variance ... standard readable format (e.g., X12, XML format); risk management plans; the EVM ...
August 2012 Electrical Safety Occurrences
was the path of the light circuit as depicted on the site map. The locate did give a true signal of depth and variance of an underground utility. When the excavation, which was...
Solar and Wind Easements & Local Option Rights Laws
Broader source: Energy.gov [DOE]
Minnesota law also allows local zoning boards to restrict development for the purpose of protecting access to sunlight. In addition, zoning bodies may create variances in zoning rules in...
Boundary Layer Cloud Turbulence Characteristics
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Boundary Layer Cloud Turbulence Characteristics Virendra Ghate Bruce Albrecht Parameter Observational Readiness (/10) Modeling Need (/10) Cloud Boundaries 9 9 Cloud Fraction Variance Skewness Up/Downdraft coverage Dominant Freq. signal Dissipation rate ??? Observation-Modeling Interface
Fuel cell stack monitoring and system control
Keskula, Donald H.; Doan, Tien M.; Clingerman, Bruce J.
2005-01-25
A control method for monitoring a fuel cell stack in a fuel cell system in which the actual voltage and actual current from the fuel cell stack are monitored. A preestablished relationship between voltage and current over the operating range of the fuel cell is established. A variance value between the actual measured voltage and the expected voltage magnitude for a given actual measured current is calculated and compared with a predetermined allowable variance. An output is generated if the calculated variance value exceeds the predetermined variance. The predetermined voltage-current for the fuel cell is symbolized as a polarization curve at given operating conditions of the fuel cell. Other polarization curves may be generated and used for fuel cell stack monitoring based on different operating pressures, temperatures, hydrogen quantities.
Model-Based Sampling and Inference
U.S. Energy Information Administration (EIA) Indexed Site
... Sarndal, C.-E., Swensson, B. and Wretman, J. (1992), Model Assisted Survey Sampling, Springer- Verlag. Steel, P.M. and Shao, J. (1997), "Estimation of Variance Due to Imputation in ...
Broader source: Energy.gov (indexed) [DOE]
... The following chart represents the variance between prime ... Solutions Hanford 200 87 287 Battelle Memorial Institute PNNL 27 114 141 UT-Battelle ORNL 41 161 202 Bechtel Jacobs ...
FY14 BEN FOA- ORNL - University - Industry Partnership to Improve...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
residential air conditioners in Phoenix, Arizona," ASHRAE Transactions, 103(1), 406-415, ... Variances: None Cost to Date: 730.9k Additional Funding: None Budget History October ...
Search for: All records | SciTech Connect
Office of Scientific and Technical Information (OSTI)
... In the halo model framework, bias is scale dependent with a change of slope at the ... We use N-body simulations to quantify both the amount of cosmic variance and systematic ...
Factors Controlling The Geochemical Evolution Of Fumarolic Encrustatio...
Smokes (VTTS). The six-factor solution model explains a large proportion (low of 74% for Ni to high of 99% for Si) of the individual element data variance. Although the primary...
ARM - Publications: Science Team Meeting Documents
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
best represented by power laws in the scale-parameter; 2) "intermittency" hence non-Gaussian statistics, i.e., not reducible to means, variances and covariances; and 3)...
Solar Water Heating Requirement for New Residential Construction
Broader source: Energy.gov [DOE]
As of January 1, 2010, building permits may not be issued for new single-family homes that do not include a SWH system. The state energy resources coordinator may provide a variance for this...
Posters A Stratiform Cloud Parameterization for General Circulation...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
P(w) is the probability distribution of vertical velocity, determined from the predicted mean and variance of vertical velocity. Application to a Single-Column Model To test the...
Search for: All records | DOE PAGES
Office of Scientific and Technical Information (OSTI)
... Furthermore, linear and bi-linear correlations between OA, CO, and each of three biogenic tracers, "Bio", for individual plume transects indicate that most of the variance in OA ...
Gate fidelity fluctuations and quantum process invariants
Magesan, Easwar; Emerson, Joseph [Institute for Quantum Computing and Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada); Blume-Kohout, Robin [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)
2011-07-15
We characterize the quantum gate fidelity in a state-independent manner by giving an explicit expression for its variance. The method we provide can be extended to calculate all higher order moments of the gate fidelity. Using these results, we obtain a simple expression for the variance of a single-qubit system and deduce the asymptotic behavior for large-dimensional quantum systems. Applications of these results to quantum chaos and randomized benchmarking are discussed.
Influential input classification in probabilistic multimedia models
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.; Geng, Shu
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
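As a hedged sketch of the Monte Carlo approach described above, the toy model below propagates lognormal input distributions through a hypothetical three-input fate model and ranks each input by the fraction of output variance removed when that input is frozen at its median. The model, input names, and distribution parameters are all illustrative, not taken from the paper:

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical lognormal input distributions (mu, sigma) for a toy fate model.
DISTS = {"emission": (0.0, 0.5), "decay": (-1.0, 0.3), "dilution": (1.0, 0.4)}

def concentration(x):
    # Toy steady-state concentration: source strength over removal processes.
    return x["emission"] / (x["decay"] * x["dilution"])

def sample(frozen=None):
    # Draw every input from its distribution; optionally freeze one at its median.
    x = {k: random.lognormvariate(mu, sd) for k, (mu, sd) in DISTS.items()}
    if frozen is not None:
        x[frozen] = math.exp(DISTS[frozen][0])  # lognormal median = exp(mu)
    return concentration(x)

N = 20000
base_var = statistics.variance([sample() for _ in range(N)])

# Influence of each input = fraction of output variance removed by fixing it.
influence = {k: 1.0 - statistics.variance([sample(frozen=k) for _ in range(N)]) / base_var
             for k in DISTS}
```

Inputs whose influence score is near zero are candidates for fixed point values, which is the resource-allocation idea the paper formalizes.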
Veil, J.A.; VanKuiken, J.C.; Folga, S.; Gillette, J.L.
1993-01-01
Many power plants discharge large volumes of cooling water. In some cases, the temperature of the discharge exceeds state thermal requirements. Section 316(a) of the Clean Water Act (CWA) allows a thermal discharger to demonstrate that less stringent thermal effluent limitations would still protect aquatic life. About 32% of the total steam electric generating capacity in the United States operates under Section 316(a) variances. In 1991, the US Senate proposed legislation that would delete Section 316(a) from the CWA. This study, presented in two companion reports, examines how this legislation would affect the steam electric power industry. This report quantitatively and qualitatively evaluates the energy and environmental impacts of deleting the variance. No evidence exists that Section 316(a) variances have caused any widespread environmental problems. Conversion from once-through cooling to cooling towers would result in a loss of plant output of 14.7-23.7 billion kilowatt-hours. The cost to make up the lost energy is estimated at $12.8-$23.7 billion (in 1992 dollars). Conversion to cooling towers would increase emission of pollutants to the atmosphere and water loss through evaporation. The second report describes alternatives available to plants that currently operate under the variance and estimates the national cost of implementing such alternatives. Little justification has been found for removing the 316(a) variance from the CWA.
Variation and correlation of hydrologic properties
Wang, J.S.Y.
1991-06-01
Hydrological properties vary within a given geological formation and even more so among different soil and rock media. The variance of the saturated permeability is shown to be related to the variance of the pore-size distribution index of a given medium by a simple equation. This relationship is deduced by comparison of the data from Yucca Mountain, Nevada (Peters et al., 1984), Las Cruces, New Mexico (Wierenga et al., 1989), and Apache Leap, Arizona (Rasmussen et al., 1990). These and other studies in different soils and rocks also support the Poiseuille-Carmen relationship between the mean value of saturated permeability and the mean value of capillary radius. Correlations of the mean values and variances between permeability and pore-geometry parameters can lead to better quantification of heterogeneous flow fields and a better understanding of the scaling laws of hydrological properties.
System level analysis and control of manufacturing process variation
Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.
2005-05-31
A computer-implemented method determines the variability of a manufacturing system having a plurality of subsystems. Each subsystem of the plurality of subsystems is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are then fitted to each subsystem to determine the unknown coefficients that characterize the relationship between the signal factors, noise factors, and control factors and the corresponding output response, whose mean and variance values are related to those factors. The response models for each subsystem are coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems, and values of the signal factors and control factors are found that optimize the output of the manufacturing system to meet a specified criterion.
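The coupling idea can be illustrated with a minimal Monte Carlo sketch: two chained response models whose fitted coefficients are re-sampled on every run, so coefficient uncertainty propagates to the system-level output. All coefficient values below are hypothetical, not from the patent:

```python
import random
import statistics

random.seed(1)

# Hypothetical fitted coefficients (mean, std dev) for two chained subsystems.
COEF = {"a": (2.0, 0.10), "b": (0.5, 0.05), "c": (1.5, 0.08), "d": (-0.3, 0.02)}

def draw(name):
    mu, sd = COEF[name]
    return random.gauss(mu, sd)

def system_output(signal, noise, control):
    # Subsystem 1: y1 = a*signal + b*noise; subsystem 2: y2 = c*y1 + d*control.
    # Coefficient uncertainty is re-sampled on every run to propagate variance.
    y1 = draw("a") * signal + draw("b") * noise
    return draw("c") * y1 + draw("d") * control

runs = [system_output(signal=1.0, noise=random.gauss(0.0, 1.0), control=2.0)
        for _ in range(10000)]
mean_out = statistics.fmean(runs)
var_out = statistics.variance(runs)
```

Sweeping `signal` and `control` over a grid and minimizing `var_out` (or matching a target `mean_out`) would mimic the optimization step of the claimed method.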
Water Vapor Turbulence Profiles in Stationary Continental Convective Mixed Layers
Turner, D. D.; Wulfmeyer, Volker; Berg, Larry K.; Schween, Jan
2014-10-08
The U.S. Department of Energy Atmospheric Radiation Measurement (ARM) program's Raman lidar at the ARM Southern Great Plains (SGP) site in north-central Oklahoma has collected water vapor mixing ratio (q) profile data more than 90% of the time since October 2004. Three hundred (300) cases were identified in which the convective boundary layer was quasi-stationary and well mixed for a 2-hour period, and q mean, variance, third-order moment, and skewness profiles were derived from the 10-s, 75-m resolution data. These cases span the entire calendar year and demonstrate that the q variance at the mixed-layer (ML) top changes seasonally but is more closely related to the gradient of q across the interfacial layer. The q variance at the top of the ML shows only weak correlations (r < 0.3) with sensible heat flux, the Deardorff convective velocity scale, and turbulence kinetic energy measured at the surface. The median q skewness profile is most negative at 0.85 zi, zero at approximately zi, and positive above zi, where zi is the depth of the convective ML. The spread in the q skewness profiles is smallest between 0.95 zi and zi. The q skewness at altitudes between 0.6 zi and 1.2 zi is correlated with the magnitude of the q variance at zi, with increasingly negative skewness observed lower down in the ML as the variance at zi increases, suggesting that in cases with larger variance at zi there is deeper penetration of the warm, dry free-tropospheric air into the ML.
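At each range gate, the moment profiles above reduce to simple central-moment estimates of the q time series. A minimal sketch with a hypothetical sample series (values are illustrative, not ARM data):

```python
import statistics

def moments(q):
    # Mean, variance, third central moment, and skewness of a q time series.
    m = statistics.fmean(q)
    dev = [x - m for x in q]
    var = sum(d * d for d in dev) / len(q)
    third = sum(d ** 3 for d in dev) / len(q)
    skew = third / var ** 1.5
    return m, var, third, skew

# Hypothetical 10-s water vapor mixing ratio samples (g/kg) at one range gate:
q = [8.1, 8.3, 7.9, 8.0, 8.4, 7.6, 8.2, 8.0]
m, var, third, skew = moments(q)
```

Applying this gate by gate over a 2-hour window yields the variance, third-moment, and skewness profiles discussed in the abstract.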
Gajjar, Rachna M.; Kasting, Gerald B.
2014-11-15
The overall goal of this research was to further develop and improve an existing skin diffusion model by experimentally confirming the predicted absorption rates of topically applied volatile organic compounds (VOCs) based on their physicochemical properties, the skin surface temperature, and the wind velocity. In vitro human skin permeation of two hydrophilic solvents (acetone and ethanol) and two lipophilic solvents (benzene and 1,2-dichloroethane) was studied in Franz cells placed in a fume hood. Four doses of each {sup 14}C-radiolabeled compound were tested: 5, 10, 20, and 40 μL cm{sup −2}, corresponding to specific doses ranging in mass from 5.0 to 63 mg cm{sup −2}. The maximum percentage of radiolabel absorbed into the receptor solutions for all test conditions was 0.3%. Although the absolute absorption of each solvent increased with dose, the percentage absorption decreased. This decrease was consistent with the concept of a stratum corneum deposition region, which traps small amounts of solvent in the upper skin layers, decreasing the evaporation rate. The diffusion model satisfactorily described the cumulative absorption of ethanol; however, values for the other VOCs were underpredicted in a manner related to their ability to disrupt or solubilize skin lipids. In order to more closely describe the permeation data, significant increases in the stratum corneum/water partition coefficients, K{sub sc}, and modest changes to the diffusion coefficients, D{sub sc}, were required. The analysis provided strong evidence for both skin swelling and barrier disruption by VOCs, even by the minute amounts absorbed under these in vitro test conditions.
Highlights: • Human skin absorption of small doses of VOCs was measured in vitro in a fume hood. • The VOCs tested were ethanol, acetone, benzene and 1,2-dichloroethane. • Fraction of dose absorbed for all compounds at all doses tested was less than 0.3%. • The more aggressive VOCs absorbed at higher levels than diffusion model predictions. • We conclude that even small exposures to VOCs temporarily alter skin permeability.
Statistical Analysis of Tank 5 Floor Sample Results
Shine, E. P.
2013-01-31
Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds on the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of measurement results above and below their MDCs.
The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed, and the results of this analysis are reported. The data were generally found to follow a normal distribution, and to be homogenous across composite samples.
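For normally distributed data, the UCL95 computation above is a one-liner. A minimal sketch for hypothetical triplicate measurements; the Student-t quantile t_{0.95,2} = 2.920 is hard-coded for n = 3, and the real analysis (following Singh et al.) also covers non-normal cases:

```python
import math
import statistics

def ucl95_normal(x, t95):
    # One-sided UCL95 for the mean of normally distributed data:
    # xbar + t_{0.95, n-1} * s / sqrt(n).  t95 must match n-1 degrees of freedom.
    n = len(x)
    xbar = statistics.fmean(x)
    s = statistics.stdev(x)
    return xbar + t95 * s / math.sqrt(n)

# Hypothetical triplicate analyte concentrations (mg/L); t_{0.95,2} = 2.920.
conc = [12.1, 11.8, 12.6]
ucl = ucl95_normal(conc, t95=2.920)
```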
Shukla, K. K.; Phanikumar, D. V.; Newsom, Rob K.; Kumar, Niranjan; Ratnam, Venkat; Naja, M.; Singh, Narendra
2014-03-01
A Doppler lidar was installed at Manora Peak, Nainital (29.4 N, 79.2 E; 1958 m amsl) to estimate mixing layer height, for the first time at this site, using vertical velocity variance as the basic measurement parameter for the period September-November 2011. The mixing layer height is found to be ~0.57 +/- 0.10 km AGL during daytime and ~0.45 +/- 0.05 km AGL during nighttime. The mixing layer height estimates show good correlation (R > 0.8) between different instruments and between different methods. Our results show that the wavelet covariance transform is a robust method for mixing layer height estimation.
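The wavelet covariance transform mentioned above can be sketched with a Haar wavelet applied to an idealized profile: the transform peaks where the profile drops most sharply, marking the layer top. The profile, gate spacing, and dilation below are illustrative, not instrument values:

```python
def haar_covariance(profile, dz, a):
    # Haar wavelet covariance transform: W(b) = (lower-half sum - upper-half sum)
    # scaled by dz/a; its peak marks the sharpest decrease in the profile.
    half = int(a / (2 * dz))          # gates in each half-window
    w = []
    for i in range(half, len(profile) - half):
        upper = sum(profile[i:i + half])
        lower = sum(profile[i - half:i])
        w.append((lower - upper) * dz / a)
    return w, half

# Idealized profile: high in the mixed layer, sharp drop at gate 12
# (altitude 600 m with dz = 50 m), low in the free troposphere.
prof = [1.0] * 12 + [0.2] * 12
w, off = haar_covariance(prof, dz=50.0, a=200.0)
zi_gate = off + max(range(len(w)), key=w.__getitem__)
zi_m = zi_gate * 50.0
```

In practice the transform is applied to backscatter or variance profiles and the dilation `a` is scanned over a range of scales.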
SUPERIMPOSED MESH PLOTTING IN MCNP
J. HENDRICKS
2001-02-01
The capability to plot superimposed meshes has been added to MCNP{trademark}. MCNP4C featured a superimposed mesh weight window generator which enabled users to set up geometries without having to subdivide geometric cells for variance reduction. The variance reduction was performed with weight windows on a rectangular or cylindrical mesh superimposed over the physical geometry. Experience with the new capability was favorable but also indicated that a number of enhancements would be very beneficial, particularly a means of visualizing the mesh and its values. The mathematics for plotting the mesh and its values is described here along with a description of other upgrades.
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
Anderson, Dale; Selby, Neil
2012-08-14
Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event screening hypothesis test (Fisher's and Tippett's tests). The commonly used standard error in the Ms:mb event screening hypothesis test is not fully consistent with its physical basis. An improved standard error gives better agreement with the physical basis, correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope ({beta} = 1, Selby et al.), while the improved standard error 'fails to reject' H0.
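Fisher's and Tippett's combining rules are simple to state. A sketch with two hypothetical single-phenomenology p-values; the closed-form chi-square survival function used here is valid only for two tests (4 degrees of freedom):

```python
import math

def fisher_combined(pvals):
    # Fisher's method: X = -2*sum(ln p) ~ chi-square with 2k degrees of freedom.
    # The closed-form survival function below, exp(-x/2)*(1 + x/2), is exact
    # only for k = 2 tests (4 df).
    assert len(pvals) == 2
    x = -2.0 * sum(math.log(p) for p in pvals)
    return math.exp(-x / 2.0) * (1.0 + x / 2.0)

def tippett_combined(pvals):
    # Tippett's method: combined p-value based on the smallest individual p.
    k = len(pvals)
    return 1.0 - (1.0 - min(pvals)) ** k

# Hypothetical screening p-values, e.g. from Ms:mb and depth tests:
p = [0.04, 0.20]
pf = fisher_combined(p)
pt = tippett_combined(p)
```

Fisher's method rewards agreement between moderately small p-values, while Tippett's is driven entirely by the single strongest test.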
Conversion of borehole Stoneley waves to channel waves in coal
Johnson, P.A.; Albright, J.N.
1987-01-01
Evidence for the mode conversion of borehole Stoneley waves to stratigraphically guided channel waves was discovered in data from a crosswell acoustic experiment conducted between wells penetrating thin coal strata located near Rifle, Colorado. Traveltime moveout observations show that borehole Stoneley waves, excited by a transmitter positioned at substantial distances in one well above and below a coal stratum at 2025 m depth, underwent partial conversion to a channel wave propagating away from the well through the coal. In an adjacent well the channel wave was detected at receiver locations within the coal, and borehole Stoneley waves, arising from a second partial conversion of channel waves, were detected at locations above and below the coal. The observed channel wave is inferred to be the third-higher Rayleigh mode based on comparison of the measured group velocity with theoretically derived dispersion curves. The identification of the mode conversion between borehole and stratigraphically guided waves is significant because coal penetrated by multiple wells may be detected without placing an acoustic transmitter or receiver within the waveguide. 13 refs., 6 figs., 1 tab.
Portable measurement system for soil resistivity and application to Quaternary clayey sediment
Nakagawa, Koichi; Morii, Takeo
1999-07-01
A simple device to measure electrical resistivity has been developed for field and laboratory use. The measurement system comprises a probe unit, current wave generator, amplifier, A/D converter, data acquisition unit with RS-232C interface, and notebook personal computer. The system is applicable to soils and soft rocks as long as the probe needles can pierce them. The frequency range of the measurement system extends from 100 Hz to 10 MHz, and the total error of the system is less than 5%. In situ measurements of resistivity, together with shear resistance measured by a pocket-sized penetrometer, were applied to Pleistocene clayey beds. Some laboratory tests were also conducted to examine the interpretation of the in situ resistivity. Marine and non-marine clayey sediments differ both in the in situ resistivity of the stratum and in the resistivity of clay suspensions sampled from the strata. Physical and mechanical properties were compared with resistivity, and general relationships among them were explored to clarify the characteristics of inter-particle bonding. A possible mechanism for the peculiar weathering of clayey sediment or mudstone beds, conspicuous especially near the ground surface, is discussed from the viewpoint of physico-chemical processes.
Feingold, G.; Frisch, A.S.; Cotton, W.R.
1999-09-01
Cloud radar, microwave radiometer, and lidar remote sensing data acquired during the Atlantic Stratocumulus Transition Experiment (ASTEX) are analyzed to address the relationship between (1) drop number concentration and cloud turbulence as represented by vertical velocity and vertical velocity variance and (2) drizzle formation and cloud turbulence. Six cases, each of about 12 hours duration, are examined; three of these cases are characteristic of nondrizzling boundary layers and three of drizzling boundary layers. In all cases, microphysical retrievals are only performed when drizzle is negligible (radar reflectivity < -17 dBZ). It is shown that for the cases examined there is, in general, no correlation between drop concentration and cloud-base updraft strength, although for two of the nondrizzling cases exhibiting more classical stratocumulus features these two parameters are correlated. On drizzling days, drop concentration and cloud-base vertical velocity were either not correlated or negatively correlated. There is a significant positive correlation between drop concentration and mean in-cloud vertical velocity variance for both nondrizzling boundary layers (correlation coefficient r = 0.45) and boundary layers that have experienced drizzle (r = 0.38). In general, there is a high correlation (r > 0.5) between radar reflectivity and in-cloud vertical velocity variance, although one of the boundary layers that experienced drizzle exhibited a negative correlation between these parameters. However, in the subcloud region, all boundary layers that experienced drizzle exhibit a negative correlation between radar reflectivity and vertical velocity variance. (c) 1999 American Geophysical Union.
Energy dependence of multiplicity fluctuations in heavy ion collisions at 20A to 158A GeV
Alt, C.; Blume, C.; Bramm, R.; Dinkelaker, P.; Flierl, D.; Kliemant, M.; Kniege, S.; Lungwitz, B.; Mitrovski, M.; Renfordt, R.; Schuster, T.; Stock, R.; Strabel, C.; Stroebele, H.; Utvic, M.; Wetzler, A.; Anticic, T.; Kadija, K.; Nicolic, V.; Susa, T.
2008-09-15
Multiplicity fluctuations of positively, negatively, and all charged hadrons in the forward hemisphere were studied in central Pb+Pb collisions at 20A, 30A, 40A, 80A, and 158A GeV. The multiplicity distributions and their scaled variances {omega} are presented as functions of collision energy as well as of rapidity and transverse momentum. The distributions have bell-like shapes, and their scaled variances are in the range from 0.8 to 1.2 without any significant structure in their energy dependence. No indication of the critical point is observed in the fluctuations. The string-hadronic ultrarelativistic quantum molecular dynamics (UrQMD) model significantly overpredicts the mean but approximately reproduces the scaled variance of the multiplicity distributions. The predictions of the statistical hadron-resonance gas model obtained within the grand-canonical and canonical ensembles disagree with the measured scaled variances. The narrower-than-Poissonian multiplicity fluctuations measured in numerous cases may be explained by the impact of conservation laws on fluctuations in relativistic systems.
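The scaled variance used above is just the variance-to-mean ratio of the multiplicity distribution, with omega = 1 for a Poisson distribution. A minimal sketch with hypothetical per-event multiplicities:

```python
import statistics

def scaled_variance(mult):
    # omega = Var(N) / <N>; a Poisson distribution gives omega = 1 exactly,
    # so omega < 1 signals narrower-than-Poissonian fluctuations.
    return statistics.pvariance(mult) / statistics.fmean(mult)

# Hypothetical per-event forward-hemisphere charged multiplicities:
events = [92, 108, 100, 85, 115, 101, 99, 100]
omega = scaled_variance(events)
```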
North-South non-Gaussian asymmetry in Planck CMB maps
Bernui, A.; Oliveira, A.F.; Pereira, T.S. E-mail: adhimar@unifei.edu.br
2014-10-01
We report the results of a statistical analysis performed with the four foreground-cleaned Planck maps by means of a suitably defined local-variance estimator. Our analysis shows a clear dipolar structure in Planck's variance map pointing in the direction (l,b) approximately (220, -32), consistent with the North-South asymmetry phenomenon. Surprisingly, and contrary to previous findings, removing the CMB quadrupole and octopole makes the asymmetry stronger. Our results show a maximal statistical significance of 98.1% CL in the multipole range l = 4 to l = 500. Additionally, through exhaustive analyses of the four foreground-cleaned and individual-frequency Planck maps, we find it unlikely that residual foregrounds could be causing this dipole variance asymmetry. Moreover, we find that the dipole amplitude decreases for larger masks, evidence that most of the contribution to the variance dipole comes from a region near the galactic plane. Finally, our results are robust against different foreground-cleaning procedures, different Planck masks, pixelization parameters, and the addition of inhomogeneous real noise.
Sisterson, DL
2010-04-08
The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 – (ACTUAL/OPSMAX)], which accounts for unplanned downtime.
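The VARIANCE metric above is a one-line computation. A sketch with hypothetical facility hours (not actual ARM reporting values):

```python
def ops_variance(actual_hours, opsmax_hours):
    # DOE facility metric: VARIANCE = 1 - ACTUAL/OPSMAX, i.e., the fraction of
    # the uptime goal lost to unplanned downtime (planned downtime is already
    # excluded from OPSMAX).
    return 1.0 - actual_hours / opsmax_hours

v = ops_variance(actual_hours=7800.0, opsmax_hours=8200.0)
```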
Effect of wettability on scale-up of multiphase flow from core-scale to reservoir fine-grid-scale
Chang, Y.C.; Mani, V.; Mohanty, K.K.
1997-08-01
Typical field simulation grid-blocks are internally heterogeneous. The objective of this work is to study how the wettability of the rock affects its scale-up of multiphase flow properties from core-scale to fine-grid reservoir simulation scale ({approximately} 10{prime} x 10{prime} x 5{prime}). Reservoir models need another level of upscaling to coarse-grid simulation scale, which is not addressed here. Heterogeneity is modeled here as a correlated random field parameterized in terms of its variance and two-point variogram. Variogram models of both finite (spherical) and infinite (fractal) correlation length are included as special cases. Local core-scale porosity, permeability, capillary pressure function, relative permeability functions, and initial water saturation are assumed to be correlated. Water injection is simulated and effective flow properties and flow equations are calculated. For strongly water-wet media, capillarity has a stabilizing/homogenizing effect on multiphase flow. For small variance in permeability, and for small correlation length, effective relative permeability can be described by capillary equilibrium models. At higher variance and moderate correlation length, the average flow can be described by a dynamic relative permeability. As the oil wettability increases, the capillary stabilizing effect decreases and the deviation from this average flow increases. For fractal fields with large variance in permeability, effective relative permeability is not adequate in describing the flow.
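The correlated random fields above can be sketched in one dimension by moving-average smoothing of white noise; this is an illustrative stand-in for, not an implementation of, the spherical and fractal variogram models of the paper:

```python
import random
import statistics

random.seed(4)

def correlated_field(n, corr_len, sigma):
    # 1-D correlated Gaussian field: moving-average smoothing of white noise.
    # The window length sets the correlation scale, and the 1/sqrt(corr_len)
    # factor keeps the point variance at sigma**2 (a sketch, not a calibrated
    # variogram model).
    white = [random.gauss(0.0, 1.0) for _ in range(n + corr_len)]
    return [sigma * sum(white[i:i + corr_len]) / corr_len ** 0.5
            for i in range(n)]

logk = correlated_field(n=5000, corr_len=10, sigma=0.8)  # log-permeability sketch
field_var = statistics.pvariance(logk)
```

Varying `sigma` and `corr_len` reproduces the two knobs of the study: the variance and the correlation length of the heterogeneity.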
Latin-square three-dimensional gage master
Jones, L.
1981-05-12
A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
Latin square three dimensional gage master
Jones, Lynn L.
1982-01-01
A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
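The analysis-of-variance step behind both patents can be sketched for a hypothetical 3 x 3 Latin square, partitioning the total sum of squares into row, column, treatment, and error components (all measurement values are illustrative):

```python
def latin_square_anova(square, y):
    # Partition the total sum of squares of an n x n Latin square design into
    # row, column, treatment, and residual (error) components.
    n = len(y)
    flat = [v for row in y for v in row]
    g = sum(flat) / (n * n)                      # grand mean
    ss_total = sum((v - g) ** 2 for v in flat)
    row_means = [sum(row) / n for row in y]
    col_means = [sum(y[i][j] for i in range(n)) / n for j in range(n)]
    treat = {}
    for i in range(n):
        for j in range(n):
            treat.setdefault(square[i][j], []).append(y[i][j])
    treat_means = [sum(v) / n for v in treat.values()]
    ss_rows = n * sum((m - g) ** 2 for m in row_means)
    ss_cols = n * sum((m - g) ** 2 for m in col_means)
    ss_treat = n * sum((m - g) ** 2 for m in treat_means)
    ss_err = ss_total - ss_rows - ss_cols - ss_treat
    return ss_rows, ss_cols, ss_treat, ss_err

# Hypothetical 3 x 3 layout (treatments A, B, C) and measured values:
square = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]
yields = [[5.1, 4.8, 5.5], [4.9, 5.4, 5.2], [5.6, 5.0, 4.7]]
ss_rows, ss_cols, ss_treat, ss_err = latin_square_anova(square, yields)
```

In the gage-master setting, rows and columns map to the X and Y machine axes and the treatments to the Z-distributed objects, so a large treatment (or row/column) sum of squares flags the corresponding source of machine-geometry error.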
Entropic uncertainty relations in multidimensional position and momentum spaces
Huang Yichen
2011-05-15
Commutator-based entropic uncertainty relations in multidimensional position and momentum spaces are derived, twofold generalizing previous entropic uncertainty relations for one-mode states. They provide optimal lower bounds and imply the multidimensional variance-based uncertainty principle. The article concludes with an open conjecture.
Gas-storage calculations yield accurate cavern, inventory data
Mason, R.G.
1990-07-02
This paper discusses how determining gas-storage cavern size and inventory variance is now possible with calculations based on shut-in cavern surveys. The method is the least expensive of three major methods and is quite accurate when recorded over a period of time.
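The inventory side of such calculations rests on the real-gas law. A hedged sketch converting a hypothetical shut-in pressure survey to standard cubic feet; the pressure, temperature, z-factor, and volume are illustrative, and the paper's actual method and numbers are not reproduced here:

```python
def gas_inventory_scf(p_psia, t_rankine, z, cavern_volume_cuft):
    # Standard cubic feet of gas in a cavern from a shut-in survey, via the
    # real-gas law n = PV/(zRT), re-expressed at standard conditions
    # (14.696 psia, 60 deg F = 519.67 deg R, z ~ 1 at standard conditions).
    p_std, t_std = 14.696, 519.67
    return cavern_volume_cuft * (p_psia / (z * t_rankine)) * (t_std / p_std)

scf = gas_inventory_scf(p_psia=2000.0, t_rankine=560.0, z=0.85,
                        cavern_volume_cuft=5.0e6)
```

Repeating the calculation across surveys recorded over time is what lets inventory variance (and effective cavern size) be tracked.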
Earned Value Management System (EVMS) Corrective Action Standard Operating Procedure
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
This EVMS Corrective Action Standard Operating Procedure (ECASOP) serves as PM's primary reference for the development of Corrective Action Requests (CARs) and Continuous Improvement Opportunities (CIOs), as well as the assessment of contractors' procedures and implementation associated with Variance Analysis Reports (VARs) and
Stochastic Inversion of Seismic Amplitude-Versus-Angle Data (Stinv-AVA)
Energy Science and Technology Software Center (OSTI)
2008-04-03
The software was developed to invert seismic amplitude-versus-angle (AVA) data using a Bayesian framework. The posterior probability distribution function is sampled by effective Markov chain Monte Carlo (MCMC) methods. The software provides not only estimates of the unknown variables but also a variety of information about their uncertainty, such as the mean, mode, median, variance, and even the probability density of each unknown.
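The MCMC sampling idea can be sketched with a random-walk Metropolis sampler on a toy problem (Gaussian data with known noise and an unknown mean). The data, prior, and tuning values are illustrative, not the software's actual AVA forward model:

```python
import math
import random
import statistics

random.seed(2)

# Hypothetical observations with known noise sigma = 0.2 and unknown mean mu.
data = [1.2, 0.8, 1.1, 0.9, 1.3, 1.0]

def log_post(mu):
    # Log posterior (up to a constant): Gaussian likelihood + broad N(0, 10^2) prior.
    ll = -sum((d - mu) ** 2 for d in data) / (2 * 0.2 ** 2)
    return ll - mu ** 2 / (2 * 10.0 ** 2)

mu, chain = 0.0, []
for _ in range(20000):
    prop = mu + random.gauss(0.0, 0.1)        # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop                              # Metropolis accept
    chain.append(mu)

post = chain[5000:]                            # discard burn-in
post_mean = statistics.fmean(post)
post_var = statistics.variance(post)
```

The retained chain approximates the posterior, so any summary the abstract mentions (mean, mode, median, variance, density) can be read off the samples.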
A low dose simulation tool for CT systems with energy integrating detectors
Zabic, Stanislav; Morton, Thomas; Brown, Kevin M.; Wang Qiu
2013-03-15
Purpose: This paper introduces a new strategy for simulating low-dose computed tomography (CT) scans using real scans of a higher dose as an input. The tool is verified against simulations and real scans and compared to other approaches found in the literature. Methods: The conditional variance identity is used to properly account for the variance of the input high-dose data, and a formula is derived for generating a new Poisson noise realization which has the same mean and variance as the true low-dose data. The authors also derive a formula for the inclusion of real samples of detector noise, properly scaled according to the level of the simulated x-ray signals. Results: The proposed method is shown to match real scans in a number of experiments. Noise standard deviation measurements in simulated low-dose reconstructions of a 35 cm water phantom match real scans in a range from 500 to 10 mA with less than 5% error. Mean and variance of individual detector channels are shown to match closely across the detector array. Finally, the visual appearance of noise and streak artifacts is shown to match in real scans even under conditions of photon starvation (with tube currents as low as 10 and 80 mA). Additionally, the proposed method is shown to be more accurate than previous approaches (1) in achieving the correct mean and variance in reconstructed images from pure-Poisson noise simulations (with no detector noise) under photon-starvation conditions, and (2) in simulating the correct noise level and detector noise artifacts in real low-dose scans. Conclusions: The proposed method can accurately simulate low-dose CT data starting from high-dose data, including effects from photon starvation and detector noise. This is potentially a very useful tool in helping to determine minimum dose requirements for a wide range of clinical protocols and advanced reconstruction algorithms.
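A common, simpler alternative to the paper's formula is binomial thinning, shown here as a hedged sketch: thinning Poisson-distributed counts yields exact Poisson counts at the reduced dose, although it omits the detector-noise modeling the paper adds, and the counts below are hypothetical:

```python
import random

random.seed(3)

def thin_counts(high_dose_counts, dose_fraction):
    # Binomial thinning: keep each detected photon with probability f.
    # Thinning Poisson(lam) counts gives exact Poisson(f*lam) counts, so the
    # simulated low-dose data have the correct mean AND variance (f*lam).
    return [sum(1 for _ in range(n) if random.random() < dose_fraction)
            for n in high_dose_counts]

# Hypothetical high-dose detector counts (~1000 photons per channel):
high = [random.randint(950, 1050) for _ in range(2000)]
low = thin_counts(high, dose_fraction=0.25)
mean_low = sum(low) / len(low)
```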
Dimensionality and noise in energy selective x-ray imaging
Alvarez, Robert E.
2013-11-15
Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 x 10{sup 3}. With the soft tissue component, it is 2.7 x 10{sup 4}. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions: 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm.
Decreasing the energy resolution of the detectors markedly increases the variance of the bone component with three-dimensional processing, by approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two-dimensional processing for adipose tissue is a factor of two, and with the contrast agent as the third material the increase for two or three dimensions is also a factor of two for both components. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems.
Clock Agreement Among Parallel Supercomputer Nodes
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Jones, Terry R.; Koenig, Gregory A.
2014-04-30
This dataset presents measurements that quantify the clock synchronization time-agreement characteristics among several high performance computers including the current world's most powerful machine for open science, the U.S. Department of Energy's Titan machine sited at Oak Ridge National Laboratory. These ultra-fast machines derive much of their computational capability from extreme node counts (over 18000 nodes in the case of the Titan machine). Time-agreement is commonly utilized by parallel programming applications and tools, distributed programming application and tools, and system software. Our time-agreement measurements detail the degree of time variance between nodes and how that variance changes over time. The dataset includes empirical measurements and the accompanying spreadsheets.
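The across-node variance and its change over time can be computed directly from per-node clock offsets. The offsets below are hypothetical, not values from the dataset:

```python
import statistics

# Hypothetical per-node clock offsets (microseconds) relative to a master clock,
# sampled at two times; growing spread means the nodes' clocks drift apart.
offsets_t0 = [0.0, 1.5, -2.1, 0.8, -0.4, 2.2]
offsets_t1 = [0.1, 2.9, -3.8, 1.2, -0.9, 4.0]

var_t0 = statistics.pvariance(offsets_t0)
var_t1 = statistics.pvariance(offsets_t1)
drift_in_variance = var_t1 - var_t0   # > 0 indicates worsening time agreement
```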
Resonant activation in a colored multiplicative thermal noise driven closed system
Ray, Somrita; Bag, Bidhan Chandra; Mondal, Debasish
2014-05-28
In this paper, we have demonstrated that resonant activation (RA) is possible even in a thermodynamically closed system where the particle experiences a random force and a spatio-temporal frictional coefficient from the thermal bath. For this stochastic process, we have observed the hallmark of RA phenomena: a turnover behavior of the barrier-crossing rate as a function of noise correlation time at fixed noise variance. The variance can be fixed either by changing temperature or damping strength as a function of noise correlation time. Another observation is that the barrier-crossing rate passes through a maximum with increasing coupling strength of the multiplicative noise. If the damping strength is appreciably large, the maximum may disappear. Finally, we compare simulation results with the analytical calculation and find good agreement between the analytical and numerical results.
A simple method to estimate interwell autocorrelation
Pizarro, J.O.S.; Lake, L.W.
1997-08-01
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
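Two of the three semivariogram models named above can be sketched directly (the truncated-fractal model is omitted); here `sill` stands for the variance plateau and `a` for the autocorrelation range, with the usual practical-range convention for the exponential model. Names and values are illustrative, not the paper's charts.

```python
import math

# Sketch of the spherical and exponential semivariogram models used in the
# interwell-autocorrelation method above. 'sill' is the variance plateau,
# 'a' the autocorrelation range; the exponential uses the common
# practical-range convention (reaches ~95% of the sill at h = a).

def spherical(h, sill, a):
    if h >= a:
        return sill
    r = h / a
    return sill * (1.5 * r - 0.5 * r ** 3)

def exponential(h, sill, a):
    return sill * (1.0 - math.exp(-3.0 * h / a))
```

Both rise from zero toward the sill as the lag h grows, which is what lets the method relate the ratio of areal to vertical variance to a range in each direction.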
Self-Calibrated Cluster Counts as a Probe of Primordial Non-Gaussianity
Oguri, Masamune; /KIPAC, Menlo Park
2009-05-07
We show that the ability to probe primordial non-Gaussianity with cluster counts is drastically improved by adding the excess variance of counts, which contains information on the clustering. The opposing dependences of changing the mass threshold and including primordial non-Gaussianity on the mass function and biasing indicate that self-calibrated cluster counts can break the degeneracy between primordial non-Gaussianity and the observable-mass relation. Based on a Fisher matrix analysis, we show that the count variance improves constraints on f{sub NL} by more than an order of magnitude. It exhibits little degeneracy with the dark energy equation of state. We forecast that the upcoming Hyper Suprime-Cam cluster surveys and the Dark Energy Survey will constrain primordial non-Gaussianity at the level {sigma}(f{sub NL}) {approx} 8, which is competitive with forecasted constraints from next-generation cosmic microwave background experiments.
Decision support for operations and maintenance (DSOM) system
Jarrell, Donald B.; Meador, Richard J.; Sisk, Daniel R.; Hatley, Darrel D.; Brown, Daryl R.; Keibel, Gary R.; Gowri, Krishnan; Reyes-Spindola, Jorge F.; Adams, Kevin J.; Yates, Kenneth R.; Eschbach, Elizabeth J.; Stratton, Rex C.
2006-03-21
A method for minimizing the life cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of: identifying a set of optimal operating conditions for the process; identifying and measuring the parameters necessary to characterize the actual operating condition of the process; validating the data generated by measuring those parameters; characterizing the actual condition of the process; identifying the optimal condition corresponding to the actual condition; comparing the optimal condition with the actual condition and identifying variances between the two; and drawing from a set of pre-defined algorithms, created using best engineering practices, an explanation of at least one likely source and at least one recommended remedial action for selected variances, and providing that explanation as an output to at least one user.
Automatic Estimation of the Radiological Inventory for the Dismantling of Nuclear Facilities
Garcia-Bermejo, R.; Felipe, A.; Gutierrez, S.; Salas, E.; Martin, N.
2008-01-15
The estimation of the radiological inventory of nuclear facilities to be dismantled is a process that combines information on the physical inventory of the whole plant with radiological survey data. The radiological inventory of all the components and civil structures of the plant can be estimated with mathematical models using a statistical approach. A computer application has been developed to obtain the radiological inventory automatically. Results: A computer application has been developed that estimates the radiological inventory from radiological measurements or from the characterization program. The application includes the statistical functions needed to estimate central tendency and variability, e.g., mean, median, variance, confidence intervals, and coefficients of variation. This application is a necessary tool for estimating the radiological inventory of a nuclear facility and a powerful aid to decision making in future sampling surveys.
Sparse matrix transform for fast projection to reduced dimension
Theiler, James P; Cao, Guangzhi; Bouman, Charles A
2010-01-01
We investigate three algorithms that use the sparse matrix transform (SMT) to produce variance-maximizing linear projections to a lower-dimensional space. The SMT expresses the projection as a sequence of Givens rotations and this enables computationally efficient implementation of the projection operator. The baseline algorithm uses the SMT to directly approximate the optimal solution that is given by principal components analysis (PCA). A variant of the baseline begins with a standard SMT solution, but prunes the sequence of Givens rotations to only include those that contribute to the variance maximization. Finally, a simpler and faster third algorithm is introduced; this also estimates the projection operator with a sequence of Givens rotations, but in this case, the rotations are chosen to optimize a criterion that more directly expresses the dimension reduction criterion.
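The building block shared by all three algorithms above is a Givens rotation chosen to zero the covariance of one coordinate pair, which concentrates variance on the rotated axes. The following is a minimal sketch of a single 2-D step (the full SMT greedily chains many such rotations; the covariance entries below are illustrative numbers).

```python
import math

# One SMT building block in isolation: a Givens rotation whose angle is
# chosen to zero the off-diagonal covariance of a coordinate pair, so the
# variance concentrates along the rotated axes. Illustrative numbers only.

def givens_angle(c11, c12, c22):
    # angle that zeros the rotated off-diagonal term
    return 0.5 * math.atan2(2.0 * c12, c11 - c22)

def rotate_cov(c11, c12, c22, theta):
    # returns R C R^T for R = [[cos, sin], [-sin, cos]]
    c, s = math.cos(theta), math.sin(theta)
    r11 = c * c * c11 + 2 * c * s * c12 + s * s * c22
    r22 = s * s * c11 - 2 * c * s * c12 + c * c * c22
    r12 = (c * c - s * s) * c12 + c * s * (c22 - c11)
    return r11, r12, r22

theta = givens_angle(2.0, 1.2, 1.0)
r11, r12, r22 = rotate_cov(2.0, 1.2, 1.0, theta)
```

For this 2×2 covariance the rotation decorrelates the pair exactly, preserves total variance, and puts the larger eigenvalue (2.8 here) on the first axis, which is why a pruned sequence of such rotations can approximate the PCA projection cheaply.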
Williams, Paul T.
2002-12-21
Context and Objective: Vigorous exercise, alcohol and weight loss are all known to increase HDL-cholesterol; however, it is not known whether these interventions raise low HDL as effectively as has been demonstrated for normal HDL. Design: Physician-supplied medical data from 7,288 male and 2,359 female runners were divided into five strata according to their self-reported usual running distance, reported alcohol intake, body mass index (BMI) or waist circumference. Within each stratum, the 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentiles for HDL-cholesterol were then determined. Bootstrap resampling of least-squares regression was applied to determine the cross-sectional relationships between these factors and each percentile of the HDL-cholesterol distribution. Results: In both sexes, the rise in HDL-cholesterol per unit of vigorous exercise or alcohol intake was at least twice as great at the 95th percentile as at the 5th percentile of the HDL-distribution. There was also a significant graded increase in the slopes relating exercise (km run) and alcohol intake to HDL between the 5th and the 95th percentile. Men's HDL-cholesterol decreased in association with fatness (BMI and waist circumference) more sharply at the 95th than at the 5th percentile of the HDL-distribution. Conclusions: Although exercise, alcohol and adiposity were all related to HDL-cholesterol, the elevation in HDL per km run or ounce of alcohol consumed, and reduction in HDL per kg of body weight (men only), was least when HDL was low and greatest when HDL was high. These cross-sectional relationships support the hypothesis that men and women who have low HDL-cholesterol will be less responsive to exercise and alcohol (and weight loss in men) as compared to those who have high HDL-cholesterol.
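The uncertainty-attachment step named above, bootstrap resampling of a least-squares regression, can be sketched as follows (the percentile-specific fits are omitted, and the data are synthetic, not the runners' records).

```python
import random

# Sketch of bootstrap resampling of a least-squares slope, the procedure
# used in the study to attach uncertainty to per-stratum regressions.
# Synthetic data; the percentile-specific regressions are not reproduced.

def ls_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def bootstrap_slopes(xs, ys, n_boot=500, seed=0):
    rng = random.Random(seed)
    n = len(xs)
    slopes = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        slopes.append(ls_slope([xs[i] for i in idx],
                               [ys[i] for i in idx]))
    return slopes

xs = list(range(20))
ys = [2.0 * x + ((-1) ** x) * 0.5 for x in xs]   # slope 2 plus small noise
slopes = sorted(bootstrap_slopes(xs, ys))
```

The sorted bootstrap slopes give an empirical distribution whose spread quantifies the regression uncertainty, exactly the role bootstrapping plays in the study's percentile comparisons.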
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Sensitivity of Satellite-Retrieved Cloud Properties to the Effective Variance of Cloud Droplet Size Distribution R.F. Arduini Science Applications International Corporation Hampton, Virginia P. Minnis and W.L. Smith, Jr. National Aeronautics and Space Administration Langley Research Center Hampton, Virginia J.K. Ayers and M.M. Khaiyer Analytical Services and Materials, Inc. P. Heck Cooperative Institute for Mesoscale Meteorological Studies/ University of Wisconsin-Madison Madison, Wisconsin
Effect of noise on the standard mapping
Karney, C.F.F.; Rechester, A.B.; White, R.B.
1981-03-01
The effect of a small amount of noise on the standard mapping is considered. Whenever the standard mapping possesses accelerator modes (where the action increases approximately linearly with time), the diffusion coefficient contains a term proportional to the reciprocal of the variance of the noise term. At large values of the stochasticity parameter, the accelerator modes exhibit a universal behavior. As a result the dependence of the diffusion coefficient on the stochasticity parameter also shows some universal behavior.
Stevens, L.; Hooks, D.; Migliori, A.
2010-01-01
Elastic tensors for organic molecular crystals vary significantly among different measurements. To understand better the origin of these differences, Brillouin scattering and resonant ultrasound spectroscopy measurements were made on the same specimen for single crystal pentaerythritol tetranitrate. The results differ significantly despite mitigation of sample-dependent contributions to errors. The frequency dependence and vibrational modes probed for both measurements are discussed in relation to the observed tensor variance.
Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem
Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; Chowdhary, Kenny; Debusschere, Bert; Swiler, Laura P.; Eldred, Michael S.
2015-01-01
In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
Part II - The effect of data on waste behaviour: The South African waste information system
Godfrey, Linda; Scott, Dianne; Difford, Mark; Trois, Cristina
2012-11-15
Highlights: • This empirical study explores the relationship between data and resultant waste knowledge. • The study shows that 'Experience, Data and Theory' account for 54.1% of the variance in knowledge. • A strategic framework for Municipalities emerged from this study. - Abstract: Combining the process of learning and the theory of planned behaviour into a new theoretical framework provides an opportunity to explore the impact of data on waste behaviour, and consequently on waste management, in South Africa. Fitting the data to the theoretical framework shows that there are only three constructs which have a significant effect on behaviour, viz. experience, knowledge, and perceived behavioural control (PBC). Knowledge has a significant influence on all three of the antecedents to behavioural intention (attitude, subjective norm and PBC). However, it is PBC, and not intention, that has the greatest influence on waste behaviour. While respondents may have an intention to act, this intention does not always manifest as actual waste behaviour, suggesting limited volitional control. The theoretical framework accounts for 53.7% of the variance in behaviour, suggesting significant external influences on behaviour not accounted for in the framework. While the theoretical model remains the same, respondents in public and private organisations represent two statistically significant sub-groups in the data set. The theoretical framework accounts for 47.8% of the variance in behaviour of respondents in public waste organisations and 57.6% of the variance in behaviour of respondents in private organisations. The results suggest that respondents in public and private waste organisations are subject to different structural forces that shape knowledge, intention, and resultant waste behaviour.
A Comparison of Image Quality Evaluation Techniques for Transmission X-Ray Microscopy
Bolgert, Peter J; /Marquette U. /SLAC
2012-08-31
Beamline 6-2c at Stanford Synchrotron Radiation Lightsource (SSRL) is capable of Transmission X-ray Microscopy (TXM) at 30 nm resolution. Raw images from the microscope must undergo extensive image processing before publication. Since typical data sets normally contain thousands of images, it is necessary to automate the image processing workflow as much as possible, particularly for the aligning and averaging of similar images. Currently we align images using the 'phase correlation' algorithm, which calculates the relative offset of two images by multiplying them in the frequency domain. For images containing high frequency noise, this algorithm will align noise with noise, resulting in a blurry average. To remedy this we multiply the images by a Gaussian function in the frequency domain, so that the algorithm ignores the high frequency noise while properly aligning the features of interest (FOI). The shape of the Gaussian is manually tuned by the user until the resulting average image is sharpest. To automatically optimize this process, it is necessary for the computer to evaluate the quality of the average image by quantifying its sharpness. In our research we explored two image sharpness metrics, the variance method and the frequency threshold method. The variance method uses the variance of the image as an indicator of sharpness while the frequency threshold method sums up the power in a specific frequency band. These metrics were tested on a variety of test images, containing both real and artificial noise. To apply these sharpness metrics, we designed and built a MATLAB graphical user interface (GUI) called 'Blur Master.' We found that it is possible for blurry images to have a large variance if they contain high amounts of noise. On the other hand, we found the frequency method to be quite reliable, although it is necessary to manually choose suitable limits for the frequency band. 
Further research must be performed to design an algorithm which automatically selects these parameters.
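The two sharpness metrics compared above can be sketched in 1-D for brevity (the Blur Master GUI operates on 2-D images): the variance method is simply the pixel variance, and the frequency-threshold method sums spectral power in a hand-chosen band. The band limits and signals below are illustrative.

```python
import math

# 1-D sketches of the two image-sharpness metrics: pixel variance, and
# summed spectral power in a frequency band. Band limits are illustrative,
# chosen by hand as described in the text. Naive DFT is fine for small n.

def variance_metric(sig):
    m = sum(sig) / len(sig)
    return sum((x - m) ** 2 for x in sig) / len(sig)

def band_power(sig, k_lo, k_hi):
    n = len(sig)
    total = 0.0
    for k in range(k_lo, k_hi):
        re = sum(x * math.cos(2 * math.pi * k * t / n)
                 for t, x in enumerate(sig))
        im = sum(x * math.sin(2 * math.pi * k * t / n)
                 for t, x in enumerate(sig))
        total += re * re + im * im
    return total

n = 64
sharp = [1.0 if 24 <= t < 40 else 0.0 for t in range(n)]    # crisp edges
blurry = [sum(sharp[(t + d) % n] for d in range(-4, 5)) / 9.0
          for t in range(n)]                                 # 9-point smoothing
```

Both metrics rank the crisp signal above its smoothed copy here; the text's caveat still applies, since a noisy-but-blurry image can score a large variance while the band-power metric stays more reliable.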
Measuring skewness of red blood cell deformability distribution by laser ektacytometry
Nikitin, S Yu; Priezzhev, A V; Lugovtsov, A E; Ustinov, V D
2014-08-31
An algorithm is proposed for measuring the parameters of red blood cell deformability distribution based on laser diffractometry of red blood cells in shear flow (ektacytometry). The algorithm is tested on specially prepared samples of rat blood. In these experiments we succeeded in measuring the mean deformability, deformability variance and skewness of red blood cell deformability distribution with errors of 10%, 15% and 35%, respectively. (laser biophotonics)
MEASURING X-RAY VARIABILITY IN FAINT/SPARSELY SAMPLED ACTIVE GALACTIC NUCLEI
Allevato, V.; Paolillo, M.; Papadakis, I.; Pinto, C.
2013-07-01
We study the statistical properties of the normalized excess variance of a variability process characterized by a ''red-noise'' power spectral density (PSD), as in the case of active galactic nuclei (AGNs). We perform Monte Carlo simulations of light curves, assuming both a continuous and a sparse sampling pattern and various signal-to-noise ratios (S/Ns). We show that the normalized excess variance is a biased estimate of the variance even in the case of continuously sampled light curves. The bias depends on the PSD slope and on the sampling pattern, but not on the S/N. We provide a simple formula to account for the bias, which yields unbiased estimates with an accuracy better than 15%. We show that normalized excess variance estimates based on single light curves (especially for sparse sampling and S/N < 3) are highly uncertain (even if corrected for bias) and we propose instead the use of an ''ensemble estimate'', based on multiple light curves of the same object, or on the light curves of many objects. These estimates have symmetric distributions and known errors, and can also be corrected for biases. We use our results to estimate the ability to measure the intrinsic source variability in current data, and show that they could also be useful in planning the observing strategy of future surveys, such as those provided by X-ray missions studying distant and/or faint AGN populations, and, more generally, in the estimation of the variability amplitude of sources that will result from future surveys such as Pan-STARRS and LSST.
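The estimator under study is the standard normalized excess variance for a light curve with fluxes x_i and per-point measurement errors err_i; a minimal sketch follows (the usual definition, without the paper's bias corrections, and with made-up flux values).

```python
# Sketch of the standard normalized excess variance estimator for a light
# curve: (sample variance minus mean squared measurement error) divided by
# the squared mean flux. The paper's bias corrections are not included;
# the flux values below are illustrative.

def norm_excess_variance(flux, err):
    n = len(flux)
    mean = sum(flux) / n
    s2 = sum((x - mean) ** 2 for x in flux) / (n - 1)   # sample variance
    mse = sum(e * e for e in err) / n                   # mean squared error
    return (s2 - mse) / mean ** 2

flux = [10.0, 12.0, 9.0, 11.0, 13.0, 10.0]
err = [0.5] * 6
nxv = norm_excess_variance(flux, err)
```

Subtracting the mean squared measurement error removes the noise contribution on average, but for short red-noise light curves the result remains biased and highly scattered, which is what motivates the ensemble estimates proposed above.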
Statistical assessment of Monte Carlo distributional tallies
Kiedrowski, Brian C; Solomon, Clell J
2010-12-09
Four tests are developed to assess the statistical reliability of distributional or mesh tallies. To this end, the relative variance density function is developed and its moments are studied using simplified, non-transport models. The statistical tests are performed upon the results of MCNP calculations of three different transport test problems and appear to show that the tests are appropriate indicators of global statistical quality.
PP Approval Memo from Patricia Worthington | Department of Energy
Office of Environmental Management (EM)
Microsoft Word - SEC J_Appendix O - Program Mgt and Cost Reports and Government_Negotiated
National Nuclear Security Administration (NNSA)
SECTION J APPENDIX O PROGRAM MANAGEMENT AND COST REPORTS The Contractor shall submit periodic cost, schedule, and technical performance plans and reports in such form and substance as required by the Contracting Officer. Reference Section J, Appendix A, Statement of Work, Chapter I, 4.2. Cost reports will include at a minimum: 1. Monthly general management reports to summarize schedule, labor, and cost plans and status, and provide explanations of status variances from plans. The
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
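The object being trained above, the BMA predictive density, is a weighted mixture of kernels centred on the member forecasts; a minimal sketch with Gaussian kernels follows. The weights and variance would come from EM or MCMC training as compared in the abstract; the values below are purely illustrative.

```python
import math

# Minimal sketch of a BMA predictive density with Gaussian kernels: a
# weighted mixture of normals centred on each ensemble member's forecast.
# The weights w_k and variance sigma2 are the quantities estimated by EM
# or DREAM-MCMC in the study; the numbers here are illustrative only.

def bma_pdf(y, forecasts, weights, sigma2):
    dens = 0.0
    norm = math.sqrt(2.0 * math.pi * sigma2)
    for f, w in zip(forecasts, weights):
        dens += w * math.exp(-(y - f) ** 2 / (2.0 * sigma2)) / norm
    return dens

forecasts = [20.1, 21.5, 19.4]   # three ensemble member forecasts
weights = [0.5, 0.3, 0.2]        # trained weights; must sum to 1
pdf_near = bma_pdf(20.0, forecasts, weights, sigma2=1.0)
pdf_far = bma_pdf(30.0, forecasts, weights, sigma2=1.0)
```

Evaluating the mixture at a verifying observation is exactly how the training objective (the ensemble's log-likelihood) is built, so good weight and variance estimates translate directly into a calibrated forecast density.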
Parametric Behaviors of CLUBB in Simulations of Low Clouds in the Community Atmosphere Model (CAM)
Guo, Zhun; Wang, Minghuai; Qian, Yun; Larson, Vincent E.; Ghan, Steven J.; Ovchinnikov, Mikhail; Bogenschutz, Peter; Gettelman, A.; Zhou, Tianjun
2015-07-03
In this study, we investigate the sensitivity of simulated low clouds to 14 selected tunable parameters of Cloud Layers Unified By Binormals (CLUBB), a higher order closure (HOC) scheme, and 4 parameters of the Zhang-McFarlane (ZM) deep convection scheme in the Community Atmosphere Model version 5 (CAM5). A quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space and a generalized linear model is applied to study the responses of simulated cloud fields to tunable parameters. Our results show that the variance in simulated low-cloud properties (cloud fraction and liquid water path) can be explained by the selected tunable parameters in two different ways: macrophysics itself and its interaction with microphysics. First, the parameters related to dynamic and thermodynamic turbulent structure and double Gaussians closure are found to be the most influential parameters for simulating low clouds. The spatial distributions of the parameter contributions show clear cloud-regime dependence. Second, because of the coupling between cloud macrophysics and cloud microphysics, the coefficient of the dissipation term in the total water variance equation is influential. This parameter affects the variance of in-cloud cloud water, which further influences microphysical process rates, such as autoconversion, and eventually low-cloud fraction. This study improves understanding of HOC behavior associated with parameter uncertainties and provides valuable insights for the interaction of macrophysics and microphysics.
Entropy vs. energy waveform processing: A comparison based on the heat equation
Hughes, Michael S.; McCarthy, John E.; Bruillard, Paul J.; Marsh, Jon N.; Wickline, Samuel A.
2015-05-25
Virtually all modern imaging devices collect electromagnetic or acoustic waves and use the energy carried by these waves to determine pixel values to create what is basically an “energy” picture. However, waves also carry “information”, as quantified by some form of entropy, and this may also be used to produce an “information” image. Numerous published studies have demonstrated the advantages of entropy, or “information imaging”, over conventional methods. The most sensitive information measure appears to be the joint entropy of the collected wave and a reference signal. The sensitivity of repeated experimental observations of a slowly-changing quantity may be defined as the mean variation (i.e., observed change) divided by mean variance (i.e., noise). Wiener integration permits computation of the required mean values and variances as solutions to the heat equation, permitting estimation of their relative magnitudes. There always exists a reference, such that joint entropy has larger variation and smaller variance than the corresponding quantities for signal energy, matching observations of several studies. Moreover, a general prescription for finding an “optimal” reference for the joint entropy emerges, which also has been validated in several studies.
The annual cycle in the tropical Pacific Ocean based on assimilated ocean data from 1983 to 1992
Smith, T.M.; Chelliah, M.
1995-06-01
An analysis of the tropical Pacific Ocean from January 1983 to December 1992 is used to describe the annual cycle, with the main focus on subsurface temperature variations. Some analyses of ocean-current variations are also considered. Monthly mean fields are generated by assimilation of surface and subsurface temperature observations from ships and buoys. Comparisons with observations show that the analysis reasonably describes large-scale ocean thermal variations. Ocean currents are not assimilated and do not compare as well with observations. However, the ocean-current variations in the analysis are qualitatively similar to the known variations given by others. The authors use harmonic analysis to separate the mean annual cycle and estimate its contribution to total variance. The analysis shows that in most regions the annual cycle of subsurface thermal variations is larger than surface variations and that these variations are associated with changes in the depth of the thermocline. The annual cycle accounts for most of the total surface variance poleward of about 10{degrees} latitude but accounts for much less surface and subsurface total variance near the equator. Large subsurface annual cycles occur near 10{degrees}N associated with shifts of the intertropical convergence zone and along the equator associated with the annual cycle of equatorial wind stress. The hemispherically asymmetric depths of the 20{degrees}C isotherms indicate that the large Southern Hemisphere warm pool, which extends to near the equator, may play an important role in thermal variations on the equator. 51 refs., 18 figs., 1 tab.
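The harmonic-analysis step described above, fitting the annual (first) harmonic to monthly means and asking what fraction of the total variance it explains, can be sketched as follows (synthetic monthly values, not the assimilated fields).

```python
import math

# Sketch of harmonic analysis as used above: project 12 monthly means onto
# the annual (first) harmonic and report the fraction of total variance it
# explains. Synthetic data, not the assimilated ocean fields.

def annual_harmonic_variance_fraction(monthly):
    n = len(monthly)
    mean = sum(monthly) / n
    a = 2.0 / n * sum((x - mean) * math.cos(2 * math.pi * t / n)
                      for t, x in enumerate(monthly))
    b = 2.0 / n * sum((x - mean) * math.sin(2 * math.pi * t / n)
                      for t, x in enumerate(monthly))
    harm_var = (a * a + b * b) / 2.0          # variance of the fitted harmonic
    tot_var = sum((x - mean) ** 2 for x in monthly) / n
    return harm_var / tot_var

# A pure annual cosine is explained entirely by the first harmonic;
# adding a semiannual component lowers the explained fraction.
pure = [25.0 + 3.0 * math.cos(2 * math.pi * t / 12) for t in range(12)]
mixed = [x + 1.0 * math.cos(4 * math.pi * t / 12) for t, x in enumerate(pure)]
frac_pure = annual_harmonic_variance_fraction(pure)
frac_mixed = annual_harmonic_variance_fraction(mixed)
```

Mapping this fraction over the basin is what yields statements like "the annual cycle accounts for most of the total surface variance poleward of about 10° latitude."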
Lifestyle Factors in U.S. Residential Electricity Consumption
Sanquist, Thomas F.; Orr, Heather M.; Shui, Bin; Bittner, Alvah C.
2012-03-30
A multivariate statistical approach to lifestyle analysis of residential electricity consumption is described and illustrated. Factor analysis of selected variables from the 2005 U.S. Residential Energy Consumption Survey (RECS) identified five lifestyle factors reflecting social and behavioral choices associated with air conditioning, laundry usage, personal computer usage, climate zone of residence, and TV use. These factors were also estimated for 2001 RECS data. Multiple regression analysis using the lifestyle factors yields solutions accounting for approximately 40% of the variance in electricity consumption for both years. By adding the associated household and market characteristics of income, local electricity price and access to natural gas, variance accounted for is increased to approximately 54%. Income contributed only {approx}1% unique variance to the 2005 and 2001 models, indicating that lifestyle factors reflecting social and behavioral choices better account for consumption differences than income. This was not surprising given the 4-fold range of energy use at differing income levels. Geographic segmentation of factor scores is illustrated, and shows distinct clusters of consumption and lifestyle factors, particularly in suburban locations. The implications for tailored policy and planning interventions are discussed in relation to lifestyle issues.
Teleportation of squeezing: Optimization using non-Gaussian resources
Dell'Anno, Fabio; De Siena, Silvio; Illuminati, Fabrizio; Adesso, Gerardo
2010-12-15
We study the continuous-variable quantum teleportation of states, statistical moments of observables, and scale parameters such as squeezing. We investigate the problem both in ideal and imperfect Vaidman-Braunstein-Kimble protocol setups. We show how the teleportation fidelity is maximized and the difference between output and input variances is minimized by using suitably optimized entangled resources. Specifically, we consider the teleportation of coherent squeezed states, exploiting squeezed Bell states as entangled resources. This class of non-Gaussian states, introduced by Illuminati and co-workers [F. Dell'Anno, S. De Siena, L. Albano, and F. Illuminati, Phys. Rev. A 76, 022301 (2007); F. Dell'Anno, S. De Siena, and F. Illuminati, ibid. 81, 012333 (2010)], includes photon-added and photon-subtracted squeezed states as special cases. At variance with the case of entangled Gaussian resources, the use of entangled non-Gaussian squeezed Bell resources allows one to choose different optimization procedures that lead to inequivalent results. Performing two independent optimization procedures, one can either maximize the state teleportation fidelity, or minimize the difference between input and output quadrature variances. The two different procedures are compared depending on the degrees of displacement and squeezing of the input states and on the working conditions in ideal and nonideal setups.
Qian, Yun; Yan, Huiping; Hou, Zhangshuan; Johannesson, G.; Klein, Stephen A.; Lucas, Donald; Neale, Richard; Rasch, Philip J.; Swiler, Laura P.; Tannahill, John; Wang, Hailong; Wang, Minghuai; Zhao, Chun
2015-04-10
We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that of the 22 parameters perturbed in the cloud ensemble, the six with the greatest influence on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approach or the concomitant parameters selected. Generally, the generalized linear model (GLM) is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in the mid-latitude continental regions, but is very small in tropical continental regions.
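The Latin hypercube design used for the cloud ensemble can be sketched with `scipy.stats.qmc` (the dimensions match the ensemble sizes above, but the parameter ranges are placeholders, not the actual CAM5 bounds):

```python
import numpy as np
from scipy.stats import qmc

# 22 cloud/convection parameters, 1100 design points, mirroring the
# size of the paper's cloud ensemble; ranges below are assumed.
n_params, n_runs = 22, 1100
sampler = qmc.LatinHypercube(d=n_params, seed=42)
unit_sample = sampler.random(n=n_runs)            # points in [0, 1)^22

# Scale each column into an assumed physical range for that parameter.
lower = np.zeros(n_params)
upper = np.ones(n_params) * 10.0
design = qmc.scale(unit_sample, lower, upper)

# Each 1-D projection is stratified: exactly one point per 1/n_runs bin.
bins = np.floor(unit_sample[:, 0] * n_runs).astype(int)
print(design.shape, len(set(bins)))
```

The stratification is what lets a modest number of runs cover a 22-dimensional space far more evenly than plain random sampling.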
Demonstration of Data Center Energy Use Prediction Software
Coles, Henry; Greenberg, Steve; Tschudi, William
2013-09-30
This report documents a demonstration of a software modeling tool from Romonet that was used to predict energy use and forecast energy use improvements in an operating data center. The demonstration was conducted in a conventional data center with a 15,500 square foot raised floor and an IT equipment load of 332 kilowatts. It was cooled using traditional computer room air handlers and a compressor-based chilled water system. The data center also utilized an uninterruptible power supply system for power conditioning and backup. Electrical energy monitoring was available at a number of locations within the data center. The software modeling tool predicted the energy use of the data center's cooling and electrical power distribution systems, as well as electrical energy use and heat removal for the site. The actual energy used by the computer equipment was recorded from power distribution devices located at each computer equipment row. The model simulated the total energy use in the data center and supporting infrastructure and predicted energy use at energy-consuming points throughout the power distribution system. The initial predicted power levels were compared to actual meter readings and were found to be within approximately 10 percent at a particular measurement point, resulting in a site overall variance of 4.7 percent. Some variances were investigated, and more accurate information was entered into the model. In this case the overall variance was reduced to approximately 1.2 percent. The model was then used to predict energy use for various modification opportunities to the data center in successive iterations. These included increasing the IT equipment load, adding computer room air handler fan speed controls, and adding a water-side economizer. The demonstration showed that the software can be used to simulate data center energy use and create a model that is useful for investigating energy efficiency design changes.
Consequences of proposed changes to Clean Water Act thermal discharge requirements
Veil, J.A.; Moses, D.O.
1995-12-31
This paper summarizes three studies that examined the economic and environmental impact on the power industry of (1) limiting thermal mixing zones to 1,000 feet, and (2) eliminating the Clean Water Act (CWA) §316(a) variance. Both of these proposed changes were included in S. 1081, a 1991 Senate bill to reauthorize the CWA. The bill would not have provided for grandfathering plants already using the variance or mixing zones larger than 1,000 feet. Each of the two changes to the existing thermal discharge requirements was independently evaluated. Power companies were asked what they would do if these two changes were imposed. Most plants affected by the proposed changes would retrofit cooling towers and some would retrofit diffusers. Assuming that all affected plants would proportionally follow the same options as the surveyed plants, the estimated capital cost of retrofitting cooling towers or diffusers at all affected plants ranges from $21.4 billion to $24.4 billion. Both cooling towers and diffusers exert a 1%-5.8% energy penalty on a plant's output. Consequently, the power companies must generate additional power if they install those technologies. The estimated cost of the additional power ranges from $10 billion to $18.4 billion over 20 years. Generation of the extra power would emit over 8 million tons per year of additional carbon dioxide. Operation of the new cooling towers would cause more than 1.5 million gallons per minute of additional evaporation. Neither the restricted mixing zone size nor the elimination of the §316(a) variance was adopted into law. More recent proposed changes to the Clean Water Act have not included either of these provisions, but in the future, other Congresses might attempt to reintroduce these types of changes.
Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.
2015-12-23
A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called 'alien cache' to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites.
Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.
Nole, Michael; Daigle, Hugh; Mohanty, Kishore; Cook, Ann; Hillman, Jess
2015-12-15
We have developed a 3D methane hydrate reservoir simulator to model marine methane hydrate systems. Our simulator couples highly nonlinear heat and mass transport equations and includes heterogeneous sedimentation, in-situ microbial methanogenesis, the influence of pore size contrast on solubility gradients, and the impact of salt exclusion from the hydrate phase on dissolved methane equilibrium in pore water. Using environmental parameters from Walker Ridge in the Gulf of Mexico, we first simulate hydrate formation in and around a thin, dipping, planar sand stratum surrounded by clay lithology as it is buried to 295 mbsf. We find that with sufficient methane being supplied by organic methanogenesis in the clays, a 200x pore size contrast between clays and sands allows for a strong enough concentration gradient to significantly drop the concentration of methane hydrate in clays immediately surrounding a thin sand layer, a phenomenon that is observed in well log data. Building upon previous work, our simulations account for the increase in sand-clay solubility contrast with depth from about 1.6% near the top of the sediment column to 8.6% at depth, which leads to a progressive strengthening of the diffusive flux of methane with time. By including an exponentially decaying organic methanogenesis input to the clay lithology with depth, we see a decrease in the aqueous methane supplied to the clays surrounding the sand layer with time, which works to further enhance the contrast in hydrate saturation between the sand and surrounding clays. Significant diffusive methane transport is observed in a clay interval of about 11 m above the sand layer and about 4 m below it, which matches well log observations. The clay-sand pore size contrast alone is not enough to completely eliminate hydrate (as observed in logs), because the diffusive flux of aqueous methane due to a contrast in pore size occurs more slowly than the rate at which methane is supplied via organic methanogenesis.
Therefore, it is likely that additional mechanisms are at play, notably bound water activity reduction in clays. Three-dimensionality allows for inclusion of lithologic heterogeneities, which focus fluid flow and subsequently allow for heterogeneity in the methane migration mechanisms that dominate in marine sediments at a local scale. Incorporating recently acquired 3D seismic data from Walker Ridge to inform the lithologic structure of our modeled reservoir, we show that even with deep advective sourcing of methane along highly permeable pathways, local hydrate accumulations can be sourced either by diffusive or advective methane flux; advectively-sourced hydrates accumulate evenly in highly permeable strata, while diffusively-sourced hydrates are characterized by thin strata-bound intervals with high clay-sand pore size contrasts.
Statistical techniques for characterizing residual waste in single-shell and double-shell tanks
Jensen, L., Fluor Daniel Hanford
1997-02-13
A primary objective of the Hanford Tank Initiative (HTI) project is to develop methods to estimate the inventory of residual waste in single-shell and double-shell tanks. A second objective is to develop methods to determine the boundaries of waste that may be in the waste plume in the vadose zone. This document presents statistical sampling plans that can be used to estimate the inventory of analytes within the residual waste within a tank. Sampling plans for estimating the inventory of analytes within the waste plume in the vadose zone are also presented. Inventory estimates can be used to classify the residual waste with respect to chemical and radiological hazards. Based on these estimates, it will be possible to make decisions regarding the final disposition of the residual waste. Four sampling plans for the residual waste in a tank are presented. The first plan assumes that the residual waste can be divided into disjoint strata on the basis of some physical characteristic, with waste samples obtained from randomly selected locations within each stratum. In the second plan, waste samples are obtained from randomly selected locations within the waste. The third and fourth plans are similar to the first two, except that composite samples are formed from multiple samples. Common to the four plans is that, in the laboratory, replicate analytical measurements are obtained from homogenized waste samples. The statistical sampling plans for the residual waste are similar to the statistical sampling plans developed for the tank waste characterization program. In that program, the statistical sampling plans required multiple core samples of waste, and replicate analytical measurements from homogenized core segments.
A statistical analysis of the analytical data, obtained from use of the statistical sampling plans developed for the characterization program or from the HTI project, provides estimates of mean analyte concentrations and confidence intervals on the mean. In addition, the statistical analysis provides estimates of spatial and measurement variabilities. The magnitudes of these sources of variability are used to determine how well the inventory of the analytes in the waste has been estimated. This document provides statistical sampling plans that can be used to estimate the inventory of the analytes in the residual waste in single-shell and double-shell tanks and in the waste plume in the vadose zone.
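A minimal sketch of the kind of nested estimate such sampling plans support, assuming invented concentrations and only two variance components (spatial variability between sampling locations, analytical variability between replicate measurements):

```python
import numpy as np

# Hypothetical two-stage design: 8 sampling locations, 2 replicate
# analyses per location; all values below are invented, not tank data.
rng = np.random.default_rng(11)
n_loc, n_rep = 8, 2
true_conc = 100.0
spatial = rng.normal(0.0, 5.0, size=n_loc)           # location effects
meas = rng.normal(0.0, 1.0, size=(n_loc, n_rep))     # replicate noise
data = true_conc + spatial[:, None] + meas

loc_means = data.mean(axis=1)
grand_mean = loc_means.mean()
# The standard error of the grand mean reflects both variance components.
se = loc_means.std(ddof=1) / np.sqrt(n_loc)
ci = (grand_mean - 2.36 * se, grand_mean + 2.36 * se)  # t_{7, 0.975} ~ 2.36
print(round(grand_mean, 1), [round(v, 1) for v in ci])
```

Replicates within homogenized samples estimate the analytical component; differences between locations estimate the spatial component, and together they determine how wide the inventory confidence interval must be.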
Safety criteria for organic watch list tanks at the Hanford Site
Meacham, J.E., Westinghouse Hanford
1996-08-01
This document reviews the hazards associated with the storage of organic complexant salts in Hanford Site high-level waste single-shell tanks. The results of this analysis were used to categorize tank wastes as safe, conditionally safe, or unsafe. Sufficient data were available to categorize 67 tanks; 63 tanks were categorized as safe, and four tanks were categorized as conditionally safe. No tanks were categorized as unsafe. The remaining 82 SSTs lack sufficient data to be categorized. Historic tank data and an analysis of variance model were used to prioritize the remaining tanks for characterization.
Implementation of a TMP Advanced Quality Control System at a Newsprint Manufacturing Plant
Sebastien Kidd
2006-02-14
This project provided for the implementation of an advanced, model predictive multivariable controller that integrates with the mill's existing distributed control system. The method provides real-time, online predictive models and modifies control actions to maximize quality and minimize energy costs. Using software sensors, the system can predict difficult-to-measure quality and process variables and make necessary process control decisions to accurately control pulp quality while minimizing electrical usage. This method of control has allowed Augusta Newsprint Company to optimize the operation of its thermomechanical pulp (TMP) mill for lower energy consumption and lower pulp quality variance.
Gamouras, A.; Britton, M.; Khairy, M. M.; Mathew, R.; Hall, K. C.; Dalacu, D.; Poole, P.; Poitras, D.; Williams, R. L.
2013-12-16
We demonstrate the selective optical excitation and detection of subsets of quantum dots (QDs) within an InAs/InP ensemble using a SiO{sub 2}/Ta{sub 2}O{sub 5}-based optical microcavity. The low variance of the exciton transition energy and dipole moment tied to the narrow linewidth of the microcavity mode is expected to facilitate effective qubit encoding and manipulation in a quantum dot ensemble with ease of quantum state readout relative to qubits encoded in single quantum dots.
Arbanas, Goran; Dunn, Michael E; Larson, Nancy M; Leal, Luiz C; Williams, Mark L
2012-01-01
Convergence properties of the Legendre expansion of a Doppler-broadened double-differential elastic neutron scattering cross section of {sup 238}U near the 6.67 eV resonance at a temperature of 10{sup 3} K are studied. The variance of the Legendre expansion relative to a reference Monte Carlo computation is used as a measure of convergence and is computed for as many as 15 terms in the Legendre expansion. When the outgoing energy equals the incoming energy, it is found that the Legendre expansion converges very slowly. Therefore, a supplementary method of computing many higher-order terms is suggested and employed for this special case.
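The slow convergence for a sharply peaked angular distribution can be reproduced with a generic stand-in (the function below is illustrative, not the {sup 238}U cross section): the mean squared error of an n-term Legendre least-squares fit falls only gradually as terms are added.

```python
import numpy as np
from numpy.polynomial import legendre

# Stand-in for a forward-peaked angular distribution in mu = cos(theta);
# the true double-differential cross section is not reproduced here.
mu = np.linspace(-1.0, 1.0, 2001)
peaked = np.exp(-200.0 * (1.0 - mu))   # concentrated near mu = 1

def expansion_variance(n_terms):
    """Mean squared error of an n-term Legendre least-squares fit."""
    coeffs = legendre.legfit(mu, peaked, deg=n_terms - 1)
    return np.mean((legendre.legval(mu, coeffs) - peaked) ** 2)

for n in (5, 10, 15):
    print(n, expansion_variance(n))
```

Even 15 terms leave a visible residual for a function this peaked, which is the regime where the abstract's supplementary higher-order method becomes necessary.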
In-Situ Real Time Monitoring and Control of Mold Making and Filling Processes: Final Report
Mohamed Abdelrahman; Kenneth Currie
2010-12-22
This project presents a model for addressing several objectives envisioned by the metal casting industries through the integration of research and educational components. It provides an innovative approach to introducing technologies for real-time characterization of sand molds and lost foam patterns and for monitoring of the mold filling process. The technology developed will enable better control over the casting process and is expected to reduce scrap and variance in casting quality. A strong educational component is integrated into the research plan to increase industry professionals' awareness of the potential benefits of the developed technology and of cross-cutting technologies.
Seasonal Case Studies Reveal Significant Variance in Large-Scale Forcing Data Submitter: Xie, S., Lawrence Livermore National Laboratory Area of Research: General Circulation and Single Column Models/Parameterizations Working Group(s): Cloud Modeling Journal Reference: Xie, S, R.T Cederwall, M. Zhang, and J.J. Yio, Comparison of SCM and CSRM forcing data derived from the ECMWF model and from objective analysis at the ARM SGP site, J. Geophys. Res., 108(D16), 4499, doi:10.1029/2003JD003541, 2003.
Transport Test Problems for Hybrid Methods Development
Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.; McDonald, Benjamin S.
2011-12-28
This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.
Quality Work Plan Checklist and Resources - Section 1
Quality Work Plan Checklist and Resources - Section 1 State staff can use this list of questions and related resources to help implement the WAP Quality Work Plan. Each question includes reference to where in 15-4 the guidance behind the question is found, and where in the 2015 Application Package you will describe the answers to DOE. App Section 15-4 Section Question Yes No Resources V.5.1 1 Are you on track to submit current field guides and standards, including any necessary variance
Schilling, Oleg; Mueschke, Nicholas J.
2010-10-18
Data from a 1152 x 760 x 1280 direct numerical simulation (DNS) of a transitional Rayleigh-Taylor mixing layer modeled after a small Atwood number water channel experiment is used to comprehensively investigate the structure of mean and turbulent transport and mixing. The simulation had physical parameters and initial conditions approximating those in the experiment. The budgets of the mean vertical momentum, heavy-fluid mass fraction, turbulent kinetic energy, turbulent kinetic energy dissipation rate, heavy-fluid mass fraction variance, and heavy-fluid mass fraction variance dissipation rate equations are constructed using Reynolds averaging applied to the DNS data. The relative importance of mean and turbulent production, turbulent dissipation and destruction, and turbulent transport are investigated as a function of Reynolds number and across the mixing layer to provide insight into the flow dynamics not presently available from experiments. The analysis of the budgets supports the assumption for small Atwood number, Rayleigh-Taylor driven flows that the principal transport mechanisms are buoyancy production, turbulent production, turbulent dissipation, and turbulent diffusion (shear and mean field production are negligible). As the Reynolds number increases, the turbulent production in the turbulent kinetic energy dissipation rate equation becomes the dominant production term, while the buoyancy production plateaus. Distinctions between momentum and scalar transport are also noted, where the turbulent kinetic energy and its dissipation rate both grow in time and are peaked near the center plane of the mixing layer, while the heavy-fluid mass fraction variance and its dissipation rate initially grow and then begin to decrease as mixing progresses and reduces density fluctuations.
All terms in the transport equations generally grow or decay, with no qualitative change in their profile, except for the pressure flux contribution to the total turbulent kinetic energy flux, which changes sign early in time (a countergradient effect). The production-to-dissipation ratios corresponding to the turbulent kinetic energy and heavy-fluid mass fraction variance are large and vary strongly at small evolution times, decrease with time, and nearly asymptote as the flow enters a self-similar regime. The late-time turbulent kinetic energy production-to-dissipation ratio is larger than observed in shear-driven turbulent flows. The order of magnitude estimates of the terms in the transport equations are shown to be consistent with the DNS at late time, and also confirm both the dominant terms and their evolutionary behavior. These results are useful for identifying the dynamically important terms requiring closure, and assessing the accuracy of the predictions of Reynolds-averaged Navier-Stokes and large-eddy simulation models of turbulent transport and mixing in transitional Rayleigh-Taylor instability-generated flow.
Initial Evidence for Self-Organized Criticality in Electric Power System Blackouts
Carreras, B.A.; Dobson, I.; Newman, D.E.; Poole, A.B.
2000-01-04
We examine correlations in a time series of electric power system blackout sizes using scaled window variance analysis and R/S statistics. The data show some evidence of long-time correlations and have a Hurst exponent near 0.7. Large blackouts tend to correlate with further large blackouts after a long time interval. Similar effects are also observed in many other complex systems exhibiting self-organized criticality. We discuss this initial evidence and possible explanations for self-organized criticality in power system blackouts. Self-organized criticality, if fully confirmed in power systems, would suggest new approaches to understanding and possibly controlling blackouts.
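A bare-bones rescaled-range (R/S) estimator of the Hurst exponent can be sketched as follows (a generic illustration, not the authors' analysis code); for uncorrelated noise the slope should come out near 0.5, whereas the blackout series above yields about 0.7.

```python
import numpy as np

def hurst_rs(x, window_sizes):
    """Estimate the Hurst exponent from the slope of log(R/S) vs log(n)."""
    rs_vals = []
    for n in window_sizes:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        rs = []
        for c in chunks:
            dev = np.cumsum(c - c.mean())   # cumulative deviation from mean
            r = dev.max() - dev.min()       # range of the deviations
            s = c.std()                     # standard deviation of the chunk
            if s > 0:
                rs.append(r / s)
        rs_vals.append(np.mean(rs))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(1)
white = rng.normal(size=4000)               # uncorrelated: expect H near 0.5
print(round(hurst_rs(white, [16, 32, 64, 128, 256]), 2))
```

An estimate persistently above 0.5 on the real blackout-size series is what signals the long-time correlations discussed in the abstract.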
Machine protection system for rotating equipment and method
Lakshminarasimha, Arkalgud N. (Marietta, GA); Rucigay, Richard J. (Marietta, GA); Ozgur, Dincer (Kennesaw, GA)
2003-01-01
A machine protection system and method for rotating equipment introduces new alarming features and makes use of full proximity probe sensor information, including amplitude and phase. Baseline vibration amplitude and phase data is estimated and tracked according to operating modes of the rotating equipment. Baseline vibration and phase data can be determined using a rolling average and variance and stored in a unit circle or tracked using short term average and long term average baselines. The sensed vibration amplitude and phase is compared with the baseline vibration amplitude and phase data. Operation of the rotating equipment can be controlled based on the vibration amplitude and phase.
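The rolling mean-and-variance baseline with threshold alarming described above can be sketched generically (window length, threshold multiplier, and the simulated step change in vibration are all assumptions, not values from the patent):

```python
import numpy as np

def rolling_baseline(x, window):
    """Rolling mean and variance used as a vibration amplitude baseline."""
    means = np.convolve(x, np.ones(window) / window, mode="valid")
    sq = np.convolve(x**2, np.ones(window) / window, mode="valid")
    return means, sq - means**2

rng = np.random.default_rng(7)
amplitude = rng.normal(1.0, 0.05, size=600)
amplitude[450:] += 0.5                      # simulated step change in vibration

mean, var = rolling_baseline(amplitude, window=50)
# Alarm threshold from the pre-event baseline (multiplier is an assumption).
threshold = mean[:400].mean() + 5 * np.sqrt(var[:400].mean())
alarms = np.nonzero(amplitude > threshold)[0]
print("first alarm at sample", alarms[0] if alarms.size else None)
```

Tracking the baseline per operating mode, as the patent describes, would amount to maintaining one such (mean, variance) pair for each mode rather than a single global one.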
Modelling of volatility in monetary transmission mechanism
Dobešová, Anna; Klepáč, Václav; Kolman, Pavel; Bednářová, Petra
2015-03-10
The aim of this paper is to compare different approaches to the modeling of volatility in the monetary transmission mechanism. For this purpose we built a time-varying parameter VAR (TVP-VAR) model with stochastic volatility and a VAR-DCC-GARCH model with conditional variance. Data from three European countries are included in the analysis: the Czech Republic, Germany and Slovakia. Results show that the VAR-DCC-GARCH system captures higher volatility of the observed variables, but the main trends and detected breaks are generally identical in both approaches.
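The conditional-variance recursion at the heart of a GARCH(1,1) component can be written out directly (a toy sketch with invented parameters and data, not the fitted VAR-DCC-GARCH model):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(3)
r = rng.normal(scale=0.01, size=500)
r[250:300] *= 4                             # simulated volatility cluster
sigma2 = garch11_variance(r, omega=1e-6, alpha=0.1, beta=0.85)
print(sigma2[260:300].mean() > sigma2[:250].mean())
```

The conditional variance rises during the simulated cluster and decays afterward, which is the behavior the DCC-GARCH side of the comparison exploits to "capture higher volatility" than the TVP-VAR.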
Methods for recalibration of mass spectrometry data
Tolmachev, Aleksey V.; Smith, Richard D.
2009-03-03
Disclosed are methods for recalibrating mass spectrometry data that provide improvement in both mass accuracy and precision by adjusting for experimental variance in parameters that have a substantial impact on mass measurement accuracy. Optimal coefficients are determined using correlated pairs of mass values compiled by matching sets of measured and putative mass values that minimize overall effective mass error and mass error spread. Coefficients are subsequently used to correct mass values for peaks detected in the measured dataset, providing recalibration thereof. Sub-ppm mass measurement accuracy has been demonstrated on a complex fungal proteome after recalibration, providing improved confidence for peptide identifications.
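The recalibration idea, fitting coefficients on matched measured/putative mass pairs and then correcting all measured values, can be sketched with a simple linear error model (the error terms and magnitudes below are invented, and the patent's actual correction parameters are richer than a single linear fit):

```python
import numpy as np

# Hypothetical sketch: correct measured masses with a linear model
# m_true ~ a * m_meas + b, fit on matched (measured, putative) pairs.
rng = np.random.default_rng(5)
putative = rng.uniform(500.0, 3000.0, size=200)          # reference masses
measured = putative * (1 + 8e-6) + 0.002 \
           + rng.normal(scale=5e-4, size=200)            # drifted + noisy

A = np.column_stack([measured, np.ones_like(measured)])
(a, b), *_ = np.linalg.lstsq(A, putative, rcond=None)
recal = a * measured + b

ppm = lambda est: np.mean(np.abs(est - putative) / putative) * 1e6
print(f"before: {ppm(measured):.1f} ppm, after: {ppm(recal):.1f} ppm")
```

Removing the systematic drift leaves only the random component, which is how sub-ppm mean accuracy becomes reachable after recalibration.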
EGR Distribution in Engine Cylinders Using Advanced Virtual Simulation
Fan, Xuetong
2000-08-20
Exhaust Gas Recirculation (EGR) is a well-known technology for reduction of NOx in diesel engines. With the demand for extremely low engine out NOx emissions, it is important to have a consistently balanced EGR flow to individual engine cylinders. Otherwise, the variation in the cylinders' NOx contribution to the overall engine emissions will produce unacceptable variability. This presentation will demonstrate the effective use of advanced virtual simulation in the development of a balanced EGR distribution in engine cylinders. An initial design is analyzed reflecting the variance in the EGR distribution, quantitatively and visually. Iterative virtual lab tests result in an optimized system.
Element Agglomeration Algebraic Multilevel Monte-Carlo Library
Energy Science and Technology Software Center (OSTI)
2015-02-19
ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski and currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.
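The variance-reduction principle behind multilevel Monte Carlo can be shown in a two-level toy example (a generic sketch, not the ElagMC API): the fine-level expectation is split into a cheap coarse estimate plus a low-variance correction.

```python
import numpy as np

# Toy two-level MLMC: estimate E[fine(X)] for X ~ U(0, 1), where a cheap
# coarse model is strongly correlated with the expensive fine model.
rng = np.random.default_rng(9)

def fine(x):   return np.sin(x) + 0.05 * x**2
def coarse(x): return np.sin(x)            # cheap approximation of fine

x_many = rng.uniform(0, 1, size=100_000)   # many cheap coarse samples
x_few = rng.uniform(0, 1, size=1_000)      # few expensive fine samples

# Telescoping estimator: E[fine] = E[coarse] + E[fine - coarse]
mlmc = coarse(x_many).mean() + (fine(x_few) - coarse(x_few)).mean()

# The correction term has far lower variance than the fine model itself,
# which is what makes the telescoping sum pay off.
print(np.var(fine(x_few) - coarse(x_few)) < np.var(fine(x_few)))
```

In ElagMC the coarse levels come from algebraic element agglomeration on unstructured meshes rather than from a closed-form surrogate, but the variance argument is the same.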
Inflationary power asymmetry from primordial domain walls
Jazayeri, Sadra; Akrami, Yashar; Firouzjahi, Hassan; Solomon, Adam R.; Wang, Yi E-mail: yashar.akrami@astro.uio.no E-mail: a.r.solomon@damtp.cam.ac.uk
2014-11-01
We study the asymmetric primordial fluctuations in a model of inflation in which translational invariance is broken by a domain wall. We calculate the corrections to the power spectrum of curvature perturbations; they are anisotropic and contain dipole, quadrupole, and higher multipoles with non-trivial scale-dependent amplitudes. Inspired by observations of these multipole asymmetries in terms of two-point correlations and variance in real space, we demonstrate that this model can explain the observed anomalous power asymmetry of the cosmic microwave background (CMB) sky, including its characteristic feature that the dipole dominates over higher multipoles. We test the viability of the model and place approximate constraints on its parameters by using observational values of dipole, quadrupole, and octopole amplitudes of the asymmetry measured by a local-variance estimator. We find that a configuration of the model in which the CMB sphere does not intersect the domain wall during inflation provides a good fit to the data. We further derive analytic expressions for the corrections to the CMB temperature covariance matrix, or angular power spectra, which can be used in future statistical analysis of the model in spherical harmonic space.
Fingerprints of anomalous primordial Universe on the abundance of large scale structures
Baghram, Shant; Abolhasani, Ali Akbar; Firouzjahi, Hassan; Namjoo, Mohammad Hossein E-mail: abolhasani@ipm.ir E-mail: MohammadHossein.Namjoo@utdallas.edu
2014-12-01
We study the predictions of anomalous inflationary models on the abundance of structures in large scale structure observations. The anomalous features encoded in primordial curvature perturbation power spectrum are (a): localized feature in momentum space, (b): hemispherical asymmetry and (c): statistical anisotropies. We present a model-independent expression relating the number density of structures to the changes in the matter density variance. Models with localized feature can alleviate the tension between observations and numerical simulations of cold dark matter structures on galactic scales as a possible solution to the missing satellite problem. In models with hemispherical asymmetry we show that the abundance of structures becomes asymmetric depending on the direction of observation to sky. In addition, we study the effects of scale-dependent dipole amplitude on the abundance of structures. Using the quasars data and adopting the power-law scaling k{sup n{sub A}-1} for the amplitude of dipole we find the upper bound n{sub A}<0.6 for the spectral index of the dipole asymmetry. In all cases there is a critical mass scale M{sub c} in which for M
Verification of theoretically computed spectra for a point rotating in a vertical plane
Powell, D.C.; Connell, J.R.; George, R.L.
1985-03-01
A theoretical model is modified and tested that produces the power spectrum of the alongwind component of turbulence as experienced by a point rotating in a vertical plane perpendicular to the mean wind direction. The ability to generate such a power spectrum, independent of measurement, is important in wind turbine design and testing. The radius of the circle of rotation, its height above the ground, and the rate of rotation are typical of a MOD-OA wind turbine. Verification of this model is attempted by comparing two sets of variances that correspond to individual harmonic bands of spectra of turbulence in the rotational frame. One set of variances is calculated by integrating the theoretically generated rotational spectra; the other is calculated by integrating rotational spectra from real data analysis. The theoretical spectrum is generated by Fourier transformation of an autocorrelation function taken from von Karman and modified for the rotational frame. The autocorrelation is based on dimensionless parameters, each of which incorporates both atmospheric and wind turbine parameters. The real data time series are formed by sampling around the circle of anemometers of the Vertical Plane Array at the former MOD-OA site at Clayton, New Mexico.
Martin, N.G.; Nightingale, B.; Whitfield, J.B.
1994-09-01
There is much interest in the detection of quantitative trait loci (QTL) - major genes which affect quantitative phenotypes. The relationship of polymorphism at known alcohol metabolizing enzyme loci to alcohol pharmacokinetics is a good model system. The three class I alcohol dehydrogenase genes are clustered on chromosome 4, and protein electrophoresis has revealed polymorphisms at the ADH2 and ADH3 loci. While different activities of the isozymes have been demonstrated in vitro, little work has been done in trying to relate ADH polymorphism to variation in ethanol metabolism in vivo. We previously measured ethanol metabolism and psychomotor reactivity in 206 twin pairs and demonstrated that most of the repeatable variation was genetic. We have now recontacted the twins to obtain DNA samples and used PCR with allele specific primers to type the ADH2 and ADH3 polymorphisms in 337 individual twins. FISHER has been used to estimate fixed effects of typed polymorphisms simultaneously with remaining linked and unlinked genetic variance. The ADH2*1-2 genotypes metabolize ethanol faster and attain a lower peak blood alcohol concentration than the more common ADH2*1-1 genotypes, although less than 3% of the variance is accounted for. There is no effect of ADH3 genotype. However, sib-pair linkage analysis suggests that there is a linked polymorphism which has a much greater effect on alcohol metabolism than those typed here.
URBAN WOOD/COAL CO-FIRING IN THE BELLEFIELD BOILERPLANT
James T. Cobb, Jr.; Gene E. Geiger; William W. Elder III; William P. Barry; Jun Wang; Hongming Li
2001-08-21
During the third quarter, important preparatory work was continued so that the experimental activities can begin early in the fourth quarter. Authorization was awaited in response to the letter that was submitted to the Allegheny County Health Department (ACHD) seeking an R&D variance for the air permit at the Bellefield Boiler Plant (BBP). Verbal authorizations were received from the Pennsylvania Department of Environmental Protection (PADEP) for R&D variances for solid waste permits at the J. A. Rutter Company (JARC), and Emery Tree Service (ETS). Construction wood was acquired from Thompson Properties and Seven D Corporation. Forty tons of pallet and construction wood were ground to produce BioGrind Wood Chips at JARC and delivered to Mon Valley Transportation Company (MVTC). Five tons of construction wood were milled at ETS and half of the product delivered to MVTC. Discussions were held with BBP and Energy Systems Associates (ESA) about the test program. Material and energy balances on Boiler No.1 and a plan for data collection were prepared. Presentations describing the University of Pittsburgh Wood/Coal Co-Firing Program were provided to the Pittsburgh Chapter of the Pennsylvania Society of Professional Engineers, and the Upgraded Coal Interest Group and the Biomass Interest Group (BIG) of the Electric Power Research Institute (EPRI). An article describing the program appeared in the Pittsburgh Post-Gazette. An application was submitted for authorization for a Pennsylvania Switchgrass Energy and Conservation Program.
Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; Liu, Ying
2015-12-04
Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
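One of the goodness-of-fit metrics named in the study above, the Nash-Sutcliffe coefficient, has a compact definition worth making explicit: NSE = 1 - SS_residual/SS_about_the_observed_mean, so 1 is a perfect match and 0 means the simulation does no better than predicting the observed mean. A minimal sketch (the function name and toy data are illustrative, not from the study):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect match, 0 = no better than
    the observed mean, negative = worse than the mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Because the denominator is the variance of the observations (up to a constant), NSE is sensitive to how much of the observed variability the model reproduces, which is why it serves as a useful complement to raw residual statistics.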
Characterization and estimation of permeability correlation structure from performance data
Ershaghi, I.; Al-Qahtani, M.
1997-08-01
In this study, the influence of permeability structure and correlation length on the system effective permeability and recovery factors of 2-D cross-sectional reservoir models, under waterflood, is investigated. Reservoirs with identical statistical representation of permeability attributes are shown to exhibit different system effective permeability and production characteristics which can be expressed by a mean and variance. The mean and variance are shown to be significantly influenced by the correlation length. Detailed quantification of the influence of horizontal and vertical correlation lengths for different permeability distributions is presented. The effect of capillary pressure, P_c, on the production characteristics and saturation profiles at different correlation lengths is also investigated. It is observed that neglecting P_c causes considerable error at large horizontal and short vertical correlation lengths. The effect of using constant as opposed to variable relative permeability attributes is also investigated at different correlation lengths. Next we studied the influence of correlation anisotropy in 2-D reservoir models. For a reservoir under a five-spot waterflood pattern, it is shown that the ratios of breakthrough times and recovery factors of the wells in each direction of correlation are greatly influenced by the degree of anisotropy. In fully developed fields, performance data can aid in the recognition of reservoir anisotropy. Finally, a procedure for estimating the spatial correlation length from performance data is presented. Both the production performance data and the system's effective permeability are required in estimating the correlation length.
Effects of radiative heat transfer on the turbulence structure in inert and reacting mixing layers
Ghosh, Somnath; Friedrich, Rainer
2015-05-15
We use large-eddy simulation to study the interaction between turbulence and radiative heat transfer in low-speed inert and reacting plane temporal mixing layers. An explicit filtering scheme based on approximate deconvolution is applied to treat the closure problem arising from quadratic nonlinearities of the filtered transport equations. In the reacting case, the working fluid is a mixture of ideal gases where the low-speed stream consists of hydrogen and nitrogen and the high-speed stream consists of oxygen and nitrogen. Both streams are premixed in a way that the free-stream densities are the same and the stoichiometric mixture fraction is 0.3. The filtered heat release term is modelled using equilibrium chemistry. In the inert case, the low-speed stream consists of nitrogen at a temperature of 1000 K and the high-speed stream is pure water vapour at 2000 K, when radiation is turned off. Simulations assuming the gas mixtures as gray gases with artificially increased Planck mean absorption coefficients are performed in which the large-eddy simulation code and the radiation code PRISSMA are fully coupled. In both cases, radiative heat transfer is found to clearly affect fluctuations of thermodynamic variables, Reynolds stresses, and Reynolds stress budget terms like pressure-strain correlations. Source terms in the transport equation for the variance of temperature are used to explain the decrease of this variance in the reacting case and its increase in the inert case.
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Mearns, L. O.; Sain, Steve; Leung, Lai-Yung R.; Bukovsky, M. S.; McGinnis, Seth; Biner, S.; Caya, Daniel; Arritt, R.; Gutowski, William; Takle, Eugene S.; Snyder, Mark A.; Jones, Richard; Nunes, A M B.; Tucker, S.; Herzmann, D.; McDaniel, Larry; Sloan, Lisa
2013-10-01
We investigate major results of the NARCCAP multiple regional climate model (RCM) experiments driven by multiple global climate models (GCMs) regarding climate change for seasonal temperature and precipitation over North America. We focus on two major questions: How do the RCM simulated climate changes differ from those of the parent GCMs and thus affect our perception of climate change over North America, and how important are the relative contributions of RCMs and GCMs to the uncertainty (variance explained) for different seasons and variables? The RCMs tend to produce stronger climate changes for precipitation: larger increases in the northern part of the domain in winter and greater decreases across a swath of the central part in summer, compared to the four GCMs driving the regional models as well as to the full set of CMIP3 GCM results. We pose some possible process-level mechanisms for the difference in intensity of change, particularly for summer. Detailed process-level studies will be necessary to establish mechanisms and credibility of these results. The GCMs explain more variance for winter temperature and the RCMs for summer temperature. The same is true for precipitation patterns. Thus, we recommend that future RCM-GCM experiments over this region include a balanced number of GCMs and RCMs.
Method and system for turbomachinery surge detection
Faymon, David K.; Mays, Darrell C.; Xiong, Yufei
2004-11-23
A method and system for surge detection within a gas turbine engine comprises: measuring the compressor discharge pressure (CDP) of the gas turbine over a period of time; determining a time derivative (CDP_D) of the measured CDP; correcting CDP_D for altitude (CDP_DCOR); estimating a short-term average of CDP_DCOR^2; estimating a short-term average of CDP_DCOR; and determining a short-term variance of the corrected CDP rate of change (CDP_roc) based upon the short-term average of CDP_DCOR and the short-term average of CDP_DCOR^2. The method and system then compares the short-term variance of the corrected CDP rate of change with a pre-determined threshold (CDP_proc) and signals an output when CDP_roc > CDP_proc. The method and system provides a signal of a surge within the gas turbine engine when CDP_roc remains > CDP_proc for a pre-determined period of time.
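The patented scheme boils down to a rolling variance computed from sliding averages of the derivative and its square (var = E[d^2] - E[d]^2), compared against a threshold with a hold time. A minimal sketch under stated assumptions: the function name, the synthetic signal, and all numeric settings below are illustrative, not the engine's actual calibration or the patent's exact implementation.

```python
import numpy as np

def surge_flag(cdp, dt, window, threshold, hold_steps):
    """Rolling-variance surge test: flag when the short-term variance of
    the CDP rate of change stays above a threshold for hold_steps samples.
    Illustrative sketch; no altitude correction is applied here."""
    d = np.gradient(cdp, dt)                 # time derivative of CDP
    kernel = np.ones(window) / window
    m1 = np.convolve(d, kernel, mode="same")      # short-term average of d
    m2 = np.convolve(d ** 2, kernel, mode="same") # short-term average of d^2
    var = m2 - m1 ** 2                       # short-term variance
    run = 0
    for i, above in enumerate(var > threshold):
        run = run + 1 if above else 0
        if run >= hold_steps:                # variance stayed high long enough
            return True, i
    return False, None
```

A smooth ramp never trips the flag, while a sudden burst of pressure oscillation drives the derivative variance up and, after the hold period, raises the surge signal.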
Zhang, Zhongqiang; Yang, Xiu; Lin, Guang; Karniadakis, George Em
2013-03-01
We consider a piston with a velocity perturbed by Brownian motion moving into a straight tube filled with a perfect gas at rest. The shock generated ahead of the piston can be located by solving the one-dimensional Euler equations driven by white noise using the Stratonovich or Ito formulations. We approximate the Brownian motion with its spectral truncation and subsequently apply stochastic collocation using either a sparse grid or the quasi-Monte Carlo (QMC) method. In particular, we first transform the Euler equations with an unsteady stochastic boundary into stochastic Euler equations over a fixed domain with a time-dependent stochastic source term. We then solve the transformed equations by splitting them up into two parts, i.e., a deterministic part and a stochastic part. Numerical results verify the Stratonovich-Euler and Ito-Euler models against stochastic perturbation results, and demonstrate the efficiency of sparse grid and QMC for small and large random piston motions, respectively. The variance of the shock location grows cubically in the case of white noise, in contrast to the colored noise reported in [1], where the variance of shock location grows quadratically with time for short times and linearly for longer times.
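The cubic variance growth quoted above has a simple mechanism: a position driven by the time-integral of Brownian motion satisfies Var[∫₀ᵗ W(s) ds] = t³/3. A Monte Carlo toy (an illustration of that scaling only, not the stochastic Euler solver of the paper):

```python
import numpy as np

def integrated_brownian_variance(n_paths, n_steps, dt, seed=0):
    """Monte Carlo estimate of Var[int_0^t W(s) ds] on a time grid.
    Theory gives t^3 / 3, the mechanism behind cubic variance growth."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    W = np.cumsum(dW, axis=1)          # Brownian paths
    I = np.cumsum(W, axis=1) * dt      # time-integral of each path
    return I.var(axis=0)               # variance across paths at each time
```

Doubling the elapsed time multiplies the variance by roughly 8, the signature of cubic growth, in contrast to the linear growth of the Brownian motion itself.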
Optimal Solar PV Arrays Integration for Distributed Generation
Omitaomu, Olufemi A; Li, Xueping
2012-01-01
Solar photovoltaic (PV) systems hold great potential for distributed energy generation by installing PV panels on rooftops of residential and commercial buildings. Yet challenges arise along with the variability and non-dispatchability of the PV systems that affect the stability of the grid and the economics of the PV system. This paper investigates the integration of PV arrays for distributed generation applications by identifying a combination of buildings that will maximize solar energy output and minimize system variability. Particularly, we propose mean-variance optimization models to choose suitable rooftops for PV integration based on Markowitz mean-variance portfolio selection model. We further introduce quantity and cardinality constraints to result in a mixed integer quadratic programming problem. Case studies based on real data are presented. An efficient frontier is obtained for sample data that allows decision makers to choose a desired solar energy generation level with a comfortable variability tolerance level. Sensitivity analysis is conducted to show the tradeoffs between solar PV energy generation potential and variability.
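The Markowitz-style core of the rooftop-selection model can be written in closed form when only the two equality constraints (weights sum to one, expected output hits a target) are kept. The sketch below shows that unconstrained mean-variance solution via Lagrange multipliers; the quantity and cardinality constraints described in the abstract turn the problem into a mixed integer quadratic program that needs a dedicated solver instead. Function name and numbers are illustrative assumptions.

```python
import numpy as np

def min_variance_weights(mu, cov, target_return):
    """Closed-form minimum-variance weights subject to sum(w) = 1 and
    mu @ w = target_return (classic two-constraint Markowitz solution)."""
    n = len(mu)
    ones = np.ones(n)
    inv = np.linalg.inv(cov)
    a = ones @ inv @ ones
    b = ones @ inv @ mu
    c = mu @ inv @ mu
    det = a * c - b * b
    lam = (c - b * target_return) / det      # Lagrange multipliers
    gam = (a * target_return - b) / det
    return inv @ (lam * ones + gam * mu)
```

Sweeping target_return and plotting the resulting portfolio variance traces out the efficient frontier mentioned in the abstract, from which a decision maker picks a generation level at an acceptable variability.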
Time-variability of NOx emissions from Portland cement kilns
Walters, L.J. Jr.; May, M.S. III [PSM International, Dallas, TX (United States)]; Johnson, D.E. [Kansas State Univ., Manhattan, KS (United States). Dept. of Statistics]; MacMann, R.S. [Penta Engineering, St. Louis, MO (United States)]; Woodward, W.A. [Southern Methodist Univ., Dallas, TX (United States). Dept. of Statistics]
1999-03-01
Due to the presence of autocorrelation between sequentially measured nitrogen oxide (NOx) concentrations in stack gas from Portland cement kilns, average emission rates and the uncertainties of those averages have been improperly calculated by the industry and regulatory agencies. Documentation of permit compliance, establishment of permit levels, and the development and testing of control techniques for reducing NOx emissions at specific cement plants require accurate and precise statistical estimates of parameters such as means, standard deviations, and variances. Usual statistical formulas, such as that for the variance of the sample mean, only apply if sequential measurements of NOx emissions are independent. Significant autocorrelation of NOx emission measurements revealed that NOx concentration values measured by continuous emission monitors are not independent but can be represented by an autoregressive, moving average time series. Three orders of time-variability of NOx emission rates were determined from examination of continuous emission measurements from several cement kilns.
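The point about the variance of the sample mean can be made concrete: for independent data Var(mean) = s²/n, but positive autocorrelation inflates it. Under a simple AR(1) assumption the large-sample correction factor is (1 + r₁)/(1 - r₁), where r₁ is the lag-1 autocorrelation. A sketch (the AR(1) form is an illustrative simplification; the study fits a fuller ARMA model):

```python
import numpy as np

def variance_of_mean_ar1(series):
    """Naive and AR(1)-corrected variance of the sample mean.
    corrected = (s^2/n) * (1 + r1) / (1 - r1), valid for large n."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    s2 = x.var(ddof=1)
    xc = x - x.mean()
    r1 = (xc[:-1] @ xc[1:]) / (xc @ xc)   # lag-1 autocorrelation estimate
    naive = s2 / n
    corrected = naive * (1.0 + r1) / (1.0 - r1)
    return naive, corrected, r1
```

With r₁ = 0.8, typical of slowly varying stack-gas measurements, the corrected variance is nine times the naive one, so confidence intervals computed with the usual formula are far too narrow.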
A Database of Herbaceous Vegetation Responses to Elevated Atmospheric CO2
Jones, M.H.
1999-11-24
To perform a statistically rigorous meta-analysis of research results on the response by herbaceous vegetation to increased atmospheric CO2 levels, a multiparameter database of responses was compiled from the published literature. Seventy-eight independent CO2-enrichment studies, covering 53 species and 26 response parameters, reported mean response, sample size, and variance of the response (either as standard deviation or standard error). An additional 43 studies, covering 25 species and 6 response parameters, did not report variances. This numeric data package accompanies the Carbon Dioxide Information Analysis Center's (CDIAC's) NDP-072, which provides similar information for woody vegetation. This numeric data package contains a 30-field data set of CO2-exposure experiment responses by herbaceous plants (as both a flat ASCII file and a spreadsheet file), files listing the references to the CO2-exposure experiments and specific comments relevant to the data in the data sets, and this documentation file (which includes SAS® and Fortran codes to read the ASCII data file). The data files and this documentation are available without charge on a variety of media and via the Internet from CDIAC.
McFerran, John J.; Luiten, Andre N. [School of Physics, University of Western Australia, 35 Stirling Highway, Crawley 6009, W.A. (Australia)
2010-02-15
We demonstrate a means of increasing the signal-to-noise ratio in a Ramsey-Borde interferometer with spatially separated oscillatory fields on a thermal atomic beam. The ¹S₀ ↔ ³P₁ intercombination line in neutral ⁴⁰Ca is used as a frequency discriminator, with an extended cavity diode laser at 423 nm probing the ground state population after a Ramsey-Borde sequence of 657 nm light-field interactions with the atoms. Evaluation of the instability of the Ca frequency reference is carried out by comparison with (i) a hydrogen maser and (ii) a cryogenic sapphire oscillator. In the latter case the Ca reference exhibits a square-root Λ variance of 9.2×10⁻¹⁴ at 1 s and 2.0×10⁻¹⁴ at 64 s. This is, to our knowledge, an order-of-magnitude improvement for optical beam frequency references. The shot noise of the readout fluorescence produces a limiting square-root Λ variance of 7×10⁻¹⁴/√τ, highlighting the potential for improvement. This work demonstrates the feasibility of a portable frequency reference in the optical domain with 10⁻¹⁴-range frequency instability.
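The square-root Λ (Lambda) variance quoted above is a triangle-weighted relative of the Allan variance, the standard two-sample statistic for frequency stability. As an illustration of the family (an assumption, not the authors' exact estimator), the plain non-overlapping Allan deviation can be sketched as:

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation at averaging factor m, from a
    series y of fractional-frequency samples: group into blocks of m,
    average each block, then take sqrt(0.5 * mean of squared differences
    of successive block averages)."""
    y = np.asarray(y, dtype=float)
    n = (len(y) // m) * m
    ybar = y[:n].reshape(-1, m).mean(axis=1)   # averages over tau = m * tau0
    dy = np.diff(ybar)
    return np.sqrt(0.5 * np.mean(dy ** 2))
```

For white frequency noise the deviation falls as 1/√τ, which is exactly the 7×10⁻¹⁴/√τ shot-noise-limited behaviour reported for the readout fluorescence.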
Comfort and HVAC Performance for a New Construction Occupied Test House in Roseville, California
Burdick, A.
2013-10-01
K. Hovnanian® Homes constructed a 2,253-ft² single-story slab-on-grade ranch house for an occupied test house (new construction) in Roseville, California. One year of monitoring and analysis focused on the effectiveness of the space conditioning system at maintaining acceptable temperature and relative humidity levels in several rooms of the home, as well as room-to-room differences and the actual measured energy consumption by the space conditioning system. In this home, the air handler unit (AHU) and ducts were relocated to inside the thermal boundary. The AHU was relocated from the attic to a mechanical closet, and the ductwork was located inside an insulated and air-sealed bulkhead in the attic. To describe the performance and comfort in the home, the research team selected representative design days and extreme days from the annual data for analysis. To ensure that temperature differences were within reasonable occupant expectations, the team followed Air Conditioning Contractors of America guidance. At the end of the monitoring period, the occupant of the home had no comfort complaints in the home. Any variance between the modeled heating and cooling energy and the actual amounts used can be attributed to the variance in temperatures at the thermostat versus the modeled inputs.
Hickey, R.
1992-09-01
The objective of this project was to develop and test an early-warning/process-control model for anaerobic sludge digestion (AD). The approach was to use batch and semi-continuously fed systems and to assemble system parameter data on a real-time basis. Specific goals were to produce a real-time early-warning control model and computer code, tested for internal and external validity; to determine the minimum rate of data collection for maximum lag time to predict failure with a prescribed accuracy and confidence in the prediction; and to determine and characterize any trends in the real-time data collected in response to particular perturbations to feedstock quality. Trends in the response of the trace gases carbon monoxide and hydrogen in batch experiments were found to depend on toxicant type. For example, these trace gases respond differently for organic substances vs. heavy metals. In both batch and semi-continuously fed experiments, increased organic loading led to proportionate increases in gas production rates as well as increases in CO and H2 concentration. An analysis of variance of gas parameters confirmed that CO was the most sensitive indicator variable by virtue of its relatively larger variance compared to the others; the other parameters evaluated included gas production, methane production, and hydrogen, carbon monoxide, carbon dioxide, and methane concentration. In addition, a relationship was hypothesized between gaseous CO concentration and acetate concentrations in the digester. The data from the semi-continuously fed experiments were supportive.
Piehowski, Paul D.; Petyuk, Vladislav A.; Orton, Daniel J.; Xie, Fang; Moore, Ronald J.; Ramirez Restrepo, Manuel; Engel, Anzhelika; Lieberman, Andrew P.; Albin, Roger L.; Camp, David G.; Smith, Richard D.; Myers, Amanda J.
2013-05-03
To design a robust quantitative proteomics study, an understanding of both the inherent heterogeneity of the biological samples being studied and the technical variability of the proteomics methods and platform is needed. Additionally, accurately identifying the technical steps associated with the largest variability would provide valuable information for the improvement and design of future processing pipelines. We present an experimental strategy that allows for a detailed examination of the variability of quantitative LC-MS proteomics measurements. By replicating analyses at different stages of processing, various technical components can be estimated and their individual contributions to technical variability can be dissected. This design can be easily adapted to other quantitative proteomics pipelines. Herein, we applied this methodology to our label-free workflow for the processing of human brain tissue. For this application, the pipeline was divided into four critical components: tissue dissection and homogenization (extraction); protein denaturation followed by trypsin digestion and SPE clean-up (digestion); short-term run-to-run instrumental response fluctuation (instrumental variance); and long-term drift of the quantitative response of the LC-MS/MS platform over the 2-week period of continuous analysis (instrumental stability). From this analysis, we found the following contributions to variability: extraction (72%) >> instrumental variance (16%) > instrumental stability (8.4%) > digestion (3.1%). Furthermore, the stability of the platform and its suitability for discovery proteomics studies is demonstrated.
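The building block of this kind of step-by-step variance dissection is the random-effects ANOVA estimate of between-level and within-level variance from replicated measurements. A sketch for one balanced level (function name and simulated numbers are illustrative; the study nests several such levels across its pipeline stages):

```python
import numpy as np

def variance_components(groups):
    """One-way random-effects ANOVA variance components for a balanced
    design. groups has shape (k groups, n replicates per group).
    Returns (between-group variance, within-group variance)."""
    g = np.asarray(groups, dtype=float)
    k, n = g.shape
    grand = g.mean()
    ms_between = n * np.sum((g.mean(axis=1) - grand) ** 2) / (k - 1)
    ms_within = np.sum((g - g.mean(axis=1, keepdims=True)) ** 2) / (k * (n - 1))
    # method-of-moments estimate; clamp at zero if MS_between < MS_within
    sigma2_between = max((ms_between - ms_within) / n, 0.0)
    return sigma2_between, ms_within
```

Dividing each component by the total variance gives percent contributions like the 72% / 16% / 8.4% / 3.1% breakdown reported above.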
Exploiting Genetic Variation of Fiber Components and Morphology in Juvenile Loblolly Pine
Chang, Hou-Min; Kadia, John F.; Li, Bailian; Sederoff, Ron
2005-06-30
In order to ensure the global competitiveness of the pulp and paper industry in the Southeastern U.S., more wood with targeted characteristics has to be produced more efficiently on less land. The objective of the research project is to provide a molecular genetic basis for tree breeding of desirable traits in juvenile loblolly pine, using a multidisciplinary research approach. We developed micro-analytical methods for determining the cellulose and lignin content, average fiber length, and coarseness of a single ring in a 12 mm increment core. These methods allow rapid determination of these traits on a micro scale. Genetic variation and genotype-by-environment interaction (GxE) were studied in several juvenile wood traits of loblolly pine (Pinus taeda L.). Over 1000 wood samples of 12 mm increment cores were collected from 14 full-sib families generated by a 6-parent half-diallel mating design (11 years old) in four progeny tests. Juvenile wood (ring 3) and transition wood (ring 8) from each increment core were analyzed for cellulose and lignin content, average fiber length, and coarseness. Transition wood had higher cellulose content, longer fibers, and higher coarseness, but lower lignin, than juvenile wood. General combining ability variance for the traits in juvenile wood explained 3 to 10% of the total variance, whereas the specific combining ability variance was negligible or zero. There were noticeable full-sib family rank changes between sites for all the traits. This was reflected in very high specific-combining-ability-by-site interaction variances, which explained from 5% (fiber length) to 37% (lignin) of the total variance. Weak individual-tree heritabilities were found for cellulose content, lignin content, and fiber length in the juvenile and transition wood, except for lignin in the transition wood (0.23). Coarseness had moderately high individual-tree heritabilities in both the juvenile (0.39) and transition wood (0.30).
Favorable genetic correlations of volume and stem straightness were found with cellulose content, fiber length, and coarseness, suggesting that selection on growth or stem straightness would result in a favorable response in chemical wood traits. We have developed a series of methods for application of functional genomics to understanding the molecular basis of traits important to tree breeding for improved chemical and physical properties of wood. Two types of technologies were used: microarray analysis of gene expression, and profiling of soluble metabolites from wood-forming tissues. We were able to correlate wood property phenotypes with expression of specific genes and with the abundance of specific metabolites using a new database and appropriate statistical tools. These results implicate a series of candidate genes for cellulose content, lignin content, hemicellulose content, and specific extractible metabolites. Future work should integrate such studies in mapping populations and genetic maps to make more precise associations of traits with gene locations, in order to increase the predictive power of molecular markers and to distinguish between different candidate genes associated by linkage or by function. This study found that loblolly pine families differed significantly in cellulose yield, fiber length, and fiber coarseness, and less so in lignin content. The implication for the forest industry is that genetic testing and selection for these traits is possible and practical. With sufficient genetic variation, we could improve cellulose yield, fiber length, and fiber coarseness, and reduce lignin content, in loblolly pine. With continued progress in molecular research, some candidate genes may be used for selecting cellulose content, lignin content, hemicellulose content, and specific extractible metabolites.
This would significantly accelerate current breeding and testing programs, and produce pine plantations with not only high productivity but desirable wood properties as well.
Efficacy of fixed filtration for rapid kVp-switching dual energy x-ray systems
Yao, Yuan; Wang, Adam S.; Pelc, Norbert J.; Department of Radiology, Stanford University, Stanford, California 94305; Department of Electrical Engineering, Stanford University, Stanford, California 94305
2014-03-15
Purpose: Dose efficiency of dual kVp imaging can be improved if the two beams are filtered to remove photons in the common part of their spectra, thereby increasing spectral separation. While there are a number of advantages to rapid kVp-switching for dual energy, it may not be feasible to have two different filters for the two spectra. Therefore, the authors are interested in whether a fixed added filter can improve the dose efficiency of kVp-switching dual energy x-ray systems. Methods: The authors hypothesized that a K-edge filter would provide the energy selectivity needed to remove overlap of the spectra and hence increase the precision of material separation at constant dose. Preliminary simulations were done using calcium and water basis materials and 80 and 140 kVp x-ray spectra. Precision of the decomposition was evaluated based on the propagation of the Poisson noise through the decomposition function. Considering availability and cost, the authors chose a commercial Gd{sub 2}O{sub 2}S screen as the filter for their experimental validation. Experiments were conducted on a table-top system using a phantom with various thicknesses of acrylic and copper and 70 and 125 kVp x-ray spectra. The authors kept the phantom exposure roughly constant with and without filtration by adjusting the tube current. The filtered and unfiltered raw data of both low and high energy were decomposed into basis material and the variance of the decomposition for each thickness pair was calculated. To evaluate the filtration performance, the authors measured the ratio of material decomposition variance with and without filtration. Results: Simulation results show that the ideal filter material depends on the object composition and thickness, and ranges across the lanthanide series, with higher atomic number filters being preferred for more attenuating objects. 
Variance reduction increases with filter thickness, and substantial reductions (40%) can be achieved with a 2× loss in intensity. The authors' experimental results validate the simulations, though they were overall slightly worse than expected. For large objects, conventional (non-K-edge) beam hardening filters perform well. Conclusions: This study demonstrates the potential of fixed K-edge filtration to improve the dose efficiency and material decomposition precision of rapid kVp-switching dual energy systems.
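The precision metric above, the ratio of material decomposition variance with and without filtration, follows from propagating Poisson noise through the decomposition. A minimal sketch for a linearized two-material decomposition; the matrix entries are made-up effective attenuation coefficients, not values from the study:

```python
import numpy as np

# Hypothetical effective attenuation coefficients (1/cm) mapping basis-material
# thicknesses (acrylic, copper) to log-attenuation at the low- and high-kVp
# spectra. These numbers are illustrative placeholders only.
A = np.array([[0.20, 1.50],   # low-kVp row
              [0.15, 0.80]])  # high-kVp row: harder beam, lower coefficients
A_inv = np.linalg.inv(A)

def decomposition_variance(counts_low, counts_high):
    """Variance of each basis-material thickness estimate.

    For y = -ln(I/I0) with Poisson-distributed counts I, var(y) ~ 1/I.
    With t = A_inv @ y, var(t_i) = sum_j A_inv[i, j]**2 * var(y_j).
    """
    var_y = np.array([1.0 / counts_low, 1.0 / counts_high])
    return (A_inv ** 2) @ var_y

# Comparing this quantity at matched dose with and without a filter in the
# beam gives the variance ratio used to score the filtration.
v = decomposition_variance(1e5, 1e5)
```

Increasing the spectral separation shrinks the off-diagonal leakage of `A_inv`, which is what drives the variance reduction reported above.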
Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG
Risner, Joel M; Johnson, Seth R; Remec, Igor; Bekar, Kursat B
2015-01-01
Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep penetration shielding and an unshielded beamline, and a desire to obtain well-converged nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time, and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. 
We also tested importance function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with the support for parallel Denovo calculations using the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.
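The weight-window game that these maps drive can be sketched in a few lines; this is a generic textbook version of splitting and Russian roulette, not the MCNPX implementation:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Return the list of particle weights after playing the weight window.

    Sketch of the standard game: particles above the window are split,
    particles below it are rouletted, and both operations preserve the
    expected total weight.
    """
    if weight > w_high:                       # split heavy particles
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:                        # Russian roulette light ones
        if rng.random() < weight / w_low:     # survive with prob weight/w_low
            return [w_low]                    # survivor restored to the bound
        return []                             # killed
    return [weight]                           # inside the window: unchanged
```

Spatially and energy-dependent window bounds are exactly what the FW-CADIS importance maps supply, so that particle weights stay matched to their expected contribution to the tally.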
Zhao, Chun; Liu, Xiaohong; Qian, Yun; Yoon, Jin-Ho; Hou, Zhangshuan; Lin, Guang; McFarlane, Sally A.; Wang, Hailong; Yang, Ben; Ma, Po-Lun; Yan, Huiping; Bao, Jie
2013-11-08
In this study, we investigated the sensitivity of net radiative fluxes (FNET) at the top of atmosphere (TOA) to 16 selected uncertain parameters mainly related to the cloud microphysics and aerosol schemes in the Community Atmosphere Model version 5 (CAM5). We adopted a quasi-Monte Carlo (QMC) sampling approach to effectively explore the high-dimensional parameter space. The output response variables (e.g., FNET) were simulated using CAM5 for each parameter set, and then evaluated using generalized linear model analysis. In response to the perturbations of these 16 parameters, the CAM5-simulated global annual mean FNET ranges from -9.8 to 3.5 W m{sup -2}, compared to the CAM5-simulated FNET of 1.9 W m{sup -2} with the default parameter values. Variance-based sensitivity analysis was conducted to show the relative contributions of individual parameter perturbations to the global FNET variance. The results indicate that the changes in the global mean FNET are dominated by those of cloud forcing (CF) within the parameter ranges being investigated. The size threshold parameter related to auto-conversion of cloud ice to snow is confirmed as one of the most influential parameters for FNET in the CAM5 simulation. The strongly heterogeneous geographic distribution of FNET variation shows that parameters have a clear localized effect over the regions where they act. However, some parameters also have non-local impacts on FNET variance. Although external factors, such as perturbations of anthropogenic and natural emissions, largely affect FNET variations at the regional scale, their impact is weaker than that of model internal parameters in terms of simulating global mean FNET in this study. The interactions among the 16 selected parameters contribute a relatively small portion of the total FNET variations over most regions of the globe. 
This study helps us better understand the CAM5 model behavior associated with parameter uncertainties, which will aid the next step of reducing model uncertainty via calibration of uncertain model parameters with the largest sensitivity.
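The variance-based sensitivity indices behind the "relative contributions of individual parameter perturbation" can be sketched with the standard pick-freeze Monte Carlo estimator. Plain random sampling and a two-parameter toy model are used here, whereas the study used quasi-Monte Carlo sampling over 16 CAM5 parameters:

```python
import numpy as np

def first_order_sobol(f, d, n=100000, seed=0):
    """First-order variance-based sensitivity indices via the pick-freeze
    estimator S_i = E[y_B * (y_ABi - y_A)] / Var(y), where AB_i equals A
    with column i replaced by B's column i."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    S = []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze every input except parameter i
        S.append(float(np.mean(yB * (f(ABi) - yA)) / var))
    return S

# Additive toy model x1 + 2*x2 on [0,1]^2: exact indices are 0.2 and 0.8.
S = first_order_sobol(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
```

For an expensive model like CAM5 the function evaluations come from stored simulation output rather than inline calls, but the variance decomposition is the same.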
Shirodkar, P.V.; Mesquita, A.; Pradhan, U.K.; Verlekar, X.N.; Babu, M.T.; Vethamony, P.
2009-04-15
Water quality parameters (temperature, pH, salinity, DO, BOD, suspended solids, nutrients, PHc, phenols, trace metals Pb, Cd, and Hg, chlorophyll-a (chl-a), and phaeopigments) and sediment quality parameters (total phosphorous, total nitrogen, organic carbon, and trace metals) were analysed from samples collected at 15 stations along 3 transects off the Karnataka coast (Mangalore harbour in the south to Suratkal in the north), west coast of India, during 2007. The analyses showed high ammonia off Suratkal, high nitrite (NO{sub 2}-N) and nitrate (NO{sub 3}-N) in the nearshore waters off Kulai, and high nitrite (NO{sub 2}-N) and ammonia (NH{sub 3}-N) in the harbour area. Similarly, high petroleum hydrocarbon (PHc) values were observed near the harbour, while phenols remained high in the nearshore waters of Kulai and Suratkal. Significantly higher concentrations of cadmium and mercury than in earlier studies were observed off Kulai and in the harbour region, respectively. R-mode varimax factor analyses were applied separately to the surface and bottom water data sets (because of the stratification of the water column caused by riverine inflow) and to the sediment data. This helped in understanding the interrelationships between the variables and in identifying probable source components to explain the environmental status of the area. Six factors (each for surface and bottom waters) were found responsible for the variance (86.9% in surface and 82.4% in bottom waters) in the coastal waters between Mangalore and Suratkal. In sediments, 4 factors explained 86.8% of the observed total variance. The variances indicated addition of nutrients and suspended solids to the coastal waters due to weathering and riverine transport, and these are categorized as natural sources. 
The observed contamination of coastal waters indicated anthropogenic inputs of Cd and phenol from industrial effluent sources at Kulai and Suratkal, ammonia from wastewater discharges off Kulai and the harbour, and PHc and Hg from boat traffic and harbour activities of New Mangalore harbour. However, the strong seasonal currents and the seasonal winds keep the coastal waters well mixed and aerated, which helps to disperse the contaminants without significantly affecting chlorophyll-a concentrations. The interrelationship between the stations, as shown by cluster analyses and depicted in dendrograms, categorizes the contamination levels sector-wise.
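Statements such as "4 factors explained 86.8% of the observed total variance" come from the eigenstructure of the correlation matrix of the standardized variables. A dependency-light sketch using unrotated principal factors (the varimax rotation used in the study redistributes loadings among the retained factors but not the total variance they explain):

```python
import numpy as np

def variance_explained(data, k):
    """Fraction of total variance captured by the first k principal factors
    of the correlation matrix of the standardized variables."""
    X = np.asarray(data, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize each variable
    corr = np.corrcoef(X, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return float(eigvals[:k].sum() / eigvals.sum())
```

In practice one retains the smallest k that pushes this fraction past a chosen threshold, then inspects the rotated loadings to name each factor (e.g., "riverine input" or "industrial effluent").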
Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.
2013-11-15
Purpose: Dual-energy computed tomography (DECT) makes it possible to obtain two basis-material fractions without segmentation: a soft-tissue-equivalent water fraction and a hard-matter-equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log. Following Bayesian inference, the decomposition fractions and observation variance are estimated using the joint maximum a posteriori (MAP) estimation method. With an adaptive prior model assigned to the variance, the joint MAP estimation problem simplifies into a minimization problem with a nonquadratic cost function, which is solved using a monotone conjugate gradient algorithm with suboptimal descent steps. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. 
Accurate spectrum information about the source-detector system is also necessary. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. Conclusions: The proposed approach is a statistical reconstruction approach based on a nonlinear forward model that accounts for the full beam polychromaticity and is applied directly to the projections without taking the negative log. Compared to the approaches based on linear forward models and to the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
Real-Time Active Cosmic Neutron Background Reduction Methods
Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ronald; Mitchell, Stephen; Guss, Paul
2013-09-01
Neutron counting using large arrays of pressurized 3He proportional counters from an aerial system or in a maritime environment suffers from background counts from primary cosmic neutrons and from secondary neutrons produced by cosmic ray-induced mechanisms like spallation and charge-exchange reactions. This paper reports the work performed at the Remote Sensing Laboratory-Andrews (RSL-A) and the results obtained using two different methods to reduce the cosmic neutron background in real time. Both methods used shielding materials with a high concentration (up to 30% by weight) of neutron-absorbing materials, such as natural boron, to remove the low-energy neutron flux from the cosmic background as the first step of the background reduction process. Our first method was to design, prototype, and test an up-looking plastic scintillator (BC-400, manufactured by Saint Gobain Corporation) to tag the cosmic neutrons and then create a logic pulse of a fixed time duration (~120 μs) to block the data taken by the neutron counter (pressurized 3He tubes running in proportional counter mode). The second method examined the time correlation between the arrival of two successive neutron signals at the counting array and calculated the excess variance (Feynman variance Y2F) of the neutron count distribution relative to a Poisson distribution. The dilution of this variance from cosmic background values would ideally signal the presence of man-made neutrons. The first method has been technically successful in tagging the neutrons in the cosmic-ray flux and preventing them from being counted in the 3He tube array by an electronic veto; field measurement work shows the efficiency of the electronic veto counter to be about 87%. The second method has successfully derived an empirical relationship between the percentage non-cosmic component in a neutron flux and the Y2F of the measured neutron count distribution. 
By using shielding materials alone, approximately 55% of the neutron flux from man-made sources like 252Cf or Am-Be was removed.
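The Feynman excess variance used in the second method is simply the variance-to-mean ratio of gated neutron counts, minus one; a minimal sketch:

```python
def feynman_y(counts_per_gate):
    """Feynman excess variance Y2F = var/mean - 1 of gated neutron counts.

    An uncorrelated (Poisson) source such as the cosmic background gives
    Y2F ~ 0; time-correlated neutrons from man-made fission sources push
    it above 0, which is the signature the second method looks for.
    """
    n = len(counts_per_gate)
    mean = sum(counts_per_gate) / n
    var = sum((c - mean) ** 2 for c in counts_per_gate) / n
    return var / mean - 1.0
```

In a real measurement the counts are binned into many gate widths and Y2F is examined as a function of gate width; the single-width version above shows only the core statistic.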
Scaling impacts on environmental controls and spatial heterogeneity of soil organic carbon stocks
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Mishra, U.; Riley, W. J.
2015-01-27
The spatial heterogeneity of land surfaces affects energy, moisture, and greenhouse gas exchanges with the atmosphere. However, representing heterogeneity of terrestrial hydrological and biogeochemical processes in earth system models (ESMs) remains a critical scientific challenge. We report the impact of spatial scaling on environmental controls, spatial structure, and statistical properties of soil organic carbon (SOC) stocks across the US state of Alaska. We used soil profile observations and environmental factors such as topography, climate, land cover types, and surficial geology to predict the SOC stocks at a 50 m spatial scale. These spatially heterogeneous estimates provide a dataset with reasonable fidelity to the observations at a sufficiently high resolution to examine the environmental controls on the spatial structure of SOC stocks. We upscaled both the predicted SOC stocks and environmental variables from finer to coarser spatial scales (s = 100, 200, 500 m, 1, 2, 5, 10 km) and generated various statistical properties of SOC stock estimates. We found different environmental factors to be statistically significant predictors at different spatial scales. Only elevation, temperature, potential evapotranspiration, and scrub land cover types were significant predictors at all scales. The strengths of control (the median value of geographically weighted regression coefficients) of these four environmental variables on SOC stocks decreased with increasing scale and were accurately represented using mathematical functions (R2 = 0.83–0.97). The spatial structure of SOC stocks across Alaska changed with spatial scale. Although the variance (sill) and unstructured variability (nugget) of the calculated variograms of SOC stocks decreased exponentially with scale, the correlation length (range) remained relatively constant across scale. 
The variance of predicted SOC stocks decreased with spatial scale over the range of 50 to ~ 500 m, and remained constant beyond this scale. The fitted exponential function accounted for 98% of variability in the variance of SOC stocks. We found moderately accurate linear relationships between mean and higher-order moments of predicted SOC stocks (R2 ~ 0.55–0.63). Current ESMs operate at coarse spatial scales (50–100 km), and are therefore unable to represent environmental controllers and spatial heterogeneity of high-latitude SOC stocks consistent with observations. We conclude that improved understanding of the scaling behavior of environmental controls and statistical properties of SOC stocks can improve ESM land model benchmarking and perhaps allow representation of spatial heterogeneity of biogeochemistry at scales finer than those currently resolved by ESMs.
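The sill, nugget, and range quoted above are read off an empirical variogram. A 1-D sketch of the underlying computation on hypothetical transect data (the study worked with 2-D gridded SOC estimates):

```python
import numpy as np

def empirical_variogram(x, z, lags):
    """Empirical semivariogram gamma(h) = 0.5 * mean[(z_i - z_j)^2] over
    point pairs whose separation falls in each lag bin (lo, hi].

    The sill is the plateau of gamma, the nugget its intercept as h -> 0,
    and the range the separation at which the plateau is reached.
    """
    x, z = np.asarray(x, float), np.asarray(z, float)
    d = np.abs(x[:, None] - x[None, :])         # pairwise separations
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2   # semivariance of each pair
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (d > lo) & (d <= hi)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Hypothetical transect: alternating values decorrelate at lag 1.
g = empirical_variogram([0, 1, 2, 3], [0.0, 1.0, 0.0, 1.0], [0, 1, 2])
```

Fitting a model (e.g., exponential) to the binned gamma values is what yields the scale dependence of sill and nugget reported in the abstract.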
Temporary Cementitious Sealers in Enhanced Geothermal Systems
Sugama T.; Pyatina, T.; Butcher, T.; Brothers, L.; Bour, D.
2011-12-31
Unlike conventional hydrothermal geothermal technology, which utilizes hot water as the energy conversion resource tapped from a natural hydrothermal reservoir located at {approx}10 km below the ground surface, an Enhanced Geothermal System (EGS) must create a hydrothermal reservoir in a hot rock stratum at temperatures {ge}200 C, present {approx}5 km deep underground, by employing hydraulic fracturing. This is the process of initiating and propagating a fracture, as well as opening pre-existing fractures, in a rock layer. In this operation, considerable attention is paid to the pre-existing fractures and to pressure-generated ones made in the underground foundation during drilling and logging. These fractures, as lost circulation zones, often cause the wastage of a substantial amount of the circulated water-based drilling fluid or mud. Thus, such lost circulation zones must be plugged by sealing materials so that the drilling operation can resume and continue. Next, one important consideration is the fact that the sealers must be disintegrated by highly pressurized water to reopen the plugged fractures and to promote the propagation of the reopened fractures. In response to this need, the objective of this phase I project in FYs 2009-2011 was to develop temporary cementitious fracture sealing materials possessing self-degradable properties that emerge when {ge}200 C-heated sealers come in contact with water. At BNL, we formulated two types of non-Portland cementitious systems using inexpensive industrial by-products with pozzolanic properties, such as granulated blast-furnace slag from the steel industries and fly ashes from coal-combustion power plants. These by-products were activated by sodium silicate to initiate their pozzolanic reactions and to create a cementitious structure. 
One developed system was sodium silicate alkali-activated slag/Class C fly ash (AASC); the other was sodium silicate alkali-activated slag/Class F fly ash (AASF), used as the binder of the temporary sealers. Beyond the sodium silicate alkaline activator, two specific additives were developed in this project: one was sodium carboxymethyl cellulose (CMC), as a self-degradation promoting additive; the other was hard-burned magnesium oxide (MgO), made by calcining at 1,000-1,500 C, as an expansive additive. The AASC and AASF cementitious sealers made by incorporating an appropriate amount of these additives met the following six criteria: 1) a one-component dry mix product; 2) plastic viscosity of 20 to 70 cP at 300 rpm; 3) maintenance of pumpability for at least 1 hour at 85 C; 4) compressive strength >2000 psi; 5) self-degradable by injection of water at a certain pressure; and 6) expandable and swelling properties, {ge}0.5% of the total volume of the sealer.
Dupuis, Paul
2014-03-14
This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.
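For the second class of problems, the textbook remedy is importance sampling; a hedged toy example for a Gaussian tail probability, illustrating how reweighting removes the ~1/p sample requirement of naive Monte Carlo (this shows the variance problem the proposal targets, not its large deviation machinery):

```python
import math
import random

def rare_event_prob(a, n=200000, seed=7):
    """Importance-sampling estimate of P(X > a) for X ~ N(0, 1).

    Sample from the tilted density N(a, 1) and reweight by the likelihood
    ratio phi(y) / phi_a(y) = exp(a^2/2 - a*y). Naive Monte Carlo would need
    on the order of 1/P(X > a) samples just to observe the event once.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(a, 1.0)                        # proposal centered at a
        if y > a:
            total += math.exp(0.5 * a * a - a * y)   # likelihood ratio weight
    return total / n
```

Large deviation theory enters precisely in choosing the tilt: the exponential change of measure above is the asymptotically optimal one for this tail event.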
Fabrication of FCC-SiO{sub 2} colloidal crystals using the vertical convective self-assembly method
Castañeda-Uribe, O. A.; Salcedo-Reyes, J. C.; Méndez-Pinzón, H. A.; Pedroza-Rodríguez, A. M.
2014-05-15
In order to determine the optimal conditions for the growth of high-quality 250 nm-SiO{sub 2} colloidal crystals by the vertical convective self-assembly method, the Design of Experiments (DoE) methodology is applied. The influence of the evaporation temperature, the volume fraction, and the pH of the colloidal suspension is studied by means of an analysis of variance (ANOVA) in a 3{sup 3} factorial design. Characteristics of the stacking lattice of the resulting colloidal crystals are determined by scanning electron microscopy and angle-resolved transmittance spectroscopy. Quantitative results from the statistical test show that the temperature is the most critical factor influencing the quality of the colloidal crystal, with highly ordered structures with an FCC stacking lattice obtained at a growth temperature of 40 C.
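The ANOVA underlying the DoE conclusions partitions the total variance into between-level and within-level components and compares their mean squares. A one-way sketch of the F statistic (the actual 3{sup 3} factorial analysis also handles factor interactions):

```python
def anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square. A large F flags a factor (e.g., evaporation
    temperature) whose levels explain much of the total variance."""
    k = len(groups)                              # number of factor levels
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The computed F is then compared against the F distribution with (k-1, n-k) degrees of freedom to decide significance at the chosen level.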
Calculational method for determination of carburetor icing rate
Nazarov, V.I.; Emel'yanov, V.E.; Gonopol'ska, A.F.; Zaslavskii, A.A.
1986-05-01
This paper investigates the dependence of the carburetor icing rate on the density, distillation curve, and vapor pressure of gasoline. More than 100 gasoline samples, covering a range of volatility, were investigated. No clear-cut relationship can be observed between the carburetor icing rate and any specific property index of the gasoline. At the same time, there are certain variables that cannot be observed directly but can be interpreted readily, through which the influence of gasoline quality on the carburetor icing rate can be explained. The conversion to these variables was accomplished with regard for the values of the variance and correlation of the carburetor icing rate. Equations are presented that may be used to predict the carburetor icing rate when using gasolines differing in quality. The equations can also determine the need for incorporating anti-icing additives in the gasoline.
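Prediction equations of this kind are typically fitted by least squares to the derived variables; a minimal ordinary-least-squares sketch (the paper's actual variables and coefficients are not reproduced here):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y ~ slope * x + intercept.

    Sketch of the regression step that turns a derived (latent) volatility
    variable into a predictor of icing rate; multi-variable versions stack
    several such predictors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx
```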
LIFE ESTIMATION OF HIGH LEVEL WASTE TANK STEEL FOR F-TANK FARM CLOSURE PERFORMANCE ASSESSMENT - 9310
Subramanian, K.; Wiersma, B.; Harris, S.
2009-01-12
High level radioactive waste (HLW) is stored in underground carbon steel storage tanks at the Savannah River Site. The underground tanks will be closed by removing the bulk of the waste, chemical cleaning, heel removal, stabilizing remaining residuals with tailored grout formulations, and severing/sealing external penetrations. An estimation of the life of the carbon steel materials of construction has been completed in support of the performance assessment. The estimation considered general and localized corrosion mechanisms of the tank steel exposed to grouted conditions. A stochastic approach was followed to estimate the distributions of failures based upon mechanisms of corrosion, accounting for variances in each of the independent variables. The methodology and results for one type of tank are presented.
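The stochastic approach, propagating variances in the independent variables into a distribution of failure times, can be sketched with simple Monte Carlo sampling; all rates and dimensions below are illustrative placeholders, not values from the assessment:

```python
import random

def failure_time_distribution(n=10000, seed=3):
    """Monte Carlo sketch: sample uncertain corrosion inputs and propagate
    each draw to a time-to-penetration, yielding a distribution of failure
    times rather than a single deterministic life.

    The uniform wall thickness and lognormal corrosion rate below are
    hypothetical stand-ins for the assessment's input distributions."""
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        thickness_mm = rng.uniform(12.0, 13.0)           # placeholder wall
        rate_mm_per_yr = rng.lognormvariate(-4.0, 0.5)   # placeholder rate
        times.append(thickness_mm / rate_mm_per_yr)
    return sorted(times)
```

Percentiles of the sorted output give the failure-time distribution quantiles that feed a performance assessment.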
Aab, Alexander
2014-12-31
We report a study of the distributions of the depth of maximum, Xmax, of extensive air-shower profiles with energies above 10{sup 17.8} eV as observed with the fluorescence telescopes of the Pierre Auger Observatory. The analysis method for selecting a data sample with minimal sampling bias is described in detail, as well as the experimental cross-checks and systematic uncertainties. Furthermore, we discuss the detector acceptance and the resolution of the Xmax measurement and provide parametrizations thereof as a function of energy. Finally, the energy dependence of the mean and standard deviation of the Xmax distributions is compared to air-shower simulations for different nuclear primaries and interpreted in terms of the mean and variance of the logarithmic mass distribution at the top of the atmosphere.
Pilania, G.; Gubernatis, J. E.; Lookman, T.
2015-12-03
The role of dynamical (or Born effective) charges in the classification of octet AB-type binary compounds between four-fold (zincblende/wurtzite crystal structures) and six-fold (rocksalt crystal structure) coordinated systems is discussed. We show that the difference in the dynamical charges of the fourfold and sixfold coordinated structures, in combination with Harrison's polarity, serves as an excellent feature to classify the coordination of 82 sp-bonded binary octet compounds. We use a support vector machine classifier to estimate the average classification accuracy and the associated variance in our model, where a decision boundary is learned in a supervised manner. Lastly, we compare the out-of-sample classification accuracy achieved by our feature pair with those reported previously.
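The reported statistics, the average classification accuracy and its variance, come from repeated train/test evaluation. A dependency-free sketch using cross-validation folds, with a nearest-centroid stand-in for the paper's support vector machine:

```python
import random

def nearest_centroid_predict(train_x, train_y, x):
    """Assign x to the class with the nearest training centroid
    (a simple stand-in classifier; the study used an SVM)."""
    cents = {}
    for cls in set(train_y):
        pts = [p for p, lab in zip(train_x, train_y) if lab == cls]
        cents[cls] = tuple(sum(c) / len(pts) for c in zip(*pts))
    dist2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda cls: dist2(x, cents[cls]))

def cv_accuracy(X, y, folds=5, seed=0):
    """Mean and variance of classification accuracy over shuffled
    cross-validation folds."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    rng.shuffle(idx)
    accs = []
    for f in range(folds):
        test = idx[f::folds]
        train = [i for i in idx if i not in test]
        hits = sum(nearest_centroid_predict([X[i] for i in train],
                                            [y[i] for i in train],
                                            X[j]) == y[j] for j in test)
        accs.append(hits / len(test))
    mean = sum(accs) / folds
    var = sum((a - mean) ** 2 for a in accs) / folds
    return mean, var
```

For a well-separated feature pair, as the abstract reports for dynamical charge difference versus polarity, the fold-to-fold variance stays small.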
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
The Atacama Cosmology Telescope: cross correlation with Planck maps
Louis, Thibaut; Calabrese, Erminia; Dunkley, Joanna; Næss, Sigurd; Addison, Graeme E.; Hincks, Adam D.; Hasselfield, Matthew; Hlozek, Renée; Bond, J. Richard; Hajian, Amir; Das, Sudeep; Devlin, Mark J.; Dünner, Rolando; Infante, Leopoldo; Gralla, Megan; Marriage, Tobias A.; Huffenberger, Kevin; Kosowsky, Arthur; Moodley, Kavilan; Niemack, Michael D.; and others
2014-07-01
We present the temperature power spectrum of the Cosmic Microwave Background obtained by cross-correlating maps from the Atacama Cosmology Telescope (ACT) at 148 and 218 GHz with maps from the Planck satellite at 143 and 217 GHz, in two overlapping regions covering 592 square degrees. We find excellent agreement between the two datasets at both frequencies, quantified using the variance of the residuals between the ACT power spectra and the ACT × Planck cross-spectra. We use these cross-correlations to measure the calibration of the ACT data at 148 and 218 GHz relative to Planck, to 0.7% and 2% precision respectively. We find no evidence for anisotropy in the calibration parameter. We compare the Planck 353 GHz power spectrum with the measured amplitudes of dust and cosmic infrared background (CIB) of ACT data at 148 and 218 GHz. We also compare planet and point source measurements from the two experiments.
Unconventional Fermi surface in an insulating state
Harrison, Neil; Tan, B. S.; Hsu, Y. -T.; Zeng, B.; Hatnean, M. Ciomaga; Zhu, Z.; Hartstein, M.; Kiourlappou, M.; Srivastava, A.; Johannes, M. D.; Murphy, T. P.; Park, J. -H.; Balicas, L.; Lonzarich, G. G.; Balakrishnan, G.; Sebastian, Suchitra E.
2015-07-17
Insulators occur in more than one guise; a recent finding was a class of topological insulators, which host a conducting surface juxtaposed with an insulating bulk. Here, we report the observation of an unusual insulating state with an electrically insulating bulk that simultaneously yields bulk quantum oscillations with characteristics of an unconventional Fermi liquid. We present quantum oscillation measurements of magnetic torque in high-purity single crystals of the Kondo insulator SmB_{6}, which reveal quantum oscillation frequencies characteristic of a large three-dimensional conduction electron Fermi surface similar to the metallic rare earth hexaborides such as PrB_{6} and LaB_{6}. As a result, the quantum oscillation amplitude strongly increases at low temperatures, appearing strikingly at variance with conventional metallic behavior.
Optimized nested Markov chain Monte Carlo sampling: theory
Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D
2009-01-01
Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
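The composite-move construction described in this abstract can be sketched in a few lines. This is a toy illustration, not the authors' code: the harmonic reference potential, the quartic "full" potential, and a canonical (rather than isothermal-isobaric) ensemble are all assumptions made for brevity.

```python
import math
import random

random.seed(0)
BETA = 1.0  # inverse temperature (canonical ensemble for simplicity)

def e_ref(x):
    """Cheap reference potential (harmonic)."""
    return 0.5 * x * x

def e_full(x):
    """'Full' potential: the reference plus an anharmonic correction."""
    return 0.5 * x * x + 0.1 * x ** 4

def metropolis_step(x, energy, step=0.5):
    """One ordinary Metropolis move on the given potential."""
    trial = x + random.uniform(-step, step)
    de = energy(trial) - energy(x)
    if de <= 0 or random.random() < math.exp(-BETA * de):
        return trial
    return x

def nested_chain(n_composite=2000, n_inner=10):
    """Markov chain whose composite moves are sub-chains on the reference."""
    x, samples = 0.0, []
    for _ in range(n_composite):
        x0 = x
        for _ in range(n_inner):           # cheap walk, reference energy only
            x = metropolis_step(x, e_ref)
        # modified Metropolis criterion on the endpoints: the full energy
        # is evaluated only here, once per composite move
        delta = (e_full(x) - e_full(x0)) - (e_ref(x) - e_ref(x0))
        if delta > 0 and random.random() >= math.exp(-BETA * delta):
            x = x0                          # reject the whole composite move
        samples.append(x)
    return samples

samples = nested_chain()
```

Because the full energy is touched only once per composite move, consecutive full-potential samples are far less correlated than consecutive single steps would be, which is the variance-reduction effect the abstract describes.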
Studies of Cosmic Ray Composition and Air Shower Structure with the Pierre Auger Observatory
Abraham, J.; Abreu, P.; Aglietta, M.; Aguirre, C.; Ahn, E.J.; Allard, D.; Allekotte, I.; Allen, J.; Alvarez-Muniz, J.; Ambrosio, M.; Anchordoqui, L.
2009-06-01
These presentations were prepared for the 31st International Cosmic Ray Conference, held in Lodz, Poland, in July 2009. The contribution consists of the following presentations: (1) Measurement of the average depth of shower maximum and its fluctuations with the Pierre Auger Observatory; (2) Study of the nuclear mass composition of UHECR with the surface detectors of the Pierre Auger Observatory; (3) Comparison of data from the Pierre Auger Observatory with predictions from air shower simulations: testing models of hadronic interactions; (4) A Monte Carlo exploration of methods to determine the UHECR composition with the Pierre Auger Observatory; (5) The delay of the start-time measured with the Pierre Auger Observatory for inclined showers and a comparison of its variance with models; (6) UHE neutrino signatures in the surface detector of the Pierre Auger Observatory; and (7) The electromagnetic component of inclined air showers at the Pierre Auger Observatory.
Characterizing cemented TRU waste for RCRA hazardous constituents
Yeamans, D.R.; Betts, S.E.; Bodenstein, S.A.; and others
1996-06-01
Los Alamos National Laboratory (LANL) has characterized drums of solidified transuranic (TRU) waste from four major waste streams. The data will help the State of New Mexico determine whether or not to issue a no-migration variance for the Waste Isolation Pilot Plant (WIPP) so that WIPP can receive and dispose of the waste. The need to characterize TRU waste stored at LANL is driven by two additional factors: (1) the LANL RCRA Waste Analysis Plan for EPA-compliant safe storage of hazardous waste; and (2) the WIPP Waste Acceptance Criteria (WAC). The LANL characterization program includes headspace gas analysis, radioassay, and radiography for all drums, and solids sampling on a random selection of drums from each waste stream. Data are presented showing that the only identified non-metal RCRA hazardous component of the waste is methanol.
Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Guinta, Anthony A.; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
“Lidar Investigations of Aerosol, Cloud, and Boundary Layer Properties Over the ARM ACRF Sites”
Ferrare, Richard; Turner, David
2015-01-13
Project goals; Characterize the aerosol and ice vertical distributions over the ARM NSA site, and in particular to discriminate between elevated aerosol layers and ice clouds in optically thin scattering layers; Characterize the water vapor and aerosol vertical distributions over the ARM Darwin site, how these distributions vary seasonally, and quantify the amount of water vapor and aerosol that is above the boundary layer; Use the high temporal resolution Raman lidar data to examine how aerosol properties vary near clouds; Use the high temporal resolution Raman lidar and Atmospheric Emitted Radiance Interferometer (AERI) data to quantify entrainment in optically thin continental cumulus clouds; and Use the high temporal resolution Raman lidar data to continue to characterize the turbulence within the convective boundary layer and how the turbulence statistics (e.g., variance, skewness) are correlated with larger scale variables predicted by models.
On the local variation of the Hubble constant
Odderskov, Io; Hannestad, Steen [Department of Physics and Astronomy, University of Aarhus, DK-8000 Aarhus C (Denmark); Haugbølle, Troels, E-mail: isho07@phys.au.dk, E-mail: sth@phys.au.dk, E-mail: troels.haugboelle@snm.ku.dk [Centre for Star and Planet Formation, Natural History Museum of Denmark and Niels Bohr Institute University of Copenhagen, DK-1350 Copenhagen (Denmark)
2014-10-01
We have carefully studied how local measurements of the Hubble constant, H{sub 0}, can be influenced by a variety of different parameters related to survey depth, size, and fraction of the sky observed, as well as observer position in space. Our study is based on N-body simulations of structure in the standard ΛCDM model and our conclusion is that the expected variance in measurements of H{sub 0} is far too small to explain the current discrepancy between the low value of H{sub 0} inferred from measurements of the cosmic microwave background (CMB) by the Planck collaboration and the value measured directly in the local universe by use of Type Ia supernovae. This conclusion is very robust and does not change with different assumptions about effective sky coverage and depth of the survey or observer position in space.
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B.
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
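As a concrete instance of the rank-transformation procedure in the list above, the sketch below propagates a toy three-input model through Monte Carlo sampling and computes Spearman rank correlations as a sensitivity measure. The model and input distributions are invented for illustration and are not from the survey.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x1, x2, x3):
    """Toy analysis: x1 dominates, x2 enters nonlinearly, x3 is inert."""
    return 4.0 * x1 + x2 ** 3 + 0.0 * x3

n = 1000
inputs = {name: rng.uniform(-1, 1, n) for name in ("x1", "x2", "x3")}
y = model(**inputs)

def rank(a):
    """Rank transform (no ties expected for continuous samples)."""
    return np.argsort(np.argsort(a)).astype(float)

ry = rank(y)
for name, x in inputs.items():
    # Pearson correlation of the ranks = Spearman rank correlation
    rcc = np.corrcoef(rank(x), ry)[0, 1]
    print(f"{name}: rank correlation coefficient {rcc:+.2f}")
```

The rank transform makes the measure robust to the monotone nonlinearity in x2, which a raw linear correlation would partly miss; the inert input x3 correctly shows a coefficient near zero.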
COMMENT ON TRITIUM ABSORPTION-DESORPTION CHARACTERISTICS OF LANI4.25AL0.75
Walters, T
2007-04-10
The thermodynamic data for LaNi{sub 4.25}Al{sub 0.75} tritide, reported by Wang et al. (W.-d. Wang et al., J. Alloys Compd. (2006) doi:10.1016/j.jallcom.2006.09.122), are at variance with our published data. The plateau pressures for the P-C-T isotherms at all temperatures are significantly lower than the published data. As a result, the derived thermodynamic parameters, {Delta}H{sup o} and {Delta}S{sup o}, are questionable. Using the thermodynamic parameters derived from the data reported by Wang et al. will result in underestimating the expected pressures, and therefore will not provide the desired performance for storing and processing tritium.
Thermal properties of Ni-substituted LaCoO{sub 3} perovskite
Thakur, Rasna; Thakur, Rajesh K.; Gaur, N. K.; Srivastava, Archana
2014-04-24
With the objective of exploring the unknown thermodynamic behavior of the LaCo{sub 1-x}Ni{sub x}O{sub 3} family, we present here an investigation of the temperature-dependent (10 K ≤ T ≤ 300 K) thermodynamic properties of LaCo{sub 1-x}Ni{sub x}O{sub 3} (x = 0.1, 0.3, 0.5). The specific heat of LaCoO{sub 3} with Ni doping at the B-site of the perovskite structure has been studied by means of a Modified Rigid Ion Model (MRIM). This replacement introduces large cation variance at the B-site; hence the specific heat increases appreciably. We report here, probably for the first time, the cohesive energy, Reststrahlen frequency, and Debye temperature (Θ{sub D}) of the LaCo{sub 1-x}Ni{sub x}O{sub 3} compounds.
Enhanced pinning in mixed rare earth-123 films
Driscoll, Judith L.; Foltyn, Stephen R.
2009-06-16
A superconductive article, and a method of forming such an article, is disclosed. The article includes a substrate and a layer of a rare earth barium cuprate film upon the substrate, the rare earth barium cuprate film including two or more rare earth metals capable of yielding a superconductive composition, where the ion size variance between the two or more rare earth metals is characterized as greater than zero and less than about 10x10{sup -4}, and the rare earth barium cuprate film including two or more rare earth metals is further characterized as having an enhanced critical current density in comparison to a standard YBa{sub 2}Cu{sub 3}O{sub y} composition under identical testing conditions.
Dynamical mass generation in unquenched QED using the Dyson-Schwinger equations
Kızılersü, Ayse; Sizer, Tom; Pennington, Michael R.; Williams, Anthony G.; Williams, Richard
2015-03-13
We present a comprehensive numerical study of dynamical mass generation for unquenched QED in four dimensions, in the absence of four-fermion interactions, using the Dyson-Schwinger approach. We begin with an overview of previous investigations of criticality in the quenched approximation. To this we add an analysis using a new fermion-antifermion-boson interaction ansatz, the Kizilersu-Pennington (KP) vertex, developed for an unquenched treatment. After surveying criticality in previous unquenched studies, we investigate the performance of the KP vertex in dynamical mass generation using a renormalized fully unquenched system of equations. This we compare with the results for two hybrid vertices incorporating the Curtis-Pennington vertex in the fermion equation. We conclude that the KP vertex is as yet incomplete, and its relative gauge-variance is due to its lack of massive transverse components in its design.
McDowell, Allen K.; Ellefson, Mark D.; McDonald, Kent M.
2015-06-25
The treatment, shipping, and disposal of a highly radioactive radium/barium waste stream have presented a complex set of challenges requiring several years of effort. The project illustrates the difficulty and high cost of managing even small quantities of highly radioactive Resource Conservation and Recovery Act (RCRA)-regulated waste. Pacific Northwest National Laboratory (PNNL) research activities produced a Type B quantity of radium chloride low-level mixed waste (LLMW) in a number of small vials in a facility hot cell. The resulting waste management project involved a mock-up RCRA stabilization treatment, a failed in-cell treatment, a second, alternative RCRA treatment approach, coordinated regulatory variances and authorizations, alternative transportation authorizations, additional disposal facility approvals, and a final radiological stabilization process.
Brandt, Charles A.; Becker, James M.; Porta, Augusto C.
2001-12-01
Following a large blowout of crude oil in northern Italy in 1994, the distribution of polyaromatic hydrocarbons (PAHs) was examined over time and space in soils, uncultivated wild vegetation, insects, mice, and frogs in the area. Within 2 y of the blowout, PAH concentrations declined to background levels over much of the area where initial concentrations were within an order of magnitude above background, but had not declined to background in areas where starting concentrations exceeded background by two orders of magnitude. Octanol-water partitioning and extent of alkylation explained much of the variance in uptake of PAHs by plants and animals. Lower Kow PAHs and higher-alkylated PAHs had higher soil-to-biota accumulation factors (BSAFs) than did high-Kow and unalkylated forms. BSAFs for higher Kow PAHs were very low for plants, but much higher for animals, with frogs accumulating more of these compounds than other species.
Schilling, Oleg; Mueschke, Nicholas J.
2010-10-18
Data from a 1152x760x1280 direct numerical simulation (DNS) of a transitional Rayleigh-Taylor mixing layer modeled after a small Atwood number water channel experiment is used to comprehensively investigate the structure of mean and turbulent transport and mixing. The simulation had physical parameters and initial conditions approximating those in the experiment. The budgets of the mean vertical momentum, heavy-fluid mass fraction, turbulent kinetic energy, turbulent kinetic energy dissipation rate, heavy-fluid mass fraction variance, and heavy-fluid mass fraction variance dissipation rate equations are constructed using Reynolds averaging applied to the DNS data. The relative importance of mean and turbulent production, turbulent dissipation and destruction, and turbulent transport are investigated as a function of Reynolds number and across the mixing layer to provide insight into the flow dynamics not presently available from experiments. The analysis of the budgets supports the assumption for small Atwood number, Rayleigh-Taylor driven flows that the principal transport mechanisms are buoyancy production, turbulent production, turbulent dissipation, and turbulent diffusion (shear and mean field production are negligible). As the Reynolds number increases, the turbulent production in the turbulent kinetic energy dissipation rate equation becomes the dominant production term, while the buoyancy production plateaus. Distinctions between momentum and scalar transport are also noted, where the turbulent kinetic energy and its dissipation rate both grow in time and are peaked near the center plane of the mixing layer, while the heavy-fluid mass fraction variance and its dissipation rate initially grow and then begin to decrease as mixing progresses and reduces density fluctuations.
All terms in the transport equations generally grow or decay, with no qualitative change in their profile, except for the pressure flux contribution to the total turbulent kinetic energy flux, which changes sign early in time (a countergradient effect). The production-to-dissipation ratios corresponding to the turbulent kinetic energy and heavy-fluid mass fraction variance are large and vary strongly at small evolution times, decrease with time, and nearly asymptote as the flow enters a self-similar regime. The late-time turbulent kinetic energy production-to-dissipation ratio is larger than observed in shear-driven turbulent flows. The order of magnitude estimates of the terms in the transport equations are shown to be consistent with the DNS at late time, and also confirm both the dominant terms and their evolutionary behavior. Thus, these results are useful for identifying the dynamically important terms requiring closure, and assessing the accuracy of the predictions of Reynolds-averaged Navier-Stokes and large-eddy simulation models of turbulent transport and mixing in transitional Rayleigh-Taylor instability-generated flow.
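The Reynolds-averaging step that produces these budgets can be illustrated on a synthetic field; the tanh mean profile and Gaussian fluctuations below are stand-ins for DNS data, not the simulation described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic snapshot of heavy-fluid mass fraction on a (z, x, y) grid:
# tanh mean profile across the layer, fluctuations confined to its centre
nz, nx, ny = 64, 32, 32
z = np.linspace(-3.0, 3.0, nz)[:, None, None]
mean_profile = 0.5 * (1.0 + np.tanh(z))
fluct = 0.1 * np.exp(-z ** 2) * rng.standard_normal((nz, nx, ny))
f = np.clip(mean_profile + fluct, 0.0, 1.0)

# Reynolds averaging over the homogeneous horizontal directions
f_bar = f.mean(axis=(1, 2), keepdims=True)     # mean mass fraction <f>(z)
f_prime = f - f_bar                            # fluctuating part f'
variance = (f_prime ** 2).mean(axis=(1, 2))    # mass-fraction variance <f'f'>(z)

# as in the budgets above, the variance peaks near the centre plane
print("peak variance at z =", float(z.ravel()[variance.argmax()]))
```

Every budget term in the abstract (production, dissipation, turbulent transport) is built from exactly this decomposition of a field into a horizontal mean and a fluctuation.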
FLUOR HANFORD SAFETY MANAGEMENT PROGRAMS
GARVIN, L J; JENSEN, M A
2004-04-13
This document summarizes safety management programs used within the scope of the ''Project Hanford Management Contract''. The document has been developed to meet the format and content requirements of DOE-STD-3009-94, ''Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses''. This document provides summary descriptions of Fluor Hanford safety management programs, which Fluor Hanford nuclear facilities may reference and incorporate into their safety basis when producing facility- or activity-specific documented safety analyses (DSA). Facility- or activity-specific DSAs will identify any variances to the safety management programs described in this document and any specific attributes of these safety management programs that are important for controlling potentially hazardous conditions. In addition, facility- or activity-specific DSAs may identify unique additions to the safety management programs that are needed to control potentially hazardous conditions.
Pilania, G.; Gubernatis, J. E.; Lookman, T.
2015-12-03
The role of dynamical (or Born effective) charges in classification of octet AB-type binary compounds between four-fold (zincblende/wurtzite crystal structures) and six-fold (rocksalt crystal structure) coordinated systems is discussed. We show that the difference in the dynamical charges of the fourfold and sixfold coordinated structures, in combination with Harrison’s polarity, serves as an excellent feature to classify the coordination of 82 sp–bonded binary octet compounds. We use a support vector machine classifier to estimate the average classification accuracy and the associated variance in our model where a decision boundary is learned in a supervised manner. Lastly, we compare the out-of-sample classification accuracy achieved by our feature pair with those reported previously.
Sub-Poissonian statistics in order-to-chaos transition
Kryuchkyan, Gagik Yu. [Yerevan State University, Manookyan 1, Yerevan 375049, (Armenia); Institute for Physical Research, National Academy of Sciences, Ashtarak-2 378410, (Armenia); Manvelyan, Suren B. [Institute for Physical Research, National Academy of Sciences, Ashtarak-2 378410, (Armenia)
2003-07-01
We study the phenomena at the overlap of quantum chaos and nonclassical statistics for the time-dependent model of a nonlinear oscillator. It is shown, in the framework of the Mandel Q parameter and the Wigner function, that the statistics of oscillatory excitation numbers is drastically changed in the order-to-chaos transition. A substantial improvement of the sub-Poissonian statistics, in comparison with that of the standard model of the driven anharmonic oscillator, is observed in the regular operational regime. It is shown that in the chaotic regime the system exhibits ranges of sub-Poissonian and super-Poissonian statistics which alternate with one another depending on the time interval. An unusual dependence of the variance of the oscillatory number on the external noise level is observed for the chaotic dynamics. The scaling invariance of the quantum statistics is demonstrated and its relation to dissipation and decoherence is studied.
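The Mandel Q parameter used above to classify the statistics is simple to compute from excitation-number records. The three synthetic count distributions below are illustrative stand-ins, not data from the driven-oscillator model.

```python
import numpy as np

rng = np.random.default_rng(7)

def mandel_q(counts):
    """Mandel Q = <(dn)^2>/<n> - 1: Q < 0 sub-Poissonian, Q > 0 super-Poissonian."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean() - 1.0

poissonian = rng.poisson(10.0, 100_000)                     # coherent-like, Q ~ 0
thermal_like = rng.poisson(rng.exponential(10.0, 100_000))  # bunched, Q > 0
squeezed_like = rng.binomial(20, 0.5, 100_000)              # number-squeezed, Q = -0.5

for name, c in [("Poissonian", poissonian), ("super-Poissonian", thermal_like),
                ("sub-Poissonian", squeezed_like)]:
    print(f"{name}: Q = {mandel_q(c):+.2f}")
```

A binomial number distribution has variance np(1-p) below its mean np, so it gives the negative Q that signals nonclassical, sub-Poissonian statistics.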
Kalman filter data assimilation: Targeting observations and parameter estimation
Bellsky, Thomas Kostelich, Eric J.; Mahalov, Alex
2014-06-15
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
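The variance-based targeting idea can be sketched with a scalar stochastic ensemble Kalman update (a simpler stand-in for the LETKF used in the paper; the toy state dimension, ensemble size, and error values are all assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)

n_state, n_ens = 8, 50
r_obs = 0.1 ** 2                       # observation error variance
truth = rng.standard_normal(n_state)
ens = truth[:, None] + rng.standard_normal((n_state, n_ens))  # prior ensemble

# target the single observation at the state component with the
# largest ensemble variance (the targeting criterion in the abstract)
k = int(ens.var(axis=1).argmax())
obs = truth[k] + rng.normal(0.0, np.sqrt(r_obs))

# scalar stochastic-EnKF update: each member assimilates its own
# perturbed copy of the observation
mean = ens.mean(axis=1, keepdims=True)
anom = ens - mean
cov_xk = anom @ anom[k] / (n_ens - 1)  # cov(state, observed component)
gain = cov_xk / (cov_xk[k] + r_obs)    # Kalman gain vector
perturbed = obs + rng.normal(0.0, np.sqrt(r_obs), n_ens)
ens = ens + gain[:, None] * (perturbed - ens[k])[None, :]

print("posterior spread at observed site:", float(ens[k].std()))
```

Observing where the ensemble variance is largest removes the most prior uncertainty per observation, which is the mechanism behind the skill improvement reported above.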
Mixing in thermally stratified nonlinear spin-up with uniform boundary fluxes
Baghdasarian, Meline; Pacheco-Vega, Arturo; Pacheco, J. Rafael; Verzicco, Roberto
2014-09-15
Studies of stratified spin-up experiments in enclosed cylinders have reported the presence of small pockets of well-mixed fluid, but quantitative measurements of the mixedness of the fluid have been lacking. Previous numerical simulations have not addressed these measurements. Here we present numerical simulations that explain how the combined effect of spin-up and thermal boundary conditions enhances or hinders mixing of a fluid in a cylinder. The energy of the system is characterized by splitting the potential energy into diabatic and adiabatic components, and measurements of the efficiency of mixing are based on both the ratio of dissipation of available potential energy to forcing and the variance of temperature. The numerical simulations of the Navier-Stokes equations for the problem with different sets of thermal boundary conditions at the horizontal walls helped shed some light on the physical mechanisms of mixing, for which a clear explanation was absent.
The use of microdosimetric techniques in radiation protection measurements
Chen, J.; Hsu, H.H.; Casson, W.H.; Vasilik, D.G.
1997-01-01
A major objective of radiation protection is to determine the dose equivalent for routine radiation protection applications. As microdosimetry has developed over approximately three decades, its most important application has been in measuring radiation quality, especially in radiation fields of unknown or inadequately known energy spectra. In these radiation fields, determination of dose equivalent is not straightforward; however, the use of microdosimetric principles and techniques could solve this problem. In this paper, the authors discuss the measurement of lineal energy, a microscopic analog to linear energy transfer, and demonstrate the development and implementation of the variance-covariance method, a novel method in experimental microdosimetry. This method permits the determination of dose mean lineal energy, an essential parameter of radiation quality, in a radiation field of unknown spectrum, time-varying dose rate, and high dose rate. Real-time monitoring of changes in radiation quality can also be achieved by using microdosimetric techniques.
Ab initio molecular dynamics simulation of liquid water by quantum Monte Carlo
Zen, Andrea; Luo, Ye; Mazzola, Guglielmo; Sorella, Sandro; Guidoni, Leonardo
2015-04-14
Although liquid water is ubiquitous in chemical reactions at roots of life and climate on the earth, the prediction of its properties by high-level ab initio molecular dynamics simulations still represents a formidable task for quantum chemistry. In this article, we present a room temperature simulation of liquid water based on the potential energy surface obtained by a many-body wave function through quantum Monte Carlo (QMC) methods. The simulated properties are in good agreement with recent neutron scattering and X-ray experiments, particularly concerning the position of the oxygen-oxygen peak in the radial distribution function, at variance with previous density functional theory attempts. Given the excellent performances of QMC on large scale supercomputers, this work opens new perspectives for predictive and reliable ab initio simulations of complex chemical systems.
Identification of high shears and compressive discontinuities in the inner heliosphere
Greco, A.; Perri, S.
2014-04-01
Two techniques, the Partial Variance of Increments (PVI) and the Local Intermittency Measure (LIM), have been applied and compared using MESSENGER magnetic field data in the solar wind at a heliocentric distance of about 0.3 AU. The spatial properties of the turbulent field at different scales, spanning the whole inertial range of magnetic turbulence down toward the proton scales have been studied. LIM and PVI methodologies allow us to identify portions of an entire time series where magnetic energy is mostly accumulated, and regions of intermittent bursts in the magnetic field vector increments, respectively. A statistical analysis has revealed that at small time scales and for high level of the threshold, the bursts present in the PVI and the LIM series correspond to regions of high shear stress and high magnetic field compressibility.
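The PVI statistic itself is a one-line normalization of the field increments. The sketch below applies it to a synthetic signal with two sharp jumps standing in for solar-wind discontinuities; the signal, scale, and threshold are illustrative choices, not the MESSENGER analysis.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic field component: random-walk background plus two sharp jumps
# standing in for current-sheet-like discontinuities
n = 4000
b = np.cumsum(0.05 * rng.standard_normal(n))
b[1500:] += 2.0
b[3000:] -= 2.5

def pvi(b, tau):
    """Partial Variance of Increments: |db| normalised by its rms at scale tau."""
    db = b[tau:] - b[:-tau]
    return np.abs(db) / np.sqrt(np.mean(db ** 2))

series = pvi(b, tau=2)
bursts = np.flatnonzero(series > 3.0)   # threshold flags the discontinuities
print("burst locations:", bursts)
```

Raising the threshold isolates progressively sharper structures, which is how the statistical analysis above associates high-threshold PVI bursts with high-shear, compressive regions.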
High-precision calculation of the strange nucleon electromagnetic form factors
Green, Jeremy; Meinel, Stefan; Engelhardt, Michael G.; Krieg, Stefan; Laeuchli, Jesse; Negele, John W.; Orginos, Kostas; Pochinsky, Andrew; Syritsyn, Sergey
2015-08-26
We report a direct lattice QCD calculation of the strange nucleon electromagnetic form factors G^{s}_{E} and G^{s}_{M} in the kinematic range 0 ≤ Q^{2} ≤ 1.2GeV^{2}. For the first time, both G^{s}_{E} and G^{s}_{M} are shown to be nonzero with high significance. This work uses closer-to-physical lattice parameters than previous calculations, and achieves an unprecedented statistical precision by implementing a recently proposed variance reduction technique called hierarchical probing. We perform model-independent fits of the form factor shapes using the z-expansion and determine the strange electric and magnetic radii and magnetic moment. Finally, we compare our results to parity-violating electron-proton scattering data and to other theoretical studies.
Validated Models for Radiation Response and Signal Generation in Scintillators: Final Report
Kerisit, Sebastien N.; Gao, Fei; Xie, YuLong; Campbell, Luke W.; Van Ginhoven, Renee M.; Wang, Zhiguo; Prange, Micah P.; Wu, Dangxin
2014-12-01
This Final Report presents work carried out at Pacific Northwest National Laboratory (PNNL) under the project entitled “Validated Models for Radiation Response and Signal Generation in Scintillators” (Project number: PL10-Scin-theor-PD2Jf) and led by Drs. Fei Gao and Sebastien N. Kerisit. This project was divided into four tasks: (1) electronic response functions (ab initio data model); (2) electron-hole yield, variance, and spatial distribution; (3) ab initio calculations of information carrier properties; and (4) transport of electron-hole pairs and scintillation efficiency. Detailed information on the results obtained in each of the four tasks is provided in this Final Report. Furthermore, published peer-reviewed articles based on the work carried out under this project are included in the Appendix. This work was supported by the National Nuclear Security Administration, Office of Nuclear Nonproliferation Research and Development (DNN R&D/NA-22), of the U.S. Department of Energy (DOE).
Foo Kune, Denis; Mahadevan, Karthikeyan
2011-01-25
A recursive verification protocol to reduce the time variance due to delays in the network by putting the subject node at most one hop from the verifier node provides for an efficient manner to test wireless sensor nodes. Since the software signatures are time based, recursive testing will give a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, who in turn checks its neighbor, and continuing this process until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Utilizing techniques well known in the art, having a node tested twice, or not at all, can be avoided.
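The hop-by-hop verification and rerouting logic described in this protocol can be sketched as a graph traversal; the toy network below and the representation of a failed signature check as membership in a `failed` set are assumptions for illustration.

```python
def verify_network(graph, start, failed):
    """Walk the network one hop at a time: each verified node attests its
    neighbours next, so every subject stays at most one hop from its
    verifier. Nodes in `failed` flunk the check, and routing then avoids
    them. Returns the set of positively verified nodes, each tested once."""
    verified, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node in verified or node in failed:
            continue  # never test a node twice; halt paths through failures
        verified.add(node)
        # the freshly verified node now checks its own neighbours
        frontier.extend(n for n in graph[node] if n not in verified)
    return verified

# toy sensor network: node 0 is the main verifier, node 3 fails its check;
# node 5 is still reached via the alternative path 0-1-4-5
graph = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2, 5], 4: [1, 5], 5: [3, 4]}
print(verify_network(graph, start=0, failed={3}))
```

The `verified` set is what guarantees the patent's requirement that no node is tested twice, and keeping multiple candidate paths on the frontier is what finds an alternative route around a failed node.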
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations, and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley-Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
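The MLMC estimator referenced here can be illustrated on a much simpler problem than two-phase reservoir flow: the sketch below estimates the mean endpoint of a geometric Brownian motion with coupled Euler levels. All numerical choices (drift, volatility, sample counts per level) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
MU, SIG, T, S0 = 0.05, 0.2, 1.0, 1.0   # GBM drift, volatility, horizon, start

def level_samples(lev, n):
    """Coupled Euler paths: the fine path uses 2**lev steps, the coarse one
    half as many, and both share the same Brownian increments."""
    nf = 2 ** lev
    dt = T / nf
    dw = rng.normal(0.0, np.sqrt(dt), (n, nf))
    s_f = np.full(n, S0)
    for i in range(nf):
        s_f = s_f * (1.0 + MU * dt + SIG * dw[:, i])
    if lev == 0:
        return s_f
    s_c = np.full(n, S0)
    for i in range(nf // 2):
        s_c = s_c * (1.0 + MU * 2 * dt + SIG * (dw[:, 2 * i] + dw[:, 2 * i + 1]))
    return s_f - s_c                     # level correction P_l - P_{l-1}

# telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]: the cheap coarse
# level gets many samples, the small-variance corrections need only a few
n_per_level = [20000, 5000, 1000, 500]
estimate = sum(level_samples(lev, n).mean() for lev, n in enumerate(n_per_level))
print(f"MLMC estimate of E[S_T]: {estimate:.3f} (exact: {S0 * np.exp(MU * T):.3f})")
```

Because the coupled fine and coarse paths share their Brownian increments, the variance of each correction term shrinks with the level, which is what lets MLMC spend most of its samples at the cheapest level.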
Challenging the Mean Time to Failure: Measuring Dependability as a Mean Failure Cost
Sheldon, Frederick T; Mili, Ali
2009-01-01
many fronts: it ignores the variance in stakes among stakeholders; it fails to recognize the structure of complex specifications as the aggregate of overlapping requirements; it fails to recognize that different components of the specification carry different stakes, even for the same stakeholder; and it fails to recognize that V&V actions have different impacts with respect to the different components of the specification. Similar metrics of security, such as MTTD (Mean Time to Detection) and MTTE (Mean Time to Exploitation), suffer from the same shortcomings. In this paper we advocate a measure of dependability that acknowledges the aggregate structure of complex system specifications, and takes into account variations by stakeholder, by specification component, and by V&V impact.
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000^{®} problems. These benchmark and scaling studies show promising results.
Zhang, W. F.; Nishimula, T.; Nagashio, K.; Kita, K.; Toriumi, A.
2013-03-11
We report a consistent conduction band offset (CBO) at a GeO{sub 2}/Ge interface determined by internal photoemission spectroscopy (IPE) and charge-corrected X-ray photoelectron spectroscopy (XPS). IPE results showed that the CBO value was larger than 1.5 eV irrespective of metal electrode and substrate type variance, while an accurate determination of valence band offset (VBO) by XPS requires a careful correction of differential charging phenomena. The VBO value was determined to be 3.60 {+-} 0.2 eV by XPS after charge correction, thus yielding a CBO (1.60 {+-} 0.2 eV) in excellent agreement with the IPE results. Such a large CBO (>1.5 eV) confirmed here is promising in terms of using GeO{sub 2} as a potential passivation layer for future Ge-based scaled CMOS devices.
Planck constraints on monodromy inflation
Easther, Richard; Flauger, Raphael E-mail: flauger@ias.edu
2014-02-01
We use data from the nominal Planck mission to constrain modulations in the primordial power spectrum associated with monodromy inflation. The largest improvement in fit relative to the unmodulated model has Δχ{sup 2} ≈ 10, and we find no evidence for a primordial signal, in contrast to a previous analysis of the WMAP9 dataset, for which Δχ{sup 2} ≈ 20. The Planck and WMAP9 results are broadly consistent on angular scales where they are expected to agree, as far as best-fit values are concerned. However, even on these scales the significance of the signal is reduced in Planck relative to WMAP, and is consistent with a fit to the "noise" associated with cosmic variance. Our results motivate both a detailed comparison between the two experiments and a more careful study of the theoretical predictions of monodromy inflation.
Veil, J.A.
1994-06-01
This paper examines the economic and environmental impact to the power industry of limiting thermal mixing zones to 1000 feet and eliminating the Clean Water Act {section}316(a) variance. Power companies were asked what they would do if these two conditions were imposed. Most affected plants would retrofit cooling towers and some would retrofit diffusers. Assuming that all affected plants would proportionally follow the same options as the surveyed plants, the estimated capital cost of retrofitting cooling towers or diffusers at all affected plants exceeds $20 billion. Since both cooling towers and diffusers exert an energy penalty on a plant's output, the power companies must generate additional power. The estimated cost of the additional power exceeds $10 billion over 20 years. Generation of the extra power would emit over 8 million tons per year of additional carbon dioxide. Operation of the new cooling towers would cause more than 1.5 million gallons per minute of additional evaporation.
Features of MCNP6 Relevant to Medical Radiation Physics
Hughes, H. Grady III; Goorley, John T.
2012-08-29
MCNP (Monte Carlo N-Particle) is a general-purpose Monte Carlo code for simulating the transport of neutrons, photons, electrons, positrons, and more recently other fundamental particles and heavy ions. Over many years MCNP has found a wide range of applications in many different fields, including medical radiation physics. In this presentation we will describe and illustrate a number of significant recently-developed features in the current version of the code, MCNP6, having particular utility for medical physics. Among these are major extensions of the ability to simulate large, complex geometries, improvement in memory requirements and speed for large lattices, introduction of mesh-based isotopic reaction tallies, advances in radiography simulation, expanded variance-reduction capabilities, especially for pulse-height tallies, and a large number of enhancements in photon/electron transport.
Assessment of global warming effect on the level of extremes and intra-annual structure
Lobanov, V.A.
1997-12-31
In this research a new approach to the parametrization of intra-annual variations has been developed, based on poly-linear decomposition and relationships with average climate conditions. This method makes it possible to divide the complex intra-annual variations of each year into two main parts: climate and synoptic processes. The climate process is represented by the two coefficients (B1, B0) of a linear function relating the particular year's data to the average intra-year conditions over the long-term period. The coefficient B1 is connected with the amplitude of the intra-annual function and characterizes extreme events, while the B0 coefficient gives the level at which climate conditions are realized in the particular year. The synoptic process is determined from the residuals of each year's linear function, or from a generalized parameter of them such as the variance.
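The decomposition described above amounts to a per-year linear regression against the long-term mean annual cycle. A minimal sketch (variable names and data are illustrative; the paper's actual fitting procedure may differ in detail):

```python
import numpy as np

def climate_synoptic_split(year_series, climatology):
    """Regress one year's intra-annual series on the long-term mean
    annual cycle.  The slope B1 scales the amplitude (extreme events),
    the intercept B0 sets the level of climate-condition realization,
    and the residual variance summarizes the synoptic component."""
    b1, b0 = np.polyfit(climatology, year_series, 1)  # slope, intercept
    residuals = year_series - (b1 * climatology + b0)
    return b1, b0, residuals.var()
```

A year that is a pure rescaling and shift of the climatology has zero synoptic variance; real data leave a weather-scale residual.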
Energy Science and Technology Software Center (OSTI)
2012-01-05
ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three-dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
Seasonal cycle dependence of temperature fluctuations in the atmosphere. Master's thesis
Tobin, B.F.
1994-08-01
The correlation statistics of meteorological fields have been of interest in weather forecasting for many years and are also of interest in climate studies. A better understanding of the seasonal variation of correlation statistics can be used to determine how the seasonal cycle of temperature fluctuations should be simulated in noise-forced energy balance models. It is shown that the length scale does have a seasonal dependence and will have to be handled through the seasonal modulation of other coefficients in noise-forced energy balance models. The temperature field variance and spatial correlation fluctuations exhibit seasonality with fluctuation amplitudes larger in the winter hemisphere and over land masses. Another factor contributing to seasonal differences is the larger solar heating gradient in the winter.
An optical beam frequency reference with 10{sup -14} range frequency instability
McFerran, J. J.; Hartnett, J. G.; Luiten, A. N. [School of Physics, University of Western Australia, 35 Stirling Highway, Crawley, 6009 Western Australia (Australia)
2009-07-20
The authors report on a thermal beam optical frequency reference with a fractional frequency instability of 9.2x10{sup -14} at 1 s, reducing to 2.0x10{sup -14} at 64 s before slowly rising. The {sup 1}S{sub 0}{r_reversible}{sup 3}P{sub 1} intercombination line in neutral {sup 40}Ca is used as a frequency discriminator. A diode laser at 423 nm probes the ground state population after a Ramsey-Bordé sequence of 657 nm light-field interactions on the atoms. The measured fractional frequency instability is an order of magnitude improvement on previously reported thermal beam optical clocks. The photon shot-noise of the read-out produces a limiting square root {lambda}-variance of 7x10{sup -14}/{radical}({tau}).
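For context, the plain (non-overlapping) Allan deviation, a close relative of the Λ-variance quoted above, can be computed from fractional-frequency samples as follows (a generic sketch, not the authors' analysis code):

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation at averaging factor m, from a
    series of fractional-frequency samples y: average y in blocks of m,
    then take sqrt(0.5 * mean of squared successive block differences).
    For white frequency noise this falls off as 1/sqrt(tau), the same
    scaling the abstract reports for the shot-noise limit."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)  # block averages
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))
```

With unit-variance white frequency noise, the deviation at averaging factor m is 1/sqrt(m), which is how the 1/sqrt(tau) slope in the abstract would appear in such a plot.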
Detection limits for real-time source water monitoring using indigenous freshwater microalgae
Rodriguez Jr, Miguel; Greenbaum, Elias
2009-01-01
This research identified toxin detection limits using the variable fluorescence of naturally occurring microalgae in source drinking water for five chemical toxins with different molecular structures and modes of toxicity. The five chemicals investigated were atrazine, Diuron, paraquat, methyl parathion, and potassium cyanide. Absolute threshold sensitivities of the algae for detection of the toxins in unmodified source drinking water were measured. Differential kinetics between the rate of action of the toxins and natural changes in algal physiology, such as diurnal photoinhibition, are significant enough that effects of the toxin can be detected and distinguished from the natural variance. This is true even for physiologically impaired algae, where diminished photosynthetic capacity may arise from uncontrollable external factors such as nutrient starvation. Photoinhibition induced by high levels of solar radiation is a predictable and reversible phenomenon that can be dealt with using a period of dark adaptation of 30 minutes or more.
Intrinsic fluctuations of dust grain charge in multi-component plasmas
Shotorban, B.
2014-03-15
A master equation is formulated to model the states of the grain charge in a general multi-component plasma, where there are electrons and various kinds of positive or negative ions that are singly or multiply charged. A Fokker-Planck equation is developed from the master equation through the system-size expansion method. The Fokker-Planck equation has a Gaussian solution with a mean and variance governed by two initial-value differential equations involving the rates of the attachment of ions and electrons to the dust grain. Also, a Langevin equation and a discrete stochastic method are developed to model the time variation of the grain charge. Grain charging in a plasma containing electrons, protons, and alpha particles with Maxwellian distributions is considered as an example problem. The Gaussian solution is in very good agreement with the master equation solution numerically obtained for this problem.
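The mean/variance equations that a system-size expansion yields for a one-step (birth-death) charging process can be integrated directly. The sketch below uses generic stand-in rate functions, not the paper's ion/electron attachment rates:

```python
def integrate_moments(r_plus, r_minus, q0, t_end, dt=1e-3):
    """Euler integration of the moment equations obtained from the
    system-size expansion of a birth-death master equation:
        dmu/dt  = r_plus(mu) - r_minus(mu)
        dvar/dt = r_plus(mu) + r_minus(mu) + 2*var*(r_plus'(mu) - r_minus'(mu))
    Rate derivatives are taken numerically so any smooth rates can be used."""
    eps = 1e-6
    mu, var = float(q0), 0.0
    for _ in range(int(t_end / dt)):
        dplus = (r_plus(mu + eps) - r_plus(mu - eps)) / (2 * eps)
        dminus = (r_minus(mu + eps) - r_minus(mu - eps)) / (2 * eps)
        dmu = r_plus(mu) - r_minus(mu)
        dvar = r_plus(mu) + r_minus(mu) + 2.0 * var * (dplus - dminus)
        mu += dt * dmu
        var += dt * dvar
    return mu, var
```

As a sanity check, constant gain with linear loss (an immigration-death process) relaxes to a Poisson-like steady state with mean equal to variance.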
De Donato, Cinzia; Sanchez, Federico; Santander, Marcos; Natl.Tech.U., San Rafael; Camin, Daniel; Garcia, Beatriz; Grassi, Valerio; /Milan U. /INFN, Milan
2005-05-01
To accurately reconstruct a shower axis from the Fluorescence Detector data it is essential to establish with high precision the absolute pointing of the telescopes. To do that, they calculate the absolute pointing of a telescope using sky background data acquired during regular data-taking periods. The method is based on the knowledge of bright stars' coordinates, which provide a reliable and stable coordinate system. It can be used to check the absolute telescope pointing and its long-term stability during the whole life of the project, estimated at 20 years. They have analyzed background data taken from January to October 2004 to determine the absolute pointing of the 12 telescopes installed at Los Leones and Coihueco. The method is based on the determination of the mean time of the variance signal left by a star traversing a PMT's photocathode, which is compared with the mean time obtained by simulating the track of that star on the same pixel.
Dynamical mass generation in unquenched QED using the Dyson-Schwinger equations
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Kızılersü, Ayse; Sizer, Tom; Pennington, Michael R.; Williams, Anthony G.; Williams, Richard
2015-03-13
We present a comprehensive numerical study of dynamical mass generation for unquenched QED in four dimensions, in the absence of four-fermion interactions, using the Dyson-Schwinger approach. We begin with an overview of previous investigations of criticality in the quenched approximation. To this we add an analysis using a new fermion-antifermion-boson interaction ansatz, the Kizilersu-Pennington (KP) vertex, developed for an unquenched treatment. After surveying criticality in previous unquenched studies, we investigate the performance of the KP vertex in dynamical mass generation using a renormalized fully unquenched system of equations. This we compare with the results for two hybrid vertices incorporating the Curtis-Pennington vertex in the fermion equation. We conclude that the KP vertex is as yet incomplete, and its relative gauge-variance is due to its lack of massive transverse components in its design.
Foltz, Gregory R.; Balaguru, Karthik; Leung, Lai-Yung R.
2015-02-28
The impact of tropical cyclones on surface chlorophyll concentration is assessed in the western subtropical North Atlantic Ocean during 1998-2011. Previous studies in this area focused on individual cyclones and gave mixed results regarding the importance of tropical cyclone-induced mixing for changes in surface chlorophyll. Using a more integrated and comprehensive approach that includes quantification of cyclone-induced changes in mixed layer depth, here it is shown that accumulated cyclone energy explains 22% of the interannual variability in seasonally-averaged (June-November) chlorophyll concentration in the western subtropical North Atlantic, after removing the influence of the North Atlantic Oscillation (NAO). The variance explained by tropical cyclones is thus about 70% of that explained by the NAO, which has well-known impacts in this region. It is therefore likely that tropical cyclones contribute significantly to interannual variations of primary productivity in the western subtropical North Atlantic during the hurricane season.
Cyberspace Security Econometrics System (CSES)
Energy Science and Technology Software Center (OSTI)
2012-07-27
Information security continues to evolve in response to disruptive changes, with a persistent focus on information-centric controls and a healthy debate about balancing endpoint and network protection, with a goal of improved enterprise/business risk management. Economic uncertainty, intensively collaborative styles of work, virtualization, increased outsourcing, and ongoing compliance pressures require careful consideration and adaptation. The CSES provides a measure (i.e., a quantitative indication) of reliability, performance, and/or safety of a system that accounts for the criticality of each requirement as a function of one or more stakeholders' interests in that requirement. For a given stakeholder, CSES accounts for the variance that may exist among the stakes one attaches to meeting each requirement.
Sukhovoj, A. M. Khitrov, V. A.
2013-01-15
A modified model is developed for describing the distribution of random resonance widths for any nucleus. The model assumes the coexistence in a nucleus of one or several partial radiative and neutron amplitudes for the respective resonance widths, these amplitudes differing in their parameters. It is also assumed that each amplitude can be described by a Gaussian curve characterized by a nonzero mean value and a variance not equal to unity, and that their most probable values can be obtained with the highest reliability from approximations of cumulative sums of the respective widths. An analysis of data for 157 sets of neutron widths for 0 {<=} l {<=} 3 and for 56 sets of total radiative widths has been performed to date. The basic result of this analysis is the following: both for neutron and for total radiative widths, the experimental set of resonance widths can be represented with rather high probability as a superposition of k {<=} 4 types differing in their mean amplitude parameters.
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Kreinovich, Vladik; Oberkampf, William Louis; Ginzburg, Lev; Ferson, Scott; Hajagos, Janos
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics, and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics, such as outlier detection and regression. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing, and propagating measurement uncertainties.
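Two of the statistics discussed, the mean of interval data and an upper bound on its variance, are easy to illustrate (a sketch of the general idea, not the report's algorithms):

```python
from itertools import product

def interval_mean(intervals):
    """The set of sample means attainable as each measurement ranges
    over its interval is itself an interval, bounded by the means of
    the lower and upper endpoints."""
    n = len(intervals)
    lows, highs = zip(*intervals)
    return sum(lows) / n, sum(highs) / n

def max_variance(intervals):
    """Variance is convex in the data vector, so its maximum over the
    box of intervals is attained at a vertex; brute force over all 2^n
    endpoint choices (practical only for small n -- efficient algorithms
    for larger data sets are part of what the report surveys)."""
    n = len(intervals)
    best = 0.0
    for point in product(*intervals):
        m = sum(point) / n
        best = max(best, sum((x - m) ** 2 for x in point) / n)
    return best
```

The brute-force bound makes the computability question concrete: the mean of interval data costs the same as for point data, while exact variance bounds blow up combinatorially without smarter algorithms.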
Model for spectral and chromatographic data
Jarman, Kristin [Richland, WA; Willse, Alan [Richland, WA; Wahl, Karen [Richland, WA; Wahl, Jon [Richland, WA
2002-11-26
A method and apparatus using a spectral analysis technique are disclosed. In one form of the invention, probabilities are selected to characterize the presence (and in another form, also a quantification of a characteristic) of peaks in an indexed data set for samples that match a reference species, and other probabilities are selected for samples that do not match the reference species. An indexed data set is acquired for a sample, and a determination is made according to techniques exemplified herein as to whether the sample matches or does not match the reference species. When quantification of peak characteristics is undertaken, the model is appropriately expanded, and the analysis accounts for the characteristic model and data. Further techniques are provided to apply the methods and apparatuses to process control, cluster analysis, hypothesis testing, analysis of variance, and other procedures involving multiple comparisons of indexed data.
Cyberspace Security Econometrics System (CSES) - U.S. Copyright TXu 1-901-039
Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T; Lantz, Margaret W; Hauser, Katie R
2014-01-01
Information security continues to evolve in response to disruptive changes, with a persistent focus on information-centric controls and a healthy debate about balancing endpoint and network protection, with a goal of improved enterprise/business risk management. Economic uncertainty, intensively collaborative styles of work, virtualization, increased outsourcing, and ongoing compliance pressures require careful consideration and adaptation. The Cyberspace Security Econometrics System (CSES) provides a measure (i.e., a quantitative indication) of reliability, performance, and/or safety of a system that accounts for the criticality of each requirement as a function of one or more stakeholders' interests in that requirement. For a given stakeholder, CSES accounts for the variance that may exist among the stakes one attaches to meeting each requirement. The basis, objectives, and capabilities of the CSES, including its inputs/outputs as well as its structural and mathematical underpinnings, are contained in this copyright.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Pandya, Tara M; Johnson, Seth R; Evans, Thomas M; Davidson, Gregory G; Hamilton, Steven P; Godfrey, Andrew T
2016-01-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000{sup ®} problems. These benchmark and scaling studies show promising results.
Application of Entry-Time Processes to Asset Management in Nuclear Power Plants
Nelson, Paul; Wang, Shuwen; Kee, Ernie J.
2006-07-01
The entry-time approach to dynamic reliability is based upon computational solution of the Chapman-Kolmogorov (generalized state-transition) equations underlying a certain class of marked point processes. Previous work has verified a particular finite-difference approach to computational solution of these equations. The objective of this work is to illustrate the potential application of the entry-time approach to risk-informed asset management (RIAM) decisions regarding maintenance or replacement of major systems within a plant. Results are presented in the form of plots, with replacement/maintenance period as a parameter, of expected annual revenue, along with annual variance and annual skewness as indicators of associated risks. Present results are for a hypothetical system, to illustrate the capability of the approach, but some considerations related to potential application of this approach to nuclear power plants are discussed. (authors)
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
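The covariance-propagation step described above reduces, at first order, to the standard "sandwich" rule: the response variance is S^T C S, with S the sensitivity profile and C the cross-section covariance matrix. A minimal sketch of that rule (not SENSIT itself):

```python
import numpy as np

def response_variance(sensitivity, covariance):
    """First-order uncertainty propagation: given the sensitivity
    profile S of an integral response to multigroup cross sections and
    their covariance matrix C, the response variance is S^T C S."""
    s = np.asarray(sensitivity, dtype=float)
    c = np.asarray(covariance, dtype=float)
    return float(s @ c @ s)
```

With relative sensitivities and a relative covariance matrix, the same expression returns the relative variance of the response, whose square root is the quoted standard deviation.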
Method and apparatus for detection of chemical vapors
Mahurin, Shannon Mark; Dai, Sheng; Caja, Josip
2007-05-15
The present invention is a gas detector, and a method of using the gas detector, for detecting and identifying volatile organic and/or volatile inorganic substances present in unknown vapors in an environment. The gas detector comprises a sensing means and a detecting means for detecting the electrical capacitance variance of the sensing means and for further identifying the volatile organic and volatile inorganic substances. The sensing means comprises at least one sensing unit and a sensing material disposed within the sensing unit. The sensing material is an ionic liquid which is exposed to the environment and is capable of dissolving a quantity of said volatile substance upon exposure thereto. The sensing means constitutes an electrochemical capacitor, and the detecting means is in electrical communication with the sensing means.
Deconstructing Solar Photovoltaic Pricing: The Role of Market Structure, Technology and Policy
Broader source: Energy.gov [DOE]
Solar photovoltaic (PV) system prices in the United States are considerably different both across geographic locations and within a given location. Variances in price may arise due to state and federal policies, differences in market structure, and other factors that influence demand and costs. This paper examines the relative importance of such factors on the stability of solar PV system prices in the United States using a detailed dataset of roughly 100,000 recent residential and small commercial installations. The paper finds that PV system prices differ based on characteristics of the systems. More interestingly, evidence suggests that search costs and imperfect competition affect solar PV pricing. Installer density substantially lowers prices, while regions with relatively generous financial incentives for solar PV are associated with higher prices.
System for monitoring non-coincident, nonstationary process signals
Gross, Kenneth C.; Wegerich, Stephan W.
2005-01-04
An improved system for monitoring non-coincident, non-stationary process signals. The mean, variance, and length of a reference signal are defined by an automated system, followed by the identification of the leading and falling edges of a monitored signal and the length of the monitored signal. The monitored signal is compared to the reference signal, and the monitored signal is resampled in accordance with the reference signal. The reference signal is then correlated with the resampled monitored signal such that the reference signal and the resampled monitored signal are coincident in time with each other. The resampled monitored signal is then compared to the reference signal to determine whether the resampled monitored signal is within a set of predesignated operating conditions.
Samedov, V. V.; Tulinov, B. M.
2011-07-01
A superconducting tunnel junction (STJ) detector consists of two layers of superconducting material separated by a thin insulating barrier. An incident particle produces in the superconductor an excess of nonequilibrium quasiparticles. Each quasiparticle in a superconductor should be considered as a quantum superposition of electron-like and hole-like excitations. This dual nature of the quasiparticle leads to the effect of multi-tunneling: a quasiparticle starts to tunnel back and forth through the insulating barrier. After tunneling from the biased electrode, a quasiparticle loses its energy via phonon emission. Eventually, an energy equal to the difference in quasiparticle energy between the two electrodes is deposited in the signal electrode. Because of the process of multi-tunneling, one quasiparticle can deposit energy more than once. In this work, the theory of branching cascade processes was applied to the process of energy deposition caused by quasiparticle multi-tunneling. Formulae for the mean value and variance of the energy transferred by one quasiparticle into heat were derived. (authors)
MEASUREMENT OF THE SHOCK-HEATED MELT CURVE OF LEAD USING PYROMETRY AND REFLECTOMETRY
D. Partouche-Sebban and J. L. Pelissier, Commissariat à l'Énergie Atomique; F. G. Abeyta, Los Alamos National Laboratory; W. W. Anderson, Los Alamos National Laboratory; M. E. Byers, Los Alamos National Laboratory; D. Dennis-Koller, Los Alamos National Laboratory; J. S. Esparza, Los Alamos National Laboratory; S. D. Borror, Bechtel Nevada; C. A. Kruschwitz, Bechtel Nevada
2004-01-01
Data on the high-pressure melting temperatures of metals are of great interest in several fields of physics, including geophysics. Measuring melt curves is difficult but can be done in static experiments (with laser-heated diamond-anvil cells, for instance) or dynamically (i.e., using shock experiments). However, at the present time, experimental and theoretical results for the melt curve of lead are too much at variance with one another to be considered definitive. As a result, we decided to perform a series of shock experiments designed to provide a measurement of the melt curve of lead up to about 50 GPa in pressure. At the same time, we developed and fielded a new reflectivity diagnostic, using it to make measurements on tin. The results show that the melt curve of lead is somewhat higher than the one previously obtained with static compression and heating techniques.
Transit light curves with finite integration time: Fisher information analysis
Price, Ellen M.; Rogers, Leslie A.
2014-10-10
Kepler has revolutionized the study of transiting planets with its unprecedented photometric precision on more than 150,000 target stars. Most of the transiting planet candidates detected by Kepler have been observed as long-cadence targets with 30 minute integration times, and the upcoming Transiting Exoplanet Survey Satellite will record full frame images with a similar integration time. Integrations of 30 minutes affect the transit shape, particularly for small planets and in cases of low signal to noise. Using the Fisher information matrix technique, we derive analytic approximations for the variances and covariances on the transit parameters obtained from fitting light curve photometry collected with a finite integration time. We find that binning the light curve can significantly increase the uncertainties and covariances on the inferred parameters when comparing scenarios with constant total signal to noise (constant total integration time in the absence of read noise). Uncertainties on the transit ingress/egress time increase by a factor of 34 for Earth-size planets and 3.4 for Jupiter-size planets around Sun-like stars for integration times of 30 minutes compared to instantaneously sampled light curves. Similarly, uncertainties on the mid-transit time for Earth and Jupiter-size planets increase by factors of 3.9 and 1.4. Uncertainties on the transit depth are largely unaffected by finite integration times. While correlations among the transit depth, ingress duration, and transit duration all increase in magnitude with longer integration times, the mid-transit time remains uncorrelated with the other parameters. We provide code in Python and Mathematica for predicting the variances and covariances at www.its.caltech.edu/?eprice.
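The Fisher-matrix machinery is straightforward to reproduce numerically for any light-curve model with independent Gaussian noise (a generic sketch; the paper's analytic results are for a time-integrated trapezoidal transit model):

```python
import numpy as np

def fisher_covariance(model, theta, t, sigma, eps=1e-6):
    """Parameter covariance matrix from the Fisher information of a
    model m(theta, t) observed with independent Gaussian noise of
    standard deviation sigma:
        F_ij = sum_k (dm_k/dtheta_i)(dm_k/dtheta_j) / sigma**2
        Cov  = F^{-1}
    Gradients are taken by central finite differences."""
    theta = np.asarray(theta, dtype=float)
    grads = []
    for i in range(len(theta)):
        up, dn = theta.copy(), theta.copy()
        up[i] += eps
        dn[i] -= eps
        grads.append((model(up, t) - model(dn, t)) / (2 * eps))
    g = np.array(grads)
    fisher = g @ g.T / sigma ** 2
    return np.linalg.inv(fisher)
```

For a model that is linear in its parameters, this reproduces the familiar least-squares covariance sigma² (X^T X)⁻¹, a useful check before applying it to a binned transit model.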
Quality by design in the nuclear weapons complex
Ikle, D.N.
1988-04-01
Modern statistical quality control has evolved beyond the point at which control charts and sampling plans are sufficient to maintain a competitive position. The work of Genichi Taguchi in the early 1970s has inspired a renewed interest in the application of statistical methods of experimental design at the beginning of the manufacturing cycle. While there has been considerable debate over the merits of some of Taguchi's statistical methods, there is increasing agreement that his emphasis on cost and variance reduction is sound. The key point is that manufacturing processes can be optimized in development, before they get to production, by identifying a region in the process parameter space in which the variance of the process is minimized. Therefore, for performance characteristics having a convex loss function, total product cost is minimized without substantially increasing the cost of production. Numerous examples of the use of this approach in the United States and elsewhere are available in the literature. At the Rocky Flats Plant, where there are severe constraints on the resources available for development, a systematic development strategy has been devised to make efficient use of those resources to statistically characterize critical production processes before they are introduced into production. This strategy includes the sequential application of fractional factorial and response surface designs to model the features of critical processes as functions of both process parameters and production conditions. It forms the basis for a comprehensive quality improvement program that emphasizes prevention of defects throughout the product cycle. It is currently being implemented on weapons programs in development at Rocky Flats and is in the process of being applied at other production facilities in the DOE weapons complex. 63 refs.
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
Arampatzis, Georgios [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003]; Katsoulakis, Markos A.
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems, in particular lattice kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated (coupled) stochastic process for both the perturbed and unperturbed stochastic processes, defined on a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled continuous-time Markov chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by minimizing the corresponding variance functional. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the common random number approach.
We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
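The variance-reduction principle behind coupled finite-difference sensitivity estimators can be illustrated without any lattice machinery. The sketch below (a toy exponential model sampled by inversion, not the authors' goal-oriented KMC coupling) compares finite-difference derivative estimates built from independent samples against ones driven by common random numbers, the baseline approach the paper improves upon:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, u):
    """Toy stochastic model: X ~ Exponential with mean theta, sampled by
    inversion so that reusing the uniforms `u` couples the two chains."""
    return -theta * np.log(u)

theta, eps, n, reps = 2.0, 0.1, 1_000, 200

def fd_estimate(coupled_flag):
    # Finite-difference estimate of d E[X] / d theta (exact value: 1)
    u1 = 1.0 - rng.random(n)                    # uniforms in (0, 1]
    u2 = u1 if coupled_flag else 1.0 - rng.random(n)
    return (simulate(theta + eps, u1).mean() - simulate(theta, u2).mean()) / eps

independent = np.array([fd_estimate(False) for _ in range(reps)])
coupled = np.array([fd_estimate(True) for _ in range(reps)])
# Both estimators are unbiased, but the coupled one has far lower variance.
```

The paper's contribution is to go one step further: instead of a generic coupling like this, the coupling itself is optimized for the specific observable being differentiated.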
Coordinating Garbage Collection for Arrays of Solid-state Drives
Kim, Youngjae; Lee, Junghee; Oral, H Sarp; Dillow, David A; Wang, Feiyi; Shipman, Galen M
2014-01-01
Although solid-state drives (SSDs) offer significant performance improvements over hard disk drives (HDDs) for a number of workloads, they can exhibit substantial variance in request latency and throughput as a result of garbage collection (GC). When GC conflicts with an I/O stream, the stream can make no forward progress until the GC cycle completes. GC cycles are scheduled by logic internal to the SSD based on several factors such as the pattern, frequency, and volume of write requests. When SSDs are used in a RAID with currently available technology, the lack of coordination of the SSD-local GC cycles amplifies this performance variance. We propose a global garbage collection (GGC) mechanism to improve response times and reduce performance variability for a RAID of SSDs. We include a high-level design of SSD-aware RAID controller and GGC-capable SSD devices and algorithms to coordinate the GGC cycles. We develop reactive and proactive GC coordination algorithms and evaluate their I/O performance and block erase counts for various workloads. Our simulations show that GC coordination by a reactive scheme improves average response time and reduces performance variability for a wide variety of enterprise workloads. For bursty, write-dominated workloads, response time was improved by 69% and performance variability was reduced by 71%. We show that a proactive GC coordination algorithm can further improve the I/O response times by up to 9% and the performance variability by up to 15%. We also observe that it could increase the lifetimes of SSDs with some workloads (e.g. Financial) by reducing the number of block erase counts by up to 79% relative to a reactive algorithm for write-dominant enterprise workloads.
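A rough intuition for why coordination helps: with uncoordinated per-SSD GC, a full-stripe request stalls whenever any one device is collecting, so the stall windows of the devices add up; synchronizing the cycles overlaps them. The following minimal discrete-time sketch (made-up cycle lengths and device count, not the paper's simulator or workloads) quantifies this:

```python
import random

random.seed(1)

N_SSDS, STEPS, GC_EVERY, GC_LEN = 8, 10_000, 500, 20

def blocked_fraction(coordinated):
    """Fraction of time steps in which a full-stripe I/O is stalled because
    at least one SSD in the array is running garbage collection."""
    # Each SSD runs a periodic GC cycle; uncoordinated devices start at
    # random phases, coordinated devices all start together.
    if coordinated:
        phases = [0] * N_SSDS
    else:
        phases = [random.randrange(GC_EVERY) for _ in range(N_SSDS)]
    blocked = 0
    for t in range(STEPS):
        if any((t - p) % GC_EVERY < GC_LEN for p in phases):
            blocked += 1
    return blocked / STEPS

uncoordinated = blocked_fraction(False)
coordinated = blocked_fraction(True)
# Coordinated GC blocks the stripe only GC_LEN / GC_EVERY of the time;
# uncoordinated GC blocks it several times more often.
```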
On the reliability of microvariability tests in quasars
De Diego, José A.
2014-11-01
Microvariations probe the physics and internal structure of quasars. Unpredictability and small flux variations make this phenomenon elusive and difficult to detect. Variance-based probes such as the C and F tests, or a combination of both, are popular methods to compare the light curves of the quasar and a comparison star. Recently, detection claims in some studies have depended on the agreement of the results of the C and F tests, or of two instances of the F-test, for rejecting the non-variation null hypothesis. However, the C-test is a non-reliable statistical procedure, the F-test is not robust, and the combination of tests with concurrent results is anything but a straightforward methodology. A priori power analysis calculations and post hoc analysis of Monte Carlo simulations show excellent agreement for the analysis of variance test to detect microvariations as well as the limitations of the F-test. Additionally, the combined tests yield correlated probabilities that make the assessment of statistical significance unworkable. However, it is possible to include data from several field stars to enhance the power in a single F-test, increasing the reliability of the statistical analysis. This would be the preferred methodology when several comparison stars are available. An example using two stars and the enhanced F-test is presented. These results show the importance of using adequate methodologies and avoiding inappropriate procedures that can jeopardize microvariability detections. Power analysis and Monte Carlo simulations are useful tools for research planning, as they can demonstrate the robustness and reliability of different research approaches.
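As a hedged illustration of the variance-ratio methodology (entirely synthetic light curves with assumed noise levels, and a simplified stand-in for the enhanced F-test in which the differential light curves of two field stars are pooled to add denominator degrees of freedom):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

n = 50  # epochs in each differential light curve (magnitudes)
# Synthetic data: the quasar adds intrinsic microvariations (0.012 mag)
# on top of the same photometric noise (0.010 mag) that the stars show.
quasar = rng.normal(0.0, np.hypot(0.010, 0.012), n)
star1 = rng.normal(0.0, 0.010, n)
star2 = rng.normal(0.0, 0.010, n)

# Pool several field stars into a single noise estimate, so the
# denominator of the F statistic gains degrees of freedom.
pooled = np.concatenate([star1, star2])
F = quasar.var(ddof=1) / pooled.var(ddof=1)
p_value = stats.f.sf(F, n - 1, pooled.size - 2)
```

With more denominator degrees of freedom, the null distribution of F tightens, so a given variance excess becomes easier to detect; this is the power gain the abstract attributes to using several comparison stars in a single test.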
Probabilistic cost estimation methods for treatment of water extracted during CO_{2} storage and EOR
Graham, Enid J. Sullivan; Chu, Shaoping; Pawar, Rajesh J.
2015-08-08
Extraction and treatment of in situ water can minimize risk for large-scale CO_{2} injection in saline aquifers during carbon capture, utilization, and storage (CCUS), and for enhanced oil recovery (EOR). Additionally, treatment and reuse of oil and gas produced waters for hydraulic fracturing will conserve scarce fresh-water resources. Each treatment step, including transportation and waste disposal, generates economic and engineering challenges and risks; these steps should be factored into a comprehensive assessment. We expand the water treatment model (WTM) coupled within the sequestration system model CO_{2}-PENS and use chemistry data from seawater and proposed injection sites in Wyoming, to demonstrate the relative importance of different water types on costs, including little-studied effects of organic pretreatment and transportation. We compare the WTM with an engineering water treatment model, utilizing energy costs and transportation costs. Specific energy costs for treatment of Madison Formation brackish and saline base cases and for seawater compared closely between the two models, with moderate differences for scenarios incorporating energy recovery. Transportation costs corresponded for all but low flow scenarios (<5000 m^{3}/d). Some processes that have high costs (e.g., truck transportation) do not contribute the most variance to overall costs. Other factors, including feed-water temperature and water storage costs, are more significant contributors to variance. These results imply that the WTM can provide good estimates of treatment and related process costs (AACEI equivalent level 5, concept screening, or level 4, study or feasibility), and the complex relationships between processes when extracted waters are evaluated for use during CCUS and EOR site development.
A stochastic extension of the explicit algebraic subgrid-scale models
Rasam, A.; Brethouwer, G.; Johansson, A. V.
2014-05-15
The explicit algebraic subgrid-scale (SGS) stress model (EASM) of Marstorp et al. [Explicit algebraic subgrid stress models with application to rotating channel flow, J. Fluid Mech. 639, 403-432 (2009)] and the explicit algebraic SGS scalar flux model (EASFM) of Rasam et al. [An explicit algebraic model for the subgrid-scale passive scalar flux, J. Fluid Mech. 721, 541-577 (2013)] are extended with stochastic terms based on the Langevin equation formalism for the subgrid scales by Marstorp et al. [A stochastic subgrid model with application to turbulent flow and scalar mixing, Phys. Fluids 19, 035107 (2007)]. The EASM and EASFM are nonlinear mixed and tensor eddy-diffusivity models, which improve large eddy simulation (LES) predictions of the mean flow, Reynolds stresses, and scalar fluxes of wall-bounded flows compared to isotropic eddy-viscosity and eddy-diffusivity SGS models, especially at coarse resolutions. The purpose of the stochastic extension of the explicit algebraic SGS models is to further improve the characteristics of the kinetic energy and scalar variance SGS dissipation, which are key quantities that govern the small-scale mixing and dispersion dynamics. LES of turbulent channel flow with passive scalar transport shows that the stochastic terms improve SGS dissipation statistics such as length scale, variance, and probability density functions, and introduce a significant amount of backscatter of energy from the subgrid to the resolved scales without causing numerical stability problems. The improvements in the SGS dissipation predictions in turn enhance the predicted resolved statistics, such as the mean scalar, scalar fluxes, Reynolds stresses, and correlation lengths. Moreover, the nonalignment between the SGS stress and resolved strain-rate tensors predicted by the EASM with the stochastic extension is in much closer agreement with direct numerical simulation data.
Energy Science and Technology Software Center (OSTI)
2011-01-03
Bulk Data Mover (BDM) is a high-level data transfer management tool. BDM handles the issues of large variance in file sizes and a large proportion of small files by managing file transfers with optimized transfer-queue and concurrency management algorithms. For example, climate simulation data sets are characterized by large volumes of files with extreme variance in file sizes. The BDM achieves high performance using a variety of techniques, including multi-threaded concurrent transfer connections, data channel caching, load balancing over multiple transfer servers, and storage I/O pre-fetching. Logging information from the BDM is collected and analyzed to study the effectiveness of the transfer management algorithms. The BDM can accept a request composed of multiple files or an entire directory. The request also contains the target site and directory where the replicated files will reside. If a directory is provided at the source, then the BDM will replicate the structure of the source directory at the target site. The BDM is capable of transferring multiple files concurrently as well as using parallel TCP streams. The optimal level of concurrency or parallel streams depends on the bandwidth capacity of the storage systems at both ends of the transfer as well as the achievable bandwidth of the wide-area network. Hardware req.: PC, MAC, Multi-platform & Workstation; Software req.: Compile/version-Java 1.5.0_x or above; Type of files: source code, executable modules, installation instructions, user guide; URL: http://sdm.lbl.gov/bdm/
Hamano, Satoshi; Kobayashi, Naoto [Institute of Astronomy, University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan); Kondo, Sohei [Koyama Astronomical Observatory, Kyoto-Sangyo University, Motoyama, Kamigamo, Kita-Ku, Kyoto 603-8555 (Japan); Tsujimoto, Takuji [National Astronomical Observatory of Japan and Department of Astronomical Science, Graduate University for Advanced Studies, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan); Okoshi, Katsuya [Faculty of Industrial Science and Technology, Tokyo University of Science, 102-1 Tomino, Oshamanbe, Hokkaido 049-3514 (Japan); Shigeyama, Toshikazu, E-mail: hamano@ioa.s.u-tokyo.ac.jp [Research Center for the Early Universe, University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan)
2012-08-01
Using the Subaru 8.2 m Telescope with the IRCS Echelle spectrograph, we obtained high-resolution (R = 10,000) near-infrared (1.01-1.38 {mu}m) spectra of images A and B of the gravitationally lensed QSO B1422+231 (z = 3.628) consisting of four known lensed images. We detected Mg II absorption lines at z = 3.54, which show a large variance of column densities ({approx}0.3 dex) and velocities ({approx}10 km s{sup -1}) between sightlines A and B with a projected separation of only 8.4h{sup -1}{sub 70} pc at that redshift. This is the smallest spatial structure of the high-z gas clouds ever detected after Rauch et al. found a 20 pc scale structure for the same z = 3.54 absorption system using optical spectra of images A and C. The observed systematic variances imply that the system is an expanding shell as originally suggested by Rauch et al. By combining the data for three sightlines, we managed to constrain the radius and expansion velocity of the shell ({approx}50-100 pc, 130 km s{sup -1}), concluding that the shell is truly a supernova remnant (SNR) rather than other types of shell objects, such as a giant H II region. We also detected strong Fe II absorption lines for this system, but with much broader Doppler width than that of {alpha}-element lines. We suggest that this Fe II absorption line originates in a localized Fe II-rich gas cloud that is not completely mixed with plowed ambient interstellar gas clouds showing other {alpha}-element low-ion absorption lines. Along with the Fe richness, we conclude that the SNR is produced by an SN Ia explosion.
SU-F-18C-15: Model-Based Multiscale Noise Reduction On Low Dose Cone Beam Projection
Yao, W; Farr, J
2014-06-15
Purpose: To improve the image quality of low dose cone beam CT for patient positioning in radiation therapy. Methods: In low dose cone beam CT (CBCT) imaging systems, a Poisson process governs the randomness of photon fluence at the x-ray source and at the detector, because of the independent binomial process of photon absorption in the medium. On a CBCT projection, the variance of the fluence consists of the variance of the noiseless imaged structure and that of the Poisson noise, which is proportional to the mean (noiseless) fluence at the detector. This calls for multiscale filters that smooth the noise while keeping the structure information of the imaged object. We used a mathematical model of the Poisson process to design multiscale filters and to establish the balance between noise correction and structure blurring. The algorithm was checked with low dose kilovoltage CBCT projections acquired from a Varian OBI system. Results: Investigation of low dose CBCT of a Catphan phantom and patients showed that our model-based multiscale technique could efficiently reduce noise while keeping the fine structure of the imaged object. After the image processing, the number of visible line pairs in the Catphan phantom scanned with a 4 ms pulse time was similar to that scanned with 32 ms, and soft tissue structure from simulated 4 ms patient head-and-neck images was also comparable with that of 20 ms scans. Compared with a fixed-scale technique, the image quality from the multiscale one was improved. Conclusion: Use of projection-specific multiscale filters can reach a better balance between noise reduction and structure information loss. The image quality of low dose CBCT can be improved by using multiscale filters.
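The statistical premise, that Poisson noise variance equals the mean fluence, is easy to verify numerically. The sketch below (illustrative count levels, not the authors' filter design) also shows the standard Anscombe variance-stabilizing transform for contrast, since the mean-dependence of the variance is exactly what forces a filter to be projection- and scale-specific:

```python
import numpy as np

rng = np.random.default_rng(3)

ratios, stabilized_vars = [], []
for mean_counts in (50.0, 500.0, 5000.0):   # illustrative fluence levels
    counts = rng.poisson(mean_counts, size=100_000)
    # Poisson statistics: the noise variance in a flat region equals the
    # mean, so low-dose (short pulse time) projections are relatively noisier.
    ratios.append(counts.var() / counts.mean())
    # For contrast, the standard variance-stabilizing Anscombe transform
    # makes the noise variance approximately 1 at every dose level.
    stabilized_vars.append((2.0 * np.sqrt(counts + 0.375)).var())
```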
Investigation of advanced UQ for CRUD prediction with VIPRE.
Eldred, Michael Scott
2011-09-01
This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L{sup 2} (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10{sup 0})-O(10{sup 1}) random variables to O(10{sup 2}) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)).
Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
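The mechanics of nonintrusive PCE can be shown in one random dimension. The sketch below uses plain numpy, not DAKOTA; `f = np.exp` is chosen because the exact moments of exp(X) for X ~ N(0,1) are known (mean exp(1/2), variance exp(2) - exp(1)). It projects the response onto probabilists' Hermite polynomials via Gauss-Hermite quadrature and reads the mean and variance directly off the coefficients:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, exp

f = np.exp     # model response of a single standard normal input X
order = 8      # truncation order of the PCE

x, w = He.hermegauss(30)          # probabilists' Gauss-Hermite rule
w = w / np.sqrt(2.0 * np.pi)      # normalize against the N(0,1) density

# Nonintrusive spectral projection: c_k = E[f(X) He_k(X)] / k!
c = [np.sum(w * f(x) * He.hermeval(x, [0.0] * k + [1.0])) / factorial(k)
     for k in range(order + 1)]

pce_mean = c[0]                                            # exact: exp(1/2)
pce_var = sum(c[k] ** 2 * factorial(k) for k in range(1, order + 1))
```

Adaptive p-refinement would raise `order` (per dimension, in the multivariate case) until integrated statistics such as `pce_var` stop changing, refining preferentially in the most influential dimensions.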
Neill, P. H.; Given, P. H.
1984-09-01
The initial aim of this research was to use empirical mathematical relationships to formulate a better understanding of the processes involved in the liquefaction of a set of medium rank high sulfur coals. In all, just over 50 structural parameters and yields of product classes were determined. In order to gain a more complete understanding of the empirical relationships between the various properties, a number of relatively complex statistical procedures and tests were applied to the data, mostly selected from the field of multivariate analysis. These can be broken down into two groups. The first group included grouping techniques such as non-linear mapping, hierarchical and tree clustering, and linear discriminant analyses. These techniques were utilized in determining if more than one statistical population was present in the data set; it was concluded that there was not. The second group of techniques included factor analysis and stepwise multivariate linear regressions. Linear discriminant analyses were able to show that five distinct groups of coals were represented in the data set. However only seven of the properties seemed to follow this trend. The chemical property that appeared to follow the trend most closely was the aromaticity, where a series of five parallel straight lines was observed for a plot of f/sub a/ versus carbon content. The factor patterns for each of the product classes indicated that although each of the individual product classes tended to load on factors defined by specific chemical properties, the yields of the broader product classes, such as total conversion to liquids + gases and conversion to asphaltenes, tended to load largely on factors defined by rank. The variance explained and the communalities tended to be relatively low. Evidently important sources of variance have still to be found.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Vickers, D.; Thomas, C.
2014-05-13
Observations of the scale-dependent turbulent fluxes and variances above, within and beneath a tall closed Douglas-Fir canopy in very weak winds are examined. The daytime subcanopy vertical velocity spectra exhibit a double-peak structure with peaks at time scales of 0.8 s and 51.2 s. A double-peak structure is also observed in the daytime subcanopy heat flux cospectra. The daytime momentum flux cospectra inside the canopy and in the subcanopy are characterized by a relatively large cross-wind component, likely due to the extremely light and variable winds, such that the definition of a mean wind direction, and subsequent partitioning of the momentum flux into along- and cross-wind components, has little physical meaning. Positive values of both momentum flux components in the subcanopy contribute to upward transfer of momentum, consistent with the observed mean wind speed profile. In the canopy at night at the smallest resolved scales, we find relatively large momentum fluxes (compared to at larger scales), and increasing vertical velocity variance with decreasing time scale, consistent with very small eddies likely generated by wake shedding from the canopy elements that transport momentum but not heat. We find unusually large values of the velocity aspect ratio within the canopy, consistent with enhanced suppression of the horizontal wind components compared to the vertical by the canopy. The flux-gradient approach for sensible heat flux is found to be valid for the subcanopy and above-canopy layers when considered separately; however, single source approaches that ignore the canopy fail because they make the heat flux appear to be counter-gradient when in fact it is aligned with the local temperature gradient in both the subcanopy and above-canopy layers. Modeled sensible heat fluxes above dark warm closed canopies are likely underestimated using typical values of the Stanton number.
Ensslin, Torsten A.; Frommert, Mona [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)
2011-05-15
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loeve and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
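The basic Wiener-filter operation that all five approaches reduce to can be sketched in 1D Fourier space. In this toy version the signal spectrum is assumed known (sidestepping the abstract's actual problem of inferring it from the same data); each mode is simply weighted by S/(S+N):

```python
import numpy as np

rng = np.random.default_rng(5)

k = np.fft.rfftfreq(1024, d=1.0)
k[0] = k[1]                          # avoid a zero at the DC mode

# Assumed (known) spectra: a power-law signal plus white noise.
S = 1.0 / (1.0 + (k / 0.02) ** 2)    # signal power spectrum
N = 0.05                             # white-noise power level

signal = np.sqrt(S / 2) * (rng.normal(size=k.size) + 1j * rng.normal(size=k.size))
noise = np.sqrt(N / 2) * (rng.normal(size=k.size) + 1j * rng.normal(size=k.size))
data = signal + noise

# Wiener filter: weight each Fourier mode by S / (S + N)
recon = data * S / (S + N)

err_raw = np.mean(np.abs(data - signal) ** 2)   # close to the noise power N
err_wf = np.mean(np.abs(recon - signal) ** 2)   # smaller: modes with S << N
                                                # are suppressed toward zero
```

The five approaches in the abstract differ precisely in how the assumed `S` is derived from the data; modes whose inferred `S` falls below the perception threshold get weight zero and drop out of the reconstruction.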
TU-F-18A-02: Iterative Image-Domain Decomposition for Dual-Energy CT
Niu, T; Dong, X; Petrongolo, M; Zhu, L
2014-06-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve poorer low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT.
The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
Iterative image-domain decomposition for dual-energy CT
Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan600) and an anthropomorphic head phantom. 
The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
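The noise statistics that motivate the variance-covariance penalty weight can be reproduced at a single pixel. This sketch uses an illustrative 2x2 decomposition matrix, not calibrated scanner values: direct inversion amplifies the noise and strongly anticorrelates the two material images, and the inverse of the resulting covariance is the natural weight for a best-linear-unbiased least-squares term:

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative 2x2 basis-material decomposition matrix (attenuation of the
# high/low-energy channels per unit of materials 1 and 2; made-up numbers).
A = np.array([[0.40, 0.20],
              [0.30, 0.35]])
true_x = np.array([1.0, 2.0])   # basis-material line integrals at one pixel
sigma = 0.01                    # equal, uncorrelated per-energy CT noise

# Many noisy realizations of the two-energy measurement at this pixel
y = A @ true_x + rng.normal(0.0, sigma, size=(100_000, 2))
x_hat = y @ np.linalg.inv(A).T  # direct matrix-inversion decomposition

cov = np.cov(x_hat, rowvar=False)
# Direct inversion amplifies the noise (std well above sigma) and makes the
# two material images strongly anticorrelated; inv(cov) is the natural
# penalty weight in the least-square term.
weight = np.linalg.inv(cov)
```

The anticorrelation is the key structure a per-image de-noiser misses: noise pushed up in one material image is pushed down in the other, which is exactly what the full variance-covariance weighting exploits.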
Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation
Hines, J. Wesley; Upadhyaya, Belle; Sharp, Michael; Ramuhalli, Pradeep; Jeffries, Brien; Nam, Alan; Strong, Eric; Tong, Matthew; Welz, Zachary; Barbieri, Federico; Langford, Seth; Meinweiser, Gregory; Weeks, Matthew
2014-11-06
On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning of component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds. 
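Type I prognostics as described above can be sketched with a Weibull fit to historical failure times. The failure times and the simple median-rank regression below are illustrative, not the project's data or its specific fitting procedure.

```python
import math
import numpy as np

# Hypothetical failure times (hours) of similar components.
failures = np.sort(np.array([820.0, 910.0, 1050.0, 1120.0, 1300.0]))
n = len(failures)

# Median-rank plotting positions, then linearize the Weibull CDF:
# ln(-ln(1-F)) = beta * (ln t - ln eta).
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
x = np.log(failures)
y = np.log(-np.log(1.0 - F))
beta, intercept = np.polyfit(x, y, 1)           # beta = shape parameter
eta = np.exp(-intercept / beta)                 # eta = scale parameter

# Expected life under the fitted distribution (hours).
mean_life = eta * math.gamma(1.0 + 1.0 / beta)
```

A shape parameter greater than 1 indicates wear-out behavior; as condition data arrive, this prior distribution would be updated as described in the paragraph above.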
The ultimate goal of prognostics is to provide an accurate assessment for RUL predictions, with as little uncertainty as possible. From a reliability and maintenance standpoint, safety would improve by avoiding failures. Calculated risk would decrease, saving money by avoiding unnecessary maintenance. One major bottleneck for data-driven prognostics is the availability of run-to-failure degradation data. Without enough degradation data leading to failure, prognostic models can yield RUL distributions with large uncertainty or mathematically unsound predictions. To address these issues, a "Lifecycle Prognostics" method was developed to create RUL distributions from Beginning of Life (BOL) to End of Life (EOL). This employs established Type I, II, and III prognostic methods, and Bayesian transitioning between each Type. Bayesian methods, as opposed to classical frequentist statistics, show how an expected value, a priori, changes with new data to form a posterior distribution. For example, when you purchase a component you have a prior belief, or estimation, of how long it will operate before failing. As you operate it, you may collect information related to its condition that will allow you to update your estimated failure time. Bayesian methods are best used when limited data are available. The use of a prior also means that information is conserved when new data are available. The weightings of the prior belief and information contained in the sampled data are dependent on the variance (uncertainty) of the prior, the variance (uncertainty) of the data, and the amount of measured data (number of samples). If the variance of the prior is small compared to the uncertainty of the data, the prior will be weighted more heavily. However, as more data are collected, the data will be weighted more heavily and will eventually swamp out the prior in calculating the posterior distribution of model parameters. 
Fundamentally, Bayesian analysis updates a prior belief with new data to get a posterior belief. The general approach to applying the Bayesian method to lifecycle prognostics consisted of identifying the prior, which is the RUL estimate.
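The precision-weighted updating described above has a closed form in the conjugate normal case: the posterior mean is a weighted average of the prior mean and the sample mean, with weights set by the prior variance, the data variance, and the sample size. The numbers below are hypothetical failure-time estimates, not project data.

```python
import numpy as np

def update_normal(prior_mean, prior_var, data, data_var):
    # Conjugate normal update with known data variance: precisions add,
    # and the posterior mean is the precision-weighted average.
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / data_var)
    return post_mean, post_var

prior_mean, prior_var = 1000.0, 50.0 ** 2     # prior belief about life (h)
data = np.array([880.0, 910.0, 905.0])        # observed failure times (h)
post_mean, post_var = update_normal(prior_mean, prior_var, data, 60.0 ** 2)
```

With more samples, the data term dominates and the posterior mean approaches the sample mean, which is exactly the "swamping out" of the prior described above.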
Amidan, Brett G.; Pulsipher, Brent A.; Matzke, Brett D.
2009-12-17
In September 2008 a large-scale testing operation (referred to as the INL-2 test) was performed within a two-story building (PBF-632) at the Idaho National Laboratory (INL). The report “Operational Observations on the INL-2 Experiment” defines the seven objectives for this test and discusses the results and conclusions; this is further discussed in the introduction of this report. The INL-2 test consisted of five tests (events) in which a floor (level) of the building was contaminated with the harmless biological warfare agent simulant Bg and samples were taken in most, if not all, of the rooms on the contaminated floor. After the sampling, the building was decontaminated, and the next test was performed. Judgmental samples and probabilistic samples were determined and taken during each test. Vacuum, wipe, and swab samples were taken within each room. The purpose of this report is to study an additional four topics that were not within the scope of the original report. These topics are: 1) assess the quantitative assumptions about the data being normally or log-normally distributed; 2) evaluate differences and quantify the sample-to-sample variability within a room and across the rooms; 3) perform geostatistical types of analyses to study spatial correlations; and 4) quantify the differences observed between surface types and sampling methods for each scenario and study the consistency across the scenarios. The following four paragraphs summarize the results of each of the four additional analyses. All samples after decontamination came back negative. Because of this, it was not appropriate to determine if these clearance samples were normally distributed. As Table 1 shows, the characterization data consist of values between 0 and 100 CFU/cm2, inclusive (100 was the value assigned when colonies were too numerous to count). The 100 values are generally much larger than the rest of the data, causing the data to be right skewed. 
There are also a significant number of zeros. QQ plots of the post-contamination data confirm these departures from normality. Normality is improved when looking at log(CFU/cm2). Variance component analysis (VCA) and analysis of variance (ANOVA) were used to estimate the amount of variance due to each source and to determine which sources of variability were statistically significant. In general, the sampling methods interacted with the across-event variability and with the across-room variability. For this reason, it was decided to do analyses for each sampling method individually. The between-event variability and between-room variability were significant for each method, except for the between-event variability for the swabs. For both the wipes and vacuums, the within-room standard deviation was much larger (26.9 for wipes and 7.086 for vacuums) than the between-event standard deviation (6.552 for wipes and 1.348 for vacuums) and the between-room standard deviation (6.783 for wipes and 1.040 for vacuums). For swabs, the between-room standard deviation was 0.151, while both the within-room and between-event standard deviations were less than 0.10 (all measurements in CFU/cm2).
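The within-room versus between-room split above comes from a standard variance components calculation; for a balanced one-way layout the ANOVA method of moments is a few lines. The data below are synthetic stand-ins, not INL-2 measurements.

```python
import numpy as np

# Synthetic balanced design: rooms as a random effect.
rng = np.random.default_rng(1)
n_rooms, n_per_room = 8, 6
room_effect = rng.normal(0.0, 2.0, n_rooms)            # between-room sd = 2
y = room_effect[:, None] + rng.normal(0.0, 5.0, (n_rooms, n_per_room))

# Method-of-moments estimates: the between-room mean square mixes both
# components, so the within-room mean square is subtracted off.
ms_within = np.mean(np.var(y, axis=1, ddof=1))         # within-room MS
ms_between = n_per_room * np.var(y.mean(axis=1), ddof=1)
var_within = ms_within
var_between = max((ms_between - ms_within) / n_per_room, 0.0)
sd_within, sd_between = np.sqrt(var_within), np.sqrt(var_between)
```

The `max(..., 0.0)` guards against the negative estimates that method-of-moments VCA can produce when the between-group component is small, as happened for the swab between-event term above.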
Lu, Guoping; Zheng, Chunmiao; Wolfsberg, Andrew
2002-01-05
A Monte Carlo analysis was conducted to investigate the effect of uncertain hydraulic conductivity on the fate and transport of BTEX compounds (benzene, toluene, ethylbenzene, and xylene) at a field site on Hill Air Force Base, Utah. Microbially mediated BTEX degradation has occurred at the site through multiple terminal electron-accepting processes, including aerobic respiration, denitrification, Fe(III) reduction, sulfate reduction, and methanogenesis. Multiple realizations of the hydraulic conductivity field were generated and substituted into a multispecies reactive transport model developed and calibrated for the Hill AFB site in a previous study. Simulation results show that the calculated total BTEX masses (released from a constant-concentration source) that remain in the aquifer at the end of the simulation period statistically follow a lognormal distribution. In the first analysis (base case), the calculated total BTEX mass varies from a minimum of 12% less to a maximum of 60% more than that of the previously calibrated model. This suggests that the uncertainty in hydraulic conductivity can lead to significant uncertainties in modeling the fate and transport of BTEX. Geometric analyses of calculated plume configurations show that a higher BTEX mass is associated with wider lateral spreading, while a lower mass is associated with longer longitudinal extension. More BTEX mass in the aquifer causes either a large depletion of dissolved oxygen (DO) and NO{sub 3}{sup -}, or a large depletion of DO and a large production of Fe{sup 2+}, with moderately depleted NO{sub 3}{sup -}. In an additional analysis, the effect of varying degrees of aquifer heterogeneity and associated uncertainty is examined by considering hydraulic conductivity with different variances and correlation lengths. An increase in variance leads to a higher average BTEX mass in the aquifer, while an increase in correlation length results in a lower average. 
This observation is explained by relevant partitioning of BTEX into the aquifer from the LNAPL source. Although these findings may only be applicable to the field conditions considered in this study, the methodology used and insights gained are of general interest and relevance to other fuel-hydrocarbon natural-attenuation sites.
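The sensitivity analysis above varies the variance and correlation length of the conductivity field; generating one such lognormal realization with an exponential covariance is a common geostatistical building block. The grid, mean ln K, variance, and correlation length below are illustrative, not the Hill AFB values.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dx = 200, 1.0                   # grid cells and spacing (m)
var_lnK, corr_len = 1.5, 10.0      # variance of ln K, correlation length (m)

# Exponential covariance matrix, then a correlated Gaussian field via
# Cholesky factorization (small jitter for numerical stability).
x = np.arange(n) * dx
C = var_lnK * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
ln_K = -9.0 + L @ rng.standard_normal(n)   # illustrative mean ln K
K = np.exp(ln_K)                           # lognormal conductivity field
```

Repeating this draw many times and running each realization through the calibrated transport model is the Monte Carlo loop the study describes; the dense Cholesky approach shown here is only practical for small grids.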
URBAN WOOD/COAL CO-FIRING IN THE BELLEFIELD BOILERPLANT
James T. Cobb Jr.; Gene E. Geiger; William W. Elder III; William P. Barry; Jun Wang; Hongming Li
2004-04-08
An Environmental Questionnaire for the demonstration at the Bellefield Boiler Plant (BBP) was submitted to the National Energy Technology Laboratory. An R&D variance for the air permit at the BBP was sought from the Allegheny County Health Department (ACHD). R&D variances for the solid waste permits at the J. A. Rutter Company (JARC) and Emery Tree Service (ETS) were sought from the Pennsylvania Department of Environmental Protection (PADEP). Construction wood was acquired from Thompson Properties and Seven D Corporation. Verbal authorizations were received in all cases. Memoranda of understanding were executed by the University of Pittsburgh with BBP, JARC and ETS. Construction wood was collected from Thompson Properties and from Seven D Corporation. Forty tons of pallet and construction wood were ground to produce BioGrind Wood Chips at JARC and delivered to Mon Valley Transportation Company (MVTC). Five tons of construction wood were hammer milled at ETS and half of the product delivered to MVTC. Blends of wood and coal, produced at MVTC by staff of JARC and MVTC, were shipped by rail to BBP. The experimental portion of the project was carried out at BBP in late March and early April 2001. Several preliminary tests were successfully conducted using blends of 20% and 33% wood by volume. Four one-day tests using a blend of 40% wood by volume were then carried out. Problems of feeding and slagging were experienced with the 40% blend. Light-colored fly ash was observed coming from the stack during all four tests. Emissions of SO{sub 2}, NOx and total particulates, measured by Energy Systems Associates, decreased when compared with combusting coal alone. A procedure for calculating material and energy balances on BBP's Boiler No.1 was developed, using the results of an earlier compliance test at the plant. Material and energy balances were then calculated for the four test periods. 
Boiler efficiency was found to decrease slightly when the fuel was shifted from coal to the 40% blend. Neither commercial production of sized urban waste wood for the energy market in Pittsburgh nor commercial cofiring of wood/coal blends at BBP are anticipated in the near future.
Kleinman, L.; Kuang, C.; Sedlacek, A.; Senum, G.; Springston, S.; Wang, J.; Zhang, Q.; Jayne, J.; Fast, J.; Hubbe, J.; et al
2015-09-17
During the Carbonaceous Aerosols and Radiative Effects Study (CARES) the DOE G-1 aircraft was used to sample aerosol and gas phase compounds in the Sacramento, CA plume and surrounding region. We present data from 66 plume transects obtained during 13 flights in which southwesterly winds transported the plume towards the foothills of the Sierra Nevada Mountains. Plume transport occurred partly over land with high isoprene emission rates. Our objective is to empirically determine whether organic aerosol (OA) can be attributed to anthropogenic or biogenic sources, and to determine whether there is a synergistic effect whereby OA concentrations are enhanced by the simultaneous presence of high concentrations of CO and either isoprene, MVK+MACR (sum of methyl vinyl ketone and methacrolein) or methanol, which are taken as tracers of anthropogenic and biogenic emissions, respectively. Linear and bilinear correlations between OA, CO, and each of three biogenic tracers, "Bio", for individual plume transects indicate that most of the variance in OA over short time and distance scales can be explained by CO. For each transect and species a plume perturbation (e.g., ΔOA, defined as the difference between the 90th and 10th percentiles) was defined, and regressions were done amongst Δ values in order to probe day-to-day and location-dependent variability. Species that predicted the largest fraction of the variance in ΔOA were ΔO3 and ΔCO. Background OA was highly correlated with background methanol and poorly correlated with other tracers. Because background OA was ~ 60 % of peak OA in the urban plume, peak OA should be primarily biogenic and therefore non-fossil. Transects were split into subsets according to the percentile rankings of ΔCO and ΔBio, similar to an approach used by Setyan et al. (2012) and Shilling et al. (2013) to determine if anthropogenic-biogenic interactions enhance OA production. 
As found earlier, ΔOA in the data subset having high ΔCO and high ΔBio was several-fold greater than in the other subsets. Part of this difference is consistent with a synergistic interaction between anthropogenic and biogenic precursors, and part with an independent linear dependence of ΔOA on precursors. The highest values of ΔO3 also occur in the high-ΔCO, high-ΔBio data set, raising the possibility that the coincidence of high concentrations of anthropogenic and biogenic tracers as well as OA and O3 may be associated with high temperatures, clear skies, and poor ventilation in addition to a specific interaction between anthropogenic and biogenic compounds.
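The plume perturbation defined above (e.g., ΔOA) is simply the difference between the 90th and 10th percentiles across a transect. The OA values below are synthetic stand-ins for one transect's time series, not CARES data.

```python
import numpy as np

# Hypothetical OA concentrations along one plume transect (ug m^-3):
# background around 2, a plume peak around 7.
oa = np.array([2.1, 2.3, 2.2, 4.8, 6.5, 7.0, 5.2, 3.0, 2.4, 2.2])

# Plume perturbation: 90th minus 10th percentile, robust to outliers
# compared with a simple max-minus-min definition.
delta_oa = np.percentile(oa, 90) - np.percentile(oa, 10)
print(round(delta_oa, 2))  # → 4.36
```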
Strain-dependent Damage in Mouse Lung After Carbon Ion Irradiation
Moritake, Takashi; Proton Medical Research Center, University of Tsukuba, Tsukuba ; Fujita, Hidetoshi; Yanagisawa, Mitsuru; Nakawatari, Miyako; Imadome, Kaori; Nakamura, Etsuko; Iwakawa, Mayumi; Imai, Takashi
2012-09-01
Purpose: To examine whether inherent factors produce differences in lung morbidity in response to carbon ion (C-ion) irradiation, and to identify the molecules that have a key role in strain-dependent adverse effects in the lung. Methods and Materials: Three strains of female mice (C3H/He Slc, C57BL/6J Jms Slc, and A/J Jms Slc) were locally irradiated in the thorax with either C-ion beams (290 MeV/n, in 6 cm spread-out Bragg peak) or with {sup 137}Cs {gamma}-rays as a reference beam. We performed survival assays and histologic examination of the lung with hematoxylin-eosin and Masson's trichrome staining. In addition, we performed immunohistochemical staining for hyaluronic acid (HA), CD44, and Mac3 and assayed for gene expression. Results: The survival data in mice showed a between-strain variance after C-ion irradiation with 10 Gy. The median survival time of C3H/He was significantly shortened after C-ion irradiation at the higher dose of 12.5 Gy. Histologic examination revealed early-phase hemorrhagic pneumonitis in C3H/He and late-phase focal fibrotic lesions in C57BL/6J after C-ion irradiation with 10 Gy. Pleural effusion was apparent in C57BL/6J and A/J mice, 168 days after C-ion irradiation with 10 Gy. Microarray analysis of irradiated lung tissue in the three mouse strains identified differential expression changes in growth differentiation factor 15 (Gdf15), which regulates macrophage function, and hyaluronan synthase 1 (Has1), which plays a role in HA metabolism. Immunohistochemistry showed that the number of CD44-positive cells, a surrogate marker for HA accumulation, and Mac3-positive cells, a marker for macrophage infiltration in irradiated lung, varied significantly among the three mouse strains during the early phase. Conclusions: This study demonstrated a strain-dependent differential response in mice to C-ion thoracic irradiation. 
Our findings identified candidate molecules that could be implicated in the between-strain variance to early hemorrhagic pneumonitis after C-ion irradiation.
Sisterson, D. L.
2009-01-15
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, the ratio of the actual number of data records received daily at the Archive to the expected number of data records is calculated. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The US Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1-(ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the first quarter of FY 2009 for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 x 2,208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 x 2,208), and for the Tropical Western Pacific (TWP) locale is 1,876.80 hours (0.85 x 2,208). The OPSMAX time for the ARM Mobile Facility (AMF) is not reported this quarter because the data have not yet been released from China to the DMF for processing. The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. 
Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period October 1-December 31, 2008, for the fixed sites. The AMF has been deployed to China, but the data have not yet been released. The first quarter comprises a total of 2,208 hours. On average, data availability exceeded the goal this quarter.
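The reporting quantities defined above reduce to simple arithmetic, shown here for the SGP example; the ACTUAL hours used below are hypothetical, while the OPSMAX factors are the ones quoted in the text.

```python
# A quarter of 92 days at 24 hours per day.
hours_in_quarter = 24 * 92            # 2,208 hours

# OPSMAX: uptime goal after planned downtime (95% for SGP, per the text).
opsmax_sgp = 0.95 * hours_in_quarter  # 2,097.60 hours, as quoted above

# VARIANCE = 1 - (ACTUAL/OPSMAX): the unplanned-downtime fraction.
actual = 2000.0                       # hypothetical hours of operation
variance = 1.0 - actual / opsmax_sgp
```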
Groundwater Monitoring Plan for the Hanford Site 216-B-3 Pond RCRA Facility
Barnett, D BRENT.; Smith, Ronald M.; Chou, Charissa J.; McDonald, John P.
2005-11-01
The 216-B-3 Pond system was a series of ponds used for disposal of liquid effluent from past Hanford production facilities. In operation from 1945 to 1997, the B Pond System has been a Resource Conservation and Recovery Act (RCRA) facility since 1986, with RCRA interim-status groundwater monitoring in place since 1988. In 1994 the expansion ponds of the facility were clean closed, leaving only the main pond and a portion of the 216-B-3-3 ditch as the currently regulated facility. In 2001, the Washington State Department of Ecology (Ecology) issued a letter providing guidance for a two-year, trial evaluation of an alternate, intrawell statistical approach to contaminant detection monitoring at the B Pond system. This temporary variance was allowed because the standard indicator-parameters evaluation (pH, specific conductance, total organic carbon, and total organic halides) and accompanying interim status statistical approach is ineffective for detecting potential B-Pond-derived contaminants in groundwater, primarily because this method fails to account for variability in the background data and because B Pond leachate is not expected to affect the indicator parameters. In July 2003, the final samples were collected for the two-year variance period. An evaluation of the results of the alternate statistical approach is currently in progress. While Ecology evaluates the efficacy of the alternate approach (and/or until B Pond is incorporated into the Hanford Facility RCRA Permit), the B Pond system will return to contamination-indicator detection monitoring. Total organic carbon and total organic halides were added to the constituent list beginning with the January 2004 samples. Under this plan, the following wells will be monitored for B Pond: 699-42-42B, 699-43-44, 699-43-45, and 699-44-39B. 
The wells will be sampled semi-annually for the contamination indicator parameters (pH, specific conductance, total organic carbon, and total organic halides) and annually for water quality parameters (chloride, iron, manganese, phenols, sodium, and sulfate). This plan will remain in effect until superseded by another plan or until B Pond is incorporated into the Hanford Facility RCRA Permit.
What Is the Largest Einstein Radius in the Universe?
Oguri, Masamune; Blandford, Roger D.
2008-08-05
The Einstein radius plays a central role in lens studies as it characterizes the strength of gravitational lensing. In particular, the distribution of Einstein radii near the upper cutoff should probe the probability distribution of the largest mass concentrations in the universe. Adopting a triaxial halo model, we compute expected distributions of large Einstein radii. To assess the cosmic variance, we generate a number of Monte-Carlo realizations of all-sky catalogues of massive clusters. We find that the expected largest Einstein radius in the universe is sensitive to parameters characterizing the cosmological model, especially {sigma}{sub 8}: for a source redshift of unity, they are 42{sub -7}{sup +9}, 35{sub -6}{sup +8}, and 54{sub -7}{sup +12} arcseconds (errors denote 1{sigma} cosmic variance), assuming best-fit cosmological parameters of the Wilkinson Microwave Anisotropy Probe five-year (WMAP5), three-year (WMAP3), and one-year (WMAP1) data, respectively. These values are broadly consistent with current observations given their incompleteness. The mass of the largest lens cluster can be as small as {approx} 10{sup 15} M{sub {circle_dot}}. For the same source redshift, we expect over the whole sky {approx} 35 (WMAP5), {approx} 15 (WMAP3), and {approx} 150 (WMAP1) clusters that have Einstein radii larger than 20 arcseconds. For a larger source redshift of 7, the largest Einstein radii grow approximately twice as large. While the values of the largest Einstein radii are almost unaffected by the level of primordial non-Gaussianity currently of interest, the measurement of the abundance of moderately large lens clusters should probe non-Gaussianity competitively with cosmic microwave background experiments, but only if other cosmological parameters are well-measured. These semi-analytic predictions are based on a rather simple representation of clusters, and hence calibrating them with N-body simulations will help to improve the accuracy. 
We also find that these 'superlens' clusters constitute a highly biased population. For instance, a substantial fraction of these superlens clusters have major axes preferentially aligned with the line-of-sight. As a consequence, the projected mass distributions of the clusters are rounder by an ellipticity of {approx} 0.2 and have {approx} 40%-60% larger concentrations compared with typical clusters with similar redshifts and masses. We argue that the large concentration measured in A1689 is consistent with our model prediction at the 1.2{sigma} level. A combined analysis of several clusters will be needed to see whether or not the observed concentrations conflict with predictions of the flat {Lambda}-dominated cold dark matter model.
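For a sense of the angular scales quoted above, the Einstein radius of a singular isothermal sphere is a standard textbook estimate (not the triaxial halo model used in the paper): theta_E = 4*pi*(sigma_v/c)^2 * (D_ls/D_s). The velocity dispersion and distance ratio below are illustrative.

```python
import math

def theta_e_sis_arcsec(sigma_v_kms, d_ls_over_d_s):
    # Einstein radius of a singular isothermal sphere, in arcseconds.
    c_kms = 299792.458
    theta_rad = 4.0 * math.pi * (sigma_v_kms / c_kms) ** 2 * d_ls_over_d_s
    return math.degrees(theta_rad) * 3600.0

# A very massive cluster (sigma_v ~ 1500 km/s) with D_ls/D_s = 0.5
# gives an Einstein radius of roughly 32 arcseconds, comparable to the
# largest radii discussed above.
theta = theta_e_sis_arcsec(1500.0, 0.5)
```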
SU-E-QI-14: Quantitative Variogram Detection of Mild, Unilateral Disease in Elastase-Treated Rats
Jacob, R; Carson, J
2014-06-15
Purpose: Determining the presence of mild or early disease in the lungs can be challenging and subjective. We present a rapid and objective method for evaluating lung damage in a rat model of unilateral mild emphysema based on a new approach to heterogeneity assessment. We combined octree decomposition (used in three-dimensional (3D) computer graphics) with variograms (used in geostatistics to assess spatial relationships) to evaluate 3D computed tomography (CT) lung images for disease. Methods: Male, Sprague-Dawley rats (232 ± 7 g) were intratracheally dosed with 50 U/kg of elastase dissolved in 200 μL of saline to a single lobe (n=6) or with saline only (n=5). After four weeks, 3D micro-CT images were acquired at end expiration on mechanically ventilated rats using prospective gating. Images were masked, and lungs were decomposed to homogeneous blocks of 2×2×2, 4×4×4, and 8×8×8 voxels using octree decomposition. The spatial variance – the square of the difference of signal intensity – between all pairs of the 8×8×8 blocks was calculated. Variograms – graphs of distance vs. variance – were made, and data were fit to a power law and the exponent determined. The mean HU values, coefficient of variation (CoV), and the emphysema index (EI) were calculated and compared to the variograms. Results: The variogram analysis showed that significant differences between groups existed (p<0.01), whereas the mean HU (p=0.07), CoV (p=0.24), and EI (p=0.08) did not. Calculation time for the variogram for a typical 1000 block decomposition was ∼6 seconds, and octree decomposition took ∼2 minutes. Decomposing the images prior to variogram calculation resulted in a ∼700x decrease in time as compared to other published approaches. Conclusions: Our results suggest that the approach combining octree decomposition and variogram analysis may be a rapid, non-subjective, and sensitive imaging-based biomarker for quantitative characterization of lung disease.
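The variogram step described above pairs up blocks, plots separation distance against squared intensity difference, and fits a power law in log-log space. The block positions and intensities below are synthetic, and the octree decomposition itself is omitted.

```python
import numpy as np

# Synthetic block centers (voxel units) and intensities with a spatial trend.
rng = np.random.default_rng(3)
pos = rng.uniform(0, 50, size=(100, 3))
val = pos[:, 0] * 2.0 + rng.normal(0, 1, 100)

# All pairwise separation distances and squared intensity differences.
i, j = np.triu_indices(len(val), k=1)
dist = np.linalg.norm(pos[i] - pos[j], axis=1)
var = (val[i] - val[j]) ** 2

# Bin by distance, then fit the power law: log(var) = p*log(dist) + c.
bins = np.linspace(dist.min(), dist.max(), 20)
idx = np.digitize(dist, bins)
d_bin, v_bin = [], []
for b in range(1, len(bins)):
    sel = idx == b
    if sel.any():
        d_bin.append(dist[sel].mean())
        v_bin.append(var[sel].mean())
p, c = np.polyfit(np.log(d_bin), np.log(v_bin), 1)
```

A positive exponent `p` indicates variance growing with separation, i.e., spatially structured heterogeneity; a flatter variogram corresponds to spatially uniform tissue.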
TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma
Sisniega, A; Zbijewski, W; Stayman, J; Yorkston, J; Aygun, N; Koliatsos, V; Siewerdsen, J
2014-06-15
Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate uniformity improvement of 18 HU and contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to “oracle” constant fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. 
The method is undergoing implementation in a novel CBCT dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
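The kernel-smoothing step above exploits the fact that scatter varies slowly across the detector: a noisy MC estimate can be denoised with a Gaussian kernel. The profile, noise level, and bandwidth below are illustrative (a basic Nadaraya-Watson smoother, not the paper's optimized KS).

```python
import numpy as np

# A smooth, slowly varying "scatter" profile across a 1-D detector row,
# plus Monte Carlo-like noise.
rng = np.random.default_rng(4)
u = np.linspace(0.0, 1.0, 64)                 # detector coordinate
truth = 0.3 + 0.2 * np.sin(2 * np.pi * u)
noisy = truth + rng.normal(0.0, 0.05, u.size)

def ks_smooth(x_eval, x, y, bw=0.05):
    # Nadaraya-Watson estimator with a Gaussian kernel of bandwidth bw.
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / bw) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

smoothed = ks_smooth(u, u, noisy)
mse_noisy = np.mean((noisy - truth) ** 2)
mse_smoothed = np.mean((smoothed - truth) ** 2)
```

Because the underlying scatter signal is low-frequency, the smoothed estimate is much closer to the truth than the raw noisy one; the same idea extends across gantry angles to fill in the downsampled projections.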
Combining weak-lensing tomography and spectroscopic redshift surveys
Cai, Yan -Chuan; Bernstein, Gary
2012-05-11
Redshift space distortion (RSD) is a powerful way of measuring the growth of structure and testing General Relativity, but it is limited by cosmic variance and the degeneracy between galaxy bias b and the growth rate factor f. The cross-correlation of lensing shear with the galaxy density field can in principle measure b in a manner free from cosmic variance limits, breaking the f-b degeneracy and allowing inference of the matter power spectrum from the galaxy survey. We analyze the growth constraints from a realistic tomographic weak lensing photo-z survey combined with a spectroscopic galaxy redshift survey over the same sky area. For sky coverage f_{sky} = 0.5, analysis of the transverse modes measures b to 2-3% accuracy per Δz = 0.1 bin at z < 1 when ~10 galaxies arcmin^{–2} are measured in the lensing survey and all halos with M > M_{min} = 10^{13}h^{–1}M_{⊙} have spectra. For the gravitational growth parameter γ (f = Ω^{γ}_{m}), combining the lensing information with RSD analysis of non-transverse modes yields accuracy σ(γ) ≈ 0.01. Adding lensing information to the RSD survey improves σ(γ) by an amount equivalent to a 3x (10x) increase in RSD survey area when the spectroscopic survey extends down to halo mass 10^{13.5} (10^{14}) h^{–1} M_{⊙}. We also find that the σ(γ) of overlapping surveys is equivalent to that of surveys 1.5-2 times larger if they are separated on the sky. This gain is greatest when the spectroscopic mass threshold is 10^{13}-10^{14} h^{–1} M_{⊙}, similar to LRG surveys. The gain of overlapping surveys is reduced for very deep or very shallow spectroscopic surveys, but any practical surveys are more powerful when overlapped than when separated. As a result, the gain of overlapped surveys is larger in the case when the primordial power spectrum normalization is uncertain by > 0.5%.
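The growth parameterization referenced above, f = Ω_m(z)^γ, with γ ≈ 0.55 reproducing General Relativity, is easy to evaluate. The cosmological parameters below are illustrative round numbers, not the paper's fiducial values.

```python
def omega_m(z, om0=0.3):
    # Matter density parameter in flat LCDM, radiation neglected.
    e2 = om0 * (1.0 + z) ** 3 + (1.0 - om0)
    return om0 * (1.0 + z) ** 3 / e2

def growth_rate(z, gamma=0.55, om0=0.3):
    # f = Omega_m(z)^gamma; gamma ~ 0.55 corresponds to GR.
    return omega_m(z, om0) ** gamma

f0 = growth_rate(0.0)   # ≈ 0.52: dark energy slows growth today
f1 = growth_rate(1.0)   # closer to 1, when matter dominated
```

A measurement σ(γ) ≈ 0.01, as quoted above, would distinguish GR's γ ≈ 0.55 from modified-gravity alternatives that predict shifts of a few hundredths in γ.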
Vickers, D.; Thomas, C. K.
2014-09-16
Observations of the scale-dependent turbulent fluxes, variances, and the bulk transfer parameterization for sensible heat above, within, and beneath a tall closed Douglas-fir canopy in very weak winds are examined. The daytime sub-canopy vertical velocity spectra exhibit a double-peak structure with peaks at timescales of 0.8 s and 51.2 s. A double-peak structure is also observed in the daytime sub-canopy heat flux co-spectra. The daytime momentum flux co-spectra in the upper bole space and in the sub-canopy are characterized by a relatively large cross-wind component, likely due to the extremely light and variable winds, such that the definition of a mean wind direction, and subsequent partitioning of the momentum flux into along- and cross-wind components, has little physical meaning. Positive values of both momentum flux components in the sub-canopy contribute to upward transfer of momentum, consistent with the observed sub-canopy secondary wind speed maximum. For the smallest resolved scales in the canopy at nighttime, we find increasing vertical velocity variance with decreasing timescale, consistent with very small eddies possibly generated by wake shedding from the canopy elements that transport momentum, but not heat. Unusually large values of the velocity aspect ratio within the canopy were observed, consistent with enhanced suppression of the horizontal wind components compared to the vertical by the very dense canopy. The flux–gradient approach for sensible heat flux is found to be valid for the sub-canopy and above-canopy layers when considered separately, in spite of the very small fluxes on the order of a few W m−2 in the sub-canopy. However, single-source approaches that ignore the canopy fail because they make the heat flux appear to be counter-gradient when in fact it is aligned with the local temperature gradient in both the sub-canopy and above-canopy layers. 
While sub-canopy Stanton numbers agreed well with values typically reported in the literature, our estimates for the above-canopy Stanton number were much larger, which likely leads to underestimated modeled sensible heat fluxes above dark warm closed canopies.
Park, Sungsu
2014-12-12
The main goal of this project is to systematically quantify the major uncertainties of aerosol indirect effects due to the treatment of the moist turbulent processes that drive aerosol activation, cloud macrophysics, and microphysics in response to anthropogenic aerosol perturbations, using CAM5/CESM1. To achieve this goal, the P.I. hired a postdoctoral research scientist (Dr. Anna Fitch), who began work on November 1, 2012. The first task the postdoc and the P.I. undertook was to quantify the role of subgrid vertical velocity variance in the activation and nucleation of cloud liquid droplets and ice crystals, and its impact on the aerosol indirect effect in CAM5. First, we analyzed various LES cases (from dry stable to cloud-topped PBL) to check whether the isotropic turbulence assumption used in CAM5 is valid. It turned out that this assumption is not universally valid. Consequently, from the analysis of the LES, we derived an empirical formulation relaxing the isotropic turbulence assumption used for CAM5 aerosol activation and ice nucleation, implemented it in CAM5/CESM1, tested it in single-column and global simulation modes, and examined how it changed the aerosol indirect effects in CAM5/CESM1. These results were reported in the poster session of the 18th Annual CESM Workshop, held in Breckenridge, CO, June 17-20, 2013. While we derived an empirical formulation from the LES analysis in the first task, its general applicability was questionable because it was obtained from a limited number of LES simulations. The second task was therefore to derive a more fundamental analytical formulation relating vertical velocity variance to TKE, starting from basic physical principles.
This was a challenging subject, but a successful formulation could be implemented directly into CAM5 as a practical parameterization and would contribute substantially to achieving the project goal. Through intensive research over about one year, we found an appropriate mathematical formulation and worked to implement it in the CAM5 PBL and activation routines as a practical parameterized numerical code. During this process, however, the postdoc applied for and accepted a position in Sweden and left NCAR in August 2014. In Sweden, Dr. Anna Fitch is still working on this subject part time, planning to finalize the research and write the paper in the near future.
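For reference, the isotropic assumption discussed above ties the subgrid vertical-velocity variance to TKE: with TKE = 0.5(σu² + σv² + σw²) and σu = σv = σw, one gets σw² = (2/3)·TKE. A minimal sketch, including one possible relaxed (anisotropic) form; the function names and the aspect-ratio parameterization are our assumptions, not the project's actual formulation:

```python
def sigma_w_sq_isotropic(tke):
    """Vertical-velocity variance under the isotropic assumption:
    TKE = 0.5*(su2 + sv2 + sw2) with su2 = sv2 = sw2 gives
    sw2 = (2/3)*TKE."""
    return (2.0 / 3.0) * tke


def sigma_w_sq_anisotropic(tke, aspect_ratio_sq):
    """Illustrative relaxed form: aspect_ratio_sq = sw2/su2 (the two
    horizontal variances assumed equal).  Then
    TKE = 0.5*su2*(2 + aspect_ratio_sq), so
    sw2 = 2*TKE*aspect_ratio_sq/(2 + aspect_ratio_sq)."""
    return 2.0 * tke * aspect_ratio_sq / (2.0 + aspect_ratio_sq)
```

For `aspect_ratio_sq = 1` the anisotropic form reduces to the isotropic (2/3)·TKE, as it must.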
Combining weak-lensing tomography and spectroscopic redshift surveys
Cai, Yan -Chuan; Bernstein, Gary
2012-05-11
Redshift space distortion (RSD) is a powerful way of measuring the growth of structure and testing General Relativity, but it is limited by cosmic variance and the degeneracy between galaxy bias b and the growth rate factor f. The cross-correlation of lensing shear with the galaxy density field can in principle measure b in a manner free from cosmic variance limits, breaking the f-b degeneracy and allowing inference of the matter power spectrum from the galaxy survey. We analyze the growth constraints from a realistic tomographic weak lensing photo-z survey combined with a spectroscopic galaxy redshift survey over the same sky area. For sky coverage f{sub sky} = 0.5, analysis of the transverse modes measures b to 2-3% accuracy per {Delta}z = 0.1 bin at z < 1 when ~10 galaxies arcmin{sup -2} are measured in the lensing survey and all halos with M > M{sub min} = 10{sup 13} h{sup -1} M{sub sun} have spectra. For the gravitational growth parameter {gamma} (f = {Omega}{sub m}{sup {gamma}}), combining the lensing information with RSD analysis of non-transverse modes yields accuracy {sigma}({gamma}) {approx} 0.01. Adding lensing information to the RSD survey improves {sigma}({gamma}) by an amount equivalent to a 3x (10x) increase in RSD survey area when the spectroscopic survey extends down to halo mass 10{sup 13.5} (10{sup 14}) h{sup -1} M{sub sun}. We also find that the {sigma}({gamma}) of overlapping surveys is equivalent to that of surveys 1.5-2 times larger if they are separated on the sky. This gain is greatest when the spectroscopic mass threshold is 10{sup 13}-10{sup 14} h{sup -1} M{sub sun}, similar to LRG surveys. The gain of overlapping surveys is reduced for very deep or very shallow spectroscopic surveys, but any practical surveys are more powerful when overlapped than when separated. The gain of overlapped surveys is larger when the primordial power spectrum normalization is uncertain by > 0.5%.
Guest, Geoffrey; Bright, Ryan M.; Cherubini, Francesco; Strømman, Anders H.
2013-11-15
Temporary and permanent carbon storage from biogenic sources is seen as a way to mitigate climate change. The aim of this work is to illustrate the need to harmonize the quantification of such mitigation across all possible storage pools in the bio- and anthroposphere. We investigate nine alternative storage cases and a wide array of bio-resource pools: from annual crops, short rotation woody crops, and medium rotation temperate forests, to long rotation boreal forests. For each feedstock type and biogenic carbon storage pool, we quantify the carbon cycle climate impact due to the skewed time distribution between emission and sequestration fluxes in the bio- and anthroposphere. Additional consideration of the climate impact from albedo changes in forests is also illustrated for the boreal forest case. When characterizing climate impact with global warming potentials (GWP), we find a large variance in results, which is attributed to the different combinations of biomass storage and feedstock systems. The storage of biogenic carbon in any storage pool does not always confer climate benefits: even when biogenic carbon is stored long-term in durable product pools, the climate outcome may still be undesirable when the carbon is sourced from slow-growing biomass feedstock. For example, when biogenic carbon from Norway spruce from Norway is stored in furniture with a mean lifetime of 43 years, a climate change impact of 0.08 kg CO{sub 2}eq per kg CO{sub 2} stored (100 year time horizon (TH)) would result. It was also found that when biogenic carbon is stored in a pool with negligible leakage to the atmosphere, the resulting GWP factor is not necessarily -1 kg CO{sub 2}eq per kg CO{sub 2} stored. As an example, when biogenic CO{sub 2} from Norway spruce biomass is stored in geological reservoirs with no leakage, we estimate a GWP of -0.56 kg CO{sub 2}eq per kg CO{sub 2} stored (100 year TH) when albedo effects are also included.
The large variance in GWPs across the range of resource and carbon storage options considered indicates that more accurate accounting will require case-specific factors derived following the methodological guidelines provided in this and recent manuscripts. -- Highlights: Climate impacts of stored biogenic carbon (bio-C) are consistently quantified. Temporary storage of bio-C does not always equate to a climate cooling impact. 1 unit of bio-C stored over a time horizon does not always equate to -1 unit CO{sub 2}eq. Discrepancies in climate change impact quantification in the literature are clarified.
Nazarian, Dalar; Ganesh, P.; Sholl, David S.
2015-09-30
We compiled a test set of chemically and topologically diverse Metal–Organic Frameworks (MOFs) with high-accuracy, experimentally derived crystallographic structure data. The test set was used to benchmark the performance of Density Functional Theory (DFT) functionals (M06L, PBE, PW91, PBE-D2, PBE-D3, and vdW-DF2) for predicting lattice parameters, unit cell volume, bonded parameters, and pore descriptors. On average PBE-D2, PBE-D3, and vdW-DF2 predict more accurate structures, but all functionals predicted pore diameters within 0.5 Å of the experimental diameter for every MOF in the test set. The test set was also used to assess the variance in performance of DFT functionals for elastic properties and atomic partial charges. DFT-predicted elastic properties such as the minimum shear modulus and Young's modulus can differ by averages of 3 and 9 GPa, respectively, for rigid MOFs such as those in the test set. Moreover, we find that the partial charges calculated by vdW-DF2 deviate the most from the other functionals, while there is no significant difference among the partial charges calculated by M06L, PBE, PW91, PBE-D2, and PBE-D3 for the MOFs in the test set. We find that while there are differences in the magnitude of the properties predicted by the various functionals, these discrepancies are small compared to the accuracy necessary for most practical applications.
Fermentation and Hydrogen Metabolism Affect Uranium Reduction by Clostridia
Gao, Weimin; Francis, Arokiasamy J.
2013-01-01
Previously, it has been shown not only that uranium reduction under fermentation conditions is common among clostridia species, but also that strains differ in the extent of their capability and that the pH of the culture significantly affects uranium(VI) reduction. In this study, using HPLC and GC techniques, the metabolic properties of the clostridial strains active in uranium reduction under fermentation conditions were characterized, and their effects on the variance in uranium reduction capability are discussed. The relationship between hydrogen metabolism and uranium reduction was then explored further, and the important role played by hydrogenase in uranium(VI) and iron(III) reduction by clostridia demonstrated. When hydrogen was provided as the headspace gas, uranium(VI) reduction occurred in the presence of whole cells of clostridia, in contrast to the case of nitrogen as the headspace gas. Without clostridia cells, hydrogen alone could not bring about uranium(VI) reduction. In alignment with this observation, it was also found that either copper(II) addition or iron depletion in the medium could compromise uranium reduction by clostridia. Finally, a comprehensive model was proposed to explain uranium reduction by clostridia and its relationship to overall metabolism, especially hydrogen (H{sub 2}) production.
Alternative disposal options for alpha-mixed low-level waste
Loomis, G.G.; Sherick, M.J.
1995-12-01
This paper presents several disposal options for the Department of Energy alpha-mixed low-level waste. The mixed nature of the waste favors thermally treating the waste to either an iron-enriched basalt or glass waste form, at which point a multitude of reasonable disposal options, including in-state disposal, are a possibility. Most notably, these waste forms will meet the land-ban restrictions. However, the thermal treatment of this waste involves considerable waste handling and complicated/expensive offgas systems with secondary waste management problems. In the United States, public perception of offgas systems in the radioactive incinerator area is unfavorable. The alternatives presented here are nonthermal in nature and involve homogenizing the waste with cryogenic techniques followed by complete encapsulation with a variety of chemical/grouting agents into retrievable waste forms. Once encapsulated, the waste forms are suitable for transport out of the state or for actual in-state disposal. This paper investigates variances that would have to be obtained and contrasts the alternative encapsulation idea with the thermal treatment option.
A fast contour descriptor algorithm for supernova image classification
Aragon, Cecilia R.; Aragon, David Bradburn
2006-07-16
We describe a fast contour descriptor algorithm and its application to a distributed supernova detection system (the Nearby Supernova Factory) that processes 600,000 candidate objects in 80 GB of image data per night. Our shape-detection algorithm reduced the number of false positives generated by the supernova search pipeline by 41% while producing no measurable impact on running time. Fourier descriptors are an established method of numerically describing the shapes of object contours, but transform-based techniques are ordinarily avoided in this type of application due to their computational cost. We devised a fast contour descriptor implementation for supernova candidates that meets the tight processing budget of the application. Using the lowest-order descriptors (F{sub 1} and F{sub -1}) and the total variance in the contour, we obtain one feature representing the eccentricity of the object and another denoting its irregularity. Because the number of Fourier terms to be calculated is fixed and small, the algorithm runs in linear time, rather than the O(n log n) time of an FFT. Constraints on object size allow further optimizations so that the total cost of producing the required contour descriptors is about 4n addition/subtraction operations, where n is the length of the contour.
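The key idea above, evaluating only the two first-order Fourier terms directly at O(n) cost instead of running a full FFT, can be sketched as follows. The exact feature definitions (a ratio for eccentricity, a residual-variance form for irregularity) are our assumptions for illustration, not necessarily the pipeline's:

```python
import numpy as np

def low_order_descriptors(contour):
    """Compute the lowest-order Fourier descriptors F(1) and F(-1) of a
    closed contour given as complex points x + iy.  Only two DFT terms
    are evaluated, so the cost is O(n) rather than O(n log n)."""
    z = np.asarray(contour, dtype=complex)
    n = len(z)
    k = np.arange(n)
    # Direct evaluation of the two required DFT coefficients.
    f_plus = np.sum(z * np.exp(-2j * np.pi * k / n)) / n
    f_minus = np.sum(z * np.exp(2j * np.pi * k / n)) / n
    # Eccentricity-like feature: an ellipse a*cos(t) + i*b*sin(t)
    # gives |F(-1)|/|F(1)| = (a - b)/(a + b), zero for a circle.
    ecc = abs(f_minus) / abs(f_plus)
    # Irregularity: fraction of the contour variance (about the
    # centroid) NOT captured by the first-order terms (Parseval).
    total_var = np.mean(np.abs(z - z.mean()) ** 2)
    irregularity = 1.0 - (abs(f_plus) ** 2 + abs(f_minus) ** 2) / total_var
    return ecc, irregularity
```

On an exact ellipse both F(1) and F(-1) are recovered exactly by the discrete sums, so the irregularity feature is zero and the eccentricity feature reduces to (a - b)/(a + b).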
Standard Methods of Characterizing Performance of Fan Filter Units, Version 3.0
Xu, Tengfang
2007-01-01
Talamo, A.; Gohar, Y.; Sadovich, S.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.
2013-07-01
MCNP6, the general-purpose Monte Carlo N-Particle code, has the capability to perform time-dependent calculations by tracking the time interval between successive events of the neutron random walk. In fixed-source calculations for a subcritical assembly, the zero time value is assigned at the moment the neutron is emitted by the external neutron source. The PTRAC and F8 cards of MCNP allow tallying the time when a neutron is captured by {sup 3}He(n, p) reactions in the neutron detector. From this information, it is possible to build three different time distributions: neutron counts, Rossi-{alpha}, and Feynman-{alpha}. The neutron counts time distribution represents the number of neutrons captured as a function of time. The Rossi-{alpha} distribution represents the number of neutron pairs captured as a function of the time interval between two capture events. The Feynman-{alpha} distribution represents the variance-to-mean ratio, minus one, of the neutron counts array as a function of a fixed time interval. The MCNP6 results for these three time distributions have been compared with the experimental data of the YALINA Thermal facility and have been found to be in quite good agreement. (authors)
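The Feynman statistic defined above (variance-to-mean ratio of gated counts, minus one) can be sketched in a few lines; function and variable names are ours, and for an uncorrelated (Poisson) source the statistic is approximately zero at every gate width:

```python
import numpy as np

def feynman_y(event_times, gate_width, t_max):
    """Feynman-Y statistic: variance-to-mean ratio, minus one, of
    detection counts binned into non-overlapping gates of width
    `gate_width` over [0, t_max].  Zero for a Poisson process;
    positive when capture events are correlated (fission chains)."""
    n_gates = int(t_max / gate_width)
    counts, _ = np.histogram(event_times, bins=n_gates, range=(0.0, t_max))
    return counts.var() / counts.mean() - 1.0
```

Repeating the computation over a range of gate widths yields the Feynman-{alpha} curve, from which the prompt-neutron decay constant is fitted.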
Dwivedi, Gopal; Viswanathan, Vaishak; Sampath, Sanjay; Shyam, Amit; Lara-Curzio, Edgar
2014-06-09
Fracture toughness has become one of the dominant design parameters dictating the selection of materials and their microstructure to obtain durable thermal barrier coatings (TBCs). Much progress has been made in characterizing the fracture toughness of relevant TBC compositions in bulk form, and it has become apparent that this property is significantly affected by process-induced microstructural defects. In this investigation, a systematic study of the influence of coating microstructure on the fracture toughness of atmospheric plasma sprayed (APS) TBCs has been carried out. Yttria partially stabilized zirconia (YSZ) coatings were fabricated under different spray process conditions, inducing different levels of porosity and interfacial defects. Fracture toughness was measured on free-standing coatings in as-processed and thermally aged conditions using the double torsion technique. Results indicate significant variance in fracture toughness among coatings with different microstructures, including changes induced by thermal aging. Comparative studies were also conducted on an alternative TBC composition, Gd{sub 2}Zr{sub 2}O{sub 7} (GDZ), which, as anticipated, shows significantly lower fracture toughness than YSZ. The results from these studies not only point toward a need for process and microstructure optimization for enhanced TBC performance but also provide a framework for establishing performance metrics for promising new TBC compositions.
Deng, Xiuhao; Jia, Chunjing; Chien, Chih-Chun
2015-02-23
We report that the Bose Hubbard model (BHM) of interacting bosons in a lattice has been a paradigm in many-body physics, and it exhibits a Mott insulator (MI)-superfluid (SF) transition at integer filling. Here a quantum simulator of the BHM using a superconducting circuit is proposed. Specifically, a superconducting transmission line resonator supporting microwave photons is coupled to a charge qubit to form one site of the BHM, and adjacent sites are connected by a tunable coupler. To obtain a mapping from the superconducting circuit to the BHM, we focus on the dispersive regime where the excitations remain photonlike. Standard perturbation theory is implemented to locate the parameter range where the MI-SF transition may be simulated. This simulator allows single-site manipulations and we illustrate this feature by considering two scenarios where a single-site manipulation can drive a MI-SF transition. The transition can be analyzed by mean-field analyses, and the exact diagonalization was implemented to provide accurate results. The variance of the photon density and the fidelity metric clearly show signatures of the transition. Lastly, experimental realizations and other possible applications of this simulator are also discussed.
Daytime turbulent exchange between the Amazon forest and the atmosphere
Fitzjarrald, D.R.; Moore, K.E.; Cabral, M.R.; Scolar, J.; Manzi, A.O.; de Abreau Sa, L.D.
1990-09-20
Detailed observations of turbulence just above and below the crown of the Amazon rain forest during the wet season are presented. The forest canopy is shown to remove high-frequency turbulent fluctuations while passing lower frequencies. Filter characteristics of turbulent transfer into the Amazon rain forest canopy are quantified. In spite of the ubiquitous presence of clouds and frequent rain during this season, the average horizontal wind speed spectrum and the relationship between the horizontal wind speed and its standard deviation are well described by dry convective boundary layer similarity hypotheses originally found to apply in flat terrain. Diurnal changes in the sign of the vertical velocity skewness observed above and inside the canopy are shown to be plausibly explained by considering the skewness budget. Simple empirical formulas that relate observed turbulent heat fluxes to horizontal wind speed and variance are presented. Changes in the amount of turbulent coupling between the forest and the boundary layer associated with deep convective clouds are presented in three case studies. Even small raining clouds are capable of evacuating the canopy of substances normally trapped by persistent static stability near the forest floor. Recovery from these events can take more than an hour, even during midday.
Tzvi Galchen; Mei Xu; Eberhard, W.L.
1992-11-30
This work is part of the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE), an international land-surface-atmosphere experiment aimed at improving the way climate models represent energy, water, heat, and carbon exchanges, and at improving the use of satellite-based remote sensing to monitor such parameters. Here the authors present results from Doppler lidar measurements of a range of turbulence parameters in the region of the unstable planetary boundary layer (PBL). The parameters include averaged velocities, cartesian velocities, variances in velocities, the parts of the covariance associated with vertical fluxes of horizontal momentum, and third moments of the vertical velocity. They explain their analysis technique, especially as it relates to error reduction of the averaged turbulence parameters from individual measurements with relatively large errors. The scales studied range from 150 m to 12 km. With this new diagnostic they address questions about the behavior of the convectively unstable PBL, as well as the stable layer which overlies it.
Chandra, A S; Kollias, P; Giangrande, S E; Klein, S A
2009-08-20
A long-term study of the turbulent structure of the convective boundary layer (CBL) at the U.S. Department of Energy Atmospheric Radiation Measurement Program (ARM) Southern Great Plains (SGP) Climate Research Facility is presented. Doppler velocity measurements from insects occupying the lowest 2 km of the boundary layer during summer months are used to map the vertical velocity component in the CBL. The observations cover four summer periods (2004-08) and are classified into cloudy and clear boundary layer conditions. Profiles of vertical velocity variance, skewness, and mass flux are estimated to study the daytime evolution of the convective boundary layer during these conditions. A conditional sampling method is applied to the original Doppler velocity dataset to extract coherent vertical velocity structures and to examine plume dimension and contribution to the turbulent transport. Overall, the derived turbulent statistics are consistent with previous aircraft and lidar observations. The observations provide unique insight into the daytime evolution of the convective boundary layer and the role of increased cloudiness in the turbulent budget of the subcloud layer. Coherent structures (plumes-thermals) are found to be responsible for more than 80% of the total turbulent transport resolved by the cloud radar system. The extended dataset is suitable for evaluating boundary layer parameterizations and testing large-eddy simulations (LESs) for a variety of surface and cloud conditions.
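Profiles of vertical velocity variance and skewness like those described above can be computed from a time-height array of Doppler velocities; a minimal sketch (the array layout and function name are our assumptions, not the study's processing chain):

```python
import numpy as np

def vertical_velocity_stats(w):
    """Per-height variance and skewness of vertical velocity.

    `w` is a 2-D array with rows = time samples and columns = height
    bins; statistics are taken over time at each height."""
    wp = w - w.mean(axis=0)          # fluctuations about the time mean
    var = (wp ** 2).mean(axis=0)     # second moment: variance profile
    skew = (wp ** 3).mean(axis=0) / var ** 1.5  # normalized third moment
    return var, skew
```

Positive skewness at a given height indicates that narrow, strong updrafts are balanced by broader, weaker downdrafts, the classic convective boundary layer signature.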
SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry
Kotoku, J; Nakabayashi, S; Kumagai, S; Ishibashi, T; Kobayashi, T; Haga, A; Saotome, N; Arai, N
2014-06-01
Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort of patient registration in radiation therapy. To achieve this goal, we introduced the multiple view stereo technique, which is known from 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction model was built by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error by means of the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin is reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128).
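The eight-point step mentioned above reduces to plain linear algebra: each correspondence gives one linear constraint on the nine entries of the fundamental matrix F. A noiseless sketch, without the RANSAC loop or Hartley normalization that a production pipeline would add (names are ours):

```python
import numpy as np

def eight_point(x1, x2):
    """Linear eight-point estimate of the fundamental matrix F from
    >= 8 point correspondences satisfying x2^T F x1 = 0, where x1, x2
    are (N, 2) arrays of matched image coordinates."""
    def hom(p):  # append homogeneous coordinate
        return np.hstack([p, np.ones((len(p), 1))])
    a, b = hom(x1), hom(x2)
    # One row per correspondence: coefficients of the 9 entries of F.
    A = np.stack([np.outer(q, p).ravel() for p, q in zip(a, b)])
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)       # null vector = least-squares F
    # Enforce rank 2: a valid fundamental matrix is singular.
    u, s, vt = np.linalg.svd(F)
    s[-1] = 0.0
    return u @ np.diag(s) @ vt
```

With noisy matches this linear solve is run inside RANSAC on random 8-point subsets, keeping the F with the most inliers under an epipolar-distance threshold.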
Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.
2015-11-19
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations, where they are an alternative to histogram tallies for obtaining global solutions. With KDEs, a single event, either a collision or a particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications; however, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies in the solution. An ad hoc remedy for these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
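The contrast with a histogram tally can be sketched in one dimension: each scoring event spreads a smooth kernel over nearby evaluation points instead of incrementing a single bin. The kernel choice and normalization below are illustrative assumptions, not the MFP KDE of the paper:

```python
import numpy as np

def kde_tally(collision_x, weights, eval_x, bandwidth):
    """1-D kernel density estimate of a collision-density tally.

    Each weighted collision contributes a smooth kernel to every
    evaluation point within one bandwidth, so the estimate (and its
    uncertainty) does not depend on a chosen bin width."""
    u = (eval_x[:, None] - collision_x[None, :]) / bandwidth
    # Epanechnikov kernel: compactly supported and integrates to 1.
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    return (k * weights[None, :]).sum(axis=1) / (bandwidth * weights.sum())
```

A histogram tally would instead assign each collision wholly to one bin, trading resolution against per-bin variance; the KDE decouples the two at the cost of a bandwidth choice.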
Griffin, Joshua D.; Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.; Ross, Kyle; Cardoni, Jeffrey N; Kalinich, Donald A.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie; Ghosh, S. Tina
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the safety relief valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependence of the effect of cesium chemical form on the accident progression.
Simulation of winds as seen by a rotating vertical axis wind turbine blade
George, R.L.
1984-02-01
The objective of this report is to provide turbulent wind analyses relevant to the design and testing of Vertical Axis Wind Turbines (VAWT). A technique was developed for utilizing high-speed turbulence wind data from a line of seven anemometers at a single level to simulate the wind seen by a rotating VAWT blade. Twelve data cases, representing a range of wind speeds and stability classes, were selected from the large volume of data available from the Clayton, New Mexico, Vertical Plane Array (VPA) project. Simulations were run of the rotationally sampled wind speed relative to the earth, as well as the tangential and radial wind speeds, which are relative to the rotating wind turbine blade. Spectral analysis is used to compare and assess wind simulations from the different wind regimes, as well as from alternate wind measurement techniques. The variance in the wind speed at frequencies at or above the blade rotation rate is computed for all cases, and is used to quantitatively compare the VAWT simulations with Horizontal Axis Wind Turbine (HAWT) simulations. Qualitative comparisons are also made with direct wind measurements from a VAWT blade.
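The simulation technique amounts to sampling the anemometer line at the blade's instantaneous lateral position as it rotates. A simplified sketch (our geometry and names: the line center is assumed to sit on the rotation axis, and only the lateral excursion of the blade is tracked):

```python
import numpy as np

def rotationally_sampled_wind(anemometer_x, u_series, radius, omega, dt):
    """Simulate the wind speed seen by a rotating VAWT blade from a
    line of anemometers at one level.

    `anemometer_x`: lateral positions of the anemometers (m)
    `u_series`:     (n_time, n_anemometers) wind-speed records
    `radius`:       blade radius (m), `omega`: rotation rate (rad/s)
    `dt`:           sampling interval (s)"""
    n_t = u_series.shape[0]
    t = np.arange(n_t) * dt
    blade_x = radius * np.cos(omega * t)  # lateral blade position
    # Linearly interpolate along the anemometer line at each time step.
    return np.array([np.interp(bx, anemometer_x, u_series[i])
                     for i, bx in enumerate(blade_x)])
```

Spectral analysis of the sampled series then reveals the energy concentrated at the rotation rate and its harmonics, the quantity compared against HAWT simulations in the report.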
Monte Carlo analysis of localization errors in magnetoencephalography
Medvick, P.A.; Lewis, P.S.; Aine, C.; Flynn, E.R.
1989-01-01
In magnetoencephalography (MEG), the magnetic fields created by electrical activity in the brain are measured on the surface of the skull. To determine the location of the activity, the measured field is fit to an assumed source generator model, such as a current dipole, by minimizing chi-square. For current dipoles and other nonlinear source models, the fit is performed by an iterative least squares procedure such as the Levenberg-Marquardt algorithm. Once the fit has been computed, analysis of the resulting value of chi-square can determine whether the assumed source model is adequate to account for the measurements. If the source model is adequate, then the effect of measurement error on the fitted model parameters must be analyzed. Although these kinds of simulation studies can provide a rough idea of the effect that measurement error can be expected to have on source localization, they cannot provide detailed enough information to determine the effects that the errors in a particular measurement situation will produce. In this work, we introduce and describe the use of Monte Carlo-based techniques to analyze model fitting errors for real data. Given the details of the measurement setup and a statistical description of the measurement errors, these techniques determine the effects the errors have on the fitted model parameters. The effects can then be summarized in various ways such as parameter variances/covariances or multidimensional confidence regions. 8 refs., 3 figs.
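The Monte Carlo procedure described above amounts to: perturb the best-fit prediction with the known measurement-noise statistics, refit, and summarize the scatter of the fitted parameters as variances/covariances. A sketch with a straight-line model standing in for the nonlinear dipole fit (all names are ours, and Gaussian noise is an illustrative assumption):

```python
import numpy as np

def monte_carlo_fit_errors(x, y_fit, sigma, n_trials=1000, seed=0):
    """Monte Carlo analysis of model-fitting errors.

    `y_fit` is the model prediction at the best-fit parameters and
    `sigma` the known measurement-noise level.  Each trial adds fresh
    noise to `y_fit`, refits the model, and the ensemble of fitted
    parameters is summarized by its mean and covariance."""
    rng = np.random.default_rng(seed)
    params = []
    for _ in range(n_trials):
        y_synth = y_fit + rng.normal(0.0, sigma, size=len(x))
        params.append(np.polyfit(x, y_synth, 1))  # refit slope, intercept
    params = np.asarray(params)
    return params.mean(axis=0), np.cov(params.T)
```

For the MEG case the refit step would be the iterative Levenberg-Marquardt dipole fit, and the resulting parameter cloud can equally be summarized as a multidimensional confidence region rather than a covariance matrix.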
Optimization of Micro Metal Injection Molding By Using Grey Relational Grade
Ibrahim, M. H. I. [Dept. Of Mechanical Engineering, Universiti Tun Hussein Onn Malaysia (UTHM), 86400 Parit Raja, Batu Pahat, Johor (Malaysia); Precision Process Research Group, Dept. of Mechanical and Materials Engineering, Faculty of Engineering, Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor (Malaysia); Muhamad, N.; Sulong, A. B.; Nor, N. H. M.; Harun, M. R.; Murtadhahadi [Precision Process Research Group, Dept. of Mechanical and Materials Engineering, Faculty of Engineering, Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor (Malaysia); Jamaludin, K. R. [UTM Razak School of Engineering and Advanced Technology, UTM International Campus, 54100 Jalan Semarak, Kuala Lumpur (Malaysia)
2011-01-17
Micro metal injection molding (µMIM), a variant of the MIM process, is a promising method for producing near-net-shape metallic micro components of complex geometry. In this paper, µMIM is applied to produce 316L stainless steel micro components. Because of the highly stringent requirements on µMIM properties, the study emphasizes optimization of the process parameters, where the Taguchi method combined with Grey Relational Analysis (GRA) is implemented as a novel approach to investigating multiple performance characteristics. The basic idea of GRA is to compute a grey relational grade (GRG), which converts the multi-objective problem (density and strength) into a single-objective one. Under the 'larger the better' criterion, results show that injection time (D) is the most significant parameter, followed by injection pressure (A), holding time (E), mold temperature (C), and injection temperature (B). Analysis of variance (ANOVA) is also employed to confirm the significance of each parameter in this study.
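The GRG conversion from multiple responses to a single objective can be sketched as follows. The run data are hypothetical, not the paper's measurements; the formula is the standard larger-the-better normalization followed by the grey relational coefficient with distinguishing coefficient ζ = 0.5.

```python
def grey_relational_grade(rows, zeta=0.5):
    # rows: one list per experimental run, e.g. [density, strength].
    ncols = len(rows[0])
    # Larger-the-better normalization of each response to [0, 1].
    norm = []
    for j in range(ncols):
        col = [r[j] for r in rows]
        lo, hi = min(col), max(col)
        norm.append([(v - lo) / (hi - lo) for v in col])
    # Deviation from the ideal (1.0); after normalization the global
    # minimum deviation is 0 and the maximum is 1, so the grey relational
    # coefficient reduces to zeta / (delta + zeta).
    grades = []
    for i in range(len(rows)):
        deltas = [1.0 - norm[j][i] for j in range(ncols)]
        coeffs = [zeta / (d + zeta) for d in deltas]
        grades.append(sum(coeffs) / ncols)   # GRG = mean coefficient
    return grades

# Hypothetical runs: [density (g/cm^3), strength (MPa)].
runs = [[7.5, 310.0], [7.7, 340.0], [7.6, 325.0]]
g = grey_relational_grade(runs)
print(g.index(max(g)))   # run with the best multi-response compromise
```

Ranking runs by GRG is what lets the Taguchi analysis treat density and strength as one objective.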
MAVTgsa: An R Package for Gene Set (Enrichment) Analysis
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Chien, Chih-Yi; Chang, Ching-Wei; Tsai, Chen-An; Chen, James J.
2014-01-01
Gene set analysis methods aim to determine whether an a priori defined set of genes shows a statistically significant difference in expression with respect to either categorical or continuous outcomes. Although many methods for gene set analysis have been proposed, a systematic analysis tool for identifying different types of gene set significance modules had not previously been developed. This work presents an R package, called MAVTgsa, which includes three different methods for integrated gene set enrichment analysis. (1) The one-sided OLS (ordinary least squares) test detects coordinated changes of genes in a gene set in one direction, either up- or downregulation. (2) The two-sided MANOVA (multivariate analysis of variance) detects changes in both directions for studies with two or more experimental conditions. (3) A random forests-based procedure identifies gene sets that can accurately predict samples from different experimental conditions or that are associated with continuous phenotypes. MAVTgsa computes the P values and FDR (false discovery rate) q-values for all gene sets in the study. Furthermore, MAVTgsa provides several visualization outputs to support and interpret the enrichment results. This package is available online.
Chou, Wen-Chi; Ma, Qin; Yang, Shihui; Cao, Sha; Klingeman, Dawn M.; Brown, Steven D.; Xu, Ying
2015-03-12
The identification of transcription units (TUs) encoded in a bacterial genome is essential to elucidating the transcriptional regulation of the organism. To gain a detailed understanding of the dynamically composed TU structures, we have used four strand-specific RNA-seq (ssRNA-seq) datasets collected under two experimental conditions to derive the genomic TU organization of Clostridium thermocellum using a machine-learning approach. Our method accurately predicted the genomic boundaries of individual TUs based on two sets of parameters measuring the RNA-seq expression patterns across the genome: expression-level continuity and variance. A total of 2590 distinct TUs are predicted based on the four RNA-seq datasets. Moreover, among the predicted TUs, 44% have multiple genes. We assessed our prediction method on an independent set of RNA-seq data with longer reads. The evaluation confirmed the high quality of the predicted TUs. Functional enrichment analyses on a selected subset of the predicted TUs revealed interesting biology. To demonstrate the generality of the prediction method, we have also applied the method to RNA-seq data collected on Escherichia coli and achieved high prediction accuracies. The TU prediction program, named SeqTU, is publicly available at https://code.google.com/p/seqtu/. We expect that the predicted TUs can serve as baseline information for studying transcriptional and post-transcriptional regulation in C. thermocellum and other bacteria.
Fowler, Michael J.; Howard, Marylesa; Luttman, Aaron; Mitchell, Stephen E.; Webb, Timothy J.
2015-06-03
One of the primary causes of blur in a high-energy X-ray imaging system is the shape and extent of the radiation source, or 'spot'. It is important to be able to quantify the size of the spot, as it provides a lower bound on the recoverable resolution for a radiograph, and penumbral imaging methods – which involve the analysis of blur caused by a structured aperture – can be used to obtain the spot's spatial profile. We present a Bayesian approach for estimating the spot shape that, unlike variational methods, is robust to the initial choice of parameters. The posterior is obtained from a normal likelihood, constructed from a weighted least squares approximation to a Poisson noise model, and from prior assumptions that enforce both smoothness and non-negativity constraints. A Markov chain Monte Carlo algorithm is used to obtain samples from the target posterior, and the reconstruction and uncertainty estimates are the computed mean and variance of the samples, respectively. Lastly, synthetic datasets are used to demonstrate accurate reconstruction, while real data taken with high-energy X-ray imaging systems are used to demonstrate applicability and feasibility.
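The "sample the posterior, then report the mean and variance of the samples" recipe can be illustrated on a deliberately tiny stand-in problem: a single non-negative "spot intensity" inferred from noisy measurements with a random-walk Metropolis sampler. The data, prior, and proposal width are all hypothetical; only the structure (normal likelihood, non-negativity via the prior support, mean/variance of post-burn-in samples) mirrors the abstract.

```python
import math
import random

random.seed(1)
true_s, sigma = 4.0, 1.0
data = [true_s + random.gauss(0, sigma) for _ in range(50)]  # synthetic counts

def log_post(s):
    if s < 0:
        return -math.inf                     # non-negativity constraint
    ll = -sum((d - s) ** 2 for d in data) / (2 * sigma ** 2)  # normal likelihood
    return ll - 0.1 * s                      # weak exponential prior (assumed)

# Random-walk Metropolis: propose, accept with probability min(1, ratio).
s, lp, samples = 1.0, log_post(1.0), []
for _ in range(20000):
    prop = s + random.gauss(0, 0.3)
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:
        s, lp = prop, lp_prop
    samples.append(s)

burn = samples[5000:]                        # discard burn-in
mean = sum(burn) / len(burn)                 # reconstruction
var = sum((x - mean) ** 2 for x in burn) / len(burn)  # uncertainty estimate
print(round(mean, 1))
```

The full spot-shape problem replaces the scalar with a pixelized image and adds a smoothness prior, but the sampler and the mean/variance summaries are used the same way.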
Fuel cycle cost uncertainty from nuclear fuel cycle comparison
Li, J.; McNelis, D.; Yim, M.S.
2013-07-01
This paper examined the uncertainty in fuel cycle cost (FCC) calculations by considering both model and parameter uncertainty. Four fuel cycle options were compared in the analysis: the once-through cycle (OT), the DUPIC cycle, the MOX cycle, and a closed fuel cycle with fast reactors (FR). Model uncertainty was addressed by using three different FCC modeling approaches, with and without consideration of the time value of money. The relative ratios of FCC compared to OT did not change much across the different modeling approaches. This observation was consistent with the results of the sensitivity study for the discount rate. Two different sets of data with uncertainty ranges for the unit costs were used to address the parameter uncertainty of the FCC calculation. The sensitivity study showed that the dominant contributor to the total variance of FCC is the uranium price. In general, the FCC of OT was found to be the lowest, followed by FR, MOX, and DUPIC; depending on the uranium price, however, the FR cycle can have a lower FCC than OT. The reprocessing cost was also found to have a major impact on FCC.
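The "with and without the time value of money" contrast can be made concrete with a levelized-cost sketch. The cash flows, energy output, and discount rate below are hypothetical, not the paper's data; the point is that discounting both streams to present value changes the cost when spending is front-loaded.

```python
def levelized_cost(costs_by_year, energy_by_year, rate):
    # Discount costs and energy to present value, then take the ratio.
    pv_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs_by_year))
    pv_energy = sum(e / (1 + rate) ** t for t, e in enumerate(energy_by_year))
    return pv_cost / pv_energy

costs = [100.0, 20.0, 20.0, 20.0, 20.0]   # hypothetical outlays (front-loaded)
energy = [0.0, 50.0, 50.0, 50.0, 50.0]    # hypothetical generation

undiscounted = levelized_cost(costs, energy, 0.0)    # no time value of money
discounted = levelized_cost(costs, energy, 0.05)     # 5% discount rate
print(round(undiscounted, 3), round(discounted, 3))  # → 0.9 0.964
```

With capital spent before any energy is produced, discounting raises the levelized cost, which is why the discount-rate sensitivity matters for comparing OT against the recycle options.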
Pre-test CFD Calculations for a Bypass Flow Standard Problem
Rich Johnson
2011-11-01
The bypass flow in a prismatic high temperature gas-cooled reactor (HTGR) is the flow that occurs between adjacent graphite blocks. Gaps exist between blocks due to variances in their manufacture and installation and because of the expansion and shrinkage of the blocks from heating and irradiation. Although the temperature of fuel compacts and graphite is sensitive to the presence of bypass flow, there is great uncertainty in the level and effects of the bypass flow. The Next Generation Nuclear Plant (NGNP) program at the Idaho National Laboratory has undertaken to produce experimental data of isothermal bypass flow between three adjacent graphite blocks. These data are intended to provide validation for computational fluid dynamic (CFD) analyses of the bypass flow. Such validation data sets are called Standard Problems in the nuclear safety analysis field. Details of the experimental apparatus as well as several pre-test calculations of the bypass flow are provided. Pre-test calculations are useful in examining the nature of the flow and to see if there are any problems associated with the flow and its measurement. The apparatus is designed to be able to provide three different gap widths in the vertical direction (the direction of the normal coolant flow) and two gap widths in the horizontal direction. It is expected that the vertical bypass flow will range from laminar to transitional to turbulent flow for the different gap widths that will be available.
Waliser, D; Sperber, K; Hendon, H; Kim, D; Maloney, E; Wheeler, M; Weickmann, K; Zhang, C; Donner, L; Gottschalck, J; Higgins, W; Kang, I; Legler, D; Moncrieff, M; Schubert, S; Stern, W; Vitart, F; Wang, B; Wang, W; Woolnough, S
2008-06-02
The Madden-Julian Oscillation (MJO) interacts with, and influences, a wide range of weather and climate phenomena (e.g., monsoons, ENSO, tropical storms, mid-latitude weather), and represents an important, and as yet unexploited, source of predictability at the subseasonal time scale. Despite the important role of the MJO in our climate and weather systems, current global circulation models (GCMs) exhibit considerable shortcomings in representing this phenomenon. These shortcomings have been documented in a number of multi-model comparison studies over the last decade. However, diagnosis of model performance has been challenging, and model progress has been difficult to track, due to the lack of a coherent and standardized set of MJO diagnostics. One of the chief objectives of the US CLIVAR MJO Working Group is the development of observation-based diagnostics for objectively evaluating global model simulations of the MJO in a consistent framework. Motivation for this activity is reviewed, and the intent and justification for a set of diagnostics is provided, along with specification for their calculation, and illustrations of their application. The diagnostics range from relatively simple analyses of variance and correlation, to more sophisticated space-time spectral and empirical orthogonal function analyses. These diagnostic techniques are used to detect MJO signals, to construct composite life-cycles, to identify associations of MJO activity with the mean state, and to describe interannual variability of the MJO.
Computational Modeling of the Stability of Crevice Corrosion of Wetted SS316L
F. Cui; F.J. Presuel-Moreno; R.G. Kelly
2006-04-17
The stability of localized corrosion sites on SS 316L exposed to atmospheric conditions was studied computationally. The localized corrosion system was decoupled computationally by considering the wetted cathode and the crevice anode separately and linking them via a constant potential boundary condition at the mouth of the crevice. The potential of interest for stability was the repassivation potential. The limitations on the cathode's capacity that are inherent in the restricted geometry were assessed in terms of their dependence on physical and electrochemical parameters. Physical parameters studied included temperature, electrolyte layer thickness, solution conductivity, and the size of the cathode, as well as the crevice gap for the anode. The current demand of the crevice was determined assuming a constant crevice solution composition that simulates the critical crevice solution as described in the literature. An analysis of variance showed that the solution conductivity and the length of the cathode were the most important parameters in determining the total cathodic current capacity of the external surface. A semi-analytical equation was derived for the total current from a restricted geometry held at a constant potential at one end. The equation was able to reproduce all the model computation results, both for the wetted external cathode and for the crevice, and gives a good explanation of the effects of physicochemical and kinetic parameters.
Prideaux, B.R.; Bayne, J.W.
1994-12-31
Obtaining the quality seismic data necessary to answer key exploration questions in the Altiplano Basin of Bolivia necessitated the use of turbo-charged vibrators, ground-force control electronics, and state-of-the-art processing techniques. Overcoming the structural complexity of the region, including steep surface dips (averaging 45°-50°), even steeper subsurface dips adjacent to areas of near-flat dip, and substantial surface variations, required optimal recording and processing parameters. A long far offset (3056.75 m) and a close trace spacing (12.5 m) were needed to acquire the most reliable data. Seven-second records were also recorded to ensure that information was acquired at depth. Several other factors contributed to the acquisition success of the project. A field computer system was used to quickly process brute and enhanced brute stacks, which provided greater quality control and allowed in-field adjustments to optimize the acquisition parameters. Additionally, the processing of the data was able to minimize numerous problems. There was high variance in the recorded data quality, mainly due to surface and near-surface conditions (statics), as well as a fairly high degree of background noise throughout. These noise problems ultimately determined the processing sequence that was used. Some processes that were initially proposed degraded rather than enhanced the interpretability of the seismic data.
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte Carlo-deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3 to 10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained here, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
A Semi-Preemptive Garbage Collector for Solid State Drives
Lee, Junghee; Kim, Youngjae; Shipman, Galen M; Oral, H Sarp; Wang, Feiyi; Kim, Jongman
2011-01-01
NAND flash memory is a preferred storage medium for various platforms ranging from embedded systems to enterprise-scale systems. Flash devices have no mechanical moving parts and provide low-latency access. They also require less power than rotating media. Unlike hard disks, flash devices use out-of-place updates, and they require a garbage collection (GC) process to reclaim invalid pages and create free blocks. This GC process is a major cause of performance degradation when it runs concurrently with other I/O operations, as internal bandwidth is consumed to reclaim the invalid pages. The invocation of the GC process is generally governed by a low watermark on free blocks and other internal device metrics that different workloads meet at different intervals. This results in I/O performance that is highly dependent on workload characteristics. In this paper, we examine the GC process and propose a semi-preemptive GC scheme that can preempt ongoing GC processing and service pending I/O requests in the queue. Moreover, we further enhance flash performance by pipelining internal GC operations and merging them with pending I/O requests whenever possible. Our experimental evaluation of this semi-preemptive GC scheme with realistic workloads demonstrates both improved performance and reduced performance variability. Write-dominant workloads show up to a 66.56% improvement in average response time with an 83.30% reduction in response-time variance compared to the non-preemptive GC scheme.
Tuor, N. R.; Schubert, A. L.
2002-02-26
Safely accelerating the closure of Rocky Flats to 2006 is a goal shared by many: the State of Colorado, the communities surrounding the site, the U.S. Congress, the Department of Energy, Kaiser-Hill and its team of subcontractors, the site's employees, and taxpayers across the country. On June 30, 2000, Kaiser-Hill (KH) submitted to the Department of Energy (DOE), KH's plan to achieve closure of Rocky Flats by December 15, 2006, for a remaining cost of $3.96 billion (February 1, 2000, to December 15, 2006). The Closure Project Baseline (CPB) is the detailed project plan for accomplishing this ambitious closure goal. This paper will provide a status report on the progress being made toward the closure goal. This paper will: provide a summary of the closure contract completion criteria; give the current cost and schedule variance of the project and the status of key activities; detail important accomplishments of the past year; and discuss the challenges ahead.
Performance of internal covariance estimators for cosmic shear correlation functions
Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.
2015-12-31
Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large-scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators, and find that most estimators, such as jackknife or sub-sample covariance, can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can recover a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.
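The delete-one jackknife covariance estimator central to this analysis can be sketched generically. Here the "statistics" are the mean and sample variance of synthetic data rather than shear correlation functions, but the estimator, (n-1)/n times the sum of outer products of the leave-one-out deviations, is the standard form.

```python
import random

def jackknife_cov(data, stats):
    # stats: list of functions, each mapping a sample to a scalar statistic.
    n = len(data)
    reps = []
    for i in range(n):
        sub = data[:i] + data[i + 1:]        # delete-one resample
        reps.append([f(sub) for f in stats])
    k = len(stats)
    means = [sum(r[j] for r in reps) / n for j in range(k)]
    # Jackknife covariance: (n-1)/n * sum of outer products of deviations.
    cov = [[(n - 1) / n *
            sum((r[a] - means[a]) * (r[b] - means[b]) for r in reps)
            for b in range(k)] for a in range(k)]
    return cov

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

random.seed(2)
data = [random.gauss(0, 1) for _ in range(100)]   # synthetic "measurements"
cov = jackknife_cov(data, [mean, var])
print(cov[0][0] > 0, cov[1][1] > 0)
```

In the cosmic shear application, `data` becomes sub-regions of the survey footprint and `stats` the binned two-point correlation functions, which is where the bias/variance trade-off studied in the paper arises.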
Trace metal levels and partitioning in Wisconsin rivers: Results of background trace metals study
Shafer, M.M.; Overdier, J.T.; Armstrong, D.E.; Hurley, J.P.; Webb, D.A.
1994-12-31
Levels of total and filtrable Ag, Al, Cd, Cu, Pb, and Zn in 41 Wisconsin rivers draining watersheds of distinct, homogeneous characteristics (land use/cover, soil type, surficial geology) were quantified. Levels, fluxes, and yields of trace metals are interpreted in terms of the principal geochemical controls. The study samples were also used to evaluate the capability of modern ICP-MS techniques for quantifying 'background' levels of metals. Order-of-magnitude variations in the level of a given metal between sites were measured. This large natural variance reflects the influences of soil type, dissolved organic matter (DOC), ionic strength, and suspended particulate matter (SPM) on metal levels. Significant positive correlations between DOC levels and filtrable metal concentrations were observed, demonstrating the important role that DOC plays in metal speciation and behavior. Systematic, chemically consistent differences in behavior between the metals are evident, with partition coefficients (K_d) and the fraction in particulate forms ranking in the order Al > Pb > Zn > Cr > Cd > Cu. Total metal yields correlate well with SPM yields, especially for highly partitioned elements, whereas filtrable metal yields reflect the interplay of partitioning and water yield. The State of Wisconsin will use these data in a re-evaluation of regulatory limits and in the development of water-effects-ratio criteria.
Hazardous waste identification: A guide to changing regulations
Stults, R.G. )
1993-03-01
The Resource Conservation and Recovery Act (RCRA) was enacted in 1976 and amended in 1984 by the Hazardous and Solid Waste Amendments (HSWA). Since then, federal regulations have generated a profusion of terms to identify and describe hazardous wastes. Regulations that define and govern the management of hazardous wastes are codified in Title 40 of the Code of Federal Regulations, 'Protection of the Environment'. Title 40 regulations are divided into chapters, subchapters, and parts. To be defined as hazardous, a waste must first satisfy the definition of solid waste: any discarded material not specifically excluded from regulation or granted a regulatory variance by the EPA Administrator. Some wastes and other materials have been identified as non-hazardous and are listed in 40 CFR 261.4(a) and 261.4(b). Certain wastes that satisfy the definition of hazardous waste are nevertheless excluded from regulation as hazardous if they meet specific criteria. Definitions and criteria for their exclusion are found in 40 CFR 261.4(c)-(f) and 40 CFR 261.5.
Goldman, A.S.
1985-05-01
This report documents and reviews the measurement control program (MCP) over a 27-month period for four solution assay instruments (SAIs) at the facility. SAI measurement data collected during the period January 1982 through March 1984 were analyzed. The sources of these data included computer listings of measurements originating from operator entries on computer terminals, logbook entries of measurements transcribed by operators, and computer listings of measurements recorded internally by the instruments. Data were also obtained from control charts that are maintained as part of the MCP. As a result of our analyses, we observed agreement between propagated and historical variances and concluded that the instruments were functioning properly from a precision standpoint. We noticed small, persistent biases indicating slight instrument inaccuracies. We suggest that statistical tests for bias be incorporated in the MCP on a monthly basis and that, if an instrument's bias is significantly greater than zero, the instrument undergo maintenance. We propose that the weekly precision test be replaced by a daily test to provide more timely detection of possible problems. We observed that one instrument showed a trend of increasing bias over the past six months and recommend that a randomness test be incorporated to detect such trends in a more timely fashion. We detected operator transcription errors during data transmission and advise direct instrument transmission to the MCP to eliminate these errors. A transmission error rate, based on those errors that affected decisions in the MCP, was estimated at 1%. 11 refs., 10 figs., 4 tabs.
Narlesky, Joshua Edward; Kelly, Elizabeth J.
2015-09-10
This report documents the new PG calibration regression equations, which incorporate data that have become available since Revision 1 of "A Calibration to Predict the Concentrations of Impurities in Plutonium Oxide by Prompt Gamma Analysis" was issued [3]. The calibration equations are based on a weighted least squares (WLS) approach to the regression. The WLS method gives each data point its proper amount of influence over the parameter estimates. This yields two big advantages: more precise parameter estimates and better, more defensible estimates of uncertainties. The WLS approach makes sense both statistically and experimentally because the variances increase with concentration, and there are physical reasons why the higher measurements are less reliable and should be less influential. The new magnesium calibration includes a correction for sodium and separate calibration equations for items with and without chlorine. These additional equations allow better predictions and smaller uncertainties for sodium in materials with and without chlorine. Chlorine and sodium have separate equations for RICH materials; again, these equations give better predictions and smaller uncertainties for chlorine and sodium in RICH materials.
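The WLS idea, weighting each point by the inverse of its error variance so noisy high-concentration points carry less influence, can be sketched for a straight-line calibration. The concentrations, responses, and variance model below are hypothetical, not the report's PG data.

```python
def wls_line(xs, ys, ws):
    # Solve the 2x2 weighted normal equations for y = b0 + b1*x.
    s = sum(ws)
    sx = sum(w * x for w, x in zip(ws, xs))
    sy = sum(w * y for w, y in zip(ws, ys))
    sxx = sum(w * x * x for w, x in zip(ws, xs))
    sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    det = s * sxx - sx * sx
    b0 = (sxx * sy - sx * sxy) / det
    b1 = (s * sxy - sx * sy) / det
    return b0, b1

xs = [1.0, 2.0, 4.0, 8.0, 16.0]      # hypothetical concentrations
ys = [1.1, 2.0, 4.3, 7.6, 16.9]      # hypothetical instrument responses
# Weight = 1/variance; here variance is assumed proportional to x^2,
# mirroring the report's observation that variance grows with concentration.
ws = [1.0 / x ** 2 for x in xs]

b0, b1 = wls_line(xs, ys, ws)
print(round(b0, 2), round(b1, 2))
```

With these weights the low-concentration points dominate the fit, which is exactly the "proper amount of influence" the report attributes to WLS.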
Stepan, D.J.; Fraley, R.H.; Charlton, D.S.
1994-02-01
The release of elemental mercury into the environment from manometers that are used in the measurement of natural gas flow through pipelines has created a potentially serious problem for the gas industry. Regulations, particularly the Land Disposal Restrictions (LDR), have had a major impact on gas companies dealing with mercury-contaminated soils. After the May 8, 1993, LDR deadline extension, gas companies were required to treat mercury-contaminated soils by designated methods to specified levels prior to disposal in landfills. In addition, gas companies must comply with various state regulations that are often more stringent than the LDR. The gas industry is concerned that the LDRs do not allow enough viable options for dealing with their mercury-related problems. The US Environmental Protection Agency has specified the Best Demonstrated Available Technology (BDAT) as thermal roasting or retorting. However, the Agency recognizes that treatment of certain wastes to the LDR standards may not always be achievable and that the BDAT used to set the standard may be inappropriate. Therefore, a Treatability Variance Process for remedial actions was established (40 Code of Federal Regulations 268.44) for the evaluation of alternative remedial technologies. This report presents evaluations of demonstrations for three different remedial technologies: a pilot-scale portable thermal treatment process, a pilot-scale physical separation process in conjunction with chemical leaching, and a bench-scale chemical leaching process.
The Role of Landscape in the Distribution of Deer-Vehicle Collisions in South Mississippi
McKee, Jacob J; Cochran, David
2012-01-01
Deer-vehicle collisions (DVCs) have a negative impact on the economy, traffic safety, and the general well-being of otherwise healthy deer. To mitigate DVCs, it is imperative to gain a better understanding of factors that play a role in their spatial distribution. Much of the existing research on DVCs in the United States has been inconclusive, pointing to a variety of causal factors that seem more specific to study site and region than indicative of broad patterns. Little DVC research has been conducted in the southern United States, making the region particularly important with regard to this issue. In this study, we evaluate landscape factors that contributed to the distribution of 347 DVCs that occurred in Forrest and Lamar Counties of south Mississippi, from 2006 to 2009. Using nearest-neighbor and discriminant analysis, we demonstrate that DVCs in south Mississippi are not random spatial phenomena. We also develop a classification model that identified seven landscape metrics, explained 100% of the variance, and could distinguish DVCs from control sites with an accuracy of 81.3%.
CHARACTERIZATION OF TRANSITIONS IN THE SOLAR WIND PARAMETERS
Perri, S.; Balogh, A. E-mail: a.balogh@imperial.ac.u
2010-02-20
The distinction between fast and slow solar wind streams and the dynamically evolved interaction regions is reflected in the characteristic fluctuations of both the solar wind and the embedded magnetic field. High-resolution magnetic field data from the Ulysses spacecraft have been analyzed. The observations show rapid variations in the magnetic field components and in the magnetic field strength, suggesting a structured nature of the solar wind at small scales. The typical sizes of fluctuations cover a broad range. If translated to the solar surface, the scales span from the size of granules (~10^3 km) and supergranules (~10^4 km) on the Sun down to ~10^2 km and less. The properties of the short-time structures change in the different types of solar wind. While fluctuations in fast streams are more homogeneous, slow streams present a bursty behavior in the magnetic field variances, and the regions of transition are characterized by high levels of power in narrow structures around the transitions. The probability density functions of the magnetic field increments at several scales reveal a higher level of intermittency in the mixed streams, which is related to the presence of well-localized features. It is concluded that, apart from the differences in the nature of fluctuations in flows of different coronal origin, there is a small-scale structuring that depends on the origin of the streams themselves but is also related to a bursty generation of the fluctuations.
Guo, Zhun; Wang, Minghuai; Qian, Yun; Larson, Vincent E.; Ghan, Steven J.; Ovchinnikov, Mikhail; Bogenschutz, Peter; Zhao, Chun; Lin, Guang; Zhou, Tianjun
2014-09-01
In this study, we investigate the sensitivity of simulated shallow cumulus and stratocumulus clouds to selected tunable parameters of Cloud Layers Unified by Binormals (CLUBB) in the single-column version of the Community Atmosphere Model version 5 (SCAM5). A quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space, and a generalized linear model is adopted to study the responses of the simulated cloud fields to the tunable parameters. One stratocumulus and two shallow convection cases are configured at both coarse and fine vertical resolutions. Our results show that most of the variance in the simulated cloud fields can be explained by a small number of tunable parameters. The parameters related to the Newtonian and buoyancy-damping terms of the total water flux are found to be the most influential for stratocumulus. For shallow cumulus, the most influential parameters are those related to the skewness of the vertical velocity, reflecting the strong coupling between cloud properties and dynamics in this regime. The influential parameters in the stratocumulus case are sensitive to the choice of vertical resolution, while little sensitivity is found for the shallow convection cases, as the eddy mixing length (or dissipation time scale) plays a more important role and depends more strongly on the vertical resolution in stratocumulus than in shallow convection. The influential parameters remain almost unchanged when the number of tunable parameters increases from 16 to 35. This study improves understanding of CLUBB's behavior associated with parameter uncertainties.
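The sample-then-attribute workflow above can be sketched end to end. This is a toy stand-in: plain pseudo-random sampling replaces the QMC sequence, a three-parameter analytic function replaces the SCAM5 runs, and parameters are ranked by the variance fraction each explains under a one-predictor linear response model (all names and coefficients are hypothetical).

```python
import random

random.seed(3)

def toy_cloud_model(p1, p2, p3):
    # Hypothetical response: p1 dominates, p2 is weak, p3 has no effect.
    return 3.0 * p1 + 0.5 * p2 + random.gauss(0, 0.1)

# Sample the (unit-cube) parameter space; a QMC sequence would go here.
samples = [(random.random(), random.random(), random.random())
           for _ in range(500)]
responses = [toy_cloud_model(*s) for s in samples]

def r_squared(xs, ys):
    # Variance fraction explained by a one-predictor linear fit.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

scores = [r_squared([s[j] for s in samples], responses) for j in range(3)]
print(scores.index(max(scores)))   # index of the most influential parameter
```

Ranking parameters by explained variance in this way is how a small subset of the 16 (or 35) tunable parameters is identified as controlling most of the simulated cloud response.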
Sailer, S.J.
1996-08-01
This Quality Assurance Project Plan (QAPjP) specifies the quality of data necessary, and the characterization techniques employed, at the Idaho National Engineering Laboratory (INEL) to meet the objectives of the Department of Energy (DOE) Waste Isolation Pilot Plant (WIPP) Transuranic Waste Characterization Quality Assurance Program Plan (QAPP) requirements. This QAPjP is written to conform to the requirements and guidelines specified in the QAPP and the associated documents referenced in the QAPP. It is one of a set of five interrelated QAPjPs that describe the INEL Transuranic Waste Characterization Program (TWCP); each of the five facilities participating in the TWCP has a QAPjP that describes the activities applicable to that particular facility. This QAPjP describes the roles and responsibilities of the Idaho Chemical Processing Plant (ICPP) Analytical Chemistry Laboratory (ACL) in the TWCP. Data quality objectives and quality assurance objectives are explained. Sample analysis procedures and associated quality assurance measures are also addressed; these include sample chain of custody; data validation, usability, and reporting; documentation and records; audits and assessments; laboratory QC samples; and instrument testing, inspection, maintenance, and calibration. Finally, administrative quality control measures, such as document control, control of nonconformances, variances, and QA status reporting, are described.
Non-parametric transformation for data correlation and integration: From theory to practice
Datta-Gupta, A.; Xue, Guoping; Lee, Sang Heon
1997-08-01
The purpose of this paper is two-fold. First, we introduce the use of non-parametric transformations for correlating petrophysical data during reservoir characterization. Such transformations are completely data driven and do not require an a priori functional relationship between response and predictor variables, which is the case with traditional multiple regression. The transformations are very general, computationally efficient, and can easily handle mixed data types: for example, continuous variables such as porosity and permeability, and categorical variables such as rock type and lithofacies. The power of the non-parametric transformation techniques for data correlation is illustrated through synthetic and field examples. Second, we utilize these transformations to propose a two-stage approach for data integration during heterogeneity characterization. The principal advantages of our approach over traditional cokriging or cosimulation methods are: (1) it does not require a linear relationship between primary and secondary data, (2) it exploits the secondary information to its fullest potential by maximizing the correlation between the primary and secondary data, (3) it can be easily applied to cases where several types of secondary or soft data are involved, and (4) it significantly reduces variance function calculations and thus greatly facilitates non-Gaussian cosimulation. We demonstrate the data integration procedure using synthetic and field examples. The field example involves estimation of pore-footage distribution using well data and multiple seismic attributes.
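As a minimal illustration of a data-driven transformation, the rank-based normal-score transform below linearizes a monotone but strongly nonlinear porosity-permeability relation. This is a simple stand-in, not the paper's transformation algorithm, and the power-law data are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
porosity = rng.uniform(0.05, 0.3, 500)
# Synthetic permeability: strongly nonlinear (power-law) in porosity, with noise
permeability = 1e-3 * porosity ** 8 * np.exp(rng.normal(0, 0.2, 500))

def normal_score(v):
    """Rank-based (non-parametric) transform to standard-normal scores."""
    ranks = stats.rankdata(v)
    return stats.norm.ppf(ranks / (len(v) + 1))

r_raw = np.corrcoef(porosity, permeability)[0, 1]
r_ns = np.corrcoef(normal_score(porosity), normal_score(permeability))[0, 1]
print(r_raw, r_ns)  # correlation after transformation is markedly stronger
```

Because the transform depends only on ranks, no functional form relating the two variables has to be assumed, which is the key property the abstract emphasizes.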
Land Disposal Restrictions (LDR) program overview
Not Available
1993-04-01
The Hazardous and Solid Waste Amendments (HSWA) to the Resource Conservation and Recovery Act (RCRA) enacted in 1984 required the Environmental Protection Agency (EPA) to evaluate all listed and characteristic hazardous wastes according to a strict schedule and to develop requirements by which disposal of these wastes would be protective of human health and the environment. The implementing regulations for accomplishing this statutory requirement are established within the Land Disposal Restrictions (LDR) program. The LDR regulations (40 CFR Part 268) impose significant requirements on waste management operations and environmental restoration activities at DOE sites. For hazardous wastes restricted by statute from land disposal, EPA is required to set levels or methods of treatment that substantially reduce the waste`s toxicity or the likelihood that the waste`s hazardous constituents will migrate. Upon the specified LDR effective dates, restricted wastes that do not meet treatment standards are prohibited from land disposal unless they qualify for certain variances or exemptions. This document provides an overview of the LDR Program.
STUDIES IN ASTRONOMICAL TIME SERIES ANALYSIS. VI. BAYESIAN BLOCK REPRESENTATIONS
Scargle, Jeffrey D.; Norris, Jay P.; Jackson, Brad; Chiang, James
2013-02-20
This paper addresses the problem of detecting and characterizing local variability in time series and other forms of sequential data. The goal is to identify and characterize statistically significant variations while suppressing the inevitable corrupting observational errors. We present a simple nonparametric modeling technique and an algorithm implementing it, an improved and generalized version of Bayesian Blocks, that finds the optimal segmentation of the data in the observation interval. The structure of the algorithm allows it to be used in either a real-time trigger mode or a retrospective mode. Maximum likelihood or marginal posterior functions to measure model fitness are presented for events, binned counts, and measurements at arbitrary times with known error distributions. Problems addressed include those connected with data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multivariate time series data, analysis of variance, data on the circle, other data modes, and dispersed data. Simulations provide evidence that the detection efficiency for weak signals is close to a theoretical asymptotic limit derived by Arias-Castro et al. In the spirit of Reproducible Research, all of the code and data necessary to reproduce all of the figures in this paper are included as supplementary material.
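The dynamic-programming structure of the algorithm can be sketched for event (time-tagged) data. This follows the published O(N^2) recurrence with the empirical prior calibration, but it is a condensed sketch rather than the authors' released code.

```python
import numpy as np

def bayesian_blocks(t, p0=0.05):
    """Optimal piecewise-constant segmentation of event times t
    (dynamic-programming form of the Bayesian Blocks recurrence)."""
    t = np.sort(np.asarray(t, float))
    n = len(t)
    # Cell edges: midpoints between consecutive events, plus the endpoints
    edges = np.concatenate([[t[0]], 0.5 * (t[1:] + t[:-1]), [t[-1]]])
    block_length = t[-1] - edges
    # Empirical prior penalty per change point (false-alarm rate p0)
    ncp_prior = 4 - np.log(73.53 * p0 * n ** -0.478)
    best, last = np.zeros(n), np.zeros(n, int)
    for k in range(n):
        widths = block_length[: k + 1] - block_length[k + 1]
        counts = np.arange(k + 1, 0, -1)          # events in candidate blocks
        fitness = counts * (np.log(counts) - np.log(widths)) - ncp_prior
        fitness[1:] += best[:k]
        last[k] = np.argmax(fitness)
        best[k] = fitness[last[k]]
    # Recover change points by backtracking through the optimal partitions
    cps, k = [], n
    while k > 0:
        cps.append(k)
        k = last[k - 1]
    cps.append(0)
    return edges[np.array(cps[::-1])]

# Synthetic event list with a sharp rate change at t = 1
rng = np.random.default_rng(2)
events = np.concatenate([rng.uniform(0, 1, 50), rng.uniform(1, 1.2, 200)])
edges_opt = bayesian_blocks(events)
print(edges_opt)  # block edges; a change point should fall near t = 1
```

The retrospective mode shown here scans all possible last-block start points at each step; the trigger mode mentioned in the abstract simply stops as soon as the newest optimal partition acquires a fresh change point.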
Nair, Sinitha B.; Abraham, Anitha, E-mail: anithakklm@gmail.com; Philip, Rachel Reena; Pradeep, B.; Shripathi, T.; Ganesan, V., E-mail: vganesancsr@gmail.com
2014-10-15
Cadmium Lead Sulphide thin films with systematic variation in Cd/Pb ratio are prepared at 333 K by CBD, adjusting the reagent molarity, deposition time and pH. XRD exhibits a crystalline-amorphous transition as Cd% exceeds Pb%. AFM shows agglomeration of crystallites of size ∼50±5 nm. EDAX assesses the composition whereas XPS ascertains the ternary formation, with binding energies of Pb4f{sub 7/2} and 4f{sub 5/2}, Cd3d{sub 5/2} and 3d{sub 3/2} and S2p at 137.03, 141.606, 404.667, 412.133 and 160.218 eV respectively. The optical absorption spectra reveal the variance in the direct allowed band gaps, from 1.57 eV to 2.42 eV as the Cd/Pb ratio increases from 0.2 to 2.7, suggesting the possibility of band gap engineering in the n-type films.
MPACT Fast Neutron Multiplicity System Prototype Development
D.L. Chichester; S.A. Pozzi; J.L. Dolan; M.T. Kinlaw; S.J. Thompson; A.C. Kaplan; M. Flaska; A. Enqvist; J.T. Johnson; S.M. Watson
2013-09-01
This document serves as both an FY2013 End-of-Year and End-of-Project report on efforts that resulted in the design of a prototype fast neutron multiplicity counter leveraged upon the findings of previous project efforts. The prototype design includes 32 liquid scintillator detectors with cubic volumes 7.62 cm on a side, configured into 4 stacked rings of 8 detectors. Detector signal collection for the system is handled with a pair of Struck Innovative Systeme 16-channel digitizers controlled by in-house developed software with built-in multiplicity analysis algorithms. Initial testing and familiarization with the currently obtained prototype components is underway; however, full prototype construction is required for further optimization. Monte Carlo models of the prototype system were developed to estimate die-away and efficiency values. Analysis of these models resulted in the development of a software package capable of determining the effects of nearest-neighbor rejection methods for elimination of detector cross talk. A parameter study was performed using previously developed analytical methods for the estimation of assay mass variance for use as a figure-of-merit for system performance. A software package was developed to automate these calculations and ensure accuracy. The results of the parameter study show that the prototype fast neutron multiplicity counter design is very nearly optimized under the constraints of the parameter space.
Yu, Sungduk; Pritchard, Michael S.
2015-12-17
The effect of global climate model (GCM) time step, which also controls how frequently global and embedded cloud-resolving scales are coupled, is examined in the Superparameterized Community Atmosphere Model ver 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m^{2}) and longwave cloud forcing (~5 W/m^{2}) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune superparameterized GCMs.
Optimizing weak lensing mass estimates for cluster profile uncertainty
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Gruen, D.; Bernstein, G. M.; Lam, T. Y.; Seitz, S.
2011-09-11
Weak lensing measurements of cluster masses are necessary for calibrating mass-observable relations (MORs) to investigate the growth of structure and the properties of dark energy. However, the measured cluster shear signal varies at fixed mass M200m due to inherent ellipticity of background galaxies, intervening structures along the line of sight, and variations in the cluster structure due to scatter in concentrations, asphericity and substructure. We use N-body simulated halos to derive and evaluate a weak lensing circular aperture mass measurement Map that minimizes the mass estimate variance <(Map - M200m)2> in the presence of all these forms of variability. Depending on halo mass and observational conditions, the resulting mass estimator improves on Map filters optimized for circular NFW-profile clusters in the presence of uncorrelated large scale structure (LSS) about as much as the latter improve on an estimator that only minimizes the influence of shape noise. Optimizing for uncorrelated LSS while ignoring the variation of internal cluster structure puts too much weight on the profile near the cores of halos, and under some circumstances can even be worse than not accounting for LSS at all. As a result, we discuss the impact of variability in cluster structure and correlated structures on the design and performance of weak lensing surveys intended to calibrate cluster MORs.
Fruth, T.; Cabrera, J.; Csizmadia, Sz.; Eigmueller, P.; Erikson, A.; Kirste, S.; Pasternacki, T.; Rauer, H.; Titz-Weider, R.; Kabath, P.; Chini, R.; Lemke, R.; Murphy, M.
2012-06-15
The CoRoT field LRa02 has been observed with the Berlin Exoplanet Search Telescope II (BEST II) during the southern summer 2007/2008. A first analysis of stellar variability led to the publication of 345 newly discovered variable stars. Now, a deeper analysis of this data set was used to optimize the variability search procedure. Several methods and parameters have been tested in order to improve the selection process compared to the widely used J index for variability ranking. This paper describes an empirical approach to treat systematic trends in photometric data based upon the analysis of variance statistics that can significantly decrease the rate of false detections. Finally, the process of reanalysis and method improvement has virtually doubled the number of variable stars compared to the first analysis by Kabath et al. A supplementary catalog of 272 previously unknown periodic variables plus 52 stars with suspected variability is presented. Improved ephemerides are given for 19 known variables in the field. In addition, the BEST II results are compared with CoRoT data and its automatic variability classification.
SIMPLIFIED PHYSICS BASED MODELS: RESEARCH TOPICAL REPORT ON TASK #2
Mishra, Srikanta; Ganesh, Priya
2014-10-31
We present a simplified-physics based approach, where only the most important physical processes are modeled, to develop and validate simplified predictive models of CO2 sequestration in deep saline formations. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. We use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of the fractional-flow curve, the variance of layer permeability values, and the nature of the vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Similar correlations are also developed to predict the average pressure within the injection reservoir, and the pressure buildup within the caprock.
On the equivalence of the RTI and SVM approaches to time correlated analysis
Croft, S.; Favalli, A.; Henzlova, D.; Santi, P. A.
2014-11-21
Recently two papers on how to perform passive neutron auto-correlation analysis on time gated histograms formed from pulse train data, generically called time correlation analysis (TCA), have appeared in this journal [1,2]. For those of us working in international nuclear safeguards these treatments are of particular interest because passive neutron multiplicity counting is a widely deployed technique for the quantification of plutonium. The purpose of this letter is to show that the skewness-variance-mean (SVM) approach developed in [1] is equivalent in terms of assay capability to the random trigger interval (RTI) analysis laid out in [2]. Mathematically we could also use other numerical ways to extract the time correlated information from the histogram data, including for example what we might call the mean, mean square, and mean cube approach. The important feature however, from the perspective of real world applications, is that the correlated information extracted is the same, and subsequently gets interpreted in the same way based on the same underlying physics model.
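The "mean, mean square, and mean cube" route mentioned above amounts to taking the first three central moments of the time-gated histogram. A minimal sketch with a made-up multiplicity histogram (the counts are illustrative, not measured data):

```python
import numpy as np

# Hypothetical histogram: h[n] = number of gates that contained n counts
h = np.array([120, 300, 260, 180, 90, 35, 12, 3])
n = np.arange(len(h))
total = h.sum()

mean = (n * h).sum() / total
var = ((n - mean) ** 2 * h).sum() / total        # second central moment
third = ((n - mean) ** 3 * h).sum() / total      # third central moment (skewness numerator)

print(mean, var, third)
```

The SVM and RTI treatments differ in how these histogram moments are mapped onto correlated count rates, but, as the letter argues, the information content extracted from the histogram is the same.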
Sensitivity Analysis of OECD Benchmark Tests in BISON
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.; Williamson, Richard
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
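The correlation part of such an analysis is straightforward to sketch. The two input parameters and the temperature response below are hypothetical stand-ins, not the benchmark's actual 17 inputs and 24 responses.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical stand-ins for two sampled input parameters
gap_conductance = rng.uniform(0.5, 1.5, 300)
fuel_conductivity = rng.uniform(0.8, 1.2, 300)
# Toy centerline-temperature response, dominated by fuel conductivity
centerline_temp = (1200 + 400 / fuel_conductivity
                   + 30 * gap_conductance + rng.normal(0, 5, 300))

results = {}
for name, x in [("gap_conductance", gap_conductance),
                ("fuel_conductivity", fuel_conductivity)]:
    r_p, _ = stats.pearsonr(x, centerline_temp)   # linear association
    r_s, _ = stats.spearmanr(x, centerline_temp)  # monotone (rank) association
    results[name] = (r_p, r_s)
    print(f"{name}: Pearson={r_p:+.2f} Spearman={r_s:+.2f}")
```

Sobol' indices go further by apportioning output variance (including interactions) among inputs, which requires a dedicated sampling design rather than the plain correlation scan shown here.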
An Evaluation of Monte Carlo Simulations of Neutron Multiplicity Measurements of Plutonium Metal
Mattingly, John; Miller, Eric; Solomon, Clell J. Jr.; Dennis, Ben; Meldrum, Amy; Clarke, Shaun; Pozzi, Sara
2012-06-21
In January 2009, Sandia National Laboratories conducted neutron multiplicity measurements of a polyethylene-reflected plutonium metal sphere. Over the past 3 years, those experiments have been collaboratively analyzed using Monte Carlo simulations conducted by University of Michigan (UM), Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and North Carolina State University (NCSU). Monte Carlo simulations of the experiments consistently overpredict the mean and variance of the measured neutron multiplicity distribution. This paper presents a sensitivity study conducted to evaluate the potential sources of the observed errors. MCNPX-PoliMi simulations of plutonium neutron multiplicity measurements exhibited systematic over-prediction of the neutron multiplicity distribution. The over-prediction tended to increase with increasing multiplication. MCNPX-PoliMi had previously been validated against only very low multiplication benchmarks. We conducted sensitivity studies to try to identify the cause(s) of the simulation errors; we eliminated the potential causes we identified, except for Pu-239 {bar {nu}}. A very small change (-1.1%) in the Pu-239 {bar {nu}} dramatically improved the accuracy of the MCNPX-PoliMi simulation for all 6 measurements. This observation is consistent with the trend observed in the bias exhibited by the MCNPX-PoliMi simulations: a very small error in {bar {nu}} is 'magnified' by increasing multiplication. We applied a scalar adjustment to Pu-239 {bar {nu}} (independent of neutron energy); an adjustment that depends on energy is probably more appropriate.
Three-dimensional hydrodynamics of the deceleration stage in inertial confinement fusion
Weber, C. R.; Clark, D. S.; Cook, A. W.; Eder, D. C.; Haan, S. W.; Hammel, B. A.; Hinkel, D. E.; Jones, O. S.; Marinak, M. M.; Milovich, J. L.; Patel, P. K.; Robey, H. F.; Salmonson, J. D.; Sepke, S. M.; Thomas, C. A.
2015-03-15
The deceleration stage of inertial confinement fusion implosions is modeled in detail using three-dimensional simulations designed to match experiments at the National Ignition Facility. In this final stage of the implosion, shocks rebound from the center of the capsule, forming the high-temperature, low-density hot spot and slowing the incoming fuel. The flow field that results from this process is highly three-dimensional and influences many aspects of the implosion. The interior of the capsule has high-velocity motion, but viscous effects limit the range of scales that develop. The bulk motion of the hot spot shows qualitative agreement with experimental velocity measurements, while the variance of the hot spot velocity would broaden the DT neutron spectrum, increasing the inferred temperature by 400-800 eV. Jets of ablator material are broken apart and redirected as they enter this dynamic hot spot. Deceleration stage simulations using two fundamentally different rad-hydro codes are compared and the flow field is found to be in good agreement.
Localization-Delocalization Transition in a System of Quantum Kicked Rotors
Creffield, C.E.; Hur, G.; Monteiro, T.S. [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom)
2006-01-20
The quantum dynamics of atoms subjected to pairs of closely spaced {delta} kicks from optical potentials are shown to be quite different from the well-known paradigm of quantum chaos, the single {delta}-kick system. We find the unitary matrix has a new oscillating band structure corresponding to a cellular structure of phase space and observe a spectral signature of a localization-delocalization transition from one cell to several. We find that the eigenstates have localization lengths which scale with a fractional power L{approx}({Dirac_h}/2{pi}){sup -0.75} and obtain a regime of near-linear spectral variances which approximate the 'critical statistics' relation {sigma}{sub 2}(L){approx_equal}{chi}L{approx_equal}(1/2)(1-{nu})L, where {nu}{approx_equal}0.75 is related to the fractal classical phase-space structure. The origin of the {nu}{approx_equal}0.75 exponent is analyzed.
Convergence of statistical moments of particle density time series in scrape-off layer plasmas
Kube, R.; Garcia, O. E.
2015-01-15
Particle density fluctuations in the scrape-off layer of magnetically confined plasmas, as measured by gas-puff imaging or Langmuir probes, are modeled as the realization of a stochastic process in which a superposition of pulses with a fixed shape and exponentially distributed waiting times and amplitudes represents the radial motion of blob-like structures. With an analytic formulation of the process at hand, we derive expressions for the mean squared error on estimators of sample mean and sample variance as a function of sample length, sampling frequency, and the parameters of the stochastic process. Noting that the probability distribution function of a particularly relevant stochastic process is given by the gamma distribution, we derive estimators for sample skewness and kurtosis and expressions for the mean squared error on these estimators. Numerically generated synthetic time series are used to verify the proposed estimators, the sample length dependency of their mean squared errors, and their performance. We find that estimators for sample skewness and kurtosis based on the gamma distribution are more precise and more accurate than common estimators based on the method of moments.
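Gamma-based estimators of this kind have a particularly simple method-of-moments form: with the shape parameter estimated as k = mean^2/variance, the gamma distribution implies skewness 2/sqrt(k) and flatness 3 + 6/k. A sketch on synthetic gamma samples (not the full stochastic-process model of the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
shape = 2.0                        # true gamma shape parameter
x = rng.gamma(shape, 1.0, 100_000)

m, v = x.mean(), x.var()
k_hat = m * m / v                  # method-of-moments shape estimate
skew_gamma = 2.0 / np.sqrt(k_hat)  # gamma-implied skewness estimator
kurt_gamma = 3.0 + 6.0 / k_hat     # gamma-implied flatness (kurtosis) estimator

print(k_hat, skew_gamma, kurt_gamma)
```

For shape 2 the true values are skewness 2/sqrt(2) and kurtosis 6; because the estimators only involve the first two sample moments, they avoid the large sampling error of direct third- and fourth-moment estimates, which is the precision advantage the abstract reports.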
Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method
Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.
2015-01-01
The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.
Rising, M. E.; Prinja, A. K.
2012-07-01
A critical neutron transport problem with random material properties is introduced. The total cross section and the average neutron multiplicity are assumed to be uncertain, characterized by the mean and variance with a log-normal distribution. The average neutron multiplicity and the total cross section are assumed to be uncorrelated, and the material properties for differing materials are also assumed to be uncorrelated. The principal component analysis method is used to decompose the covariance matrix into eigenvalues and eigenvectors, and then 'realizations' of the material properties can be computed. A simple Monte Carlo brute-force sampling of the decomposed covariance matrix is employed to obtain a benchmark result for each test problem. In order to save computational time and to characterize the moments and probability density function of the multiplication factor, the polynomial chaos expansion method is employed along with the stochastic collocation method. A Gauss-Hermite quadrature set is convolved into a multidimensional tensor product quadrature set and is successfully used to compute the polynomial chaos expansion coefficients of the multiplication factor. Finally, for a particular critical fuel pin assembly the appropriate number of random variables and polynomial expansion order are investigated. (authors)
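The decomposition-and-sampling step can be sketched as follows. The two-material log-normal covariance is a made-up example, and numpy's symmetric eigendecomposition plays the role of the principal component analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical log-normal uncertainty: mean and covariance of log(total cross section)
mu = np.log([1.0, 0.8])                      # two materials
cov = np.diag([0.02, 0.05])                  # uncorrelated, as in the abstract

# Principal component decomposition of the covariance matrix
vals, vecs = np.linalg.eigh(cov)

# Realizations: mu + V sqrt(Lambda) xi, with xi ~ N(0, I)
xi = rng.standard_normal((10_000, 2))
log_xs = mu + (xi * np.sqrt(vals)) @ vecs.T
xs = np.exp(log_xs)                          # log-normal cross-section samples

print(xs.mean(axis=0))                       # ≈ exp(mu + diag(cov)/2)
```

Each row of `xs` is one "realization" of the material properties; the brute-force Monte Carlo benchmark in the abstract would run the transport solve once per row, while polynomial chaos with stochastic collocation replaces those random draws with quadrature nodes in the same xi space.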
One-electron reduced density matrices of strongly correlated harmonium atoms
Cioslowski, Jerzy
2015-03-21
Explicit asymptotic expressions are derived for the reduced one-electron density matrices (the 1-matrices) of strongly correlated two- and three-electron harmonium atoms in the ground and first excited states. These expressions, which are valid at the limit of small confinement strength {omega}, yield electron densities and kinetic energies in agreement with the published values. In addition, they reveal the {omega}{sup 5/6} asymptotic scaling of the exchange components of the electron-electron repulsion energies that differs from the {omega}{sup 2/3} scaling of their Coulomb and correlation counterparts. The natural orbitals of the totally symmetric ground state of the two-electron harmonium atom are found to possess collective occupancies that follow a mixed power/Gaussian dependence on the angular momentum, at variance with the simple power-law prediction of Hill's asymptotics. Providing rigorous constraints on energies as functionals of 1-matrices, these results are expected to facilitate development of approximate implementations of the density matrix functional theory and ensure their proper description of strongly correlated systems.
D'Addato, Sergio; Spadaro, Maria Chiara; Luches, Paola; Valeri, Sergio; Grillo, Vincenzo; Rotunno, Enzo; Roldan Gutierrez, Manuel A.; Pennycook, Stephen J.; Ferretti, Anna Maria; Capetti, Elena; et al
2015-01-01
Films of magnetic Ni@NiO core–shell nanoparticles (NPs, core diameter d ≅ 12 nm, nominal shell thickness variable between 0 and 6.5 nm) obtained with sequential layer deposition were investigated, to gain insight into the relationships between shell thickness/morphology, core-shell interface, and magnetic properties. Different values of NiO shell thickness ts could be obtained while keeping the Ni core size fixed, at variance with conventional oxidation procedures where the oxide shell is grown at the expense of the core. Chemical composition and morphology of the as-produced samples and structural features of the Ni/NiO interface were investigated with x-ray photoelectron spectroscopy and microscopy (scanning electron microscopy, transmission electron microscopy) techniques, and related with results from magnetic measurements obtained with a superconducting quantum interference device. The effect of the shell thickness on the magnetic properties could be studied. The exchange bias (EB) field Hbias is small and almost constant for ts up to 1.6 nm; then it rapidly grows, with no sign of saturation. This behavior is clearly related to the morphology of the top NiO layer, and is mostly due to the thickness dependence of the NiO anisotropy constant. The ability to tune the EB effect by varying the thickness of the last NiO layer represents a step towards the rational design and synthesis of core–shell NPs with desired magnetic properties.
Single-qubit tests of Bell-like inequalities
Zela, F. de
2007-10-15
This paper discusses some tests of Bell-like inequalities not requiring entangled states. The proposed tests are based on consecutive measurements on a single qubit. Available hidden-variable models for a single qubit [see, e.g., J. S. Bell, Rev. Mod. Phys. 38, 447 (1966)] reproduce the predictions of quantum mechanics and hence violate the Bell-like inequalities addressed in this paper. It is shown how this fact is connected with the state 'collapse' and with its random nature. Thus, it becomes possible to test truly realistic and deterministic hidden-variable models. In this way, it can be shown that a hidden-variable model should entail at least one of the following features: (i) nonlocality, (ii) contextuality, or (iii) discontinuous measurement-dependent probability functions. The last two features are put to the test with the experiments proposed in this paper. A hidden-variable model that is noncontextual and deterministic would be at variance with some predictions of quantum mechanics. Furthermore, the proposed tests are more likely to be loophole-free, as compared to former ones.
Testing of nuclear grade lubricants and their effects on A540 B24 and A193 B7 bolting materials
Czajkowski, C.J.
1985-01-01
An investigation was performed on eleven lubricants commonly used by the nuclear power industry. The investigation included EDS analysis of the lubricants, notched-tensile constant extension rate testing of bolting materials with the lubricants, frictional testing of the lubricants, and weight loss testing of a bonded solid film lubricant. The report generally concludes that there is a significant amount of variance in the mechanical properties of common bolting materials; that MoS/sub 2/ can hydrolyze to form H/sub 2/S at 100/sup 0/C and cause stress corrosion cracking (SCC) of bolting materials; and that the use of copper-containing lubricants can be potentially detrimental to high strength steels in an aqueous environment. Additionally, the testing of various lubricants disclosed that some lubricants contain potentially detrimental elements (e.g., S, Sb) which can promote SCC of the common bolting materials. One of the most significant findings of this report is the observation that both A193 B7 and A540 B24 bolting materials are susceptible to transgranular stress corrosion cracking in demineralized H/sub 2/O at 280/sup 0/C in notched tensile tests.
Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment
Greg J. Shott, Vefa Yucel, Lloyd Desotell; Non-NSTec Authors: G. Pyles and Jon Carilli
2007-06-01
Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
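Latin hypercube propagation of parameter uncertainty can be sketched with scipy's qmc module. The two-parameter flux surrogate and the parameter ranges below are illustrative inventions, not any of the three assessed models.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of two uncertain inputs (illustrative ranges):
# emanation coefficient [-] and effective diffusion coefficient [m^2/s]
sampler = qmc.LatinHypercube(d=2, seed=6)
unit = sampler.random(n=1000)
scaled = qmc.scale(unit, [0.1, 1e-7], [0.4, 5e-6])
eman, diff = scaled[:, 0], scaled[:, 1]

# Toy flux-density surrogate, increasing in both inputs
flux = 0.5 * eman * np.sqrt(diff)

print(flux.mean(), np.percentile(flux, [5, 95]))
```

Latin hypercube stratifies each marginal, so even 1000 model runs give stable uncertainty bands; the sensitivity methods listed in the abstract (Morris, FAST, Sobol') would then reuse or extend this kind of sample design.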
Tracking stochastic resonance curves using an assisted reference model
Calderón Ramírez, Mario; Rico Martínez, Ramiro; Parmananda, P.
2015-06-15
The optimal noise amplitude for Stochastic Resonance (SR) is located employing an Artificial Neural Network (ANN) reference model with a nonlinear predictive capability. A modified Kalman Filter (KF) was coupled to this reference model in order to compensate for semi-quantitative forecast errors. Three manifestations of stochastic resonance, namely, Periodic Stochastic Resonance (PSR), Aperiodic Stochastic Resonance (ASR), and finally Coherence Resonance (CR) were considered. Using noise amplitude as the control parameter, for the case of PSR and ASR, the cross-correlation curve between the sub-threshold input signal and the system response is tracked. However, using the same parameter the Normalized Variance curve is tracked for the case of CR. The goal of the present work is to track these curves and converge to their respective extremal points. The ANN reference model strategy captures and subsequently predicts the nonlinear features of the model system while the KF compensates for the perturbations inherent to the superimposed noise. This technique, implemented in the FitzHugh-Nagumo model, enabled us to track the resonance curves and eventually locate their optimal (extremal) values. This would yield the optimal value of noise for the three manifestations of the SR phenomena.
Technologies for Production of Heat and Electricity
Jacob J. Jacobson; Kara G. Cafferty
2014-04-01
Biomass is a desirable source of energy because it is renewable, sustainable, widely available throughout the world, and amenable to conversion. Biomass is composed of cellulose, hemicellulose, and lignin components. Cellulose is generally the dominant fraction, representing about 40 to 50% of the material by weight, with hemicellulose representing 20 to 50% of the material, and lignin making up the remaining portion [4,5,6]. Although the outward appearance of the various forms of cellulosic biomass, such as wood, grass, municipal solid waste (MSW), or agricultural residues, is different, all of these materials have a similar cellulosic composition. Elementally, however, biomass varies considerably, thereby presenting technical challenges at virtually every phase of its conversion to useful energy forms and products. Despite the variances among cellulosic sources, there are a variety of technologies for converting biomass into energy. These technologies are generally divided into two groups: biochemical (biological-based) and thermochemical (heat-based) conversion processes. This chapter reviews the specific technologies that can be used to convert biomass to energy. Each technology review includes the description of the process, and the positive and negative aspects.
Edie, P.C.
1981-01-01
This report is intended to supply the electric vehicle manufacturer with performance data on the General Electric 5BT 2366C10 series wound dc motor and EV-1 chopper controller. Data are provided for both straight and chopped dc input to the motor, at 2 motor temperature levels. Testing was done at 6 voltage increments to the motor, and 2 voltage increments to the controller. Data results are presented in both tabular and graphical forms. Tabular information includes motor voltage and current input data, motor speed and torque output data, power data and temperature data. Graphical information includes torque-speed, motor power output-speed, torque-current, and efficiency-speed plots under the various operating conditions. The data resulting from this testing show that the speed-torque plots have the most variance with operating temperature. The maximum motor efficiency is between 86% and 87%, regardless of temperature or mode of operation. When the chopper is utilized, maximum motor efficiency occurs when the chopper duty cycle approaches 100%. At low duty cycles the motor efficiency may be considerably less than the efficiency for straight dc. Chopper efficiency may be assumed to be 95% under all operating conditions. For equal speeds at a given voltage level, the motor operated in the chopped mode develops slightly more torque than it does in the straight dc mode. System block diagrams are included, along with test setup and procedure information.
EXPECTED LARGE SYNOPTIC SURVEY TELESCOPE (LSST) YIELD OF ECLIPSING BINARY STARS
Prsa, Andrej; Pepper, Joshua; Stassun, Keivan G.
2011-08-15
In this paper, we estimate the yield of eclipsing binary stars from the Large Synoptic Survey Telescope (LSST), which will survey {approx}20,000 deg{sup 2} of the southern sky during a period of 10 years in six photometric passbands to r {approx} 24.5. We generate a set of 10,000 eclipsing binary light curves sampled to the LSST time cadence across the whole sky, with added noise as a function of apparent magnitude. This set is passed to the analysis-of-variance period finder to assess the recoverability rate for the periods, and the successfully phased light curves are passed to the artificial-intelligence-based pipeline ebai to assess the recoverability rate in terms of the eclipsing binaries' physical and geometric parameters. We find that, out of {approx}24 million eclipsing binaries observed by LSST with a signal-to-noise ratio >10 in mission lifetime, {approx}28% or 6.7 million can be fully characterized by the pipeline. Of those, {approx}25% or 1.7 million will be double-lined binaries, a true treasure trove for stellar astrophysics.
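Analysis-of-variance period finders score each trial period by how strongly folding the data at that period concentrates the signal, i.e., reduces scatter within phase bins. A minimal phase-dispersion sketch (not the LSST pipeline) on a noiseless sinusoid with assumed sampling:

```python
import numpy as np

def phase_dispersion(t, y, period, n_bins=10):
    """Sum of within-bin variances of y folded at the trial period (lower = better)."""
    phase = (t % period) / period
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    return sum(y[bins == b].var() for b in range(n_bins) if np.any(bins == b))

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 50.0, 400))   # irregular time sampling
true_period = 1.7
y = np.sin(2 * np.pi * t / true_period)    # noiseless test light curve

trial_periods = np.linspace(1.2, 2.2, 101)
scores = [phase_dispersion(t, y, p) for p in trial_periods]
best = trial_periods[int(np.argmin(scores))]
```

At the true period the folded curve is a clean sinusoid, so each phase bin has small variance; at a wrong period the phases smear and the within-bin variance approaches the total variance.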
Genetic studies of DRD4 and clinical response to neuroleptic medications
Kennedy, J.L.; Petronis, A.; Gao, J.
1994-09-01
Clozapine is an atypical antipsychotic drug that, like most other medications, is effective for some people and not for others. This variable response across individuals is likely significantly determined by genetic factors. An important candidate gene to investigate in clozapine response is the dopamine D4 receptor gene (DRD4). The D4 receptor has a higher affinity for clozapine than any of the other dopamine receptors. Furthermore, recent work by our consortium has shown a remarkable level of variability in the part of the gene coding for the third cytoplasmic loop. We have also identified polymorphisms in the upstream 5{prime} putative regulatory region and at two other sites. These polymorphisms were typed in a group of treatment-resistant schizophrenia subjects who were subsequently placed on clozapine (n = 60). In a logistic regression analysis, we compared genotype at the DRD4 polymorphism to response versus non-response to clozapine. Neither the exon-III nor any of the 5{prime} polymorphisms alone significantly predicted response; however, when the information from these polymorphisms was combined, more predictive power was obtained. In a correspondence analysis of the four DRD4 polymorphisms vs. response, we were able to predict 76% of the variance in response. Refinement of the analyses will include assessment of subfactors involved in clinical response phenotype and incorporation of the debrisoquine metabolizing locus (CYP2D6) into the prediction algorithm.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Guba, O.; Taylor, M. A.; Ullrich, P. A.; Overfelt, J. R.; Levy, M. N.
2014-06-25
We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable resolution grids using the shallow water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution dependent coefficient. For the spectral element method with variable resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that for regions of uniform resolution it matches the traditional constant coefficient hyperviscosity. With the tensor hyperviscosity the large scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications where long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.
Guba, O.; Taylor, M. A.; Ullrich, P. A.; Overfelt, J. R.; Levy, M. N.
2014-11-27
We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance, implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.
Characteristics of surface current flow inferred from a global ocean current data set
Meehl, G.A.
1982-06-01
A seasonal global ocean-current data set (OCDS) digitized on a 5° grid from long-term mean shipdrift-derived currents from pilot charts is presented and described. Annual zonal means of v-component currents show subtropical convergence zones which moved closest to the equator during the respective winters in each hemisphere. Net annual v-component surface flow at the equator is northward. Zonally averaged u-component currents have greatest seasonal variance in the tropics with strongest westward currents in the winter hemisphere. An ensemble of ocean currents measured by buoys and current meters compares favorably with OCDS data in spite of widely varying time and space scales. The OCDS currents and directly measured currents are about twice as large as computed geostrophic currents. An analysis of equatorial Pacific currents suggests that dynamic topography and sea-level change indicative of the geostrophic flow component cannot be relied on solely to infer absolute strength of surface currents which include a strong Ekman component. Comparison of OCDS v-component currents and meridional transports predicted by Ekman theory shows agreement in the sign of transports in the midlatitudes and tropics in both hemispheres. Ekman depths required to scale OCDS v-component currents to computed Ekman transports are reasonable at most latitudes with layer depths deepening closer to the equator.
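The Ekman comparison invoked above rests on the standard transport relation V_y = -tau_x / (rho * f). A quick worked example with assumed mid-latitude values (not numbers from the OCDS analysis):

```python
import math

# Meridional Ekman volume transport per unit width implied by a zonal wind stress:
#   V_y = -tau_x / (rho * f)
# The inputs below are assumed, typical mid-latitude values.
tau_x = 0.1                  # zonal (westerly) wind stress, N m^-2
rho = 1025.0                 # seawater density, kg m^-3
omega = 7.292e-5             # Earth's rotation rate, rad s^-1
lat = 35.0                   # latitude, degrees north

f = 2.0 * omega * math.sin(math.radians(lat))   # Coriolis parameter, s^-1
V_y = -tau_x / (rho * f)                        # m^2 s^-1; negative = equatorward
```

In the Northern Hemisphere a westerly stress thus drives an equatorward surface-layer transport of order 1 m{sup 2}/s per meter of zonal width, which is the kind of sign agreement the abstract reports between OCDS v-components and Ekman theory.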
Kleinman, Lawrence I.; Kuang, Chongai; Sedlacek, Art; Senum, Gunnar I.; Springston, Stephen R.; Wang, Jian; Zhang, Qi; Jayne, John T.; Fast, Jerome D.; Hubbe, John M.; et al
2016-02-15
During the Carbonaceous Aerosols and Radiative Effects Study (CARES) the DOE G-1 aircraft was used to sample aerosol and gas phase compounds in the Sacramento, CA plume and surrounding region. We present data from 66 plume transects obtained during 13 flights in which southwesterly winds transported the plume towards the foothills of the Sierra Nevada Mountains. Plume transport occurred partly over land with high isoprene emission rates. Our objective is to empirically determine whether organic aerosol (OA) can be attributed to anthropogenic or biogenic sources, and to determine whether there is a synergistic effect whereby OA concentrations are enhanced by the simultaneous presence of high concentrations of CO and either isoprene, MVK+MACR (sum of methyl vinyl ketone and methacrolein) or methanol, which are taken as tracers of anthropogenic and biogenic emissions. Furthermore, linear and bi-linear correlations between OA, CO, and each of three biogenic tracers, “Bio”, for individual plume transects indicate that most of the variance in OA over short time and distance scales can be explained by CO.
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane
2014-05-01
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Cosmic Shear Measurements with DES Science Verification Data
Becker, M. R.
2015-07-20
We present measurements of weak gravitational lensing cosmic shear two-point statistics using Dark Energy Survey Science Verification data. We demonstrate that our results are robust to the choice of shear measurement pipeline, either ngmix or im3shape, and robust to the choice of two-point statistic, including both real and Fourier-space statistics. Our results pass a suite of null tests including tests for B-mode contamination and direct tests for any dependence of the two-point functions on a set of 16 observing conditions and galaxy properties, such as seeing, airmass, galaxy color, galaxy magnitude, etc. We use a large suite of simulations to compute the covariance matrix of the cosmic shear measurements and assign statistical significance to our null tests. We find that our covariance matrix is consistent with the halo model prediction, indicating that it has the appropriate level of halo sample variance. We also compare the same jackknife procedure applied to the data and the simulations in order to search for additional sources of noise not captured by the simulations. We find no statistically significant extra sources of noise in the data. The overall detection significance with tomography for our highest source density catalog is 9.7σ. Cosmological constraints from the measurements in this work are presented in a companion paper (DES et al. 2015).
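The jackknife covariance procedure mentioned above has a compact generic form. This is the textbook delete-one estimator, not the DES pipeline code; for the sample mean it reduces exactly to the usual covariance of the mean, which the sketch verifies.

```python
import numpy as np

def jackknife_covariance(samples):
    """Delete-one jackknife covariance of the sample mean.
    samples: (n_obs, n_dim) array of (pseudo-)independent measurements."""
    n = samples.shape[0]
    # Leave-one-out means: (sum - x_i) / (n - 1)
    loo = (samples.sum(axis=0)[None, :] - samples) / (n - 1)
    dev = loo - loo.mean(axis=0)
    return (n - 1) / n * dev.T @ dev

rng = np.random.default_rng(5)
x = rng.normal(size=(200, 3))              # stand-in for 200 patches x 3 statistics
C_jk = jackknife_covariance(x)
C_direct = np.cov(x, rowvar=False) / x.shape[0]   # standard covariance of the mean
```

In a real survey analysis the rows would be spatial patches rather than independent draws, which is why comparing the data jackknife against simulations, as done above, is a useful test for unmodeled noise.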
Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.
2011-12-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.
PHOTOSPHERIC EMISSION FROM STRATIFIED JETS
Ito, Hirotaka; Nagataki, Shigehiro; Ono, Masaomi; Lee, Shiu-Hang; Mao, Jirong; Yamada, Shoichi; Pe'er, Asaf; Mizuta, Akira; Harikae, Seiji
2013-11-01
We explore photospheric emissions from stratified two-component jets, wherein a highly relativistic spine outflow is surrounded by a wider and less relativistic sheath outflow. Thermal photons are injected in regions of high optical depth and propagated until the photons escape at the photosphere. Because of the presence of shear in velocity (Lorentz factor) at the boundary of the spine and sheath region, a fraction of the injected photons are accelerated by a Fermi-like acceleration mechanism such that a high-energy power-law tail is formed in the resultant spectrum. We show, in particular, that if a velocity shear with a considerable variance in the bulk Lorentz factor is present, the high-energy part of the observed gamma-ray burst (GRB) photon spectrum can be explained by this photon acceleration mechanism. We also show that the accelerated photons might account for the origin of the extra-hard power-law component above the bump of the thermal-like peak seen in some peculiar bursts (e.g., GRB 090510, 090902B, 090926A). We demonstrate that time-integrated spectra can also reproduce the low-energy spectrum of GRBs consistently using a multi-temperature effect when time evolution of the outflow is considered. Lastly, we show that the empirical E{sub p}-L{sub p} relation can be explained by differences in the outflow properties of individual sources.
Çatlı, Serap; Tanır, Güneş
2013-10-01
The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS) at 6 and 18 MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and ionization chamber measurements. The present findings show that if high-Z material is used in a prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased, backscattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method for hip prostheses. The findings show that for accurate dose calculation the Monte Carlo-based TPS should be used in patients with hip prostheses.
Daily, Charles R.
2015-10-01
An assessment of the impact on the High Flux Isotope Reactor (HFIR) reactor vessel (RV) displacements-per-atom (dpa) rates due to operations with the proposed low-enriched uranium (LEU) core described by Ilas and Primm has been performed and is presented herein. The analyses documented herein support the conclusion that conversion of HFIR to LEU core operations using the LEU core design of Ilas and Primm will have no negative impact on HFIR RV dpa rates. Since its inception, HFIR has been operated with highly enriched uranium (HEU) cores. As part of an effort sponsored by the National Nuclear Security Administration (NNSA), conversion to LEU cores is being considered for future HFIR operations. The HFIR LEU configurations analyzed are consistent with the LEU core models used by Ilas and Primm and the HEU balance-of-plant models used by Risner and Blakeman in the latest analyses performed to support the HFIR materials surveillance program. The Risner and Blakeman analyses, as well as the studies documented herein, are the first to apply the hybrid transport methods available in the Automated Variance Reduction Generator (ADVANTG) code to HFIR RV dpa rate calculations. These calculations have been performed on the Oak Ridge National Laboratory (ORNL) Institutional Cluster (OIC) with version 1.60 of the Monte Carlo N-Particle 5 (MCNP5) computer code.
Shear wall ultimate drift limits
Duffey, T.A.; Goldman, A.; Farrar, C.R.
1994-04-01
Drift limits for reinforced-concrete shear walls are investigated by reviewing the open literature for appropriate experimental data. Drift values at ultimate are determined for walls with aspect ratios ranging up to a maximum of 3.53 and undergoing different types of lateral loading (cyclic static, monotonic static, and dynamic). Based on the geometry of actual nuclear power plant structures exclusive of containments and concerns regarding their response during seismic (i.e., cyclic) loading, data are obtained from pertinent references for which the wall aspect ratio is less than or equal to approximately 1, and for which testing is cyclic in nature (typically displacement controlled). In particular, lateral deflections at ultimate load, and at points in the softening region beyond ultimate for which the load has dropped to 90, 80, 70, 60, and 50 percent of its ultimate value, are obtained and converted to drift information. The statistical nature of the data is also investigated. These data are shown to be lognormally distributed, and an analysis of variance is performed. The use of statistics to estimate probability of failure for a shear wall structure is illustrated.
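A lognormal fit of drift capacities, and the resulting probability-of-failure estimate, can be sketched as follows. The drift values and the 0.5% demand are invented for illustration, not taken from the report's database.

```python
import math

# Hypothetical ultimate-drift observations (percent) -- illustrative only.
drifts = [0.52, 0.75, 0.61, 1.10, 0.89, 0.70, 0.95, 0.58, 0.81, 0.66]

logs = [math.log(d) for d in drifts]
mu = sum(logs) / len(logs)                               # mean of log-drift
sigma = (sum((x - mu) ** 2 for x in logs) / (len(logs) - 1)) ** 0.5

median_drift = math.exp(mu)                              # lognormal median = exp(mu)

def prob_drift_below(x):
    """P(capacity < x) under the fitted lognormal, via the normal CDF of log(x)."""
    z = (math.log(x) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# e.g. probability that drift capacity falls below an imposed demand of 0.5%
p_fail = prob_drift_below(0.5)
```

Because the capacity is lognormal, the failure probability for a given drift demand is just a normal CDF evaluated in log space, which is the form such fragility estimates usually take.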
2D stochastic-integral models for characterizing random grain noise in titanium alloys
Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Cherry, Matthew; Pilchak, Adam; Knopp, Jeremy S.; Blodgett, Mark P.
2014-02-18
We extend our previous work, in which we applied high-dimensional model representation (HDMR) and analysis of variance (ANOVA) concepts to the characterization of a metallic surface that has undergone a shot-peening treatment to reduce residual stresses and has, therefore, become a random conductivity field. That example was treated as a one-dimensional problem, because those were the only data available. In this study, we develop a more rigorous two-dimensional model for characterizing random, anisotropic grain noise in titanium alloys. Such a model is necessary if we are to accurately capture the 'clumping' of crystallites into long chains that appear during the processing of the metal into a finished product. The mathematical model starts with an application of the Karhunen-Loève (K-L) expansion for the random Euler angles that characterize the orientation of each crystallite in the sample. The random orientation of each crystallite then defines the stochastic nature of the electrical conductivity tensor of the metal. We study two possible covariances, Gaussian and double-exponential, which form the kernel of the K-L integral equation, and find that, of the two, the double-exponential appears to match measurements more closely. Results based on data from a Ti-7Al sample will be given, and further applications of HDMR and ANOVA will be discussed.
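The K-L construction itself is generic: discretize the covariance, eigendecompose, and expand the random field in the leading modes. A 1-D sketch with a double-exponential (exponential) covariance and an assumed correlation length — the paper's 2-D Euler-angle model is more involved:

```python
import numpy as np

n, L = 64, 0.2                                       # grid size and correlation length (assumed)
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / L)     # double-exponential covariance

# Discrete Karhunen-Loeve: eigendecomposition of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]                    # sort modes by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Truncated expansion: field = sum_k sqrt(lambda_k) * xi_k * phi_k(x)
rng = np.random.default_rng(3)
k = 16
xi = rng.standard_normal(k)                          # independent standard-normal weights
field = eigvecs[:, :k] @ (np.sqrt(eigvals[:k]) * xi)

# Fraction of the total variance captured by the leading k modes
captured = eigvals[:k].sum() / eigvals.sum()
```

The rapid decay of the eigenvalues is what makes the truncation useful: a handful of modes carries most of the variance of the random field.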
A Two-Stage Kalman Filter Approach for Robust and Real-Time Power System State Estimation
Zhang, Jinghe; Welch, Greg; Bishop, Gary; Huang, Zhenyu
2014-04-01
As electricity demand continues to grow and renewable energy increases its penetration in the power grid, real-time state estimation becomes essential for system monitoring and control. Recent developments in phasor technology make it possible with high-speed time-synchronized data provided by Phasor Measurement Units (PMUs). In this paper we present a two-stage Kalman filter approach to estimate the static state of voltage magnitudes and phase angles, as well as the dynamic state of generator rotor angles and speeds. Kalman filters achieve optimal performance only when the system noise characteristics have known statistical properties (zero-mean, Gaussian, and spectrally white). However in practice the process and measurement noise models are usually difficult to obtain. Thus we have developed the Adaptive Kalman Filter with Inflatable Noise Variances (AKF with InNoVa), an algorithm that can efficiently identify and reduce the impact of incorrect system modeling and/or erroneous measurements. In stage one, we estimate the static state from raw PMU measurements using the AKF with InNoVa; then in stage two, the estimated static state is fed into an extended Kalman filter to estimate the dynamic state. Simulations demonstrate its robustness to sudden changes of system dynamics and erroneous measurements.
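The idea of inflating a noise variance when an innovation is implausible can be shown in one dimension. The gating-and-inflation rule below is a simplified stand-in for illustration, not the authors' AKF with InNoVa:

```python
def akf_step(x, p, z, q, r, gate=3.0):
    """One scalar Kalman update that inflates the measurement variance r
    when the normalized innovation exceeds a gate (simplified InNoVa-style rule)."""
    p = p + q                            # predict
    innov = z - x
    s = p + r                            # innovation variance
    if innov ** 2 > gate ** 2 * s:       # measurement inconsistent with prediction
        r = innov ** 2 - p               # inflate r so the innovation looks ~1-sigma
        s = p + r
    k = p / s
    return x + k * innov, (1.0 - k) * p

# A clean measurement track with one gross error at step 10.
z = [1.0] * 20
z[10] = 50.0
x, p = 0.0, 1.0
for zk in z:
    x, p = akf_step(x, p, zk, q=1e-3, r=0.1)
```

Because the inflated variance makes the Kalman gain for the bad sample nearly zero, the estimate stays at the true value instead of being dragged toward the outlier — the behavior a standard fixed-variance filter would not give.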
Daily diaries of respiratory symptoms and air pollution: Methodological issues and results
Schwartz, J.; Wypij, D.; Dockery, D.; Ware, J.; Spengler, J.; Ferris, B. Jr.; Zeger, S.
1991-01-01
Daily diaries of respiratory symptoms are a powerful technique for detecting acute effects of air pollution exposure. While conceptually simple, these diary studies can be difficult to analyze. The daily symptom rates are highly correlated, even after adjustment for covariates, and this lack of independence must be considered in the analysis. Possible approaches include the use of incidence instead of prevalence rates and autoregressive models. Heterogeneity among subjects also induces dependencies in the data. These can be addressed by stratification and by two-stage models such as those developed by Korn and Whittemore. These approaches have been applied to two data sets: a cohort of school children participating in the Harvard Six Cities Study and a cohort of student nurses in Los Angeles. Both data sets provide evidence of autocorrelation and heterogeneity. Controlling for autocorrelation corrects the precision estimates, and because diary data are usually positively autocorrelated, this leads to larger variance estimates. Controlling for heterogeneity among subjects appears to increase the effect sizes for air pollution exposure. Preliminary results indicate associations between sulfur dioxide and cough incidence in children and between nitrogen dioxide and phlegm incidence in student nurses.
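The variance penalty from positive autocorrelation noted above can be made concrete: for an AR(1) series with lag-1 correlation ρ, the variance of the sample mean is inflated by roughly (1+ρ)/(1−ρ) relative to the independence assumption. A sketch with an assumed ρ = 0.5, checked by simulation:

```python
import numpy as np

def ar1_inflation(rho):
    """Large-n factor by which Var(sample mean) exceeds sigma^2/n for AR(1) data."""
    return (1.0 + rho) / (1.0 - rho)

# Empirical check: simulate AR(1) series with unit marginal variance.
rng = np.random.default_rng(4)
rho, n, reps = 0.5, 400, 1000
sigma_e = np.sqrt(1.0 - rho ** 2)         # innovation sd giving Var(x_t) = 1
x = rng.standard_normal(reps)             # stationary start across replicates
total = x.copy()
for t in range(1, n):
    x = rho * x + sigma_e * rng.standard_normal(reps)
    total += x
means = total / n

empirical = means.var() * n               # Var(mean) relative to the naive 1/n
theoretical = ar1_inflation(rho)          # (1 + 0.5) / (1 - 0.5) = 3.0
```

With ρ = 0.5 the naive standard error is too small by a factor of about √3, which is why ignoring autocorrelation in diary data overstates precision.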
Chou, Wen-Chi; Ma, Qin; Yang, Shihui; Cao, Sha; Klingeman, Dawn M.; Brown, Steven D.; Xu, Ying
2015-03-12
The identification of transcription units (TUs) encoded in a bacterial genome is essential to elucidation of transcriptional regulation of the organism. To gain a detailed understanding of the dynamically composed TU structures, we have used four strand-specific RNA-seq (ssRNA-seq) datasets collected under two experimental conditions to derive the genomic TU organization of Clostridium thermocellum using a machine-learning approach. Our method accurately predicted the genomic boundaries of individual TUs based on two sets of parameters measuring the RNA-seq expression patterns across the genome: expression-level continuity and variance. A total of 2590 distinct TUs are predicted based on the four RNA-seq datasets. Moreover, among the predicted TUs, 44% have multiple genes. We assessed our prediction method on an independent set of RNA-seq data with longer reads. The evaluation confirmed the high quality of the predicted TUs. Functional enrichment analyses on a selected subset of the predicted TUs revealed interesting biology. To demonstrate the generality of the prediction method, we have also applied the method to RNA-seq data collected on Escherichia coli and achieved high prediction accuracies. The TU prediction program named SeqTU is publicly available at https://code.google.com/p/seqtu/. We expect that the predicted TUs can serve as the baseline information for studying transcriptional and post-transcriptional regulation in C. thermocellum and other bacteria.
Kinetics of heavy oil/coal coprocessing
Szladow, A.J.; Chan, R.K.; Fouda, S.; Kelly, J.F.
1988-01-01
A number of studies have been reported on coprocessing of coal and oil sand bitumen, petroleum residues and distillate fractions in catalytic and non-catalytic processes. The studies described the effects of feedstock characteristics, process chemistry and operating variables on the product yield and distribution; however, very few kinetic data were reported in these investigations. This paper presents the kinetic data and modeling of the CANMET coal/heavy oil coprocessing process. A number of reaction networks were evaluated for CANMET coprocessing. The final choice of model was a parallel model with some sequential characteristics. The model explained 90.0 percent of the total variance, which was considered satisfactory in view of the difficulties of modeling preasphaltenes. The models which were evaluated showed that the kinetic approach successfully applied to coal liquefaction and heavy oil upgrading can also be applied to coprocessing. The coal conversion networks and heavy oil upgrading networks are interrelated via the forward reaction paths of preasphaltenes, asphaltenes, and THFI and via the reverse kinetic paths of an adduct formation between preasphaltenes and heavy oil.
Extragalactic foreground contamination in temperature-based CMB lens reconstruction
Osborne, Stephen J.; Hanson, Duncan; Doré, Olivier E-mail: dhanson@physics.mcgill.ca
2014-03-01
We discuss the effect of unresolved point source contamination on estimates of the CMB lensing potential, from components such as the thermal Sunyaev-Zel'dovich effect, radio point sources, and the Cosmic Infrared Background. We classify the possible trispectra associated with such source populations, and construct estimators for the amplitude and scale-dependence of several of the major trispectra. We show how to propagate analytical models for these source trispectra to biases for lensing. We also construct a ''source-hardened'' lensing estimator which experiences significantly smaller biases when exposed to unresolved point sources than the standard quadratic lensing estimator. We demonstrate these ideas in practice using the sky simulations of Sehgal et al., for cosmic-variance limited experiments designed to mimic ACT, SPT, and Planck. We find that for radio sources and SZ the bias is significantly reduced, but for CIB it is essentially unchanged. However, by using the high-frequency, all-sky CIB measurements from Planck and Herschel it may be possible to suppress this contribution.
Statistical Analysis of Variation in the Human Plasma Proteome
Corzett, Todd H.; Fodor, Imola K.; Choi, Megan W.; Walsworth, Vicki L.; Turteltaub, Kenneth W.; McCutchen-Maloney, Sandra L.; Chromy, Brett A.
2010-01-01
Quantifying the variation in the human plasma proteome is an essential prerequisite for disease-specific biomarker detection. We report here on the longitudinal and individual variation in human plasma characterized by two-dimensional difference gel electrophoresis (2-D DIGE) using plasma samples from eleven healthy subjects collected three times over a two week period. Fixed-effects modeling was used to remove dye and gel variability. Mixed-effects modeling was then used to quantitate the sources of proteomic variation. The subject-to-subject variation represented the largest variance component, while the time-within-subject variation was comparable to the experimental variation found in a previous technical variability study where one human plasma sample was processed eight times in parallel and each was then analyzed by 2-D DIGE in triplicate. Here, 21 protein spots had larger than 50% CV, suggesting that these proteins may not be appropriate as biomarkers and should be carefully scrutinized in future studies. Seventy-eight protein spots showing differential protein levels between different individuals or individual collections were identified by mass spectrometry and further characterized using hierarchical clustering. The results present a first step toward understanding the complexity of longitudinal and individual variation in the human plasma proteome, and provide a baseline for improved biomarker discovery.
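The subject-to-subject versus time-within-subject decomposition described above can be sketched with a balanced one-way random-effects ANOVA. This is a simplified method-of-moments stand-in for the mixed-effects modeling used in the study, and the data below are synthetic, chosen only to mimic the 11-subject, 3-collection design:

```python
import numpy as np

def variance_components(data):
    """One-way random-effects ANOVA estimates of variance components.

    data: 2-D array, shape (subjects, replicates), balanced design.
    Returns (between_subject_var, within_subject_var).
    """
    n_subj, n_rep = data.shape
    subj_means = data.mean(axis=1)
    grand_mean = data.mean()
    # Mean squares between and within subjects
    msb = n_rep * np.sum((subj_means - grand_mean) ** 2) / (n_subj - 1)
    msw = np.sum((data - subj_means[:, None]) ** 2) / (n_subj * (n_rep - 1))
    # Method-of-moments estimators (between-subject component clipped at zero)
    return max((msb - msw) / n_rep, 0.0), msw

# Synthetic example: 11 subjects, 3 collections each, large subject spread
rng = np.random.default_rng(0)
subject_effect = rng.normal(0, 2.0, size=(11, 1))   # subject-to-subject spread
noise = rng.normal(0, 0.5, size=(11, 3))            # within-subject spread
between, within = variance_components(subject_effect + noise)
print(between > within)  # prints True: subject-to-subject dominates
```

A real analysis would also model fixed dye and gel effects before decomposing the remaining variance, as the abstract describes.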
David Muth, Jr.; Jared Abodeely; Richard Nelson; Douglas McCorkle; Joshua Koch; Kenneth Bryden
2011-08-01
Agricultural residues have significant potential as a feedstock for bioenergy production, but removing these residues can have negative impacts on soil health. Models and datasets that can support decisions about sustainable agricultural residue removal are available; however, no tools currently exist capable of simultaneously addressing all environmental factors that can limit availability of residue. The VE-Suite model integration framework has been used to couple a set of environmental process models to support agricultural residue removal decisions. The RUSLE2, WEPS, and Soil Conditioning Index models have been integrated. A disparate set of databases providing the soils, climate, and management practice data required to run these models have also been integrated. The integrated system has been demonstrated for two example cases. First, an assessment using high spatial fidelity crop yield data has been run for a single farm. This analysis shows the significant variance in sustainably accessible residue across a single farm and crop year. A second example is an aggregate assessment of agricultural residues available in the state of Iowa. This implementation of the integrated systems model demonstrates the capability to run a vast range of scenarios required to represent a large geographic region.
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane
2014-05-01
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Measurement of damping and temperature: Precision bounds in Gaussian dissipative channels
Monras, Alex; Illuminati, Fabrizio
2011-01-15
We present a comprehensive analysis of the performance of different classes of Gaussian states in the estimation of Gaussian phase-insensitive dissipative channels. In particular, we investigate the optimal estimation of the damping constant and reservoir temperature. We show that, for two-mode squeezed vacuum probe states, the quantum-limited accuracy of both parameters can be achieved simultaneously. Moreover, we show that for both parameters two-mode squeezed vacuum states are more efficient than coherent, thermal, or single-mode squeezed states. This suggests that at high-energy regimes, two-mode squeezed vacuum states are optimal within the Gaussian setup. This optimality result indicates a stronger form of compatibility for the estimation of the two parameters. Indeed, not only the minimum variance can be achieved at fixed probe states, but also the optimal state is common to both parameters. Additionally, we explore numerically the performance of non-Gaussian states for particular parameter values to find that maximally entangled states within d-dimensional cutoff subspaces (d{<=}6) perform better than any randomly sampled states with similar energy. However, we also find that states with very similar performance and energy exist with much less entanglement than the maximally entangled ones.
Out-of-plane ultrasonic velocity measurement
Hall, M.S.; Brodeur, P.H.; Jackson, T.G.
1998-07-14
A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated. 20 figs.
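The trigger-jitter workaround described above (cross-correlating the transmitted pulse with its echo from the same sample period, rather than signals from different periods) can be sketched with a basic cross-correlation time-of-flight estimator. The pulse shape and sampling rate below are illustrative, not taken from the patent:

```python
import numpy as np

def time_of_flight(transmitted, received, fs):
    """Estimate time of flight by cross-correlating a transmitted pulse with
    the echo captured in the SAME sample period, so any trigger jitter shifts
    both records equally and cancels.  fs: sampling rate in Hz."""
    xcorr = np.correlate(received, transmitted, mode="full")
    lag = np.argmax(xcorr) - (len(transmitted) - 1)  # lag in samples
    return lag / fs

# Illustrative signal: a Gaussian-windowed tone delayed by 40 samples
fs = 1e6
t = np.arange(256) / fs
pulse = np.exp(-((t - 2e-5) ** 2) / 2e-11) * np.sin(2 * np.pi * 1e5 * t)
echo = np.zeros(512)
echo[40:40 + len(pulse)] += pulse
print(time_of_flight(pulse, echo, fs))  # prints 4e-05 (40 samples at 1 MHz)
```

Because both records come from one digitizer trigger, a constant trigger offset adds the same shift to each and drops out of the estimated lag.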
Out-of-plane ultrasonic velocity measurement
Hall, Maclin S.; Brodeur, Pierre H.; Jackson, Theodore G.
1998-01-01
A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated.
MM-Estimator and Adjusted Super Smoother based Simultaneous Prediction Confidence Intervals
Energy Science and Technology Software Center (OSTI)
2002-07-19
A novel application of regression analysis (MM-estimator) with simultaneous prediction confidence intervals is proposed to detect up- or down-regulated genes, which appear as outliers in scatter plots of log-transformed red (Cy5 fluorescent dye) versus green (Cy3 fluorescent dye) intensities. Advantages of the application: 1) the robust and resistant MM-estimator is a reliable method for building a linear regression in the presence of outliers; 2) exploratory data analysis tools (boxplots, averaged shifted histograms, quantile-quantile normal plots, and scatter plots) are used to visually test the underlying assumptions of linearity and contaminated normality in microarray data; 3) simultaneous prediction confidence intervals (SPCIs) guarantee a desired confidence level across the whole range of the data points used for the scatter plots. The result of the outlier detection procedure is a set of significantly differentially expressed genes extracted from the employed microarray data set. A scatter plot smoother (super smoother or locally weighted regression) is used to quantify heteroscedasticity in the residual variance, which commonly occurs in the lower- and higher-intensity regions. The set of differentially expressed genes is quantified using interval estimates for P-values as a probabilistic measure of being an outlier by chance. Monte Carlo simulations are used to adjust the super smoother-based SPCIs.
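The idea of a robust fit that downweights the very outliers one wants to detect can be sketched with iteratively reweighted least squares using Huber weights. This is a simplified stand-in for the MM-estimator (no S-estimate initialization), and the "gene" data are simulated:

```python
import numpy as np

def huber_line_fit(x, y, k=1.345, n_iter=50):
    """Robust straight-line fit by iteratively reweighted least squares with
    Huber weights -- an illustrative simplification of the MM-estimator."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]           # ordinary LS start
    scale = 1.0
    for _ in range(n_iter):
        r = y - X @ beta
        # robust residual scale via the median absolute deviation (floored)
        scale = max(1.4826 * np.median(np.abs(r - np.median(r))), 1e-8)
        u = np.abs(r) / scale
        w = np.where(u <= k, 1.0, k / u)                  # Huber weight function
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    r = y - X @ beta
    return beta, r, scale

# Simulated log-intensity scatter: 48 unregulated genes on y = x, plus one
# up-regulated and one down-regulated gene appearing as outliers
x = np.linspace(4.0, 12.0, 50)
y = x.copy()
y[10] += 3.0
y[40] -= 3.0
beta, resid, scale = huber_line_fit(x, y)
outliers = np.flatnonzero(np.abs(resid) > 3 * scale)
print(np.round(beta, 3), outliers.tolist())
```

The abstract's SPCIs replace the crude 3-sigma cut used here with simultaneous intervals adjusted by a scatter-plot smoother and Monte Carlo calibration.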
Report on the Behavior of Fission Products in the Co-decontamination Process
Martin, Leigh Robert; Riddle, Catherine Lynn
2015-09-30
This document was prepared to meet FCT level 3 milestone M3FT-15IN0302042, “Generate Zr, Ru, Mo and Tc data for the Co-decontamination Process.” This work was carried out under the auspices of the Lab-Scale Testing of Reference Processes FCT work package. This document reports preliminary work in identifying the behavior of important fission products in a Co-decontamination flowsheet. Current results show that Tc, in the presence of Zr alone, does not behave as the Argonne Model for Universal Solvent Extraction (AMUSE) code would predict. The Tc distribution is reproducibly lower than predicted, with Zr distributions remaining close to the AMUSE code prediction. In addition, it appears there may be an intricate relationship between multiple fission product metals, in different combinations, that will have a direct impact on U, Tc and other important fission products such as Zr, Mo, and Rh. More extensive testing is required to adequately predict flowsheet behavior for these variances within the fission products.
Spin and orbital ordering in Y1-xLaxVO₃
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Yan, J.-Q.; Zhou, J.-S.; Cheng, J. G.; Goodenough, J. B.; Ren, Y.; Llobet, A.; McQueeney, R. J.
2011-12-02
The spin and orbital ordering in Y1-xLaxVO₃ (0.30 ≤ x ≤ 1.0) has been studied to map out the phase diagram over the whole doping range 0 ≤ x ≤ 1. The phase diagram is compared with that for RVO₃ (R = rare earth or Y) perovskites without A-site variance. For x > 0.20, no long-range orbital ordering was observed above the magnetic ordering temperature TN; the magnetic order is accompanied by a lattice anomaly at a Tt ≤ TN as in LaVO₃. The magnetic ordering below Tt ≤ TN is G type in the compositional range 0.20 ≤ x ≤ 0.40 and C type in the range 0.738 ≤ x ≤ 1.0. Magnetization and neutron powder diffraction measurements point to the coexistence below TN of the two magnetic phases in the compositional range 0.4 < x < 0.738. Samples in the compositional range 0.20 < x ≤ 1.0 are characterized by an additional suppression of a glasslike thermal conductivity in the temperature interval TN < T < T* and a change in the slope of 1/χ(T). We argue that T* represents a temperature below which spin and orbital fluctuations couple together via λL∙S.
DAKOTA Design Analysis Kit for Optimization and Terascale
Energy Science and Technology Software Center (OSTI)
2010-02-24
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
Features in the primordial power spectrum? A frequentist analysis
Hamann, Jan; Shafieloo, Arman; Souradeep, Tarun E-mail: a.shafieloo1@physics.ox.ac.uk
2010-04-01
Features in the primordial power spectrum have been suggested as an explanation for glitches in the angular power spectrum of temperature anisotropies measured by the WMAP satellite. However, these glitches might just as well be artifacts of noise or cosmic variance. Using the effective Δχ{sup 2} between the best-fit power-law spectrum and a deconvolved primordial spectrum as a measure of ''featureness'' of the data, we perform a full Monte-Carlo analysis to address the question of how significant the recovered features are. We find that in 26% of the simulated data sets the reconstructed spectrum yields a greater improvement in the likelihood than for the actually observed data. While features cannot be categorically ruled out by this analysis, and the possibility remains that simple theoretical models which predict some of the observed features might stand up to rigorous statistical testing, our results suggest that WMAP data are consistent with the assumption of a featureless power-law primordial spectrum.
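The frequentist significance test described above (the fraction of simulated featureless data sets whose reconstructed spectrum improves the likelihood more than the real data's does, reported as 26%) reduces to a simple Monte Carlo p-value. The following sketch uses a toy null distribution, not the paper's deconvolution pipeline:

```python
import numpy as np

def mc_significance(observed_stat, null_stats):
    """Monte Carlo significance: the fraction of simulated (featureless)
    data sets whose improvement statistic meets or exceeds the observed one."""
    null_stats = np.asarray(null_stats)
    return float(np.mean(null_stats >= observed_stat))

# Toy null ensemble of Delta-chi^2-like improvements from noise-only skies
rng = np.random.default_rng(1)
null = rng.chisquare(df=8, size=10000)
obs = 9.5                      # hypothetical observed improvement
p = mc_significance(obs, null)
print(0.0 < p < 1.0)           # prints True
```

A large p (such as the paper's 0.26) means noise and cosmic variance alone readily produce comparable "features", so the observed glitches are not statistically compelling.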
Dynamics of dispersive photon-number QND measurements in a micromaser
Kozlovskii, A. V. [Russian Academy of Sciences, Lebedev Physical Institute (Russian Federation)], E-mail: kozlovsk@sci.lebedev.ru
2007-04-15
A numerical analysis of dispersive quantum nondemolition measurement of the photon number of a microwave cavity field is presented. Simulations show that a key property of the dispersive atom-field interaction used in Ramsey interferometry is the extremely high sensitivity of the dynamics of atomic and field states to basic parameters of the system. When a monokinetic atomic beam is sent through a microwave cavity, a qualitative change in the field state can be caused by an uncontrollably small deviation of parameters (such as atom path length through the cavity, atom velocity, cavity mode frequency detuning, or atom-field coupling constants). The resulting cavity field can be either in a Fock state or in a super-Poissonian state (characterized by a large photon-number variance). When the atoms have a random velocity spread, the field is squeezed to a Fock state for arbitrary values of the system's parameters. However, this makes detection of Ramsey fringes impossible, because the probability of detecting an atom in the upper or lower electronic state becomes a random quantity almost uniformly distributed over the interval between zero and unity, irrespective of the cavity photon number.
Water Velocity Measurements on a Vertical Barrier Screen at the Bonneville Dam Second Powerhouse
Hughes, James S.; Deng, Zhiqun; Weiland, Mark A.; Martinez, Jayson J.; Yuan, Yong
2011-11-22
Fish screens at hydroelectric dams help to protect rearing and migrating fish by preventing them from passing through the turbines and directing them towards the bypass channels by providing a sweeping flow parallel to the screen. However, fish screens may actually be harmful to fish if they become impinged on the surface of the screen or become disoriented due to poor flow conditions near the screen. Recent modifications to the vertical barrier screens (VBS) at the Bonneville Dam second powerhouse (B2) intended to increase the guidance of juvenile salmonids into the juvenile bypass system (JBS) have resulted in high mortality and descaling rates of hatchery subyearling Chinook salmon during the 2008 juvenile salmonid passage season. To investigate the potential cause of the high mortality and descaling rates, an in situ water velocity measurement study was conducted using acoustic Doppler velocimeters (ADV) in the gatewell slot at Units 12A and 14A of B2. From the measurements collected, the average approach velocity, sweep velocity, and root mean square (RMS) value of the velocity fluctuations were calculated. The approach velocities measured across the face of the VBS varied but were mostly less than 0.3 m/s. The sweep velocities also showed large variances across the face of the VBS, with most measurements being less than 1.5 m/s. This study revealed that the approach velocities exceeded criteria recommended by NOAA Fisheries and the Washington State Department of Fish and Wildlife intended to improve fish passage conditions.
Wind Measurements from Arc Scans with Doppler Wind Lidar
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Wang, H.; Barthelmie, R. J.; Clifton, Andy; Pryor, S. C.
2015-11-25
Defining optimal scanning geometries for scanning lidars in wind energy applications remains an active field of research. Our paper evaluates uncertainties associated with arc scan geometries and presents recommendations regarding optimal configurations in the atmospheric boundary layer. The analysis is based on arc scan data from a Doppler wind lidar with one elevation angle and seven azimuth angles spanning 30° and focuses on an estimation of 10-min mean wind speed and direction. When flow is horizontally uniform, this approach can provide accurate wind measurements required for wind resource assessments in part because of its high resampling rate. Retrieved wind velocities at a single range gate exhibit good correlation to data from a sonic anemometer on a nearby meteorological tower, and vertical profiles of horizontal wind speed, though derived from range gates located on a conical surface, match those measured by mast-mounted cup anemometers. Uncertainties in the retrieved wind velocity are related to high turbulent wind fluctuation and an inhomogeneous horizontal wind field. Moreover, the radial velocity variance is found to be a robust measure of the uncertainty of the retrieved wind speed because of its relationship to turbulence properties. It is further shown that the standard error of wind speed estimates can be minimized by increasing the azimuthal range beyond 30° and using five to seven azimuth angles.
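The arc-scan retrieval of a horizontally uniform wind from radial velocities at several azimuths can be sketched as a small least-squares problem. This assumes negligible vertical velocity and is an illustrative version of the retrieval, not the paper's exact processing; the azimuth span, elevation, and wind values are made up:

```python
import numpy as np

def fit_wind_from_arc(azimuths_deg, elevation_deg, v_radial):
    """Least-squares retrieval of the horizontal wind (u east, v north) from
    radial velocities over an arc of azimuths at a single elevation angle.
    Assumes horizontally uniform flow and negligible vertical velocity."""
    az = np.deg2rad(azimuths_deg)
    ce = np.cos(np.deg2rad(elevation_deg))
    A = np.column_stack([ce * np.sin(az), ce * np.cos(az)])  # design matrix
    (u, v), *_ = np.linalg.lstsq(A, v_radial, rcond=None)
    speed = np.hypot(u, v)
    azimuth = (np.degrees(np.arctan2(u, v)) + 360) % 360  # wind-vector azimuth
    return speed, azimuth

# Seven azimuths spanning 30 degrees at 5 degrees elevation; true u=3, v=4
azimuths = np.linspace(75, 105, 7)
el = 5.0
vr = np.cos(np.deg2rad(el)) * (3.0 * np.sin(np.deg2rad(azimuths))
                               + 4.0 * np.cos(np.deg2rad(azimuths)))
speed, azimuth = fit_wind_from_arc(azimuths, el, vr)
print(round(speed, 3))  # prints 5.0
```

With noisy radial velocities, the residual variance of this fit is what the paper identifies as a robust proxy for the uncertainty of the retrieved speed.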
Not Available
1980-05-01
This study is an effort to determine legal and technical constraints on the introduction of single entry longwall systems to US coal mining. US mandatory standards governing underground mining are compared and contrasted with regulations of certain foreign countries, mainly continental Europe, relating to the employment of longwall mining. Particular attention is paid to the planning and development of entries, the mining of longwall panels and consequent retrieval operations. Sequential mining of adjacent longwall panels is considered. Particular legal requirements, which constrain or prohibit single entry longwall mining in the US, are identified, and certain variances or exemptions from the regulations are described. The costs of single entry systems and of currently employed multiple entry systems are compared. Under prevailing US conditions multiple entry longwall is preferable because of safety, marginal economic benefit and compliance with US laws and regulations. However, where physical conditions become hazardous for the multiple entry method, for instance, in greater depth or in rockburst prone ground, mandatory standards, which now constrain or prohibit single entry workings, are of doubtful benefit. European methods would then provide single entry operation with improved strata control.
Sensitivity testing and analysis
Neyer, B.T.
1991-01-01
New methods of sensitivity testing and analysis are proposed. The new test method utilizes Maximum Likelihood Estimates to pick the next test level in order to maximize knowledge of both the mean, {mu}, and the standard deviation, {sigma}, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both {mu} and {sigma} than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for {mu}, {sigma}, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.
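The core of such an analysis is the maximum likelihood estimate of ({mu}, {sigma}) from binary go/no-go outcomes under a normal threshold model. The sketch below uses a crude grid search in place of a proper optimizer, and the test series is hypothetical:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def log_likelihood(mu, sigma, levels, responses):
    """Binary likelihood: P(respond at level x) = Phi((x - mu) / sigma)."""
    ll = 0.0
    for x, r in zip(levels, responses):
        p = min(max(norm_cdf((x - mu) / sigma), 1e-12), 1 - 1e-12)
        ll += math.log(p) if r else math.log(1.0 - p)
    return ll

def mle_grid(levels, responses, mus, sigmas):
    """Grid-search MLE for (mu, sigma) -- an illustrative stand-in for the
    numerical optimization a real Neyer-style analysis would use."""
    return max(((m, s) for m in mus for s in sigmas),
               key=lambda p: log_likelihood(p[0], p[1], levels, responses))

# Hypothetical test series: stimulus levels and respond(1)/no-respond(0)
levels =    [1.0, 2.0, 3.0, 4.0, 5.0, 2.5, 3.5, 3.0, 2.8, 3.2]
responses = [0,   0,   0,   1,   1,   0,   1,   1,   0,   1]
mus = [m / 10 for m in range(20, 41)]      # candidate mu: 2.0 .. 4.0
sigmas = [s / 10 for s in range(2, 21)]    # candidate sigma: 0.2 .. 2.0
mu_hat, sigma_hat = mle_grid(levels, responses, mus, sigmas)
print(mu_hat, sigma_hat)
```

In the proposed test method, this MLE would be refit after every shot and used to choose the next stimulus level so as to maximize information about both parameters.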
Jordan, Andrew N.; Ooi, C. H. Raymond; Svidzinsky, Anatoly A.
2006-09-15
The atom fluctuation statistics of an ideal, mesoscopic, Bose-Einstein condensate are investigated from several different perspectives. By generalizing the grand canonical analysis (applied to the canonical ensemble problem), we obtain a self-consistent equation for the mean condensate particle number that coincides with the microscopic result calculated from the laser master equation approach. For the case of a harmonic trap, we obtain an analytic expression for the condensate particle number that is very accurate at all temperatures, when compared with numerical canonical ensemble results. Applying a similar generalized grand canonical treatment to the variance, we obtain an accurate result only below the critical temperature. Analytic results are found for all higher moments of the fluctuation distribution by employing the stochastic path integral formalism, with excellent accuracy. We further discuss a hybrid treatment, which combines the master equation and stochastic path integral analysis with results obtained based on the canonical ensemble quasiparticle formalism [Kocharovsky et al., Phys. Rev. A 61, 053606 (2000)], producing essentially perfect agreement with numerical simulation at all temperatures.
Nonstationary stochastic charge fluctuations of a dust particle in plasmas
Shotorban, B.
2011-06-15
Stochastic charge fluctuations of a dust particle that are due to discreteness of electrons and ions in plasmas can be described by a one-step process master equation [T. Matsoukas and M. Russell, J. Appl. Phys. 77, 4285 (1995)] with no exact solution. In the present work, using the system size expansion method of Van Kampen along with the linear noise approximation, a Fokker-Planck equation with an exact Gaussian solution is developed by expanding the master equation. The Gaussian solution has time-dependent mean and variance governed by two ordinary differential equations modeling the nonstationary process of dust particle charging. The model is tested by comparing its results with those obtained by solving the master equation numerically. The electron and ion currents are calculated through the orbital motion limited theory. At various times of the nonstationary process of charging, the model results are in a very good agreement with the master equation results. The deviation is more significant when the standard deviation of the charge is comparable to the mean charge in magnitude.
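The linear-noise-approximation moment equations for a generic one-step process can be sketched as follows. This uses generic gain/loss rates rather than the orbital-motion-limited electron and ion currents of the paper, and checks the standard birth-death limit where the stationary mean and variance coincide:

```python
def lna_moments(g, r, dg, dr, m0, v0, t_end, dt=1e-4):
    """Integrate the linear-noise-approximation moment ODEs for a one-step
    process with gain rate g(n) and loss rate r(n):
        dm/dt = g(m) - r(m)
        dv/dt = 2*v*(g'(m) - r'(m)) + g(m) + r(m)
    (a generic sketch of the Van Kampen expansion, not the specific
    electron/ion charging currents of the paper)."""
    m, v = m0, v0
    for _ in range(int(t_end / dt)):
        m_next = m + dt * (g(m) - r(m))                       # forward Euler
        v_next = v + dt * (2.0 * v * (dg(m) - dr(m)) + g(m) + r(m))
        m, v = m_next, v_next
    return m, v

# Constant attachment a, linear detachment b*n: the stationary mean and
# variance both equal a/b (Poissonian), a standard consistency check.
a, b = 50.0, 2.0
m, v = lna_moments(lambda n: a, lambda n: b * n,
                   lambda n: 0.0, lambda n: b,
                   m0=0.0, v0=0.0, t_end=10.0)
print(round(m, 2), round(v, 2))  # prints 25.0 25.0
```

The paper's observation that the Gaussian model degrades when the charge standard deviation becomes comparable to the mean corresponds here to the regime where sqrt(v) approaches m.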
Fabrication and Analysis of 150-mm-Aperture Nb3Sn MQXF Coils
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Holik, E. F.; Ambrosio, G.; Anerella, M.; Bossert, R.; Cavanna, E.; Cheng, D.; Dietderich, D. R.; Ferracin, P.; Ghosh, A. K.; Bermudez, S. Izquierdo; et al
2016-01-12
The U.S. LHC Accelerator Research Program (LARP) and CERN are combining efforts for the HiLumi-LHC upgrade to design and fabricate 150-mm-aperture, interaction region quadrupoles with a nominal gradient of 130 T/m using Nb3Sn. To successfully produce the necessary long MQXF triplets, the HiLumi-LHC collaboration is systematically reducing risk and design modification by heavily relying upon the experience gained from the successful 120-mm-aperture LARP HQ program. First generation MQXF short (MQXFS) coils were predominately a scaling up of the HQ quadrupole design allowing comparable cable expansion during Nb3Sn formation heat treatment and increased insulation fraction for electrical robustness. A total of 13 first generation MQXFS coils were fabricated between LARP and CERN. Systematic differences in coil size, coil alignment symmetry, and coil length contraction during heat treatment are observed and likely due to slight variances in tooling and insulation/cable systems. Analysis of coil cross sections indicate that field-shaping wedges and adjacent coil turns are systematically displaced from the nominal location and the cable is expanding less than nominally designed. Lastly, a second generation MQXF coil design seeks to correct the expansion and displacement discrepancies by increasing insulation and adding adjustable shims at the coil pole and midplanes to correct allowed magnetic field harmonics.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Yu, Sungduk; Pritchard, Michael S.
2015-12-17
The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model ver 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m2) and longwave cloud forcing (~5 W/m2) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune such models.
Mishra, Srikanta; Schuetter, Jared
2014-11-01
We compare two approaches for building a statistical proxy model (metamodel) for CO₂ geologic sequestration from the results of full-physics compositional simulations. The first approach involves a classical Box-Behnken or Augmented Pairs experimental design with a quadratic polynomial response surface. The second approach uses a space-filling maximin Latin Hypercube sampling or maximum entropy design with the choice of five different meta-modeling techniques: quadratic polynomial, kriging with constant and quadratic trend terms, multivariate adaptive regression spline (MARS) and additivity and variance stabilization (AVAS). Simulation results for CO₂ injection into a reservoir-caprock system with 9 design variables (and 97 samples) were used to generate the data for developing the proxy models. The fitted models were validated using an independent data set and a cross-validation approach for three different performance metrics: total storage efficiency, CO₂ plume radius and average reservoir pressure. The Box-Behnken–quadratic polynomial metamodel performed the best, followed closely by the maximin LHS–kriging metamodel.
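The space-filling-design-plus-response-surface workflow can be sketched with a basic Latin hypercube sample and a full quadratic polynomial fit. This is a minimal illustration (no maximin optimization, and a toy function in place of the compositional simulator); the design sizes are arbitrary:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin hypercube sample on [0, 1]^d: one point per stratum in
    each dimension, with strata independently permuted across dimensions."""
    cols = [(rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
            for _ in range(n_dims)]
    return np.column_stack(cols)

def quadratic_design(X):
    """Full quadratic basis: intercept, linear, square, and cross terms."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

rng = np.random.default_rng(42)
X = latin_hypercube(60, 3, rng)                  # 60 runs, 3 design variables
y = 1 + 2*X[:, 0] - X[:, 1]**2 + 0.5*X[:, 0]*X[:, 2]   # toy "simulator"
coef, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)

# Validate on an independent test set, as in the abstract's workflow
X_test = rng.random((200, 3))
y_true = 1 + 2*X_test[:, 0] - X_test[:, 1]**2 + 0.5*X_test[:, 0]*X_test[:, 2]
err = np.max(np.abs(quadratic_design(X_test) @ coef - y_true))
print(err < 1e-8)  # prints True: an exact-in-class response is recovered
```

With a real simulator the response is not exactly quadratic, which is why the study compares the polynomial surface against kriging, MARS, and AVAS alternatives.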
Evaluation of bulk paint worker exposure to solvents at household hazardous waste collection events
Cameron, M.
1995-09-01
In fiscal year 93/94, over 250 governmental agencies were involved in the collection of household hazardous wastes in the State of California. During that time, over 3,237,000 lbs. of oil based paint were collected in 9,640 drums. Most of this was in lab pack drums, which can only hold up to 20 one gallon cans. Cost for disposal of such drums is approximately $1000. In contrast, during the same year, 1,228,000 lbs. of flammable liquid were collected in 2,098 drums in bulk form. Incineration of bulked flammable liquids is approximately $135 per drum. Clearly, it is most cost effective to bulk flammable liquids at household hazardous waste events. Currently, this is the procedure used at most Temporary Household Hazardous Waste Collection Facilities (THHWCFs). THHWCFs are regulated by the Department of Toxic Substances Control (DTSC) under the new Permit-by-Rule regulations. These regulations specify certain requirements regarding traffic flow, emergency response notifications and prevention of exposure to the public. The regulations require that THHWCF operators bulk wastes only when the public is not present [22 CCR, section 67450.4 (e) (2) (A)]. Santa Clara County Environmental Health Department sponsors local THHWCFs and does its own bulking. In order to save time and money, a variance from the regulation was requested and an employee monitoring program was initiated to determine actual exposure to workers. Results are presented.
Spoil handling and reclamation costs at a contour surface mine in steep slope Appalachian topography
Zipper, C.E.; Hall, A.T.; Daniels, W.L.
1985-12-09
Accurate overburden handling cost estimation methods are essential to effective pre-mining planning for post-mining landforms and land uses. With the aim of developing such methods, the authors have been monitoring costs at a contour surface mine in Wise County, Virginia since January 1, 1984. Early in the monitoring period, the land was being returned to its Approximate Original Contour (AOC) in a manner common to the Appalachian region since implementation of the Surface Mining Control and Reclamation Act of 1977 (SMCRA). More recently, mining has been conducted under an experimental variance from the AOC provisions of SMCRA which allowed a near-level bench to be constructed across the upper surface of two mined points and an intervening filled hollow. All mining operations are being recorded by location. The cost of spoil movement is calculated for each block of coal mined between January 1, 1984, and August 1, 1985. Per cubic yard spoil handling and reclamation costs are compared by mining block. The average cost of spoil handling was $1.90 per bank cubic yard; however, these costs varied widely between blocks. The reasons for those variations included the landscape positions of the mining blocks and spoil handling practices. Average reclamation costs ranged from $0.08 per bank cubic yard for spoil placed in the near-level bench on the mined point to $0.20 for spoil placed in the hollow fill. 2 references, 4 figures.
Dwivedi, Gopal; Viswanathan, Vaishak; Sampath, Sanjay; Shyam, Amit; Lara-Curzio, Edgar
2014-06-09
Fracture toughness has become one of the dominant design parameters that dictates the selection of materials and their microstructure to obtain durable thermal barrier coatings (TBCs). Much progress has been made in characterizing the fracture toughness of relevant TBC compositions in bulk form, and it has become apparent that this property is significantly affected by process-induced microstructural defects. In this investigation, a systematic study of the influence of coating microstructure on the fracture toughness of atmospheric plasma sprayed (APS) TBCs has been carried out. Yttria partially stabilized zirconia (YSZ) coatings were fabricated under different spray process conditions inducing different levels of porosity and interfacial defects. Fracture toughness was measured on free-standing coatings in as-processed and thermally aged conditions using the double torsion technique. Results indicate significant variance in fracture toughness among coatings with different microstructures, including changes induced by thermal aging. Comparative studies were also conducted on an alternative TBC composition, Gd_{2}Zr_{2}O_{7} (GDZ), which, as anticipated, shows significantly lower fracture toughness compared to YSZ. Furthermore, the results from these studies not only point towards a need for process and microstructure optimization for enhanced TBC performance but also provide a framework for establishing performance metrics for promising new TBC compositions.
Time lagged ordinal partition networks for capturing dynamics of continuous dynamical systems
McCullough, Michael; Iu, Herbert Ho-Ching; Small, Michael; Stemler, Thomas
2015-05-15
We investigate a generalised version of the recently proposed ordinal partition time series to network transformation algorithm. First, we introduce a fixed time lag for the elements of each partition that is selected using techniques from traditional time delay embedding. The resulting partitions define regions in the embedding phase space that are mapped to nodes in the network space. Edges are allocated between nodes based on temporal succession, thus creating a Markov chain representation of the time series. We then apply this new transformation algorithm to time series generated by the Rössler system and find that periodic dynamics translate to ring structures whereas chaotic time series translate to band or tube-like structures, thereby indicating that our algorithm generates networks whose structure is sensitive to system dynamics. Furthermore, we demonstrate that simple network measures, including the mean out degree and variance of out degrees, can track changes in the dynamical behaviour in a manner comparable to the largest Lyapunov exponent. We also apply the same analysis to experimental time series generated by a diode resonator circuit and show that the network size, mean shortest path length, and network diameter are highly sensitive to the interior crisis captured in this particular data set.
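The transformation just described lends itself to a compact sketch. The following is a minimal illustration, not the authors' implementation; the embedding dimension, lag, and sinusoidal test signal are chosen arbitrarily for demonstration:

```python
import numpy as np

def ordinal_partition_network(x, dim=3, lag=10):
    """Map a time series to a directed network of ordinal patterns.

    Each window of `dim` samples spaced `lag` apart is assigned the
    permutation that sorts it; permutations become nodes, and an edge
    a -> b is added whenever pattern b immediately follows pattern a,
    giving a Markov chain representation of the series.
    """
    n = len(x) - (dim - 1) * lag
    patterns = [
        tuple(np.argsort([x[i + j * lag] for j in range(dim)]))
        for i in range(n)
    ]
    nodes = sorted(set(patterns))
    index = {p: k for k, p in enumerate(nodes)}
    # Weighted adjacency matrix from temporal succession
    A = np.zeros((len(nodes), len(nodes)))
    for a, b in zip(patterns[:-1], patterns[1:]):
        A[index[a], index[b]] += 1
    return nodes, A

# Out-degree statistics are among the dynamics-sensitive measures above
x = np.sin(np.linspace(0, 20 * np.pi, 2000))  # periodic test signal
nodes, A = ordinal_partition_network(x, dim=3, lag=10)
out_degree = (A > 0).sum(axis=1)
print(len(nodes), out_degree.mean(), out_degree.var())
```

For a periodic input the patterns cycle deterministically, so the out-degree distribution stays narrow; a chaotic input would broaden it, which is what makes the mean and variance of out degrees useful tracking measures.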
RELAXATION OF WARPED DISKS: THE CASE OF PURE HYDRODYNAMICS
Sorathia, Kareem A.; Krolik, Julian H.; Hawley, John F.
2013-05-10
Orbiting disks may exhibit bends due to a misalignment between the angular momentum of the inner and outer regions of the disk. We begin a systematic simulational inquiry into the physics of warped disks with the simplest case: the relaxation of an unforced warp under pure fluid dynamics, i.e., with no internal stresses other than Reynolds stress. We focus on the nonlinear regime in which the bend rate is large compared to the disk aspect ratio. When warps are nonlinear, strong radial pressure gradients drive transonic radial motions along the disk's top and bottom surfaces that efficiently mix angular momentum. The resulting nonlinear decay rate of the warp increases with the warp rate and the warp width, but, at least in the parameter regime studied here, is independent of the sound speed. The characteristic magnitude of the associated angular momentum fluxes likewise increases with both the local warp rate and the radial range over which the warp extends; it also increases with increasing sound speed, but more slowly than linearly. The angular momentum fluxes respond to the warp rate after a delay that scales with the square root of the time for sound waves to cross the radial extent of the warp. These behaviors are at variance with a number of the assumptions commonly used in analytic models to describe linear warp dynamics.
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
Farlotti, M.; Larsen, E. W.
2013-07-01
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
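To make the linear-algebra step concrete: once a fission matrix has been tallied, k-effective is its dominant eigenvalue and the critical fission source its dominant eigenvector, which power iteration recovers. This is a hypothetical sketch, not the paper's test code; the 8-assembly coupling values below are invented.

```python
import numpy as np

# Hypothetical fission matrix for 8 assemblies in a row: F[i, j] is the
# expected number of next-generation fission neutrons born in assembly i
# per fission neutron born in assembly j. Strong absorption between
# assemblies makes the off-diagonal coupling weak.
n = 8
F = np.zeros((n, n))
for j in range(n):
    F[j, j] = 0.90
    if j > 0:
        F[j - 1, j] = 0.03
    if j + 1 < n:
        F[j + 1, j] = 0.03

# Power iteration: the generation-to-generation ratio of total fission
# source converges to the dominant eigenvalue (k-effective), and the
# normalized source vector to the critical fission source shape.
s = np.ones(n)
for _ in range(500):
    s_next = F @ s
    k_eff = s_next.sum() / s.sum()
    s = s_next / s_next.sum()

print(round(k_eff, 4))
```

In the actual method the matrix entries are tallied by Monte Carlo transport, so running more cycles reduces the statistical error in each entry and hence in the eigenpair.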
Kneitel, Terri; Rocco, Diane
2012-07-01
When conducting environmental cleanup or decommissioning projects, characterization of the material to be removed is often performed when the material is in-situ. The actual demolition or excavation and removal of the material can result in individual containers that vary significantly from the original bulk characterization profile. This variance, if not detected, can result in individual containers exceeding Department of Transportation regulations or waste disposal site acceptance criteria. Bulk waste characterization processes were performed to initially characterize the Brookhaven Graphite Research Reactor (BGRR) graphite pile and this information was utilized to characterize all of the containers of graphite. When the last waste container was generated containing graphite dust from the bottom of the pile, but no solid graphite blocks, the material contents were significantly different in composition from the bulk waste characterization. This error resulted in exceedance of the disposal site waste acceptance criteria. Brookhaven Science Associates initiated an in-depth investigation to identify the root causes of this failure and to develop appropriate corrective actions. The lessons learned at BNL have applicability to other cleanup and demolition projects which characterize their wastes in bulk or in-situ and then extend that characterization to individual containers. (authors)
Measuring kinetic energy changes in the mesoscale with low acquisition rates
Roldán, É.; Martínez, I. A.; Rica, R. A.; Dinis, L.
2014-06-09
We report on the measurement of the average kinetic energy changes in isothermal and non-isothermal quasistatic processes in the mesoscale, realized with a Brownian particle trapped with optical tweezers. Our estimation of the kinetic energy change allows access to the full energetic description of the Brownian particle. Kinetic energy estimates are obtained from measurements of the mean square velocity of the trapped bead sampled at frequencies several orders of magnitude smaller than the momentum relaxation frequency. The velocity is tuned by applying a noisy electric field that modulates the amplitude of the fluctuations of the position and velocity of the Brownian particle, whose motion is equivalent to that of a particle in a higher temperature reservoir. Additionally, we show that the dependence of the variance of the time-averaged velocity on the sampling frequency can be used to quantify properties of the electrophoretic mobility of a charged colloid. Our method could be applied to detect temperature gradients in inhomogeneous media and to characterize the complete thermodynamics of biological motors and of artificial micro- and nanoscopic heat engines.
Wind Measurements from Arc Scans with Doppler Wind Lidar
Wang, H.; Barthelmie, R. J.; Clifton, Andy; Pryor, S. C.
2015-11-25
Defining optimal scanning geometries for scanning lidars in wind energy applications remains an active field of research. Our paper evaluates uncertainties associated with arc scan geometries and presents recommendations regarding optimal configurations in the atmospheric boundary layer. The analysis is based on arc scan data from a Doppler wind lidar with one elevation angle and seven azimuth angles spanning 30° and focuses on an estimation of 10-min mean wind speed and direction. When flow is horizontally uniform, this approach can provide accurate wind measurements required for wind resource assessments, in part because of its high resampling rate. Retrieved wind velocities at a single range gate exhibit good correlation to data from a sonic anemometer on a nearby meteorological tower, and vertical profiles of horizontal wind speed, though derived from range gates located on a conical surface, match those measured by mast-mounted cup anemometers. Uncertainties in the retrieved wind velocity are related to strong turbulent wind fluctuations and an inhomogeneous horizontal wind field. Moreover, the radial velocity variance is found to be a robust measure of the uncertainty of the retrieved wind speed because of its relationship to turbulence properties. It is further shown that the standard error of wind speed estimates can be minimized by increasing the azimuthal range beyond 30° and using five to seven azimuth angles.
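As a sketch of how such a retrieval works in principle (synthetic data; the geometry and noise level are invented here, not the instrument's actual processing chain), the horizontal wind components can be fit by least squares from the radial velocities measured along the arc:

```python
import numpy as np

# Synthetic arc scan: one elevation angle, seven azimuths spanning 30 deg
elev = np.deg2rad(5.0)
az = np.deg2rad(np.linspace(75, 105, 7))
u_true, v_true = 8.0, 3.0  # eastward / northward wind components (m/s)

# Radial (line-of-sight) velocity model, neglecting vertical wind
vr = np.cos(elev) * (u_true * np.sin(az) + v_true * np.cos(az))
vr += np.random.default_rng(0).normal(0, 0.2, vr.size)  # turbulence noise

# Least-squares retrieval of (u, v) from the arc of radial velocities
G = np.cos(elev) * np.column_stack([np.sin(az), np.cos(az)])
(u_est, v_est), *_ = np.linalg.lstsq(G, vr, rcond=None)

speed = np.hypot(u_est, v_est)
direction = (np.rad2deg(np.arctan2(u_est, v_est)) + 180) % 360  # met. convention
print(u_est, v_est, speed, direction)
```

With only a 30° arc, the component transverse to the mean look direction is far less constrained than the along-beam component, which is one reason widening the azimuthal range reduces the standard error of the retrieved speed.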
Perpinan, O.; Lorenzo, E.
2011-01-15
The irradiance fluctuations and the subsequent variability of the power output of a PV system are analysed with some mathematical tools based on the wavelet transform. It can be shown that the irradiance and power time series are nonstationary processes whose behaviour resembles that of a long memory process. Besides, the long memory spectral exponent {alpha} is a useful indicator of the fluctuation level of an irradiance time series. On the other hand, a time series of global irradiance on the horizontal plane can be simulated by means of the wavestrapping technique on the clearness index, and the fluctuation behaviour of this simulated time series correctly resembles that of the original series. Moreover, a time series of global irradiance on the inclined plane can be simulated with the wavestrapping procedure applied over a signal previously detrended by a partial reconstruction with a wavelet multiresolution analysis, and, once again, the fluctuation behaviour of this simulated time series is correct. This procedure is a suitable tool for the simulation of irradiance incident over a group of distant PV plants. Finally, a wavelet variance analysis and the long memory spectral exponent show that a PV plant behaves as a low-pass filter. (author)
LENSING NOISE IN MILLIMETER-WAVE GALAXY CLUSTER SURVEYS
Hezaveh, Yashar; Vanderlinde, Keith; Holder, Gilbert; De Haan, Tijmen
2013-08-01
We study the effects of gravitational lensing, by galaxy clusters, of the background of dusty star-forming galaxies (DSFGs) and the cosmic microwave background (CMB), and examine the implications for Sunyaev-Zel'dovich-based (SZ) galaxy cluster surveys. At the locations of galaxy clusters, gravitational lensing modifies the probability distribution of the background flux of the DSFGs as well as the CMB. We find that, in the case of a single-frequency 150 GHz survey, lensing of DSFGs leads both to a slight increase ({approx}10%) in detected cluster number counts (due to a {approx}50% increase in the variance of the DSFG background, and hence an increased Eddington bias) and a rare (occurring in {approx}2% of clusters) 'filling-in' of SZ cluster signals by bright strongly lensed background sources. Lensing of the CMB leads to a {approx}55% reduction in CMB power at the location of massive galaxy clusters in a spatially matched single-frequency filter, leading to a net decrease in detected cluster number counts. We find that the increase in DSFG power and decrease in CMB power due to lensing at cluster locations largely cancel, such that the net effect on cluster number counts for current SZ surveys is subdominant to Poisson errors.
O'Connor, M; Sansourekidou, P
2014-06-01
Purpose: To evaluate how changes in imaging policy affect the magnitude of shifts applied to patients. Methods: In June 2012, the department's imaging policy was altered to require that any shifts derived from imaging throughout the course of treatment be considered systematic only after they were validated with two data points that are consistent in the same direction. Multiple additions and clarifications to the imaging policy were implemented throughout the course of the data collection, but they were mostly administrative in nature. Entered shifts were documented in MOSAIQ (Elekta AB) through the localization offset. The MOSAIQ database was queried to identify possible trends. A total of 25,670 entries were analyzed, covering four linear accelerators with a combination of MV planar, kV planar, and kV three-dimensional imaging. The monthly average of the magnitude of the shift vector was used. Plan-relative offsets were excluded. During the evaluated period of time, one of the satellite facilities acquired and implemented Vision RT (AlignRT Inc). Results: After the new policy was implemented, the variance and standard deviation of the shifts decreased. The decrease is linear with time elapsed. Vision RT implementation at one satellite facility reduced the number of overall shifts, specifically for breast patients. Conclusion: Changes in imaging policy have a significant effect on the magnitude of shifts applied to patients. Requiring two consistent data points before treating a shift as systematic decreased the overall magnitude of the shifts applied to patients.
Parameters affecting resin-anchored cable bolt performance: Results of in situ evaluations
Zelanko, J.C.; Mucho, T.P.; Compton, C.S.; Long, L.E.; Bailey, P.E.
1995-11-01
Cable bolt support techniques, including hardware and anchorage systems, continue to evolve to meet US mining requirements. For cable support systems to be successfully implemented into new ground control areas, the mechanics of this support and the potential range of performance need to be better understood. To contribute to this understanding, a series of 36 pull tests were performed on 10-ft-long cable bolts using various combinations of hole diameters, resin formulations, anchor types, and with and without resin dams. These tests provided insight as to the influence of these four parameters on cable system performance. Performance was assessed in terms of support capacity (maximum load attained in a pull test), system stiffness (assessed from two intervals of load-deformation), and the general load-deformation response. Three characteristic load-deformation responses were observed. An Analysis of Variance identified a number of main effects and interactions of significance to support capacity and stiffness. The factorial experiment performed in this study provides insight into the effects of several design parameters associated with resin-anchored cable bolts.
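For readers unfamiliar with the technique, the F statistic underlying an Analysis of Variance can be computed directly. The sketch below uses a one-way layout with invented pull-test capacities for two hole diameters; the study itself used a multi-factor factorial design, but the sums-of-squares logic is the same.

```python
import numpy as np

def one_way_anova(groups):
    """F statistic for a one-way Analysis of Variance.

    groups: list of 1-D sequences of observations, one per factor level.
    Returns F and the (between, within) degrees of freedom.
    """
    groups = [np.asarray(g, float) for g in groups]
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    # Between-group (treatment) and within-group (error) sums of squares
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = k - 1, n - k
    F = (ss_between / df_b) / (ss_within / df_w)
    return F, (df_b, df_w)

# Invented pull-test capacities (tons) for two hole diameters
cap_small = [25.1, 26.3, 24.8, 25.9, 26.0, 25.4]
cap_large = [27.9, 28.4, 27.2, 28.8, 27.5, 28.1]
F, (df_b, df_w) = one_way_anova([cap_small, cap_large])
print(F, df_b, df_w)
```

A large F relative to the F distribution with (df_b, df_w) degrees of freedom indicates a significant main effect of the factor, which is how the study's significant effects on capacity and stiffness would be identified.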
Fowler, Michael J.; Howard, Marylesa; Luttman, Aaron; Mitchell, Stephen E.; Webb, Timothy J.
2015-06-03
One of the primary causes of blur in a high-energy X-ray imaging system is the shape and extent of the radiation source, or ‘spot’. It is important to be able to quantify the size of the spot as it provides a lower bound on the recoverable resolution for a radiograph, and penumbral imaging methods – which involve the analysis of blur caused by a structured aperture – can be used to obtain the spot’s spatial profile. We present a Bayesian approach for estimating the spot shape that, unlike variational methods, is robust to the initial choice of parameters. The posterior is obtained from a normal likelihood, which was constructed from a weighted least squares approximation to a Poisson noise model, and prior assumptions that enforce both smoothness and non-negativity constraints. A Markov chain Monte Carlo algorithm is used to obtain samples from the target posterior, and the reconstruction and uncertainty estimates are the computed mean and variance of the samples, respectively. Lastly, synthetic data-sets are used to demonstrate accurate reconstruction, while real data taken with high-energy X-ray imaging systems are used to demonstrate applicability and feasibility.
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Griffin, Joshua D. (Sandia National lababoratory, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National lababoratory, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane (Sandia National lababoratory, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.
2015-01-01
The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.
Species interactions differ in their genetic robustness
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Chubiz, Lon M.; Granger, Brian R.; Segre, Daniel; Harcombe, William R.
2015-04-14
Conflict and cooperation between bacterial species drive the composition and function of microbial communities. Stability of these emergent properties will be influenced by the degree to which species' interactions are robust to genetic perturbations. We use genome-scale metabolic modeling to computationally analyze the impact of genetic changes when Escherichia coli and Salmonella enterica compete, or cooperate. We systematically knocked out in silico each reaction in the metabolic network of E. coli to construct all 2583 mutant stoichiometric models. Then, using a recently developed multi-scale computational framework, we simulated the growth of each mutant E. coli in the presence of S. enterica. The type of interaction between species was set by modulating the initial metabolites present in the environment. We found that the community was most robust to genetic perturbations when the organisms were cooperating. Species ratios were more stable in the cooperative community, and community biomass had equal variance in the two contexts. Additionally, the number of mutations that have a substantial effect is lower when the species cooperate than when they are competing. In contrast, when mutations were added to the S. enterica network the system was more robust when the bacteria were competing. These results highlight the utility of connecting metabolic mechanisms and studies of ecological stability. Cooperation and conflict alter the connection between genetic changes and properties that emerge at higher levels of biological organization.
Sisterson, D. L.
2011-02-01
Individual raw datastreams from instrumentation at the Atmospheric Radiation Measurement (ARM) Climate Research Facility fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near-real time. Raw and processed data are then sent approximately daily to the ARM Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of processed data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual datastream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the first quarter of FY2010 for the Southern Great Plains (SGP) site is 2097.60 hours (0.95 x 2208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1987.20 hours (0.90 x 2208) and for the Tropical Western Pacific (TWP) locale is 1876.80 hours (0.85 x 2208). The first ARM Mobile Facility (AMF1) deployment in Graciosa Island, the Azores, Portugal, continued through this quarter, so the OPSMAX time this quarter is 2097.60 hours (0.95 x 2208). The second ARM Mobile Facility (AMF2) began deployment this quarter to Steamboat Springs, Colorado. The experiment officially began November 15, but most of the instruments were up and running by November 1. Therefore, the OPSMAX time for the AMF2 was 1390.80 hours (0.95 x 1464 hours) for November and December (61 days). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events.
It is impractical to measure OPSMAX for each instrument or datastream. Data availability reported here refers to the average of the individual, continuous datastreams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. Summary: Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period October 1-December 31, 2010, for the fixed sites. Because the AMFs operate episodically, the AMF statistics are reported separately and not included in the aggregate average with the fixed sites. This first quarter comprises a total of 2,208 possible hours for the fixed sites and the AMF1 and 1,464 possible hours for the AMF2. The average of the fixed sites exceeded our goal this quarter. The AMF1 has essentially completed its mission and is shutting down to pack up for its next deployment to India. Although all the raw data from the operational instruments are in the Archive for the AMF2, only the processed data are tabulated. Approximately half of the AMF2 instruments have data that were fully processed, resulting in 46% of all possible data being made available to users through the Archive for this first quarter. Typically, raw data are not made available to users unless specifically requested.
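The DOE operating statistics used throughout these reports follow mechanically from the definitions quoted above; here is a minimal sketch (function names are invented):

```python
# OPSMAX is the uptime goal: the planned-uptime fraction applied to the
# total hours in the reporting period. VARIANCE is the unplanned-downtime
# fraction, VARIANCE = 1 - (ACTUAL / OPSMAX).
def opsmax(goal_fraction, hours_in_period):
    return goal_fraction * hours_in_period

def variance(actual_hours, opsmax_hours):
    return 1.0 - actual_hours / opsmax_hours

# Check the SGP figure quoted for this quarter: 92 days x 24 h = 2208 h
sgp_opsmax = opsmax(0.95, 2208)      # ~2097.6 hours, as reported
# A hypothetical quarter with 2000 actual operating hours:
unplanned = variance(2000, sgp_opsmax)
print(sgp_opsmax, unplanned)
```

A VARIANCE of 0 means the site met its uptime goal exactly; a negative value indicates operation exceeding the goal.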
Sisterson, D. L.
2009-10-15
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near-real time. Raw and processed data are then sent approximately daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the fourth quarter of FY 2009 for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 x 2,208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 x 2,208) and for the Tropical Western Pacific (TWP) locale is 1,876.8 hours (0.85 x 2,208). The ARM Mobile Facility (AMF) was officially operational May 1 in Graciosa Island, the Azores, Portugal, so the OPSMAX time this quarter is 2,097.60 hours (0.95 x 2,208). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive result from downtime (scheduled or unplanned) of the individual instruments.
Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period July 1 - September 30, 2009, for the fixed sites. Because the AMF operates episodically, the AMF statistics are reported separately and not included in the aggregate average with the fixed sites. The fourth quarter comprises a total of 2,208 hours for the fixed and mobile sites. The average of the fixed sites well exceeded our goal this quarter. The AMF data statistic requires explanation. Since the AMF radar data ingest software is being modified, the data are being stored in the DMF for data processing. Hence, the data are not at the Archive; they are anticipated to become available by the next report.
Sisterson, D. L.
2007-07-26
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the third quarter of FY 2007 for the Southern Great Plains (SGP) site is 2,074.8 hours (0.95 x 2,184 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,965.6 hours (0.90 x 2,184), and that for the Tropical Western Pacific (TWP) locale is 1,856.4 hours (0.85 x 2,184). The OPSMAX time for the ARM Mobile Facility (AMF) is 2,074.8 hours (0.95 x 2,184). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. 
Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 91 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), the actual hours of operation, and the variance (unplanned downtime) for the period April 1 through June 30, 2007, for the fixed sites only. The AMF has been deployed to Germany and is operational this quarter. The third quarter comprises a total of 2,184 hours. Although the average exceeded our goal this quarter, cash-flow issues resulting from the Continuing Resolution early in the period prevented timely instrument repairs and kept our statistics lower than in past quarters at all sites. The low NSA numbers resulted from missing MFRSR data this spring; the data appear to be recoverable but were not available at the Archive at the time of this report.
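The OPSMAX and VARIANCE arithmetic used throughout these quarterly reports can be sketched in a few lines of Python (a minimal illustration using the figures quoted above; the function names are ours, not ARM's):

```python
def opsmax(total_hours, uptime_goal):
    """Estimated maximum operation time: the uptime goal applied to the
    total hours in the quarter, accounting for planned downtime."""
    return uptime_goal * total_hours

def variance_metric(actual_hours, opsmax_hours):
    """Unplanned-downtime fraction as defined by DOE: 1 - (ACTUAL / OPSMAX)."""
    return 1.0 - actual_hours / opsmax_hours

# Third quarter of FY 2007: 91 days x 24 h/day = 2,184 hours.
quarter_hours = 91 * 24
sgp_opsmax = opsmax(quarter_hours, 0.95)  # 2,074.8 hours for the SGP site
nsa_opsmax = opsmax(quarter_hours, 0.90)  # 1,965.6 hours for the NSA locale
```

A site that logged, say, 1,800 actual hours against a 2,000-hour OPSMAX would report a variance of 0.10, i.e., 10% unplanned downtime.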
Sisterson, D. L.
2009-07-14
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near-real time. Raw and processed data are then sent approximately daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the third quarter of FY 2009 for the Southern Great Plains (SGP) site is 2,074.80 hours (0.95 x 2,184 hours this quarter); for the North Slope Alaska (NSA) locale it is 1,965.60 hours (0.90 x 2,184); and for the Tropical Western Pacific (TWP) locale it is 1,856.40 hours (0.85 x 2,184). The ARM Mobile Facility (AMF) was officially operational May 1 on Graciosa Island in the Azores, Portugal, so the OPSMAX time this quarter is 1,390.80 hours (0.95 x 1,464). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments.
Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 91 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for April 1 - June 30, 2009, for the fixed sites. Because the AMF operates episodically, the AMF statistics are reported separately and are not included in the aggregate average with the fixed sites. The AMF statistics for this reporting period were not available at the time of this report. The third quarter comprises a total of 2,184 hours for the fixed sites. The average well exceeded our goal this quarter.
Atmospheric Radiation Measurement program climate research facility operations quarterly report.
Sisterson, D. L.; Decision and Information Sciences
2006-09-06
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year dating back to 1998. The U.S. Department of Energy requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1-(ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the third quarter for the Southern Great Plains (SGP) site is 2,074.80 hours (0.95 x 2,184 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,965.60 hours (0.90 x 2,184), and that for the Tropical Western Pacific (TWP) locale is 1,856.40 hours (0.85 x 2,184). The OPSMAX time for the ARM Mobile Facility (AMF) is 2,074.80 hours (0.95 x 2,184). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. 
Thus, the average percent of data in the Archive represents the average percent of the time (24 hours per day, 91 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), the actual hours of operation, and the variance (unplanned downtime) for the period April 1 through June 30, 2006, for the fixed and mobile sites. Although the AMF is currently up and running in Niamey, Niger, Africa, the AMF statistics are reported separately and not included in the aggregate average with the fixed sites. The third quarter comprises a total of 2,184 hours. For all fixed sites (especially the TWP locale) and the AMF, the actual data availability (and therefore the actual hours of operation) exceeded the individual operational goals, as well as the aggregate average goal for the fixed sites, for the third quarter of fiscal year (FY) 2006.
Sisterson, D. L.
2009-04-23
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the second quarter of FY 2009 for the Southern Great Plains (SGP) site is 2,052.00 hours (0.95 x 2,160 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,944.00 hours (0.90 x 2,160), and for the Tropical Western Pacific (TWP) locale is 1,836.00 hours (0.85 x 2,160). The OPSMAX time for the ARM Mobile Facility (AMF) is not reported this quarter because not all of the metadata have been acquired that are used to generate this metric. The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. 
Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 90 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period January 1 - March 31, 2009, for the fixed sites. The AMF has completed its mission in China, but not all of the data can be released to the public at the time of this report. The second quarter comprises a total of 2,160 hours. The average exceeded our goal this quarter.
Detection and Production of Methane Hydrate
George Hirasaki; Walter Chapman; Gerald Dickens; Colin Zelt; Brandon Dugan; Kishore Mohanty; Priyank Jaiswal
2011-12-31
This project seeks to understand regional differences in gas hydrate systems from the perspective of gas hydrate as an energy resource, a geohazard, and a long-term climate influence. Specifically, the effort will: (1) collect data and conceptual models that target causes of gas hydrate variance, (2) construct numerical models that explain and predict regional-scale gas hydrate differences in 2 dimensions with minimal 'free parameters', (3) simulate hydrocarbon production from various gas hydrate systems to establish promising resource characteristics, (4) perturb different gas hydrate systems to assess potential impacts of hot fluids on seafloor stability and well stability, and (5) develop geophysical approaches that enable remote quantification of gas hydrate heterogeneities so that they can be characterized with minimal costly drilling. Our integrated program takes advantage of the fact that we have a close working team of experts in distinct disciplines. The expected outcomes of this project are improved exploration and production technology for producing natural gas from methane hydrates and improved safety through understanding of seafloor and well bore stability in the presence of hydrates. The scope of this project was to more fully characterize, understand, and appreciate fundamental differences in the amount and distribution of gas hydrate and how these differences affect the production potential of a hydrate accumulation in the marine environment. The effort combines existing information from ocean locations dominated by low-permeability sediments with small amounts of high-permeability sediments, one permafrost location where extensive hydrates exist in reservoir-quality rocks, and other locations deemed appropriate by mutual agreement of DOE and Rice. The initial ocean locations were Blake Ridge, Hydrate Ridge, the Peru Margin, and the Gulf of Mexico (GOM). The permafrost location was Mallik.
Although the ultimate goal of the project was to understand processes that control the production potential of hydrates in marine settings, Mallik was included because of the extensive data collected there in a producible hydrate accumulation. To date, no such location has been studied in the oceanic environment. The project worked closely with ongoing projects (e.g., the GOM JIP and offshore India) that are actively investigating potentially economic hydrate accumulations in marine settings. The overall approach was fivefold: (1) collect key data concerning hydrocarbon fluxes, which are currently missing at all locations included in the study; (2) use these and existing data to build numerical models that can explain gas hydrate variance at all four locations; (3) simulate how natural gas could be produced from each location with different production strategies; (4) collect new sediment property data at these locations, as required for constraining fluxes, running production simulations, and assessing sediment stability; and (5) develop a method for remotely quantifying heterogeneities in gas hydrate and free gas distributions. While we generally restricted our efforts to locations where key parameters can be measured or constrained, our ultimate aim was to make our efforts universally applicable to any hydrate accumulation.
Simulating a Nationally Representative Housing Sample Using EnergyPlus
Hopkins, Asa S.; Lekov, Alex; Lutz, James; Rosenquist, Gregory; Gu, Lixing
2011-03-04
This report presents a new simulation tool under development at Lawrence Berkeley National Laboratory (LBNL). This tool uses EnergyPlus to simulate each single-family home in the Residential Energy Consumption Survey (RECS), and generates a calibrated, nationally representative set of simulated homes whose energy use is statistically indistinguishable from the energy use of the single-family homes in the RECS sample. This research builds upon earlier work by Ritchard et al. for the Gas Research Institute and Huang et al. for LBNL. A representative national sample allows us to evaluate the variance in energy use between individual homes, regions, or other subsamples; using this tool, we can also evaluate how that variance affects the impacts of potential policies. The RECS contains information regarding the construction and location of each sampled home, as well as its appliances and other energy-using equipment. We combined these data with the home simulation prototypes developed by Huang et al. to simulate homes that match the RECS sample wherever possible. Where data were not available, we used distributions calibrated against the RECS energy use data. Each home was assigned a best-fit location for the purposes of weather and some construction characteristics. RECS provides some detail on the type and age of heating, ventilation, and air-conditioning (HVAC) equipment in each home; we developed EnergyPlus models capable of reproducing the variety of technologies and efficiencies represented in the national sample. This includes electric, gas, and oil furnaces, central and window air conditioners, central heat pumps, and baseboard heaters. We also developed a model of duct system performance, based on in-home measurements, and integrated this with fan performance to capture the energy use of single- and variable-speed furnace fans, as well as the interaction of duct and fan performance with the efficiency of heating and cooling equipment.
Comparison with RECS revealed that EnergyPlus did not capture the heating-side behavior of heat pumps particularly accurately, and that our simple oil furnace and boiler models needed significant recalibration to fit with RECS. Simulating the full RECS sample on a single computer would take many hours, so we used the 'cloud computing' services provided by Amazon.com to simulate dozens of homes at once. This enabled us to simulate the full RECS sample, including multiple versions of each home to evaluate the impact of marginal changes, in less than 3 hours. Once the tool was calibrated, we were able to address several policy questions. We made a simple measurement of the heat replacement effect and showed that the net effect of heat replacement on primary energy use is likely to be less than 5%, relative to appliance-only measures of energy savings. Fuel switching could be significant, however. We also evaluated the national and regional impacts of a variety of 'overnight' changes in building characteristics or occupant behavior, including lighting, home insulation and sealing, HVAC system efficiency, and thermostat settings. For example, our model shows that the combination of increased home insulation and better sealed building shells could reduce residential natural gas use by 34.5% and electricity use by 6.5%, and a 1 degree rise in summer thermostat settings could save 2.1% of home electricity use. These results vary by region, and we present results for each U.S. Census division. We conclude by offering proposals for future work to improve the tool. 
Some proposed future work includes: comparing the simulated energy use data with the monthly RECS bill data; better capturing the variation in behavior between households, especially as it relates to occupancy and schedules; improving the characterization of recent construction and its regional variation; and extending the general framework of this simulation tool to capture multifamily housing units, such as apartment buildings.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Kleinman, Lawrence; Kuang, Chongai; Sedlacek, Arthur; Senum, Gunnar; Springston, Stephen; Wang, Jian; Zhang, Qi; Jayne, John; Fast, Jerome; Hubbe, John; et al
2016-02-15
During the Carbonaceous Aerosols and Radiative Effects Study (CARES), the US Department of Energy (DOE) G-1 aircraft was used to sample aerosol and gas phase compounds in the Sacramento, CA, plume and surrounding region. We present data from 66 plume transects obtained during 13 flights in which southwesterly winds transported the plume towards the foothills of the Sierra Nevada. Plume transport occurred partly over land with high isoprene emission rates. Our objective is to empirically determine whether organic aerosol (OA) can be attributed to anthropogenic or biogenic sources, and to determine whether there is a synergistic effect whereby OA concentrations are enhanced by the simultaneous presence of high concentrations of carbon monoxide (CO) and either isoprene, MVK+MACR (the sum of methyl vinyl ketone and methacrolein), or methanol, which are taken as tracers of anthropogenic and biogenic emissions, respectively. Linear and bilinear correlations between OA, CO, and each of three biogenic tracers, “Bio”, for individual plume transects indicate that most of the variance in OA over short timescales and distance scales can be explained by CO. For each transect and species, a plume perturbation (i.e., ΔOA, defined as the difference between the 90th and 10th percentiles) was defined, and regressions were done amongst Δ values in order to probe day-to-day and location-dependent variability. Species that predicted the largest fraction of the variance in ΔOA were ΔO3 and ΔCO. Background OA was highly correlated with background methanol and poorly correlated with other tracers. Because background OA was ~60% of peak OA in the urban plume, peak OA should be primarily biogenic and therefore non-fossil, even though the day-to-day and spatial variability of plume OA is best described by an anthropogenic tracer, CO. Transects were split into subsets according to the percentile rankings of ΔCO and ΔBio, similar to an approach used by Setyan et al. (2012) and Shilling et al.
(2013) to determine if anthropogenic–biogenic (A–B) interactions enhance OA production. As found earlier, ΔOA in the data subset having high ΔCO and high ΔBio was several-fold greater than in the other subsets. Part of this difference is consistent with a synergistic interaction between anthropogenic and biogenic precursors, and part with an independent linear dependence of ΔOA on precursors. The highest values of ΔO3, along with high temperatures, clear skies, and poor ventilation, also occurred in the high-ΔCO–high-ΔBio data set. A complicated mix of A–B interactions can result. After taking into account linear effects as predicted from low-concentration data, an A–B enhancement of OA by a factor of 1.2 to 1.5 is estimated.
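The per-transect perturbation statistic defined above (Δ as the 90th-minus-10th-percentile difference) is straightforward to reproduce. The sketch below uses a plain linear-interpolation percentile; the concentration values are invented for illustration, not CARES data:

```python
def percentile(values, p):
    """Percentile (0-100) with linear interpolation between order statistics."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(xs) - 1)
    return xs[f] + (k - f) * (xs[c] - xs[f])

def plume_perturbation(values):
    """Delta of a species across one transect: 90th minus 10th percentile."""
    return percentile(values, 90) - percentile(values, 10)

# Invented OA concentrations (ug/m^3) along a single transect:
oa = [2.1, 2.3, 2.0, 4.8, 5.5, 5.1, 2.4, 2.2]
delta_oa = plume_perturbation(oa)
```

Using percentiles rather than a simple max-minus-background difference makes Δ robust to single-point spikes within a transect.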
Inconsistent Investment and Consumption Problems
Kronborg, Morten Tolver; Steffensen, Mogens
2015-06-15
In a traditional Black–Scholes market, we develop a verification theorem for a general class of investment and consumption problems where the standard dynamic programming principle does not hold. The theorem is an extension of the standard Hamilton–Jacobi–Bellman equation in the form of a system of non-linear differential equations. We derive the optimal investment and consumption strategy for a mean-variance investor without pre-commitment endowed with labor income. In the case of constant risk aversion, it turns out that the optimal amount of money to invest in stocks is independent of wealth. The optimal consumption strategy is given as a deterministic bang-bang strategy. In order to have a more realistic model, we allow the risk aversion to be time and state dependent. Of special interest is the case where the risk aversion is inversely proportional to present wealth plus the financial value of future labor income net of consumption. Using the verification theorem, we give a detailed analysis of this problem. It turns out that the optimal amount of money to invest in stocks is given by a linear function of wealth plus the financial value of future labor income net of consumption. The optimal consumption strategy is again given as a deterministic bang-bang strategy. We also calculate, for a general time- and state-dependent risk aversion function, the optimal investment and consumption strategy for a mean-standard-deviation investor without pre-commitment. In that case, it turns out that it is optimal to take no risk at all.
Hydroacoustic Evaluation of Fish Passage Through Bonneville Dam in 2005
Ploskey, Gene R.; Weiland, Mark A.; Zimmerman, Shon A.; Hughes, James S.; Bouchard, Kyle E.; Fischer, Eric S.; Schilt, Carl R.; Hanks, Michael E.; Kim, Jina; Skalski, John R.; Hedgepeth, J.; Nagy, William T.
2006-12-04
The Portland District of the U.S. Army Corps of Engineers requested that the Pacific Northwest National Laboratory (PNNL) conduct fish-passage studies at Bonneville Dam in 2005. These studies support the Portland District's goal of maximizing fish-passage efficiency (FPE) and obtaining 95% survival for juvenile salmon passing Bonneville Dam. Major passage routes include 10 turbines and a sluiceway at Powerhouse 1 (B1), an 18-bay spillway, and eight turbines and a sluiceway at Powerhouse 2 (B2). In this report, we present results of two studies related to juvenile salmonid passage at Bonneville Dam. The studies were conducted between April 16 and July 15, 2005, encompassing most of the spring and summer migrations. Studies included evaluations of (1) Project fish passage efficiency and other major passage metrics, and (2) smolt approach and fate at B1 Sluiceway Outlet 3C from the B1 forebay. Some of the large appendices are presented only on the compact disk (CD) that accompanies the final report. Examples include six large comma-separated values (CSV) files of hourly fish passage, hourly variances, and Project operations for spring and summer from Appendix E, and large Audio Video Interleave (AVI) files with DIDSON movie clips of the area upstream of B1 Sluiceway Outlet 3C (Appendix H). Those video clips show smolts approaching the outlet, predators feeding on smolts, and vortices that sometimes entrained approaching smolts into turbines. The CD also includes Adobe Acrobat Portable Document Format (PDF) files of the entire report and appendices.
Mirocha, Jeffrey D.; Rajewski, Daniel A.; Marjanovic, Nikola; Lundquist, Julie K.; Kosovic, Branko; Draxl, Caroline; Churchfield, Matthew J.
2015-08-27
In this study, wind turbine impacts on the atmospheric flow are investigated using data from the Crop Wind Energy Experiment (CWEX-11) and large-eddy simulations (LESs) utilizing a generalized actuator disk (GAD) wind turbine model. CWEX-11 employed velocity-azimuth display (VAD) data from two Doppler lidar systems to sample vertical profiles of flow parameters across the rotor depth both upstream and in the wake of an operating 1.5 MW wind turbine. Lidar and surface observations obtained during four days of July 2011 are analyzed to characterize the turbine impacts on wind speed and flow variability, and to examine the sensitivity of these changes to atmospheric stability. Significant velocity deficits (VD) are observed at the downstream location during both convective and stable portions of four diurnal cycles, with large, sustained deficits occurring during stable conditions. Variances of the streamwise velocity component, σ_u, likewise show large increases downstream during both stable and unstable conditions, with stable conditions supporting sustained small increases of σ_u, while convective conditions featured both larger magnitudes and increased variability, due to the large coherent structures in the background flow. Two representative case studies, one stable and one convective, are simulated using LES with a GAD model at 6 m resolution to evaluate the compatibility of the simulation framework with validation using vertically profiling lidar data in the near-wake region. Virtual lidars were employed to sample the simulated flow field in a manner consistent with the VAD technique. Simulations reasonably reproduced aggregated wake VD characteristics, albeit with smaller magnitudes than observed, while σ_u values in the wake are more significantly underestimated. The results illuminate the limitations of using a GAD in combination with coarse model resolution in the simulation of near-wake physics, and validation thereof using VAD data.
Multiple-point statistical prediction on fracture networks at Yucca Mountain
Liu, X.Y; Zhang, C.Y.; Liu, Q.S.; Birkholzer, J.T.
2009-05-01
In many underground nuclear waste repository systems, such as at Yucca Mountain, the water flow rate and the amount of water seepage into the waste emplacement drifts are mainly determined by the hydrological properties of the fracture network in the surrounding rock mass. A natural fracture network system is not easy to describe, especially with respect to its connectivity, which is critically important for simulating the water flow field. In this paper, we introduce a new method for fracture network description and prediction, termed multiple-point statistics (MPS). The MPS method records multiple-point statistics concerning the connectivity patterns of a fracture network from a known fracture map, and reproduces multiple-scale training fracture patterns in a stochastic manner, implicitly and directly. It is applied to fracture data to study flow field behavior in the Yucca Mountain waste repository system. First, the MPS method is used to create a fracture network from an original fracture training image from the Yucca Mountain dataset. After adopting harmonic and arithmetic averaging to upscale the permeability to a coarse grid, a THM simulation is carried out to study near-field water flow around the waste emplacement drifts. Our study shows that the connectivity or patterns of fracture networks can be grasped and reconstructed by MPS methods. In theory, this will lead to better prediction of fracture system characteristics and flow behavior. Meanwhile, we can obtain the variance of the flow field, which gives us a way to quantify model uncertainty even in complicated coupled THM simulations. This indicates that MPS can potentially characterize and reconstruct natural fracture networks in a fractured rock mass, with the advantage of quantifying the connectivity of the fracture system and its simulation uncertainty simultaneously.
Towards an Optimal Gradient-dependent Energy Functional of the PZ-SIC Form
Jónsson, Elvar Örn; Lehtola, Susi; Jónsson, Hannes
2015-06-01
Results of Perdew–Zunger self-interaction corrected (PZ-SIC) density functional theory calculations of the atomization energy of 35 molecules are compared to those of high-level quantum chemistry calculations. While the PBE functional, which is commonly used in calculations of condensed matter, is known on average to predict too high an atomization energy (overbinding of the molecules), the application of PZ-SIC gives a large overcorrection and leads to significant underestimation of the atomization energy. The exchange enhancement factor that is optimal for the generalized gradient approximation within the Kohn–Sham (KS) approach may not be optimal for the self-interaction corrected functional. The PBEsol functional, where the exchange enhancement factor was optimized for solids, gives poor results for molecules in KS calculations but turns out to work better than PBE in PZ-SIC calculations. The exchange enhancement is weaker in PBEsol, and the functional is closer to the local density approximation. Furthermore, the drop in the exchange enhancement factor for increasing reduced gradient in the PW91 functional gives more accurate results than the plateaued enhancement in the PBE functional. A step towards an optimal exchange enhancement factor for a gradient-dependent functional of the PZ-SIC form is taken by constructing an exchange enhancement factor that mimics PBEsol for small values of the reduced gradient and PW91 for large values. The average atomization energy is then in closer agreement with the high-level quantum chemistry calculations, but the variance is still large, the F_2 molecule being a notable outlier.
Mohamed, Alina Rahayu; Hamzah, Zainab; Daud, Mohamed Zulkali Mohamed
2014-07-10
The production of crude palm oil from the processing of palm fresh fruit bunches in Malaysian palm oil mills has resulted in the accumulation of a huge quantity of empty fruit bunches (EFB). The EFB was used as a feedstock in the pyrolysis process using a fixed-bed reactor in the present study. The optimization of process parameters such as pyrolysis temperature (factor A), biomass particle size (factor B), and holding time (factor C) was investigated through a Central Composite Design (CCD) using Stat-Ease Design Expert software version 7, with bio-oil yield as the response. Twenty experimental runs were conducted. The results were analyzed by Analysis of Variance (ANOVA). The model was statistically significant, and all factors studied were significant, with p-values < 0.05. The pyrolysis temperature (factor A) was the most significant parameter because its F-value of 116.29 was the highest. The value of R{sup 2} was 0.9564, which indicated that the selected factors and their levels correlated strongly with the production of bio-oil from the EFB pyrolysis process. A quadratic model equation was developed and employed to predict the highest theoretical bio-oil yield. The maximum bio-oil yield of 46.2% was achieved at a pyrolysis temperature of 442.15 °C using an EFB particle size of 866 μm (within the 710–1000 μm range) and a holding time of 483 seconds.
Diversity combining in laser Doppler vibrometry for improved signal reliability
Dräbenstedt, Alexander
2014-05-27
Because of the speckle nature of the light reflected from rough surfaces, the signal quality of a vibrometer suffers from varying signal power. Deep signal outages manifest themselves as noise bursts and spikes in the demodulated velocity signal. Here we show that the signal quality of a single-point vibrometer can be substantially improved by diversity reception. This concept is widely used in RF communication and can be transferred to optical interferometry. When two statistically independent measurement channels are available that measure the same motion on the same spot, the probability for both channels to see a signal drop-out at the same time is very low. We built a prototype instrument that uses polarization diversity to constitute two independent reception channels, which are separately demodulated into velocity signals. Send and receive beams go through different parts of the aperture so that the beams can be spatially separated. The two velocity channels are mixed into one more reliable signal by a PC program in real time with the help of the signal power information. An algorithm has been developed that mixes two or more channels with minimum resulting variance. The combination algorithm also delivers an equivalent signal power for the combined signal. The combined signal lacks the vast majority of the spikes that are present in the raw signals, and it extracts the true vibration information present in both channels. A statistical analysis shows that the probability of deep signal outages is greatly decreased; a 60-fold improvement can be shown. The reduction of spikes and noise bursts also reduces the noise in the spectral analysis of vibrations. Over certain frequency bands, a reduction of the noise density by a factor above 10 can be shown.
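A minimum-variance combination of diversity channels can be sketched as inverse-variance weighting. Assuming (our assumption; the abstract does not state the exact weighting) that each channel's demodulation noise variance scales inversely with its instantaneous signal power, the optimal weights are simply proportional to the channel powers:

```python
def combine_channels(velocities, powers):
    """Minimum-variance combination of simultaneous velocity samples from
    two or more diversity channels.

    Sketch assumption: the noise variance of channel i is proportional to
    1/powers[i], so the inverse-variance weight of channel i is proportional
    to powers[i]. Returns the combined velocity and an equivalent signal
    power (the sum of the channel powers).
    """
    total_power = sum(powers)
    combined = sum(p * v for p, v in zip(powers, velocities)) / total_power
    return combined, total_power

# A strong channel dominates a weak (dropped-out) one, suppressing its spike:
v, p = combine_channels([1.0, 30.0], [0.99, 0.01])
```

With equal powers the combination reduces to a plain average; when one channel drops out, its near-zero weight keeps its noise burst out of the combined signal.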
Martin, Spencer; Rodrigues, George; Patil, Nikhilesh; Bauman, Glenn; D'Souza, David; Sexton, Tracy; Palma, David; Louie, Alexander V.; Khalvati, Farzad; Tizhoosh, Hamid R.; Gaede, Stewart
2013-01-01
Purpose: To perform a rigorous technological assessment and statistical validation of a software technology for anatomic delineations of the prostate on MRI datasets. Methods and Materials: A 3-phase validation strategy was used. Phase I consisted of anatomic atlas building using 100 prostate cancer MRI data sets to provide training data sets for the segmentation algorithms. In phase II, 2 experts contoured 15 new MRI prostate cancer cases using 3 approaches (manual, N points, and region of interest). In phase III, 5 new physicians with variable MRI prostate contouring experience segmented the same 15 phase II datasets using 3 approaches: manual, N points with no editing, and full autosegmentation with user editing allowed. Statistical analyses for time and accuracy (using Dice similarity coefficient) endpoints used traditional descriptive statistics, analysis of variance, analysis of covariance, and pooled Student t test. Results: In phase I, average (SD) total and per slice contouring time for the 2 physicians was 228 (75), 17 (3.5), 209 (65), and 15 seconds (3.9), respectively. In phase II, statistically significant differences in physician contouring time were observed based on physician, type of contouring, and case sequence. The N points strategy resulted in superior segmentation accuracy when initial autosegmented contours were compared with final contours. In phase III, statistically significant differences in contouring time were observed based on physician, type of contouring, and case sequence again. The average relative timesaving for N points and autosegmentation were 49% and 27%, respectively, compared with manual contouring. The N points and autosegmentation strategies resulted in average Dice values of 0.89 and 0.88, respectively. Pre- and postedited autosegmented contours demonstrated a higher average Dice similarity coefficient of 0.94. Conclusion: The software provided robust contours with minimal editing required. 
Time savings were observed for all physicians, irrespective of experience level and baseline manual contouring speed.
1997-12-31
The US DOE has initiated a program for advanced turbine systems (ATS) that will serve industrial power generation markets. The ATS will provide ultra-high efficiency, environmental superiority, and cost competitiveness. The ATS will foster (1) early market penetration that enhances the global competitiveness of US industry, (2) public health benefits resulting from reduced exhaust gas emissions of target pollutants, (3) reduced cost of power used in the energy-intensive industrial marketplace, and (4) the retention and expansion of the skilled US technology base required for the design, development and maintenance of state-of-the-art advanced turbine products. The Industrial ATS Development and Demonstration program is a multi-phased effort. Solar Turbines Incorporated (Solar) has participated in Phases 1 and 2 of the program. On September 14, 1995 Solar was awarded a Cooperative Agreement for Phases 3 and 4 of the program. Phase 3 of the work is separated into two subphases: Phase 3A entails Component Design and Development; Phase 3B will involve Integrated Subsystem Testing. Phase 4 will cover Host Site Testing. Forecasts call for completion of the program within budget as originally estimated. Scheduled completion is forecasted to be approximately 3 years late to the original plan. This delay has been intentionally planned in order to better match program tasks to the anticipated availability of DOE funds. To ensure the timely realization of DOE/Solar program goals, the development schedule for the smaller system (Mercury 50) and enabling technologies has been maintained, and commissioning of the field test unit is scheduled for May of 2000. As of the end of the reporting period, work on the program is 22.80% complete based upon milestones completed. This measurement is considered quite conservative, as numerous drawings on the Mercury 50 are near release. Variance information is provided in Section 4.0, Program Management.
Robustness analysis of an air heating plant and control law by using polynomial chaos
Colón, Diego; Ferreira, Murillo A. S.; Bueno, Átila M.; Balthazar, José M.; Rosa, Suélia S. R. F. de
2014-12-10
This paper presents a robustness analysis of an air heating plant with a multivariable closed-loop control law by using the polynomial chaos methodology (MPC). The plant consists of a PVC tube with a fan at the air input (which forces the air through the tube) and a mass flux sensor at the output. A heating resistance warms the air as it flows inside the tube, and a thermocouple sensor measures the air temperature. The plant thus has two inputs (the fan's rotation intensity and the heat generated by the resistance, both measured in percent of the maximum value) and two outputs (air temperature and air mass flux, also in percent of the maximum value). The mathematical model is obtained by system identification techniques. The mass flux sensor, which is nonlinear, is linearized, and the delays in the transfer functions are properly approximated by non-minimum-phase transfer functions. The resulting model is transformed to a state-space model, which is used for control design purposes. The multivariable robust control design technique used is LQG/LTR, and the controllers are validated in simulation software and on the real plant. Finally, the MPC is applied by considering some of the system's parameters as random variables (one at a time), and the system's stochastic differential equations are solved by expanding the solution (a stochastic process) in an orthogonal basis of polynomial functions of the basic random variables. This method transforms the stochastic equations into a set of deterministic differential equations, which can be solved by traditional numerical methods (this is the MPC). Statistical data for the system (such as expected values and variances) are then calculated. The effects of randomness in the parameters are evaluated on the open-loop and closed-loop pole positions.
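The core of the polynomial chaos step (expand the stochastic solution in orthogonal polynomials of the basic random variable, then read statistics off the coefficients) can be illustrated on a toy problem. Here y = exp(ξ) with ξ ~ N(0,1) stands in for the plant's stochastic response, and probabilists' Hermite polynomials form the basis; this is a generic textbook PCE, not the paper's plant model:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Hedged sketch of a polynomial chaos expansion: y = exp(xi), xi ~ N(0,1),
# expanded in probabilists' Hermite polynomials He_k (orthogonal w.r.t. the
# standard normal density, with ||He_k||^2 = k!).
nodes, weights = He.hermegauss(30)        # Gauss-Hermite (probabilists') rule
weights = weights / np.sqrt(2 * np.pi)    # normalize to the standard normal pdf

order = 10
fact = np.cumprod([1] + list(range(1, order + 1)))  # 0!, 1!, ..., order!
coef = []
for k in range(order + 1):
    Hk = He.hermeval(nodes, [0] * k + [1])          # He_k at quadrature nodes
    coef.append(np.sum(weights * np.exp(nodes) * Hk) / fact[k])
coef = np.array(coef)

# Mean and variance follow directly from the PCE coefficients
mean_pce = coef[0]
var_pce = np.sum(coef[1:] ** 2 * fact[1:])
# Exact lognormal moments for comparison: mean e^{1/2}, variance e(e-1)
print(mean_pce, np.exp(0.5))
print(var_pce, np.e * (np.e - 1))
```

For the plant, the same expansion is applied to the solution of the stochastic differential equations, yielding deterministic equations for the coefficients.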
Dahmardeh, M.; Forghani, F.; Khammari, E.
2008-01-30
Corn is one of the world's three major grains; it was domesticated in southern Mexico about 7,000 to 10,000 years ago. In 1995, corn was cultivated on about 130 million ha worldwide, with a total world production of 507 million tons. Average yields in 1995 among producer countries were 7.78 and 7.60 t/ha in France and the United States, respectively, but only 2.36 and 2.20 t/ha in Brazil and Mexico. In this regard, an experiment was arranged to determine the suitable plant density and nitrogen fertilizer level for the corn variety K.S.C 704 under the climatic conditions of the Sistan region. The experiment was a split plot in a randomized complete block design with 3 replications: the main plots were 4 nitrogen levels (200, 250, 300, and 350 kg/ha) and the subplots were 3 plant densities (111,000, 83,000, and 66,000 plants/ha). Data were recorded for each treatment from the growth stages through harvest. After harvest, analysis of variance was performed and treatment means were compared by Duncan's method. The results showed that the yield components and seed yield were affected by the nitrogen and density levels; increasing nitrogen increased thousand-seed weight and seed yield, and the higher densities showed highly significant differences among each other. Under the suitable climatic conditions of the Sistan region, provided enough water is available, applying 350 kg/ha of nitrogen fertilizer at a density of 111,000 plants/ha can produce suitable seed and biological yields.
A NEW ALGORITHM FOR RADIOISOTOPE IDENTIFICATION OF SHIELDED AND MASKED SNM/RDD MATERIALS
Jeffcoat, R.
2012-06-05
Detection and identification of shielded and masked nuclear materials is crucial to national security, but vast borders and high volumes of traffic impose stringent requirements for practical detection systems. Such tools must be mobile, and hence low power, provide a low false alarm rate, and be sufficiently robust to be operable by non-technical personnel. Currently fielded systems have not achieved all of these requirements simultaneously. Transport modeling such as that done in GADRAS is able to predict observed spectra to a high degree of fidelity; our research is focusing on a radionuclide identification algorithm that inverts this modeling within the constraints imposed by a handheld device. Key components of this work include incorporation of uncertainty as a function of both the background radiation estimate and the hypothesized sources, dimensionality reduction, and nonnegative matrix factorization. We have partially evaluated the performance of our algorithm on a third-party data collection made with two different sodium iodide detection devices. Initial results indicate, with caveats, that our algorithm performs as well as or better than the on-board identification algorithms. The system developed was based on a probabilistic approach with an improved approach to variance modeling relative to past work. This system was chosen based on technical innovation and system performance over algorithms developed at two competing research institutions. One key outcome of this probabilistic approach was the development of an intuitive measure of confidence, which was indeed useful enough that a classification algorithm was developed based around alarming on high-confidence targets. This paper will present and discuss results of this novel approach to accurately identifying shielded or masked radioisotopes with radiation detection systems.
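Nonnegative matrix factorization, one of the components named above, decomposes observed spectra into nonnegative basis spectra and abundances. A generic sketch using Lee-Seung multiplicative updates on synthetic data (a standard NMF formulation, not the paper's specific algorithm or detector data):

```python
import numpy as np

# Hedged sketch: NMF of synthetic "spectra" V (channels x observations) into
# nonnegative factors W (basis spectra) and H (abundances), minimizing the
# Frobenius error ||V - WH|| via Lee-Seung multiplicative updates.
rng = np.random.default_rng(3)
true_W = rng.random((64, 2))        # two hidden source spectra
true_H = rng.random((2, 30))        # their mixing weights
V = true_W @ true_H                 # observed mixed spectra (exact rank 2)

r = 2
W = rng.random((64, r)) + 0.1       # strictly positive random initialization
H = rng.random((r, 30)) + 0.1
for _ in range(500):
    # Multiplicative updates keep factors nonnegative by construction
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(rel_err)
```

On exact low-rank data the relative reconstruction error drops to a small fraction of a percent; real spectra add noise, background, and the uncertainty modeling discussed above.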
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Mirocha, Jeffrey D.; Rajewski, Daniel A.; Marjanovic, Nikola; Lundquist, Julie K.; Kosovic, Branko; Draxl, Caroline; Churchfield, Matthew J.
2015-08-27
In this study, wind turbine impacts on the atmospheric flow are investigated using data from the Crop Wind Energy Experiment (CWEX-11) and large-eddy simulations (LESs) utilizing a generalized actuator disk (GAD) wind turbine model. CWEX-11 employed velocity-azimuth display (VAD) data from two Doppler lidar systems to sample vertical profiles of flow parameters across the rotor depth both upstream and in the wake of an operating 1.5 MW wind turbine. Lidar and surface observations obtained during four days of July 2011 are analyzed to characterize the turbine impacts on wind speed and flow variability, and to examine the sensitivity of these changes to atmospheric stability. Significant velocity deficits (VD) are observed at the downstream location during both convective and stable portions of four diurnal cycles, with large, sustained deficits occurring during stable conditions. Variances of the streamwise velocity component, σu, likewise show large increases downstream during both stable and unstable conditions, with stable conditions supporting sustained small increases of σu, while convective conditions featured both larger magnitudes and increased variability, due to the large coherent structures in the background flow. Two representative case studies, one stable and one convective, are simulated using LES with a GAD model at 6 m resolution to evaluate the compatibility of the simulation framework with validation using vertically profiling lidar data in the near wake region. Virtual lidars were employed to sample the simulated flow field in a manner consistent with the VAD technique. Simulations reasonably reproduced aggregated wake VD characteristics, albeit with smaller magnitudes than observed, while σu values in the wake are more significantly underestimated. The results illuminate the limitations of using a GAD in combination with coarse model resolution in the simulation of near wake physics, and validation thereof using VAD data.
The importance of retaining a phylogenetic perspective in traits-based community analyses
Poteat, Monica D.; Buchwalter, David B.; Jacobus, Luke M.
2015-04-08
1) Many environmental stressors manifest their effects via physiological processes (traits) that can differ significantly among species and species groups. We compiled available data for three traits related to the bioconcentration of the toxic metal cadmium (Cd) from 42 aquatic insect species representing orders Ephemeroptera (mayfly), Plecoptera (stonefly), and Trichoptera (caddisfly). These traits included the propensity to take up Cd from water (uptake rate constant, ku), the ability to excrete Cd (efflux rate constant, ke), and the net result of these two processes (bioconcentration factor, BCF). 2) Ranges in these Cd bioaccumulation traits varied in magnitude across lineages (some lineages had a greater tendency to bioaccumulate Cd than others). Overlap in the ranges of trait values among different lineages was common and highlights situations where species from different lineages can share a similar trait state, but represent the high end of possible physiological values for one lineage and the low end for another. 3) Variance around the mean trait state differed widely across clades, suggesting that some groups (e.g., Ephemerellidae) are inherently more variable than others (e.g., Perlidae). Thus, trait variability/lability is at least partially a function of lineage. 4) Akaike information criterion (AIC) comparisons of statistical models were more often driven by clade than by other potential biological or ecological explanation tested. Clade-driven models generally improved with increasing taxonomic resolution. 5) Altogether, these findings suggest that lineage provides context for the analysis of species traits, and that failure to consider lineage in community-based analysis of traits may obscure important patterns of species responses to environmental change.
Vrettas, Michail D.; Fung, Inez Y.
2015-12-01
Preferential flow through weathered bedrock leads to rapid rise of the water table after the first rainstorms and significant water storage (also known as ‘‘rock moisture’’) in the fractures. We present a new parameterization of hydraulic conductivity that captures the preferential flow and is easy to implement in global climate models. To mimic the naturally varying heterogeneity with depth in the subsurface, the model represents the hydraulic conductivity as a product of the effective saturation and a background hydraulic conductivity Kbkg, drawn from a lognormal distribution. The mean of the background Kbkg decreases monotonically with depth, while its variance reduces with the effective saturation. Model parameters are derived by assimilating into Richards’ equation 6 years of 30 min observations of precipitation (mm) and water table depths (m), from seven wells along a steep hillslope in the Eel River watershed in Northern California. The results show that the observed rapid penetration of precipitation and the fast rise of the water table from the well locations, after the first winter rains, are well captured with the new stochastic approach in contrast to the standard van Genuchten model of hydraulic conductivity, which requires significantly higher levels of saturated soils to produce the same results. ‘‘Rock moisture,’’ the moisture between the soil mantle and the water table, comprises 30% of the moisture because of the great depth of the weathered bedrock layer and could be a potential source of moisture to sustain trees through extended dry periods. Furthermore, storage of moisture in the soil mantle is smaller, implying less surface runoff and less evaporation, with the proposed new model.
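The parameterization described above (conductivity as the product of effective saturation and a lognormal background conductivity whose log-mean decays with depth) can be mimicked in a few lines; the decay rate, saturation value, and variance scaling below are illustrative assumptions, not the calibrated Eel River parameters:

```python
import numpy as np

# Hedged sketch of the stochastic conductivity parameterization: K_eff is the
# product of effective saturation and a lognormal background K_bkg whose
# log-mean decreases monotonically with depth and whose spread shrinks as
# saturation rises. All numeric values are illustrative.
rng = np.random.default_rng(2)
depth = np.linspace(0, 20, 200)             # m below surface
log_mean = np.log(1e-5) - 0.15 * depth      # log-mean of K_bkg decays with depth
sat = 0.7                                   # assumed effective saturation
log_sigma = 1.0 * (1 - sat)                 # variance reduces with saturation

K_bkg = rng.lognormal(mean=log_mean, sigma=log_sigma)  # one draw per depth
K_eff = sat * K_bkg                         # effective conductivity, m/s

# Shallow layers are more conductive on average than deep ones
print(K_eff[:50].mean(), K_eff[-50:].mean())
```

In the actual model these draws feed Richards' equation, and the parameters are constrained by assimilating the well observations.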
Species interactions differ in their genetic robustness
Chubiz, Lon M.; Granger, Brian R.; Segre, Daniel; Harcombe, William R.
2015-04-14
Conflict and cooperation between bacterial species drive the composition and function of microbial communities. Stability of these emergent properties will be influenced by the degree to which species' interactions are robust to genetic perturbations. We use genome-scale metabolic modeling to computationally analyze the impact of genetic changes when Escherichia coli and Salmonella enterica compete, or cooperate. We systematically knocked out in silico each reaction in the metabolic network of E. coli to construct all 2583 mutant stoichiometric models. Then, using a recently developed multi-scale computational framework, we simulated the growth of each mutant E. coli in the presence of S. enterica. The type of interaction between species was set by modulating the initial metabolites present in the environment. We found that the community was most robust to genetic perturbations when the organisms were cooperating. Species ratios were more stable in the cooperative community, and community biomass had equal variance in the two contexts. Additionally, the number of mutations that have a substantial effect is lower when the species cooperate than when they are competing. In contrast, when mutations were added to the S. enterica network the system was more robust when the bacteria were competing. These results highlight the utility of connecting metabolic mechanisms and studies of ecological stability. Cooperation and conflict alter the connection between genetic changes and properties that emerge at higher levels of biological organization.
Peters, Junenette L.; Fabian, M. Patricia; Levy, Jonathan I.
2014-07-15
High blood pressure is associated with exposure to multiple chemical and non-chemical risk factors, but epidemiological analyses to date have not assessed the combined effects of both chemical and non-chemical stressors on human populations in the context of cumulative risk assessment. We developed a novel modeling approach to evaluate the combined impact of lead, cadmium, polychlorinated biphenyls (PCBs), and multiple non-chemical risk factors on four blood pressure measures using data for adults aged ≥20 years from the National Health and Nutrition Examination Survey (1999–2008). We developed predictive models for chemical and other stressors. Structural equation models were applied to account for complex associations among predictors of stressors as well as blood pressure. Models showed that blood lead, serum PCBs, and established non-chemical stressors were significantly associated with blood pressure. Lead was the chemical stressor most predictive of diastolic blood pressure and mean arterial pressure, while PCBs had a greater influence on systolic blood pressure and pulse pressure, and blood cadmium was not a significant predictor of blood pressure. The simultaneously fit exposure models explained 34%, 43% and 52% of the variance for lead, cadmium and PCBs, respectively. The structural equation models were developed using predictors available from public data streams (e.g., U.S. Census), which would allow the models to be applied to any U.S. population exposed to these multiple stressors in order to identify high risk subpopulations, direct intervention strategies, and inform public policy. - Highlights: • We evaluated joint impact of chemical and non-chemical stressors on blood pressure. • We built predictive models for lead, cadmium and polychlorinated biphenyls (PCBs). • Our approach allows joint evaluation of predictors from population-specific data. • Lead, PCBs and established non-chemical stressors were related to blood pressure. 
• Framework allows cumulative risk assessment in specific geographic settings.
No-migration determination. Annual report, September 1, 1993--August 31, 1994
Not Available
1994-11-01
This report fulfills the annual reporting requirement as specified in the Conditional No-Migration Determination (NMD) for the U.S. Department of Energy (DOE) Waste Isolation Pilot Plant (WIPP), published in the Federal Register on November 14, 1990 (EPA, 1990a). This report covers the project activities, programs, and data obtained during the period September 1, 1993, through August 31, 1994, to support compliance with the NMD. In the NMD, the U.S. Environmental Protection Agency (EPA) concluded that the DOE had demonstrated, to a reasonable degree of certainty, that hazardous constituents will not migrate from the WIPP disposal unit during the test phase of the project, and that the DOE had otherwise met the requirements of 40 CFR Part 268.6, Petitions to Allow Land Disposal of a Waste Prohibited Under Subpart C of Part 268 (EPA, 1986a), for the WIPP facility. By granting the NMD, the EPA has allowed the DOE to temporarily manage defense-generated transuranic (TRU) mixed wastes, some of which are prohibited from land disposal by Title 40 CFR Part 268, Land Disposal Restrictions (EPA, 1986a), at the WIPP facility for the purposes of testing and experimentation for a period not to exceed 10 years. In granting the NMD, the EPA imposed several conditions on the management of the experimental waste used during the WIPP test phase. One of these conditions is that the DOE submit annual reports to the EPA to demonstrate the WIPP's compliance with the requirements of the NMD. In the proposed No-Migration Variance (EPA, 1990b) and the final NMD, the EPA defined the content and parameters that must be reported on an annual basis. These reporting requirements are summarized and are cross-referenced with the sections of the report that satisfy the respective requirement.
Preemptible I/O Scheduling of Garbage Collection for Solid State Drives
Lee, Junghee; Kim, Youngjae; Shipman, Galen M; Oral, H Sarp; Kim, Jongman
2012-01-01
Unlike hard disks, flash devices use out-of-place updates, and they require a garbage collection (GC) process to reclaim invalid pages and create free blocks. This GC process is a major cause of performance degradation when running concurrently with other I/O operations, as internal bandwidth is consumed to reclaim these invalid pages. The invocation of the GC process is generally governed by a low watermark on free blocks and other internal device metrics that different workloads meet at different intervals. This results in I/O performance that is highly dependent on workload characteristics. In this paper, we examine the GC process and propose a semi-preemptible GC scheme that allows GC processing to be preempted while pending I/O requests in the queue are serviced. Moreover, we further enhance flash performance by pipelining internal GC operations and merging them with pending I/O requests whenever possible. Our experimental evaluation of this semi-preemptible GC scheme with realistic workloads demonstrates both improved performance and reduced performance variability. Write-dominant workloads show up to a 66.56% improvement in average response time with an 83.30% reduction in response-time variance compared with the non-preemptible GC scheme. In addition, we explore the opportunities of a new NAND flash device that supports suspend/resume commands for read, write, and erase operations for fully preemptible GC. Our experiments with a flash device enabling fully preemptible GC show that request response time can be improved by up to 14.57% compared with semi-preemptible GC.
Modeling and comparative assessment of municipal solid waste gasification for energy production
Arafat, Hassan A.; Jijakli, Kenan
2013-08-15
Highlights: • Study developed a methodology for the evaluation of gasification for MSW treatment. • Study was conducted comparatively for the USA, UAE, and Thailand. • Study applies a thermodynamic model (Gibbs free energy minimization) using the Gasify software. • The energy efficiency of the process and the compatibility with different waste streams was studied. - Abstract: Gasification is the thermochemical conversion of organic feedstocks mainly into combustible syngas (CO and H{sub 2}) along with other constituents. It has been widely used to convert coal into gaseous energy carriers but only has been recently looked at as a process for producing energy from biomass. This study explores the potential of gasification for energy production and treatment of municipal solid waste (MSW). It relies on adapting the theory governing the chemistry and kinetics of the gasification process to the use of MSW as a feedstock to the process. It also relies on an equilibrium kinetics and thermodynamics solver tool (Gasify) in the process of modeling gasification of MSW. The effect of process temperature variation on gasifying MSW was explored and the results were compared to incineration as an alternative to gasification of MSW. Also, the assessment was performed comparatively for gasification of MSW in the United Arab Emirates, USA, and Thailand, presenting a spectrum of socioeconomic settings with varying MSW compositions in order to explore the effect of MSW composition variance on the products of gasification. All in all, this study provides an insight into the potential of gasification for the treatment of MSW and as a waste to energy alternative to incineration.
Investment in different sized SMRs: Economic evaluation of stochastic scenarios by INCAS code
Barenghi, S.; Boarin, S.; Ricotti, M. E.
2012-07-01
Small Modular LWR concepts are being developed and proposed to investors worldwide. They capitalize on the operating track record of GEN II LWRs, while introducing innovative design enhancements allowed by the smaller size, along with additional benefits from the higher degree of modularization and from the deployment of multiple units on the same site (the 'Economy of Multiple' paradigm). Nevertheless, Small Modular Reactors pay for a diseconomy of scale that represents a relevant penalty on a capital-intensive investment. Investors in the nuclear power generation industry face a very high financial risk, due to the high capital commitment and exceptionally long pay-back time. Investment risk arises from the uncertainty that affects scenario conditions over such a long time horizon. Risk aversion is increased by the current adverse conditions of financial markets and the general economic downturn. This work investigates both the profitability and the risk of alternative investments in a single Large Reactor (LR) or in multiple SMRs of different sizes, drawing information from the stochastic distribution of the project's Internal Rate of Return (IRR) and considering multiple-SMR deployment on a single site with total installed power equivalent to that of a single LR. Uncertain scenario conditions and stochastic input assumptions are included in the analysis, representing investment uncertainty and risk. Results show that, despite the combination of a much larger number of stochastic variables in SMR fleets, the uncertainty of project profitability is not increased compared to the LR: SMRs have features able to smooth the IRR variance and control investment risk. Despite the diseconomy of scale, SMRs represent a limited capital commitment and a scalable investment option that meets investors' interest, even in the developed and mature markets that are the traditional marketplace for LRs. (authors)
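The profitability-and-risk analysis rests on building a stochastic IRR distribution. A stylized Monte Carlo sketch, with made-up cost and revenue distributions (not the INCAS code's assumptions) and the IRR of each sampled cash-flow stream found by bisection on the net present value:

```python
import numpy as np

# Hedged sketch: Monte Carlo IRR distribution for a stylized plant investment.
# Overnight cost and annual net revenue are random with assumed distributions.
rng = np.random.default_rng(4)

def irr(cashflows, lo=-0.5, hi=1.0):
    """Find the rate where NPV = 0 by bisection (NPV is decreasing in r here)."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

irrs = []
for _ in range(1000):
    capex = rng.normal(1000, 100)     # uncertain overnight cost, M$ (assumed)
    revenue = rng.normal(120, 15)     # uncertain annual net revenue, M$ (assumed)
    flows = [-capex] + [revenue] * 30 # 30-year operating life
    irrs.append(irr(flows))

# Mean IRR measures profitability; the spread of the distribution measures risk
print(np.mean(irrs), np.std(irrs))
```

Comparing the spread of this distribution for an LR versus an SMR fleet (with staggered, smaller capital outlays) is the essence of the analysis above.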
Air-injection testing in vertical boreholes in welded and nonwelded Tuff, Yucca Mountain, Nevada
LeCain, G.D.
1997-12-31
Air-injection tests, by use of straddle packers, were conducted in four vertical boreholes (UE-25 UZ-No.16, USW SD-12, USW NRG-6, and USW NRG-7a) at Yucca Mountain, Nevada. The geologic units tested were the Tiva Canyon Tuff, nonwelded tuffs of the Paintbrush Group, Topopah Spring Tuff, and Calico Hills Formation. Air-injection permeability values of the Tiva Canyon Tuff ranged from 0.3 x 10{sup -12} to 54.0 x 10{sup -12} m{sup 2} (square meters). Air-injection permeability values of the Paintbrush nonwelded tuff ranged from 0.12 x 10{sup -12} to 3.0 x 10{sup -12} m{sup 2}. Air-injection permeability values of the Topopah Spring Tuff ranged from 0.02 x 10{sup -12} to 33.0 x 10{sup -12} m{sup 2}. The air-injection permeability value of the only Calico Hills Formation interval tested was 0.025 x 10{sup -12} m{sup 2}. The shallow test intervals of the Tiva Canyon Tuff had the highest air-injection permeability values. Variograms of the air-injection permeability values of the Topopah Spring Tuff show a hole effect: an initial increase in the variogram values is followed by a decrease. The hole effect is due to the decrease in permeability with depth identified in several geologic zones. The hole effect indicates some structural control of the permeability distribution, possibly associated with the deposition and cooling of the tuff. Analysis of variance indicates that the air-injection permeability values of borehole NRG-7a of the Topopah Spring Tuff are different from those of the other boreholes; this indicates areal variation in permeability.
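An experimental semivariogram of the kind used above, including the hole effect (a rise in variogram values followed by a decline when the data contain a quasi-periodic structure), can be computed directly. The depth series below is synthetic, with a sinusoidal trend standing in for the permeability-depth structure; it is not the borehole data:

```python
import numpy as np

# Hedged sketch: experimental semivariogram gamma(h) = mean of 0.5*(z_i - z_j)^2
# over pairs separated by lag h, computed on a synthetic log-permeability series
# with a periodic depth trend that produces a "hole effect".
rng = np.random.default_rng(5)
depth = np.arange(0, 100.0)                              # 1 m sampling interval
logk = np.sin(2 * np.pi * depth / 40) + rng.normal(0, 0.2, depth.size)

def semivariogram(z, max_lag):
    gammas = []
    for h in range(1, max_lag + 1):
        d = z[h:] - z[:-h]                               # all pairs at lag h
        gammas.append(0.5 * np.mean(d ** 2))
    return np.array(gammas)

g = semivariogram(logk, 30)
# Hole effect: the variogram peaks near half the trend period (~20 m here),
# then declines toward larger lags instead of leveling off at a sill.
print(g[4], g[19], g[29])   # lags of 5, 20, and 30 m
```

The rise-then-fall shape at lag ~20 m is the signature that the report interprets as structural control of the permeability distribution.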
Seong W. Lee
2003-09-01
During this reporting period, the literature survey was completed, covering gasifier temperature measurement, ultrasonic applications and their background in cleaning, and the spray coating process. The gasifier simulator (cold model) testing has been successfully conducted. Four factors (blower voltage, ultrasonic application, injection time interval, and particle weight) were considered as significant factors that affect the temperature measurement. Analysis of Variance (ANOVA) was applied to analyze the test data. The analysis shows that all four factors are significant to the temperature measurements in the gasifier simulator (cold model). The regression analysis for the case with the normalized room temperature shows that a linear model fits the temperature data with 82% accuracy (18% error); for the case without the normalized room temperature, the accuracy is 72.5% (27.5% error). The nonlinear regression analysis indicates a better fit than the linear regression: the nonlinear model's accuracy is 88.7% (11.3% error) for the normalized room temperature case. The hot-model thermocouple sleeve design and fabrication are completed, as are the design and fabrication of the gasifier simulator (hot model). System tests of the gasifier simulator (hot model) have been conducted and some modifications have been made. Based on the system tests and results analysis, the gasifier simulator (hot model) has met the proposed design requirements and is ready for system testing. The ultrasonic cleaning method is under evaluation and will be further studied for the gasifier simulator (hot model) application. The progress of this project has been on schedule.
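The linear-versus-nonlinear regression comparison reported above can be reproduced in miniature: fit both models by least squares and compare the fraction of variance explained (R²). The factor levels and temperature responses here are invented, not the project's test data:

```python
import numpy as np

# Hedged sketch: comparing linear and quadratic least-squares fits by R^2 on
# synthetic data with mild curvature, mirroring the finding that a nonlinear
# model explains more of the temperature variance.
rng = np.random.default_rng(6)
x = np.linspace(0, 1, 40)                      # coded factor level (e.g., blower voltage)
temp = 25 + 30 * x + 15 * x ** 2 + rng.normal(0, 1.5, x.size)

def r_squared(X, y):
    """Fit y = X @ beta by least squares and return the coefficient of determination."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_lin = r_squared(np.column_stack([np.ones_like(x), x]), temp)
r2_quad = r_squared(np.column_stack([np.ones_like(x), x, x ** 2]), temp)
print(r2_lin, r2_quad)   # the quadratic fit explains at least as much variance
```

Because the quadratic model nests the linear one, its R² can only match or exceed the linear fit; whether the gain is meaningful is what the project's accuracy figures quantify.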