Variance Fact Sheet. A variance is an exception to compliance with some part of a safety and health standard granted by the Department of Energy (DOE) to a contractor
Moster, Benjamin P.; Rix, Hans-Walter [Max-Planck-Institut fuer Astronomie, Koenigstuhl 17, 69117 Heidelberg (Germany); Somerville, Rachel S. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Newman, Jeffrey A., E-mail: moster@mpia.de, E-mail: rix@mpia.de, E-mail: somerville@stsci.edu, E-mail: janewman@pitt.edu [Department of Physics and Astronomy, University of Pittsburgh, 3941 O'Hara Street, Pittsburgh, PA 15260 (United States)
2011-04-20
Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z
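The linear-regime relation stated above (galaxy cosmic variance = galaxy bias × dark-matter cosmic variance) can be sketched in code. The fitting-function forms and coefficients below are placeholders, not the paper's published fits; only the multiplicative structure follows the abstract.

```python
# Sketch of the recipe: relative cosmic variance of a galaxy sample is
# the galaxy bias times the dark-matter cosmic variance (linear regime).
# The functional forms and coefficients are HYPOTHETICAL stand-ins;
# substitute the paper's tabulated fitting functions in real use.

def sigma_dm(zbar, dz):
    """Dark-matter relative cosmic variance for a fixed survey geometry
    (placeholder power law in mean redshift and bin size)."""
    return 0.069 / zbar**0.8 * (dz / 0.5) ** -0.5  # assumed form

def galaxy_bias(zbar, log_mstar):
    """Galaxy bias from a halo-occupation fit (placeholder form)."""
    return 1.0 + 0.5 * zbar * (log_mstar - 9.0)    # assumed form

def sigma_galaxy(zbar, dz, log_mstar):
    # Linear regime: sigma_gal = b * sigma_dm
    return galaxy_bias(zbar, log_mstar) * sigma_dm(zbar, dz)

print(f"relative cosmic variance: {sigma_galaxy(2.0, 0.5, 11.0):.2f}")
```

With the real coefficients, the same three-line structure reproduces the quoted survey comparisons (GOODS vs. GEMS vs. COSMOS) by changing only `sigma_dm`.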
As Grantees update and revise their field standards to align with the SWS, they may discover certain specifications that cannot be implemented precisely as described in the relevant SWS. In such cases, Grantees may request a variance from the relevant SWS.
Nuclear Material Variance Calculation
Energy Science and Technology Software Center (OSTI)
1995-01-01
MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet that significantly reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system and loss of special nuclear material (SNM). The user is required to enter information into one of four data tables depending on the type of term in the materials balance (MB) equation. The four data tables correspond to input transfers, output transfers, and two types of inventory terms, one for nondestructive assay (NDA) measurements and one for measurements made by chemical analysis. Each data entry must contain an identification number and a short description, as well as values for the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements during an accounting period. The user must also specify the type of error model (additive or multiplicative) associated with each measurement, and possible correlations between transfer terms. Predefined spreadsheet macros are used to perform the variance and covariance calculations for each term based on the corresponding set of entries. MAVARIC has been used for sensitivity studies of chemical separation facilities, fuel processing and fabrication facilities, and gas centrifuge and laser isotope enrichment facilities.
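The variance propagation MAVARIC automates can be illustrated for a multiplicative error model; the term structure, function names, and numbers below are illustrative assumptions, not MAVARIC's actual spreadsheet formulas, and correlated transfers (which add covariance terms) are omitted.

```python
import math

# Sketch of materials-balance variance propagation.  For a term with n
# measurements of bulk mass m and SNM concentration c, a multiplicative
# error model gives per-measurement variance (m*c)^2 * (r_m^2 + r_c^2),
# where r_m, r_c are relative standard deviations.  Independent terms
# simply add; correlated transfers would contribute covariance terms.

def term_variance(n, mass, conc, r_mass, r_conc):
    snm = mass * conc                       # SNM content per measurement
    return n * snm**2 * (r_mass**2 + r_conc**2)

terms = [                                   # illustrative entries only
    dict(n=12, mass=50.0, conc=0.040, r_mass=0.001, r_conc=0.005),  # inputs
    dict(n=12, mass=48.0, conc=0.041, r_mass=0.001, r_conc=0.005),  # outputs
]
var_mb = sum(term_variance(**t) for t in terms)
print(f"sigma_MB = {math.sqrt(var_mb):.4f} kg SNM")
```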
Cosmology without cosmic variance
Bernstein, Gary M.; Cai, Yan -Chuan
2011-10-01
The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e., the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-percent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.
Approval of a Permanent Variance Regarding Static Magnetic Fields at Brookhaven National Laboratory (Variance 1021)
U.S. Energy Information Administration (EIA) (indexed site)
File 1: Summary File (cb86f01.csv)
Item  Description                    Variable  Position  Format
      Building identifier            BLDGID3    1- 5
      Adjusted weight                ADJWT3     7-14
      Variance stratum               STRATUM3  16-17
      Pair member                    PAIR3     19-19
      Census region                  REGION3   21-21     $REGION.
      Census division                CENDIV3   23-23     $CENDIV.
      Metropolitan statistical area  MSA3      25-25     $MSA.
      Climate zone                   CLIMATE3  27-27     $CLIMAT.
B-1   Square footage                 SQFT3     29-35     COMMA14.
B-2   Square footage                 SQFTC3    37-38     $SQFTC.
U.S. Energy Information Administration (EIA) (indexed site)
File 2: Building Activity (cb86f02.csv)
Item  Description                    Variable  Position  Format
      Building identifier            BLDGID3    1- 5
      Adjusted weight                ADJWT3     7-14
      Variance stratum               STRATUM3  16-17
      Pair member                    PAIR3     19-19
      Census region                  REGION3   21-21     $REGION.
      Census division                CENDIV3   23-23     $CENDIV.
B-2   Square footage                 SQFTC3    25-26     $SQFTC.
B-3   Any residential use            RESUSE3   28-28     $YESNO.
B-4   Percent residential            RESPC3    30-30     $RESPC.
      Principal building activity    PBA3      32-
U.S. Energy Information Administration (EIA) (indexed site)
File 4: Building Shell, Equipment, Energy Audits, and "Other" Conservation Features (cb86f04.csv)
Item  Description                    Variable  Position  Format
      Building identifier            BLDGID3    1- 5
      Adjusted weight                ADJWT3     7-14
      Variance stratum               STRATUM3  16-17
      Pair member                    PAIR3     19-19
      Census region                  REGION3   21-21     $REGION.
      Census division                CENDIV3   23-23     $CENDIV.
B-2   Square footage                 SQFTC3    25-26     $SQFTC.
      Principal building activity    PBA3      28-29     $ACTIVTY.
D-2   Year
U.S. Energy Information Administration (EIA) (indexed site)
File 7: HVAC, Lighting, and Building Shell Conservation Features (cb86f07.csv)
Item  Description                    Variable  Position  Format
      Building identifier            BLDGID3    1- 5
      Adjusted weight                ADJWT3     7-14
      Variance stratum               STRATUM3  16-17
      Pair member                    PAIR3     19-19
      Census region                  REGION3   21-21     $REGION.
      Census division                CENDIV3   23-23     $CENDIV.
B-2   Square footage                 SQFTC3    25-26     $SQFTC.
      Principal building activity    PBA3      28-29     $ACTIVTY.
D-2   Year construction was completed
U.S. Energy Information Administration (EIA) (indexed site)
File 8: Electricity (cb86f08.csv)
Item  Description                    Variable  Position  Format
      Building identifier            BLDGID3    1- 5
      Adjusted weight                ADJWT3     7-14
      Variance stratum               STRATUM3  16-17
      Pair member                    PAIR3     19-19
      Census region                  REGION3   21-21     $REGION.
      Census division                CENDIV3   23-23     $CENDIV.
B-2   Square footage                 SQFTC3    25-26     $SQFTC.
      Principal building activity    PBA3      28-29     $ACTIVTY.
D-2   Year construction was completed  YRCONC3  31-32    $YRCONC.
      Electricity supplied
U.S. Energy Information Administration (EIA) (indexed site)
File 13: Imputation Flags for Energy Audits, "Other" Conservation Features, and End Uses (cb86f13.csv)
Item  Description                    Variable  Position  Format
      Building identifier            BLDGID3    1- 5
      Adjusted weight                ADJWT3     7-14
      Variance stratum               STRATUM3  16-17
      Pair member                    PAIR3     19-19
      Census region                  REGION3   21-21     $REGION.
      Census division                CENDIV3   23-23     $CENDIV.
B-2   Square footage                 SQFTC3    25-26     $SQFTC.
      Principal building activity    PBA3      28-29     $ACTIVTY.
D-2   Year
U.S. Energy Information Administration (EIA) (indexed site)
File 14: Imputation Flags for HVAC, Lighting and Shell Conservation Features (cb86f14.csv)
Item  Description                    Variable  Position  Format
      Building identifier            BLDGID3    1- 5
      Adjusted weight                ADJWT3     7-14
      Variance stratum               STRATUM3  16-17
      Pair member                    PAIR3     19-19
      Census region                  REGION3   21-21     $REGION.
      Census division                CENDIV3   23-23     $CENDIV.
B-2   Square footage                 SQFTC3    25-26     $SQFTC.
      Principal building activity    PBA3      28-29     $ACTIVTY.
D-2   Year construction was
The Theory of Variances in Equilibrium Reconstruction
Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren
2008-01-14
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.
Hawaii Variance from Pollution Control Permit Packet (Appendix...
Open Energy Information (Open El) [EERE & EIA]
Variance from Pollution Control Permit Packet (Appendix S-13). Permitting/Regulatory Guidance - Supplemental...
A Hybrid Variance Reduction Method Based on Gaussian Process...
U.S. Department of Energy (DOE) all webpages (Extended Search)
to accelerate the convergence of Monte Carlo (MC) simulation. Hybrid deterministic-MC methods [1-3] have recently been developed to achieve the goal of global variance...
Hawaii Application for Community Noise Variance (DOH Form) |...
Open Energy Information (Open El) [EERE & EIA]
Application for Community Noise Variance. Organization: State of Hawaii Department of Health. Published 07/2013.
Reduction of Emission Variance by Intelligent Air Path Control
This poster describes an air path control concept, which minimizes NOx and PM emission variance while having the ability to run reliably with many different sensor configurations.
Variance control in weak-value measurement pointers
Parks, A. D.; Gray, J. E.
2011-07-15
The variance of an arbitrary pointer observable is considered for the general case that a complex weak value is measured using a complex valued pointer state. For the typical cases where the pointer observable is either its position or momentum, the associated expressions for the pointer's variance after the measurement contain a term proportional to the product of the weak value's imaginary part with the rate of change of the third central moment of position relative to the initial pointer state just prior to the time of the measurement interaction when position is the observable--or with the initial pointer state's third central moment of momentum when momentum is the observable. These terms provide a means for controlling pointer position and momentum variance and identify control conditions which, when satisfied, can yield variances that are smaller after the measurement than they were before the measurement. Measurement sensitivities which are useful for estimating weak-value measurement accuracies are also briefly discussed.
Hawaii Guide for Filing Community Noise Variance Applications...
Open Energy Information (Open El) [EERE & EIA]
State of Hawaii. Guide for Filing Community Noise Variance Applications. 4 pp. Guide/Handbook.
Smoothing method aids gas-inventory variance trending
Mason, R.G.
1992-03-23
This paper reports on a method for determining gas-storage inventory and variance in a natural-gas storage field which uses the equations developed to determine gas-in-place in a production field. The calculations use acquired data for shut-in pressures, reservoir pore volume, and storage gas properties. These calculations are then graphed and trends are developed. Evaluating trends in inventory variance can be enhanced by use of a technique, described here, that smooths the peaks and valleys of an inventory-variance curve. Calculations using the acquired data determine inventory for a storage field whose drive mechanism is gas expansion (that is, volumetric). When used for a dry gas, condensate, or gas-condensate reservoir, the formulas require no further modification. Inventory in depleted oil fields can be determined in this same manner, as well. Some additional calculations, however, must be made to assess the influence of oil production on the gas-storage process.
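One simple way to smooth the peaks and valleys of an inventory-variance curve before trending is a centered moving average. This is a generic illustration; the paper's exact smoothing technique is not reproduced here, and the monthly values are invented.

```python
# Centered moving-average smoothing of an inventory-variance series,
# shrinking the window at the ends so every point gets a value.

def smooth(series, window=3):
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

variance_pct = [1.2, -0.8, 2.1, 0.3, -1.5, 1.9, 0.1]  # monthly, illustrative
print(smooth(variance_pct))
```

Trends (a drift in the smoothed curve) then indicate systematic inventory gain or loss rather than month-to-month measurement noise.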
Reduced Variance for Material Sources in Implicit Monte Carlo
Urbatsch, Todd J.
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
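The proposed modification can be sketched directly: below a user-specified temperature cutoff the material source updates the material state deterministically, emitting no IMC particles and hence contributing no sampling variance. Names, units, and the cutoff value are illustrative assumptions, not the Jayenne implementation.

```python
# Sketch of the modified IMC material-source treatment: cells colder
# than a user-specified cutoff take a deterministic energy update
# instead of sampling source particles, eliminating the variance a
# small source would otherwise introduce.

T_CUTOFF = 0.1  # keV, user-specified (illustrative value)

def sample_imc_particles(cell, dt):
    # Stand-in for the usual IMC source sampling (one particle here).
    return [{"energy": cell["source_rate"] * dt}]

def apply_material_source(cell, dt):
    if cell["T"] < T_CUTOFF:
        # Deterministic update: deposit source energy directly into the
        # material state; no particles, hence no sampling variance.
        cell["material_energy"] += cell["source_rate"] * dt
        return []
    return sample_imc_particles(cell, dt)

cell = {"T": 0.05, "material_energy": 0.0, "source_rate": 2.0}
print(apply_material_source(cell, dt=0.5), cell["material_energy"])
```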
Fringe biasing: A variance reduction technique for optically thick meshes
Smedley-Stevenson, R. P.
2013-07-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
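The stratified split can be sketched as follows: most particles are allocated to the fringe, with statistical weights set so total emitted energy is preserved and the scheme stays unbiased. The allocation fraction is an arbitrary illustrative choice, not a recommendation from the paper.

```python
# Sketch of fringe biasing as stratified source sampling: emission
# energy is partitioned between cell interior and boundary fringe, but
# particle numbers are skewed toward the fringe.  Weights compensate,
# so total emitted energy is conserved and the estimate is unbiased.

def sample_emission(n_particles, e_interior, e_fringe, frac_to_fringe=0.9):
    n_fringe = int(round(n_particles * frac_to_fringe))
    n_interior = n_particles - n_fringe
    particles = []
    for _ in range(n_interior):          # few particles, higher weight
        particles.append(("interior", e_interior / n_interior))
    for _ in range(n_fringe):            # many particles, lower weight
        particles.append(("fringe", e_fringe / n_fringe))
    return particles

parts = sample_emission(10, e_interior=8.0, e_fringe=2.0)
total = sum(w for _, w in parts)
print(len(parts), total)   # energy conserved up to float rounding
```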
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
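The 'long history method' can be caricatured as a cap on splitting: when an arriving weight far exceeds the window, the window is locally relaxed so only a bounded number of progeny is produced. The cap constant and function shape are assumptions for illustration, not CCFE's actual MCNP modification.

```python
# Sketch of dynamically de-optimising a weight window.  Nominal
# splitting produces weight/ww_upper progeny; an extreme arriving
# weight would yield a 'long history', so the progeny count is capped.
# Each progeny carries weight/n, keeping the game statistically fair.

MAX_SPLIT = 100  # illustrative cap, not CCFE's parameter

def split_count(weight, ww_upper):
    n = int(weight / ww_upper)           # nominal splitting
    if n > MAX_SPLIT:
        n = MAX_SPLIT                    # relaxed window: bounded history
    return max(n, 1)

for w in (5.0, 5.0e4):
    n = split_count(w, ww_upper=1.0)
    print(n, w / n)                      # progeny count, progeny weight
```

The trade is explicit: the capped case keeps more weight per progeny (weaker variance reduction) in exchange for a bounded, parallel-friendly history length.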
Development of a treatability variance guidance document for US DOE mixed-waste streams
Scheuer, N.; Spikula, R.; Harms, T. (Environmental Guidance Div.); Triplett, M.B.
1990-03-01
In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a treatability variance guidance document was prepared. The guidance manual is for use by DOE facilities and operations offices. The manual was prepared as a part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA (LDRs). A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. The manual, Guidance For Obtaining Variance From the Treatment Standards of the RCRA Land Disposal Restrictions (1), leads the reader through the process of evaluating whether a variance from the treatment standard is a viable approach and through the data-gathering and data-evaluation processes required to develop a petition requesting a variance. The DOE review and coordination process is also described and model language for use in petitions for DOE radioactive mixed waste (RMW) is provided. The guidance manual focuses on RMW streams, however the manual also is applicable to nonmixed, hazardous waste streams. 4 refs.
Verification of the history-score moment equations for weight-window variance reduction
Solomon, Clell J; Sood, Avneet; Booth, Thomas E; Shultis, J. Kenneth
2010-12-06
The history-score moment equations that describe the moments of a Monte Carlo score distribution have been extended to weight-window variance reduction. The resulting equations have been solved deterministically to calculate the population variance of the Monte Carlo score distribution for a single tally. Results for one- and two-dimensional one-group problems are presented that predict the population variances to less than 1% deviation from the Monte Carlo for one-dimensional problems and between 1-2% for two-dimensional problems.
EVMS Training Snippet: 5.4 PARSII Analysis: Variance Reports
Office of Environmental Management (EM)
This EVMS Training Snippet, sponsored by the Office of Project Management (PM), is one in a series regarding PARS II Analysis reports. PARS II offers direct insight into EVM project data from the contractor's internal systems. The reports were developed with the users in mind, organized and presented in an easy-to-follow manner, with analysis results and key information to determine the
Technical criteria for an Area-Of-Review variance methodology. Appendix B
1994-01-01
This guidance was developed by the Underground Injection Practices Research Foundation to assist Underground Injection Control Directors in implementing proposed changes to EPA's Class 2 Injection Well Regulations that will apply the Area-Of-Review (AOR) requirement to previously exempt wells. EPA plans to propose amendments this year consistent with the recommendations in the March 23, 1992, Final Document developed by the Class 2 Injection Well Advisory Committee, that will require AORs to be performed on all Class 2 injection wells except those covered by previously conducted AORs and those located in areas that have been granted a variance. Variances may be granted if the Director determines that there is a sufficiently low risk of upward fluid movement from the injection zone that could endanger underground sources of drinking water. This guidance contains suggested technical criteria for identifying areas eligible for an AOR variance. The suggested criteria were developed in consultation with interested States and representatives from EPA, industry and the academic community. Directors will have six months from the promulgation of the new regulations to provide EPA with either a schedule for performing AORs within five years on all wells not covered by previously conducted AORs, or notice of their intent to establish a variance program. It is believed this document will provide valuable assistance to Directors who are considering whether to establish a variance program or have begun early preparations to develop such a program.
Jones, Terry R; Koenig, Gregory A
2010-01-01
We present a new software-based clock synchronization scheme designed to provide high precision time agreement among distributed memory nodes. The technique is designed to minimize variance from a reference chimer during runtime with minimal time-request latency. Our scheme permits initial unbounded variations in time and corrects both slow and fast chimers (clock skew). An implementation developed within the context of the MPI message passing interface is described and time coordination measurements are presented. Among our results, the mean time variance among a set of nodes improved from 20.0 milliseconds under standard Network Time Protocol (NTP) to 2.29 μsec under our scheme.
Jones, Terry R; Koenig, Gregory A
2013-01-01
We present a new software-based clock synchronization scheme that provides high precision time agreement among distributed memory nodes. The technique is designed to minimize variance from a reference chimer during runtime with minimal time-request latency. Our scheme permits initial unbounded variations in time and corrects both slow and fast chimers (clock skew). An implementation developed within the context of the MPI message passing interface is described, and time coordination measurements are presented. Among our results, the mean time variance for a set of nodes improved from 20.0 milliseconds under standard Network Time Protocol (NTP) down to 2.29 μsec under our scheme.
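A single round of the kind of reference-chimer exchange described can be sketched generically (a Cristian-style time request under a symmetric-latency assumption); this is an illustration, not the authors' MPI implementation.

```python
import time

# One round of offset estimation against a reference chimer: bracket
# the remote time request with two local timestamps and assume the
# request latency is symmetric, so the reference time corresponds to
# the midpoint of the local interval.

def estimate_offset(local_clock, reference_clock):
    t0 = local_clock()
    t_ref = reference_clock()            # remote time request
    t1 = local_clock()
    return t_ref - (t0 + t1) / 2.0       # assumes symmetric latency

# Simulated clocks: the local clock runs 5 ms behind the reference.
ref = time.monotonic

def local():
    return time.monotonic() - 0.005

print(f"estimated offset: {estimate_offset(local, ref) * 1e3:.1f} ms")
```

A runtime scheme would apply this estimate as a gradual slew (correcting slow and fast chimers alike) rather than a step change.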
ADVANTG 3.0.1: AutomateD VAriaNce reducTion Generator
Energy Science and Technology Software Center (OSTI)
2015-08-17
Version 00 ADVANTG is an automated tool for generating variance reduction parameters for fixed-source continuous-energy Monte Carlo simulations with MCNP5 V1.60 (CCC-810, not included in this distribution) based on approximate 3-D multigroup discrete ordinates adjoint transport solutions generated by Denovo (included in this distribution). The variance reduction parameters generated by ADVANTG consist of space and energy-dependent weight-window bounds and biased source distributions, which are output in formats that can be directly used with unmodified versions of MCNP5. ADVANTG has been applied to neutron, photon, and coupled neutron-photon simulations of real-world radiation detection and shielding scenarios. ADVANTG is compatible with all MCNP5 geometry features and can be used to accelerate cell tallies (F4, F6, F8), surface tallies (F1 and F2), point-detector tallies (F5), and Cartesian mesh tallies (FMESH).
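The relationship between adjoint ('importance') solutions and weight-window bounds in CADIS-style tools such as ADVANTG can be sketched as an inverse proportionality; the scaling constant and adjoint values below are illustrative assumptions, not ADVANTG's output.

```python
# Sketch of CADIS-style weight-window generation: window lower bounds
# scale inversely with the adjoint flux (importance), so particles
# moving toward the tally are split and unimportant ones rouletted.

def weight_window_lower(adjoint, response=1.0):
    # w_lo ~ R / (k * phi_adjoint); k sets the window-centre scaling.
    return {cell: response / (2.0 * phi) for cell, phi in adjoint.items()}

adjoint_flux = {"near_detector": 10.0, "mid": 1.0, "far": 0.01}  # invented
for cell, w in weight_window_lower(adjoint_flux).items():
    print(f"{cell:14s} w_lo = {w:g}")
```

High-importance cells get low bounds (particles are split into many light progeny); low-importance cells get high bounds (particles face roulette), which is exactly the behaviour the weight-window cards encode for MCNP5.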
An area-of-review variance study of the East Texas field
Warner, D.L.; Koederitz, L.F.; Laudon, R.C.; Dunn-Norman, S.
1996-12-31
The East Texas oil field, discovered in 1930 and located principally in Gregg and Rusk Counties, is the largest oil field in the conterminous United States. Nearly 33,000 wells are known to have been drilled in the field. The field has been undergoing water injection for pressure maintenance since 1938. As of today, 104 Class II salt-water disposal wells, operated by the East Texas Salt Water Disposal Company, are returning all produced water to the Woodbine producing reservoir. About 69 of the presently existing wells have not been subjected to US Environmental Protection Agency Area-of-Review (AOR) requirements. A study has been carried out of opportunities for variance from AORs for these existing wells and for new wells that will be constructed in the future. The study has been based upon a variance methodology developed at the University of Missouri-Rolla under sponsorship of the American Petroleum Institute and in coordination with the Ground Water Protection Council. The principal technical objective of the study was to determine if reservoir pressure in the Woodbine producing reservoir is sufficiently low so that flow of salt-water from the Woodbine into the Carrizo-Wilcox ground water aquifer is precluded. The study has shown that the Woodbine reservoir is currently underpressured relative to the Carrizo-Wilcox and will remain so over the next 20 years. This information provides a logical basis for a variance for the field from performing AORs.
Evaluation of area of review variance opportunities for the East Texas field. Annual report
Warner, D.L.; Koederitz, L.F.; Laudon, R.C.; Dunn-Norman, S.
1995-05-01
The East Texas oil field, discovered in 1930 and located principally in Gregg and Rusk Counties, is the largest oil field in the conterminous United States. Nearly 33,000 wells are known to have been drilled in the field. The field has been undergoing water injection for pressure maintenance since 1938. As of today, 104 Class II salt-water disposal wells, operated by the East Texas Salt Water Disposal Company, are returning all produced water to the Woodbine producing reservoir. About 69 of the presently existing wells have not been subjected to U.S. Environmental Protection Agency Area-of-Review (AOR) requirements. A study has been carried out of opportunities for variance from AORs for these existing wells and for new wells that will be constructed in the future. The study has been based upon a variance methodology developed at the University of Missouri-Rolla under sponsorship of the American Petroleum Institute and in coordination with the Ground Water Protection Council. The principal technical objective of the study was to determine if reservoir pressure in the Woodbine producing reservoir is sufficiently low so that flow of salt-water from the Woodbine into the Carrizo-Wilcox ground water aquifer is precluded. The study has shown that the Woodbine reservoir is currently underpressured relative to the Carrizo-Wilcox and will remain so over the next 20 years. This information provides a logical basis for a variance for the field from performing AORs.
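The screening criterion at the heart of the study can be sketched as a head comparison at a common datum: upward flow from the injection reservoir into the aquifer is precluded while the reservoir's equivalent freshwater head sits below the aquifer's. All numbers, names, and the gradient used are illustrative, not East Texas field values.

```python
# Sketch of the underpressure screening test: compare hydraulic heads
# (elevation plus pressure head) of the injection reservoir and the
# overlying ground-water aquifer at a common datum.

FRESHWATER_GRADIENT = 0.433  # psi per ft of freshwater head

def equivalent_head_ft(pressure_psi, elevation_ft):
    """Hydraulic head = elevation + pressure head, common datum."""
    return elevation_ft + pressure_psi / FRESHWATER_GRADIENT

# Illustrative values only (NOT Woodbine / Carrizo-Wilcox data):
reservoir = equivalent_head_ft(pressure_psi=900.0, elevation_ft=-3500.0)
aquifer = equivalent_head_ft(pressure_psi=150.0, elevation_ft=-400.0)
print("upward flow precluded:", reservoir < aquifer)
```

Projecting reservoir pressure forward (here, 20 years in the study) and repeating the comparison gives the basis for a field-wide AOR variance.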
Scheuer, N.; Spikula, R.; Harms, T. (Environmental Guidance Div.); Triplett, M.B.
1990-02-01
In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a guidance manual was prepared. The guidance manual is for use by DOE facilities and operations offices in obtaining variances from the RCRA LDR treatment standards. The manual was prepared as a part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA LDRs. The manual addresses treatability variances and equivalent treatment variances. A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. An equivalent treatment variance is granted by EPA that allows treatment of a restricted waste by a process that differs from that specified in the standards, but achieves a level of performance equivalent to the technology specified in the standard. 4 refs.
Waste Isolation Pilot Plant no-migration variance petition. Executive summary
Not Available
1990-12-31
Section 3004 of RCRA allows EPA to grant a variance from the land disposal restrictions when a demonstration can be made that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the disposal unit for as long as the waste remains hazardous. Specific requirements for making this demonstration are found in 40 CFR 268.6, and EPA has published a draft guidance document to assist petitioners in preparing a variance request. Throughout the course of preparing this petition, technical staff from DOE, EPA, and their contractors have met frequently to discuss and attempt to resolve issues specific to radioactive mixed waste and the WIPP facility. The DOE believes it meets or exceeds all requirements set forth for making a successful "no-migration" demonstration. The petition presents information under five general headings: (1) waste information; (2) site characterization; (3) facility information; (4) assessment of environmental impacts, including the results of waste mobility modeling; and (5) analysis of uncertainties. Additional background and supporting documentation is contained in the 15 appendices to the petition, as well as in an extensive addendum published in October 1989.
Request for Concurrence on Three Temporary Variance Applications Regarding Fire Protection and Pressure Safety at the Oak Ridge National Laboratory
No-migration variance petition: Draft. Volume 4, Appendices DIF, GAS, GCR (Volume 1)
1995-05-31
The Department of Energy is responsible for the disposition of transuranic (TRU) waste generated by national defense-related activities. Approximately 2.6 million cubic feet of these wastes have been generated and are stored at various facilities across the country. The Waste Isolation Pilot Plant (WIPP) was sited and constructed to meet stringent disposal requirements. In order to permanently dispose of TRU waste, the DOE has elected to petition the US EPA for a variance from the Land Disposal Restrictions of RCRA. This document fulfills the reporting requirements for the petition. This report is volume 4 of the petition, which presents details about the transport characteristics across drum filter vents and polymer bags; gas generation reactions and rates during long-term WIPP operation; and geological characterization of the WIPP site.
Robertson, Brant E.; Stark, Dan P.; Ellis, Richard S.; Dunlop, James S.; McLure, Ross J.; McLeod, Derek
2014-12-01
Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ~35% at redshift z ~ 7 to ~65% at z ~ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.
Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.
2012-01-01
Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.
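The variance-reduction argument above is the classic portfolio effect: blending a volatile cost stream (gas, driven by fuel prices) with a nearly flat, uncorrelated one (wind) lowers the variance of the blended cost. A minimal numpy sketch, using made-up cost series rather than the report's PJM 1999-2008 data:

```python
import numpy as np

# Illustrative 10-year cost series ($/MWh); values are invented, not PJM data.
rng = np.random.default_rng(4)
gas = 60 + rng.normal(0, 15, 10)    # fuel-price-driven, volatile
wind = 70 + rng.normal(0, 2, 10)    # fuel-free, nearly flat year to year

# Blended system cost for increasing wind shares: mean shifts toward wind's
# cost, while the standard deviation (volatility) drops because the two
# series are uncorrelated and wind's variance is small.
for share in (0.0, 0.2, 0.4):
    mix = (1 - share) * gas + share * wind
    print(f"wind share {share:.0%}: mean ${mix.mean():.0f}/MWh, std ${mix.std():.1f}/MWh")
```

Whether the variance reduction is worth a higher mean cost is exactly the risk-return trade-off the report evaluates, and the answer differs between risk-averse consumers and investors.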
Waste Isolation Pilot Plant No-Migration Variance Petition. Revision 1, Volume 1
Hunt, Arlen
1990-03-01
The purpose of the WIPP No-Migration Variance Petition is to demonstrate, according to the requirements of RCRA {section}3004(d) and 40 CFR {section}268.6, that to a reasonable degree of certainty, there will be no migration of hazardous constituents from the facility for as long as the wastes remain hazardous. The DOE submitted the petition to the EPA in March 1989. Upon completion of its initial review, the EPA provided to DOE a Notice of Deficiencies (NOD). DOE responded to the EPA's NOD and met with the EPA's reviewers of the petition several times during 1989. In August 1989, EPA requested that DOE submit significant additional information addressing a variety of topics including: waste characterization, ground water hydrology, geology and dissolution features, monitoring programs, the gas generation test program, and other aspects of the project. This additional information was provided to EPA in January 1990 when DOE submitted Revision 1 of the Addendum to the petition. For clarity and ease of review, this document includes all of these submittals, and the information has been updated where appropriate. This document is divided into the following sections: Introduction, 1.0; Facility Description, 2.0; Waste Description, 3.0; Site Characterization, 4.0; Environmental Impact Analysis, 5.0; Prediction and Assessment of Infrequent Events, 6.0; and References, 7.0.
Pichugina, Yelena L.; Banta, Robert M.; Kelley, Neil D.; Jonkman, Bonnie J.; Tucker, Sara C.; Newsom, Rob K.; Brewer, W. A.
2008-08-01
Quantitative data on turbulence variables aloft--above the region of the atmosphere conveniently measured from towers--have been an important but difficult measurement need for advancing understanding and modeling of the stable boundary layer (SBL). Vertical profiles of streamwise velocity variances obtained from NOAA's High Resolution Doppler Lidar (HRDL), which have been shown to be numerically equivalent to turbulence kinetic energy (TKE) for stable conditions, are a measure of the turbulence in the SBL. In the present study, the mean horizontal wind component U and variance {sigma}{sup 2}{sub u} were computed from HRDL measurements of the line-of-sight (LOS) velocity using a technique described in Banta et al. (2002). The technique was tested on datasets obtained during the Lamar Low-Level Jet Project (LLLJP) carried out in early September 2003, near the town of Lamar in southeastern Colorado. This paper compares U with mean wind speed obtained from sodar and sonic anemometer measurements. It then describes several series of averaging tests that produced the best correlation between TKE calculated from sonic anemometer data at several tower levels and lidar measurements of horizontal velocity variance {sigma}{sup 2}{sub u}. The results show high correlation (0.71-0.97) of the mean U and average wind speed measured by sodar and in-situ instruments, independent of sampling strategies and averaging procedures. Comparison of estimates of variance, on the other hand, proved sensitive to both the spatial and temporal averaging techniques.
No-migration variance petition for the Waste Isolation Pilot Plant
Carnes, R.G.; Hart, J.S.; Knudtsen, K.
1990-01-01
The Waste Isolation Pilot Plant (WIPP) is a US Department of Energy (DOE) project to provide a research and development facility to demonstrate the safe disposal of radioactive waste resulting from US defense activities and programs. The DOE is developing the WIPP facility as a deep geologic repository in bedded salt for transuranic (TRU) waste currently stored at or generated by DOE defense installations. Approximately 60 percent of the wastes proposed to be emplaced in the WIPP are radioactive mixed wastes. Because such mixed wastes contain a hazardous chemical component, the WIPP is subject to requirements of the Resource Conservation and Recovery Act (RCRA). In 1984 Congress amended the RCRA with passage of the Hazardous and Solid Waste Amendments (HSWA), which established a stringent regulatory program to prohibit the land disposal of hazardous waste unless (1) the waste is treated to meet treatment standards or other requirements established by the Environmental Protection Agency (EPA) under {section}3004(n), or (2) the EPA determines that compliance with the land disposal restrictions is not required in order to protect human health and the environment. The DOE WIPP Project Office has prepared and submitted to the EPA a no-migration variance petition for the WIPP facility. The purpose of the petition is to demonstrate, according to the requirements of RCRA {section}3004(d) and 40 CFR {section}268.6, that to a reasonable degree of certainty, there will be no migration of hazardous constituents from the WIPP facility for as long as the wastes remain hazardous. This paper provides an overview of the petition and describes the EPA review process, including key issues that have emerged during the review. 5 refs.
Approval of a Permanent Variance Regarding Sprinklers and Fire Boundaries in Selected Areas of 221-H Canyon at the Savannah River Site
Approval of a Permanent Variance Regarding Fire Safety in Selected Areas of 221-H Canyon at the Savannah River Site UNDER SECRETARY OF ENERGY
CH2M WG Idaho, LLC, Request for Variance to Title 10, Code of Federal Regulations Part 851, "Worker Safety and Health"
Memorandum CH2M WG Idaho, LLC, Request for Variance to Title 10, Code of Federal Regulations Part 851, "Worker Safety and Health Program"
Memorandum Request for Concurrence on Three Temporary Variance Applications Regarding Fire Protection and Pressure Safety at the Oak Ridge National Laboratory
Pichugina, Y. L.; Banta, R. M.; Kelley, N. D.; Jonkman, B. J.; Tucker, S. C.; Newsom, R. K.; Brewer, W. A.
2008-08-01
Quantitative data on turbulence variables aloft--above the region of the atmosphere conveniently measured from towers--have been an important but difficult measurement need for advancing understanding and modeling of the stable boundary layer (SBL). Vertical profiles of streamwise velocity variances obtained from NOAA's high-resolution Doppler lidar (HRDL), which have been shown to be approximately equal to turbulence kinetic energy (TKE) for stable conditions, are a measure of the turbulence in the SBL. In the present study, the mean horizontal wind component U and variance {sigma}{sup 2}{sub u} were computed from HRDL measurements of the line-of-sight (LOS) velocity using a method described by Banta et al., which uses an elevation (vertical slice) scanning technique. The method was tested on datasets obtained during the Lamar Low-Level Jet Project (LLLJP) carried out in early September 2003, near the town of Lamar in southeastern Colorado. This paper compares U with mean wind speed obtained from sodar and sonic anemometer measurements. The results for the mean U and mean wind speed measured by sodar and in situ instruments for all nights of LLLJP show high correlation (0.71-0.97), independent of sampling strategies and averaging procedures, and correlation coefficients consistently >0.9 for four high-wind nights, when the low-level jet speeds exceeded 15 m s{sup -1} at some time during the night. Comparison of estimates of variance, on the other hand, proved sensitive to both the spatial and temporal averaging parameters. Several series of averaging tests are described, to find the best correlation between TKE calculated from sonic anemometer data at several tower levels and lidar measurements of horizontal-velocity variance {sigma}{sup 2}{sub u}. Because of the nonstationarity of the SBL data, the best results were obtained when the velocity data were first averaged over intervals of 1 min, and then further averaged over 3-15 consecutive 1-min intervals, with best results
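The 1-min / 3-15-interval averaging procedure described above can be sketched numerically. The toy record below is synthetic (a slow nonstationary ramp plus Gaussian turbulence, not HRDL data), and the function names are ours: computing the variance inside short blocks removes the slow trend, while a single variance over the whole record is badly inflated by it.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 1 Hz streamwise-velocity record: slow nonstationary ramp plus
# turbulence with sigma_u = 0.8 m/s (illustrative values only).
t = np.arange(3600)
u = 8.0 + 2e-3 * t + rng.normal(0.0, 0.8, t.size)

def block_variance(u, block_s=60, n_blocks=10):
    """Variance within each `block_s`-second interval (blockwise demeaning
    removes the slow trend), then averaged over `n_blocks` consecutive
    intervals, cf. the 1-min / 3-15-interval procedure described above."""
    blocks = u[: (u.size // block_s) * block_s].reshape(-1, block_s)
    per_block = blocks.var(axis=1)
    m = (per_block.size // n_blocks) * n_blocks
    return per_block[:m].reshape(-1, n_blocks).mean(axis=1)

print(np.round(block_variance(u), 2))   # close to the true 0.64 m^2/s^2
print(round(float(u.var()), 2))         # single-interval variance, trend-inflated
```

The contrast between the two printed values is the point: for nonstationary SBL data, the choice of averaging interval changes the variance estimate by a large factor.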
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
2014-06-15
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10{sup 4} to 10{sup 6} times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment
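The variance-based ranking described above can be illustrated with a minimal Monte Carlo sketch. The distributions, parameter values, and the simple binned estimator below are our assumptions for illustration, not the authors' implementation: draw random input sets, evaluate the linear-quadratic effect, and estimate each input's first-order sensitivity S = Var(E[Y|X]) / Var(Y).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50_000  # number of random evaluations, cf. the 10^4-10^6 runs in the abstract

# Hypothetical input distributions (illustrative values, not clinical data):
alpha = rng.normal(0.10, 0.03, N)   # Gy^-1, ~30% relative uncertainty
beta = rng.normal(0.05, 0.01, N)    # Gy^-2, ~20%
d = rng.normal(2.0, 0.1, N)         # dose per fraction (Gy), ~5%

# Evaluated function: biological effect per fraction in the linear-quadratic model.
effect = alpha * d + beta * d**2

def first_order_sensitivity(x, y, bins=50):
    """Variance-based first-order index S = Var(E[y|x]) / Var(y), by binning x."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    return float(np.average((cond_means - y.mean()) ** 2, weights=counts) / y.var())

for name, x in (("alpha", alpha), ("beta", beta), ("dose", d)):
    print(name, round(first_order_sensitivity(x, effect), 2))
```

S runs from 0 (no impact) to 1 (the only influential input), so the printed ranking identifies the parameter for which an uncertainty reduction would be most rewarding.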
Occupational Medicine Variance Request
U.S. Department of Energy (DOE) all webpages (Extended Search)
... These requirements may vary from state to state, and may include the requirements of the Health Insurance Portability and Accountability Act (HIPAA), which specifies recordkeeping ...
Yang Kai; Huang, Shih-Ying; Packard, Nathan J.; Boone, John M.
2010-07-15
Purpose: A simplified linear model approach was proposed to accurately model the response of a flat panel detector used for breast CT (bCT). Methods: Individual detector pixel mean and variance were measured from bCT projection images acquired both in air and with a polyethylene cylinder, with the detector operating in both fixed low gain and dynamic gain mode. Once the coefficients of the linear model are determined, the fractional additive noise can be used as a quantitative metric to evaluate the system's efficiency in utilizing x-ray photons, including the performance of different gain modes of the detector. Results: Fractional additive noise increases as the object thickness increases or as the radiation dose to the detector decreases. For bCT scan techniques on the UC Davis prototype scanner (80 kVp, 500 views total, 30 frames/s), in the low gain mode, additive noise contributes 21% of the total pixel noise variance for a 10 cm object and 44% for a 17 cm object. With the dynamic gain mode, additive noise only represents approximately 2.6% of the total pixel noise variance for a 10 cm object and 7.3% for a 17 cm object. Conclusions: The existence of the signal-independent additive noise is the primary cause for a quadratic relationship between bCT noise variance and the inverse of radiation dose at the detector. With the knowledge of the additive noise contribution to experimentally acquired images, system modifications can be made to reduce the impact of additive noise and improve the quantum noise efficiency of the bCT system.
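The simplified linear model above amounts to fitting pixel variance as a linear function of pixel mean: the slope reflects gain-scaled quantum (Poisson) noise and the intercept is the signal-independent additive noise. A hedged numpy sketch with synthetic detector data (gain and noise values are ours, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic detector: Poisson quantum noise with gain g, plus signal-independent
# additive (electronic) noise. Illustrative values only.
g, sigma_add = 4.0, 30.0
means = np.linspace(200, 4000, 20)   # mean pixel signal at several exposure levels
variances = np.array([
    np.var(rng.poisson(m / g, 20_000) * g + rng.normal(0, sigma_add, 20_000))
    for m in means
])

# Linear model: variance = slope * mean + intercept.
# slope ~ gain, intercept ~ additive noise variance.
slope, intercept = np.polyfit(means, variances, 1)

mean_thick = 500.0   # hypothetical signal behind a thick (attenuating) object
frac_additive = intercept / (slope * mean_thick + intercept)
print(f"gain ~ {slope:.1f}, additive variance ~ {intercept:.0f}, "
      f"additive fraction at low signal ~ {frac_additive:.0%}")
```

The fractional additive noise grows as the signal drops, which is the mechanism behind the abstract's observation that additive noise contributes more for thicker objects or lower dose.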
ARM - Publications: Science Team Meeting Documents: Variance...
U.S. Department of Energy (DOE) all webpages (Extended Search)
in shallow cumulus topped mixed layers is studied using large-eddy simulation (LES) results. The simulations are based on a range of different shallow cumulus cases,...
U.S. Energy Information Administration (EIA) (indexed site)
9: Natural Gas (CBECS89.A09). File layout (questionnaire item, description, variable name, position, format): CASEID Building identifier BLDGID4 1-5; Census region REGION4 7-7 $REGION.; Census division CENDIV4 9-9 $CENDIV.; B2 Square footage SQFTC4 11-12 $SQFTC.; Principal building activity PBA4 14-15 $ACTIVTY.; F3 Year construction was completed YRCONC4 17-18 $YRCONC.; P2 Interruptible natural gas service NGINTR4 20-20 $YESNO.; Adjusted weight ADJWT4 22-29; Variance stratum STRATUM4
ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1
Mosher, Scott W.; Johnson, Seth R.; Bevill, Aaron M.; Ibrahim, Ahmad M.; Daily, Charles R.; Evans, Thomas M.; Wagner, John C.; Johnson, Jeffrey O.; Grove, Robert E.
2015-08-01
The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.
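The tally figure of merit quoted above is the standard Monte Carlo metric FOM = 1 / (R² T), where R is the tally relative error and T the run time; variance reduction raises the FOM by shrinking R for the same T. A small sketch with hypothetical numbers (the error values are invented, not from the cited studies):

```python
# Standard Monte Carlo tally figure of merit: FOM = 1 / (R^2 * T).
def figure_of_merit(rel_error: float, minutes: float) -> float:
    return 1.0 / (rel_error ** 2 * minutes)

analog = figure_of_merit(0.10, 60.0)    # hypothetical analog MCNP run
weighted = figure_of_merit(0.02, 60.0)  # same runtime, smaller relative error
print(weighted / analog)                # a 5x error reduction is a 25x FOM gain
```

Because R shrinks like 1/sqrt(histories), a 25x FOM gain is equivalent to a 25x reduction in the computing time needed to reach a target precision.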
Permits and Variances for Solar Panels, Calculation of Impervious...
construction, or stormwater may only include the foundation or base supporting the solar panel. The law generally applies statewide, including charter counties and Baltimore...
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M. (Cedar Crest, NM); Ma, Tian J. (Albuquerque, NM)
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
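The core idea, inflating each pixel's error budget where spatial gradients make it jitter-sensitive before thresholding the difference frame, can be sketched as follows. This is our simplified reading with invented frames and parameters, not the patented estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical staring-sensor frames (illustrative only).
scene = rng.normal(100.0, 5.0, (64, 64))                 # static scene
reference = scene + rng.normal(0.0, 2.0, scene.shape)    # per-frame noise, sigma = 2
current = scene + rng.normal(0.0, 2.0, scene.shape)
current[30:34, 30:34] += 25.0                            # injected change to detect

raw_diff = current - reference

# Spatial error term: pixels on strong intensity gradients are more sensitive
# to sub-pixel camera jitter, so their error estimate is inflated.
gy, gx = np.gradient(reference)
jitter_px = 0.2                                          # assumed RMS jitter (pixels)
temporal_sigma = 2.0 * np.sqrt(2.0)                      # noise of a two-frame difference
pixel_sigma = np.hypot(temporal_sigma, jitter_px * np.hypot(gx, gy))

significant = np.abs(raw_diff) > 5.0 * pixel_sigma       # per-pixel threshold test
print("flagged pixels:", int(significant.sum()))
```

With a flat per-pixel threshold, jitter on bright edges would produce false alarms; weighting the threshold by the local gradient magnitude suppresses them while still flagging the genuine change.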
A Clock Synchronization Strategy for Minimizing Clock Variance...
Office of Scientific and Technical Information (OSTI)
Sponsoring Org: SC USDOE - Office of Science (SC) Country of Publication: United States Language: English Subject: Time service; clock synchronization; MPI; supercomputing; system ...
Sample variance in weak lensing: How many simulations are required?
Petri, Andrea; May, Morgan; Haiman, Zoltan
2016-03-24
Constraining cosmology using weak gravitational lensing consists of comparing a measured feature vector of dimension Nb with its simulated counterpart. An accurate estimate of the Nb × Nb feature covariance matrix C is essential to obtain accurate parameter confidence intervals. When C is measured from a set of simulations, an important question is how large this set should be. To answer this question, we construct different ensembles of Nr realizations of the shear field, using a common randomization procedure that recycles the outputs from a smaller number Ns ≤ Nr of independent ray-tracing N-body simulations. We study parameter confidence intervals as a function of (Ns, Nr) in the range 1 ≤ Ns ≤ 200 and 1 ≤ Nr ≲ 10⁵. Previous work [S. Dodelson and M. D. Schneider, Phys. Rev. D 88, 063537 (2013)] has shown that Gaussian noise in the feature vectors (from which the covariance is estimated) leads, at quadratic order, to an O(1/Nr) degradation of the parameter confidence intervals. Using a variety of lensing features measured in our simulations, including shear-shear power spectra and peak counts, we show that cubic and quartic covariance fluctuations lead to additional O(1/Nr²) error degradation that is not negligible when Nr is only a factor of a few larger than Nb. We study the large Nr limit, and find that a single 240 Mpc/h sized 512³-particle N-body simulation (Ns = 1) can be repeatedly recycled to produce as many as Nr = a few × 10⁴ shear maps whose power spectra and high-significance peak counts can be treated as statistically independent. Lastly, a small number of simulations (Ns = 1 or 2) is sufficient to forecast parameter confidence intervals at percent accuracy.
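Why Nr must comfortably exceed Nb can be seen already at the Gaussian level the abstract builds on: the inverse of a covariance matrix estimated from Nr draws is biased high by the known Hartlap factor (Nr - 1)/(Nr - Nb - 2). The sketch below is a toy check of that leading-order effect (Gaussian draws, our own parameters), not the paper's cubic/quartic analysis:

```python
import numpy as np

rng = np.random.default_rng(3)
Nb = 20                                      # feature vector dimension
true_cov = np.diag(np.linspace(1.0, 2.0, Nb))

def precision_bias(Nr, trials=200):
    """Mean ratio tr(C_hat^-1) / tr(C^-1) when C_hat is estimated from Nr draws."""
    ratios = []
    for _ in range(trials):
        x = rng.multivariate_normal(np.zeros(Nb), true_cov, size=Nr)
        C_hat = np.cov(x, rowvar=False)       # sample covariance from Nr realizations
        ratios.append(np.trace(np.linalg.inv(C_hat)) / np.trace(np.linalg.inv(true_cov)))
    return float(np.mean(ratios))

for Nr in (30, 100, 1000):
    hartlap = (Nr - 1) / (Nr - Nb - 2)       # analytic bias of the inverted estimate
    print(Nr, round(precision_bias(Nr), 2), round(hartlap, 2))
```

At Nr only slightly above Nb the inverse covariance, and hence any parameter confidence interval built from it, is badly biased; the bias decays roughly as Nb/Nr, which is the regime where the higher-order O(1/Nr²) terms studied in the paper also matter.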
A BASIS FOR MODIFYING THE TANK 12 COMPOSITE SAMPLING DESIGN
Shine, G.
2014-11-25
The SRR sampling campaign to obtain residual solids material from the Savannah River Site (SRS) Tank Farm Tank 12 primary vessel resulted in obtaining appreciable material in all 6 planned source samples from the mound strata but only in 5 of the 6 planned source samples from the floor stratum. Consequently, the design of the compositing scheme presented in the Tank 12 Sampling and Analysis Plan, Pavletich (2014a), must be revised. Analytical Development of SRNL statistically evaluated the sampling uncertainty associated with using various compositing arrays and splitting one or more samples for compositing. The variance of the simple mean of composite sample concentrations is a reasonable standard to investigate the impact of the following sampling options. Composite Sample Design Option (a). Assign only 1 source sample from the floor stratum and 1 source sample from each of the mound strata to each of the composite samples. Each source sample contributes material to only 1 composite sample. Two source samples from the floor stratum would not be used. Composite Sample Design Option (b). Assign 2 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This infers that one source sample from the floor must be used twice, with 2 composite samples sharing material from this particular source sample. All five source samples from the floor would be used. Composite Sample Design Option (c). Assign 3 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This infers that several of the source samples from the floor stratum must be assigned to more than one composite sample. All 5 source samples from the floor would be used. Using fewer than 12 source samples will increase the sampling variability over that of the Basic Composite Sample Design, Pavletich (2013). Considering the impact to the variance of the simple mean of the composite sample concentrations
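The comparison driving the options above, how sharing a floor sample among composites changes the variance of the simple mean, can be made concrete with a tiny linear-model sketch. The assignments below are hypothetical (2 composites, i.i.d. unit-variance sources), not the actual Tank 12 design: if each composite is a simple mean of its assigned sources, the mean of the composite means is a weighted sum w of the sources, and its variance is w·w.

```python
import numpy as np

# Illustrative mini-version: 11 unit-variance source samples
# (indices 0-4 floor, 5-10 mound), composited into 2 composite samples.
designs = {
    "no_sharing":   [[0, 5, 6, 7], [1, 8, 9, 10]],        # cf. Option (a): some floor samples unused
    "share_floor0": [[0, 1, 5, 6, 7], [0, 2, 8, 9, 10]],  # cf. Option (b): floor sample 0 reused
}

results = {}
for name, composites in designs.items():
    A = np.zeros((len(composites), 11))        # A[i, j] = weight of source j in composite i
    for i, members in enumerate(composites):
        A[i, members] = 1.0 / len(members)     # each composite is a simple mean
    w = A.mean(axis=0)                         # source weights in the mean of composite means
    results[name] = float(w @ w)               # Var(mean) = w'w for i.i.d. unit-variance sources
    print(name, results[name])
```

Reusing a source sample correlates the composites, but it can still lower the variance of the mean by bringing more source material into the average, which is the trade-off the SRNL evaluation quantifies.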
No-migration variance petition. Appendices C--J: Volume 5, Revision 1
Not Available
1990-03-01
Volume V contains the appendices for: closure and post-closure plans; RCRA ground water monitoring waiver; Waste Isolation Division Quality Program Manual; water quality sampling plan; WIPP Environmental Procedures Manual; sample handling and laboratory procedures; data analysis; and Annual Site Environmental Monitoring Report for the Waste Isolation Pilot Plant.
No-migration variance petition. Volume 3, Revision 1: Appendix B, Attachments A through D
Not Available
1990-03-01
Volume III contains the following attachments: TRUPACT-II content codes (TRUCON); TRUPACT-II chemical list; chemical compatibility analysis for Rocky Flats Plant waste forms (Appendix 2.10.12 of TRUPACT-II safety analysis report); and chemical compatibility analyses for waste forms across all sites.
No-migration variance petition. Appendices A--B: Volume 2, Revision 1
Not Available
1990-03-01
Volume II contains Appendix A, emergency plan and Appendix B, waste analysis plan. The Waste Isolation Pilot Plant (WIPP) Emergency plan and Procedures (WP 12-9, Rev. 5, 1989) provides an organized plan of action for dealing with emergencies at the WIPP. A contingency plan is included which is in compliance with 40 CFR Part 265, Subpart D. The waste analysis plan provides a description of the chemical and physical characteristics of the wastes to be emplaced in the WIPP underground facility. A detailed discussion of the WIPP Waste Acceptance Criteria and the rationale for its established units are also included.
Illinois Waiver letter on variances from UL ruling on E85 dispensers
Alternative Fuels and Advanced Vehicles Data Center
Waste Isolation Pilot Plant No-migration variance petition. Addendum: Volume 7, Revision 1
Not Available
1990-03-01
This report describes various aspects of the Waste Isolation Pilot Plant (WIPP) including design data, waste characterization, dissolution features, ground water hydrology, natural resources, monitoring, general geology, and the gas generation/test program.
Microsoft PowerPoint - Snippet 5.4 PARS II Analysis-Variance...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
In PARS II under the SSS Reports selection on the left, there are folders to the right. ... warning signs of future problems. To drill down, we need to view other PARS II reports. ...
Orthogonal control of expression mean and variance by epigenetic features at different genomic loci
Dey, Siddharth S.; Foley, Jonathan E.; Limsirichai, Prajit; Schaffer, David V.; Arkin, Adam P.
2015-05-05
While gene expression noise has been shown to drive dramatic phenotypic variations, the molecular basis for this variability in mammalian systems is not well understood. Gene expression has been shown to be regulated by promoter architecture and the associated chromatin environment. However, the exact contribution of these two factors in regulating expression noise has not been explored. Using a dual-reporter lentiviral model system, we deconvolved the influence of the promoter sequence to systematically study the contribution of the chromatin environment at different genomic locations in regulating expression noise. By integrating a large-scale analysis to quantify mRNA levels by smFISH and protein levels by flow cytometry in single cells, we found that mean expression and noise are uncorrelated across genomic locations. Furthermore, we showed that this independence could be explained by the orthogonal control of mean expression by the transcript burst size and noise by the burst frequency. Finally, we showed that genomic locations displaying higher expression noise are associated with more repressed chromatin, thereby indicating the contribution of the chromatin environment in regulating expression noise.
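One common way to read the burst size / burst frequency decomposition above uses the stationary moments of the standard bursty-transcription model, where mean = f·b/g and CV² ≈ (1 + b)/mean. Under that approximation (our choice of model and parameter values, not the paper's data), increasing burst size raises the mean at nearly constant noise, while increasing burst frequency lowers the noise:

```python
# Standard bursty-transcription approximation: mean = f*b/g (burst frequency f,
# mean burst size b, degradation rate g); noise CV^2 ~ (1 + b)/mean.
# Parameter values are illustrative placeholders.
def mean_and_cv2(f, b, g=1.0):
    mean = f * b / g
    return mean, (1.0 + b) / mean

m1, n1 = mean_and_cv2(f=1.0, b=20.0)   # baseline
m2, n2 = mean_and_cv2(f=1.0, b=40.0)   # double burst size: mean up, noise ~unchanged
m3, n3 = mean_and_cv2(f=2.0, b=20.0)   # double burst frequency: noise halves
print((m1, round(n1, 3)), (m2, round(n2, 3)), (m3, round(n3, 3)))
```

This is the sense in which burst size and burst frequency act as nearly orthogonal knobs on mean and noise, matching the chromatin-dependent control reported above.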
No-migration variance petition. Appendix B, Attachments E--Q: Volume 4, Revision 1
Not Available
1990-03-01
Volume IV contains the following attachments: TRU mixed waste characterization database; hazardous constituents of Rocky Flats transuranic waste; summary of waste components in TRU waste sampling program at INEL; total volatile organic compounds (VOC) analyses at Rocky Flats Plant; total metals analyses from Rocky Flats Plant; results of toxicity characteristic leaching procedure (TCLP) analyses; results of extraction procedure (EP) toxicity data analyses; summary of headspace gas analysis in Rocky Flats Plant (RFP) -- sampling program FY 1988; waste drum gas generation -- sampling program at Rocky Flats Plant during FY 1988; TRU waste sampling program -- volume one; TRU waste sampling program -- volume two; summary of headspace gas analyses in TRU waste sampling program; and summary of volatile organic compounds (VOC) analyses in TRU waste sampling program.
Fischer, N.T.
1990-03-01
This document reports data collected as part of the Ecological Monitoring Program (EMP) at the Waste Isolation Pilot Plant near Carlsbad, New Mexico, for Calendar Year 1987. Also included are data from the last quarter (October through December) of 1986. This report divides data collection activities into two parts. Part A covers general environmental monitoring which includes meteorology, aerial photography, air quality monitoring, water quality monitoring, and wildlife population surveillance. Part B focuses on the special studies being performed to evaluate the impacts of salt dispersal from the site on the surrounding ecosystem. The fourth year of salt impact monitoring was completed in 1987. These studies involve the monitoring of soil chemistry, soil microbiota, and vegetation in permanent study plots. None of the findings indicate that the WIPP project is adversely impacting environmental quality at the site. As in 1986, breeding bird censuses completed this year indicate changes in the local bird fauna associated with the WIPP site. The decline in small mammal populations noted in the 1986 census is still evident in the 1987 data; however, populations are showing signs of recovery. There is no indication that this decline is related to WIPP activities. Rather, the evidence indicates that natural population fluctuations may be common in this ecosystem. The salt impact studies continue to reveal some short-range transport of salt dust from the saltpiles. This material accumulates at or near the soil surface during the dry seasons in areas near the saltpiles, but is flushed deeper into the soil during the rainy season. Microbial activity does not appear to be affected by this salt importation. Vegetation coverage and density data from 1987 also do not show any detrimental effect associated with aerial dispersal of salt.
Letter to Joseph N. Herndon from Bruce M. Diamond, Assistant General Counsel for Environment, dated September 19, 2008.
Worseck, Gabor; Xavier Prochaska, J. [Department of Astronomy and Astrophysics, UCO/Lick Observatory, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); McQuinn, Matthew [Department of Astronomy, University of California, 601 Campbell Hall, Berkeley, CA 94720 (United States); Dall'Aglio, Aldo; Wisotzki, Lutz [Astrophysikalisches Institut Potsdam, An der Sternwarte 16, 14482 Potsdam (Germany); Fechner, Cora; Richter, Philipp [Institut fuer Physik und Astronomie, Universitaet Potsdam, Karl-Liebknecht-Str. 24/25, 14476 Potsdam (Germany); Hennawi, Joseph F. [Max-Planck-Institut fuer Astronomie, Koenigstuhl 17, 69117 Heidelberg (Germany); Reimers, Dieter, E-mail: gworseck@ucolick.org [Hamburger Sternwarte, Universitaet Hamburg, Gojenbergsweg 112, 21029 Hamburg (Germany)
2011-06-01
We report on the detection of strongly varying intergalactic He II absorption in HST/COS spectra of two z{sub em} {approx_equal} 3 quasars. From our homogeneous analysis of the He II absorption in these and three archival sightlines, we find a marked increase in the mean He II effective optical depth from <{tau}{sub eff},He{sub ii}>{approx_equal}1 at z {approx_equal} 2.3 to <{tau}{sub eff},He{sub ii}>{approx}>5 at z {approx_equal} 3.2, but with a large scatter of 2{approx}<{tau}{sub eff},He{sub ii}{approx}<5 at 2.7 < z < 3 on scales of {approx}10 proper Mpc. This scatter is primarily due to fluctuations in the He II fraction and the He II-ionizing background, rather than density variations that are probed by the coeval H I forest. Semianalytic models of He II absorption require a strong decrease in the He II-ionizing background to explain the strong increase of the absorption at z {approx}> 2.7, probably indicating He II reionization was incomplete at z{sub reion} {approx}> 2.7. Likewise, recent three-dimensional numerical simulations of He II reionization qualitatively agree with the observed trend only if He II reionization completes at z{sub reion} {approx_equal} 2.7 or even below, as suggested by a large {tau}{sub eff},He{sub ii}{approx}>3 in two of our five sightlines at z < 2.8. By doubling the sample size at 2.7 {approx}< z {approx}< 3, our newly discovered He II sightlines for the first time probe the diversity of the second epoch of reionization when helium became fully ionized.
U.S. Energy Information Administration (EIA) (indexed site)
File02: (file02cb83.csv) BLDGID2 Building ID STR402 Half-sample stratum PAIR402 Half-sample pair number SQFTC2 Square footage SQFTC17. BCWM2C Principal activity BCWOM25. ...
Y-12's Training and Technology instructor's story – Terry...
U.S. Department of Energy (DOE) all webpages (Extended Search)
stories about things that took place at TAT. There were people from every race, every religion and every social stratum there, so you can imagine. Most of them, however, can't be...
Method for in situ heating of hydrocarbonaceous formations
Little, William E.; McLendon, Thomas R.
1987-01-01
A method for extracting valuable constituents from underground hydrocarbonaceous deposits such as heavy crude, tar sands, and oil shale is disclosed. Initially, a stratum containing a rich deposit is hydraulically fractured to form a horizontally extending fracture plane. A conducting liquid and proppant are then injected into the fracture plane to form a conducting plane. Electrical excitations are then introduced into the stratum adjacent the conducting plane to retort the rich stratum along the conducting plane. The valuable constituents from the stratum adjacent the conducting plane are then recovered. Subsequently, the remainder of the deposit is also combustion retorted to further recover valuable constituents from the deposit. Various R.F. heating systems are also disclosed for use in the present invention.
EVENT TREE ANALYSIS AT THE SAVANNAH RIVER SITE: A CASE HISTORY
Williams, R
2009-05-25
At the Savannah River Site (SRS), a Department of Energy (DOE) installation in west-central South Carolina, there is a unique geologic stratum at depth that has the potential to cause surface settlement resulting from a seismic event. In the past the stratum in question has been remediated via pressure grouting; however, the benefits of remediation have always been debatable. Recently the SRS has attempted to frame the issue in terms of risk via an event tree or logic tree analysis. This paper describes that analysis, including the input data required.
RAPID/Roadmap/18-HI-d | Open Energy Information
Open Energy Information (Open El) [EERE & EIA]
Variance from Pollution Control (18-HI-d): A variance is required to discharge water pollutants in excess of applicable...
Determination of Dusty Particle Charge Taking into Account Ion Drag
Ramazanov, T. S.; Dosbolayev, M. K.; Jumabekov, A. N.; Amangaliyeva, R. Zh.; Orazbayev, S. A.; Petrov, O. F.; Antipov, S. N.
2008-09-07
This work is devoted to the experimental estimation of the charge of a dust particle levitating in the stratum of a dc glow discharge. The particle charge is determined from the balance between the ion drag, gravitational, and electric forces. The electric force is obtained from the axial distribution of the light intensity of the strata.
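The force balance described above can be sketched in a few lines. This is an illustrative calculation only: the function name, the sign convention (electric force balancing gravity plus ion drag), and all numerical values are assumptions, not taken from the paper.

```python
# Hypothetical force-balance estimate of a dust grain's charge in a dc
# discharge stratum: the electric force Q*E balances gravity plus ion drag.
# Sign convention and magnitudes are illustrative only.
G = 9.81  # gravitational acceleration, m/s^2


def dust_charge(mass_kg, e_field_v_per_m, ion_drag_n):
    """Charge in coulombs from Q*E = m*g + F_ion (illustrative balance)."""
    return (mass_kg * G + ion_drag_n) / e_field_v_per_m
```

With no ion drag, a 1 pg grain in a 1 kV/m field carries roughly 1e-14 C under this toy balance; the measured axial light-intensity profile would supply the field value in practice.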
Impact of federal regulations on the small coal mine in Appalachia. Final report
Davis, B.; Ferrell, R.
1980-11-01
This report contains the results of a study of the total costs of compliance with federal regulations of coal mines in Eastern Kentucky. The mines were stratified by tonnage per year and employment. Mail and personal interview surveys were conducted for each stratum. Survey results attempt to suggest the competitive position of small concerns and to form a basis for necessary modifications in regulations.
Burke, Timothy Patrick; Kiedrowski, Brian; Martin, William R.; Brown, Forrest B.
2015-08-27
Kernel density estimators (KDEs) show potential for reducing variance in global solutions (flux, reaction rates) when compared to histogram solutions.
Safety and Health Regulatory and Policy Response Line | Department...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
by DOE's Office of the General Counsel and must follow the procedures described in 10 CFR 851. More information on variances to 10 CFR 851 can be found on the eVariance website ...
U.S. Department of Energy (DOE) all webpages (Extended Search)
August We are your source for reliable, up-to-date news and information; our scientists and engineers can provide technical insights on our innovations for a secure nation. Artist's rendition of a cross section of skin layers (stratum corneum, epidermis, and dermis) showing topical application of an ionic liquid for combating a skin-borne bacterial infection. The ionic liquid can be formulated with ... Breakthrough antibacterial approach could resolve serious skin infections. Like a protective
Module 6 - Metrics, Performance Measurements and Forecasting
This module reviews metrics such as cost and schedule variance along with cost and schedule performance indices.
Gauging apparatus and method, particularly for controlling mining by a mining machine
Campbell, J.A.; Moynihan, D.J.
1980-04-29
An apparatus and method are claimed for controlling the mining by a mining machine of a seam of material (e.g., coal) overlying or underlying a stratum of undesired material (e.g., clay), to reduce the quantity of undesired material mined with the desired material, the machine comprising a cutter movable up and down and adapted to cut down into a seam of coal on being lowered. The control apparatus comprises a first electrical signal constituting a slow-down signal, automatically operated to signal when the cutter has cut down into a seam of desired material generally to a predetermined depth short of the interface between the seam and the underlying stratum, for slowing down the cutting rate as the cutter approaches the interface; and a second electrical signal, automatically operated subsequent to the first signal, for signalling when the cutter has cut down through the seam to the interface, for stopping the cutting operation, thereby avoiding mining undesired material with the desired material. Similar signalling may be provided on an upward cut to avoid cutting into the overlying stratum.
Parameters Covariance in Neutron Time of Flight Analysis Explicit Formulae
Odyniec, M.; Blair, J.
2014-12-01
We present a method that estimates the parameter variances in a parametric model for neutron time of flight (NToF). The analytical formulae for the parameter variances, obtained independently of the calculation of parameter values from measured data, express the variances in terms of the choice, settings, and placement of the detector and the oscilloscope. Consequently, the method can serve as a tool in planning a measurement setup.
U.S. Energy Information Administration (EIA) (indexed site)
File02: (file02_cb83.csv) BLDGID2 Building ID STR402 Half-sample stratum PAIR402 Half-sample pair number SQFTC2 Square footage $SQFTC17. BCWM2C Principal activity $BCWOM25. YRCONC2C Year constructed $YRCONC15 REGION2 Census region $REGION13 XSECWT2 Cross-sectional weight ELSUPL2N Supplier reported electricity use $YESNO15. NGSUPL2N Supplier reported natural gas use $YESNO15. FKSUPL2N Supplier reported fuel oil use $YESNO15. STSUPL2N Supplier reported steam use $YESNO15. PRSUPL2N Supplier
Geothermal Glossary | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Glossary Geothermal Glossary This list contains terms related to geothermal energy and technologies./ A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z A Ambient Natural condition of the environment at any given time. / Aquifer Water-bearing stratum of permeable sand, rock, or gravel./ Back to Top/ B Baseload Plants Electricity-generating units that are operated to meet the constant or minimum load on the system. The cost of energy from such
Detection probabilities for random inspection in variable flow situations
Lu, Ming-Shih
1994-03-01
Improvements in the efficiency and effectiveness of inventory-change verification are necessary at certain nuclear facilities, one example being low-enriched uranium fuel fabrication facilities. The Safeguards Criteria suggested carrying out interim inventory-change verifications with randomized inspections. This paper describes randomized inspection schemes for inventory-change verifications, evaluates the achievable detection probabilities for realistic plant receipt and shipment schedules and stratum residence times as a function of the inspection frequency and effort, and compares these with the existing inspection strategies.
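The detection probability in random-sampling schemes of this kind is commonly computed from the hypergeometric distribution: if n of a stratum's N items are verified and d have been diverted, detection requires drawing at least one diverted item. The abstract does not give the paper's exact formulation, so the following is a generic sketch with assumed names.

```python
from math import comb


def detection_probability(N, d, n):
    """P(at least one of d diverted items appears in a random sample of n
    items drawn without replacement from a stratum of N items).

    Standard hypergeometric result: 1 - C(N-d, n) / C(N, n)."""
    if d <= 0:
        return 0.0
    if n > N - d:  # sample exceeds the non-diverted population: certain hit
        return 1.0
    return 1 - comb(N - d, n) / comb(N, n)
```

For a single diverted item in a stratum of 100, verifying 10 items gives a 10% detection probability, which is why inspection frequency and effort trade off directly against the achievable detection goal.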
Geological reasons for rapid water encroachment in wells at Sutorma oil field
Arkhipov, S.V.; Dvorak, S.V.; Sonich, V.P.; Nikolayeva, Ye.V.
1987-12-01
The Sutorma oil field on the northern Surgut dome is one of the new fields in West Siberia. It came into production in 1982, but already by 1983 it was found that the water contents in the fluids produced were much greater than the design values. The adverse effects are particularly pronounced for the main reservoir at the deposit, the BS_10^2 stratum. Later, similar problems occurred at other fields in the Noyabrsk and Purpey regions. It is therefore particularly important to elucidate the geological reasons for water encroachment.
Microsoft PowerPoint - ARMST2007_mp.ppt
U.S. Department of Energy (DOE) all webpages (Extended Search)
height resolved vertical velocity, and turbulence derived from the horizontal variance of radar Doppler velocity Method 1) Identify regions containing cloud liquid (see E. Luke et...
Gasoline and Diesel Fuel Update
Gas Company, for example, on Tuesday, October 21, issued a system overrun limitation (SOL) that allows for penalties on variances between flows and nominations. The SOL is in...
Natural Gas Weekly Update, Printer-Friendly Version
Gas Company, for example, on Tuesday, October 21, issued a system overrun limitation (SOL) that allows for penalties on variances between flows and nominations. The SOL is in...
Sub-daily Statistical Downscaling of Meteorological Variables...
Office of Scientific and Technical Information (OSTI)
and variance that was accurate within 1% for all variables except atmospheric pressure, wind speed, and precipitation. Correlations between downscaled output and the expected...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
... Steps 7 - 12 Page 12 OTBOTS Example OTBOTS Example Using SPA * Eliminate Both Cost and Schedule Variances * Least preferred method * May require retroactive changes to in-process ...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
X Cum CPI 3 Period Moving Average 06302013 07312013 08312013 09302013 1031... format * Advantage of this report is Excel Sort feature to view variances from ...
Paducah Public Documents | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
of Kentucky, and the public); and, using maps and figures, summarize the potential PGDP ... vision so that variances between it and the current cleanup strategy can be identified. ...
EVMS Training Snippet: 5.2 PARSII Analysis: Data Validity Reports...
Forms and Templates More Documents & Publications EVMS Training Snippet: 5.4 PARSII Analysis: Variance Reports EVMS Training Snippet: 5.5 PARSII Analysis: Trend Reports EVMS ...
Do financial investors destabilize the oil price?
... Moreover, we test whether the inefficient financial trading shock (iii) increased the ... To test for this, we generate the variance decomposition and the historical decomposition ...
ARM - Publications: Science Team Meeting Documents
U.S. Department of Energy (DOE) all webpages (Extended Search)
vertical and horizontal components, variance and vertical flux of the prognostic thermodynamic variables as well as momentum flux are also presented. The most interesting aspect...
Nevada State Environmental Commission | Open Energy Information
Open Energy Information (Open El) [EERE & EIA]
variance requests is selected program areas administrated by NDEP as well as ratify air pollution enforcement actions (settlement agreements). Nevada State Environmental...
CV, VAC, & EAC Trends - Management Reserve (MR) Log - Performance Index trends (WBS Level) - Variance Analysis Cumulative (WBS Level) * EAC Reasonableness - CPI vs. TCPI (PMB ...
U.S. Energy Information Administration (EIA) (indexed site)
Proceedings from the ACEEE Summer Study on Energy Efficiency in Buildings, 1992 17. Error terms are heteroscedastic when the variance of the error terms is not constant but,...
Search for: All records | SciTech Connect
Office of Scientific and Technical Information (OSTI)
... Functional analysis of variance shows that the error of individual models has substantial dependence on the weather situation. The machine-learning approach effectively ...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Research Community Climate Model (CCM2). The CSU eterized in terms of the grid cell mean and subgrid RAMS cloud microphysics parameterization predicts mass variance of...
AUDIT REPORT The Department of Energy's Energy Information Technology...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
... costs to services provided by EITS, tracks allocation decisions, and maintains historical records, and that has the capability to track budget to execution variance and forecasting ...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Estimated Subsequent Reporting Period Accrued Costs g. Planned h. Actual i. Variance j. ... integrated cost, labor, and schedule information for rapid analysis and trend forecasting. ...
Earned Value Management System RM
Office of Environmental Management (EM)
... variances from the plan and forecasting the impacts Providing data to ... The review topics include: Direct costs determination Indirect costs determination ...
Environmental Compliance Performance Scorecard – Second...
Office of Environmental Management (EM)
... D&D Fund Deposit, West Valley Demonstration Project MILESTONE FIELD ID MILESTONE NAME MILESTONE DESCRIPTION VARIANCE NARRATIVE FORECAST DATE ACTUAL DATE REGULATORY AGREEMENT NAME ...
FINAL-1ST-QUARTER-FY-2014-SCORECARD-08-23-14.xlsx
Office of Environmental Management (EM)
... D&D Fund Deposit, West Valley Demonstration Project MILESTONE FIELD ID MILESTONE NAME MILESTONE DESCRIPTION VARIANCE NARRATIVE FORECAST DATE ACTUAL DATE REGULATORY AGREEMENT NAME ...
FINAL-3RD-QUARTER-FY-2013-SCORECARD-01-31-14.xlsx
Office of Environmental Management (EM)
... D&D Fund Deposit, West Valley Demonstration Project MILESTONE FIELD ID MILESTONE NAME MILESTONE DESCRIPTION VARIANCE NARRATIVE FORECAST DATE ACTUAL DATE REGULATORY AGREEMENT NAME ...
SITE-LEVEL SUMMARY (4Q) of FINAL-4TH-QUARTER-FY-2014-SCORECARD...
Office of Environmental Management (EM)
... FORECAST DATE ACTUAL DATE EA DATE STATUS NARRATIVE VARIANCE NARRATIVE REGULATORY AGREEMENT NAME DESIGNATIONS GREEN SHADED DATES ARE 4TH QUARTER FY 2014 MILESTONES ARRA Project: ...
Environmental Compliance Performance Scorecard – Fourth...
Office of Environmental Management (EM)
... Valley Demonstration Project MILESTONE FIELD ID MILESTONE NAME MILESTONE DESCRIPTION BASELINE COMPLETION DATE STATUS NARRATIVE VARIANCE NARRATIVE FORECAST DATE ACTUAL DATE FINAL ...
REVISED-FINAL-1ST-QUARTER-FY-2013-SCORECARD-09-05-13.xlsx
Office of Environmental Management (EM)
... D&D Fund Deposit, West Valley Demonstration Project MILESTONE FIELD ID MILESTONE NAME MILESTONE DESCRIPTION VARIANCE NARRATIVE FORECAST DATE ACTUAL DATE REGULATORY AGREEMENT NAME ...
SITE-LEVEL SUMMARY of FINAL-2ND-QUARTER-FY-2014-SCORECARD-02...
Office of Environmental Management (EM)
... FORECAST DATE ACTUAL DATE EA DATE STATUS NARRATIVE VARIANCE NARRATIVE REGULATORY AGREEMENT NAME DESIGNATIONS GREEN SHADED DATES ARE 2ND QUARTER FY 2014 MILESTONES ARRA Project: ...
Environmental Compliance Performance Scorecard – Third...
Office of Environmental Management (EM)
... GREEN SHADED DATES ARE 3RD QUARTER FY 2010 MILESTONES ARRA Project: N BRNL-0030-003 ... National Laboratory VARIANCE NARRATIVE FORECAST DATE ACTUAL DATE REGULATORY AGREEMENT ...
Environmental Compliance Performance Scorecard – First...
Office of Environmental Management (EM)
... D&D Fund Deposit, West Valley Demonstration Project MILESTONE FIELD ID MILESTONE NAME MILESTONE DESCRIPTION VARIANCE NARRATIVE FORECAST DATE ACTUAL DATE FINAL FIRST QUARTER OF FY ...
FINAL-4TH-QUARTER-FY-2012-SCORECARD-04-10-13.xlsx
Office of Environmental Management (EM)
... D&D Fund Deposit, West Valley Demonstration Project MILESTONE FIELD ID MILESTONE NAME MILESTONE DESCRIPTION VARIANCE NARRATIVE FORECAST DATE ACTUAL DATE REGULATORY AGREEMENT NAME ...
SITE-LEVEL SUMMARY (2Q) of FINAL-2ND-QUARTER-FY-2015-SCORECARD...
Office of Environmental Management (EM)
... FORECAST DATE ACTUAL DATE EA DATE STATUS NARRATIVE VARIANCE NARRATIVE REGULATORY AGREEMENT NAME DESIGNATIONS GREEN SHADED DATES ARE 2ND QUARTER FY 2015 MILESTONES ARRA Project: ...
SITE-LEVEL SUMMARY of FINAL-4TH-QUARTER-FY-2013-SCORECARD-05...
Office of Environmental Management (EM)
... D&D Fund Deposit, West Valley Demonstration Project MILESTONE FIELD ID MILESTONE NAME MILESTONE DESCRIPTION VARIANCE NARRATIVE FORECAST DATE ACTUAL DATE REGULATORY AGREEMENT NAME ...
SITE-LEVEL SUMMARY (3Q) of FINAL-3RD-QUARTER-FY-2014-SCORECARD...
Office of Environmental Management (EM)
... FORECAST DATE ACTUAL DATE EA DATE STATUS NARRATIVE VARIANCE NARRATIVE REGULATORY AGREEMENT NAME DESIGNATIONS GREEN SHADED DATES ARE 3RD QUARTER FY 2014 MILESTONES ARRA Project: ...
SITE-LEVEL SUMMARY of FINAL-3RD-QUARTER-FY-2012-SCORECARD-01...
Office of Environmental Management (EM)
... D&D Fund Deposit, West Valley Demonstration Project MILESTONE FIELD ID MILESTONE NAME MILESTONE DESCRIPTION VARIANCE NARRATIVE FORECAST DATE ACTUAL DATE REGULATORY AGREEMENT NAME ...
Environmental Compliance Performance Scorecard – First...
Office of Environmental Management (EM)
... D&D Fund Deposit, West Valley Demonstration Project MILESTONE FIELD ID MILESTONE NAME MILESTONE DESCRIPTION VARIANCE NARRATIVE FORECAST DATE ACTUAL DATE REGULATORY AGREEMENT NAME ...
FINAL-2ND-QUARTER-FY-2013-SCORECARD-12-16-13.xlsx
Office of Environmental Management (EM)
... D&D Fund Deposit, West Valley Demonstration Project MILESTONE FIELD ID MILESTONE NAME MILESTONE DESCRIPTION VARIANCE NARRATIVE FORECAST DATE ACTUAL DATE REGULATORY AGREEMENT NAME ...
Environmental Compliance Performance Scorecard – Fourth...
Office of Environmental Management (EM)
... D&D Fund Deposit, West Valley Demonstration Project MILESTONE FIELD ID MILESTONE NAME MILESTONE DESCRIPTION VARIANCE NARRATIVE FORECAST DATE ACTUAL DATE FINAL 4TH QUARTER OF FY ...
Module 6 - Metrics, Performance Measurements and Forecasting...
This module reviews metrics such as cost and schedule variance along with cost and schedule performance indices. In addition, this module will outline forecasting tools such as ...
Fuel cell stack monitoring and system control
Keskula, Donald H.; Doan, Tien M.; Clingerman, Bruce J.
2004-02-17
A control method for monitoring a fuel cell stack in a fuel cell system in which the actual voltage and actual current from the fuel cell stack are monitored. A preestablished relationship between voltage and current over the operating range of the fuel cell is established. A variance value between the actual measured voltage and the expected voltage magnitude for a given actual measured current is calculated and compared with a predetermined allowable variance. An output is generated if the calculated variance value exceeds the predetermined variance. The predetermined voltage-current for the fuel cell is symbolized as a polarization curve at given operating conditions of the fuel cell.
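The monitoring logic in this patent abstract reduces to a simple comparison against a preestablished polarization curve. The sketch below is a minimal illustration, not the patented implementation: the linear voltage-current model, the threshold value, and all names are assumptions.

```python
# Minimal sketch of the fuel cell stack monitoring logic described above.
# The linear polarization model and the allowed-variance threshold are
# illustrative placeholders, not values from the patent.


def expected_voltage(current_a, v_open=1.0, r_ohmic=0.002):
    """Toy polarization curve: V = V_oc - I*R over the operating range."""
    return v_open - current_a * r_ohmic


def check_stack(v_measured, i_measured, allowed_variance=0.05):
    """Return True (generate an output) when the measured voltage deviates
    from the polarization-curve expectation by more than the allowance."""
    variance = abs(v_measured - expected_voltage(i_measured))
    return variance > allowed_variance
```

At 50 A the toy curve expects 0.9 V, so a measured 0.9 V passes while 0.8 V trips the output; a real controller would use the stack's measured polarization curve at the given operating conditions.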
National Nuclear Security Administration (NNSA)
... AEC U.S. Atomic Energy Commission AIP Agreement in Principle AIRFA American Indian Religious Freedom Act ANOVA Analysis of Variance APCD Air Pollution Control Division ARLSORD Air ...
Mercury In Soils Of The Long Valley, California, Geothermal System...
Open Energy Information (Open El) [EERE & EIA]
Additional samples were collected in an analysis of variance design to evaluate natural variability in soil composition with sampling interval distance. The primary...
A Post-Monte-Carlo Sensitivity Analysis Code
Energy Science and Technology Software Center (OSTI)
2000-04-04
SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e., with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
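The ranking idea behind such post-Monte-Carlo sensitivity analysis can be sketched with a crude stand-in measure: the squared Pearson correlation between each input sample and the output, which approximates the fraction of output variance a linearly acting input explains. This is an illustrative proxy, not SATOOL's actual algorithm, and all names are assumed.

```python
def variance_ranking(samples, output):
    """Rank input variables by squared Pearson correlation with the output,
    a crude proxy for each input's contribution to the output variance.
    `samples` maps variable name -> list of sampled values (same length as
    `output`). Illustrative only; not SATOOL's algorithm."""
    n = len(output)
    my = sum(output) / n
    sy = sum((y - my) ** 2 for y in output) ** 0.5
    scores = {}
    for name, xs in samples.items():
        mx = sum(xs) / n
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, output))
        scores[name] = (cov / (sx * sy)) ** 2
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

An input that drives the output linearly scores near 1 and heads the ranking; refining such "sensitive" inputs with lower variance is then what shrinks the output variance.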
Uncertainty quantification for evaluating impacts of caprock...
Office of Scientific and Technical Information (OSTI)
Generalized cross-validation and analysis of variance methods were used to quantitatively ... and are believed can partially reflect the risk of fault reactivation and seismicity. ...
TITLE AUTHORS SUBJECT SUBJECT RELATED DESCRIPTION PUBLISHER AVAILABILI...
Office of Scientific and Technical Information (OSTI)
and variance that was accurate within 1% for all variables except atmospheric pressure, wind speed, and precipitation. Correlations between downscaled output and the expected...
"Title","Creator/Author","Publication Date","OSTI Identifier...
Office of Scientific and Technical Information (OSTI)
and variance that was accurate within 1% for all variables except atmospheric pressure, wind speed, and precipitation. Correlations between downscaled output and the expected...
Evaluation of three lidar scanning strategies for turbulence measurements
Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; Sathe, Ameya; Bonin, Timothy A.; Chilson, Phillip B.; Muschinski, Andreas
2016-05-03
Several errors occur when a traditional Doppler beam swinging (DBS) or velocity–azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60% under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20% at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
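The vertical-beam correction described above amounts to removing an estimated w-variance contamination term from the raw horizontal variance. The exact functional form in the study depends on the scan geometry and is not given in the abstract, so the following is only a schematic sketch with an assumed contamination factor.

```python
def corrected_horizontal_variance(var_u_raw, var_w, contamination_factor=1.0):
    """Schematic vertical-beam correction: subtract an assumed w-variance
    contamination term from the raw u-variance estimate, clipped at zero.
    The contamination factor is a placeholder; the study's actual correction
    depends on the lidar's scan geometry."""
    return max(var_u_raw - contamination_factor * var_w, 0.0)
```

Under unstable conditions, where var_w (and hence the contamination) is largest, this kind of subtraction is what pulled the WindCube estimates down by over 20% in the study.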
Evaluation of three lidar scanning strategies for turbulence measurements
Newman, J. F.; Klein, P. M.; Wharton, S.; Sathe, A.; Bonin, T. A.; Chilson, P. B.; Muschinski, A.
2015-11-24
Several errors occur when a traditional Doppler-beam swinging (DBS) or velocity–azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60% under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20% at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
McKee, Rodney A.; Walker, Frederick J.
2003-11-25
A crystalline oxide-on-semiconductor structure and a process for constructing the structure involve a substrate of silicon, germanium, or a silicon-germanium alloy and an epitaxial thin film overlying the surface of the substrate, wherein the thin film consists of a first epitaxial stratum of single atomic plane layers of an alkaline earth oxide designated generally as (AO)_n and a second stratum of single unit cell layers of an oxide material designated as (A'BO_3)_m, so that the multilayer film arranged upon the substrate surface is designated (AO)_n(A'BO_3)_m, wherein n is an integer repeat of single atomic plane layers of the alkaline earth oxide AO and m is an integer repeat of single unit cell layers of the A'BO_3 oxide material. Within the multilayer film, the values of n and m have been selected to provide the structure with a desired electrical structure at the substrate/thin-film interface that can be optimized to control band offset and alignment.
Hanford Site performance summary -- EM funded programs, July 1995
Schultz, E.A.
1995-07-01
Performance data for July 1995 reflect a 4% unfavorable schedule variance, an improvement over June 1995. The majority of the behind-schedule condition is attributed to EM-30 (Office of Waste Management). The majority of the EM-30 schedule variance is associated with the Tank Waste Remediation System (TWRS) Program. The TWRS schedule variance is attributed to the delay in obtaining key decision 0 (KD-0) for Project W-314, "Tank Farm Restoration and Safe Operations," and to the Multi-Function Waste Tank Facility (MWTF) workscope still being part of the baseline. Baseline Change Requests (BCRs) are in process to rebaseline Project W-314 and delete the MWTF from the TWRS baseline. Once the BCRs are approved and implemented, the overall schedule variance will be reduced to $15.0 million. Seventy-seven enforceable agreement milestones were scheduled FYTD. Seventy-one (92%) of the seventy-seven were completed on or ahead of schedule, two were completed late, and four are delinquent. Performance data reflect a continued significant favorable cost variance of $124.3 million (10%). The cost variance is attributed to process improvements/efficiencies, elimination of low-value work, and workforce reductions, and is expected to continue for the remainder of this fiscal year. A portion of the cost variance is attributed to a delay in billings, which should self-correct by fiscal year-end.
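The schedule and cost variances quoted in performance summaries like this one follow the standard earned-value definitions (SV = BCWP − BCWS, CV = BCWP − ACWP, with percentages taken against the budgeted baseline). A minimal sketch, with illustrative dollar figures chosen to match a $124.3M favorable cost variance at 10%:

```python
# Standard earned-value variance formulas (values in $ millions below are
# illustrative, not taken from the Hanford report's actual BCWP/ACWP).


def schedule_variance(bcwp, bcws):
    """SV = BCWP - BCWS; negative means behind schedule."""
    return bcwp - bcws


def cost_variance(bcwp, acwp):
    """CV = BCWP - ACWP; positive means under cost."""
    return bcwp - acwp


def variance_percent(variance, baseline):
    """Variance expressed as a percentage of the budgeted baseline."""
    return 100.0 * variance / baseline
```

For example, BCWP of $1243M against ACWP of $1118.7M yields CV = $124.3M, i.e. a 10% favorable cost variance, while a negative SV of the same scale would read as the report's "unfavorable schedule variance."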
Interpretation of Geoelectric Structure at Hululais Prospect Area, South Sumatra
Mulyadi
1995-01-01
Schlumberger resistivity surveys were conducted in 1993 as part of a combined geological and geophysical program to investigate a geothermal prospect in the Hululais area, Southern Sumatra. These resistivity data resolved the upper conductive layer and were interpreted to define the shallow extent of a possible geothermal system. A follow-up magnetotelluric (MT) survey was carried out to probe deeper than the dc resistivity survey achieved. However, the resistive substratum below the conductive layer was still poorly resolved. Possible reasons for this include preferential channeling of the telluric current within the thick, very conductive shallow layer, thus limiting the penetration depth of the magnetotelluric signals, and poor resolution due to high noise levels caused by significant rain and sferics.
Hsu, Bertrand D.; Leonard, Gary L.
1988-01-01
A fuel injection system particularly adapted for injecting coal slurry fuels at high pressures includes an accumulator-type fuel injector which utilizes high-pressure pilot fuel as a purging fluid to prevent hard particles in the fuel from impeding the opening and closing movement of a needle valve, and as a hydraulic medium to hold the needle valve in its closed position. A fluid passage in the injector delivers an appropriately small amount of the ignition-aiding pilot fuel to an appropriate region of a chamber in the injector's nozzle so that at the beginning of each injection interval the first stratum of fuel to be discharged consists essentially of pilot fuel and thereafter mostly slurry fuel is injected.
Detailed Studies of Hydrocarbon Radicals: C2H Dissociation
Wittig, Curt
2014-10-06
A novel experimental technique was examined whose goal was the ejection of radical species into the gas phase from a platform (film) of cold non-reactive material. The underlying principle was one of photo-initiated heat release in a stratum that lies below a layer of CO2 or a layer of amorphous solid water (ASW) and CO2. A molecular precursor to the radical species of interest is deposited near or on the film's surface, where it can be photo-dissociated. It proved infeasible to avoid the rampant formation of fissures, as opposed to large "flakes." This led to many interesting results, but caused us to abort the scheme as a means of launching cold C2H radicals into the gas phase. A journal article resulted that is germane to astrophysics but not combustion chemistry.
Solar and Wind Easements & Local Option Rights Laws
Minnesota law also allows local zoning boards to restrict development for the purpose of protecting access to sunlight. In addition, zoning bodies may create variances in zoning rules in...
August 2012 Electrical Safety Occurrences
was the path of the light circuit as depicted on the site map. The locate did give a true signal of depth and variance of an underground utility. When the excavation, which was...
Microsoft PowerPoint - Snippet 1.4 EVMS Stage 2 Surveillance...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
... standard readable format (e.g., X12, XML format), EVMS monthly reports; EVM variance ... standard readable format (e.g., X12, XML format); risk management plans; the EVM ...
U.S. Department of Energy (DOE) all webpages (Extended Search)
spread, the energy spectrum of atoms scattered by a given angle is approximately Gaussian, with a variance and a centroid E_c given by ...
Model-Based Sampling and Inference
U.S. Energy Information Administration (EIA) (indexed site)
... Sarndal, C.-E., Swensson, B. and Wretman, J. (1992), Model Assisted Survey Sampling, Springer- Verlag. Steel, P.M. and Shao, J. (1997), "Estimation of Variance Due to Imputation in ...
ARM - Evaluation Product - MICROBASE Ensemble Data Products ...
U.S. Department of Energy (DOE) all webpages (Extended Search)
This data set is processed with a variance-based method Chen et al., 2014 that enables a ... This dataset facilitates objective validation of climate models against cloud retrievals ...
Boundary Layer Cloud Turbulence Characteristics
U.S. Department of Energy (DOE) all webpages (Extended Search)
Boundary Layer Cloud Turbulence Characteristics. Virendra Ghate, Bruce Albrecht. Parameters rated for Observational Readiness (/10) and Modeling Need (/10): Cloud Boundaries (9, 9); Cloud Fraction; Variance; Skewness; Up/Downdraft coverage; Dominant Freq. signal; Dissipation rate; Observation-Modeling Interface.
Solar Water Heating Requirement for New Residential Construction
Office of Energy Efficiency and Renewable Energy (EERE)
As of January 1, 2010, building permits may not be issued for new single-family homes that do not include a SWH system. The state energy resources coordinator may provide a variance for this...
Posters A Stratiform Cloud Parameterization for General Circulation...
U.S. Department of Energy (DOE) all webpages (Extended Search)
P(w) is the probability distribution of vertical velocity, determined from the predicted mean and variance of vertical velocity. Application to a Single-Column Model To test the...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Environmental Compliance Performance Scorecard - Third...
Office of Environmental Management (EM)
U.S. Department of Energy (DOE) all webpages (Extended Search)
... RL-0041 B Reactor Cost variance -0.2: Subcontract cost higher than planned YTD, still forecasting a slight underrun at year-end. Site Business Management Linda Pickard, Vice ...
Fuel cell stack monitoring and system control
Keskula, Donald H.; Doan, Tien M.; Clingerman, Bruce J.
2005-01-25
A control method for monitoring a fuel cell stack in a fuel cell system in which the actual voltage and actual current from the fuel cell stack are monitored. A preestablished relationship between voltage and current over the operating range of the fuel cell is established. A variance value between the actual measured voltage and the expected voltage magnitude for a given actual measured current is calculated and compared with a predetermined allowable variance. An output is generated if the calculated variance value exceeds the predetermined variance. The predetermined voltage-current for the fuel cell is symbolized as a polarization curve at given operating conditions of the fuel cell. Other polarization curves may be generated and used for fuel cell stack monitoring based on different operating pressures, temperatures, hydrogen quantities.
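The monitoring logic described above can be sketched in a few lines. A minimal sketch, assuming a tabulated polarization curve at fixed operating conditions; the curve values, threshold, and function names are hypothetical illustrations, not the patented implementation.

```python
import numpy as np

# Hypothetical polarization curve (stack current -> expected stack voltage),
# tabulated at fixed operating conditions; values are illustrative only.
CURVE_I = np.array([0.0, 50.0, 100.0, 150.0, 200.0])   # A
CURVE_V = np.array([95.0, 88.0, 82.0, 75.0, 65.0])     # V

def check_stack(i_meas: float, v_meas: float, allowed_variance: float = 5.0) -> bool:
    """Return True (generate an output) if the measured stack voltage
    deviates from the polarization-curve expectation by more than the
    predetermined allowable variance."""
    v_expected = float(np.interp(i_meas, CURVE_I, CURVE_V))
    variance = abs(v_meas - v_expected)
    return variance > allowed_variance

# A reading near the curve raises no flag; a badly degraded one does.
print(check_stack(100.0, 81.0))
print(check_stack(100.0, 70.0))
```

Different operating pressures and temperatures would simply swap in a different (CURVE_I, CURVE_V) table, matching the abstract's family of polarization curves.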
Factors Controlling The Geochemical Evolution Of Fumarolic Encrustatio...
Open Energy Information (Open El) [EERE & EIA]
Smokes (VTTS). The six-factor solution model explains a large proportion (low of 74% for Ni to high of 99% for Si) of the individual element data variance. Although the primary...
Gate fidelity fluctuations and quantum process invariants
Magesan, Easwar; Emerson, Joseph [Institute for Quantum Computing and Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada); Blume-Kohout, Robin [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)
2011-07-15
We characterize the quantum gate fidelity in a state-independent manner by giving an explicit expression for its variance. The method we provide can be extended to calculate all higher order moments of the gate fidelity. Using these results, we obtain a simple expression for the variance of a single-qubit system and deduce the asymptotic behavior for large-dimensional quantum systems. Applications of these results to quantum chaos and randomized benchmarking are discussed.
Veil, J.A.; VanKuiken, J.C.; Folga, S.; Gillette, J.L.
1993-01-01
Many power plants discharge large volumes of cooling water. In some cases, the temperature of the discharge exceeds state thermal requirements. Section 316(a) of the Clean Water Act (CWA) allows a thermal discharger to demonstrate that less stringent thermal effluent limitations would still protect aquatic life. About 32% of the total steam electric generating capacity in the United States operates under Section 316(a) variances. In 1991, the US Senate proposed legislation that would delete Section 316(a) from the CWA. This study, presented in two companion reports, examines how this legislation would affect the steam electric power industry. This report quantitatively and qualitatively evaluates the energy and environmental impacts of deleting the variance. No evidence exists that Section 316(a) variances have caused any widespread environmental problems. Conversion from once-through cooling to cooling towers would result in a loss of plant output of 14.7-23.7 billion kilowatt-hours. The cost to make up the lost energy is estimated at $12.8-$23.7 billion (in 1992 dollars). Conversion to cooling towers would increase emission of pollutants to the atmosphere and water loss through evaporation. The second report describes alternatives available to plants that currently operate under the variance and estimates the national cost of implementing such alternatives. Little justification has been found for removing the 316(a) variance from the CWA.
Influential input classification in probabilistic multimedia models
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.; Geng, Shu
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
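The screening idea (freeze one input at its nominal value and see how much of the outcome variance disappears) can be illustrated with a toy model. The model, distributions, and variable names below are invented for illustration; this is not the authors' multimedia fate model.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2, x3):
    # Toy outcome dominated by x1, mildly sensitive to x2, barely to x3.
    return x1 * np.exp(0.1 * x2) + 0.01 * x3

def outcome_variance(frozen=None, n=100_000):
    """Monte Carlo variance of the outcome; optionally freeze one input
    at its mean to gauge its contribution to total outcome variance."""
    draws = {
        "x1": rng.lognormal(0.0, 0.5, n),
        "x2": rng.normal(0.0, 1.0, n),
        "x3": rng.uniform(0.0, 1.0, n),
    }
    means = {"x1": np.exp(0.5**2 / 2), "x2": 0.0, "x3": 0.5}
    if frozen is not None:
        draws[frozen] = np.full(n, means[frozen])
    return model(draws["x1"], draws["x2"], draws["x3"]).var()

total = outcome_variance()
for name in ("x1", "x2", "x3"):
    reduction = 1.0 - outcome_variance(frozen=name) / total
    print(f"{name}: ~{100 * reduction:.0f}% of outcome variance")
```

Inputs whose freezing barely changes the outcome variance (x3 here) can be treated as constants, which is exactly the "minimum set of stochastic inputs" question the paper addresses.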
Variation and correlation of hydrologic properties
Wang, J.S.Y.
1991-06-01
Hydrological properties vary within a given geological formation and even more so among different soil and rock media. The variance of the saturated permeability is shown to be related to the variance of the pore-size distribution index of a given medium by a simple equation. This relationship is deduced by comparison of the data from Yucca Mountain, Nevada (Peters et al., 1984), Las Cruces, New Mexico (Wierenga et al., 1989), and Apache Leap, Arizona (Rasmussen et al., 1990). These and other studies in different soils and rocks also support the Poiseuille-Carmen relationship between the mean value of saturated permeability and the mean value of capillary radius. Correlations of the mean values and variances between permeability and pore-geometry parameters can lead us to better quantification of heterogeneous flow fields and better understanding of the scaling laws of hydrological properties.
System level analysis and control of manufacturing process variation
Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.
2005-05-31
A computer-implemented method is provided for determining the variability of a manufacturing system having a plurality of subsystems. Each subsystem of the plurality of subsystems is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are then fitted to each subsystem to determine unknown coefficients for use in the response models that characterize the relationship between the signal factors, noise factors, control factors, and the corresponding output response having mean and variance values that are related to the signal factors, noise factors, and control factors. The response models for each subsystem are coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems and values of signal factors and control factors are found to optimize the output of the manufacturing system to meet a specified criterion.
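A minimal sketch of coupling fitted response models and randomly varying their coefficients is shown below; the two linear subsystems, coefficient means, and covariances are hypothetical stand-ins for real fitted models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical fitted response models for two coupled subsystems:
# subsystem 1's output feeds subsystem 2 as its signal factor.
def subsystem1(signal, noise, b):
    return b[0] + b[1] * signal + b[2] * noise

def subsystem2(signal, noise, b):
    return b[0] + b[1] * signal + b[2] * noise

n = 50_000
# Randomly vary the fitted coefficients (mean and covariance would come
# from the fits) to propagate coefficient uncertainty through the chain.
b1 = rng.multivariate_normal([1.0, 2.0, 0.5], np.diag([0.01, 0.02, 0.01]), n)
b2 = rng.multivariate_normal([0.5, 1.5, 0.3], np.diag([0.01, 0.01, 0.01]), n)
noise1 = rng.normal(0.0, 1.0, n)
noise2 = rng.normal(0.0, 1.0, n)

y1 = subsystem1(signal=2.0, noise=noise1, b=b1.T)
y2 = subsystem2(signal=y1, noise=noise2, b=b2.T)
print(round(y2.mean(), 2), round(y2.var(), 2))
```

The system-level mean and variance of y2 then feed an outer optimization over the signal and control factors, as the patent abstract describes.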
Water Vapor Turbulence Profiles in Stationary Continental Convective Mixed Layers
Turner, D. D.; Wulfmeyer, Volker; Berg, Larry K.; Schween, Jan
2014-10-08
The U.S. Department of Energy Atmospheric Radiation Measurement (ARM) program's Raman lidar at the ARM Southern Great Plains (SGP) site in north-central Oklahoma has collected water vapor mixing ratio (q) profile data more than 90% of the time since October 2004. Three hundred (300) cases were identified where the convective boundary layer was quasi-stationary and well-mixed for a 2-hour period, and q mean, variance, third order moment, and skewness profiles were derived from the 10-s, 75-m resolution data. These cases span the entire calendar year, and demonstrate that the q variance profile at the mixed layer (ML) top changes seasonally, but is more related to the gradient of q across the interfacial layer. The q variance at the top of the ML shows only weak correlations (r < 0.3) with sensible heat flux, Deardorff convective velocity scale, and turbulence kinetic energy measured at the surface. The median q skewness profile is most negative at 0.85 zi, zero at approximately zi, and positive above zi, where zi is the depth of the convective ML. The spread in the q skewness profiles is smallest between 0.95 zi and zi. The q skewness at altitudes between 0.6 zi and 1.2 zi is correlated with the magnitude of the q variance at zi, with increasingly negative values of skewness observed lower down in the ML as the variance at zi increases, suggesting that in cases with larger variance at zi there is deeper penetration of the warm, dry free tropospheric air into the ML.
Gajjar, Rachna M.; Kasting, Gerald B.
2014-11-15
The overall goal of this research was to further develop and improve an existing skin diffusion model by experimentally confirming the predicted absorption rates of topically-applied volatile organic compounds (VOCs) based on their physicochemical properties, the skin surface temperature, and the wind velocity. In vitro human skin permeation of two hydrophilic solvents (acetone and ethanol) and two lipophilic solvents (benzene and 1,2-dichloroethane) was studied in Franz cells placed in a fume hood. Four doses of each {sup 14}C-radiolabeled compound were tested: 5, 10, 20, and 40 μL cm{sup −2}, corresponding to specific doses ranging in mass from 5.0 to 63 mg cm{sup −2}. The maximum percentage of radiolabel absorbed into the receptor solutions for all test conditions was 0.3%. Although the absolute absorption of each solvent increased with dose, percentage absorption decreased. This decrease was consistent with the concept of a stratum corneum deposition region, which traps small amounts of solvent in the upper skin layers, decreasing the evaporation rate. The diffusion model satisfactorily described the cumulative absorption of ethanol; however, values for the other VOCs were underpredicted in a manner related to their ability to disrupt or solubilize skin lipids. In order to more closely describe the permeation data, significant increases in the stratum corneum/water partition coefficients, K{sub sc}, and modest changes to the diffusion coefficients, D{sub sc}, were required. The analysis provided strong evidence for both skin swelling and barrier disruption by VOCs, even by the minute amounts absorbed under these in vitro test conditions. - Highlights: • Human skin absorption of small doses of VOCs was measured in vitro in a fume hood. • The VOCs tested were ethanol, acetone, benzene and 1,2-dichloroethane. • Fraction of dose absorbed for all compounds at all doses tested was less than 0.3%. • The more aggressive VOCs absorbed at higher levels than
Statistical Analysis of Tank 5 Floor Sample Results
Shine, E. P.
2013-01-31
Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements
SUPERIMPOSED MESH PLOTTING IN MCNP
J. HENDRICKS
2001-02-01
The capability to plot superimposed meshes has been added to MCNP{trademark}. MCNP4C featured a superimposed mesh weight window generator which enabled users to set up geometries without having to subdivide geometric cells for variance reduction. The variance reduction was performed with weight windows on a rectangular or cylindrical mesh superimposed over the physical geometry. Experience with the new capability was favorable but also indicated that a number of enhancements would be very beneficial, particularly a means of visualizing the mesh and its values. The mathematics for plotting the mesh and its values is described here along with a description of other upgrades.
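The weight-window game those superimposed meshes carry can be sketched as follows; the window bounds, splitting rule, and survival weight below are illustrative choices, not MCNP's exact rules.

```python
import random

random.seed(1)

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Return a list of (possibly split or rouletted) particle weights.

    Above the window: split into m particles of weight/m.
    Below the window: play Russian roulette, surviving with a boosted weight.
    Inside the window: leave the particle unchanged.
    Expected total weight is preserved in every branch (unbiased)."""
    if weight > w_high:
        m = int(weight / w_high) + 1
        return [weight / m] * m
    if weight < w_low:
        w_survive = (w_low + w_high) / 2.0
        if rng.random() < weight / w_survive:
            return [w_survive]   # survives with boosted weight
        return []                # killed by roulette
    return [weight]

print(apply_weight_window(5.0, 0.5, 2.0))   # split case
print(apply_weight_window(0.1, 0.5, 2.0))   # roulette case
```

In mesh-based variance reduction, (w_low, w_high) would come from the mesh cell the particle currently occupies rather than being fixed constants.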
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
Anderson, Dale; Selby, Neil
2012-08-14
Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis: it correctly partitions the error to include model error as a component of variance, and it correctly reduces the station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope ({beta} = 1, Selby et al.), while the improved standard error 'fails to reject' H0.
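Fisher's and Tippett's combination rules mentioned above are short enough to state directly; the p-values in the usage example are hypothetical.

```python
import math

def fisher_combined_p(pvalues):
    """Fisher's method: T = -2 * sum(ln p_i) ~ chi-square with 2k dof under H0.
    For k = 2 tests the chi-square(4) survival function has the closed form
    exp(-t/2) * (1 + t/2), used here to stay dependency-free."""
    k = len(pvalues)
    assert k == 2, "closed-form tail implemented for two tests only"
    t = -2.0 * sum(math.log(p) for p in pvalues)
    return math.exp(-t / 2.0) * (1.0 + t / 2.0)

def tippett_combined_p(pvalues):
    """Tippett's method: screen on the smallest p-value; under independence
    the combined p-value is 1 - (1 - min p)^k."""
    k = len(pvalues)
    return 1.0 - (1.0 - min(pvalues)) ** k

# Two single-phenomenology screening tests (e.g., Ms:mb and depth),
# with hypothetical p-values:
print(fisher_combined_p([0.04, 0.20]))
print(tippett_combined_p([0.04, 0.20]))
```

Fisher aggregates moderate evidence from every test, while Tippett reacts only to the single strongest one; which behaves better depends on how the phenomenologies fail.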
Shukla, K. K.; Phanikumar, D. V.; Newsom, Rob K.; Kumar, Niranjan; Ratnam, Venkat; Naja, M.; Singh, Narendra
2014-03-01
A Doppler lidar was installed at Manora Peak, Nainital (29.4 N, 79.2 E, 1958 m amsl) to estimate mixing layer height at this site for the first time, using vertical velocity variance as the basic measurement parameter, for the period September-November 2011. The mixing layer height is found to be located ~0.57 +/- 0.10 and 0.45 +/- 0.05 km AGL during day and nighttime, respectively. The mixing layer height estimates show good correlation (R > 0.8) between different instruments and between different methods. Our results show that the wavelet covariance transform is a robust method for mixing layer height estimation.
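For illustration, a simpler threshold-based estimate of mixing layer height from a vertical-velocity variance profile is sketched below (the authors favor the wavelet covariance transform, which is not implemented here); the profile and threshold value are synthetic.

```python
import numpy as np

def mixing_layer_height(heights, w_variance, threshold=0.1):
    """Estimate mixing layer height as the lowest altitude at which the
    vertical-velocity variance profile falls below a turbulence threshold
    (m^2 s^-2). Heights in km AGL; the threshold value is illustrative."""
    below = np.flatnonzero(w_variance < threshold)
    return heights[below[0]] if below.size else heights[-1]

# Idealized daytime profile: vigorous turbulence in the mixed layer,
# decaying sharply above ~0.6 km.
z = np.arange(0.1, 1.6, 0.1)
var_w = 0.4 * np.exp(-((z / 0.6) ** 4))
print(mixing_layer_height(z, var_w))
```

The wavelet covariance transform instead locates the sharpest transition in the profile, which makes it less sensitive to the (arbitrary) threshold choice.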
Portable measurement system for soil resistivity and application to Quaternary clayey sediment
Nakagawa, Koichi; Morii, Takeo
1999-07-01
A simple device to measure electrical resistivity has been developed for field and laboratory use. The measurement system comprises a probe unit, a current wave generator, an amplifier, an A/D converter, and a data acquisition unit with an RS-232C interface and a notebook personal computer. The system is applicable to soils and soft rocks as long as the probe needles can pierce them. The frequency range of the measurement system extends from 100 Hz to 10 MHz, and the total error of the system is less than 5%. In situ measurements of resistivity, together with shear resistance measured by a pocket-sized penetrometer, were applied to Pleistocene clayey beds. Some laboratory tests were also conducted to support the interpretation of the in situ resistivity. Marine and non-marine clayey sediments differ in resistivity, both as measured in situ in the stratum and in clay suspensions sampled from the strata. Physical and mechanical properties were compared with the resistivity, and general relationships among them were explored to clarify the characteristics of inter-particle bonding. A possible mechanism for the peculiar weathering of clayey sediment or mudstone beds, conspicuous especially near the ground surface, is discussed from the viewpoint of physico-chemical processes.
Furth, A.J.; Burke, G.K.; Deutsch, W.L. Jr.
1997-12-31
The City of Philadelphia's Division of Aviation (DOA) has begun construction of a new commuter runway, designated as Runway 8-26, at the Philadelphia International Airport. A portion of this runway will be constructed over a former Superfund site known as the Enterprise Avenue Landfill, which for many years was used to dispose of solid waste incinerator ash and other hazardous materials. The site was clay capped in the 1980s, but in order for the DOA to use the site, additional remediation was needed to meet US EPA final closure requirements. One component of the closure plan included installation of a low-permeability horizontal barrier above a very thin (approximately 0.61 to 0.91 meters) natural clay stratum which underlies an approximately 1020 m{sup 2} area of the landfill footprint, so as to ensure that a minimum 1.52-meter-thick low-permeability barrier exists beneath the entire 150,000 m{sup 2} landfill. The new barrier was constructed using jet grouting techniques to achieve remote excavation and replacement of the bottom 0.91 meters of the waste mass with a low-permeability grout. The grout was formulated to meet the low permeability, low elastic modulus, and compressive strength requirements of the project design. This paper discusses the advantages of using jet grouting for the work and details the development of the grout mixture, modeling of the grout zone under load, field construction techniques, performance monitoring, and verification testing.
Conversion of borehole Stoneley waves to channel waves in coal
Johnson, P.A.; Albright, J.N.
1987-01-01
Evidence for the mode conversion of borehole Stoneley waves to stratigraphically guided channel waves was discovered in data from a crosswell acoustic experiment conducted between wells penetrating thin coal strata located near Rifle, Colorado. Traveltime moveout observations show that borehole Stoneley waves, excited by a transmitter positioned at substantial distances in one well above and below a coal stratum at 2025 m depth, underwent partial conversion to a channel wave propagating away from the well through the coal. In an adjacent well the channel wave was detected at receiver locations within the coal, and borehole Stoneley waves, arising from a second partial conversion of channel waves, were detected at locations above and below the coal. The observed channel wave is inferred to be the third-higher Rayleigh mode based on comparison of the measured group velocity with theoretically derived dispersion curves. The identification of the mode conversion between borehole and stratigraphically guided waves is significant because coal penetrated by multiple wells may be detected without placing an acoustic transmitter or receiver within the waveguide. 13 refs., 6 figs., 1 tab.
Arinbasarov, M.U.; Murygina, V.P.; Mats, A.A.
1995-12-31
The pilot area of the Vyngapour oil field allotted for MIOR tests contains three injection and three producing wells. These wells were treated in the summers of 1993 and 1994. Before, during, and after the MIOR treatments on the pilot area, the chemical composition of the injected and formation waters was studied, as well as the amount and species of microorganisms entering the stratum with the injected water and the indigenous bacteria present in the bottomhole zones of the wells. The monitoring results showed that the bottomhole zone of the injection well already harbored a biocenosis of heterotrophic, hydrocarbon-oxidizing, methanogenic, and sulfate-reducing bacteria, which in addition were permanently introduced into the reservoir during the usual waterflooding. The nutritious composition activated the vital functions of all bacterial species present in the bottomhole zone of the injection well. The formation waters from producing wells showed an increase in the content of nitrate, sulfate, phosphate, and bicarbonate ions by the end of MIOR. The amount of hydrocarbon-oxidizing bacteria in the formation waters of producing wells increased by one order of magnitude. The chemical and biological monitoring revealed activation of the formation microorganisms, but no transport of food-industry waste bacteria through the formation from injection to producing wells was found.
Oil field experiments of microbial improved oil recovery in Vyngapour, West Siberia, Russia
Murygina, V.P.; Mats, A.A.; Arinbasarov, M.U.; Salamov, Z.Z.; Cherkasov, A.B.
1995-12-31
Experiments on microbial improved oil recovery (MIOR) have been performed in the Vyngapour oil field in West Siberia for two years. Now, the product of some producing wells of the Vyngapour oil field is 98-99% water cut. The operation of such wells approaches an economic limit. The nutritious composition containing local industry wastes and sources of nitrogen, phosphorus, and potassium was pumped into an injection well on the pilot area. This method is called "nutritional flooding." The mechanism of nutritional flooding is based on intensification of the biosynthesis of oil-displacing metabolites by indigenous bacteria and by bacteria from food industry wastes in the stratum. 272.5 m{sup 3} of nutritious composition was introduced into the reservoir during the summer of 1993, and 450 m{sup 3} in 1994. The positive effect of the injections in 1993 showed up in 2-2.5 months and reached its maximum 7 months after the injections were stopped. By July 1, 1994, 2,268.6 tons of oil had been produced over the base variant, and simultaneous water extraction was reduced by 33,902 m{sup 3} compared with the base variant. The injections in 1994 were carried out on the same pilot area.
North-South non-Gaussian asymmetry in Planck CMB maps
Bernui, A.; Oliveira, A.F.; Pereira, T.S. E-mail: adhimar@unifei.edu.br
2014-10-01
We report the results of a statistical analysis performed with the four foreground-cleaned Planck maps by means of a suitably defined local-variance estimator. Our analysis shows a clear dipolar structure in Planck's variance map pointing in the direction (l,b) {approx} (220,-32), thus consistent with the North-South asymmetry phenomenon. Surprisingly, and contrary to previous findings, removing the CMB quadrupole and octopole makes the asymmetry stronger. Our results show a maximal statistical significance of 98.1% CL in the multipole range {ell} = 4 to {ell} = 500. Additionally, through exhaustive analyses of the four foreground-cleaned and individual-frequency Planck maps, we find it unlikely that residual foregrounds could be causing this dipole variance asymmetry. Moreover, we find that the dipole has lower amplitudes for larger masks, evidencing that most of the contribution to the variance dipole comes from a region near the galactic plane. Finally, our results are robust against different foreground-cleaning procedures, different Planck masks, pixelization parameters, and the addition of inhomogeneous real noise.
Gas-storage calculations yield accurate cavern, inventory data
Mason, R.G.
1990-07-02
This paper discusses how determining gas-storage cavern size and inventory variance is now possible with calculations based on shut-in cavern surveys. The method is the least expensive of three major methods and is quite accurate when recorded over a period of time.
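The kind of shut-in-survey calculation alluded to can be sketched with the real-gas law; the reference conditions, z-factor handling, and numbers below are simplified illustrations and not the article's method.

```python
def cavern_inventory_scm(p_kpa, t_kelvin, v_m3, z_factor):
    """Standard cubic meters of gas in a cavern from a shut-in survey,
    via the real-gas law n = pV / (zRT), referenced to standard conditions
    (101.325 kPa, 288.15 K). A highly simplified sketch: real surveys
    also correct for brine level, temperature gradients, and composition."""
    p_std, t_std = 101.325, 288.15
    return (p_kpa * v_m3 / (z_factor * t_kelvin)) * (t_std / p_std)

# Hypothetical cavern: 300,000 m^3 at 10 MPa and 310 K, z = 0.88.
print(round(cavern_inventory_scm(10_000.0, 310.0, 300_000.0, 0.88)))
```

Repeating the survey over time and differencing the inventories against metered injections/withdrawals is what exposes the inventory variance the paper discusses.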
Latin square three dimensional gage master
Jones, Lynn L.
1982-01-01
A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
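A cyclic construction makes the Latin-square property behind the gage master concrete; the analysis-of-variance step itself is not shown.

```python
def latin_square(n):
    """Cyclic n x n Latin square: entry (i, j) = (i + j) mod n, so each
    symbol appears exactly once in every row and every column. That
    balance is what lets analysis of variance separate row, column, and
    Z-level (symbol) effects when hunting for machine-geometry errors."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

sq = latin_square(4)
assert all(sorted(row) == list(range(4)) for row in sq)        # rows balanced
assert all(sorted(col) == list(range(4)) for col in zip(*sq))  # columns balanced
for row in sq:
    print(row)
```

In the gage master, row and column index the X-Y position of each object and the symbol indexes its Z height, so every height level is measured once in every row and column.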
Energy dependence of multiplicity fluctuations in heavy ion collisions at 20A to 158A GeV
Alt, C.; Blume, C.; Bramm, R.; Dinkelaker, P.; Flierl, D.; Kliemant, M.; Kniege, S.; Lungwitz, B.; Mitrovski, M.; Renfordt, R.; Schuster, T.; Stock, R.; Strabel, C.; Stroebele, H.; Utvic, M.; Wetzler, A.; Anticic, T.; Kadija, K.; Nicolic, V.; Susa, T.
2008-09-15
Multiplicity fluctuations of positively, negatively, and all charged hadrons in the forward hemisphere were studied in central Pb+Pb collisions at 20A, 30A, 40A, 80A, and 158A GeV. The multiplicity distributions and their scaled variances {omega} are presented as functions of collision energy as well as of rapidity and transverse momentum. The distributions have bell-like shapes, and their scaled variances are in the range from 0.8 to 1.2 without any significant structure in their energy dependence. No indication of the critical point is observed in the fluctuations. The string-hadronic ultrarelativistic quantum molecular dynamics (UrQMD) model significantly overpredicts the mean, but it approximately reproduces the scaled variance of the multiplicity distributions. The predictions of the statistical hadron-resonance gas model obtained within the grand-canonical and canonical ensembles disagree with the measured scaled variances. The narrower-than-Poissonian multiplicity fluctuations measured in numerous cases may be explained by the impact of conservation laws on fluctuations in relativistic systems.
Donor-vacancy pairs in irradiated n-Ge: A searching look at the problem
Emtsev, Vadim; Oganesyan, Gagik
2014-02-21
The present situation concerning the identification of vacancy-donor pairs in irradiated n-Ge is discussed. The challenging points are the energy states of these defects deduced from DLTS spectra. Hall effect data seem to be at variance with some important conclusions drawn from DLTS measurements. Critical points of the radiation-produced defect modeling in n-Ge are highlighted.
Sisterson, DL
2010-04-08
The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 – (ACTUAL/OPSMAX)], which accounts for unplanned downtime.
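The VARIANCE metric defined above is a one-liner; the hours in the example are hypothetical.

```python
def variance(actual_hours, opsmax_hours):
    """DOE time-based operating metric: VARIANCE = 1 - (ACTUAL / OPSMAX),
    i.e., the fraction of the uptime goal lost to unplanned downtime."""
    return 1.0 - actual_hours / opsmax_hours

# A facility with an 8000 h uptime goal that actually ran 7600 h:
print(f"{variance(7600, 8000):.1%}")  # 5.0% unplanned downtime
```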
Effect of wettability on scale-up of multiphase flow from core-scale to reservoir fine-grid-scale
Chang, Y.C.; Mani, V.; Mohanty, K.K.
1997-08-01
Typical field simulation grid-blocks are internally heterogeneous. The objective of this work is to study how the wettability of the rock affects its scale-up of multiphase flow properties from core-scale to fine-grid reservoir simulation scale ({approximately} 10{prime} x 10{prime} x 5{prime}). Reservoir models need another level of upscaling to coarse-grid simulation scale, which is not addressed here. Heterogeneity is modeled here as a correlated random field parameterized in terms of its variance and two-point variogram. Variogram models of both finite (spherical) and infinite (fractal) correlation length are included as special cases. Local core-scale porosity, permeability, capillary pressure function, relative permeability functions, and initial water saturation are assumed to be correlated. Water injection is simulated and effective flow properties and flow equations are calculated. For strongly water-wet media, capillarity has a stabilizing/homogenizing effect on multiphase flow. For small variance in permeability, and for small correlation length, effective relative permeability can be described by capillary equilibrium models. At higher variance and moderate correlation length, the average flow can be described by a dynamic relative permeability. As the oil wettability increases, the capillary stabilizing effect decreases and the deviation from this average flow increases. For fractal fields with large variance in permeability, effective relative permeability is not adequate in describing the flow.
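The heterogeneity model described (a correlated random field parameterized by its variance and a two-point variogram) can be sketched in one dimension; the exponential variogram, parameter values, and function name below are illustrative.

```python
import numpy as np

def correlated_log_perm_field(n_cells, variance, corr_length, dx=1.0, seed=0):
    """1-D correlated Gaussian field for log-permeability, parameterized by
    its variance and a finite correlation length (exponential variogram),
    generated via Cholesky factorization of the covariance matrix.
    Illustrative sketch only; 3-D fields and fractal (infinite correlation
    length) variograms need other machinery."""
    x = np.arange(n_cells) * dx
    lags = np.abs(x[:, None] - x[None, :])
    cov = variance * np.exp(-lags / corr_length)
    chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n_cells))  # jitter for stability
    rng = np.random.default_rng(seed)
    return chol @ rng.standard_normal(n_cells)

field = correlated_log_perm_field(200, variance=0.5, corr_length=10.0)
print(field.shape, round(field.var(), 2))
```

Porosity, capillary pressure, and relative permeability curves would then be co-simulated as correlated transforms of this field, matching the local correlation assumption in the abstract.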
Latin-square three-dimensional gage master
Jones, L.
1981-05-12
A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
Feingold, G.; Frisch, A.S.; Cotton, W.R.
1999-09-01
Cloud radar, microwave radiometer, and lidar remote sensing data acquired during the Atlantic Stratocumulus Transition Experiment (ASTEX) are analyzed to address the relationship between (1) drop number concentration and cloud turbulence as represented by vertical velocity and vertical velocity variance and (2) drizzle formation and cloud turbulence. Six cases, each of about 12 hours duration, are examined; three of these cases are characteristic of nondrizzling boundary layers and three of drizzling boundary layers. In all cases, microphysical retrievals are only performed when drizzle is negligible (radar reflectivity{lt}{minus}17dBZ). It is shown that for the cases examined, there is, in general, no correlation between drop concentration and cloud base updraft strength, although for two of the nondrizzling cases exhibiting more classical stratocumulus features, these two parameters are correlated. On drizzling days, drop concentration and cloud-base vertical velocity were either not correlated or negatively correlated. There is a significant positive correlation between drop concentration and mean in-cloud vertical velocity variance for both nondrizzling boundary layers (correlation coefficient r=0.45) and boundary layers that have experienced drizzle (r=0.38). In general, there is a high correlation (r{gt}0.5) between radar reflectivity and in-cloud vertical velocity variance, although one of the boundary layers that experienced drizzle exhibited a negative correlation between these parameters. However, in the subcloud region, all boundary layers that experienced drizzle exhibit a negative correlation between radar reflectivity and vertical velocity variance. {copyright} 1999 American Geophysical Union
Entropic uncertainty relations in multidimensional position and momentum spaces
Huang Yichen
2011-05-15
Commutator-based entropic uncertainty relations in multidimensional position and momentum spaces are derived, twofold generalizing previous entropic uncertainty relations for one-mode states. They provide optimal lower bounds and imply the multidimensional variance-based uncertainty principle. The article concludes with an open conjecture.
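For orientation, the best-known relation of this type and its link to the variance-based principle can be sketched as follows (a textbook statement of the Białynicki-Birula–Mycielski bound with ħ = 1, not the specific commutator-based bounds derived in this paper):

```latex
% Entropic uncertainty relation for an n-dimensional pure state (hbar = 1);
% H(.) is the differential entropy of the position / momentum densities:
H(\mathbf{x}) + H(\mathbf{p}) \;\ge\; n\,(1 + \ln \pi).
% Since a Gaussian maximizes entropy at fixed variance,
% H(\mathbf{x}) \le \tfrac{n}{2}\ln\!\left(2\pi e\,\sigma_x^2\right)
% (isotropic case, and likewise for momentum), so the entropic bound
% implies the variance-based uncertainty principle:
\sigma_x\,\sigma_p \;\ge\; \tfrac{1}{2}.
```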
Stochastic Inversion of Seismic Amplitude-Versus-Angle Data (Stinv-AVA)
Energy Science and Technology Software Center (OSTI)
2008-04-03
The software was developed to invert seismic amplitude-versus-angle (AVA) data using a Bayesian framework. The posterior probability distribution function is sampled by effective Markov chain Monte Carlo (MCMC) methods. The software provides not only estimates of the unknown variables but also a variety of information about their uncertainty, such as the mean, mode, median, variance, and even the probability density of each unknown.
A low dose simulation tool for CT systems with energy integrating detectors
Zabic, Stanislav; Morton, Thomas; Brown, Kevin M.; Wang Qiu
2013-03-15
Purpose: This paper introduces a new strategy for simulating low-dose computed tomography (CT) scans using real scans of a higher dose as an input. The tool is verified against simulations and real scans and compared to other approaches found in the literature. Methods: The conditional variance identity is used to properly account for the variance of the input high-dose data, and a formula is derived for generating a new Poisson noise realization which has the same mean and variance as the true low-dose data. The authors also derive a formula for the inclusion of real samples of detector noise, properly scaled according to the level of the simulated x-ray signals. Results: The proposed method is shown to match real scans in a number of experiments. Noise standard deviation measurements in simulated low-dose reconstructions of a 35 cm water phantom match real scans in a range from 500 mA down to 10 mA with less than 5% error. Mean and variance of individual detector channels are shown to match closely across the detector array. Finally, the visual appearance of noise and streak artifacts is shown to match that of real scans even under conditions of photon starvation (with tube currents as low as 10 and 80 mA). Additionally, the proposed method is shown to be more accurate than previous approaches (1) in achieving the correct mean and variance in reconstructed images from pure-Poisson noise simulations (with no detector noise) under photon-starvation conditions, and (2) in simulating the correct noise level and detector noise artifacts in real low-dose scans. Conclusions: The proposed method can accurately simulate low-dose CT data starting from high-dose data, including effects from photon starvation and detector noise. This is potentially a very useful tool in helping to determine minimum dose requirements for a wide range of clinical protocols and advanced reconstruction algorithms.
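The paper's derivation is not reproduced in the abstract, but the statistical target can be illustrated with the simpler textbook device of binomial thinning (a hypothetical minimal sketch; the actual method instead injects additional noise, so that real high-dose scans, whose variance is already nonzero, can serve as input):

```python
import random

def thin_counts(high_dose_counts, dose_ratio, rng):
    """Binomial thinning: if N ~ Poisson(lam), keeping each photon
    independently with probability p yields a Poisson(p * lam) count --
    exactly the statistics of a scan acquired at p times the dose."""
    assert 0.0 < dose_ratio <= 1.0
    return [sum(1 for _ in range(n) if rng.random() < dose_ratio)
            for n in high_dose_counts]

rng = random.Random(42)
# roughly Poisson(100) high-dose photon counts for 500 detector readings
high = [sum(1 for _ in range(2000) if rng.random() < 0.05) for _ in range(500)]
low = thin_counts(high, 0.2, rng)  # simulate a scan at 20% of the dose

mean_low = sum(low) / len(low)
var_low = sum((x - mean_low) ** 2 for x in low) / (len(low) - 1)
# for Poisson statistics, mean and variance should both be close to 20
```

This reproduces the "same mean and variance" requirement only in the idealized case where the input counts are pure Poisson; handling detector noise and the variance already present in real scans is exactly what the paper's conditional-variance treatment adds.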
Dimensionality and noise in energy selective x-ray imaging
Alvarez, Robert E.
2013-11-15
Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10³. With the soft tissue component, it is 2.7 × 10⁴. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases
Decision support for operations and maintenance (DSOM) system
Jarrell, Donald B.; Meador, Richard J.; Sisk, Daniel R.; Hatley, Darrel D.; Brown, Daryl R.; Keibel, Gary R.; Gowri, Krishnan; Reyes-Spindola, Jorge F.; Adams, Kevin J.; Yates, Kenneth R.; Eschbach, Elizabeth J.; Stratton, Rex C.
2006-03-21
A method for minimizing the life cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of identifying a set of optimal operating conditions for the process, identifying and measuring parameters necessary to characterize the actual operating condition of the process, validating data generated by measuring those parameters, characterizing the actual condition of the process, identifying an optimal condition corresponding to the actual condition, comparing said optimal condition with the actual condition and identifying variances between the two, and drawing from a set of pre-defined algorithms created using best engineering practices, an explanation of at least one likely source and at least one recommended remedial action for selected variances, and providing said explanation as an output to at least one user.
Resonant activation in a colored multiplicative thermal noise driven closed system
Ray, Somrita; Bag, Bidhan Chandra; Mondal, Debasish
2014-05-28
In this paper, we have demonstrated that resonant activation (RA) is possible even in a thermodynamically closed system where the particle experiences a random force and a spatio-temporal frictional coefficient from the thermal bath. For this stochastic process, we have observed a hallmark of the RA phenomenon in terms of a turnover behavior of the barrier-crossing rate as a function of noise correlation time at a fixed noise variance. The variance can be fixed either by changing the temperature or the damping strength as a function of noise correlation time. Another observation is that the barrier-crossing rate passes through a maximum with increasing coupling strength of the multiplicative noise. If the damping strength is appreciably large, the maximum may disappear. Finally, we compare simulation results with the analytical calculation and find good agreement between the analytical and numerical results.
Clock Agreement Among Parallel Supercomputer Nodes
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Jones, Terry R.; Koenig, Gregory A.
2014-04-30
This dataset presents measurements that quantify the clock synchronization time-agreement characteristics among several high performance computers including the current world's most powerful machine for open science, the U.S. Department of Energy's Titan machine sited at Oak Ridge National Laboratory. These ultra-fast machines derive much of their computational capability from extreme node counts (over 18000 nodes in the case of the Titan machine). Time-agreement is commonly utilized by parallel programming applications and tools, distributed programming application and tools, and system software. Our time-agreement measurements detail the degree of time variance between nodes and how that variance changes over time. The dataset includes empirical measurements and the accompanying spreadsheets.
Sparse matrix transform for fast projection to reduced dimension
Theiler, James P; Cao, Guangzhi; Bouman, Charles A
2010-01-01
We investigate three algorithms that use the sparse matrix transform (SMT) to produce variance-maximizing linear projections to a lower-dimensional space. The SMT expresses the projection as a sequence of Givens rotations and this enables computationally efficient implementation of the projection operator. The baseline algorithm uses the SMT to directly approximate the optimal solution that is given by principal components analysis (PCA). A variant of the baseline begins with a standard SMT solution, but prunes the sequence of Givens rotations to only include those that contribute to the variance maximization. Finally, a simpler and faster third algorithm is introduced; this also estimates the projection operator with a sequence of Givens rotations, but in this case, the rotations are chosen to optimize a criterion that more directly expresses the dimension reduction criterion.
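A minimal sketch of why a Givens-rotation factorization makes the projection cheap (function names and the rotation sign convention are illustrative assumptions, not the SMT algorithm itself):

```python
import math

def apply_givens(x, rotations):
    """Apply a sequence of Givens rotations to a vector.

    Each rotation (i, j, theta) mixes only coordinates i and j, so a
    sequence of k rotations costs O(k) arithmetic regardless of the
    ambient dimension -- the source of the SMT's speed advantage over
    multiplying by a dense PCA projection matrix."""
    y = list(x)
    for i, j, theta in rotations:
        c, s = math.cos(theta), math.sin(theta)
        y[i], y[j] = c * y[i] - s * y[j], s * y[i] + c * y[j]
    return y

# rotating the first coordinate pair by 90 degrees sends e0 to e1
rotated = apply_givens([1.0, 0.0, 0.0], [(0, 1, math.pi / 2)])
```

The three algorithms in the paper differ in how the rotation sequence is chosen (approximating PCA, pruning non-contributing rotations, or optimizing the reduction criterion directly), but all share this cheap application step.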
A simple method to estimate interwell autocorrelation
Pizarro, J.O.S.; Lake, L.W.
1997-08-01
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
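For reference, the spherical and exponential models mentioned above have simple closed forms. The sketch below uses a common parameterization with sill and range; the exponential follows the "practical range" convention, and the truncated fractal model is omitted because its form varies between authors:

```python
import math

def spherical(h, sill, a):
    """Spherical semivariogram: rises to the sill exactly at the range a."""
    if h >= a:
        return sill
    r = h / a
    return sill * (1.5 * r - 0.5 * r ** 3)

def exponential(h, sill, a):
    """Exponential semivariogram ('practical range' convention):
    reaches about 95% of the sill at lag h = a."""
    return sill * (1.0 - math.exp(-3.0 * h / a))
```

In the paper's workflow, the vertical range (estimable from closely spaced well-log data) and the areal-to-vertical variance ratio are the inputs from which the interwell (lateral) range is read off the estimation charts.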
Self-Calibrated Cluster Counts as a Probe of Primordial Non-Gaussianity
Oguri, Masamune; /KIPAC, Menlo Park
2009-05-07
We show that the ability to probe primordial non-Gaussianity with cluster counts is drastically improved by adding the excess variance of counts, which contains information on the clustering. The conflicting dependences of changing the mass threshold and including primordial non-Gaussianity on the mass function and biasing indicate that self-calibrated cluster counts can break the degeneracy between primordial non-Gaussianity and the observable-mass relation. Based on the Fisher matrix analysis, we show that the count variance improves constraints on f_NL by more than an order of magnitude. It exhibits little degeneracy with the dark energy equation of state. We forecast that upcoming Hyper Suprime-Cam cluster surveys and the Dark Energy Survey will constrain primordial non-Gaussianity at the level σ(f_NL) ≈ 8, which is competitive with forecasted constraints from next-generation cosmic microwave background experiments.
Church, J; Slaughter, D; Norman, E; Asztalos, S; Biltoft, P
2007-02-07
Error rates in a cargo screening system such as the Nuclear Car Wash [1-7] depend on the standard deviation of the background radiation count rate. Because the Nuclear Car Wash is an active interrogation technique, the radiation signal for fissile material must be detected above a background count rate consisting of cosmic, ambient, and neutron-activated radiations. It was suggested previously [1,6] that this background variance could be substantial, and the corresponding negative repercussions for the sensitivity of the system were shown. Therefore, to assure the most accurate estimation of the variation, experiments have been performed to quantify components of the actual variance in the background count rate, including variations in generator power, irradiation time, and container contents. The background variance is determined by these experiments to be a factor of 2 smaller than the values assumed in previous analyses, resulting in substantially improved projections of system performance for the Nuclear Car Wash.
Automatic Estimation of the Radiological Inventory for the Dismantling of Nuclear Facilities
Garcia-Bermejo, R.; Felipe, A.; Gutierrez, S.; Salas, E.; Martin, N.
2008-01-15
The estimation of the radiological inventory of nuclear facilities to be dismantled is a process that combines information on the physical inventory of the whole plant with radiological surveys. The radiological inventory of all components and civil structures of the plant can be estimated with mathematical models using a statistical approach. A computer application has been developed to obtain the radiological inventory automatically. Results: A computer application has been developed that estimates the radiological inventory from the radiological measurements or the characterization program. It includes the statistical functions needed to estimate central tendency and variability, e.g., mean, median, variance, confidence intervals, coefficients of variation, etc. This application is a necessary tool for estimating the radiological inventory of a nuclear facility and a powerful aid to decision making in future sampling surveys.
Williams, Paul T.
2002-12-21
Context and Objective: Vigorous exercise, alcohol and weight loss are all known to increase HDL-cholesterol; however, it is not known whether these interventions raise low HDL as effectively as has been demonstrated for normal HDL. Design: Physician-supplied medical data from 7,288 male and 2,359 female runners were divided into five strata according to their self-reported usual running distance, reported alcohol intake, body mass index (BMI) or waist circumference. Within each stratum, the 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentiles for HDL-cholesterol were then determined. Bootstrap resampling of least-squares regression was applied to determine the cross-sectional relationships between these factors and each percentile of the HDL-cholesterol distribution. Results: In both sexes, the rise in HDL-cholesterol per unit of vigorous exercise or alcohol intake was at least twice as great at the 95th percentile as at the 5th percentile of the HDL-distribution. There was also a significant graded increase in the slopes relating exercise (km run) and alcohol intake to HDL between the 5th and the 95th percentile. Men's HDL-cholesterol decreased in association with fatness (BMI and waist circumference) more sharply at the 95th than at the 5th percentile of the HDL-distribution. Conclusions: Although exercise, alcohol and adiposity were all related to HDL-cholesterol, the elevation in HDL per km run or ounce of alcohol consumed, and reduction in HDL per kg of body weight (men only), was least when HDL was low and greatest when HDL was high. These cross-sectional relationships support the hypothesis that men and women who have low HDL-cholesterol will be less responsive to exercise and alcohol (and weight loss in men) as compared to those who have high HDL-cholesterol.
Drumheller, Douglas Schaeffer; Kuszmaul, Scott S.
2003-08-01
Broadcasting messages through the earth is a daunting task. Indeed, broadcasting a normal telephone conversation through the earth by wireless means is impossible with today's technology. Most of us don't care, but some do. Industries that drill into the earth need wireless communication to broadcast navigation parameters. This allows them to steer their drill bits. They also need information about the natural formation that they are drilling. Measurements of parameters such as pressure, temperature, and gamma radiation levels can tell them if they have found a valuable resource such as a geothermal reservoir or a stratum bearing natural gas. Wireless communication methods are available to the drilling industry. Information is broadcast via either pressure waves in the drilling fluid or electromagnetic waves in the earth and well tubing. Data transmission can only travel one way at rates around a few baud. Given that normal Internet telephone modems operate near 20,000 baud, these data rates are truly very slow. Moreover, communication is often interrupted or permanently blocked by drilling conditions or natural formation properties. Here we describe a tool that communicates with stress waves traveling through the steel drill pipe and production tubing in the well. It's based on an old idea called Acoustic Telemetry. But what we present here is more than an idea. This tool exists, it has drilled several wells, and it works. Currently, it's the first and only acoustic telemetry tool that can withstand the drilling environment. It broadcasts one way over a limited range at much faster rates than existing methods, but we also know how to build a system that can communicate both up and down wells of indefinite length.
Stevens, L.; Hooks, D; Migliori, A
2010-01-01
Elastic tensors for organic molecular crystals vary significantly among different measurements. To understand better the origin of these differences, Brillouin scattering and resonant ultrasound spectroscopy measurements were made on the same specimen for single crystal pentaerythritol tetranitrate. The results differ significantly despite mitigation of sample-dependent contributions to errors. The frequency dependence and vibrational modes probed for both measurements are discussed in relation to the observed tensor variance.
Microsoft PowerPoint - 1-Mike Grauwelman's presentation - 5.20.14_comp
Office of Legacy Management (LM)
Redevelopment, USDOE Mound Facility, Miamisburg, Ohio. Mike Grauwelman, Mound Development Corporation (retired). Agenda: Mound background; DOE history; community organizational structure; redevelopment plan; economic development. Site information: founded 1948 (Manhattan Project); 306 acres; 1.3M sq. ft. of buildings; topographic variance 180'; buried valley aquifer. Workforce information: peak employment 2,400; 25% PhDs. Recent missions: environmental remediation.
Effect of noise on the standard mapping
Karney, C.F.F.; Rechester, A.B.; White, R.B.
1981-03-01
The effect of a small amount of noise on the standard mapping is considered. Whenever the standard mapping possesses accelerator modes (where the action increases approximately linearly with time), the diffusion coefficient contains a term proportional to the reciprocal of the variance of the noise term. At large values of the stochasticity parameter, the accelerator modes exhibit a universal behavior. As a result, the dependence of the diffusion coefficient on the stochasticity parameter also shows some universal behavior.
U.S. Department of Energy (DOE) all webpages (Extended Search)
Hiding in Plain Sight: a Less-Explored Secret of Secondary Organic Aerosols. PI Contact: Shrivastava, M., Pacific Northwest National Laboratory. Area of Research: Aerosol Properties. Working Group(s): Aerosol Life Cycle. Journal Reference: Shrivastava M, C Zhao, RC Easter, Y Qian, A Zelenyuk, JD Fast, Y Liu, Q Zhang, and A Guenther. 2016. "Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach." Journal of Advances in Modeling Earth Systems.
A Comparison of Image Quality Evaluation Techniques for Transmission X-Ray Microscopy
Bolgert, Peter J; /Marquette U. /SLAC
2012-08-31
Beamline 6-2c at Stanford Synchrotron Radiation Lightsource (SSRL) is capable of Transmission X-ray Microscopy (TXM) at 30 nm resolution. Raw images from the microscope must undergo extensive image processing before publication. Since typical data sets contain thousands of images, it is necessary to automate the image processing workflow as much as possible, particularly for the aligning and averaging of similar images. Currently we align images using the 'phase correlation' algorithm, which calculates the relative offset of two images by multiplying them in the frequency domain. For images containing high frequency noise, this algorithm will align noise with noise, resulting in a blurry average. To remedy this, we multiply the images by a Gaussian function in the frequency domain, so that the algorithm ignores the high frequency noise while properly aligning the features of interest (FOI). The shape of the Gaussian is manually tuned by the user until the resulting average image is sharpest. To automatically optimize this process, it is necessary for the computer to evaluate the quality of the average image by quantifying its sharpness. In our research we explored two image sharpness metrics, the variance method and the frequency threshold method. The variance method uses the variance of the image as an indicator of sharpness, while the frequency threshold method sums up the power in a specific frequency band. These metrics were tested on a variety of test images, containing both real and artificial noise. To apply these sharpness metrics, we designed and built a MATLAB graphical user interface (GUI) called 'Blur Master.' We found that it is possible for blurry images to have a large variance if they contain high amounts of noise. On the other hand, we found the frequency method to be quite reliable, although it is necessary to manually choose suitable limits for the frequency band. Further research must be performed to design an algorithm which
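The variance method can be sketched in a few lines on a one-dimensional signal (an illustrative toy, not the Blur Master implementation); it also hints at the failure mode reported in the abstract, since additive noise raises the variance just as a sharp edge does:

```python
from statistics import pvariance

def variance_sharpness(pixels):
    """Variance metric: sharp edges spread the intensity histogram,
    raising the variance -- but so does additive noise, which is the
    failure mode noted in the abstract."""
    return pvariance(pixels)

def box_blur(pixels, k):
    """Moving-average blur with window 2k + 1 (edges clamped)."""
    n = len(pixels)
    return [sum(pixels[max(0, i - k):min(n, i + k + 1)])
            / (min(n, i + k + 1) - max(0, i - k))
            for i in range(n)]

edge = [0.0] * 8 + [1.0] * 8   # an ideal step edge
blurred = box_blur(edge, 2)
# blurring lowers the metric: variance_sharpness(blurred) < variance_sharpness(edge)
```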
Display of Hi-Res Data | Princeton Plasma Physics Lab
U.S. Department of Energy (DOE) all webpages (Extended Search)
Display of Hi-Res Data This invention enables plotting a very large number of data points relative to the number of display pixels without losing significant information about the data. A user operating the system can set the threshold for highlighting locations on the plot that exceed a specific variance or range. Highlighted areas can be dynamically explored at the full resolution of the data. No.: M-874 Inventor(s): Eliot A Feibush
ARM - Publications: Science Team Meeting Documents
U.S. Department of Energy (DOE) all webpages (Extended Search)
Photon Pathlength Distributions Inferred from the RSS at the ARM SGP Site Min, Q. and Harrison, L.C., ASRC, SUNY at Albany Eleventh Atmospheric Radiation Measurement (ARM) Science Team Meeting A retrieval method of photon pathlength distribution using Rotating Shadowband Spectroradiometer (RSS) measurements in the oxygen A-band and water vapor band is presented. Given the resolution of the new generation RSS, we are able to retrieve both mean and variance of photon pathlength distributions.
Measuring skewness of red blood cell deformability distribution by laser ektacytometry
Nikitin, S Yu; Priezzhev, A V; Lugovtsov, A E; Ustinov, V D
2014-08-31
An algorithm is proposed for measuring the parameters of red blood cell deformability distribution based on laser diffractometry of red blood cells in shear flow (ektacytometry). The algorithm is tested on specially prepared samples of rat blood. In these experiments we succeeded in measuring the mean deformability, deformability variance and skewness of red blood cell deformability distribution with errors of 10%, 15% and 35%, respectively. (laser biophotonics)
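The three quantities reported above are, in their plainest form, the first three central-moment statistics; a sketch using generic moment estimators (not the paper's ektacytometry-specific algorithm):

```python
def sample_moments(xs):
    """Mean, unbiased variance, and moment-based skewness
    g1 = m3 / m2**1.5, where m2 and m3 are the second and third
    central moments (plain estimators; the paper's diffractometry
    retrieval may differ in detail)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return mean, m2 * n / (n - 1), m3 / m2 ** 1.5

mean, var, skew = sample_moments([1.0, 2.0, 3.0, 4.0, 5.0])
# symmetric sample: skewness is exactly 0
```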
MEASURING X-RAY VARIABILITY IN FAINT/SPARSELY SAMPLED ACTIVE GALACTIC NUCLEI
Allevato, V.; Paolillo, M.; Papadakis, I.; Pinto, C.
2013-07-01
We study the statistical properties of the normalized excess variance of a variability process characterized by a 'red-noise' power spectral density (PSD), as in the case of active galactic nuclei (AGNs). We perform Monte Carlo simulations of light curves, assuming both a continuous and a sparse sampling pattern and various signal-to-noise ratios (S/Ns). We show that the normalized excess variance is a biased estimate of the variance even in the case of continuously sampled light curves. The bias depends on the PSD slope and on the sampling pattern, but not on the S/N. We provide a simple formula to account for the bias, which yields unbiased estimates with an accuracy better than 15%. We show that normalized excess variance estimates based on single light curves (especially for sparse sampling and S/N < 3) are highly uncertain (even if corrected for bias), and we propose instead the use of an 'ensemble estimate', based on multiple light curves of the same object or on light curves of many objects. These estimates have symmetric distributions, known errors, and can also be corrected for biases. We use our results to estimate the ability to measure the intrinsic source variability in current data, and show that they could also be useful in planning the observing strategy of future surveys, such as those provided by X-ray missions studying distant and/or faint AGN populations, and, more generally, in estimating the variability amplitude of sources that will result from future surveys such as Pan-STARRS and LSST.
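The estimator under discussion is conventionally defined as the light-curve sample variance minus the mean squared measurement error, normalized by the squared mean flux; a minimal sketch of that definition (the paper's bias-correction formula is not reproduced in the abstract, so it is not sketched here):

```python
def normalized_excess_variance(flux, err):
    """sigma_NXS^2 = (S^2 - <err^2>) / mean^2: the sample variance of
    the light curve minus the mean squared measurement error,
    normalized by the squared mean flux (the conventional estimator
    in the AGN variability literature)."""
    n = len(flux)
    mean = sum(flux) / n
    s2 = sum((f - mean) ** 2 for f in flux) / (n - 1)
    mean_err2 = sum(e * e for e in err) / n
    return (s2 - mean_err2) / mean ** 2

nxs = normalized_excess_variance([10.0, 12.0, 8.0, 10.0],
                                 [1.0, 1.0, 1.0, 1.0])
```

For a red-noise PSD this quantity scatters strongly between realizations, which is why the abstract recommends ensemble estimates over single-light-curve values.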
Statistical assessment of Monte Carlo distributional tallies
Kiedrowski, Brian C; Solomon, Clell J
2010-12-09
Four tests are developed to assess the statistical reliability of distributional or mesh tallies. To this end, the relative variance density function is developed and its moments are studied using simplified, non-transport models. The statistical tests are performed upon the results of MCNP calculations of three different transport test problems and appear to show that the tests are appropriate indicators of global statistical quality.
U.S. Department of Energy (DOE) all webpages (Extended Search)
Sensitivity of Satellite-Retrieved Cloud Properties to the Effective Variance of Cloud Droplet Size Distribution. R.F. Arduini, Science Applications International Corporation, Hampton, Virginia; P. Minnis and W.L. Smith, Jr., National Aeronautics and Space Administration, Langley Research Center, Hampton, Virginia; J.K. Ayers and M.M. Khaiyer, Analytical Services and Materials, Inc.; P. Heck, Cooperative Institute for Mesoscale Meteorological Studies/University of Wisconsin-Madison, Madison, Wisconsin
2015 Annual Merit Review, Vehicle Technologies Office
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
numerical evaluation of each project within each subprogram area and a comparison to the other projects within the subprogram area necessitate a statistical comparison of the projects utilizing specific criteria. For each project, a representative set of experts in the project's field was selected to evaluate the project based upon the criteria indicated in the Introduction. Each evaluation criterion's sample mean and variance were calculated utilizing the following formulas, respectively:
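The record breaks off before the formulas themselves; presumably they are the usual sample estimators:

```latex
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,
\qquad
s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2
```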
Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem
Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; Chowdhary, Kenny; Debusschere, Bert; Swiler, Laura P.; Eldred, Michael S.
2015-01-01
In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
Audit Report: IG-0729 | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Audit Report: IG-0729, May 25, 2006. W76 Life Extension Project. The National Nuclear Security Administration (NNSA) is at risk of not achieving the first production unit for the W76 refurbishment within its intended scope, schedule, and cost parameters. In particular, we found that NNSA (1) reduced the scope of activities planned to support final design and production decisions; (2) delayed tests and production-related milestones; and (3) could not reconcile cost variances to supporting
Part II - The effect of data on waste behaviour: The South African waste information system
Godfrey, Linda; Scott, Dianne; Difford, Mark; Trois, Cristina
2012-11-15
Highlights: • This empirical study explores the relationship between data and resultant waste knowledge. • The study shows that 'Experience, Data and Theory' account for 54.1% of the variance in knowledge. • A strategic framework for Municipalities emerged from this study. Abstract: Combining the process of learning and the theory of planned behaviour into a new theoretical framework provides an opportunity to explore the impact of data on waste behaviour, and consequently on waste management, in South Africa. Fitting the data to the theoretical framework shows that there are only three constructs which have a significant effect on behaviour, viz. experience, knowledge, and perceived behavioural control (PBC). Knowledge has a significant influence on all three of the antecedents to behavioural intention (attitude, subjective norm and PBC). However, it is PBC, and not intention, that has the greatest influence on waste behaviour. While respondents may have an intention to act, this intention does not always manifest as actual waste behaviour, suggesting limited volitional control. The theoretical framework accounts for 53.7% of the variance in behaviour, suggesting significant external influences on behaviour not accounted for in the framework. While the theoretical model remains the same, respondents in public and private organisations represent two statistically significant sub-groups in the data set. The theoretical framework accounts for 47.8% of the variance in behaviour of respondents in public waste organisations and 57.6% of the variance in behaviour of respondents in private organisations. The results suggest that respondents in public and private waste organisations are subject to different structural forces that shape knowledge, intention, and resultant waste behaviour.
Cooperation: The Third Pillar of Evolution
U.S. Department of Energy (DOE) all webpages (Extended Search)
Cooperation: The Third Pillar of Evolution Dr. Martin Nowak Harvard University May 04, 2016 4:00 p.m. - Wilson Hall, One West Cooperation implies that one individual pays a cost for another to receive a benefit. Cooperation can be at variance with natural selection. Why should you help competitors? Yet cooperation is abundant in nature and an important component of all great evolutionary innovations. Cooperation can be seen as the master architect of evolution, as the third fundamental principle
Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem
Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; Chowdhary, Kenny; Debusschere, Bert; Swiler, Laura P.; Eldred, Michael S.
2015-01-01
In this study, a series of algorithms is proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory-epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
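The nested aleatory-epistemic sampling described above can be sketched numerically. The toy model, parameter ranges, and sample sizes below are invented for illustration and are not taken from the NASA challenge problem; the point is only the structure: an outer loop over epistemic draws, an inner loop propagating aleatory uncertainty, and a variance-based (first-order Sobol-style) index ranking the epistemic contribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model standing in for the challenge-problem code:
# one epistemic parameter theta (poorly known interval) and one
# aleatory variable x (irreducibly random).
def model(theta, x):
    return theta + 0.5 * x

n_outer, n_inner = 200, 500
theta = rng.uniform(0.5, 1.5, n_outer)      # epistemic samples (outer loop)

# Nested sampling: for each epistemic draw, propagate aleatory uncertainty.
inner_means = np.empty(n_outer)
all_y = []
for i, t in enumerate(theta):
    x = rng.normal(0.0, 1.0, n_inner)       # aleatory samples (inner loop)
    y = model(t, x)
    inner_means[i] = y.mean()
    all_y.append(y)
all_y = np.concatenate(all_y)

# Variance-based ranking: fraction of total output variance attributable
# to the epistemic parameter (a first-order Sobol-style index).
s_epistemic = inner_means.var() / all_y.var()
```

For this linear toy model the epistemic share is analytically (1/12)/(1/12 + 1/4) = 0.25, so the estimate lands near that value.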
Parametric Behaviors of CLUBB in Simulations of Low Clouds in the Community Atmosphere Model (CAM)
Guo, Zhun; Wang, Minghuai; Qian, Yun; Larson, Vincent E.; Ghan, Steven J.; Ovchinnikov, Mikhail; Bogenschutz, Peter; Gettelman, A.; Zhou, Tianjun
2015-07-03
In this study, we investigate the sensitivity of simulated low clouds to 14 selected tunable parameters of Cloud Layers Unified By Binormals (CLUBB), a higher-order closure (HOC) scheme, and 4 parameters of the Zhang-McFarlane (ZM) deep convection scheme in the Community Atmosphere Model version 5 (CAM5). A quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space, and a generalized linear model is applied to study the responses of simulated cloud fields to the tunable parameters. Our results show that the variance in simulated low-cloud properties (cloud fraction and liquid water path) can be explained by the selected tunable parameters in two ways: through macrophysics itself and through its interaction with microphysics. First, the parameters related to the dynamic and thermodynamic turbulent structure and the double-Gaussian closure are found to be the most influential parameters for simulating low clouds. The spatial distributions of the parameter contributions show clear cloud-regime dependence. Second, because of the coupling between cloud macrophysics and cloud microphysics, the coefficient of the dissipation term in the total water variance equation is influential. This parameter affects the variance of in-cloud cloud water, which further influences microphysical process rates, such as autoconversion, and eventually low-cloud fraction. This study improves understanding of HOC behavior associated with parameter uncertainties and provides valuable insights into the interaction of macrophysics and microphysics.
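The sample-then-regress workflow in this abstract can be sketched in a few lines. Everything below is a stand-in: a simple Latin hypercube design replaces the QMC sampler, an invented four-parameter response replaces the simulated cloud fields, and ordinary least squares on main effects plays the role of the generalized linear model.

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, d, rng):
    """Simple Latin hypercube design on [0, 1]^d (a stand-in for the
    quasi-Monte Carlo sampling used in the study)."""
    perm = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (perm + rng.random((n, d))) / n

# Hypothetical response standing in for a simulated low-cloud property,
# driven mainly by the first of four "tunable parameters":
n, d = 256, 4
X = latin_hypercube(n, d, rng)
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1] \
    + rng.normal(0.0, 0.1, n)

# Regression step (ordinary least squares on main effects), then a
# per-parameter share of the response variance:
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - ((y - A @ coef) ** 2).sum() / ((y - y.mean()) ** 2).sum()
share = coef[1:] ** 2 * X.var(axis=0) / y.var()
```

The `share` vector identifies the dominant parameter, mirroring how the study ranks the influence of each tunable parameter on the simulated cloud fields.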
Lifestyle Factors in U.S. Residential Electricity Consumption
Sanquist, Thomas F.; Orr, Heather M.; Shui, Bin; Bittner, Alvah C.
2012-03-30
A multivariate statistical approach to lifestyle analysis of residential electricity consumption is described and illustrated. Factor analysis of selected variables from the 2005 U.S. Residential Energy Consumption Survey (RECS) identified five lifestyle factors reflecting social and behavioral choices associated with air conditioning, laundry usage, personal computer usage, climate zone of residence, and TV use. These factors were also estimated for 2001 RECS data. Multiple regression analysis using the lifestyle factors yields solutions accounting for approximately 40% of the variance in electricity consumption for both years. Adding the associated household and market characteristics of income, local electricity price, and access to natural gas increases the variance accounted for to approximately 54%. Income contributed only ~1% unique variance to the 2005 and 2001 models, indicating that lifestyle factors reflecting social and behavioral choices account for consumption differences better than income does. This was not surprising given the 4-fold range of energy use at differing income levels. Geographic segmentation of factor scores is illustrated and shows distinct clusters of consumption and lifestyle factors, particularly in suburban locations. The implications for tailored policy and planning interventions are discussed in relation to lifestyle issues.
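The factor-then-regress pipeline can be sketched as follows. The synthetic survey matrix, loadings, and consumption model are invented, and PCA stands in for the rotated factor analysis used in the study; the point is how "variance accounted for" (R²) is obtained by regressing consumption on estimated factor scores.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two latent "lifestyle" factors drive six observed usage variables
# (names and loadings are invented stand-ins for the RECS variables).
n = 300
f = rng.normal(size=(n, 2))                            # latent factor scores
load = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.0],
                 [0.0, 1.0], [0.1, 0.9], [0.0, 0.8]])  # 6 variables x 2 factors
X = f @ load.T + 0.3 * rng.normal(size=(n, 6))         # observed survey matrix

# Consumption responds to both factors plus unexplained variation:
kwh = 2.0 * f[:, 0] + 1.0 * f[:, 1] + rng.normal(0.0, 1.0, n)

# Factor extraction via PCA on standardized variables (a simple stand-in
# for the rotated factor analysis in the study):
Z = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ vt[:2].T                                  # two estimated factors

# Regress consumption on the factor scores; R^2 = variance accounted for.
A = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(A, kwh, rcond=None)
r2 = 1 - ((kwh - A @ coef) ** 2).sum() / ((kwh - kwh.mean()) ** 2).sum()
```

With the invented coefficients the maximum attainable R² is 5/6, so the recovered value sits near 0.8, analogous to the ~40-54% figures the study reports for its own variable sets.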
Entropy vs. energy waveform processing: A comparison based on the heat equation
Hughes, Michael S.; McCarthy, John E.; Bruillard, Paul J.; Marsh, Jon N.; Wickline, Samuel A.
2015-05-25
Virtually all modern imaging devices collect electromagnetic or acoustic waves and use the energy carried by these waves to determine pixel values to create what is basically an “energy” picture. However, waves also carry “information”, as quantified by some form of entropy, and this may also be used to produce an “information” image. Numerous published studies have demonstrated the advantages of entropy, or “information imaging”, over conventional methods. The most sensitive information measure appears to be the joint entropy of the collected wave and a reference signal. The sensitivity of repeated experimental observations of a slowly changing quantity may be defined as the mean variation (i.e., observed change) divided by the mean variance (i.e., noise). Wiener integration permits computation of the required mean values and variances as solutions to the heat equation, permitting estimation of their relative magnitudes. There always exists a reference such that joint entropy has larger variation and smaller variance than the corresponding quantities for signal energy, matching the observations of several studies. Moreover, a general prescription for finding an “optimal” reference for the joint entropy emerges, which also has been validated in several studies.
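The sensitivity definition above (mean variation divided by mean variance) can be computed directly for both statistics. The signal, reference, noise level, and histogram-based joint entropy below are all invented stand-ins; the sketch only shows how the two sensitivities are formed, and does not by itself reproduce the paper's result that an optimal reference makes joint entropy the more sensitive measure.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 512)
ref = np.sin(2 * np.pi * 5 * t)                 # assumed reference signal

def energy(w):
    return float(np.sum(w ** 2))

def joint_entropy(w, r, bins=16):
    """Shannon entropy of the joint histogram of wave and reference
    (a discrete stand-in for the continuous joint entropy)."""
    h, _, _ = np.histogram2d(w, r, bins=bins)
    p = h[h > 0] / h.sum()
    return float(-np.sum(p * np.log(p)))

def sensitivity(stat, amp_a, amp_b, noise=0.05, n=200):
    """Mean variation between two signal states divided by mean variance."""
    a = np.array([stat(amp_a * ref + noise * rng.normal(size=t.size))
                  for _ in range(n)])
    b = np.array([stat(amp_b * ref + noise * rng.normal(size=t.size))
                  for _ in range(n)])
    return abs(a.mean() - b.mean()) / (0.5 * (a.var() + b.var()))

s_energy = sensitivity(energy, 1.00, 1.02)
s_entropy = sensitivity(lambda w: joint_entropy(w, ref), 1.00, 1.02)
```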
The annual cycle in the tropical Pacific Ocean based on assimilated ocean data from 1983 to 1992
Smith, T.M.; Chelliah, M.
1995-06-01
An analysis of the tropical Pacific Ocean from January 1983 to December 1992 is used to describe the annual cycle, with the main focus on subsurface temperature variations. Ocean-current variations are also considered. Monthly mean fields are generated by assimilation of surface and subsurface temperature observations from ships and buoys. Comparisons with observations show that the analysis reasonably describes large-scale ocean thermal variations. Ocean currents are not assimilated and do not compare as well with observations. However, the ocean-current variations in the analysis are qualitatively similar to the known variations given by others. The authors use harmonic analysis to separate the mean annual cycle and estimate its contribution to total variance. The analysis shows that in most regions the annual cycle of subsurface thermal variations is larger than surface variations and that these variations are associated with changes in the depth of the thermocline. The annual cycle accounts for most of the total surface variance poleward of about 10° latitude but accounts for much less surface and subsurface total variance near the equator. Large subsurface annual cycles occur near 10°N, associated with shifts of the intertropical convergence zone, and along the equator, associated with the annual cycle of equatorial wind stress. The hemispherically asymmetric depths of the 20°C isotherms indicate that the large Southern Hemisphere warm pool, which extends to near the equator, may play an important role in thermal variations on the equator. 51 refs., 18 figs., 1 tab.
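Separating the mean annual cycle by harmonic analysis and estimating its contribution to total variance, as done above, reduces to a small least-squares fit. The monthly series below (amplitude, phase, noise level) is invented for one hypothetical grid point, not taken from the assimilation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 10-year monthly temperature series at one grid point:
months = np.arange(120)
series = 26.0 + 2.0 * np.cos(2 * np.pi * months / 12 - 0.5) \
         + rng.normal(0.0, 0.8, months.size)

# Least-squares fit of the annual and semiannual harmonics:
w = 2 * np.pi * months / 12
A = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w),
                     np.cos(2 * w), np.sin(2 * w)])
coef, *_ = np.linalg.lstsq(A, series, rcond=None)
cycle = A @ coef                     # reconstructed mean annual cycle

# Fraction of total variance carried by the annual cycle:
frac = cycle.var() / series.var()
```

Mapping `frac` over a grid is what distinguishes the off-equatorial regions (annual cycle dominates the variance) from the near-equatorial ones (it does not).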
Teleportation of squeezing: Optimization using non-Gaussian resources
Dell'Anno, Fabio; De Siena, Silvio; Illuminati, Fabrizio; Adesso, Gerardo
2010-12-15
We study the continuous-variable quantum teleportation of states, statistical moments of observables, and scale parameters such as squeezing. We investigate the problem both in ideal and imperfect Vaidman-Braunstein-Kimble protocol setups. We show how the teleportation fidelity is maximized and the difference between output and input variances is minimized by using suitably optimized entangled resources. Specifically, we consider the teleportation of coherent squeezed states, exploiting squeezed Bell states as entangled resources. This class of non-Gaussian states, introduced by Illuminati and co-workers [F. Dell'Anno, S. De Siena, L. Albano, and F. Illuminati, Phys. Rev. A 76, 022301 (2007); F. Dell'Anno, S. De Siena, and F. Illuminati, ibid. 81, 012333 (2010)], includes photon-added and photon-subtracted squeezed states as special cases. At variance with the case of entangled Gaussian resources, the use of entangled non-Gaussian squeezed Bell resources allows one to choose different optimization procedures that lead to inequivalent results. Performing two independent optimization procedures, one can either maximize the state teleportation fidelity, or minimize the difference between input and output quadrature variances. The two different procedures are compared depending on the degrees of displacement and squeezing of the input states and on the working conditions in ideal and nonideal setups.
Ensemble Bayesian model averaging using Markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
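The EM training step for the BMA weights and variances can be sketched for the Gaussian case. The synthetic data below are invented, and the formulation is deliberately simplified relative to Raftery et al. (one shared predictive variance, no bias correction); it is a sketch of the E-step/M-step structure, not the full published algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic training data standing in for verifying observations and a
# 3-member multi-model forecast ensemble (all numbers invented).
n, K = 500, 3
obs = rng.normal(20.0, 5.0, n)
fcst = obs[:, None] + rng.normal(0.0, [1.0, 2.0, 3.0], (n, K))  # member errors

# EM for the BMA weights and a common predictive variance:
w = np.full(K, 1.0 / K)
s2 = 1.0
for _ in range(200):
    # E-step: responsibility of each ensemble member for each observation
    dens = np.exp(-0.5 * (obs[:, None] - fcst) ** 2 / s2) \
           / np.sqrt(2 * np.pi * s2)
    z = w * dens
    z /= z.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and the shared variance
    w = z.mean(axis=0)
    s2 = np.sum(z * (obs[:, None] - fcst) ** 2) / n
```

The member with the smallest forecast errors ends up with the largest weight; an MCMC sampler such as DREAM would instead return a posterior distribution over `w` and `s2` rather than a point estimate.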
Consequences of proposed changes to Clean Water Act thermal discharge requirements
Veil, J.A.; Moses, D.O.
1995-12-31
This paper summarizes three studies that examined the economic and environmental impact on the power industry of (1) limiting thermal mixing zones to 1,000 feet, and (2) eliminating the Clean Water Act (CWA) §316(a) variance. Both of these proposed changes were included in S. 1081, a 1991 Senate bill to reauthorize the CWA. The bill would not have grandfathered plants already using the variance or mixing zones larger than 1,000 feet. Each of the two changes to the existing thermal discharge requirements was independently evaluated. Power companies were asked what they would do if these two changes were imposed. Most plants affected by the proposed changes would retrofit cooling towers, and some would retrofit diffusers. Assuming that all affected plants would proportionally follow the same options as the surveyed plants, the estimated capital cost of retrofitting cooling towers or diffusers at all affected plants ranges from $21.4 to $24.4 billion. Both cooling towers and diffusers exert a 1%-5.8% energy penalty on a plant's output. Consequently, the power companies must generate additional power if they install those technologies. The estimated cost of the additional power ranges from $10 to $18.4 billion over 20 years. Generation of the extra power would emit over 8 million tons per year of additional carbon dioxide. Operation of the new cooling towers would cause more than 1.5 million gallons per minute of additional evaporation. Neither the restricted mixing-zone size nor the elimination of the §316(a) variance was adopted into law. More recent proposed changes to the Clean Water Act have not included either of these provisions, but in the future, other Congresses might attempt to reintroduce these types of changes.
Qian, Yun; Yan, Huiping; Hou, Zhangshuan; Johannesson, G.; Klein, Stephen A.; Lucas, Donald; Neale, Richard; Rasch, Philip J.; Swiler, Laura P.; Tannahill, John; Wang, Hailong; Wang, Minghuai; Zhao, Chun
2015-04-10
We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that for the 22 parameters perturbed in the cloud ensemble, the six having the greatest influences on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approaches or concomitant parameters selected. Generally, the generalized linear model (GLM) is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows a significant seasonal variability in the mid-latitude continental regions, but is very small in tropical continental regions.
Demonstration of Data Center Energy Use Prediction Software
Coles, Henry; Greenberg, Steve; Tschudi, William
2013-09-30
This report documents a demonstration of a software modeling tool from Romonet that was used to predict energy use and forecast energy use improvements in an operating data center. The demonstration was conducted in a conventional data center with a 15,500 square foot raised floor and an IT equipment load of 332 kilowatts. It was cooled using traditional computer room air handlers and a compressor-based chilled water system. The data center also utilized an uninterruptible power supply system for power conditioning and backup. Electrical energy monitoring was available at a number of locations within the data center. The software modeling tool predicted the energy use of the data center's cooling and electrical power distribution systems, as well as electrical energy use and heat removal for the site. The actual energy used by the computer equipment was recorded from power distribution devices located at each computer equipment row. The model simulated the total energy use in the data center and supporting infrastructure and predicted energy use at energy-consuming points throughout the power distribution system. The initial predicted power levels were compared to actual meter readings and were found to be within approximately 10 percent at a particular measurement point, resulting in a site overall variance of 4.7 percent. Some variances were investigated, and more accurate information was entered into the model. In this case the overall variance was reduced to approximately 1.2 percent. The model was then used to predict energy use for various modification opportunities to the data center in successive iterations. These included increasing the IT equipment load, adding computer room air handler fan speed controls, and adding a water-side economizer. The demonstration showed that the software can be used to simulate data center energy use and create a model that is useful for investigating energy efficiency design changes.
ENTROPY VS. ENERGY WAVEFORM PROCESSING: A COMPARISON ON THE HEAT EQUATION
Hughes, Michael S.; McCarthy, John; Bruillard, Paul J.; Marsh, Jon N.; Wickline, Samuel A.
2015-05-25
Virtually all modern imaging devices function by collecting either electromagnetic or acoustic backscattered waves and using the energy carried by these waves to determine pixel values that build up what is basically an “energy” picture. However, waves also carry “information” that may also be used to compute the pixel values in an image. We have employed several measures of information, all of which are based on different forms of entropy. Numerous published studies have demonstrated the advantages of entropy, or “information imaging”, over conventional methods for materials characterization and medical imaging. Similar results also have been obtained with microwaves. The most sensitive information measure appears to be the joint entropy of the backscattered wave and a reference signal. A typical study comprises repeated acquisition of backscattered waves from a specimen that is changing slowly with acquisition time or location. The sensitivity of repeated experimental observations of such a slowly changing quantity may be defined as the mean variation (i.e., observed change) divided by the mean variance (i.e., observed noise). We compute the sensitivity for joint entropy and signal energy measurements assuming that the noise is Gaussian and using Wiener integration to compute the required mean values and variances. These can be written as solutions to the heat equation, which permits estimation of their magnitudes. There always exists a reference such that joint entropy has larger variation and smaller variance than the corresponding quantities for signal energy, matching the observations of several studies. Moreover, a general prescription for finding an “optimal” reference for the joint entropy emerges, which also has been validated in several studies.
The nature and energetics of AGN-driven perturbations in the hot gas in the Perseus Cluster
Zhuravleva, I.; Churazov, E.; Arevalo, P.; Schekochihin, A. A.; Forman, W. R.; Allen, S. W.; Simionescu, A.; Sunyaev, R.; Vikhlinin, A.; Werner, N.
2016-03-07
Cores of relaxed galaxy clusters are often disturbed by AGN. Their Chandra observations revealed a wealth of structures induced by shocks, subsonic gas motions, bubbles of relativistic plasma, etc. In this paper, we determine the nature and energy content of gas fluctuations in the Perseus core by probing statistical properties of emissivity fluctuations imprinted in the soft- and hard-band X-ray images. About 80 per cent of the total variance of perturbations on ~8–70 kpc scales in the core has an isobaric nature, i.e., is consistent with subsonic displacements of the gas in pressure equilibrium with the ambient medium. The observed variance translates to a ratio of energy in perturbations to thermal energy of ~13 per cent. In the region dominated by weak ‘ripples’, about half of the total variance is associated with isobaric perturbations on scales of a few tens of kpc. If these isobaric perturbations are induced by buoyantly rising bubbles, then these results suggest that most of the AGN-injected energy should first go into bubbles rather than into shocks. Using simulations of a shock propagating through the Perseus atmosphere, we found that models reproducing the observed features of a central shock have more than 50 per cent of the AGN-injected energy associated with the bubble enthalpy, and only about 20 per cent is carried away with the shock. Such an energy partition is consistent with the AGN-feedback model, mediated by bubbles of relativistic plasma, and supports the importance of turbulence in the cooling–heating balance.
Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.
2015-12-23
A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called 'alien cache' to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker-node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid and cached locally on the
Nole, Michael; Daigle, Hugh; Mohanty, Kishore; Cook, Ann; Hillman, Jess
2015-12-15
We have developed a 3D methane hydrate reservoir simulator to model marine methane hydrate systems. Our simulator couples highly nonlinear heat and mass transport equations and includes heterogeneous sedimentation, in-situ microbial methanogenesis, the influence of pore size contrast on solubility gradients, and the impact of salt exclusion from the hydrate phase on dissolved methane equilibrium in pore water. Using environmental parameters from Walker Ridge in the Gulf of Mexico, we first simulate hydrate formation in and around a thin, dipping, planar sand stratum surrounded by clay lithology as it is buried to 295 mbsf. We find that with sufficient methane being supplied by organic methanogenesis in the clays, a 200x pore size contrast between clays and sands allows for a strong enough concentration gradient to significantly drop the concentration of methane hydrate in clays immediately surrounding a thin sand layer, a phenomenon that is observed in well log data. Building upon previous work, our simulations account for the increase in sand-clay solubility contrast with depth, from about 1.6% near the top of the sediment column to 8.6% at depth, which leads to a progressive strengthening of the diffusive flux of methane with time. By including an exponentially decaying organic methanogenesis input to the clay lithology with depth, we see a decrease in the aqueous methane supplied to the clays surrounding the sand layer with time, which works to further enhance the contrast in hydrate saturation between the sand and surrounding clays. Significant diffusive methane transport is observed in a clay interval of about 11 m above the sand layer and about 4 m below it, which matches well log observations. The clay-sand pore size contrast alone is not enough to completely eliminate hydrate (as observed in logs), because the diffusive flux of aqueous methane due to a contrast in pore size occurs slower than the rate at which methane is supplied via organic methanogenesis
Statistical techniques for characterizing residual waste in single-shell and double-shell tanks
Jensen, L., Fluor Daniel Hanford
1997-02-13
A primary objective of the Hanford Tank Initiative (HTI) project is to develop methods to estimate the inventory of residual waste in single-shell and double-shell tanks. A second objective is to develop methods to determine the boundaries of the waste that may be in the waste plume in the vadose zone. This document presents statistical sampling plans that can be used to estimate the inventory of analytes within the residual waste in a tank. Sampling plans for estimating the inventory of analytes within the waste plume in the vadose zone are also presented. Inventory estimates can be used to classify the residual waste with respect to chemical and radiological hazards. Based on these estimates, it will be possible to make decisions regarding the final disposition of the residual waste. Four sampling plans for the residual waste in a tank are presented. The first plan assumes that the residual waste can be divided into disjoint strata on the basis of some physical characteristic, with waste samples obtained from randomly selected locations within each stratum. In the second plan, waste samples are obtained from randomly selected locations within the waste as a whole. The third and fourth plans are similar to the first two, except that composite samples are formed from multiple samples. Common to the four plans is that, in the laboratory, replicate analytical measurements are obtained from homogenized waste samples. The statistical sampling plans for the residual waste are similar to those developed for the tank waste characterization program. In that program, the statistical sampling plans required multiple core samples of waste and replicate analytical measurements from homogenized core segments. A statistical analysis of the analytical data, obtained from use of the statistical sampling plans developed for the characterization program or for the HTI project, provides estimates of mean analyte concentrations and confidence intervals
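The first (stratified) sampling plan leads to a standard volume-weighted estimator. The strata names, volumes, concentrations, and sample counts below are invented for illustration; the sketch shows how a stratified mean, its standard error, a confidence interval, and an inventory estimate are assembled from per-stratum samples.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two hypothetical strata with synthetic analyte concentrations
# (volumes, units, and values are all invented):
strata = {
    "sludge":   {"volume": 60.0, "conc": rng.normal(12.0, 2.0, 8)},
    "saltcake": {"volume": 40.0, "conc": rng.normal(5.0, 1.0, 8)},
}

# Volume-weighted stratified estimate of mean concentration and its SE:
total_v = sum(s["volume"] for s in strata.values())
mean = 0.0
se2 = 0.0
for s in strata.values():
    wgt = s["volume"] / total_v
    c = s["conc"]
    mean += wgt * c.mean()
    se2 += wgt ** 2 * c.var(ddof=1) / c.size

# Approximate 95% confidence interval (normal quantile; with this few
# samples per stratum a t-based interval would be somewhat wider):
half = 1.96 * np.sqrt(se2)
lo, hi = mean - half, mean + half
inventory = mean * total_v      # inventory = mean concentration x volume
```

The fully random second plan is the same calculation with a single stratum; composite-sample plans change only where the averaging happens (in the tank rather than in the estimator).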
Seasonal Case Studies Reveal Significant Variance in Large-Scale Forcing Data. Submitter: Xie, S., Lawrence Livermore National Laboratory. Area of Research: General Circulation and Single Column Models/Parameterizations. Working Group(s): Cloud Modeling. Journal Reference: Xie, S., R.T. Cederwall, M. Zhang, and J.J. Yio, Comparison of SCM and CSRM forcing data derived from the ECMWF model and from objective analysis at the ARM SGP site, J. Geophys. Res., 108(D16), 4499, doi:10.1029/2003JD003541, 2003.
Quality Work Plan Checklist and Resources - Section 1
Quality Work Plan Checklist and Resources - Section 1. State staff can use this list of questions and related resources to help implement the WAP Quality Work Plan. Each question includes a reference to where in 15-4 the guidance behind the question is found, and to where in the 2015 Application Package you will describe the answers to DOE. Checklist columns: App Section; 15-4 Section; Question; Yes; No; Resources. First entry (V.5.1, 1): Are you on track to submit current field guides and standards, including any necessary variance
COST MANAGEMENT REPORT (DOE F 1332.9; form approved 11-84; OMB No. 1910-1400). Flattened form whose fields include: title; reporting period; identification number; participant name and address; cost plan date; start date; completion date; reporting element; accrued costs (actual); estimated accrued costs; and variance (total contract; reporting period; cumulative to date; balance; fiscal years to completion).
Uncertainty Analysis for RELAP5-3D
Aaron J. Pawel; Dr. George L. Mesina
2011-08-01
In its current state, RELAP5-3D is a 'best-estimate' code; it is one of our most reliable programs for modeling what occurs within reactor systems in transients from given initial conditions. This code, however, remains an estimator. A statistical analysis has been performed that begins to lay the foundation for a full uncertainty analysis. Varying the inputs over assumed probability density functions produced corresponding variations in the output parameters. Statistical tools such as means, variances, and tolerance intervals then give a picture of how uncertain the results are, given the uncertainty of the inputs.
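The input-sampling scheme described above can be sketched with a surrogate in place of the code. The algebraic `peak_temp` function, the input distributions, and their ranges are all invented; in the real analysis each sample would be a full RELAP5-3D run.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-in for a RELAP5-3D output parameter: a simple
# algebraic surrogate for peak temperature as a function of three
# uncertain inputs (functional form and ranges are invented).
def peak_temp(power, flow, gap_k):
    return 600.0 + 40.0 * power / flow + 25.0 / gap_k

# Assumed probability density functions for the inputs:
n = 1000
power = rng.normal(1.00, 0.02, n)     # normalized core power
flow = rng.normal(1.00, 0.03, n)      # normalized coolant flow
gap_k = rng.uniform(0.8, 1.2, n)      # gap conductance multiplier

out = peak_temp(power, flow, gap_k)

# Means, variances, and a simple one-sided tolerance-style bound
# (95th-percentile order statistic; a rigorous 95/95 treatment would
# choose the order statistic and sample size via Wilks' formula):
out_mean = out.mean()
out_var = out.var(ddof=1)
upper95 = np.sort(out)[int(0.95 * n)]
```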
Gamouras, A.; Britton, M.; Khairy, M. M.; Mathew, R.; Hall, K. C.; Dalacu, D.; Poole, P.; Poitras, D.; Williams, R. L.
2013-12-16
We demonstrate the selective optical excitation and detection of subsets of quantum dots (QDs) within an InAs/InP ensemble using a SiO{sub 2}/Ta{sub 2}O{sub 5}-based optical microcavity. The low variance of the exciton transition energy and dipole moment tied to the narrow linewidth of the microcavity mode is expected to facilitate effective qubit encoding and manipulation in a quantum dot ensemble with ease of quantum state readout relative to qubits encoded in single quantum dots.
EGR Distribution in Engine Cylinders Using Advanced Virtual Simulation
Fan, Xuetong
2000-08-20
Exhaust Gas Recirculation (EGR) is a well-known technology for reduction of NOx in diesel engines. With the demand for extremely low engine out NOx emissions, it is important to have a consistently balanced EGR flow to individual engine cylinders. Otherwise, the variation in the cylinders' NOx contribution to the overall engine emissions will produce unacceptable variability. This presentation will demonstrate the effective use of advanced virtual simulation in the development of a balanced EGR distribution in engine cylinders. An initial design is analyzed reflecting the variance in the EGR distribution, quantitatively and visually. Iterative virtual lab tests result in an optimized system.
Arbanas, Goran; Dunn, Michael E; Larson, Nancy M; Leal, Luiz C; Williams, Mark L
2012-01-01
Convergence properties of Legendre expansion of a Doppler-broadened double-differential elastic neutron scattering cross section of {sup 238}U near the 6.67 eV resonance at temperature 10{sup 3} K are studied. A variance of Legendre expansion from a reference Monte Carlo computation is used as a measure of convergence and is computed for as many as 15 terms in the Legendre expansion. When the outgoing energy equals the incoming energy, it is found that the Legendre expansion converges very slowly. Therefore, a supplementary method of computing many higher-order terms is suggested and employed for this special case.
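The convergence measure used above (a variance of the truncated Legendre expansion relative to a reference) can be illustrated for a generic smooth function on [-1, 1]; this is a sketch with a stand-in integrand, not the {sup 238}U cross-section calculation itself:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_truncation_variance(f, orders, n_quad=200):
    """Mean squared deviation of a truncated Legendre expansion of f(mu)
    on [-1, 1] from the reference function, for each expansion order."""
    # Gauss-Legendre nodes/weights for projections and the error integral
    x, w = legendre.leggauss(n_quad)
    fx = f(x)
    out = []
    for max_order in orders:
        # projection coefficients c_l = (2l+1)/2 * integral of f * P_l
        coeffs = [(2 * l + 1) / 2.0 * np.sum(w * fx * legendre.Legendre.basis(l)(x))
                  for l in range(max_order + 1)]
        approx = sum(c * legendre.Legendre.basis(l)(x) for l, c in enumerate(coeffs))
        # variance of the truncated expansion about the reference
        out.append(0.5 * np.sum(w * (fx - approx) ** 2))
    return out
```

For a smooth function such as exp(mu) this variance drops rapidly with order; the slow convergence reported in the abstract arises because the elastic-scattering kernel is sharply peaked at outgoing energy equal to incoming energy.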
Schilling, Oleg; Mueschke, Nicholas J.
2010-10-18
Data from a 1152 x 760 x 1280 direct numerical simulation (DNS) of a transitional Rayleigh-Taylor mixing layer modeled after a small Atwood number water channel experiment are used to comprehensively investigate the structure of mean and turbulent transport and mixing. The simulation had physical parameters and initial conditions approximating those in the experiment. The budgets of the mean vertical momentum, heavy-fluid mass fraction, turbulent kinetic energy, turbulent kinetic energy dissipation rate, heavy-fluid mass fraction variance, and heavy-fluid mass fraction variance dissipation rate equations are constructed using Reynolds averaging applied to the DNS data. The relative importance of mean and turbulent production, turbulent dissipation and destruction, and turbulent transport is investigated as a function of Reynolds number and across the mixing layer to provide insight into the flow dynamics not presently available from experiments. The analysis of the budgets supports the assumption for small Atwood number, Rayleigh-Taylor-driven flows that the principal transport mechanisms are buoyancy production, turbulent production, turbulent dissipation, and turbulent diffusion (shear and mean field production are negligible). As the Reynolds number increases, the turbulent production in the turbulent kinetic energy dissipation rate equation becomes the dominant production term, while the buoyancy production plateaus. Distinctions between momentum and scalar transport are also noted, where the turbulent kinetic energy and its dissipation rate both grow in time and are peaked near the center plane of the mixing layer, while the heavy-fluid mass fraction variance and its dissipation rate initially grow and then begin to decrease as mixing progresses and reduces density fluctuations. All terms in the transport equations generally grow or decay, with no qualitative change in their profile, except for the pressure flux contribution to the total turbulent kinetic
Initial Evidence for Self-Organized Criticality in Electric Power System Blackouts
Carreras, B.A.; Dobson, I.; Newman, D.E.; Poole, A.B.
2000-01-04
We examine correlations in a time series of electric power system blackout sizes using scaled window variance analysis and R/S statistics. The data shows some evidence of long time correlations and has Hurst exponent near 0.7. Large blackouts tend to correlate with further large blackouts after a long time interval. Similar effects are also observed in many other complex systems exhibiting self-organized criticality. We discuss this initial evidence and possible explanations for self-organized criticality in power systems blackouts. Self-organized criticality, if fully confirmed in power systems, would suggest new approaches to understanding and possibly controlling blackouts.
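The long-range-correlation measure quoted above (a Hurst exponent near 0.7) can be estimated from a time series with rescaled-range (R/S) analysis; the sketch below is a generic estimator with illustrative window sizes, not the authors' own analysis code:

```python
import numpy as np

def hurst_rs(series, window_sizes):
    """Estimate the Hurst exponent via rescaled-range (R/S) analysis:
    slope of log(R/S) versus log(window size)."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        # non-overlapping windows of length n
        for start in range(0, series.size - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())     # cumulative deviation from the mean
            r = dev.max() - dev.min()         # range of the cumulative deviation
            s = w.std(ddof=0)                 # window standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope
```

Uncorrelated noise yields an exponent near 0.5; persistent, long-memory series (such as the blackout sizes discussed here) push the estimate above 0.5.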
Modelling of volatility in monetary transmission mechanism
Dobešová, Anna; Klepáč, Václav; Kolman, Pavel; Bednářová, Petra
2015-03-10
The aim of this paper is to compare different approaches to the modeling of volatility in the monetary transmission mechanism. For this purpose we build a time-varying parameter VAR (TVP-VAR) model with stochastic volatility and a VAR-DCC-GARCH model with conditional variance. Data from three European countries are included in the analysis: the Czech Republic, Germany, and Slovakia. Results show that the VAR-DCC-GARCH system captures higher volatility of the observed variables, but the main trends and detected breaks are generally identical in both approaches.
Earned Value Management System (EVMS) Corrective Action Standard Operating Procedure
This EVMS Corrective Action Standard Operating Procedure (ECASOP) serves as PM's primary reference for the development of Corrective Action Requests (CARs) and Continuous Improvement Opportunities (CIOs), as well as the assessment of contractors' procedures and implementation associated with Variance Analysis Reports (VARs) and Corrective Action Plans (CAPs) in accordance with the EIA-748 (current version) EVMS standard. The SOP is based on regulatory guidance and on standardized processes grounded in a common understanding of EVMS industry and government best practices for use by the Department of Energy (DOE). All information contained herein provides detailed processes to implement the requirements in DOE O 413.3 (current version).
Transport Test Problems for Hybrid Methods Development
Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.; McDonald, Benjamin S.
2011-12-28
This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.
Implementation of a TMP Advanced Quality Control System at a Newsprint Manufacturing Plant
Sebastien Kidd
2006-02-14
This project provided for the implementation of an advanced, model-predictive multivariable controller that works with the mill's existing distributed control system. The method provides real-time, online predictive models and modifies control actions to maximize quality and minimize energy costs. Using software sensors, the system can predict difficult-to-measure quality and process variables and make the necessary process control decisions to accurately control pulp quality while minimizing electrical usage. This method of control has allowed Augusta Newsprint Company to optimize the operation of its Thermo Mechanical Pulp mill for lower energy consumption and lower pulp quality variance.
Light quasiparticles dominate electronic transport in molecular crystal field-effect transistors
Li, Z. Q.; Podzorov, V.; Sai, N.; Martin, Michael C.; Gershenson, M. E.; Di Ventra, M.; Basov, D. N.
2007-03-01
We report on an infrared spectroscopy study of mobile holes in the accumulation layer of organic field-effect transistors based on rubrene single crystals. Our data indicate that both transport and infrared properties of these transistors at room temperature are governed by light quasiparticles in molecular orbital bands with effective masses m{sup *} comparable to the free electron mass. Furthermore, the m{sup *} values inferred from our experiments are in agreement with those determined from band structure calculations. These findings reveal no evidence for prominent polaronic effects, which is at variance with the common beliefs of polaron formation in molecular solids.
Methods for recalibration of mass spectrometry data
Tolmachev, Aleksey V.; Smith, Richard D.
2009-03-03
Disclosed are methods for recalibrating mass spectrometry data that provide improvement in both mass accuracy and precision by adjusting for experimental variance in parameters that have a substantial impact on mass measurement accuracy. Optimal coefficients are determined using correlated pairs of mass values compiled by matching sets of measured and putative mass values that minimize overall effective mass error and mass error spread. Coefficients are subsequently used to correct mass values for peaks detected in the measured dataset, providing recalibration thereof. Sub-ppm mass measurement accuracy has been demonstrated on a complex fungal proteome after recalibration, providing improved confidence for peptide identifications.
In-Situ Real Time Monitoring and Control of Mold Making and Filling Processes: Final Report
Mohamed Abdelrahman; Kenneth Currie
2010-12-22
This project presents a model for addressing several objectives envisioned by the metal casting industries through the integration of research and educational components. It provides an innovative approach to introduce technologies for real-time characterization of sand molds and lost foam patterns and for monitoring of the mold filling process. The technology developed will enable better control over the casting process. It is expected to reduce scrap and variance in casting quality. A strong educational component is integrated into the research plan to increase industry professionals' awareness of the potential benefits of the developed technology and of cross-cutting technologies.
Figure 1. Log-log plot of the variance at a given scale vs. that scale (similar to the power spectrum or a 2nd-order structure function). It shows (scale-invariant) ARM cloud liquid water path data plus two computed radiation fields, IPA ("simple theory") and MC ("better theory"). The MC curve, showing a scale break at the "radiative smoothing scale" of roughly 200-300 m for marine Sc, agrees with Landsat observations. The IPA curve depends entirely on the vertical liquid
Safety criteria for organic watch list tanks at the Hanford Site
Meacham, J.E., Westinghouse Hanford
1996-08-01
This document reviews the hazards associated with the storage of organic complexant salts in Hanford Site high-level waste single-shell tanks (SSTs). The results of this analysis were used to categorize tank wastes as safe, conditionally safe, or unsafe. Sufficient data were available to categorize 67 tanks; 63 tanks were categorized as safe, and four tanks were categorized as conditionally safe. No tanks were categorized as unsafe. The remaining 82 SSTs lack sufficient data to be categorized. Historic tank data and an analysis of variance model were used to prioritize the remaining tanks for characterization.
Oxygen-induced immediate onset of the antiferromagnetic stacking in thin Cr films on Fe(001)
Berti, Giulia Brambilla, Alberto; Calloni, Alberto; Bussetti, Gianlorenzo; Finazzi, Marco; Duò, Lamberto; Ciccacci, Franco
2015-04-20
We investigated the magnetic coupling of ultra-thin Cr films grown at 600 K on a Fe(001)-p(1 × 1)O substrate by means of spin-polarized photoemission spectroscopy. Our findings show that the expected antiferromagnetic stacking of the magnetization in Cr(001) layers occurs right from the first atomic layer at the Cr/Fe interface. This is at variance with all previous observations in similar systems, prepared in oxygen-free conditions, which always reported a delayed onset of the magnetic oscillations due to significant chemical alloying at the interface, which is substantially absent in our preparation.
Machine protection system for rotating equipment and method
Lakshminarasimha, Arkalgud N.; Rucigay, Richard J.; Ozgur, Dincer
2003-01-01
A machine protection system and method for rotating equipment introduces new alarming features and makes use of full proximity probe sensor information, including amplitude and phase. Baseline vibration amplitude and phase data is estimated and tracked according to operating modes of the rotating equipment. Baseline vibration and phase data can be determined using a rolling average and variance and stored in a unit circle or tracked using short term average and long term average baselines. The sensed vibration amplitude and phase is compared with the baseline vibration amplitude and phase data. Operation of the rotating equipment can be controlled based on the vibration amplitude and phase.
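The rolling mean-and-variance baseline described above can be maintained online; one common way (a sketch, using Welford's algorithm and a simple k-sigma alarm test, not the patented system's actual implementation) is:

```python
class RollingBaseline:
    """Online mean/variance tracker (Welford's algorithm) for a
    vibration-amplitude baseline, with a k-sigma alarm check."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # sample variance of the baseline data seen so far
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

    def is_alarm(self, x, k=3.0):
        """Flag a sample more than k standard deviations from the baseline."""
        if self.n < 2:
            return False
        return abs(x - self.mean) > k * self.variance ** 0.5
```

The same structure extends naturally to tracking amplitude and phase per operating mode, as the abstract describes, by keeping one tracker per (channel, mode) pair.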
2009 SECTION II: HEAVY ION REACTIONS
U.S. Department of Energy (DOE) all webpages (Extended Search)
isospin dependence of the nuclear equation of state near the critical point M. Huang, A. Bonasera, Z. Chen, R. Wada, K. Hagel, J. B. Natowitz, P. K. Sahu, L. Qin, T. Keutgen, S. Kowalski, T. Materna, J. Wang, M. Barbui, C. Bottosso, and M. R. D. Rodrigues Variance of the isotope yield distribution and symmetry energy Z. Chen, S. Kowalski, M. Huang, R. Wada, T. Keutgen, K. Hagel, J. Wang, L. Qin, A novel approach to Isoscaling: the role of the order parameter m=(N-Z)/A M. Huang, Z. Chen, S.
Analysis of a magnetically trapped atom clock
Kadio, D.; Band, Y. B.
2006-11-15
We consider optimization of a rubidium atom clock that uses magnetically trapped Bose condensed atoms in a highly elongated trap, and determine the optimal conditions for minimum Allan variance of the clock using microwave Ramsey fringe spectroscopy. Elimination of magnetic field shifts and collisional shifts are considered. The effects of spin-dipolar relaxation are addressed in the optimization of the clock. We find that for the interstate interaction strength equal to or larger than the intrastate interaction strengths, a modulational instability results in phase separation and symmetry breaking of the two-component condensate composed of the ground and excited hyperfine clock levels, and this mechanism limits the clock accuracy.
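The Allan variance minimized in this clock optimization is computed from fractional-frequency data; a generic non-overlapped estimator (a sketch, not the paper's own analysis code) looks like:

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapped Allan deviation of fractional-frequency samples y
    for averaging factor m (averaging time tau = m * tau0)."""
    y = np.asarray(y, dtype=float)
    n = y.size // m  # number of complete averaging blocks
    block_means = y[:n * m].reshape(n, m).mean(axis=1)
    # Allan variance: half the mean squared difference of adjacent averages
    avar = 0.5 * np.mean(np.diff(block_means) ** 2)
    return np.sqrt(avar)
```

For white frequency noise the Allan deviation falls as 1/sqrt(tau), which is the tau^(-1/2) behavior quoted for shot-noise-limited references elsewhere in this listing.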
Element Agglomeration Algebraic Multilevel Monte-Carlo Library
Energy Science and Technology Software Center (OSTI)
2015-02-19
ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski and currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.
Hickey, R.
1992-09-01
The objective of this project was to develop and test an early-warning/process control model for anaerobic sludge digestion (AD). The approach was to use batch and semi-continuously fed systems and to assemble system parameter data on a real-time basis. Specific goals were to produce a real-time early-warning control model and computer code, tested for internal and external validity; to determine the minimum rate of data collection for maximum lag time to predict failure with a prescribed accuracy and confidence in the prediction; and to determine and characterize any trends in the real-time data collected in response to particular perturbations to feedstock quality. Trends in the response of the trace gases carbon monoxide and hydrogen in batch experiments were found to depend on toxicant type. For example, these trace gases respond differently for organic substances vs. heavy metals. In both batch and semi-continuously fed experiments, increased organic loading led to proportionate increases in gas production rates as well as increases in CO and H{sub 2} concentrations. An analysis of variance of gas parameters confirmed that CO was the most sensitive indicator variable by virtue of its relatively larger variance compared to the others. The other parameters evaluated included gas production, methane production, and hydrogen, carbon monoxide, carbon dioxide, and methane concentrations. In addition, a relationship was hypothesized between gaseous CO concentration and acetate concentrations in the digester. The data from semi-continuous feed experiments were supportive.
Comfort and HVAC Performance for a New Construction Occupied Test House in Roseville, California
Burdick, A.
2013-10-01
K. Hovnanian® Homes constructed a 2,253-ft2 single-story slab-on-grade ranch house for an occupied test house (new construction) in Roseville, California. One year of monitoring and analysis focused on the effectiveness of the space conditioning system at maintaining acceptable temperature and relative humidity levels in several rooms of the home, as well as room-to-room differences and the actual measured energy consumption by the space conditioning system. In this home, the air handler unit (AHU) and ducts were relocated to inside the thermal boundary. The AHU was relocated from the attic to a mechanical closet, and the ductwork was located inside an insulated and air-sealed bulkhead in the attic. To describe the performance and comfort in the home, the research team selected representative design days and extreme days from the annual data for analysis. To ensure that temperature differences were within reasonable occupant expectations, the team followed Air Conditioning Contractors of America guidance. At the end of the monitoring period, the occupant of the home had no comfort complaints in the home. Any variance between the modeled heating and cooling energy and the actual amounts used can be attributed to the variance in temperatures at the thermostat versus the modeled inputs.
Verification of theoretically computed spectra for a point rotating in a vertical plane
Powell, D.C.; Connell, J.R.; George, R.L.
1985-03-01
A theoretical model is modified and tested that produces the power spectrum of the alongwind component of turbulence as experienced by a point rotating in a vertical plane perpendicular to the mean wind direction. The ability to generate such a power spectrum, independent of measurement, is important in wind turbine design and testing. The radius of the circle of rotation, its height above the ground, and the rate of rotation are typical of those for a MOD-OA wind turbine. Verification of this model is attempted by comparing two sets of variances that correspond to individual harmonic bands of spectra of turbulence in the rotational frame. One set of variances is calculated by integrating the theoretically generated rotational spectra; the other is calculated by integrating rotational spectra from real data analysis. The theoretical spectrum is generated by Fourier transformation of an autocorrelation function taken from von Karman and modified for the rotational frame. The autocorrelation is based on dimensionless parameters, each of which incorporates both atmospheric and wind turbine parameters. The real data time series are formed by sampling around the circle of anemometers of the Vertical Plane Array at the former MOD-OA site at Clayton, New Mexico.
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward-Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
2011-01-03
Bulk Data Mover (BDM) is a high-level data transfer management tool. BDM handles the issue of large variance in file sizes and a large proportion of small files by managing the file transfers with optimized transfer queue and concurrency management algorithms. For example, climate simulation data sets are characterized by large volumes of files with extreme variance in file sizes. The BDM achieves high performance using a variety of techniques, including multi-threaded concurrent transfer connections, data channel caching, load balancing over multiple transfer servers, and storage I/O pre-fetching. Logging information from the BDM is collected and analyzed to study the effectiveness of the transfer management algorithms. The BDM can accept a request composed of multiple files or an entire directory. The request also contains the target site and directory where the replicated files will reside. If a directory is provided at the source, then the BDM will replicate the structure of the source directory at the target site. The BDM is capable of transferring multiple files concurrently as well as using parallel TCP streams. The optimal level of concurrency or parallel streams depends on the bandwidth capacity of the storage systems at both ends of the transfer as well as the achievable bandwidth of the wide-area network. Hardware req.: PC, Mac, multi-platform, and workstation; Software req.: compile/version Java 1.50_x or above; Type of files: source code, executable modules, installation instructions, other, user guide; URL: http://sdm.lbl.gov/bdm/
Zhang, Zhongqiang; Yang, Xiu; Lin, Guang; Karniadakis, George Em
2013-03-01
We consider a piston with a velocity perturbed by Brownian motion moving into a straight tube filled with a perfect gas at rest. The shock generated ahead of the piston can be located by solving the one-dimensional Euler equations driven by white noise using the Stratonovich or Ito formulations. We approximate the Brownian motion with its spectral truncation and subsequently apply stochastic collocation using either sparse grid or the quasi-Monte Carlo (QMC) method. In particular, we first transform the Euler equations with an unsteady stochastic boundary into stochastic Euler equations over a fixed domain with a time-dependent stochastic source term. We then solve the transformed equations by splitting them up into two parts, i.e., a deterministic part and a stochastic part. Numerical results verify the Stratonovich-Euler and Ito-Euler models against stochastic perturbation results, and demonstrate the efficiency of sparse grid and QMC for small and large random piston motions, respectively. The variance of shock location of the piston grows cubically in the case of white noise in contrast to colored noise reported in [1], where the variance of shock location grows quadratically with time for short times and linearly for longer times.
McFerran, John J.; Luiten, Andre N. [School of Physics, University of Western Australia, 35 Stirling Highway, Crawley 6009, W.A. (Australia)
2010-02-15
We demonstrate a means of increasing the signal-to-noise ratio in a Ramsey-Borde interferometer with spatially separated oscillatory fields on a thermal atomic beam. The {sup 1}S{sub 0}{r_reversible}{sup 3}P{sub 1} intercombination line in neutral {sup 40}Ca is used as a frequency discriminator, with an extended cavity diode laser at 423 nm probing the ground state population after a Ramsey-Borde sequence of 657 nm light-field interactions with the atoms. Evaluation of the instability of the Ca frequency reference is carried out by comparison with (i) a hydrogen-maser and (ii) a cryogenic sapphire oscillator. In the latter case the Ca reference exhibits a square-root {Lambda} variance of 9.2x10{sup -14} at 1 s and 2.0x10{sup -14} at 64 s. This is an order-of-magnitude improvement for optical beam frequency references, to our knowledge. The shot noise of the readout fluorescence produces a limiting square-root {Lambda} variance of 7x10{sup -14}/{radical}({tau}), highlighting the potential for improvement. This work demonstrates the feasibility of a portable frequency reference in the optical domain with 10{sup -14} range frequency instability.
Four decades of implicit Monte Carlo
Wollaber, Allan B.
2016-04-25
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
Piehowski, Paul D.; Petyuk, Vladislav A.; Orton, Daniel J.; Xie, Fang; Moore, Ronald J.; Ramirez Restrepo, Manuel; Engel, Anzhelika; Lieberman, Andrew P.; Albin, Roger L.; Camp, David G.; Smith, Richard D.; Myers, Amanda J.
2013-05-03
To design a robust quantitative proteomics study, an understanding of both the inherent heterogeneity of the biological samples being studied as well as the technical variability of the proteomics methods and platform is needed. Additionally, accurately identifying the technical steps associated with the largest variability would provide valuable information for the improvement and design of future processing pipelines. We present an experimental strategy that allows for a detailed examination of the variability of the quantitative LC-MS proteomics measurements. By replicating analyses at different stages of processing, various technical components can be estimated and their individual contribution to technical variability can be dissected. This design can be easily adapted to other quantitative proteomics pipelines. Herein, we applied this methodology to our label-free workflow for the processing of human brain tissue. For this application, the pipeline was divided into four critical components: Tissue dissection and homogenization (extraction), protein denaturation followed by trypsin digestion and SPE clean-up (digestion), short-term run-to-run instrumental response fluctuation (instrumental variance), and long-term drift of the quantitative response of the LC-MS/MS platform over the 2 week period of continuous analysis (instrumental stability). From this analysis, we found the following contributions to variability: extraction (72%) >> instrumental variance (16%) > instrumental stability (8.4%) > digestion (3.1%). Furthermore, the stability of the platform and its suitability for discovery proteomics studies is demonstrated.
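Dissecting total technical variability into per-stage contributions, as above, rests on replication at each processing stage. A minimal, illustrative version for one stage is a balanced one-way variance-component estimate by the method of moments (this is a generic sketch, not the authors' pipeline-specific model):

```python
import numpy as np

def variance_components(groups):
    """Balanced one-way random-effects variance components from replicated
    measurements: returns (between-group variance, within-group variance)
    estimated by the method of moments."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n = groups[0].size  # replicates per group (balanced design assumed)
    grand = np.mean([g.mean() for g in groups])
    # mean square within: average of the per-group sample variances
    ms_within = np.mean([g.var(ddof=1) for g in groups])
    # mean square between the group means
    ms_between = n * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    var_between = max((ms_between - ms_within) / n, 0.0)
    return var_between, ms_within
```

Nesting such estimates across stages (extraction replicates containing digestion replicates, and so on) is what lets contributions like the 72% extraction share be separated.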
Optimal Solar PV Arrays Integration for Distributed Generation
Omitaomu, Olufemi A; Li, Xueping
2012-01-01
Solar photovoltaic (PV) systems hold great potential for distributed energy generation by installing PV panels on rooftops of residential and commercial buildings. Yet challenges arise along with the variability and non-dispatchability of the PV systems that affect the stability of the grid and the economics of the PV system. This paper investigates the integration of PV arrays for distributed generation applications by identifying a combination of buildings that will maximize solar energy output and minimize system variability. Particularly, we propose mean-variance optimization models to choose suitable rooftops for PV integration based on Markowitz mean-variance portfolio selection model. We further introduce quantity and cardinality constraints to result in a mixed integer quadratic programming problem. Case studies based on real data are presented. An efficient frontier is obtained for sample data that allows decision makers to choose a desired solar energy generation level with a comfortable variability tolerance level. Sensitivity analysis is conducted to show the tradeoffs between solar PV energy generation potential and variability.
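The Markowitz selection step can be sketched with a closed-form, equality-constrained solve (short positions allowed; the paper's actual rooftop problem adds nonnegativity, quantity, and cardinality constraints, which require a mixed-integer QP solver). The yields and covariance below are hypothetical illustrations:

```python
import numpy as np

def mean_variance_weights(mu, cov, target):
    """Minimum-variance allocation achieving a target expected output with
    full allocation (weights sum to 1): solve the KKT system of the
    equality-constrained Markowitz problem."""
    n = len(mu)
    ones = np.ones(n)
    kkt = np.zeros((n + 2, n + 2))
    kkt[:n, :n] = 2 * cov          # gradient of the quadratic objective
    kkt[:n, n] = mu                # multiplier column for the return constraint
    kkt[:n, n + 1] = ones          # multiplier column for the budget constraint
    kkt[n, :n] = mu
    kkt[n + 1, :n] = ones
    rhs = np.zeros(n + 2)
    rhs[n], rhs[n + 1] = target, 1.0
    return np.linalg.solve(kkt, rhs)[:n]

# Hypothetical normalized energy yields and covariance for three rooftops
mu = np.array([1.00, 1.20, 0.90])
cov = np.diag([0.04, 0.09, 0.02])
w = mean_variance_weights(mu, cov, target=1.05)
```

Sweeping the target over a range of values traces out the efficient frontier mentioned in the abstract, one solve per point.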
A Database of Herbaceous Vegetation Responses to Elevated Atmospheric CO{sub 2}
Jones, M.H.
1999-11-24
To perform a statistically rigorous meta-analysis of research results on the response by herbaceous vegetation to increased atmospheric CO{sub 2} levels, a multiparameter database of responses was compiled from the published literature. Seventy-eight independent CO{sub 2}-enrichment studies, covering 53 species and 26 response parameters, reported mean response, sample size, and variance of the response (either as standard deviation or standard error). An additional 43 studies, covering 25 species and 6 response parameters, did not report variances. This numeric data package accompanies the Carbon Dioxide Information Analysis Center's (CDIAC's) NDP-072, which provides similar information for woody vegetation. This numeric data package contains a 30-field data set of CO{sub 2}-exposure experiment responses by herbaceous plants (as both a flat ASCII file and a spreadsheet file), files listing the references to the CO{sub 2}-exposure experiments and specific comments relevant to the data in the data sets, and this documentation file (which includes SAS{reg_sign} and Fortran codes to read the ASCII data file). The data files and this documentation are available without charge on a variety of media and via the Internet from CDIAC.
Effects of radiative heat transfer on the turbulence structure in inert and reacting mixing layers
Ghosh, Somnath; Friedrich, Rainer
2015-05-15
We use large-eddy simulation to study the interaction between turbulence and radiative heat transfer in low-speed inert and reacting plane temporal mixing layers. An explicit filtering scheme based on approximate deconvolution is applied to treat the closure problem arising from quadratic nonlinearities of the filtered transport equations. In the reacting case, the working fluid is a mixture of ideal gases where the low-speed stream consists of hydrogen and nitrogen and the high-speed stream consists of oxygen and nitrogen. Both streams are premixed in a way that the free-stream densities are the same and the stoichiometric mixture fraction is 0.3. The filtered heat release term is modelled using equilibrium chemistry. In the inert case, the low-speed stream consists of nitrogen at a temperature of 1000 K and the high-speed stream is pure water vapour at 2000 K, when radiation is turned off. Simulations assuming the gas mixtures as gray gases with artificially increased Planck mean absorption coefficients are performed in which the large-eddy simulation code and the radiation code PRISSMA are fully coupled. In both cases, radiative heat transfer is found to clearly affect fluctuations of thermodynamic variables, Reynolds stresses, and Reynolds stress budget terms like pressure-strain correlations. Source terms in the transport equation for the variance of temperature are used to explain the decrease of this variance in the reacting case and its increase in the inert case.
Time-variability of NO{sub x} emissions from Portland cement kilns
Walters, L.J. Jr.; May, M.S. III [PSM International, Dallas, TX (United States)]; Johnson, D.E. [Kansas State Univ., Manhattan, KS (United States). Dept. of Statistics]; MacMann, R.S. [Penta Engineering, St. Louis, MO (United States)]; Woodward, W.A. [Southern Methodist Univ., Dallas, TX (United States). Dept. of Statistics]
1999-03-01
Due to the presence of autocorrelation between sequentially measured nitrogen oxide (NO{sub x}) concentrations in stack gas from portland cement kilns, the determination of the average emission rates and the uncertainty of the average has been improperly calculated by the industry and regulatory agencies. Documentation of permit compliance, establishment of permit levels, and the development and testing of control techniques for reducing NO{sub x} emissions at specific cement plants requires accurate and precise statistical estimates of parameters such as means, standard deviations, and variances. Usual statistical formulas such as for the variance of the sample mean only apply if sequential measurements of NO{sub x} emissions are independent. Significant autocorrelation of NO{sub x} emission measurements revealed that NO{sub x} concentration values measured by continuous emission monitors are not independent but can be represented by an autoregressive, moving average time series. Three orders of time-variability of NO{sub x} emission rates were determined from examination of continuous emission measurements from several cement kilns.
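The statistical point, that the usual Var(x̄) = s²/n formula fails for autocorrelated continuous-emission-monitor data, can be illustrated with the large-n AR(1) correction (simulated series, not actual kiln measurements):

```python
import numpy as np

def mean_variance_ar1(x):
    """Naive vs AR(1)-corrected variance of the sample mean; the naive
    i.i.d. formula understates it when lag-1 autocorrelation rho > 0."""
    x = np.asarray(x, float)
    n = len(x)
    xc = x - x.mean()
    rho = (xc[:-1] @ xc[1:]) / (xc @ xc)        # lag-1 autocorrelation
    naive = x.var(ddof=1) / n                   # assumes independence
    corrected = naive * (1 + rho) / (1 - rho)   # large-n AR(1) factor
    return naive, corrected, rho

# Simulated autocorrelated emissions-like series: x_t = 0.8 x_{t-1} + e_t
rng = np.random.default_rng(1)
e = rng.normal(size=5000)
x = np.empty(5000)
x[0] = e[0]
for t in range(1, 5000):
    x[t] = 0.8 * x[t - 1] + e[t]
naive, corrected, rho = mean_variance_ar1(x)
```

For rho near 0.8, the corrected variance is roughly nine times the naive value, showing how badly independence-based uncertainty estimates can understate the truth.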
Matthaeus, W.H.; Goldstein, M.L.; Roberts, D.A.
1990-12-01
Solar wind fluctuations are commonly regarded as a superposition of MHD waves primarily in the Alfvén mode. These MHD fluctuations are frequently assumed to possess slab or isotropic symmetry, particularly in the development of models of the propagation of cosmic rays throughout the heliosphere. There are, however, several long-standing problems with either of these choices. One problem is that the mean free path for pitch angle scattering of cosmic rays in the heliosphere is apparently longer than can be accounted for by using either assumption about the statistical symmetry of the fluctuations. Another problem is the prediction of WKB theory that the direction of minimum variance should tend to lie along the radial direction rather than along the mean magnetic field as is observed. Motivated by laboratory plasma experiments, a series of two-dimensional MHD simulations, recent theoretical work, and extensive analyses of solar wind data, the authors suggest that there is a third possible viewpoint with potentially important implications for solar wind studies. From this perspective they suggest that solar wind fluctuations contain a subpopulation that has wave vectors nearly transverse to both the mean magnetic field and the fluctuations about the mean. For this quasi-two-dimensional component the direction of minimum variance lies along the mean magnetic field, density fluctuations are small and anticorrelated with |B|, the total pressure at small scales is nearly constant, and pitch angle scattering by resonant wave-particle interactions is suppressed.
Formulas for calculating the heating value of coal and coal char: development, tests, and uses
Mason, D.M.; Gandhi, K.
1980-01-01
A new five-term formula for calculating the heating value of coal from its carbon, hydrogen, sulfur and ash content was obtained by regression analysis of data on 775 samples of US coals of all ranks. The standard deviation of the calculated value from the observed value was 129 Btu/lb, compared to apparent standard deviations ranging from 178 to 229 Btu/lb obtained from the Dulong, Boie, Grummel and Davis, and Mott and Spooner formulas. An analysis of the variance of the difference between observed and calculated values obtained with the new formula on IGT coal data indicated that at least 77% is contributed by the variance of the experimental determinations; the remainder can be attributed to the effect of mineral matter and outlying experimental determinations. Application of the formula to coal oxidatively pretreated at 750°F to destroy agglomerating properties yields a bias indicating that the heat of formation is higher than expected from elemental and ash composition by about 140 Btu/lb; this is attributed to differences in structure (bonding). The formula gives satisfactory results on higher temperature HYGAS chars, and with application of a bias correction on pretreated coal. Thus, the formula is advantageous for use in the computer modelling of coal conversion processes and for monitoring test data on coal and char.
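A regression of this kind can be sketched as an ordinary least-squares fit of a five-term linear formula. The coal analyses and "true" coefficients below are synthetic assumptions, not the paper's data or fitted values:

```python
import numpy as np

# Synthetic ultimate analyses for 200 hypothetical coal samples (wt%).
rng = np.random.default_rng(4)
n = 200
C = rng.uniform(60, 85, n)      # carbon
H = rng.uniform(4, 6, n)        # hydrogen
S = rng.uniform(0.5, 4, n)      # sulfur
ash = rng.uniform(5, 15, n)     # ash
# Assumed linear heating-value law (Btu/lb) plus measurement noise.
hhv = 145.4 * C + 620.3 * H + 40.5 * S - 30.0 * ash + rng.normal(0, 100, n)

# Least-squares fit of the five-term formula HHV = a*C + b*H + c*S + d*Ash + e
A = np.column_stack([C, H, S, ash, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, hhv, rcond=None)
resid_sd = np.std(hhv - A @ coef, ddof=5)   # analogue of the 129 Btu/lb figure
```

The residual standard deviation plays the role of the paper's 129 Btu/lb: with a good functional form it approaches the experimental measurement noise.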
Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; Liu, Ying
2015-12-04
Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
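Of the four SA approaches listed, the standardized-regression-coefficient measure is the simplest to sketch. The response model below is a toy assumption, not Community Land Model output:

```python
import numpy as np

def src_sensitivity(X, y):
    """Standardized regression coefficients |b_i * sd(x_i) / sd(y)|:
    a linear-model ranking of input-parameter importance."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    A = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]   # drop intercept
    return np.abs(beta) * X.std(axis=0) / y.std()

# Toy response: strong dependence on x0, weak on x1, none on x2.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
src = src_sensitivity(X, y)
```

Like the approaches in the abstract, this measure ranks parameters consistently for dominant inputs but is least reliable for secondary ones, where nonlinear methods may disagree.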
Martin, N.G.; Nightingale, B.; Whitfield, J.B.
1994-09-01
There is much interest in the detection of quantitative trait loci (QTL) - major genes which affect quantitative phenotypes. The relationship of polymorphism at known alcohol metabolizing enzyme loci to alcohol pharmacokinetics is a good model system. The three class I alcohol dehydrogenase genes are clustered on chromosome 4 and protein electrophoresis has revealed polymorphisms at the ADH2 and ADH3 loci. While different activities of the isozymes have been demonstrated in vitro, little work has been done in trying to relate ADH polymorphism to variation in ethanol metabolism in vivo. We previously measured ethanol metabolism and psychomotor reactivity in 206 twin pairs and demonstrated that most of the repeatable variation was genetic. We have now recontacted the twins to obtain DNA samples and used PCR with allele-specific primers to type the ADH2 and ADH3 polymorphisms in 337 individual twins. FISHER has been used to estimate fixed effects of typed polymorphisms simultaneously with remaining linked and unlinked genetic variance. The ADH2*1-2 genotypes metabolize ethanol faster and attain a lower peak blood alcohol concentration than the more common ADH2*1-1 genotypes, although less than 3% of the variance is accounted for. There is no effect of ADH3 genotype. However, sib-pair linkage analysis suggests that there is a linked polymorphism which has a much greater effect on alcohol metabolism than those typed here.
URBAN WOOD/COAL CO-FIRING IN THE BELLEFIELD BOILERPLANT
James T. Cobb, Jr.; Gene E. Geiger; William W. Elder III; William P. Barry; Jun Wang; Hongming Li
2001-08-21
During the third quarter, important preparatory work was continued so that the experimental activities can begin early in the fourth quarter. Authorization was awaited in response to the letter that was submitted to the Allegheny County Health Department (ACHD) seeking an R&D variance for the air permit at the Bellefield Boiler Plant (BBP). Verbal authorizations were received from the Pennsylvania Department of Environmental Protection (PADEP) for R&D variances for solid waste permits at the J. A. Rutter Company (JARC), and Emery Tree Service (ETS). Construction wood was acquired from Thompson Properties and Seven D Corporation. Forty tons of pallet and construction wood were ground to produce BioGrind Wood Chips at JARC and delivered to Mon Valley Transportation Company (MVTC). Five tons of construction wood were milled at ETS and half of the product delivered to MVTC. Discussions were held with BBP and Energy Systems Associates (ESA) about the test program. Material and energy balances on Boiler No.1 and a plan for data collection were prepared. Presentations describing the University of Pittsburgh Wood/Coal Co-Firing Program were provided to the Pittsburgh Chapter of the Pennsylvania Society of Professional Engineers, and the Upgraded Coal Interest Group and the Biomass Interest Group (BIG) of the Electric Power Research Institute (EPRI). An article describing the program appeared in the Pittsburgh Post-Gazette. An application was submitted for authorization for a Pennsylvania Switchgrass Energy and Conservation Program.
Preston, Benjamin L.; King, Anthony Wayne; Mei, Rui; Nair, Sujithkumar Surendran
2016-02-11
Agricultural enterprises are vulnerable to the effects of climate variability and change. Improved understanding of the determinants of vulnerability and adaptive capacity in agricultural systems is important for projecting and managing future climate risk. At present, three analytical tools dominate methodological approaches to understanding agroecological vulnerability to climate: process-based crop models, empirical crop models, and integrated assessment models. A common weakness of these approaches is their limited treatment of socio-economic conditions and human agency in modeling agroecological processes and outcomes. This study proposes a framework that uses spatial cluster analysis to generate regional socioecological typologies that capture geographic variance in regional agricultural production and enable attribution of that variance to climatic, topographic, edaphic, and socioeconomic components. This framework was applied to historical corn production (1986-2010) in the U.S. Gulf of Mexico region as a testbed. The results demonstrate that regional socioeconomic heterogeneity is an important driving force in human-dominated ecosystems, which we hypothesize is a function of the link between socioeconomic conditions and the adaptive capacity of agricultural systems. Meaningful representation of future agricultural responses to climate variability and change is contingent upon understanding interactions among biophysical conditions, socioeconomic conditions, and human agency, and their incorporation in predictive models.
Bufoni, André Luiz
2015-09-15
Highlights: • Projects are not financially attractive without registration as CDMs. • WM benchmarks and indicators are converging and reducing in variance. • A sensitivity analysis reveals that revenue has more of an effect on the financial results. • Results indicate that an extensive database would reduce WM project risk and capital costs. • Disclosure standards would make information more comparable worldwide. - Abstract: This study illustrates the financial analyses for demonstration and assessment of additionality presented in the project design documents (PDD) and enclosed documents of the 431 large Clean Development Mechanisms (CDM) classified as the ‘waste handling and disposal sector’ (13) over the past ten years (2004–2014). The expected certified emissions reductions (CER) of these projects total 63.54 million metric tons of CO{sub 2}eq, where eight countries account for 311 projects and 43.36 million metric tons. All of the projects declare themselves ‘not financially attractive’ without CER, with an estimated sum of negative results of approximately half a billion US$. The results indicate that WM benchmarks and indicators are converging and reducing in variance, and the sensitivity analysis reveals that revenues have a greater effect on the financial results. This work concludes that an extensive financial database with simple standards for disclosure would greatly diminish statement problems and make information more comparable, reducing the risk and capital costs of WM projects.
Characterization and estimation of permeability correlation structure from performance data
Ershaghi, I.; Al-Qahtani, M.
1997-08-01
In this study, the influence of permeability structure and correlation length on the system effective permeability and recovery factors of 2-D cross-sectional reservoir models, under waterflood, is investigated. Reservoirs with identical statistical representation of permeability attributes are shown to exhibit different system effective permeability and production characteristics which can be expressed by a mean and variance. The mean and variance are shown to be significantly influenced by the correlation length. Detailed quantification of the influence of horizontal and vertical correlation lengths for different permeability distributions is presented. The effect of capillary pressure, P{sub c}, on the production characteristics and saturation profiles at different correlation lengths is also investigated. It is observed that neglecting P{sub c} causes considerable error at large horizontal and short vertical correlation lengths. The effect of using constant as opposed to variable relative permeability attributes is also investigated at different correlation lengths. Next, we studied the influence of correlation anisotropy in 2-D reservoir models. For a reservoir under a five-spot waterflood pattern, it is shown that the ratios of breakthrough times and recovery factors of the wells in each direction of correlation are greatly influenced by the degree of anisotropy. In fully developed fields, performance data can aid in the recognition of reservoir anisotropy. Finally, a procedure for estimating the spatial correlation length from performance data is presented. Both the production performance data and the system's effective permeability are required in estimating the correlation length.
Climatology of wave breaking and mixing in the Northern Hemisphere summer stratosphere
Wagner, R.E.
1999-07-02
The cause of large zonal ozone variations observed by POAM II (Polar Ozone and Aerosol Measurement II) in the Northern Hemisphere (NH) summer stratosphere between 55N-65N and 20-30 km is investigated using the United Kingdom Meteorological Office stratospheric data set with time-mean anomalies removed. This study tests the hypothesis from Hoppel et al. 1999 that breaking of westward-propagating planetary waves in the region of maximum ozone variance (RMV) induces substantial meridional transport which is responsible for the observed ozone variance. EP-flux vectors show that wave activity propagates vertically from source regions in the lower midlatitude troposphere into the stratosphere and RMV during the NH summer. In the RMV, EP-flux divergence is clearly nonzero, which means the zonal-mean zonal flow is forced by waves in this region. Close examination of individual zonal wavenumber contributions to the climatological monthly-mean EP-flux divergence shows that wavenumbers 1-5 generally account for over 90% of the forcing of the zonal-mean flow in the RMV from June to August.
Inflationary power asymmetry from primordial domain walls
Jazayeri, Sadra; Akrami, Yashar; Firouzjahi, Hassan; Solomon, Adam R.; Wang, Yi E-mail: yashar.akrami@astro.uio.no E-mail: a.r.solomon@damtp.cam.ac.uk
2014-11-01
We study the asymmetric primordial fluctuations in a model of inflation in which translational invariance is broken by a domain wall. We calculate the corrections to the power spectrum of curvature perturbations; they are anisotropic and contain dipole, quadrupole, and higher multipoles with non-trivial scale-dependent amplitudes. Inspired by observations of these multipole asymmetries in terms of two-point correlations and variance in real space, we demonstrate that this model can explain the observed anomalous power asymmetry of the cosmic microwave background (CMB) sky, including its characteristic feature that the dipole dominates over higher multipoles. We test the viability of the model and place approximate constraints on its parameters by using observational values of dipole, quadrupole, and octopole amplitudes of the asymmetry measured by a local-variance estimator. We find that a configuration of the model in which the CMB sphere does not intersect the domain wall during inflation provides a good fit to the data. We further derive analytic expressions for the corrections to the CMB temperature covariance matrix, or angular power spectra, which can be used in future statistical analysis of the model in spherical harmonic space.
Fingerprints of anomalous primordial Universe on the abundance of large scale structures
Baghram, Shant; Abolhasani, Ali Akbar; Firouzjahi, Hassan; Namjoo, Mohammad Hossein E-mail: abolhasani@ipm.ir E-mail: MohammadHossein.Namjoo@utdallas.edu
2014-12-01
We study the predictions of anomalous inflationary models on the abundance of structures in large scale structure observations. The anomalous features encoded in the primordial curvature perturbation power spectrum are (a) localized feature in momentum space, (b) hemispherical asymmetry, and (c) statistical anisotropies. We present a model-independent expression relating the number density of structures to the changes in the matter density variance. Models with a localized feature can alleviate the tension between observations and numerical simulations of cold dark matter structures on galactic scales as a possible solution to the missing satellite problem. In models with hemispherical asymmetry we show that the abundance of structures becomes asymmetric depending on the direction of observation on the sky. In addition, we study the effects of scale-dependent dipole amplitude on the abundance of structures. Using the quasar data and adopting the power-law scaling k{sup n{sub A}-1} for the amplitude of the dipole we find the upper bound n{sub A}<0.6 for the spectral index of the dipole asymmetry. In all cases there is a critical mass scale M{sub c} in which for M
Method and system for turbomachinery surge detection
Faymon, David K.; Mays, Darrell C.; Xiong, Yufei
2004-11-23
A method and system for surge detection within a gas turbine engine, comprises: measuring the compressor discharge pressure (CDP) of the gas turbine over a period of time; determining a time derivative (CDP.sub.D) of the measured CDP; correcting CDP.sub.D for altitude (CDP.sub.DCOR); estimating a short-term average of CDP.sub.DCOR.sup.2; estimating a short-term average of CDP.sub.DCOR; and determining a short-term variance of corrected CDP rate of change (CDP.sub.roc) based upon the short-term average of CDP.sub.DCOR and the short-term average of CDP.sub.DCOR.sup.2. The method and system then compares the short-term variance of corrected CDP rate of change with a pre-determined threshold (CDP.sub.proc) and signals an output when CDP.sub.roc > CDP.sub.proc. The method and system provides a signal of a surge within the gas turbine engine when CDP.sub.roc remains > CDP.sub.proc for a pre-determined period of time.
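The claimed sequence, a short-term variance of the derivative computed as E[d²] − E[d]² and compared against a threshold with a hold time, can be sketched with exponential moving averages; the constants (alpha, threshold, hold) are illustrative, not values from the patent:

```python
import numpy as np

def surge_flags(cdp, alpha=0.2, threshold=2.0, hold=3):
    """Flag a surge when the short-term variance of the CDP rate of
    change, estimated as E[d^2] - E[d]^2 with exponential moving
    averages, stays above a threshold for `hold` consecutive samples."""
    d = np.diff(np.asarray(cdp, float))   # discrete time derivative
    m1 = m2 = 0.0
    over = 0
    flags = []
    for v in d:
        m1 = (1 - alpha) * m1 + alpha * v        # short-term mean of d
        m2 = (1 - alpha) * m2 + alpha * v * v    # short-term mean of d^2
        var = max(m2 - m1 * m1, 0.0)             # short-term variance
        over = over + 1 if var > threshold else 0
        flags.append(over >= hold)               # sustained exceedance
    return np.array(flags)

# Steady pressure, then rapid surge-like oscillation (synthetic units).
cdp = np.concatenate([np.full(50, 100.0), np.tile([110.0, 100.0], 10)])
flags = surge_flags(cdp)
```

The hold counter mirrors the patent's requirement that the variance remain above threshold for a pre-determined period before a surge is signalled, which suppresses single-sample transients.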
Mearns, L. O.; Sain, Steve; Leung, Lai-Yung R.; Bukovsky, M. S.; McGinnis, Seth; Biner, S.; Caya, Daniel; Arritt, R.; Gutowski, William; Takle, Eugene S.; Snyder, Mark A.; Jones, Richard; Nunes, A M B.; Tucker, S.; Herzmann, D.; McDaniel, Larry; Sloan, Lisa
2013-10-01
We investigate major results of the NARCCAP multiple regional climate model (RCM) experiments driven by multiple global climate models (GCMs) regarding climate change for seasonal temperature and precipitation over North America. We focus on two major questions: How do the RCM simulated climate changes differ from those of the parent GCMs and thus affect our perception of climate change over North America, and how important are the relative contributions of RCMs and GCMs to the uncertainty (variance explained) for different seasons and variables? The RCMs tend to produce stronger climate changes for precipitation: larger increases in the northern part of the domain in winter and greater decreases across a swath of the central part in summer, compared to the four GCMs driving the regional models as well as to the full set of CMIP3 GCM results. We pose some possible process-level mechanisms for the difference in intensity of change, particularly for summer. Detailed process-level studies will be necessary to establish mechanisms and credibility of these results. The GCMs explain more variance for winter temperature and the RCMs for summer temperature. The same is true for precipitation patterns. Thus, we recommend that future RCM-GCM experiments over this region include a balanced number of GCMs and RCMs.
Exploiting Genetic Variation of Fiber Components and Morphology in Juvenile Loblolly Pine
Chang, Hou-Min; Kadia, John F.; Li, Bailian; Sederoff, Ron
2005-06-30
In order to ensure the global competitiveness of the Pulp and Paper Industry in the Southeastern U.S., more wood with targeted characteristics has to be produced more efficiently on less land. The objective of the research project is to provide a molecular genetic basis for tree breeding of desirable traits in juvenile loblolly pine, using a multidisciplinary research approach. We developed micro-analytical methods to determine the cellulose and lignin content, average fiber length, and coarseness of a single ring in a 12 mm increment core. These methods allow rapid determination of these traits at micro scale. Genetic variation and genotype by environment interaction (GxE) were studied in several juvenile wood traits of loblolly pine (Pinus taeda L.). Over 1000 wood samples of 12 mm increment cores were collected from 14 full-sib families generated by a 6-parent half-diallel mating design (11-year-old) in four progeny tests. Juvenile (ring 3) and transition (ring 8) wood for each increment core were analyzed for cellulose and lignin content, average fiber length, and coarseness. Transition wood had higher cellulose content, longer fiber and higher coarseness, but lower lignin than juvenile wood. General combining ability variance for the traits in juvenile wood explained 3 to 10% of the total variance, whereas the specific combining ability variance was negligible or zero. There were noticeable full-sib family rank changes between sites for all the traits. This was reflected in very high specific combining ability by site interaction variances, which explained from 5% (fiber length) to 37% (lignin) of the total variance. Weak individual-tree heritabilities were found for cellulose, lignin content and fiber length at the juvenile and transition wood, except for lignin at the transition wood (0.23). Coarseness had moderately high individual-tree heritabilities at both the juvenile (0.39) and transition wood (0.30). Favorable genetic correlations of volume and stem
Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG
Risner, Joel M; Johnson, Seth R; Remec, Igor; Bekar, Kursat B
2015-01-01
Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep penetration shielding and an unshielded beamline, and a desire to obtain well-converged nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time, and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. We also tested importance function
Efficacy of fixed filtration for rapid kVp-switching dual energy x-ray systems
Yao, Yuan; Wang, Adam S.; Pelc, Norbert J.; Department of Radiology, Stanford University, Stanford, California 94305; Department of Electrical Engineering, Stanford University, Stanford, California 94305
2014-03-15
Purpose: Dose efficiency of dual kVp imaging can be improved if the two beams are filtered to remove photons in the common part of their spectra, thereby increasing spectral separation. While there are a number of advantages to rapid kVp-switching for dual energy, it may not be feasible to have two different filters for the two spectra. Therefore, the authors are interested in whether a fixed added filter can improve the dose efficiency of kVp-switching dual energy x-ray systems. Methods: The authors hypothesized that a K-edge filter would provide the energy selectivity needed to remove overlap of the spectra and hence increase the precision of material separation at constant dose. Preliminary simulations were done using calcium and water basis materials and 80 and 140 kVp x-ray spectra. Precision of the decomposition was evaluated based on the propagation of the Poisson noise through the decomposition function. Considering availability and cost, the authors chose a commercial Gd{sub 2}O{sub 2}S screen as the filter for their experimental validation. Experiments were conducted on a table-top system using a phantom with various thicknesses of acrylic and copper and 70 and 125 kVp x-ray spectra. The authors kept the phantom exposure roughly constant with and without filtration by adjusting the tube current. The filtered and unfiltered raw data of both low and high energy were decomposed into basis material and the variance of the decomposition for each thickness pair was calculated. To evaluate the filtration performance, the authors measured the ratio of material decomposition variance with and without filtration. Results: Simulation results show that the ideal filter material depends on the object composition and thickness, and ranges across the lanthanide series, with higher atomic number filters being preferred for more attenuating objects. Variance reduction increases with filter thickness, and substantial reductions (40%) can be achieved with a 2 loss in
SU-E-T-550: Modulation Index for VMAT
Park, J; Park, S; Kim, J; Kim, J; Kim, H; Carlson, J; Ye, S
2015-06-15
Purpose: To present modulation indices (MIs) for volumetric modulated arc therapy (VMAT). Methods: A total of 40 VMAT plans were retrospectively selected. To investigate the delivery accuracy of each VMAT plan, gamma passing rates, differences in modulating parameters between plans and log files, and differences between the original plans and the plans reconstructed with the log files were acquired. A modulation index (MIt) was designed by multiplications of the weighted quantifications of MLC speeds, MLC accelerations, gantry accelerations and dose-rate variations. Textural features including angular second moment, inverse difference moment, contrast, variance, correlation and entropy were calculated from the fluences of each VMAT plan. To test the performance of the suggested MIs, Spearman’s rank correlation coefficients (r) with the plan delivery accuracy were calculated. Conventional modulation indices for VMAT, including the modulation complexity score for VMAT (MCSv), the leaf travel modulation complexity score (LTMCS) and the MI by Li & Xing, were calculated, and their correlations were analyzed in the same way. Results: The r values of contrast (particular displacement distance, d = 1), variance (d = 1), MIt, MCSv, LTMCS and the MI by Li & Xing to the local gamma passing rates with 2%/2 mm were 0.547 (p < 0.001), 0.519 (p < 0.001), −0.658 (p < 0.001), 0.186 (p = 0.251), 0.312 (p = 0.05) and −0.455 (p = 0.003), respectively. The r values of those to the MLC errors were −0.863, −0.828, 0.917, −0.635, −0.857 and 0.795, respectively (p < 0.001). For dose-volumetric parameters, MIt showed higher statistically significant correlations than did the conventional modulation indices. Conclusion: The MIt, contrast (d = 1) and variance (d = 1) performed well in predicting VMAT delivery accuracy, showing higher correlations to the results of various types of verification methods for VMAT. This work was in part supported by the National Research Foundation of
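The rank correlations quoted above can be computed with a short routine. The following is a minimal pure-Python Spearman r; the modulation-index and gamma-passing values are invented for illustration only.

```python
# Hedged sketch: Spearman's rank correlation r, as used above to relate a
# modulation index to gamma passing rates. Sample values are made up.

def rank(xs):
    # average ranks, handling ties
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Higher modulation index -> lower gamma passing rate (negative r),
# mirroring the sign of the MIt correlations reported above.
mi  = [0.2, 0.5, 0.9, 1.3, 1.8, 2.4]
gpr = [99.1, 98.7, 97.9, 96.5, 95.2, 93.0]
r = spearman(mi, gpr)
```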
Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.
2013-11-15
Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
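As a generic illustration of the conjugate-gradient machinery such a MAP solver builds on (this is a plain CG loop on a small symmetric positive-definite quadratic, not the authors' monotone variant with suboptimal steps):

```python
# Generic sketch: conjugate gradient minimizing 0.5 x^T A x - b^T x,
# i.e., solving A x = b for a small SPD system. Not the authors' algorithm.

def conjugate_gradient(A, b, iters=50):
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x, with x = 0
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < 1e-16:         # converged
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

For an n-dimensional SPD system, exact CG converges in at most n iterations in exact arithmetic, which is why it is a common inner solver in iterative reconstruction.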
Uncertainty in Integrated Assessment Scenarios
Mort Webster
2005-10-17
The determination of climate policy is a decision under uncertainty. The uncertainty in future climate change impacts is large, as is the uncertainty in the costs of potential policies. Rational and economically efficient policy choices will therefore seek to balance the expected marginal costs with the expected marginal benefits. This approach requires that the risks of future climate change be assessed. The decision process need not be formal or quantitative for descriptions of the risks to be useful. Whatever the decision procedure, a useful starting point is to have as accurate a description of climate risks as possible. Given the goal of describing uncertainty in future climate change, we need to characterize the uncertainty in the main causes of uncertainty in climate impacts. One of the major drivers of uncertainty in future climate change is the uncertainty in future emissions, both of greenhouse gases and other radiatively important species such as sulfur dioxide. In turn, the drivers of uncertainty in emissions are uncertainties in the determinants of the rate of economic growth and in the technologies of production and how those technologies will change over time. This project uses historical experience and observations from a large number of countries to construct statistical descriptions of variability and correlation in labor productivity growth and in the autonomous energy efficiency improvement (AEEI). The observed variability then provides a basis for constructing probability distributions for these drivers. The variance of the growth-rate distributions can be further modified by expert judgment if it is believed that future variability will differ from the past. But often, expert judgment is more readily applied to projected median or expected paths through time. Analysis of past variance and covariance provides initial assumptions about future uncertainty for quantities that are less intuitive and difficult for experts to estimate, and these variances can be normalized and then applied to mean
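The approach described above can be sketched in a few lines: estimate the variance of historical growth rates, then center noise with that variance on an expert median path. All numbers below are hypothetical, not the project's data.

```python
# Illustrative sketch: historically derived variance applied to an
# expert median growth path. Values are invented.
import random

historical = [0.021, 0.015, 0.032, 0.008, 0.027, 0.019, 0.024, 0.011]
mean_h = sum(historical) / len(historical)
var_h = sum((g - mean_h) ** 2 for g in historical) / (len(historical) - 1)

def sample_paths(median_path, std, n_paths, seed=0):
    """Draw growth-rate paths: expert median plus historically derived noise."""
    rng = random.Random(seed)
    return [[m + rng.gauss(0.0, std) for m in median_path] for _ in range(n_paths)]

expert_median = [0.02, 0.02, 0.018, 0.018, 0.015]   # assumed median projection
paths = sample_paths(expert_median, var_h ** 0.5, n_paths=1000)
```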
Real-Time Active Cosmic Neutron Background Reduction Methods
Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ronald; Mitchell, Stephen; Guss, Paul
2013-09-01
Neutron counting using large arrays of pressurized {sup 3}He proportional counters from an aerial system or in a maritime environment suffers from background counts from the primary cosmic neutrons and secondary neutrons caused by cosmic-ray-induced mechanisms like spallation and charge-exchange reactions. This paper reports the work performed at the Remote Sensing Laboratory–Andrews (RSL-A) and results obtained when using two different methods to reduce the cosmic neutron background in real time. Both methods used shielding materials with a high concentration (up to 30% by weight) of neutron-absorbing materials, such as natural boron, to remove the low-energy neutron flux from the cosmic background as the first step of the background reduction process. Our first method was to design, prototype, and test an up-looking plastic scintillator (BC-400, manufactured by Saint Gobain Corporation) to tag the cosmic neutrons and then create a logic pulse of a fixed time duration (~120 μs) to block the data taken by the neutron counter (pressurized {sup 3}He tubes running in proportional counter mode). The second method examined the time correlation between the arrivals of two successive neutron signals at the counting array and calculated the excess of variance (Feynman variance Y2F) [1] of the neutron count distribution over a Poisson distribution. The dilution of this variance from cosmic background values ideally would signal the presence of man-made neutrons [2]. The first method has been technically successful in tagging the neutrons in the cosmic-ray flux and preventing them from being counted in the {sup 3}He tube array by electronic veto; field measurement work shows the efficiency of the electronic veto counter to be about 87%. The second method has successfully derived an empirical relationship between the percentile non-cosmic component in a neutron flux and the Y2F of the measured neutron count distribution. By using shielding materials alone, approximately 55% of the neutron flux
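The Feynman-variance idea referenced above can be demonstrated on synthetic count streams: Y = var/mean − 1 for counts in fixed time gates is near zero for an uncorrelated (cosmic-like) background and positive for correlated (man-made, multiplying) sources. The count data below are synthetic, not the measurement results.

```python
# Sketch of the Feynman excess-variance statistic on synthetic gate counts.
import random

def feynman_y(gate_counts):
    n = len(gate_counts)
    mean = sum(gate_counts) / n
    var = sum((c - mean) ** 2 for c in gate_counts) / (n - 1)
    return var / mean - 1.0

rng = random.Random(42)
# Independent counts (binomial approximation to Poisson): Y stays near zero.
poisson_gates = [sum(1 for _ in range(200) if rng.random() < 0.05)
                 for _ in range(5000)]
# Correlated source: each detected event delivers a burst of 3 neutrons,
# inflating the variance relative to the mean, so Y rises well above zero.
burst_gates = [3 * sum(1 for _ in range(70) if rng.random() < 0.05)
               for _ in range(5000)]

y_bg = feynman_y(poisson_gates)
y_src = feynman_y(burst_gates)
```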
Zhao, Chun; Liu, Xiaohong; Qian, Yun; Yoon, Jin-Ho; Hou, Zhangshuan; Lin, Guang; McFarlane, Sally A.; Wang, Hailong; Yang, Ben; Ma, Po-Lun; Yan, Huiping; Bao, Jie
2013-11-08
In this study, we investigated the sensitivity of net radiative fluxes (FNET) at the top of atmosphere (TOA) to 16 selected uncertain parameters mainly related to the cloud microphysics and aerosol schemes in the Community Atmosphere Model version 5 (CAM5). We adopted a quasi-Monte Carlo (QMC) sampling approach to effectively explore the high dimensional parameter space. The output response variables (e.g., FNET) were simulated using CAM5 for each parameter set, and then evaluated using generalized linear model analysis. In response to the perturbations of these 16 parameters, the CAM5-simulated global annual mean FNET ranges from -9.8 to 3.5 W m-2 compared to the CAM5-simulated FNET of 1.9 W m-2 with the default parameter values. Variance-based sensitivity analysis was conducted to show the relative contributions of individual parameter perturbation to the global FNET variance. The results indicate that the changes in the global mean FNET are dominated by those of cloud forcing (CF) within the parameter ranges being investigated. The size threshold parameter related to auto-conversion of cloud ice to snow is confirmed as one of the most influential parameters for FNET in the CAM5 simulation. The strong heterogeneous geographic distribution of FNET variation shows parameters have a clear localized effect over regions where they are acting. However, some parameters also have non-local impacts on FNET variance. Although external factors, such as perturbations of anthropogenic and natural emissions, largely affect FNET variations at the regional scale, their impact is weaker than that of model internal parameters in terms of simulating global mean FNET in this study. The interactions among the 16 selected parameters contribute a relatively small portion of the total FNET variations over most regions of the globe. This study helps us better understand the CAM5 model behavior associated with parameter uncertainties, which will aid the next step of reducing model
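The variance-based sensitivity analysis mentioned above can be illustrated on a toy response (this is not CAM5): the first-order index S_i is the variance of the conditional mean E[f | x_i] divided by the total variance, estimated here by binning.

```python
# Hedged sketch: first-order variance-based sensitivity indices for a toy
# two-parameter model, via the conditional-mean (binning) estimator.
import random

def first_order_indices(f, n_bins=20, n_per_bin=400, seed=1):
    rng = random.Random(seed)
    samples = [(rng.random(), rng.random()) for _ in range(n_bins * n_per_bin)]
    ys = [f(x1, x2) for x1, x2 in samples]
    mean = sum(ys) / len(ys)
    total_var = sum((y - mean) ** 2 for y in ys) / (len(ys) - 1)
    indices = []
    for axis in (0, 1):
        # bin on x_axis, then take the variance of the per-bin means
        bins = [[] for _ in range(n_bins)]
        for (x1, x2), y in zip(samples, ys):
            v = x1 if axis == 0 else x2
            bins[min(int(v * n_bins), n_bins - 1)].append(y)
        cond_means = [sum(b) / len(b) for b in bins if b]
        cm = sum(cond_means) / len(cond_means)
        var_cond = sum((m - cm) ** 2 for m in cond_means) / len(cond_means)
        indices.append(var_cond / total_var)
    return indices

# Response dominated by x1, so S1 should come out much larger than S2.
s1, s2 = first_order_indices(lambda x1, x2: 5.0 * x1 + 0.5 * x2)
```

In the study above, the same kind of decomposition attributes the global FNET variance to individual CAM5 parameters.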
Scaling impacts on environmental controls and spatial heterogeneity of soil organic carbon stocks
Mishra, U.; Riley, W. J.
2015-07-02
The spatial heterogeneity of land surfaces affects energy, moisture, and greenhouse gas exchanges with the atmosphere. However, representing the heterogeneity of terrestrial hydrological and biogeochemical processes in Earth system models (ESMs) remains a critical scientific challenge. We report the impact of spatial scaling on environmental controls, spatial structure, and statistical properties of soil organic carbon (SOC) stocks across the US state of Alaska. We used soil profile observations and environmental factors such as topography, climate, land cover types, and surficial geology to predict the SOC stocks at a 50 m spatial scale. These spatially heterogeneous estimates provide a data set with reasonable fidelity to the observations at a sufficiently high resolution to examine the environmental controls on the spatial structure of SOC stocks. We upscaled both the predicted SOC stocks and environmental variables from finer to coarser spatial scales (s = 100, 200, and 500 m and 1, 2, 5, and 10 km) and generated various statistical properties of SOC stock estimates. We found different environmental factors to be statistically significant predictors at different spatial scales. Only elevation, temperature, potential evapotranspiration, and scrub land cover types were significant predictors at all scales. The strengths of control (the median value of geographically weighted regression coefficients) of these four environmental variables on SOC stocks decreased with increasing scale and were accurately represented using mathematical functions (R2 = 0.83–0.97). The spatial structure of SOC stocks across Alaska changed with spatial scale. Although the variance (sill) and unstructured variability (nugget) of the calculated variograms of SOC stocks decreased exponentially with scale, the correlation length (range) remained relatively constant across scale. The variance of predicted SOC stocks decreased with spatial scale over the range of 50 m to ~ 500 m
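The variogram quantities used above (sill, nugget, range) come from an empirical semivariogram. As an illustrative sketch on a synthetic 1-D transect with short-range correlation (not the Alaska SOC data), gamma(h) rises with lag and then flattens toward the sill:

```python
# Sketch: empirical semivariogram gamma(h) on a synthetic transect.
import math, random

rng = random.Random(7)
# synthetic transect: smooth spatial signal plus small-scale noise (nugget)
z = [math.sin(i / 15.0) + 0.1 * rng.gauss(0, 1) for i in range(300)]

def semivariogram(values, lags):
    gamma = {}
    for h in lags:
        pairs = [(values[i], values[i + h]) for i in range(len(values) - h)]
        gamma[h] = sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))
    return gamma

g = semivariogram(z, [1, 5, 10, 20, 40])
```

Read off the plot of g: the intercept near h = 0 approximates the nugget, the plateau the sill, and the lag where the plateau is reached the range.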
Scaling impacts on environmental controls and spatial heterogeneity of soil organic carbon stocks
Mishra, U.; Riley, W. J.
2015-01-27
The spatial heterogeneity of land surfaces affects energy, moisture, and greenhouse gas exchanges with the atmosphere. However, representing heterogeneity of terrestrial hydrological and biogeochemical processes in earth system models (ESMs) remains a critical scientific challenge. We report the impact of spatial scaling on environmental controls, spatial structure, and statistical properties of soil organic carbon (SOC) stocks across the US state of Alaska. We used soil profile observations and environmental factors such as topography, climate, land cover types, and surficial geology to predict the SOC stocks at a 50 m spatial scale. These spatially heterogeneous estimates provide a dataset with reasonable fidelity to the observations at a sufficiently high resolution to examine the environmental controls on the spatial structure of SOC stocks. We upscaled both the predicted SOC stocks and environmental variables from finer to coarser spatial scales (s = 100, 200, 500 m, 1, 2, 5, 10 km) and generated various statistical properties of SOC stock estimates. We found different environmental factors to be statistically significant predictors at different spatial scales. Only elevation, temperature, potential evapotranspiration, and scrub land cover types were significant predictors at all scales. The strengths of control (the median value of geographically weighted regression coefficients) of these four environmental variables on SOC stocks decreased with increasing scale and were accurately represented using mathematical functions (R2 = 0.83–0.97). The spatial structure of SOC stocks across Alaska changed with spatial scale. Although the variance (sill) and unstructured variability (nugget) of the calculated variograms of SOC stocks decreased exponentially with scale, the correlation length (range) remained relatively constant across scale. The variance of predicted SOC stocks decreased with spatial scale over the range of 50 to ~ 500 m, and remained
Shirodkar, P.V. Mesquita, A.; Pradhan, U.K.; Verlekar, X.N.; Babu, M.T.; Vethamony, P.
2009-04-15
Water quality parameters (temperature, pH, salinity, DO, BOD, suspended solids, nutrients, PHc, phenols, trace metals-Pb, Cd and Hg, chlorophyll-a (chl-a) and phaeopigments) and the sediment quality parameters (total phosphorous, total nitrogen, organic carbon and trace metals) were analysed from samples collected at 15 stations along 3 transects off the Karnataka coast (Mangalore harbour in the south to Suratkal in the north), west coast of India, during 2007. The analyses showed high ammonia off Suratkal, high nitrite (NO{sub 2}-N) and nitrate (NO{sub 3}-N) in the nearshore waters off Kulai, and high nitrite (NO{sub 2}-N) and ammonia (NH{sub 3}-N) in the harbour area. Similarly, high petroleum hydrocarbon (PHc) values were observed near the harbour, while phenols remained high in the nearshore waters of Kulai and Suratkal. Concentrations of cadmium and mercury significantly higher than those reported in earlier studies were observed off Kulai and the harbour regions, respectively. R-mode varimax factor analyses were applied separately to the surface-water and bottom-water data sets (owing to stratification of the water column caused by riverine inflow) and to the sediment data. This helped in understanding the interrelationships between the variables and in identifying probable source components for explaining the environmental status of the area. Six factors (each for surface and bottom waters) were found responsible for the variance (86.9% in surface and 82.4% in bottom waters) in the coastal waters between Mangalore and Suratkal. In sediments, 4 factors explained 86.8% of the observed total variance. The variances indicated addition of nutrients and suspended solids to the coastal waters due to weathering and riverine transport, categorized as natural sources. The observed contamination of coastal waters indicated anthropogenic inputs of Cd and phenol from industrial effluent sources at Kulai and Suratkal, ammonia from wastewater discharges off Kulai and the harbour, and PHc and Hg from boat traffic
Temporary Cementitious Sealers in Enhanced Geothermal Systems
Sugama T.; Pyatina, T.; Butcher, T.; Brothers, L.; Bour, D.
2011-12-31
Unlike conventional hydrothermal geothermal technology that utilizes hot water as the energy conversion resource tapped from a natural hydrothermal reservoir located {approx}10 km below the ground surface, an Enhanced Geothermal System (EGS) must create a hydrothermal reservoir in a hot rock stratum at temperatures {ge}200 C, present {approx}5 km deep underground, by employing hydraulic fracturing. This is the process of initiating and propagating a fracture as well as opening pre-existing fractures in a rock layer. In this operation, considerable attention is paid to the pre-existing fractures and to pressure-generated ones made in the underground foundation during drilling and logging. These fractures, in terms of lost circulation zones, often cause the wastage of a substantial amount of the circulated water-based drilling fluid or mud. Thus, such lost circulation zones must be plugged by sealing materials so that the drilling operation can resume and continue. Next, one important consideration is the fact that the sealers must be disintegrated by highly pressured water to reopen the plugged fractures and to promote the propagation of the reopened fractures. In response to this need, the objective of this phase I project in FYs 2009-2011 was to develop temporary cementitious fracture sealing materials possessing self-degradable properties generated when the {ge}200 C-heated sealers came in contact with water. At BNL, we formulated two types of non-Portland cementitious systems using inexpensive industrial by-products with pozzolanic properties, such as granulated blast-furnace slag from the steel industries, and fly ashes from coal-combustion power plants. These byproducts were activated by sodium silicate to initiate their pozzolanic reactions and to create a cementitious structure. One developed system was sodium silicate alkali-activated slag/Class C fly ash (AASC); the other was sodium silicate alkali-activated slag/Class F fly ash (AASF) as the binder of temper
Zhang, W. F.; Nishimula, T.; Nagashio, K.; Kita, K.; Toriumi, A.
2013-03-11
We report a consistent conduction band offset (CBO) at a GeO{sub 2}/Ge interface determined by internal photoemission spectroscopy (IPE) and charge-corrected X-ray photoelectron spectroscopy (XPS). IPE results showed that the CBO value was larger than 1.5 eV irrespective of metal electrode and substrate type variance, while an accurate determination of valence band offset (VBO) by XPS requires a careful correction of differential charging phenomena. The VBO value was determined to be 3.60 {+-} 0.2 eV by XPS after charge correction, thus yielding a CBO (1.60 {+-} 0.2 eV) in excellent agreement with the IPE results. Such a large CBO (>1.5 eV) confirmed here is promising in terms of using GeO{sub 2} as a potential passivation layer for future Ge-based scaled CMOS devices.
Calculational method for determination of carburetor icing rate
Nazarov, V.I.; Emel'yanov, V.E.; Gonopol'ska, A.F.; Zaslavskii, A.A.
1986-05-01
This paper investigates the dependence of the carburetor icing rate on the density, distillation curve, and vapor pressure of gasoline. More than 100 gasoline samples, covering a range of volatility, were investigated. No clear-cut relationship can be observed between the carburetor icing rate and any specific property index of the gasoline. At the same time, there are certain latent variables, not directly observable but readily interpretable, through which the influence of gasoline quality on the carburetor icing rate can be explained. The conversion to these variables was accomplished taking into account the variance and correlation of the carburetor icing rate. Equations are presented that may be used to predict the carburetor icing rate when using gasolines differing in quality. The equations can also determine the need for incorporating anti-icing additives in the gasoline.
LIFE ESTIMATION OF HIGH LEVEL WASTE TANK STEEL FOR F-TANK FARM CLOSURE PERFORMANCE ASSESSMENT - 9310
Subramanian, K; Bruce Wiersma, B; Stephen Harris, S
2009-01-12
High level radioactive waste (HLW) is stored in underground carbon steel storage tanks at the Savannah River Site. The underground tanks will be closed by removing the bulk of the waste, chemical cleaning, heel removal, stabilizing remaining residuals with tailored grout formulations, and severing/sealing external penetrations. An estimate of the life of the carbon steel materials of construction has been completed in support of the performance assessment. The estimation considered general and localized corrosion mechanisms of the tank steel exposed to grouted conditions. A stochastic approach was followed to estimate the distributions of failures based upon the mechanisms of corrosion, accounting for variance in each of the independent variables. The methodology and results for one type of tank are presented.
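The stochastic approach described above can be sketched generically (this is not the actual SRS model): sample the uncertain corrosion rate and wall thickness, propagate each draw to a time-to-penetration, and read percentiles off the resulting failure-time distribution. All parameter values below are hypothetical.

```python
# Hedged sketch: Monte Carlo failure-time distribution for general corrosion.
import random

def failure_times(n=10000, seed=3):
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        wall_mm = rng.gauss(12.7, 0.5)                   # assumed wall thickness
        rate_mm_per_yr = rng.lognormvariate(-4.0, 0.5)   # assumed corrosion rate
        times.append(wall_mm / rate_mm_per_yr)           # time to penetration
    return sorted(times)

t = failure_times()
median_life = t[len(t) // 2]
early_5pct = t[int(0.05 * len(t))]   # 5th-percentile (conservative) life
```

The spread between the percentiles is exactly the variance information a performance assessment needs, rather than a single deterministic life.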
Planck constraints on monodromy inflation
Easther, Richard; Flauger, Raphael E-mail: flauger@ias.edu
2014-02-01
We use data from the nominal Planck mission to constrain modulations in the primordial power spectrum associated with monodromy inflation. The largest improvement in fit relative to the unmodulated model has Δχ{sup 2} ≈ 10 and we find no evidence for a primordial signal, in contrast to a previous analysis of the WMAP9 dataset, for which Δχ{sup 2} ≈ 20. The Planck and WMAP9 results are broadly consistent on angular scales where they are expected to agree as far as best-fit values are concerned. However, even on these scales the significance of the signal is reduced in Planck relative to WMAP, and is consistent with a fit to the ''noise'' associated with cosmic variance. Our results motivate both a detailed comparison between the two experiments and a more careful study of the theoretical predictions of monodromy inflation.
Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Guinta, Anthony A.; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
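As a generic illustration of the sampling-based uncertainty quantification the toolkit supports (this is a plain Latin hypercube sampler, not DAKOTA code): each of the n equal-probability strata of every variable is sampled exactly once, giving better space coverage than simple random sampling at the same cost.

```python
# Sketch: Latin hypercube sampling on the unit hypercube.
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    rng = random.Random(seed)
    cols = []
    for _ in range(n_vars):
        # one point in each of n equal-probability strata, then shuffled
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        cols.append(col)
    return [tuple(c[i] for c in cols) for i in range(n_samples)]

pts = latin_hypercube(10, 2)
```

The points would then be mapped through each variable's marginal distribution and fed to the simulation code under study.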
High-precision calculation of the strange nucleon electromagnetic form factors
Green, Jeremy; Meinel, Stefan; Engelhardt, Michael G.; Krieg, Stefan; Laeuchli, Jesse; Negele, John W.; Orginos, Kostas; Pochinsky, Andrew; Syritsyn, Sergey
2015-08-26
We report a direct lattice QCD calculation of the strange nucleon electromagnetic form factors G^{s}_{E} and G^{s}_{M} in the kinematic range 0 ≤ Q^{2} ≤ 1.2GeV^{2}. For the first time, both G^{s}_{E} and G^{s}_{M} are shown to be nonzero with high significance. This work uses closer-to-physical lattice parameters than previous calculations, and achieves an unprecented statistical precision by implementing a recently proposed variance reduction technique called hierarchical probing. We perform model-independent fits of the form factor shapes using the z-expansion and determine the strange electric and magnetic radii and magnetic moment. As a result, we compare our results to parity-violating electron-proton scattering data and to other theoretical studies.
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
An enhanced HOWFARLESS option for DOSXYZnrc simulations of slab geometries
Babcock, Kerry; Cranmer-Sargison, Gavin; Sidhu, Narinder
2008-09-15
The Monte Carlo code DOSXYZnrc is a valuable instrument for calculating absorbed dose within a three-dimensional Cartesian geometry. DOSXYZnrc includes several variance reduction techniques used to increase the efficiency of the Monte Carlo calculation. One such technique is HOWFARLESS, which is used to increase the efficiency of beam commissioning calculations in homogeneous phantoms. The authors present an enhanced version of HOWFARLESS which extends the application to phantoms that are inhomogeneous in one dimension. When the enhanced HOWFARLESS was used, efficiency increases as high as 14 times were observed without any loss in dose accuracy. The efficiency gain of an enhanced HOWFARLESS simulation was found to depend on both slab geometry and slab density. As the number of two-dimensional voxel layers per slab increases, so does the efficiency gain. Also, as the mass density of a slab is decreased, the efficiency gains increase.
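The "14 times" figure above refers to the standard Monte Carlo efficiency figure of merit, eff = 1 / (variance × time): a technique that reaches the same variance in 1/14 the CPU time is 14 times more efficient. A minimal sketch, with made-up numbers:

```python
# Sketch of the Monte Carlo efficiency figure of merit eff = 1/(s^2 * T).

def efficiency(sample_variance, cpu_seconds):
    return 1.0 / (sample_variance * cpu_seconds)

# Hypothetical: same dose variance reached in 1/14 the CPU time.
base = efficiency(sample_variance=4.0e-4, cpu_seconds=3600.0)
howfarless = efficiency(sample_variance=4.0e-4, cpu_seconds=3600.0 / 14.0)
gain = howfarless / base
```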
Energy Science and Technology Software Center (OSTI)
2012-01-05
ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three-dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
The Atacama Cosmology Telescope: cross correlation with Planck maps
Louis, Thibaut; Calabrese, Erminia; Dunkley, Joanna; Næss, Sigurd; Addison, Graeme E.; Hincks, Adam D.; Hasselfield, Matthew; Hlozek, Renée; Bond, J. Richard; Hajian, Amir; Das, Sudeep; Devlin, Mark J.; Dünner, Rolando; Infante, Leopoldo; Gralla, Megan; Marriage, Tobias A.; Huffenberger, Kevin; Kosowsky, Arthur; Moodley, Kavilan; Niemack, Michael D.; and others
2014-07-01
We present the temperature power spectrum of the Cosmic Microwave Background obtained by cross-correlating maps from the Atacama Cosmology Telescope (ACT) at 148 and 218 GHz with maps from the Planck satellite at 143 and 217 GHz, in two overlapping regions covering 592 square degrees. We find excellent agreement between the two datasets at both frequencies, quantified using the variance of the residuals between the ACT power spectra and the ACT × Planck cross-spectra. We use these cross-correlations to measure the calibration of the ACT data at 148 and 218 GHz relative to Planck, to 0.7% and 2% precision respectively. We find no evidence for anisotropy in the calibration parameter. We compare the Planck 353 GHz power spectrum with the measured amplitudes of dust and cosmic infrared background (CIB) of ACT data at 148 and 218 GHz. We also compare planet and point source measurements from the two experiments.
Detection limits for real-time source water monitoring using indigenous freshwater microalgae
Rodriguez Jr, Miguel; Greenbaum, Elias
2009-01-01
This research identified toxin detection limits using the variable fluorescence of naturally occurring microalgae in source drinking water for five chemical toxins with different molecular structures and modes of toxicity. The five chemicals investigated were atrazine, Diuron, paraquat, methyl parathion, and potassium cyanide. Absolute threshold sensitivities of the algae for detection of the toxins in unmodified source drinking water were measured. Differential kinetics between the rate of action of the toxins and natural changes in algal physiology, such as diurnal photoinhibition, are significant enough that effects of the toxin can be detected and distinguished from the natural variance. This is true even for physiologically impaired algae where diminished photosynthetic capacity may arise from uncontrollable external factors such as nutrient starvation. Photoinhibition induced by high levels of solar radiation is a predictable and reversible phenomenon that can be dealt with using a period of dark adaptation of 30 minutes or more.
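The detection logic described above, distinguishing a toxin effect from natural variance, can be sketched as a threshold test: flag an event when the fluorescence signal departs from its baseline variability by more than k standard deviations. The baseline and event values below are synthetic.

```python
# Hedged sketch: anomaly detection against natural variability.

def detect(baseline, new_value, k=3.0):
    n = len(baseline)
    mean = sum(baseline) / n
    std = (sum((x - mean) ** 2 for x in baseline) / (n - 1)) ** 0.5
    z = (new_value - mean) / std      # departure in units of natural variance
    return z, abs(z) > k

# synthetic healthy variable-fluorescence baseline
baseline_fv = [0.62, 0.60, 0.63, 0.61, 0.59, 0.62, 0.60, 0.61]
z_ok, alarm_ok = detect(baseline_fv, 0.60)    # within natural variance
z_tox, alarm_tox = detect(baseline_fv, 0.35)  # toxin-impaired photosynthesis
```

In practice the threshold k sets the trade-off between detection limit and false-alarm rate, which is what the measured absolute threshold sensitivities characterize.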
FLUOR HANFORD SAFETY MANAGEMENT PROGRAMS
GARVIN, L J; JENSEN, M A
2004-04-13
This document summarizes safety management programs used within the scope of the ''Project Hanford Management Contract''. The document has been developed to meet the format and content requirements of DOE-STD-3009-94, ''Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses''. This document provides summary descriptions of Fluor Hanford safety management programs, which Fluor Hanford nuclear facilities may reference and incorporate into their safety basis when producing facility- or activity-specific documented safety analyses (DSA). Facility- or activity-specific DSAs will identify any variances to the safety management programs described in this document and any specific attributes of these safety management programs that are important for controlling potentially hazardous conditions. In addition, facility- or activity-specific DSAs may identify unique additions to the safety management programs that are needed to control potentially hazardous conditions.
Fabrication of FCC-SiO{sub 2} colloidal crystals using the vertical convective self-assemble method
Castañeda-Uribe, O. A.; Salcedo-Reyes, J. C.; Méndez-Pinzón, H. A.; Pedroza-Rodríguez, A. M.
2014-05-15
In order to determine the optimal conditions for the growth of high-quality 250 nm SiO{sub 2} colloidal crystals by the vertical convective self-assembly method, the Design of Experiments (DoE) methodology is applied. The influence of the evaporation temperature, the volume fraction, and the pH of the colloidal suspension is studied by means of an analysis of variance (ANOVA) in a 3{sup 3} factorial design. Characteristics of the stacking lattice of the resulting colloidal crystals are determined by scanning electron microscopy and angle-resolved transmittance spectroscopy. Quantitative results from the statistical test show that the temperature is the most critical factor influencing the quality of the colloidal crystal, with highly ordered structures having an FCC stacking lattice obtained at a growth temperature of 40 °C.
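The ANOVA screening described in this record can be illustrated with a minimal one-way F-statistic. This is a generic sketch, not the authors' 3{sup 3} factorial analysis; the function name and the quality scores grouped by growth temperature are invented for illustration.

```python
# Minimal one-way ANOVA F-statistic (illustrative sketch; the paper's
# actual analysis is a 3^3 factorial design over temperature, volume
# fraction, and pH).
def f_statistic(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Variation explained by the factor (differences among group means):
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Residual variation within each group:
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical crystal-quality scores grouped by growth temperature:
quality_by_temperature = [[1.0, 1.1, 0.9], [5.0, 5.1, 4.9]]
```

A large F indicates that the factor explains far more variation than the within-group scatter, which is the sense in which the paper identifies temperature as the most critical factor.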
Doppler Lidar Vertical Velocity Statistics Value-Added Product
Newsom, R. K.; Sivaraman, C.; Shippert, T. R.; Riihimaki, L. D.
2015-07-01
Accurate height-resolved measurements of higher-order statistical moments of vertical velocity fluctuations are crucial for improved understanding of turbulent mixing and diffusion, convective initiation, and cloud life cycles. The Atmospheric Radiation Measurement (ARM) Climate Research Facility operates coherent Doppler lidar systems at several sites around the globe. These instruments provide measurements of clear-air vertical velocity profiles in the lower troposphere with a nominal temporal resolution of 1 sec and height resolution of 30 m. The purpose of the Doppler lidar vertical velocity statistics (DLWSTATS) value-added product (VAP) is to produce height- and time-resolved estimates of vertical velocity variance, skewness, and kurtosis from these raw measurements. The VAP also produces estimates of cloud properties, including cloud-base height (CBH), cloud frequency, cloud-base vertical velocity, and cloud-base updraft fraction.
Aab, Alexander
2014-12-31
We report a study of the distributions of the depth of maximum, Xmax, of extensive air-shower profiles with energies above 10{sup 17.8} eV as observed with the fluorescence telescopes of the Pierre Auger Observatory. The analysis method for selecting a data sample with minimal sampling bias is described in detail, as well as the experimental cross-checks and systematic uncertainties. Furthermore, we discuss the detector acceptance and the resolution of the Xmax measurement and provide parametrizations thereof as a function of energy. Finally, the energy dependence of the mean and standard deviation of the Xmax distributions is compared to air-shower simulations for different nuclear primaries and interpreted in terms of the mean and variance of the logarithmic mass distribution at the top of the atmosphere.
Sukhovoj, A. M.; Khitrov, V. A.
2013-01-15
A modified model is developed for describing the distribution of random resonance widths for any nucleus. The model assumes the coexistence in a nucleus of one or several partial radiative and neutron amplitudes for the respective resonance widths, these amplitudes differing in their parameters. It is also assumed that the amplitudes can be described by Gaussian curves characterized by a nonzero mean value and a variance not equal to unity, and that their most probable values can be obtained most reliably from approximations of cumulative sums of the respective widths. An analysis of data for 157 sets of neutron widths for 0 {<=} l {<=} 3 and for 56 sets of total radiative widths has been performed to date. The basic result of this analysis is the following: both for neutron and for total radiative widths, the experimental set of resonance widths can be represented with a rather high probability as a superposition of k {<=} 4 types differing in mean amplitude parameters.
Thermal properties of Ni-substituted LaCoO{sub 3} perovskite
Thakur, Rasna; Thakur, Rajesh K.; Gaur, N. K.; Srivastava, Archana
2014-04-24
With the objective of exploring the unknown thermodynamic behavior of the LaCo{sub 1-x}Ni{sub x}O{sub 3} family, we present here an investigation of the temperature-dependent (10 K {<=} T {<=} 300 K) thermodynamic properties of LaCo{sub 1-x}Ni{sub x}O{sub 3} (x = 0.1, 0.3, 0.5). The specific heat of LaCoO{sub 3} with Ni doping at the B-site of the perovskite structure has been studied by means of a Modified Rigid Ion Model (MRIM). This replacement introduces large cation variance at the B-site, hence the specific heat increases appreciably. We report here, probably for the first time, the cohesive energy, Reststrahlen frequency, and Debye temperature ({theta}{sub D}) of LaCo{sub 1-x}Ni{sub x}O{sub 3} compounds.
Enhanced pinning in mixed rare earth-123 films
Driscoll, Judith L.; Foltyn, Stephen R.
2009-06-16
A superconductive article and method of forming such an article is disclosed, the article including a substrate and a layer of a rare earth barium cuprate film upon the substrate, the rare earth barium cuprate film including two or more rare earth metals capable of yielding a superconductive composition where ion size variance between the two or more rare earth metals is characterized as greater than zero and less than about 10x10{sup -4}, and the rare earth barium cuprate film including two or more rare earth metals is further characterized as having an enhanced critical current density in comparison to a standard YBa{sub 2}Cu{sub 3}O{sub y} composition under identical testing conditions.
Statistical process control applied to the liquid-fed ceramic melter process
Pulsipher, B.A.; Kuhn, W.L.
1987-09-01
In this report, an application of control charts to the apparent feed composition of a Liquid-Fed Ceramic Melter (LFCM) is demonstrated by using results from a simulation of the LFCM system. Usual applications of control charts require the assumption of uncorrelated observations over time. This assumption is violated in the LFCM system because of the heels left in tanks from previous batches. Methods for dealing with this problem have been developed to create control charts for individual batches sent to the feed preparation tank (FPT). These control charts are capable of detecting changes in the process average as well as changes in the process variation. All numbers reported in this document were derived from a simulated demonstration of a plausible LFCM system. In practice, site-specific data must be used as input to a simulation tailored to that site. These data directly affect all variance estimates used to develop control charts. 64 refs., 3 figs., 2 tabs.
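The individuals-chart logic that the LFCM report builds on can be sketched as follows. This is a generic textbook version with invented feed-composition numbers, not the report's batch-oriented charts, which are specifically constructed to cope with the autocorrelation introduced by tank heels; the naive chart below assumes uncorrelated batches.

```python
# Sketch of an individuals control chart: 3-sigma limits are estimated
# from the average moving range of an in-control reference run, then a
# new batch is tested against those limits. Data are illustrative only.
def control_limits(reference):
    center = sum(reference) / len(reference)
    moving_ranges = [abs(b - a) for a, b in zip(reference, reference[1:])]
    sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128  # d2 for n = 2
    return center - 3 * sigma_hat, center + 3 * sigma_hat

def out_of_control(reference, batch):
    """True if the new batch falls outside the reference limits."""
    lo, hi = control_limits(reference)
    return not (lo <= batch <= hi)

# Hypothetical in-control feed-composition history (arbitrary units):
reference = [50.1, 49.8, 50.3, 49.9, 50.2, 50.0]
```

A batch of 55.0 against this reference would flag, while 50.1 would not; detecting shifts in both the process average and the process variation, as the report describes, requires a companion range chart.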
Monitoring the progress of anytime problem-solving
Hansen, E.A.; Zilberstein, S.
1996-12-31
Anytime algorithms offer a tradeoff between solution quality and computation time that has proved useful in applying artificial intelligence techniques to time-critical problems. To exploit this tradeoff, a system must be able to determine the best time to stop deliberation and act on the currently available solution. When the rate of improvement of solution quality is uncertain, monitoring the progress of the algorithm can improve the utility of the system. This paper introduces a technique for run-time monitoring of anytime algorithms that is sensitive to the variance of the algorithm's performance, the time-dependent utility of a solution, the ability of the run-time monitor to estimate the quality of the currently available solution, and the cost of monitoring. The paper examines the conditions under which the technique is optimal and demonstrates its applicability.
The use of microdosimetric techniques in radiation protection measurements
Chen, J.; Hsu, H.H.; Casson, W.H.; Vasilik, D.G.
1997-01-01
A major objective of radiation protection is to determine the dose equivalent for routine radiation protection applications. As microdosimetry has developed over approximately three decades, its most important application has been in measuring radiation quality, especially in radiation fields of unknown or inadequately known energy spectra. In these radiation fields, determination of dose equivalent is not straightforward; however, the use of microdosimetric principles and techniques could solve this problem. In this paper, the authors discuss the measurement of lineal energy, a microscopic analog to linear energy transfer, and demonstrate the development and implementation of the variance-covariance method, a novel method in experimental microdosimetry. This method permits the determination of dose mean lineal energy, an essential parameter of radiation quality, in a radiation field of unknown spectrum, time-varying dose rate, and high dose rate. Real-time monitoring of changes in radiation quality can also be achieved by using microdosimetric techniques.
Deconstructing Solar Photovoltaic Pricing: The Role of Market Structure, Technology and Policy
Office of Energy Efficiency and Renewable Energy (EERE)
Solar photovoltaic (PV) system prices in the United States differ considerably, both across geographic locations and within a given location. Variances in price may arise due to state and federal policies, differences in market structure, and other factors that influence demand and costs. This paper examines the relative importance of such factors for solar PV system prices in the United States using a detailed dataset of roughly 100,000 recent residential and small commercial installations. The paper finds that PV system prices differ based on characteristics of the systems. More interestingly, evidence suggests that search costs and imperfect competition affect solar PV pricing. Installer density substantially lowers prices, while regions with relatively generous financial incentives for solar PV are associated with higher prices.
Veil, J.A.
1994-06-01
This paper examines the economic and environmental impact to the power industry of limiting thermal mixing zones to 1000 feet and eliminating the Clean Water Act {section}316(a) variance. Power companies were asked what they would do if these two conditions were imposed. Most affected plants would retrofit cooling towers and some would retrofit diffusers. Assuming that all affected plants would proportionally follow the same options as the surveyed plants, the estimated capital cost of retrofitting cooling towers or diffusers at all affected plants exceeds $20 billion. Since both cooling towers and diffusers exert an energy penalty on a plant's output, the power companies must generate additional power. The estimated cost of the additional power exceeds $10 billion over 20 years. Generation of the extra power would emit over 8 million tons per year of additional carbon dioxide. Operation of the new cooling towers would cause more than 1.5 million gallons per minute of additional evaporation.
An optical beam frequency reference with 10{sup -14} range frequency instability
McFerran, J. J.; Hartnett, J. G.; Luiten, A. N. [School of Physics, University of Western Australia, 35 Stirling Highway, Crawley, 6009 Western Australia (Australia)
2009-07-20
The authors report on a thermal beam optical frequency reference with a fractional frequency instability of 9.2x10{sup -14} at 1 s reducing to 2.0x10{sup -14} at 64 s before slowly rising. The {sup 1}S{sub 0}{r_reversible}{sup 3}P{sub 1} intercombination line in neutral {sup 40}Ca is used as a frequency discriminator. A diode laser at 423 nm probes the ground state population after a Ramsey-Borde sequence of 657 nm light-field interactions on the atoms. The measured fractional frequency instability is an order of magnitude improvement on previously reported thermal beam optical clocks. The photon shot-noise of the read-out produces a limiting square root {lambda}-variance of 7x10{sup -14}/{radical}({tau}).
Model for spectral and chromatographic data
Jarman, Kristin [Richland, WA; Willse, Alan [Richland, WA; Wahl, Karen [Richland, WA; Wahl, Jon [Richland, WA
2002-11-26
A method and apparatus using a spectral analysis technique are disclosed. In one form of the invention, probabilities are selected to characterize the presence (and in another form, also a quantification of a characteristic) of peaks in an indexed data set for samples that match a reference species, and other probabilities are selected for samples that do not match the reference species. An indexed data set is acquired for a sample, and a determination is made according to techniques exemplified herein as to whether the sample matches or does not match the reference species. When quantification of peak characteristics is undertaken, the model is appropriately expanded, and the analysis accounts for the characteristic model and data. Further techniques are provided to apply the methods and apparatuses to process control, cluster analysis, hypothesis testing, analysis of variance, and other procedures involving multiple comparisons of indexed data.
Identification of high shears and compressive discontinuities in the inner heliosphere
Greco, A.; Perri, S.
2014-04-01
Two techniques, the Partial Variance of Increments (PVI) and the Local Intermittency Measure (LIM), have been applied and compared using MESSENGER magnetic field data in the solar wind at a heliocentric distance of about 0.3 AU. The spatial properties of the turbulent field at different scales, spanning the whole inertial range of magnetic turbulence down toward the proton scales, have been studied. The LIM and PVI methodologies allow us to identify portions of an entire time series where magnetic energy is mostly accumulated, and regions of intermittent bursts in the magnetic field vector increments, respectively. A statistical analysis has revealed that at small time scales and for high levels of the threshold, the bursts present in the PVI and the LIM series correspond to regions of high shear stress and high magnetic field compressibility.
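The PVI statistic has a compact definition that can be sketched directly: the magnitude of the vector increment of the field at a given lag, normalized by its root-mean-square value. The sketch below is a generic illustration on a synthetic field, not the authors' MESSENGER pipeline.

```python
# Illustrative Partial Variance of Increments:
#   PVI(t, tau) = |dB(t, tau)| / sqrt(<|dB(·, tau)|^2>)
# where dB is the vector increment of the magnetic field at lag tau.
from math import sqrt

def pvi(bx, by, bz, lag):
    """PVI series for a 3-component field sampled on a uniform grid."""
    increments = [
        sqrt((bx[i + lag] - bx[i]) ** 2
             + (by[i + lag] - by[i]) ** 2
             + (bz[i + lag] - bz[i]) ** 2)
        for i in range(len(bx) - lag)
    ]
    rms = sqrt(sum(d * d for d in increments) / len(increments))
    return [d / rms for d in increments]
```

Intermittent structures are then flagged where the PVI series exceeds a chosen threshold (for a sharp jump embedded in a quiet field, the peak easily exceeds a threshold of 3), which mirrors how bursts are identified in the analysis above.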
Unconventional Fermi surface in an insulating state
Harrison, Neil; Tan, B. S.; Hsu, Y. -T.; Zeng, B.; Hatnean, M. Ciomaga; Zhu, Z.; Hartstein, M.; Kiourlappou, M.; Srivastava, A.; Johannes, M. D.; Murphy, T. P.; Park, J. -H.; Balicas, L.; Lonzarich, G. G.; Balakrishnan, G.; Sebastian, Suchitra E.
2015-07-17
Insulators occur in more than one guise; a recent finding was a class of topological insulators, which host a conducting surface juxtaposed with an insulating bulk. Here, we report the observation of an unusual insulating state with an electrically insulating bulk that simultaneously yields bulk quantum oscillations with characteristics of an unconventional Fermi liquid. We present quantum oscillation measurements of magnetic torque in high-purity single crystals of the Kondo insulator SmB_{6}, which reveal quantum oscillation frequencies characteristic of a large three-dimensional conduction electron Fermi surface similar to the metallic rare earth hexaborides such as PrB_{6} and LaB_{6}. As a result, the quantum oscillation amplitude strongly increases at low temperatures, appearing strikingly at variance with conventional metallic behavior.
Assessment of global warming effect on the level of extremes and intra-annual structure
Lobanov, V.A.
1997-12-31
In this research a new approach for the parametrization of intra-annual variations has been developed that is based on poly-linear decomposition and relationships with average climate conditions. This method makes it possible to divide the complex intra-annual variations of every year into two main parts: climate and synoptic processes. The climate process is represented by two coefficients (B1, B0) of a linear function between the particular year's data and the average intra-year conditions over the long-term period. Coefficient B1 is connected with the amplitude of the intra-annual function and characterizes extreme events, while coefficient B0 captures the level of climate conditions realized in the particular year. The synoptic process is determined as the remainders, or errors, of each year's linear function, or as their generalized parameter, such as the variance.
Critical dynamics of cluster algorithms in the dilute Ising model
Hennecke, M. (Universitaet Karlsruhe); Heyken, U.
1993-08-01
Autocorrelation times for thermodynamic quantities at T{sub c} are calculated from Monte Carlo simulations of the site-diluted simple cubic Ising model, using the Swendsen-Wang and Wolff cluster algorithms. The results show that for these algorithms the autocorrelation times decrease when reducing the concentration of magnetic sites from 100% down to 40%. This is of crucial importance when estimating static properties of the model, since the variances of these estimators increase with autocorrelation time. The dynamical critical exponents are calculated for both algorithms, observing pronounced finite-size effects in the energy autocorrelation data for the algorithm of Wolff. It is concluded that, when applied to the dilute Ising model, cluster algorithms become even more effective than local algorithms, for which increasing autocorrelation times are expected. 33 refs., 5 figs., 2 tabs.
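The link between autocorrelation time and estimator variance invoked above can be sketched numerically: the variance of a sample mean over correlated Monte Carlo data is inflated by roughly 2·tau_int relative to independent sampling, where tau_int is the integrated autocorrelation time. The cutoff-based estimator below is a minimal illustration, not the authors' analysis; the window cutoff is left as a caller choice.

```python
# Sketch: integrated autocorrelation time of a Monte Carlo time series.
# var(sample mean) ~ (2 * tau_int) * var(x) / N for correlated samples,
# so larger tau_int means noisier estimates for the same run length.
def autocorr(x, t):
    """Normalized autocorrelation of the series x at lag t."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    if var == 0:
        return 0.0
    c = sum((x[i] - m) * (x[i + t] - m) for i in range(n - t)) / (n - t)
    return c / var

def tau_int(x, cutoff):
    """Integrated autocorrelation time, summed up to a fixed lag cutoff."""
    return 0.5 + sum(autocorr(x, t) for t in range(1, cutoff))
```

For uncorrelated data tau_int is about 0.5 (no inflation); a series that stays in long blocks of the same value yields a larger tau_int, while an anticorrelated series yields a smaller one.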
Cyberspace Security Econometrics System (CSES) - U.S. Copyright TXu 1-901-039
Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T; Lantz, Margaret W; Hauser, Katie R
2014-01-01
Information security continues to evolve in response to disruptive changes with a persistent focus on information-centric controls and a healthy debate about balancing endpoint and network protection, with a goal of improved enterprise/business risk management. Economic uncertainty, intensively collaborative styles of work, virtualization, increased outsourcing and ongoing compliance pressures require careful consideration and adaptation. The Cyberspace Security Econometrics System (CSES) provides a measure (i.e., a quantitative indication) of reliability, performance, and/or safety of a system that accounts for the criticality of each requirement as a function of one or more stakeholders' interests in that requirement. For a given stakeholder, CSES accounts for the variance that may exist among the stakes one attaches to meeting each requirement. The basis, objectives, and capabilities of the CSES, including inputs/outputs as well as the structural and mathematical underpinnings, are contained in this copyright.
Pandya, Tara M; Johnson, Seth R; Evans, Thomas M; Davidson, Gregory G; Hamilton, Steven P; Godfrey, Andrew T
2016-01-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000{sup ®} problems. These benchmark and scaling studies show promising results.
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Kreinovich, Vladik; Oberkampf, William Louis; Ginzburg, Lev; Ferson, Scott; Hajagos, Janos
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
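One of the simplest results in this setting can be sketched directly: the sample mean of interval-valued data is itself an interval, bounded by the means of the lower and upper endpoints. (Other statistics treated in the report, such as the variance over intervals, are substantially harder to bound exactly.) The data below are invented for illustration.

```python
# Sketch of a descriptive statistic over interval-valued measurements:
# the mean of intervals [lo_i, hi_i] is the interval
# [mean(lo_i), mean(hi_i)], since the true mean must lie between them.
def interval_mean(intervals):
    """Return (lower, upper) bounds on the mean of interval data."""
    n = len(intervals)
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)

# Hypothetical measurements, each known only to an interval:
data = [(1.0, 2.0), (1.5, 1.5), (0.5, 3.0)]
```

For this data the mean is bounded by [1.0, 2.1667]; note how a single wide interval (here (0.5, 3.0)) widens the result, illustrating the precision-versus-sample-size tradeoff the report explores.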
COMMENT ON TRITIUM ABSORPTION-DESORPTION CHARACTERISTICS OF LANI4.25AL0.75
Walters, T
2007-04-10
The thermodynamic data for LaNi{sub 4.25}Al{sub 0.75} tritide, reported by Wang et al. (W.-d. Wang et al., J. Alloys Compd. (2006) doi:10.1016/j.jallcom.206.09.122), are at variance with our published data. The plateau pressures for the P-C-T isotherms at all temperatures are significantly lower than published data. As a result, the derived thermodynamic parameters, {Delta}H{sup o} and {Delta}S{sup o}, are questionable. Using the thermodynamic parameters derived from the data reported by Wang et al. will result in underestimating the expected pressures, and therefore not provide the desired performance for storing and processing tritium.
Dupuis, Paul
2014-03-14
This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.
On the local variation of the Hubble constant
Odderskov, Io; Hannestad, Steen [Department of Physics and Astronomy, University of Aarhus, DK-8000 Aarhus C (Denmark); Haugbølle, Troels, E-mail: isho07@phys.au.dk, E-mail: sth@phys.au.dk, E-mail: troels.haugboelle@snm.ku.dk [Centre for Star and Planet Formation, Natural History Museum of Denmark and Niels Bohr Institute University of Copenhagen, DK-1350 Copenhagen (Denmark)
2014-10-01
We have carefully studied how local measurements of the Hubble constant, H{sub 0}, can be influenced by a variety of different parameters related to survey depth, size, and fraction of the sky observed, as well as observer position in space. Our study is based on N-body simulations of structure in the standard {Lambda}CDM model and our conclusion is that the expected variance in measurements of H{sub 0} is far too small to explain the current discrepancy between the low value of H{sub 0} inferred from measurements of the cosmic microwave background (CMB) by the Planck collaboration and the value measured directly in the local universe by use of Type Ia supernovae. This conclusion is very robust and does not change with different assumptions about effective sky coverage and depth of the survey or observer position in space.
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
Application of Entry-Time Processes to Asset Management in Nuclear Power Plants
Nelson, Paul; Wang, Shuwen; Kee, Ernie J.
2006-07-01
The entry-time approach to dynamic reliability is based upon computational solution of the Chapman-Kolmogorov (generalized state-transition) equations underlying a certain class of marked point processes. Previous work has verified a particular finite-difference approach to computational solution of these equations. The objective of this work is to illustrate the potential application of the entry-time approach to risk-informed asset management (RIAM) decisions regarding maintenance or replacement of major systems within a plant. Results are presented in the form of plots, with replacement/maintenance period as a parameter, of expected annual revenue, along with annual variance and annual skewness as indicators of associated risks. Present results are for a hypothetical system, to illustrate the capability of the approach, but some considerations related to potential application of this approach to nuclear power plants are discussed. (authors)
Cyberspace Security Econometrics System (CSES)
Energy Science and Technology Software Center (OSTI)
2012-07-27
Information security continues to evolve in response to disruptive changes with a persistent focus on information-centric controls and a healthy debate about balancing endpoint and network protection, with a goal of improved enterprise/business risk management. Economic uncertainty, intensively collaborative styles of work, virtualization, increased outsourcing and ongoing compliance pressures require careful consideration and adaptation. The CSES provides a measure (i.e., a quantitative indication) of reliability, performance, and/or safety of a system that accounts for the criticality of each requirement as a function of one or more stakeholders' interests in that requirement. For a given stakeholder, CSES accounts for the variance that may exist among the stakes one attaches to meeting each requirement.
Method and apparatus for detection of chemical vapors
Mahurin, Shannon Mark; Dai, Sheng; Caja, Josip
2007-05-15
The present invention is a gas detector and method for using the gas detector for detecting and identifying volatile organic and/or volatile inorganic substances present in unknown vapors in an environment. The gas detector comprises a sensing means and a detecting means for detecting electrical capacitance variance of the sensing means and for further identifying the volatile organic and volatile inorganic substances. The sensing means comprises at least one sensing unit and a sensing material disposed within the sensing unit. The sensing material is an ionic liquid which is exposed to the environment and is capable of dissolving a quantity of said volatile substance upon exposure thereto. The sensing means constitutes an electrochemical capacitor and the detecting means is in electrical communication with the sensing means.
Dynamical mass generation in unquenched QED using the Dyson-Schwinger equations
Kızılersü, Ayse; Sizer, Tom; Pennington, Michael R.; Williams, Anthony G.; Williams, Richard
2015-03-13
We present a comprehensive numerical study of dynamical mass generation for unquenched QED in four dimensions, in the absence of four-fermion interactions, using the Dyson-Schwinger approach. We begin with an overview of previous investigations of criticality in the quenched approximation. To this we add an analysis using a new fermion-antifermion-boson interaction ansatz, the Kizilersu-Pennington (KP) vertex, developed for an unquenched treatment. After surveying criticality in previous unquenched studies, we investigate the performance of the KP vertex in dynamical mass generation using a renormalized fully unquenched system of equations. This we compare with the results for two hybrid vertices incorporating the Curtis-Pennington vertex in the fermion equation. We conclude that the KP vertex is as yet incomplete, and its relative gauge-variance is due to its lack of massive transverse components in its design.
System for monitoring non-coincident, nonstationary process signals
Gross, Kenneth C.; Wegerich, Stephan W.
2005-01-04
An improved system for monitoring non-coincident, non-stationary process signals. The mean, variance, and length of a reference signal are defined by an automated system, followed by the identification of the leading and falling edges of a monitored signal and the length of the monitored signal. The monitored signal is compared to the reference signal, and the monitored signal is resampled in accordance with the reference signal. The reference signal is then correlated with the resampled monitored signal such that the reference signal and the resampled monitored signal are coincident in time with each other. The resampled monitored signal is then compared to the reference signal to determine whether the resampled monitored signal is within a set of predesignated operating conditions.
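The resampling step described in this record can be illustrated with a simple linear-interpolation sketch. This is a hedged approximation of the idea, not the patented system's algorithm; the helper names and the point-by-point deviation check are invented for illustration.

```python
# Sketch: stretch a monitored signal onto the reference signal's length
# by linear interpolation, so the two can be compared sample by sample.
def resample(signal, new_len):
    """Linearly interpolate `signal` onto `new_len` evenly spaced points."""
    out = []
    for i in range(new_len):
        pos = i * (len(signal) - 1) / (new_len - 1)
        j = int(pos)
        frac = pos - j
        nxt = signal[min(j + 1, len(signal) - 1)]
        out.append(signal[j] * (1 - frac) + nxt * frac)
    return out

def max_deviation(reference, monitored):
    """Largest pointwise gap after aligning the monitored signal's length."""
    r = resample(monitored, len(reference))
    return max(abs(a - b) for a, b in zip(reference, r))
```

After this alignment, `max_deviation` (or a correlation score) can be checked against predesignated operating limits, in the spirit of the comparison step described above.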
Samedov, V. V.; Tulinov, B. M.
2011-07-01
A superconducting tunnel junction (STJ) detector consists of two layers of superconducting material separated by a thin insulating barrier. An incident particle produces excess nonequilibrium quasiparticles in the superconductor. Each quasiparticle in a superconductor should be considered a quantum superposition of electron-like and hole-like excitations. This dual nature of the quasiparticle leads to the effect of multi-tunneling: the quasiparticle tunnels back and forth through the insulating barrier. After tunneling from the biased electrode, the quasiparticle loses its energy via phonon emission. Eventually, the energy equal to the difference in quasiparticle energy between the two electrodes is deposited in the signal electrode. Because of the multi-tunneling process, one quasiparticle can deposit energy more than once. In this work, the theory of branching cascade processes was applied to the energy deposition caused by quasiparticle multi-tunneling. Formulae for the mean value and variance of the energy transferred into heat by one quasiparticle were derived. (authors)
“Lidar Investigations of Aerosol, Cloud, and Boundary Layer Properties Over the ARM ACRF Sites”
Ferrare, Richard; Turner, David
2015-01-13
Project goals: Characterize the aerosol and ice vertical distributions over the ARM NSA site, and in particular discriminate between elevated aerosol layers and ice clouds in optically thin scattering layers; Characterize the water vapor and aerosol vertical distributions over the ARM Darwin site, how these distributions vary seasonally, and quantify the amount of water vapor and aerosol above the boundary layer; Use the high temporal resolution Raman lidar data to examine how aerosol properties vary near clouds; Use the high temporal resolution Raman lidar and Atmospheric Emitted Radiance Interferometer (AERI) data to quantify entrainment in optically thin continental cumulus clouds; and Use the high temporal resolution Raman lidar data to continue to characterize the turbulence within the convective boundary layer and how the turbulence statistics (e.g., variance, skewness) are correlated with larger scale variables predicted by models.
Pilania, G.; Gubernatis, J. E.; Lookman, T.
2015-12-03
The role of dynamical (or Born effective) charges in classification of octet AB-type binary compounds between four-fold (zincblende/wurtzite crystal structures) and six-fold (rocksalt crystal structure) coordinated systems is discussed. We show that the difference in the dynamical charges of the fourfold and sixfold coordinated structures, in combination with Harrison’s polarity, serves as an excellent feature to classify the coordination of 82 sp–bonded binary octet compounds. We use a support vector machine classifier to estimate the average classification accuracy and the associated variance in our model where a decision boundary is learned in a supervised manner. Lastly, we compare the out-of-sample classification accuracy achieved by our feature pair with those reported previously.
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
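The rank-transformation step listed above can be illustrated with a small Spearman rank-correlation sketch (a generic textbook construction, not code from the survey; the model and noise level are invented for illustration):

```python
import random

def rank(values):
    """Ranks (1-based, ties broken by input order) for a rank transformation."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for i, idx in enumerate(order):
        r[idx] = i + 1
    return r

def spearman(x, y):
    """Rank correlation: Pearson correlation of the rank-transformed data."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx = my = (n + 1) / 2
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

rng = random.Random(0)
xs = [rng.uniform(0, 1) for _ in range(500)]        # sampled uncertain input
ys = [x ** 3 + 0.01 * rng.gauss(0, 1) for x in xs]  # monotone nonlinear model
print(round(spearman(xs, ys), 2))
```

A plain linear correlation understates the strength of a monotone nonlinear input-output relation; the rank transformation recovers it, which is why it appears among the sensitivity procedures reviewed.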
Curvature of the freeze-out line in heavy ion collisions
Bazavov, A.; Ding, H. -T.; Hegde, P.; Kaczmarek, O.; Karsch, F.; Laermann, E.; Mukherjee, Swagato; Ohno, H.; Petreczky, P.; Schmidt, C.; et al
2016-01-28
Here, we calculate the mean and variance of net-baryon number and net-electric charge distributions from quantum chromodynamics (QCD) using a next-to-leading order Taylor expansion in terms of temperature and chemical potentials. Comparing these expansions with experimental data from STAR and PHENIX, we determine the freeze-out temperature in the limit of vanishing baryon chemical potential and, for the first time, constrain the curvature of the freeze-out line through a direct comparison between experimental data on net-charge fluctuations and a QCD calculation. We obtain a bound on the curvature coefficient, κ^f_2 < 0.011, that is compatible with lattice QCD results on the curvature of the QCD transition line.
Seasonal cycle dependence of temperature fluctuations in the atmosphere. Master's thesis
Tobin, B.F.
1994-08-01
The correlation statistics of meteorological fields have been of interest in weather forecasting for many years and are also of interest in climate studies. A better understanding of the seasonal variation of correlation statistics can be used to determine how the seasonal cycle of temperature fluctuations should be simulated in noise-forced energy balance models. It is shown that the length scale does have a seasonal dependence and will have to be handled through the seasonal modulation of other coefficients in noise-forced energy balance models. The temperature field variance and spatial correlation fluctuations exhibit seasonality with fluctuation amplitudes larger in the winter hemisphere and over land masses. Another factor contributing to seasonal differences is the larger solar heating gradient in the winter.
Brandt, Charles A.; Becker, James M.; Porta, Augusto C.
2001-12-01
Following a large blowout of crude oil in northern Italy in 1994, the distribution of polyaromatic hydrocarbons (PAHs) was examined over time and space in soils, uncultivated wild vegetation, insects, mice, and frogs in the area. Within 2 y of the blowout, PAH concentrations declined to background levels over much of the area where initial concentrations were within an order of magnitude above background, but had not declined to background in areas where starting concentrations exceeded background by two orders of magnitude. Octanol-water partitioning and extent of alkylation explained much of the variance in uptake of PAHs by plants and animals. Lower Kow PAHs and higher-alkylated PAHs had higher soil-to-biota accumulation factors (BSAFs) than did high-Kow and unalkylated forms. BSAFs for higher Kow PAHs were very low for plants, but much higher for animals, with frogs accumulating more of these compounds than other species.
Studies of Cosmic Ray Composition and Air Shower Structure with the Pierre Auger Observatory
Abraham, J.; Abreu, P.; Aglietta, M.; Aguirre, C.; Ahn, E.J.; Allard, D.; Allekotte, I.; Allen, J.; Alvarez-Muniz, J.; Ambrosio, M.; Anchordoqui, L.
2009-06-01
These are presentations for the 31st International Cosmic Ray Conference, held in Lodz, Poland in July 2009. The contribution consists of the following presentations: (1) Measurement of the average depth of shower maximum and its fluctuations with the Pierre Auger Observatory; (2) Study of the nuclear mass composition of UHECR with the surface detectors of the Pierre Auger Observatory; (3) Comparison of data from the Pierre Auger Observatory with predictions from air shower simulations: testing models of hadronic interactions; (4) A Monte Carlo exploration of methods to determine the UHECR composition with the Pierre Auger Observatory; (5) The delay of the start-time measured with the Pierre Auger Observatory for inclined showers and a comparison of its variance with models; (6) UHE neutrino signatures in the surface detector of the Pierre Auger Observatory; and (7) The electromagnetic component of inclined air showers at the Pierre Auger Observatory.
Optimized nested Markov chain Monte Carlo sampling: theory
Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D
2009-01-01
Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
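The nested scheme above can be sketched in one dimension under simplifying assumptions (canonical rather than isothermal-isobaric sampling, and invented scalar potentials standing in for the reference and "full" energies); the key point is that the full energy is evaluated only at the endpoints of each inner chain:

```python
import math
import random

def nested_mcmc(e_ref, e_full, beta, n_outer, n_inner, step, rng):
    """Sketch of nested Metropolis sampling: an inner chain explores the
    cheap reference potential; the composite move is accepted against the
    endpoint difference (e_full - e_ref), a modified Metropolis criterion."""
    x = 0.0
    samples = []
    for _ in range(n_outer):
        y = x
        for _ in range(n_inner):            # inner walk on the reference only
            y_new = y + rng.uniform(-step, step)
            if rng.random() < math.exp(-beta * (e_ref(y_new) - e_ref(y))):
                y = y_new
        d_new = e_full(y) - e_ref(y)        # full energy at endpoints only
        d_old = e_full(x) - e_ref(x)
        if rng.random() < math.exp(-beta * (d_new - d_old)):
            x = y
        samples.append(x)
    return samples

rng = random.Random(2)
ref = lambda v: 0.5 * v * v                   # cheap harmonic reference
full = lambda v: 0.5 * v * v + 0.1 * v ** 4   # "full" potential, quartic term
s = nested_mcmc(ref, full, beta=1.0, n_outer=4000, n_inner=10, step=1.0, rng=rng)
print(round(sum(s) / len(s), 1))
```

Because each accepted composite move spans many inner steps, consecutive samples of the full potential are far less correlated than in a plain Metropolis walk, which is the decorrelation benefit the abstract describes.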
Characterizing cemented TRU waste for RCRA hazardous constituents
Yeamans, D.R.; Betts, S.E.; Bodenstein, S.A. [and others]
1996-06-01
Los Alamos National Laboratory (LANL) has characterized drums of solidified transuranic (TRU) waste from four major waste streams. The data will help the State of New Mexico determine whether or not to issue a no-migration variance for the Waste Isolation Pilot Plant (WIPP) so that WIPP can receive and dispose of waste. The need to characterize TRU waste stored at LANL is driven by two additional factors: (1) the LANL RCRA Waste Analysis Plan for EPA-compliant safe storage of hazardous waste; and (2) the WIPP Waste Acceptance Criteria (WAC). The LANL characterization program includes headspace gas analysis, radioassay, and radiography for all drums, and solids sampling on a random selection of drums from each waste stream. Data are presented showing that the only identified non-metal RCRA hazardous component of the waste is methanol.
Intrinsic fluctuations of dust grain charge in multi-component plasmas
Shotorban, B.
2014-03-15
A master equation is formulated to model the states of the grain charge in a general multi-component plasma, where there are electrons and various kinds of positive or negative ions that are singly or multiply charged. A Fokker-Planck equation is developed from the master equation through the system-size expansion method. The Fokker-Planck equation has a Gaussian solution with a mean and variance governed by two initial-value differential equations involving the rates of the attachment of ions and electrons to the dust grain. Also, a Langevin equation and a discrete stochastic method are developed to model the time variation of the grain charge. Grain charging in a plasma containing electrons, protons, and alpha particles with Maxwellian distributions is considered as an example problem. The Gaussian solution is in very good agreement with the master equation solution numerically obtained for this problem.
Kalman filter data assimilation: Targeting observations and parameter estimation
Bellsky, Thomas; Kostelich, Eric J.; Mahalov, Alex
2014-06-15
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
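The idea of targeting the observation where the forecast variance is largest can be shown with a scalar Kalman update (a toy two-component example with invented numbers, not the LETKF of the paper):

```python
def kalman_update(mean, var, obs, obs_var):
    """Scalar Kalman update: combine a forecast (mean, var) with an
    observation of variance obs_var."""
    gain = var / (var + obs_var)
    return mean + gain * (obs - mean), (1 - gain) * var

# two independent state components with different forecast uncertainty
means = [0.0, 0.0]
vars_ = [4.0, 1.0]
truth = [1.0, -0.5]
obs_var = 0.5

# targeting: observe the component with the largest forecast variance
target = max(range(2), key=lambda i: vars_[i])
means[target], vars_[target] = kalman_update(
    means[target], vars_[target], truth[target], obs_var)
print(target, round(vars_[target], 3))  # prints: 0 0.444
```

Observing the high-variance component shrinks its uncertainty by a factor of nine here, whereas the same observation spent on the already-confident component would buy far less, which is the intuition behind variance-based targeting.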
Mixing in thermally stratified nonlinear spin-up with uniform boundary fluxes
Baghdasarian, Meline; Pacheco-Vega, Arturo; Pacheco, J. Rafael; Verzicco, Roberto
2014-09-15
Studies of stratified spin-up experiments in enclosed cylinders have reported the presence of small pockets of well-mixed fluid, but quantitative measurements of the mixedness of the fluid have been lacking. Previous numerical simulations have not addressed these measurements. Here we present numerical simulations that explain how the combined effect of spin-up and thermal boundary conditions enhances or hinders mixing of a fluid in a cylinder. The energy of the system is characterized by splitting the potential energy into diabatic and adiabatic components, and measurements of mixing efficiency are based on both the ratio of the dissipation of available potential energy to the forcing and the variance of temperature. Numerical simulations of the Navier-Stokes equations for the problem with different sets of thermal boundary conditions at the horizontal walls helped shed light on the physical mechanisms of mixing, for which a clear explanation was absent.
Foltz, Gregory R.; Balaguru, Karthik; Leung, Lai-Yung R.
2015-02-28
The impact of tropical cyclones on surface chlorophyll concentration is assessed in the western subtropical North Atlantic Ocean during 1998-2011. Previous studies in this area focused on individual cyclones and gave mixed results regarding the importance of tropical cyclone-induced mixing for changes in surface chlorophyll. Using a more integrated and comprehensive approach that includes quantification of cyclone-induced changes in mixed layer depth, here it is shown that accumulated cyclone energy explains 22% of the interannual variability in seasonally-averaged (June-November) chlorophyll concentration in the western subtropical North Atlantic, after removing the influence of the North Atlantic Oscillation (NAO). The variance explained by tropical cyclones is thus about 70% of that explained by the NAO, which has well-known impacts in this region. It is therefore likely that tropical cyclones contribute significantly to interannual variations of primary productivity in the western subtropical North Atlantic during the hurricane season.
The effects of plasma inhomogeneity on the nanoparticle coating in a low pressure plasma reactor
Pourali, N.; Foroutan, G.
2015-10-15
A self-consistent model is used to study the surface coating of a collection of charged nanoparticles trapped in the sheath region of a low pressure plasma reactor. The model consists of multi-fluid plasma sheath module, including nanoparticle dynamics, as well as the surface deposition and particle heating modules. The simulation results show that the mean particle radius increases with time and the nanoparticle size distribution is broadened. The mean radius is a linear function of time, while the variance exhibits a quadratic dependence. The broadening in size distribution is attributed to the spatial inhomogeneity of the deposition rate which in turn depends on the plasma inhomogeneity. The spatial inhomogeneity of the ions has strong impact on the broadening of the size distribution, as the ions contribute both in the nanoparticle charging and in direct film deposition. The distribution width also increases with increasing of the pressure, gas temperature, and the ambient temperature gradient.
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well-known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley-Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
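A two-level MLMC estimator can be sketched generically (the "fine" and "coarse" models below are invented scalar surrogates, not the streamline solver; the point is that most samples go to the cheap level and only a few to the expensive correction):

```python
import random

def mc_estimate(sampler, n, rng):
    """Plain Monte Carlo mean of n samples."""
    return sum(sampler(rng) for _ in range(n)) / n

def mlmc_two_level(coarse, fine, n_coarse, n_fine, rng):
    """Two-level MLMC sketch: many cheap coarse samples plus a few
    correction samples of (fine - coarse) driven by the same input."""
    e_coarse = mc_estimate(lambda r: coarse(r.random()), n_coarse, rng)
    def correction(r):
        u = r.random()          # the same random input drives both levels
        return fine(u) - coarse(u)
    return e_coarse + mc_estimate(correction, n_fine, rng)

rng = random.Random(4)
fine = lambda u: u * u                        # hypothetical "fine" model
coarse = lambda u: u * u + 0.05 * (u - 0.5)   # cheap, slightly biased surrogate
est = mlmc_two_level(coarse, fine, n_coarse=20000, n_fine=500, rng=rng)
print(round(est, 2))
```

Because the correction `fine - coarse` has a much smaller variance than `fine` itself, a small `n_fine` suffices, which is the source of the MLMC speed-up over plain MC at equal accuracy.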
Features of MCNP6 Relevant to Medical Radiation Physics
Hughes, H. Grady III; Goorley, John T.
2012-08-29
MCNP (Monte Carlo N-Particle) is a general-purpose Monte Carlo code for simulating the transport of neutrons, photons, electrons, positrons, and more recently other fundamental particles and heavy ions. Over many years MCNP has found a wide range of applications in many different fields, including medical radiation physics. In this presentation we will describe and illustrate a number of significant recently-developed features in the current version of the code, MCNP6, having particular utility for medical physics. Among these are major extensions of the ability to simulate large, complex geometries, improvement in memory requirements and speed for large lattices, introduction of mesh-based isotopic reaction tallies, advances in radiography simulation, expanded variance-reduction capabilities, especially for pulse-height tallies, and a large number of enhancements in photon/electron transport.
Simulating the amorphization of α-quartz under pressure
Binggeli, N. (PHB-Ecublens, 1015 Lausanne); Chelikowsky, J.R.; Wentzcovitch, R.M.
1994-04-01
Extensive molecular-dynamics simulations have been performed within a classical force-field model for the pressure-induced amorphization of quartz. In agreement with earlier molecular-dynamics studies, we find that a phase transition occurs within the experimental pressure range of the amorphization transformation. However, at variance with previous interpretations, we find that the resulting phase is not amorphous. The correlation functions of the equilibrated structure can be shown to be consistent with those of a crystalline phase. Two transformations to ordered structures occur sequentially during the simulations. The first transformation is likely to be related to the recently discovered transition of quartz to an intermediate crystalline phase before its amorphization. The second transformation, instead, yields a compact octahedrally coordinated Si sublattice. The latter structure may be an artifact of the classical force field.
Field testing of a high-temperature aquifer thermal energy storage system
Sterling, R.L.; Hoyer, M.C.
1989-03-01
The University of Minnesota Aquifer Thermal Energy Storage (ATES) System has been operated as a field test facility for the past six years. Four short-term and two long-term cycles have been completed to date, providing a greatly increased understanding of the efficiency and geochemical effects of high-temperature aquifer thermal energy storage. A third long-term cycle is currently being planned to operate the ATES system in conjunction with a real heating load and to further study the geochemical impact on the aquifer from heated water storage cycles. The most critical activities in the preparation for the next cycle have proved to be the applications for the various permits and variances necessary to conduct the third cycle and the matching of the characteristics of the ATES system during heat recovery with a suitable adjacent building thermal load.
Dynamical mass generation in unquenched QED using the Dyson-Schwinger equations
Kızılersü, Ayse; Sizer, Tom; Pennington, Michael R.; Williams, Anthony G.; Williams, Richard
2015-03-13
We present a comprehensive numerical study of dynamical mass generation for unquenched QED in four dimensions, in the absence of four-fermion interactions, using the Dyson-Schwinger approach. We begin with an overview of previous investigations of criticality in the quenched approximation. To this we add an analysis using a new fermion-antifermion-boson interaction ansatz, the Kizilersu-Pennington (KP) vertex, developed for an unquenched treatment. After surveying criticality in previous unquenched studies, we investigate the performance of the KP vertex in dynamical mass generation using a renormalized fully unquenched system of equations. This we compare with the results for two hybrid vertices incorporating the Curtis-Pennington vertex in the fermion equation. We conclude that the KP vertex is as yet incomplete, and its relative gauge-variance is due to its lack of massive transverse components in its design.
McDowell, Allen K.; Ellefson, Mark D.; McDonald, Kent M.
2015-06-25
The treatment, shipping, and disposal of a highly radioactive radium/barium waste stream have presented a complex set of challenges requiring several years of effort. The project illustrates the difficulty and high cost of managing even small quantities of highly radioactive Resource Conservation and Recovery Act (RCRA)-regulated waste. Pacific Northwest National Laboratory (PNNL) research activities produced a Type B quantity of radium chloride low-level mixed waste (LLMW) in a number of small vials in a facility hot cell. The resulting waste management project involved a mock-up RCRA stabilization treatment, a failed in-cell treatment, a second, alternative RCRA treatment approach, coordinated regulatory variances and authorizations, alternative transportation authorizations, additional disposal facility approvals, and a final radiological stabilization process.
Validated Models for Radiation Response and Signal Generation in Scintillators: Final Report
Kerisit, Sebastien N.; Gao, Fei; Xie, YuLong; Campbell, Luke W.; Van Ginhoven, Renee M.; Wang, Zhiguo; Prange, Micah P.; Wu, Dangxin
2014-12-01
This Final Report presents work carried out at Pacific Northwest National Laboratory (PNNL) under the project entitled “Validated Models for Radiation Response and Signal Generation in Scintillators” (Project number: PL10-Scin-theor-PD2Jf) and led by Drs. Fei Gao and Sebastien N. Kerisit. This project was divided into four tasks: (1) electronic response functions (ab initio data model); (2) electron-hole yield, variance, and spatial distribution; (3) ab initio calculations of information carrier properties; and (4) transport of electron-hole pairs and scintillation efficiency. Detailed information on the results obtained in each of the four tasks is provided in this Final Report. Furthermore, published peer-reviewed articles based on the work carried out under this project are included in the Appendix. This work was supported by the National Nuclear Security Administration, Office of Nuclear Nonproliferation Research and Development (DNN R&D/NA-22), of the U.S. Department of Energy (DOE).
Foo Kune, Denis; Mahadevan, Karthikeyan
2011-01-25
A recursive verification protocol to reduce the time variance due to delays in the network by putting the subject node at most one hop from the verifier node provides an efficient manner to test wireless sensor nodes. Since the software signatures are time based, recursive testing will give a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, who in turn checks its neighbor, continuing this process until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Using techniques well known in the art, testing a node twice, or not at all, can be avoided.
Sub-Poissonian statistics in order-to-chaos transition
Kryuchkyan, Gagik Yu. [Yerevan State University, Manookyan 1, Yerevan 375049, (Armenia); Institute for Physical Research, National Academy of Sciences, Ashtarak-2 378410, (Armenia); Manvelyan, Suren B. [Institute for Physical Research, National Academy of Sciences, Ashtarak-2 378410, (Armenia)
2003-07-01
We study the phenomena at the overlap of quantum chaos and nonclassical statistics for a time-dependent model of a nonlinear oscillator. It is shown in the framework of the Mandel Q parameter and the Wigner function that the statistics of oscillatory excitation numbers is drastically changed in the order-to-chaos transition. An essential improvement of sub-Poissonian statistics in comparison with the analogous one for the standard model of the driven anharmonic oscillator is observed for the regular operational regime. It is shown that in the chaotic regime the system exhibits ranges of sub-Poissonian and super-Poissonian statistics which alternate with one another depending on the time interval. An unusual dependence of the variance of the oscillatory number on the external noise level is observed for the chaotic dynamics. The scaling invariance of the quantum statistics is demonstrated and its relation to dissipation and decoherence is studied.
Schilling, Oleg; Mueschke, Nicholas J.
2010-10-18
Data from a 1152 x 760 x 1280 direct numerical simulation (DNS) of a transitional Rayleigh-Taylor mixing layer modeled after a small Atwood number water channel experiment is used to comprehensively investigate the structure of mean and turbulent transport and mixing. The simulation had physical parameters and initial conditions approximating those in the experiment. The budgets of the mean vertical momentum, heavy-fluid mass fraction, turbulent kinetic energy, turbulent kinetic energy dissipation rate, heavy-fluid mass fraction variance, and heavy-fluid mass fraction variance dissipation rate equations are constructed using Reynolds averaging applied to the DNS data. The relative importance of mean and turbulent production, turbulent dissipation and destruction, and turbulent transport is investigated as a function of Reynolds number and across the mixing layer to provide insight into the flow dynamics not presently available from experiments. The analysis of the budgets supports the assumption for small Atwood number, Rayleigh-Taylor driven flows that the principal transport mechanisms are buoyancy production, turbulent production, turbulent dissipation, and turbulent diffusion (shear and mean field production are negligible). As the Reynolds number increases, the turbulent production in the turbulent kinetic energy dissipation rate equation becomes the dominant production term, while the buoyancy production plateaus. Distinctions between momentum and scalar transport are also noted: the turbulent kinetic energy and its dissipation rate both grow in time and are peaked near the center plane of the mixing layer, while the heavy-fluid mass fraction variance and its dissipation rate initially grow and then begin to decrease as mixing progresses and reduces density fluctuations. All terms in the transport equations generally grow or decay, with no qualitative change in their profile, except for the pressure flux contribution to the total turbulent kinetic energy.
Challenging the Mean Time to Failure: Measuring Dependability as a Mean Failure Cost
Sheldon, Frederick T; Mili, Ali
2009-01-01
many fronts: it ignores the variance in stakes among stakeholders; it fails to recognize the structure of complex specifications as the aggregate of overlapping requirements; it fails to recognize that different components of the specification carry different stakes, even for the same stakeholder; and it fails to recognize that V&V actions have different impacts with respect to the different components of the specification. Similar metrics of security, such as MTTD (Mean Time to Detection) and MTTE (Mean Time to Exploitation), suffer from the same shortcomings. In this paper we advocate a measure of dependability that acknowledges the aggregate structure of complex system specifications, and takes into account variations by stakeholder, by specification component, and by V&V impact.
Ab initio molecular dynamics simulation of liquid water by quantum Monte Carlo
Zen, Andrea; Luo, Ye; Mazzola, Guglielmo; Sorella, Sandro; Guidoni, Leonardo
2015-04-14
Although liquid water is ubiquitous in chemical reactions at the roots of life and climate on the earth, the prediction of its properties by high-level ab initio molecular dynamics simulations still represents a formidable task for quantum chemistry. In this article, we present a room temperature simulation of liquid water based on the potential energy surface obtained by a many-body wave function through quantum Monte Carlo (QMC) methods. The simulated properties are in good agreement with recent neutron scattering and X-ray experiments, particularly concerning the position of the oxygen-oxygen peak in the radial distribution function, at variance with previous density functional theory attempts. Given the excellent performance of QMC on large scale supercomputers, this work opens new perspectives for predictive and reliable ab initio simulations of complex chemical systems.
Statistics for nuclear engineers and scientists. Part 1. Basic statistical inference
Beggs, W.J.
1981-02-01
This report is intended for the use of engineers and scientists working in the nuclear industry, especially at the Bettis Atomic Power Laboratory. It serves as the basis for several Bettis in-house statistics courses. The objectives of the report are to introduce the reader to the language and concepts of statistics and to provide a basic set of techniques to apply to problems of the collection and analysis of data. Part 1 covers subjects of basic inference. The subjects include: descriptive statistics; probability; simple inference for normally distributed populations, and for non-normal populations as well; comparison of two populations; the analysis of variance; quality control procedures; and linear regression analysis.
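The analysis of variance covered in Part 1 reduces, in the one-way case, to comparing between-group and within-group mean squares (a standard textbook construction, not code from the report; the measurement data are invented):

```python
def one_way_anova(groups):
    """One-way ANOVA sketch: F = between-group mean square / within-group
    mean square; a large F suggests the group means differ."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical measurements from two production lots
a = [10.1, 9.8, 10.3, 10.0]
b = [11.0, 11.2, 10.9, 11.1]
f = one_way_anova([a, b])
print(round(f, 1))  # prints: 66.7
```

Here the between-lot variation dwarfs the within-lot scatter, so the F statistic is large; comparing it to an F distribution's critical value is the inference step the report's course material develops.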
MEASUREMENT OF THE SHOCK-HEATED MELT CURVE OF LEAD USING PYROMETRY AND REFLECTOMETRY
D. Partouche-Sebban and J. L. Pelissier, Commissariat à l'Energie Atomique; F. G. Abeyta, Los Alamos National Laboratory; W. W. Anderson, Los Alamos National Laboratory; M. E. Byers, Los Alamos National Laboratory; D. Dennis-Koller, Los Alamos National Laboratory; J. S. Esparza, Los Alamos National Laboratory; S. D. Borror, Bechtel Nevada; C. A. Kruschwitz, Bechtel Nevada
2004-01-01
Data on the high-pressure melting temperatures of metals is of great interest in several fields of physics including geophysics. Measuring melt curves is difficult but can be performed in static experiments (with laser-heated diamond-anvil cells for instance) or dynamically (i.e., using shock experiments). However, at the present time, both experimental and theoretical results for the melt curve of lead are at too much variance to be considered definitive. As a result, we decided to perform a series of shock experiments designed to provide a measurement of the melt curve of lead up to about 50 GPa in pressure. At the same time, we developed and fielded a new reflectivity diagnostic, using it to make measurements on tin. The results show that the melt curve of lead is somewhat higher than the one previously obtained with static compression and heating techniques.
De Donato, Cinzia; Sanchez, Federico; Santander, Marcos (Natl. Tech. U., San Rafael); Camin, Daniel; Garcia, Beatriz; Grassi, Valerio (Milan U.; INFN, Milan)
2005-05-01
To accurately reconstruct a shower axis from the Fluorescence Detector data it is essential to establish with high precision the absolute pointing of the telescopes. To do that, they calculate the absolute pointing of a telescope using sky background data acquired during regular data taking periods. The method is based on the knowledge of bright stars' coordinates, which provide a reliable and stable coordinate system. It can be used to check the absolute telescope pointing and its long-term stability during the whole life of the project, estimated at 20 years. They have analyzed background data taken from January to October 2004 to determine the absolute pointing of the 12 telescopes installed at both Los Leones and Coihueco. The method is based on the determination of the mean time of the variance signal left by a star traversing a PMT's photocathode, which is compared with the mean time obtained by simulating the track of that star on the same pixel.
SU-F-18C-15: Model-Based Multiscale Noise Reduction On Low Dose Cone Beam Projection
Yao, W; Farr, J
2014-06-15
Purpose: To improve image quality of low dose cone beam CT for patient positioning in radiation therapy. Methods: In low dose cone beam CT (CBCT) imaging systems, a Poisson process governs the randomness of photon fluence at the x-ray source and the detector because of the independent binomial process of photon absorption in the medium. On a CBCT projection, the variance of the fluence consists of the variance of the noiseless imaged structure and that of the Poisson noise, which is proportional to the mean (noiseless) fluence at the detector. This calls for multiscale filters that smooth noise while keeping the structure information of the imaged object. We used a mathematical model of the Poisson process to design multiscale filters and to establish the balance between noise correction and structure blurring. The algorithm was checked with low dose kilovoltage CBCT projections acquired from a Varian OBI system. Results: Investigation of low dose CBCT scans of a Catphan phantom and of patients showed that our model-based multiscale technique could efficiently reduce noise while keeping the fine structure of the imaged object. After the image processing, the number of visible line pairs in the Catphan phantom scanned with 4 ms pulse time was similar to that scanned with 32 ms, and soft tissue structure in simulated 4 ms patient head-and-neck images was also comparable with that in scanned 20 ms images. Compared with a fixed-scale technique, the image quality from the multiscale one was improved. Conclusion: Use of projection-specific multiscale filters can reach a better balance between noise reduction and structure information loss. The image quality of low dose CBCT can be improved by using multiscale filters.
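The core difficulty the abstract describes, noise variance proportional to the signal mean, can be illustrated with a variance-stabilizing step before smoothing. The Anscombe transform used below is a standard textbook choice, not the authors' model-based multiscale filters, and the signal parameters are invented for the sketch:

```python
import numpy as np

def anscombe(x):
    # Variance-stabilizing transform: Poisson counts -> approx. unit-variance Gaussian
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0

def smooth(x, width):
    # Boxcar smoothing with reflected edges so the borders are not biased
    pad = width // 2
    xp = np.pad(x, pad, mode="reflect")
    return np.convolve(xp, np.ones(width) / width, mode="valid")

rng = np.random.default_rng(0)
truth = 200.0 + 50.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 512))  # noiseless fluence
noisy = rng.poisson(truth).astype(float)

# Stabilize, smooth at a single scale, transform back
denoised = inverse_anscombe(smooth(anscombe(noisy), 9))

rms_before = np.sqrt(np.mean((noisy - truth) ** 2))
rms_after = np.sqrt(np.mean((denoised - truth) ** 2))
```

After stabilization the noise level no longer depends on the local mean, so a single smoothing scale behaves uniformly across bright and dark regions; the multiscale idea in the abstract goes further by adapting the filter scale to the local structure.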
Quality by design in the nuclear weapons complex
Ikle, D.N.
1988-04-01
Modern statistical quality control has evolved beyond the point at which control charts and sampling plans are sufficient to maintain a competitive position. The work of Genichi Taguchi in the early 1970s has inspired a renewed interest in the application of statistical methods of experimental design at the beginning of the manufacturing cycle. While there has been considerable debate over the merits of some of Taguchi's statistical methods, there is increasing agreement that his emphasis on cost and variance reduction is sound. The key point is that manufacturing processes can be optimized in development before they get to production by identifying a region in the process parameter space in which the variance of the process is minimized. Therefore, for performance characteristics having a convex loss function, total product cost is minimized without substantially increasing the cost of production. Numerous examples of the use of this approach in the United States and elsewhere are available in the literature. At the Rocky Flats Plant, where there are severe constraints on the resources available for development, a systematic development strategy has been developed to make efficient use of those resources to statistically characterize critical production processes before they are introduced into production. This strategy includes the sequential application of fractional factorial and response surface designs to model the features of critical processes as functions of both process parameters and production conditions. This strategy forms the basis for a comprehensive quality improvement program that emphasizes prevention of defects throughout the product cycle. It is currently being implemented on weapons programs in development at Rocky Flats and is in the process of being applied at other production facilities in the DOE weapons complex. 63 refs.
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
Arampatzis, Georgios; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 ; Katsoulakis, Markos A.
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated (coupled) stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB
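The variance-reduction principle behind coupling the perturbed and unperturbed processes can be shown on a toy, non-spatial example: a finite-difference sensitivity estimate for an exponential sampler, comparing independent samples against common random numbers (the baseline the paper improves upon). All parameters are illustrative; this is not the goal-oriented KMC coupling itself:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, h, n = 1.0, 0.05, 20000     # rate parameter, FD step, sample count

def exp_sample(u, rate):
    # Inverse-CDF sampling of an Exponential(rate) variate from uniforms in [0, 1)
    return -np.log(1.0 - u) / rate

# Finite-difference sensitivity of E[X] w.r.t. the rate, X ~ Exp(theta).
# Independent samples for the perturbed and unperturbed systems:
u1, u2 = rng.random(n), rng.random(n)
indep = (exp_sample(u1, theta + h) - exp_sample(u2, theta)) / h

# Common random numbers: couple the two estimators through shared uniforms.
u = rng.random(n)
crn = (exp_sample(u, theta + h) - exp_sample(u, theta)) / h

# Both estimate d/dtheta E[X] = -1/theta**2 = -1; the coupled estimator's
# sample variance is orders of magnitude smaller.
```

The goal-oriented couplings in the paper optimize this correlation per observable rather than relying on shared random inputs alone.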
On the reliability of microvariability tests in quasars
De Diego, José A.
2014-11-01
Microvariations probe the physics and internal structure of quasars. Unpredictability and small flux variations make this phenomenon elusive and difficult to detect. Variance-based probes such as the C and F tests, or a combination of both, are popular methods to compare the light curves of the quasar and a comparison star. Recently, detection claims in some studies have depended on the agreement of the results of the C and F tests, or of two instances of the F-test, for rejecting the non-variation null hypothesis. However, the C-test is a non-reliable statistical procedure, the F-test is not robust, and the combination of tests with concurrent results is anything but a straightforward methodology. A priori power analysis calculations and post hoc analysis of Monte Carlo simulations show excellent agreement for the analysis of variance test to detect microvariations as well as the limitations of the F-test. Additionally, the combined tests yield correlated probabilities that make the assessment of statistical significance unworkable. However, it is possible to include data from several field stars to enhance the power in a single F-test, increasing the reliability of the statistical analysis. This would be the preferred methodology when several comparison stars are available. An example using two stars and the enhanced F-test is presented. These results show the importance of using adequate methodologies and avoiding inappropriate procedures that can jeopardize microvariability detections. Power analysis and Monte Carlo simulations are useful tools for research planning, as they can demonstrate the robustness and reliability of different research approaches.
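A minimal sketch of an enhanced F-test of the kind described: pool the variances of several comparison stars into the denominator, which multiplies the denominator degrees of freedom by the star count. The light curves, noise levels, and star count below are simulated stand-ins, not the paper's data, and the null distribution is obtained by Monte Carlo in the same spirit as the simulations the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_stars = 100, 2

# Simulated differential light curves: the "quasar" carries extra variance
# (a hypothetical microvariability signal); the field stars are pure noise.
quasar = rng.normal(0.0, 0.020, n_obs)
stars = rng.normal(0.0, 0.008, (n_stars, n_obs))

# Enhanced F-statistic: quasar variance over the pooled star variance
f_stat = quasar.var(ddof=1) / stars.var(ddof=1, axis=1).mean()

# Null distribution by Monte Carlo: all sources equally noisy
null_f = np.array([
    rng.normal(0.0, 1.0, n_obs).var(ddof=1)
    / rng.normal(0.0, 1.0, (n_stars, n_obs)).var(ddof=1, axis=1).mean()
    for _ in range(2000)
])
p_value = float(np.mean(null_f >= f_stat))
```

With equal noise in all sources, f_stat would scatter around 1; a p-value below the chosen significance level rejects the non-variation null hypothesis in a single test, avoiding the correlated-probabilities problem of combining tests.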
Energy Science and Technology Software Center (OSTI)
2011-01-03
Bulk Data Mover (BDM) is a high-level data transfer management tool. BDM handles the issue of large variance in file sizes and a large proportion of small files by managing the file transfers with optimized transfer-queue and concurrency management algorithms. For example, climate simulation data sets are characterized by a large volume of files with extreme variance in file sizes. The BDM achieves high performance using a variety of techniques, including multi-threaded concurrent transfer connections, data channel caching, load balancing over multiple transfer servers, and storage I/O pre-fetching. Logging information from the BDM is collected and analyzed to study the effectiveness of the transfer management algorithms. The BDM can accept a request composed of multiple files or an entire directory. The request also contains the target site and directory where the replicated files will reside. If a directory is provided at the source, then the BDM will replicate the structure of the source directory at the target site. The BDM is capable of transferring multiple files concurrently as well as using parallel TCP streams. The optimal level of concurrency or parallel streams depends on the bandwidth capacity of the storage systems at both ends of the transfer as well as the achievable bandwidth of the wide-area network. Hardware req.: PC, MAC, multi-platform & workstation; Software req.: Compile/version: Java 1.50_x or above; Type of files: source code, executable modules, installation instructions, other, user guide; URL: http://sdm.lbl.gov/bdm/
Constraining Cosmic Evolution of Type Ia Supernovae
Foley, Ryan J.; Filippenko, Alexei V.; Aguilera, C.; Becker, A.C.; Blondin, S.; Challis, P.; Clocchiatti, A.; Covarrubias, R.; Davis, T.M.; Garnavich, P.M.; Jha, S.; Kirshner, R.P.; Krisciunas, K.; Leibundgut, B.; Li, W.; Matheson, T.; Miceli, A.; Miknaitis, G.; Pignata, G.; Rest, A.; Riess, A.G.; /UC, Berkeley, Astron. Dept. /Cerro-Tololo InterAmerican Obs. /Washington U., Seattle, Astron. Dept. /Harvard-Smithsonian Ctr. Astrophys. /Chile U., Catolica /Bohr Inst. /Notre Dame U. /KIPAC, Menlo Park /Texas A-M /European Southern Observ. /NOAO, Tucson /Fermilab /Chile U., Santiago /Harvard U., Phys. Dept. /Baltimore, Space Telescope Sci. /Johns Hopkins U. /Res. Sch. Astron. Astrophys., Weston Creek /Stockholm U. /Hawaii U. /Illinois U., Urbana, Astron. Dept.
2008-02-13
We present the first large-scale effort of creating composite spectra of high-redshift type Ia supernovae (SNe Ia) and comparing them to low-redshift counterparts. Through the ESSENCE project, we have obtained 107 spectra of 88 high-redshift SNe Ia with excellent light-curve information. In addition, we have obtained 397 spectra of low-redshift SNe through a multiple-decade effort at Lick and Keck Observatories, and we have used 45 ultraviolet spectra obtained by HST/IUE. The low-redshift spectra act as a control sample when comparing to the ESSENCE spectra. In all instances, the ESSENCE and Lick composite spectra appear very similar. The addition of galaxy light to the Lick composite spectra allows a nearly perfect match of the overall spectral-energy distribution with the ESSENCE composite spectra, indicating that the high-redshift SNe are more contaminated with host-galaxy light than their low-redshift counterparts. This is caused by observing objects at all redshifts with similar slit widths, which corresponds to different projected distances. After correcting for the galaxy-light contamination, subtle differences in the spectra remain. We have estimated the systematic errors when using current spectral templates for K-corrections to be {approx}0.02 mag. The variance in the composite spectra gives an estimate of the intrinsic variance in low-redshift maximum-light SN spectra of {approx}3% in the optical and growing toward the ultraviolet. The difference between the maximum-light low and high-redshift spectra constrains SN evolution between our samples to be < 10% in the rest-frame optical.
Probabilistic cost estimation methods for treatment of water extracted during CO_{2} storage and EOR
Graham, Enid J. Sullivan; Chu, Shaoping; Pawar, Rajesh J.
2015-08-08
Extraction and treatment of in situ water can minimize risk for large-scale CO_{2} injection in saline aquifers during carbon capture, utilization, and storage (CCUS), and for enhanced oil recovery (EOR). Additionally, treatment and reuse of oil and gas produced waters for hydraulic fracturing will conserve scarce fresh-water resources. Each treatment step, including transportation and waste disposal, generates economic and engineering challenges and risks; these steps should be factored into a comprehensive assessment. We expand the water treatment model (WTM) coupled within the sequestration system model CO_{2}-PENS and use chemistry data from seawater and proposed injection sites in Wyoming, to demonstrate the relative importance of different water types on costs, including little-studied effects of organic pretreatment and transportation. We compare the WTM with an engineering water treatment model, utilizing energy costs and transportation costs. Specific energy costs for treatment of Madison Formation brackish and saline base cases and for seawater compared closely between the two models, with moderate differences for scenarios incorporating energy recovery. Transportation costs corresponded for all but low flow scenarios (<5000 m^{3}/d). Some processes that have high costs (e.g., truck transportation) do not contribute the most variance to overall costs. Other factors, including feed-water temperature and water storage costs, are more significant contributors to variance. These results imply that the WTM can provide good estimates of treatment and related process costs (AACEI equivalent level 5, concept screening, or level 4, study or feasibility), and the complex relationships between processes when extracted waters are evaluated for use during CCUS and EOR site development.
SEDS: THE SPITZER EXTENDED DEEP SURVEY. SURVEY DESIGN, PHOTOMETRY, AND DEEP IRAC SOURCE COUNTS
Ashby, M. L. N.; Willner, S. P.; Fazio, G. G.; Huang, J.-S.; Hernquist, L.; Hora, J. L.; Arendt, R.; Barmby, P.; Barro, G.; Faber, S.; Guhathakurta, P.; Bouwens, R.; Cattaneo, A.; Croton, D.; Dave, R.; Dunlop, J. S.; Egami, E.; Finlator, K.; Grogin, N. A.; and others
2013-05-20
The Spitzer Extended Deep Survey (SEDS) is a very deep infrared survey within five well-known extragalactic science fields: the UKIDSS Ultra-Deep Survey, the Extended Chandra Deep Field South, COSMOS, the Hubble Deep Field North, and the Extended Groth Strip. SEDS covers a total area of 1.46 deg{sup 2} to a depth of 26 AB mag (3{sigma}) in both of the warm Infrared Array Camera (IRAC) bands at 3.6 and 4.5 {mu}m. Because of its uniform depth of coverage in so many widely-separated fields, SEDS is subject to roughly 25% smaller errors due to cosmic variance than a single-field survey of the same size. SEDS was designed to detect and characterize galaxies from intermediate to high redshifts (z = 2-7) with a built-in means of assessing the impact of cosmic variance on the individual fields. Because the full SEDS depth was accumulated in at least three separate visits to each field, typically with six-month intervals between visits, SEDS also furnishes an opportunity to assess the infrared variability of faint objects. This paper describes the SEDS survey design, processing, and publicly-available data products. Deep IRAC counts for the more than 300,000 galaxies detected by SEDS are consistent with models based on known galaxy populations. Discrete IRAC sources contribute 5.6 {+-} 1.0 and 4.4 {+-} 0.8 nW m{sup -2} sr{sup -1} at 3.6 and 4.5 {mu}m to the diffuse cosmic infrared background (CIB). IRAC sources cannot contribute more than half of the total CIB flux estimated from DIRBE data. Barring an unexpected error in the DIRBE flux estimates, half the CIB flux must therefore come from a diffuse component.
Investigation of advanced UQ for CRUD prediction with VIPRE.
Eldred, Michael Scott
2011-09-01
This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L{sup 2} (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10{sup 0})-O(10{sup 1}) random variables to O(10{sup 2}) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and
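The nonintrusive PCE machinery the study builds on can be sketched in one dimension: project a response function of a single standard-normal input onto probabilists' Hermite polynomials via Gauss quadrature, then read mean and variance off the coefficients. This is a fixed-order toy with an invented response function, not the dimension-adaptive p-refinement implemented in DAKOTA:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_mean_variance(g, order):
    # Nonintrusive projection of g(xi), xi ~ N(0,1), onto probabilists'
    # Hermite polynomials He_k:  c_k = E[g(xi) He_k(xi)] / k!
    x, w = hermegauss(order + 1)
    w = w / np.sqrt(2.0 * np.pi)          # normalize quadrature weights to a pdf
    c = np.empty(order + 1)
    for k in range(order + 1):
        basis = np.zeros(order + 1)
        basis[k] = 1.0
        c[k] = np.sum(w * g(x) * hermeval(x, basis)) / math.factorial(k)
    # mean = c_0; variance = sum_{k>=1} k! c_k^2 by orthogonality of the He_k
    mean = c[0]
    var = sum(math.factorial(k) * c[k] ** 2 for k in range(1, order + 1))
    return mean, var

# Lognormal-type response with a closed-form mean/variance to check against
mean, var = pce_mean_variance(lambda x: np.exp(0.3 * x), order=8)
```

For exp(0.3*xi) the exact moments are mean = exp(0.045) and variance = exp(0.09)*(exp(0.09)-1); an order-8 expansion recovers both to high accuracy, illustrating the fast convergence the abstract cites for smooth finite-variance responses.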
Ensslin, Torsten A.; Frommert, Mona [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)
2011-05-15
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power-spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power-spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loeve and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
TU-F-18A-02: Iterative Image-Domain Decomposition for Dual-Energy CT
Niu, T; Dong, X; Petrongolo, M; Zhu, L
2014-06-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative
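The role of the variance-covariance matrix of the decomposed images can be seen from linear error propagation through direct matrix inversion, the quantity the proposed penalty weight is built from. The 2x2 decomposition matrix and noise level below are purely illustrative, not calibrated DECT values:

```python
import numpy as np

# Hypothetical 2x2 material decomposition matrix (basis-material attenuation
# at low/high kVp); the entries are invented for illustration only.
A = np.array([[0.52, 0.30],
              [0.38, 0.27]])

sigma_meas = np.diag([1e-4, 1e-4])   # independent detector noise per channel
A_inv = np.linalg.inv(A)

# Variance-covariance of the decomposed images by linear error propagation:
#   Cov[x] = A^-1 Cov[y] A^-T
# Direct inversion both amplifies the noise and anti-correlates it between
# the two material images, which a covariance-weighted penalty can exploit.
cov_decomp = A_inv @ sigma_meas @ A_inv.T
```

The large negative off-diagonal term is exactly the statistical structure that de-noising the two material images separately ignores, and that the full variance-covariance weighting in the least-square term captures.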
Neill, P. H.; Given, P. H.
1984-09-01
The initial aim of this research was to use empirical mathematical relationships to formulate a better understanding of the processes involved in the liquefaction of a set of medium rank high sulfur coals. In all, just over 50 structural parameters and yields of product classes were determined. In order to gain a more complete understanding of the empirical relationships between the various properties, a number of relatively complex statistical procedures and tests were applied to the data, mostly selected from the field of multivariate analysis. These can be broken down into two groups. The first group included grouping techniques such as non-linear mapping, hierarchical and tree clustering, and linear discriminant analyses. These techniques were utilized in determining if more than one statistical population was present in the data set; it was concluded that there was not. The second group of techniques included factor analysis and stepwise multivariate linear regressions. Linear discriminant analyses were able to show that five distinct groups of coals were represented in the data set. However only seven of the properties seemed to follow this trend. The chemical property that appeared to follow the trend most closely was the aromaticity, where a series of five parallel straight lines was observed for a plot of f{sub a} versus carbon content. The factor patterns for each of the product classes indicated that although each of the individual product classes tended to load on factors defined by specific chemical properties, the yields of the broader product classes, such as total conversion to liquids + gases and conversion to asphaltenes, tended to load largely on factors defined by rank. The variance explained and the communalities tended to be relatively low. Evidently important sources of variance have still to be found.
Vickers, D.; Thomas, C.
2014-05-13
Observations of the scale-dependent turbulent fluxes and variances above, within and beneath a tall closed Douglas-Fir canopy in very weak winds are examined. The daytime subcanopy vertical velocity spectra exhibit a double-peak structure with peaks at time scales of 0.8 s and 51.2 s. A double-peak structure is also observed in the daytime subcanopy heat flux cospectra. The daytime momentum flux cospectra inside the canopy and in the subcanopy are characterized by a relatively large cross-wind component, likely due to the extremely light and variable winds, such that the definition of a mean wind direction, and subsequent partitioning of the momentum flux into along- and cross-wind components, has little physical meaning. Positive values of both momentum flux components in the subcanopy contribute to upward transfer of momentum, consistent with the observed mean wind speed profile. In the canopy at night at the smallest resolved scales, we find relatively large momentum fluxes (compared to at larger scales), and increasing vertical velocity variance with decreasing time scale, consistent with very small eddies likely generated by wake shedding from the canopy elements that transport momentum but not heat. We find unusually large values of the velocity aspect ratio within the canopy, consistent with enhanced suppression of the horizontal wind components compared to the vertical by the canopy. The flux-gradient approach for sensible heat flux is found to be valid for the subcanopy and above-canopy layers when considered separately; however, single source approaches that ignore the canopy fail because they make the heat flux appear to be counter-gradient when in fact it is aligned with the local temperature gradient in both the subcanopy and above-canopy layers. Modeled sensible heat fluxes above dark warm closed canopies are likely underestimated using typical values of the Stanton number.
Coordinating Garbage Collection for Arrays of Solid-state Drives
Kim, Youngjae; Lee, Junghee; Oral, H Sarp; Dillow, David A; Wang, Feiyi; Shipman, Galen M
2014-01-01
Although solid-state drives (SSDs) offer significant performance improvements over hard disk drives (HDDs) for a number of workloads, they can exhibit substantial variance in request latency and throughput as a result of garbage collection (GC). When GC conflicts with an I/O stream, the stream can make no forward progress until the GC cycle completes. GC cycles are scheduled by logic internal to the SSD based on several factors such as the pattern, frequency, and volume of write requests. When SSDs are used in a RAID with currently available technology, the lack of coordination of the SSD-local GC cycles amplifies this performance variance. We propose a global garbage collection (GGC) mechanism to improve response times and reduce performance variability for a RAID of SSDs. We include a high-level design of SSD-aware RAID controller and GGC-capable SSD devices and algorithms to coordinate the GGC cycles. We develop reactive and proactive GC coordination algorithms and evaluate their I/O performance and block erase counts for various workloads. Our simulations show that GC coordination by a reactive scheme improves average response time and reduces performance variability for a wide variety of enterprise workloads. For bursty, write-dominated workloads, response time was improved by 69% and performance variability was reduced by 71%. We show that a proactive GC coordination algorithm can further improve the I/O response times by up to 9% and the performance variability by up to 15%. We also observe that it could increase the lifetimes of SSDs with some workloads (e.g. Financial) by reducing the number of block erase counts by up to 79% relative to a reactive algorithm for write-dominant enterprise workloads.
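A toy timing model shows why aligning SSD-local GC windows helps a striped array: a request that touches all drives stalls whenever any one drive is collecting, so staggered local schedules multiply the fraction of slow ticks. The tick counts, period, and penalty values below are invented for illustration, not measurements from the paper's simulator:

```python
import numpy as np

def raid_mean_latency(pause_starts, n_ticks=1000, period=200, pause_len=20,
                      base=1.0, gc_penalty=10.0):
    # A striped request touches every drive, so it pays the GC penalty
    # whenever ANY drive in the array is collecting at that tick.
    busy = np.zeros(n_ticks, dtype=bool)
    for start in pause_starts:
        for s in range(start, n_ticks, period):
            busy[s:s + pause_len] = True
    return float(np.where(busy, gc_penalty, base).mean())

# Four SSDs, each running GC once per period: staggered local schedules
# versus a globally coordinated (aligned) schedule.
uncoordinated = raid_mean_latency([0, 50, 100, 150])
coordinated = raid_mean_latency([0, 0, 0, 0])
```

With four staggered drives, 40% of ticks hit some drive's GC window; aligning the windows shrinks that to 10%, which is the intuition behind the GGC mechanism (the paper's reactive and proactive algorithms decide when and how to trigger that alignment).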
Transit light curves with finite integration time: Fisher information analysis
Price, Ellen M.; Rogers, Leslie A.
2014-10-10
Kepler has revolutionized the study of transiting planets with its unprecedented photometric precision on more than 150,000 target stars. Most of the transiting planet candidates detected by Kepler have been observed as long-cadence targets with 30 minute integration times, and the upcoming Transiting Exoplanet Survey Satellite will record full frame images with a similar integration time. Integrations of 30 minutes affect the transit shape, particularly for small planets and in cases of low signal to noise. Using the Fisher information matrix technique, we derive analytic approximations for the variances and covariances on the transit parameters obtained from fitting light curve photometry collected with a finite integration time. We find that binning the light curve can significantly increase the uncertainties and covariances on the inferred parameters when comparing scenarios with constant total signal to noise (constant total integration time in the absence of read noise). Uncertainties on the transit ingress/egress time increase by a factor of 34 for Earth-size planets and 3.4 for Jupiter-size planets around Sun-like stars for integration times of 30 minutes compared to instantaneously sampled light curves. Similarly, uncertainties on the mid-transit time for Earth and Jupiter-size planets increase by factors of 3.9 and 1.4. Uncertainties on the transit depth are largely unaffected by finite integration times. While correlations among the transit depth, ingress duration, and transit duration all increase in magnitude with longer integration times, the mid-transit time remains uncorrelated with the other parameters. We provide code in Python and Mathematica for predicting the variances and covariances at www.its.caltech.edu/~eprice.
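The Fisher-matrix calculation can be sketched numerically for a toy transit model (a Gaussian-shaped dip standing in for a real transit profile, which is not the parameterization used in the paper): build the Jacobian of the time-integrated model by finite differences, form F = J J^T / sigma^2, and invert to get the parameter covariance. All parameter values are illustrative:

```python
import numpy as np

def model(t, depth, t0, dur):
    # Toy transit: a Gaussian-shaped flux dip (illustrative stand-in only)
    return 1.0 - depth * np.exp(-0.5 * ((t - t0) / dur) ** 2)

def binned_model(t, width, params, n_sub=64):
    # Average the instantaneous model over a finite integration window
    offsets = ((np.arange(n_sub) + 0.5) / n_sub - 0.5) * width
    return np.mean([model(t + o, *params) for o in offsets], axis=0)

def fisher_cov(t, width, params, sigma, eps=1e-5):
    # F_ij = (1/sigma^2) sum_k (dm_k/dp_i)(dm_k/dp_j); derivatives by
    # central finite differences; the parameter covariance is F^-1.
    grads = []
    for i in range(len(params)):
        hi = list(params)
        lo = list(params)
        hi[i] += eps
        lo[i] -= eps
        grads.append((binned_model(t, width, hi) - binned_model(t, width, lo))
                     / (2.0 * eps))
    J = np.array(grads)
    return np.linalg.inv(J @ J.T / sigma ** 2)

t = np.linspace(-3.0, 3.0, 200)
params = (0.01, 0.0, 0.5)            # depth, mid-transit time, duration
cov_fast = fisher_cov(t, width=0.01, params=params, sigma=1e-4)  # ~instantaneous
cov_slow = fisher_cov(t, width=1.5, params=params, sigma=1e-4)   # long cadence
```

Because time-averaging smears the steep parts of the light curve, the information on timing-like parameters drops and their variances (here cov[1,1], the mid-transit time) grow with the integration window, which is the qualitative effect the paper quantifies analytically.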
Grant, C.W.; Goggin, D.J.; Harris, P.M. )
1994-01-01
Vertical and horizontal transects were sampled from core and outcrop of the San Andres Formation at Lawyer Canyon, Guadalupe Mountains, New Mexico, to assess permeability variation in a geologic framework of upward-shallowing carbonate cycles and to show the potential effect these variations have on viscous-dominated flow behavior in analogous reservoirs. These cycles occur in a ramp-crest facies tract, are 3-13 m (10-45 ft) thick, and contain both vertical and lateral variation of lithofacies. Thicker cycles consist of a basal dolomudstone, which is overlain by burrowed dolomudstone and capped by bar-flank ooid-peloid dolograinstones and bar-crest ooid dolograinstones. In vertical transects, permeability is extremely variable about the mean, yet upward-increasing trends coinciding with the succession of lithofacies typify a given cycle. Semi-variance analysis shows permeability to be uncorrelated vertically at distances greater than 5.5 m (18 ft), which is the average cycle thickness, suggesting that the cycles may equate to fluid-flow units in a reservoir. Semi-variance analysis of measurements collected along a horizontal transect within bar-crest dolograinstones of a single cycle shows that permeability is uncorrelated at distances greater than 3.6 m (12 ft). This correlation distance appears to be controlled by alternating porous and tightly cemented zones that formed during dolomitization. Vertical and lateral variogram models were fit to the spatial parameters to generate a variety of conditionally simulated permeability fields. Fluid-flow simulations show that viscous-dominated flow behavior is compartmentalized by both the individual cycles and groups of cycles. The basal dolomudstones are potential baffles to flow crossover between cycles, but poorly developed cycles (i.e., those that are mud rich and lack well-developed bar-flank and bar-crest facies) result in the greatest compartmentalization of fluid flow within a succession of cycles.
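The semi-variance analysis used above can be sketched with the classical semivariogram estimator on a synthetic one-dimensional transect; the correlation range, sample spacing, and field below are invented stand-ins, not the Lawyer Canyon data:

```python
import numpy as np

def semivariogram(values, spacing, max_lag):
    # Classical estimator: gamma(h) = 0.5 * mean of squared increments at lag h
    lags, gammas = [], []
    for k in range(1, max_lag + 1):
        diffs = values[k:] - values[:-k]
        lags.append(k * spacing)
        gammas.append(0.5 * float(np.mean(diffs ** 2)))
    return np.array(lags), np.array(gammas)

# Synthetic permeability transect: correlated over ~10 samples (via a moving
# average), uncorrelated beyond that range.
rng = np.random.default_rng(3)
field = np.convolve(rng.normal(0.0, 1.0, 600), np.ones(10) / 10.0, mode="valid")
lags, gammas = semivariogram(field, spacing=0.5, max_lag=30)
```

The semivariance rises from a small value at short lags toward a sill near the field variance once the lag exceeds the correlation range; reading off the lag where the sill is reached is how cutoff distances such as the 5.5 m and 3.6 m reported above are estimated.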
Hamano, Satoshi; Kobayashi, Naoto [Institute of Astronomy, University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan); Kondo, Sohei [Koyama Astronomical Observatory, Kyoto-Sangyo University, Motoyama, Kamigamo, Kita-Ku, Kyoto 603-8555 (Japan); Tsujimoto, Takuji [National Astronomical Observatory of Japan and Department of Astronomical Science, Graduate University for Advanced Studies, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan); Okoshi, Katsuya [Faculty of Industrial Science and Technology, Tokyo University of Science, 102-1 Tomino, Oshamanbe, Hokkaido 049-3514 (Japan); Shigeyama, Toshikazu, E-mail: hamano@ioa.s.u-tokyo.ac.jp [Research Center for the Early Universe, University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan)
2012-08-01
Using the Subaru 8.2 m Telescope with the IRCS Echelle spectrograph, we obtained high-resolution (R = 10,000) near-infrared (1.01-1.38 {mu}m) spectra of images A and B of the gravitationally lensed QSO B1422+231 (z = 3.628) consisting of four known lensed images. We detected Mg II absorption lines at z = 3.54, which show a large variance of column densities ({approx}0.3 dex) and velocities ({approx}10 km s{sup -1}) between sightlines A and B with a projected separation of only 8.4h{sup -1}{sub 70} pc at that redshift. This is the smallest spatial structure of the high-z gas clouds ever detected after Rauch et al. found a 20 pc scale structure for the same z = 3.54 absorption system using optical spectra of images A and C. The observed systematic variances imply that the system is an expanding shell as originally suggested by Rauch et al. By combining the data for three sightlines, we managed to constrain the radius and expansion velocity of the shell ({approx}50-100 pc, 130 km s{sup -1}), concluding that the shell is truly a supernova remnant (SNR) rather than other types of shell objects, such as a giant H II region. We also detected strong Fe II absorption lines for this system, but with much broader Doppler width than that of {alpha}-element lines. We suggest that this Fe II absorption line originates in a localized Fe II-rich gas cloud that is not completely mixed with plowed ambient interstellar gas clouds showing other {alpha}-element low-ion absorption lines. Along with the Fe richness, we conclude that the SNR is produced by an SN Ia explosion.
A stochastic extension of the explicit algebraic subgrid-scale models
Rasam, A. Brethouwer, G.; Johansson, A. V.
2014-05-15
The explicit algebraic subgrid-scale (SGS) stress model (EASM) of Marstorp et al. [Explicit algebraic subgrid stress models with application to rotating channel flow, J. Fluid Mech. 639, 403-432 (2009)] and explicit algebraic SGS scalar flux model (EASFM) of Rasam et al. [An explicit algebraic model for the subgrid-scale passive scalar flux, J. Fluid Mech. 721, 541-577 (2013)] are extended with stochastic terms based on the Langevin equation formalism for the subgrid scales by Marstorp et al. [A stochastic subgrid model with application to turbulent flow and scalar mixing, Phys. Fluids 19, 035107 (2007)]. The EASM and EASFM are nonlinear mixed and tensor eddy-diffusivity models, which improve large eddy simulation (LES) predictions of the mean flow, Reynolds stresses, and scalar fluxes of wall-bounded flows compared to isotropic eddy-viscosity and eddy-diffusivity SGS models, especially at coarse resolutions. The purpose of the stochastic extension of the explicit algebraic SGS models is to further improve the characteristics of the kinetic energy and scalar variance SGS dissipation, which are key quantities that govern the small-scale mixing and dispersion dynamics. LES of turbulent channel flow with passive scalar transport shows that the stochastic terms enhance SGS dissipation statistics such as length scale, variance, and probability density functions and introduce a significant amount of backscatter of energy from the subgrid to the resolved scales without causing numerical stability problems. The improvements in the SGS dissipation predictions in turn enhance the predicted resolved statistics such as the mean scalar, scalar fluxes, Reynolds stresses, and correlation lengths. Moreover, the nonalignment between the SGS stress and resolved strain-rate tensors predicted by the EASM with stochastic extension is in much closer agreement with direct numerical simulation data.
Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation
Hines, J. Wesley; Upadhyaya, Belle; Sharp, Michael; Ramuhalli, Pradeep; Jeffries, Brien; Nam, Alan; Strong, Eric; Tong, Matthew; Welz, Zachary; Barbieri, Federico; Langford, Seth; Meinweiser, Gregory; Weeks, Matthew
2014-11-06
RUL predictions, with as little uncertainty as possible. From a reliability and maintenance standpoint, there would be improved safety by avoiding all failures. Calculated risk would decrease, saving money by avoiding unnecessary maintenance. One major bottleneck for data-driven prognostics is the availability of run-to-failure degradation data. Without enough degradation data leading to failure, prognostic models can yield RUL distributions with large uncertainty or mathematically unsound predictions. To address these issues, a "Lifecycle Prognostics" method was developed to create RUL distributions from Beginning of Life (BOL) to End of Life (EOL). This employs established Type I, II, and III prognostic methods, and Bayesian transitioning between each Type. Bayesian methods, as opposed to classical frequency statistics, show how an expected value, a priori, changes with new data to form a posterior distribution. For example, when you purchase a component you have a prior belief, or estimation, of how long it will operate before failing. As you operate it, you may collect information related to its condition that will allow you to update your estimated failure time. Bayesian methods are best used when limited data are available. The use of a prior also means that information is conserved when new data are available. The weightings of the prior belief and the information contained in the sampled data depend on the variance (uncertainty) of the prior, the variance (uncertainty) of the data, and the amount of measured data (number of samples). If the variance of the prior is small compared to the uncertainty of the data, the prior will be weighted more heavily. However, as more data are collected, the data will be weighted more heavily and will eventually swamp out the prior in calculating the posterior distribution of model parameters. Fundamentally, Bayesian analysis updates a prior belief with new data to get a posterior belief. The general approach to applying the
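The precision-weighted balance between prior and data described above has a closed form for a normal mean with known observation variance; this conjugate-update sketch uses made-up numbers, not the report's prognostic models:

```python
def posterior_normal(prior_mean, prior_var, data, data_var):
    """Conjugate Bayesian update for a normal mean with known data variance:
    posterior precision = prior precision + n * data precision, and the
    posterior mean is a precision-weighted average of prior and sample means."""
    n = len(data)
    sample_mean = sum(data) / n
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + n * sample_mean / data_var)
    return post_mean, post_var

# Hypothetical failure-time belief: prior mean 100 h; noisy condition data near 120 h
few_mean, few_var = posterior_normal(100.0, 4.0, [120.0], 100.0)         # tight prior dominates
many_mean, many_var = posterior_normal(100.0, 4.0, [120.0] * 50, 100.0)  # data swamp the prior
```

With one noisy observation the posterior barely moves; with fifty, the data dominate and the posterior variance shrinks, mirroring the behavior the abstract describes.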
Iterative image-domain decomposition for dual-energy CT
Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the
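The role of the inverse variance-covariance matrix as a weight, as in the best-linear-unbiased-estimator design principle the authors cite, can be illustrated in the simplest case of two correlated measurements of one quantity; this is a toy, not the authors' iterative decomposition algorithm:

```python
def blue_combine(y1, y2, var1, var2, cov):
    """Best linear unbiased estimate of one quantity from two correlated
    measurements: the weights are the row sums of the inverse
    variance-covariance (precision) matrix, normalized to sum to one."""
    det = var1 * var2 - cov * cov
    # entries of the 2x2 precision matrix (inverse covariance)
    p11, p22, p12 = var2 / det, var1 / det, -cov / det
    w1, w2 = p11 + p12, p22 + p12
    return (w1 * y1 + w2 * y2) / (w1 + w2)
```

For uncorrelated measurements this reduces to the familiar inverse-variance weighting; nonzero covariance shifts weight away from the measurement that is redundant with the other.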
Amidan, Brett G.; Pulsipher, Brent A.; Matzke, Brett D.
2009-12-17
number of zeros. QQ plots of these data show a lack of normality after contamination. Normality is improved when looking at log(CFU/cm2). Variance component analysis (VCA) and analysis of variance (ANOVA) were used to estimate the amount of variance due to each source and to determine which sources of variability were statistically significant. In general, the sampling methods interacted with the across-event variability and with the across-room variability. For this reason, it was decided to do analyses for each sampling method individually. The between-event variability and between-room variability were significant for each method, except for the between-event variability for the swabs. For both the wipes and vacuums, the within-room standard deviation was much larger (26.9 for wipes and 7.086 for vacuums) than the between-event standard deviation (6.552 for wipes and 1.348 for vacuums) and the between-room standard deviation (6.783 for wipes and 1.040 for vacuums). The between-room standard deviation for swabs was 0.151, while both the within-room and between-event standard deviations were less than 0.10 (all measurements in CFU/cm2).
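A minimal method-of-moments sketch of one-way variance components (within-group vs. between-group), assuming a balanced design and hypothetical data; the study's VCA/ANOVA with interactions is more involved:

```python
def variance_components(groups):
    """Method-of-moments variance components for a balanced one-way
    random-effects layout. Returns (within-group variance, between-group
    variance); the between component is (MSB - MSW) / n, floored at zero."""
    k, n = len(groups), len(groups[0])
    means = [sum(g) / n for g in groups]
    grand = sum(means) / k
    # pooled within-group mean square (MSW) and between-group mean square (MSB)
    msw = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means)) / (k * (n - 1))
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    return msw, max((msb - msw) / n, 0.0)
```

Comparing the two returned components is how one judges, as in the abstract, whether within-room or between-room variability dominates.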
Strain-dependent Damage in Mouse Lung After Carbon Ion Irradiation
Moritake, Takashi; Proton Medical Research Center, University of Tsukuba, Tsukuba ; Fujita, Hidetoshi; Yanagisawa, Mitsuru; Nakawatari, Miyako; Imadome, Kaori; Nakamura, Etsuko; Iwakawa, Mayumi; Imai, Takashi
2012-09-01
Purpose: To examine whether inherent factors produce differences in lung morbidity in response to carbon ion (C-ion) irradiation, and to identify the molecules that have a key role in strain-dependent adverse effects in the lung. Methods and Materials: Three strains of female mice (C3H/He Slc, C57BL/6J Jms Slc, and A/J Jms Slc) were locally irradiated in the thorax with either C-ion beams (290 MeV/n, in 6 cm spread-out Bragg peak) or with {sup 137}Cs {gamma}-rays as a reference beam. We performed survival assays and histologic examination of the lung with hematoxylin-eosin and Masson's trichrome staining. In addition, we performed immunohistochemical staining for hyaluronic acid (HA), CD44, and Mac3 and assayed for gene expression. Results: The survival data in mice showed a between-strain variance after C-ion irradiation with 10 Gy. The median survival time of C3H/He was significantly shortened after C-ion irradiation at the higher dose of 12.5 Gy. Histologic examination revealed early-phase hemorrhagic pneumonitis in C3H/He and late-phase focal fibrotic lesions in C57BL/6J after C-ion irradiation with 10 Gy. Pleural effusion was apparent in C57BL/6J and A/J mice, 168 days after C-ion irradiation with 10 Gy. Microarray analysis of irradiated lung tissue in the three mouse strains identified differential expression changes in growth differentiation factor 15 (Gdf15), which regulates macrophage function, and hyaluronan synthase 1 (Has1), which plays a role in HA metabolism. Immunohistochemistry showed that the number of CD44-positive cells, a surrogate marker for HA accumulation, and Mac3-positive cells, a marker for macrophage infiltration in irradiated lung, varied significantly among the three mouse strains during the early phase. Conclusions: This study demonstrated a strain-dependent differential response in mice to C-ion thoracic irradiation. Our findings identified candidate molecules that could be implicated in the between-strain variance to early
Combining weak-lensing tomography and spectroscopic redshift surveys
Cai, Yan -Chuan; Bernstein, Gary
2012-05-11
Redshift space distortion (RSD) is a powerful way of measuring the growth of structure and testing General Relativity, but it is limited by cosmic variance and the degeneracy between galaxy bias b and the growth rate factor f. The cross-correlation of lensing shear with the galaxy density field can in principle measure b in a manner free from cosmic variance limits, breaking the f-b degeneracy and allowing inference of the matter power spectrum from the galaxy survey. We analyze the growth constraints from a realistic tomographic weak lensing photo-z survey combined with a spectroscopic galaxy redshift survey over the same sky area. For sky coverage f_{sky} = 0.5, analysis of the transverse modes measures b to 2-3% accuracy per Δz = 0.1 bin at z < 1 when ~10 galaxies arcmin^{–2} are measured in the lensing survey and all halos with M > M_{min} = 10^{13}h^{–1}M_{⊙} have spectra. For the gravitational growth parameter γ (f = Ω^{γ}_{m}), combining the lensing information with RSD analysis of non-transverse modes yields accuracy σ(γ) ≈ 0.01. Adding lensing information to the RSD survey improves σ(γ) by an amount equivalent to a 3x (10x) increase in RSD survey area when the spectroscopic survey extends down to halo mass 10^{13.5} (10^{14}) h^{–1} M_{⊙}. We also find that the σ(γ) of overlapping surveys is equivalent to that of surveys 1.5-2 times larger if they are separated on the sky. This gain is greatest when the spectroscopic mass threshold is 10^{13}-10^{14} h^{–1} M_{⊙}, similar to LRG surveys. The gain of overlapping surveys is reduced for very deep or very shallow spectroscopic surveys, but any practical surveys are more powerful when overlapped than when separated. As a result, the gain of overlapped surveys is larger in the case when the primordial power spectrum normalization is
SU-E-QI-14: Quantitative Variogram Detection of Mild, Unilateral Disease in Elastase-Treated Rats
Jacob, R; Carson, J
2014-06-15
Purpose: Determining the presence of mild or early disease in the lungs can be challenging and subjective. We present a rapid and objective method for evaluating lung damage in a rat model of unilateral mild emphysema based on a new approach to heterogeneity assessment. We combined octree decomposition (used in three-dimensional (3D) computer graphics) with variograms (used in geostatistics to assess spatial relationships) to evaluate 3D computed tomography (CT) lung images for disease. Methods: Male, Sprague-Dawley rats (232 ± 7 g) were intratracheally dosed with 50 U/kg of elastase dissolved in 200 μL of saline to a single lobe (n=6) or with saline only (n=5). After four weeks, 3D micro-CT images were acquired at end expiration on mechanically ventilated rats using prospective gating. Images were masked, and lungs were decomposed to homogeneous blocks of 2×2×2, 4×4×4, and 8×8×8 voxels using octree decomposition. The spatial variance (the square of the difference of signal intensity) between all pairs of the 8×8×8 blocks was calculated. Variograms (graphs of distance vs. variance) were made, and the data were fit to a power law and the exponent determined. The mean HU values, coefficient of variation (CoV), and the emphysema index (EI) were calculated and compared to the variograms. Results: The variogram analysis showed that significant differences between groups existed (p<0.01), whereas the mean HU (p=0.07), CoV (p=0.24), and EI (p=0.08) did not. Calculation time for the variogram for a typical 1000-block decomposition was ∼6 seconds, and octree decomposition took ∼2 minutes. Decomposing the images prior to variogram calculation resulted in a ∼700x decrease in computation time compared to other published approaches. Conclusions: Our results suggest that the approach combining octree decomposition and variogram analysis may be a rapid, non-subjective, and sensitive imaging-based biomarker for quantitative characterization of lung disease.
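Fitting a variogram to a power law and extracting the exponent, as described above, amounts to a least-squares slope in log-log space; a sketch with synthetic data of known exponent (not the study's CT measurements):

```python
import math

def powerlaw_exponent(lags, gammas):
    """Fit gamma(h) ~ c * h**p by ordinary least squares in log-log space
    and return the exponent p (the slope of log gamma vs. log h)."""
    xs = [math.log(h) for h in lags]
    ys = [math.log(g) for g in gammas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic variogram following gamma(h) = 2 * h**0.8
lags = [1.0, 2.0, 4.0, 8.0]
gammas = [2.0 * h ** 0.8 for h in lags]
```

The fitted exponent is the scalar summary compared between groups in the abstract.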
Park, Sungsu
2014-12-12
The main goal of this project is to systematically quantify the major uncertainties of aerosol indirect effects due to the treatment of moist turbulent processes that drive aerosol activation, cloud macrophysics, and microphysics in response to anthropogenic aerosol perturbations using the CAM5/CESM1. To achieve this goal, the P.I. hired a postdoctoral research scientist (Dr. Anna Fitch), who started her work on November 1, 2012. The first task the postdoc and the P.I. undertook was to quantify the role of subgrid vertical velocity variance in the activation and nucleation of cloud liquid droplets and ice crystals and its impact on the aerosol indirect effect in CAM5. First, we analyzed various LES cases (from dry stable to cloud-topped PBL) to check whether the isotropic turbulence assumption used in CAM5 is really valid. It turned out that this isotropic turbulence assumption is not universally valid. Consequently, from the analysis of LES, we derived an empirical formulation relaxing the isotropic turbulence assumption used for the CAM5 aerosol activation and ice nucleation, implemented it into CAM5/CESM1, tested it in the single-column and global simulation modes, and examined how it changed aerosol indirect effects in the CAM5/CESM1. These results were reported in a poster session at the 18th Annual CESM workshop held in Breckenridge, CO, during June 17-20, 2013. While we derived an empirical formulation from the analysis of a couple of LES cases in the first task, the general applicability of that formulation was questionable, because it was obtained from a limited number of LES simulations. The second task was to derive a more fundamental analytical formulation relating vertical velocity variance to TKE using other information, starting from basic physical principles. This was a somewhat challenging subject, but if this could be done in a successful way, it could be directly
Kleinman, L.; Kuang, C.; Sedlacek, A.; Senum, G.; Springston, S.; Wang, J.; Zhang, Q.; Jayne, J.; Fast, J.; Hubbe, J.; et al
2015-09-17
During the Carbonaceous Aerosols and Radiative Effects Study (CARES) the DOE G-1 aircraft was used to sample aerosol and gas phase compounds in the Sacramento, CA plume and surrounding region. We present data from 66 plume transects obtained during 13 flights in which southwesterly winds transported the plume towards the foothills of the Sierra Nevada Mountains. Plume transport occurred partly over land with high isoprene emission rates. Our objective is to empirically determine whether organic aerosol (OA) can be attributed to anthropogenic or biogenic sources, and to determine whether there is a synergistic effect whereby OA concentrations are enhanced by the simultaneous presence of high concentrations of CO and either isoprene, MVK+MACR (sum of methyl vinyl ketone and methacrolein), or methanol, which are taken as tracers of anthropogenic and biogenic emissions, respectively. Linear and bilinear correlations between OA, CO, and each of three biogenic tracers, "Bio", for individual plume transects indicate that most of the variance in OA over short time and distance scales can be explained by CO. For each transect and species a plume perturbation (i.e., ΔOA, defined as the difference between 90th and 10th percentiles) was defined and regressions done amongst Δ values in order to probe day-to-day and location-dependent variability. Species that predicted the largest fraction of the variance in ΔOA were ΔO3 and ΔCO. Background OA was highly correlated with background methanol and poorly correlated with other tracers. Because background OA was ~ 60 % of peak OA in the urban plume, peak OA should be primarily biogenic and therefore non-fossil. Transects were split into subsets according to the percentile rankings of ΔCO and ΔBio, similar to an approach used by Setyan et al. (2012) and Shilling et al. (2013) to determine if anthropogenic-biogenic interactions enhance OA production. As found earlier, ΔOA in the data subset having high ΔCO and high ΔBio was
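The statement that most of the variance in OA is explained by CO corresponds to the R² of a simple linear regression; a self-contained sketch with hypothetical numbers, not CARES data:

```python
def r_squared(x, y):
    """Fraction of the variance in y explained by a simple linear
    regression on x (the squared Pearson correlation coefficient)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical per-transect perturbations: OA tracking CO almost linearly
d_co = [10.0, 20.0, 30.0, 40.0]
d_oa = [1.1, 2.0, 3.1, 3.9]
```

An R² near one for a transect is the quantitative sense in which CO "explains" the short-scale OA variance.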
Laugeman, E; Weiss, E; Chen, S; Hugo, G; Rosu, M
2014-06-01
Purpose: Evaluate and compare the cycle-to-cycle consistency of breathing patterns and their reproducibility over the course of treatment, for supine and prone positioning. Methods: Respiratory traces from 25 patients were recorded for sequential supine/prone 4DCT scans acquired prior to treatment and during the course of the treatment (weekly or bi-weekly). For each breathing cycle, the average (AVE), end-of-exhale (EoE), and end-of-inhale (EoI) locations were identified using in-house developed software. In addition, the mean values and variations for the above quantities were computed for each breathing trace. F-tests were used to compare the cycle-to-cycle consistency of all pairs of sequential supine and prone scans. Analysis of variance was also performed using population means for AVE, EoE, and EoI to quantify differences between the reproducibility of prone and supine respiration traces over the treatment course. Results: Consistency: Cycle-to-cycle variations are less in prone than supine in the pre-treatment and during-treatment scans for the AVE, EoE, and EoI points, for the majority of patients (differences significant at p<0.05). The few cases where the respiratory pattern had more variability in prone appeared to be random events. Reproducibility: The reproducibility of breathing patterns (supine and prone) improved as treatment progressed, perhaps due to patients becoming more comfortable with the procedure. However, variability in supine position continued to remain significantly larger than in prone (p<0.05), as indicated by the variance analysis of population means for the pre-treatment and subsequent during-treatment scans. Conclusions: Prone positioning stabilizes breathing patterns in most subjects investigated in this study. Importantly, a parallel analysis of the same group of patients revealed a tendency towards increasing motion amplitude of tumor targets in prone position regardless of their size or location; thus, the choice for body positioning
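The F-test used above to compare cycle-to-cycle consistency is a ratio of sample variances; a minimal sketch (the p-value would come from the F distribution, e.g., scipy.stats.f.sf, which is omitted here to keep the example stdlib-only):

```python
def f_statistic(a, b):
    """F statistic for comparing two sample variances: the ratio of the
    larger unbiased sample variance to the smaller, returned with the
    numerator and denominator degrees of freedom (n - 1 each)."""
    def svar(x):
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    va, vb = svar(a), svar(b)
    if va >= vb:
        return va / vb, len(a) - 1, len(b) - 1
    return vb / va, len(b) - 1, len(a) - 1
```

Comparing the statistic against the F critical value at the chosen significance level gives the p<0.05 decisions reported in the abstract.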
Groundwater Monitoring Plan for the Hanford Site 216-B-3 Pond RCRA Facility
Barnett, D. Brent; Smith, Ronald M.; Chou, Charissa J.; McDonald, John P.
2005-11-01
The 216-B-3 Pond system was a series of ponds used for disposal of liquid effluent from past Hanford production facilities. In operation from 1945 to 1997, the B Pond System has been a Resource Conservation and Recovery Act (RCRA) facility since 1986, with RCRA interim-status groundwater monitoring in place since 1988. In 1994 the expansion ponds of the facility were clean closed, leaving only the main pond and a portion of the 216-B-3-3 ditch as the currently regulated facility. In 2001, the Washington State Department of Ecology (Ecology) issued a letter providing guidance for a two-year, trial evaluation of an alternate, intrawell statistical approach to contaminant detection monitoring at the B Pond system. This temporary variance was allowed because the standard indicator-parameters evaluation (pH, specific conductance, total organic carbon, and total organic halides) and accompanying interim status statistical approach is ineffective for detecting potential B-Pond-derived contaminants in groundwater, primarily because this method fails to account for variability in the background data and because B Pond leachate is not expected to affect the indicator parameters. In July 2003, the final samples were collected for the two-year variance period. An evaluation of the results of the alternate statistical approach is currently in progress. While Ecology evaluates the efficacy of the alternate approach (and/or until B Pond is incorporated into the Hanford Facility RCRA Permit), the B Pond system will return to contamination-indicator detection monitoring. Total organic carbon and total organic halides were added to the constituent list beginning with the January 2004 samples. Under this plan, the following wells will be monitored for B Pond: 699-42-42B, 699-43-44, 699-43-45, and 699-44-39B. The wells will be sampled semi-annually for the contamination indicator parameters (pH, specific conductance, total organic carbon, and total organic halides) and annually for
Guest, Geoffrey; Bright, Ryan M.; Cherubini, Francesco; Strømman, Anders H.
2013-11-15
Temporary and permanent carbon storage from biogenic sources is seen as a way to mitigate climate change. The aim of this work is to illustrate the need to harmonize the quantification of such mitigation across all possible storage pools in the bio- and anthroposphere. We investigate nine alternative storage cases and a wide array of bio-resource pools: from annual crops, short rotation woody crops, medium rotation temperate forests, and long rotation boreal forests. For each feedstock type and biogenic carbon storage pool, we quantify the carbon cycle climate impact due to the skewed time distribution between emission and sequestration fluxes in the bio- and anthroposphere. Additional consideration of the climate impact from albedo changes in forests is also illustrated for the boreal forest case. When characterizing climate impact with global warming potentials (GWP), we find a large variance in results which is attributed to different combinations of biomass storage and feedstock systems. The storage of biogenic carbon in any storage pool does not always confer climate benefits: even when biogenic carbon is stored long-term in durable product pools, the climate outcome may still be undesirable when the carbon is sourced from slow-growing biomass feedstock. For example, when biogenic carbon from Norway Spruce from Norway is stored in furniture with a mean life time of 43 years, a climate change impact of 0.08 kg CO{sub 2}eq per kg CO{sub 2} stored (100 year time horizon (TH)) would result. It was also found that when biogenic carbon is stored in a pool with negligible leakage to the atmosphere, the resulting GWP factor is not necessarily -1 kg CO{sub 2}eq per kg CO{sub 2} stored. As an example, when biogenic CO{sub 2} from Norway Spruce biomass is stored in geological reservoirs with no leakage, we estimate a GWP of -0.56 kg CO{sub 2}eq per kg CO{sub 2} stored (100 year TH) when albedo effects are also included. The large variance in GWPs across the range of
Multienergy CT acquisition and reconstruction with a stepped tube potential scan
Shen, Le; Xing, Yuxiang
2015-01-15
Purpose: Based on an energy-dependent property of matter, one may obtain a pseudomonochromatic attenuation map, a material composition image, an electron-density distribution, and an atomic number image using a dual- or multienergy computed tomography (CT) scan. Dual- and multienergy CT scans broaden the potential of x-ray CT imaging. The development of such systems is very useful in both medical and industrial investigations. In this paper, the authors propose a new dual- and multienergy CT system design (segmental multienergy CT, SegMECT) using an innovative scanning scheme that is conveniently implemented on a conventional single-energy CT system. Two-step dual-energy CT can be regarded as a special case of SegMECT. A special reconstruction method is proposed to support SegMECT. Methods: In SegMECT, the circular trajectory of a CT scan is angularly divided into several arcs, and the x-ray source is set to a different tube voltage for each arc of the trajectory. Thus, the authors only need to make a few step changes to the x-ray energy during the scan to complete a multienergy data acquisition. With such a data set, the image reconstruction might suffer from severe limited-angle artifacts if conventional reconstruction methods are used. To solve the problem, they present a new prior-image-based reconstruction technique using a constraint on the total variance norm of a quotient image. On the one hand, the prior extracts structural information from all of the projection data. On the other hand, the effect from a possibly imprecise intensity level of the prior can be mitigated by minimizing the total variance of a quotient image. Results: The authors present a new scheme for a SegMECT configuration and establish a reconstruction method for such a system. Both numerical simulation and a practical phantom experiment are conducted to validate the proposed reconstruction method and the effectiveness of the system design. The results demonstrate that the proposed Seg
TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma
Sisniega, A; Zbijewski, W; Stayman, J; Yorkston, J; Aygun, N; Koliatsos, V; Siewerdsen, J
2014-06-15
Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate uniformity improvement of 18 HU and contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to “oracle” constant fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT dedicated to brain
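The gain from expected-value ("forced") scoring over analog scoring, which motivates the variance-reduction techniques above, can be shown in a toy slab-transmission problem; this illustrates the idea only and is not the MC-GPU implementation:

```python
import math
import random

def transmission_analog(mu, thickness, n, rng):
    """Analog scoring: sample a free path from the exponential attenuation
    law and score 1 if the photon crosses the slab, 0 otherwise.
    The 0/1 scores make this a high-variance estimator for rare crossings."""
    return [1.0 if rng.expovariate(mu) > thickness else 0.0 for _ in range(n)]

def transmission_forced(mu, thickness, n):
    """Expected-value ('forced') scoring: every history deterministically
    contributes the analytic crossing weight exp(-mu * thickness), which
    has zero variance in this toy problem."""
    return [math.exp(-mu * thickness)] * n
```

Both estimators share the same mean, exp(-mu * thickness), but the forced version needs far fewer histories for a given noise level; real forced-detection schemes apply the same principle to the scatter-to-detector leg of each photon history.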
URBAN WOOD/COAL CO-FIRING IN THE BELLEFIELD BOILERPLANT
James T. Cobb Jr.; Gene E. Geiger; William W. Elder III; William P. Barry; Jun Wang; Hongming Li
2004-04-08
An Environmental Questionnaire for the demonstration at the Bellefield Boiler Plant (BBP) was submitted to the National Energy Technology Laboratory. An R&D variance for the air permit at the BBP was sought from the Allegheny County Health Department (ACHD). R&D variances for the solid waste permits at the J. A. Rutter Company (JARC) and Emery Tree Service (ETS) were sought from the Pennsylvania Department of Environmental Protection (PADEP). Verbal authorizations were received in all cases. Memoranda of understanding were executed by the University of Pittsburgh with BBP, JARC, and ETS. Construction wood was collected from Thompson Properties and from Seven D Corporation. Forty tons of pallet and construction wood were ground to produce BioGrind Wood Chips at JARC and delivered to Mon Valley Transportation Company (MVTC). Five tons of construction wood were hammer milled at ETS and half of the product delivered to MVTC. Blends of wood and coal, produced at MVTC by staff of JARC and MVTC, were shipped by rail to BBP. The experimental portion of the project was carried out at BBP in late March and early April 2001. Several preliminary tests were successfully conducted using blends of 20% and 33% wood by volume. Four one-day tests using a blend of 40% wood by volume were then carried out. Problems of feeding and slagging were experienced with the 40% blend. Light-colored fly ash was observed coming from the stack during all four tests. Emissions of SO{sub 2}, NOx, and total particulates, measured by Energy Systems Associates, decreased when compared with combusting coal alone. A procedure for calculating material and energy balances on BBP's Boiler No. 1 was developed, using the results of an earlier compliance test at the plant. Material and energy balances were then calculated for the four test periods. Boiler efficiency was found to decrease slightly when the fuel was shifted from coal
What Is the Largest Einstein Radius in the Universe?
Oguri, Masamune; Blandford, Roger D.
2008-08-05
The Einstein radius plays a central role in lens studies as it characterizes the strength of gravitational lensing. In particular, the distribution of Einstein radii near the upper cutoff should probe the probability distribution of the largest mass concentrations in the universe. Adopting a triaxial halo model, we compute expected distributions of large Einstein radii. To assess the cosmic variance, we generate a number of Monte Carlo realizations of all-sky catalogues of massive clusters. We find that the expected largest Einstein radius in the universe is sensitive to parameters characterizing the cosmological model, especially σ8: for a source redshift of unity, they are 42^{+9}_{-7}, 35^{+8}_{-6}, and 54^{+12}_{-7} arcseconds (errors denote 1σ cosmic variance), assuming best-fit cosmological parameters of the Wilkinson Microwave Anisotropy Probe five-year (WMAP5), three-year (WMAP3) and one-year (WMAP1) data, respectively. These values are broadly consistent with current observations given their incompleteness. The mass of the largest lens cluster can be as small as ~10^15 M_sun. For the same source redshift, we expect all-sky ~35 (WMAP5), ~15 (WMAP3), and ~150 (WMAP1) clusters that have Einstein radii larger than 20''. For a larger source redshift of 7, the largest Einstein radii grow approximately twice as large. While the values of the largest Einstein radii are almost unaffected by the level of the primordial non-Gaussianity currently of interest, the measurement of the abundance of moderately large lens clusters should probe non-Gaussianity competitively with cosmic microwave background experiments, but only if other cosmological parameters are well measured. These semi-analytic predictions are based on a rather simple representation of clusters, and hence calibrating them with N-body simulations will help to improve the accuracy. We also find that these 'superlens
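For reference, the Einstein radius that sets the angular scale in the abstract is, for a point-mass (or spherically symmetric) lens of mass M, given by the standard definition (not specific to the triaxial model used in the paper):

```latex
\theta_{\mathrm{E}} = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{\mathrm{ls}}}{D_{\mathrm{l}}\,D_{\mathrm{s}}}}
```

where D_l, D_s, and D_ls are angular-diameter distances to the lens, to the source, and between lens and source; the dependence on D_ls/D_s is why the largest radii grow with source redshift.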
PHOTOMETRIC PROPERTIES OF Lyα EMITTERS AT z ≈ 4.86 IN THE COSMOS 2 SQUARE DEGREE FIELD
Shioya, Y.; Taniguchi, Y.; Nagao, T.; Saito, T.; Trump, J.; Sasaki, S. S.; Ideue, Y.; Nakajima, A.; Matsuoka, K.; Murayama, T.; Scoville, N. Z.; Capak, P.; Ellis, R. S.; Sanders, D. B.; Kartaltepe, J.; Mobasher, B.; Aussel, H.; Koekemoer, A.; Carilli, C.; Garilli, B.
2009-05-01
We present results of a survey for Lyα emitters at z ≈ 4.86 based on optical narrowband (λ_c = 7126 Å, Δλ = 73 Å) and broadband (B, V, r', i', and z') observations of the Cosmic Evolution Survey field using Suprime-Cam on the Subaru Telescope. We find 79 Lyα emitter (LAE) candidates at z ≈ 4.86 over a contiguous survey area of 1.83 deg^2, down to a Lyα line flux of 1.47 x 10^-17 erg s^-1 cm^-2. We obtain the Lyα luminosity function with best-fit Schechter parameters of log L* = 42.9^{+0.5}_{-0.3} erg s^-1 and φ* = 1.2^{+8.0}_{-1.1} x 10^-4 Mpc^-3 for α = -1.5 (fixed). The two-point correlation function for our LAE sample is ξ(r) = (r/4.4^{+5.7}_{-2.9} Mpc)^{-1.90±0.22}. In order to investigate the field-to-field variations of the properties of Lyα emitters, we divide the survey area into nine tiles of 0.5° x 0.5° each. We find that the number density varies by a factor of ≈2 from field to field with high statistical significance. However, we find no significant field-to-field variance when we divide the field into four tiles of 0.7° x 0.7° each. We conclude that a survey area of at least 0.5 deg^2 is required to derive averaged properties of LAEs at z ≈ 5, and that our survey field is wide enough to overcome the cosmic variance.
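The Schechter fit quoted above has the standard form (a reference sketch; L*, φ*, and α are the parameters reported in the abstract):

```latex
\phi(L)\,dL \;=\; \phi^{*}\left(\frac{L}{L^{*}}\right)^{\alpha}
\exp\!\left(-\frac{L}{L^{*}}\right)\frac{dL}{L^{*}}
```

so L* sets the characteristic luminosity of the exponential cutoff, φ* the normalization, and α the faint-end slope (held fixed at -1.5 here).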
Sisterson, D. L.
2009-01-15
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, the ratio of the actual number of data records received daily at the Archive to the expected number of data records is calculated. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The US Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the first quarter of FY 2009 for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 x 2,208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 x 2,208), and for the Tropical Western Pacific (TWP) locale it is 1,876.80 hours (0.85 x 2,208). The OPSMAX time for the ARM Mobile Facility (AMF) is not reported this quarter because the data have not yet been released from China to the DMF for processing. The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data missing from the Archive result from downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is
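The uptime metrics above are simple arithmetic; a minimal sketch of the OPSMAX and VARIANCE definitions quoted in the report (function names are illustrative):

```python
def opsmax(hours_in_quarter: float, uptime_goal: float) -> float:
    """Maximum expected operating hours after planned downtime."""
    return uptime_goal * hours_in_quarter

def variance(actual: float, opsmax_hours: float) -> float:
    """Unplanned-downtime fraction: VARIANCE = 1 - (ACTUAL / OPSMAX)."""
    return 1.0 - actual / opsmax_hours

# Figures from the report: 2,208 hours in the quarter.
sgp = opsmax(2208, 0.95)  # Southern Great Plains -> 2097.6 h
nsa = opsmax(2208, 0.90)  # North Slope Alaska    -> 1987.2 h
twp = opsmax(2208, 0.85)  # Tropical W. Pacific   -> 1876.8 h
```

A site that actually operated every OPSMAX hour has VARIANCE 0; a site that lost half of its OPSMAX hours to unplanned outages has VARIANCE 0.5.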
Lu, Guoping; Zheng, Chunmiao; Wolfsberg, Andrew
2002-01-05
A Monte Carlo analysis was conducted to investigate the effect of uncertain hydraulic conductivity on the fate and transport of BTEX compounds (benzene, toluene, ethyl benzene, and xylene) at a field site on Hill Air Force Base, Utah. Microbially mediated BTEX degradation has occurred at the site through multiple terminal electron-accepting processes, including aerobic respiration, denitrification, Fe(III) reduction, sulfate reduction, and methanogenesis degradation. Multiple realizations of the hydraulic conductivity field were generated and substituted into a multispecies reactive transport model developed and calibrated for the Hill AFB site in a previous study. Simulation results show that the calculated total BTEX masses (released from a constant-concentration source) that remain in the aquifer at the end of the simulation period statistically follow a lognormal distribution. In the first analysis (base case), the calculated total BTEX mass varies from a minimum of 12% less and a maximum of 60% more than that of the previously calibrated model. This suggests that the uncertainty in hydraulic conductivity can lead to significant uncertainties in modeling the fate and transport of BTEX. Geometric analyses of calculated plume configurations show that a higher BTEX mass is associated with wider lateral spreading, while a lower mass is associated with longer longitudinal extension. More BTEX mass in the aquifer causes either a large depletion of dissolved oxygen (DO) and NO{sub 3}{sup -}, or a large depletion of DO and a large production of Fe{sup 2+}, with moderately depleted NO{sub 3}{sup -}. In an additional analysis, the effect of varying degrees of aquifer heterogeneity and associated uncertainty is examined by considering hydraulic conductivity with different variances and correlation lengths. An increase in variance leads to a higher average BTEX mass in the aquifer, while an increase in correlation length results in a lower average. This observation is
Combining weak-lensing tomography and spectroscopic redshift surveys
Cai, Yan -Chuan; Bernstein, Gary
2012-05-11
Redshift space distortion (RSD) is a powerful way of measuring the growth of structure and testing General Relativity, but it is limited by cosmic variance and the degeneracy between galaxy bias b and the growth rate factor f. The cross-correlation of lensing shear with the galaxy density field can in principle measure b in a manner free from cosmic variance limits, breaking the f-b degeneracy and allowing inference of the matter power spectrum from the galaxy survey. We analyze the growth constraints from a realistic tomographic weak lensing photo-z survey combined with a spectroscopic galaxy redshift survey over the same sky area. For sky coverage fsky = 0.5, analysis of the transverse modes measures b to 2-3% accuracy per Δz = 0.1 bin at z < 1 when ~10 galaxies arcmin^-2 are measured in the lensing survey and all halos with M > Mmin = 10^13 h^-1 M⊙ have spectra. For the gravitational growth parameter γ (f = Ω_m^γ), combining the lensing information with RSD analysis of non-transverse modes yields accuracy σ(γ) ≈ 0.01. Adding lensing information to the RSD survey improves σ(γ) by an amount equivalent to a 3x (10x) increase in RSD survey area when the spectroscopic survey extends down to halo mass 10^13.5 (10^14) h^-1 M⊙. We also find that the σ(γ) of overlapping surveys is equivalent to that of surveys 1.5-2 times larger if they are separated on the sky. This gain is greatest when the spectroscopic mass threshold is 10^13-10^14 h^-1 M⊙, similar to LRG surveys. The gain of overlapping surveys is reduced for very deep or very shallow spectroscopic surveys, but any practical surveys are more powerful when overlapped than when separated. As a result, the gain of overlapped surveys is larger in the case when the primordial power spectrum normalization is uncertain by > 0.5%.
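The γ-parametrization quoted above, f = Ω_m(z)^γ, is easy to evaluate; a minimal sketch assuming a flat ΛCDM background (Ω_m0 = 0.3 and the function names are illustrative, not from the paper):

```python
def omega_m(z: float, om0: float = 0.3) -> float:
    """Matter density parameter at redshift z in flat LCDM:
    Omega_m(z) = om0 (1+z)^3 / [om0 (1+z)^3 + (1 - om0)]."""
    e2 = om0 * (1 + z) ** 3 + (1 - om0)  # E^2(z)
    return om0 * (1 + z) ** 3 / e2

def growth_rate(z: float, gamma: float = 0.55, om0: float = 0.3) -> float:
    """Growth rate f = Omega_m(z)**gamma; gamma ~ 0.55 reproduces
    General Relativity, which is what the survey would test."""
    return omega_m(z, om0) ** gamma
```

At high redshift Ω_m(z) → 1, so f → 1 regardless of γ; the constraining power therefore comes from low and intermediate redshifts, where f is sensitive to γ.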
Vickers, D.; Thomas, C. K.
2014-09-16
Observations of the scale-dependent turbulent fluxes, variances, and the bulk transfer parameterization for sensible heat above, within, and beneath a tall closed Douglas-fir canopy in very weak winds are examined. The daytime sub-canopy vertical velocity spectra exhibit a double-peak structure with peaks at timescales of 0.8 s and 51.2 s. A double-peak structure is also observed in the daytime sub-canopy heat flux co-spectra. The daytime momentum flux co-spectra in the upper bole space and in the sub-canopy are characterized by a relatively large cross-wind component, likely due to the extremely light and variable winds, such that the definition of a mean wind direction, and subsequent partitioning of the momentum flux into along- and cross-wind components, has little physical meaning. Positive values of both momentum flux components in the sub-canopy contribute to upward transfer of momentum, consistent with the observed sub-canopy secondary wind speed maximum. For the smallest resolved scales in the canopy at nighttime, we find increasing vertical velocity variance with decreasing timescale, consistent with very small eddies, possibly generated by wake shedding from the canopy elements, that transport momentum but not heat. Unusually large values of the velocity aspect ratio within the canopy were observed, consistent with enhanced suppression of the horizontal wind components compared to the vertical by the very dense canopy. The flux-gradient approach for sensible heat flux is found to be valid for the sub-canopy and above-canopy layers when considered separately, in spite of the very small fluxes on the order of a few W m^-2 in the sub-canopy. However, single-source approaches that ignore the canopy fail because they make the heat flux appear to be counter-gradient when in fact it is aligned with the local temperature gradient in both the sub-canopy and above-canopy layers. While sub-canopy Stanton numbers agreed well with values typically reported
Çatlı, Serap; Tanır, Güneş
2013-10-01
The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS), at 6 and 18 MV photon energies. The pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and to ionization chamber measurements. The findings show that if high-Z material is used in a prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study depended on material type, density, and atomic number, as well as photon energy; as photon energy increased, backscattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method. The findings show that for accurate dose calculation in patients with hip prostheses, a Monte Carlo-based TPS should be used.
Report on the Behavior of Fission Products in the Co-decontamination Process
Martin, Leigh Robert; Riddle, Catherine Lynn
2015-09-30
This document was prepared to meet FCT level 3 milestone M3FT-15IN0302042, “Generate Zr, Ru, Mo and Tc data for the Co-decontamination Process.” This work was carried out under the auspices of the Lab-Scale Testing of Reference Processes FCT work package. This document reports preliminary work in identifying the behavior of important fission products in a Co-decontamination flowsheet. Current results show that Tc, in the presence of Zr alone, does not behave as the Argonne Model for Universal Solvent Extraction (AMUSE) code would predict. The Tc distribution is reproducibly lower than predicted, with Zr distributions remaining close to the AMUSE code prediction. In addition, it appears there may be an intricate relationship between multiple fission product metals, in different combinations, that will have a direct impact on U, Tc and other important fission products such as Zr, Mo, and Rh. More extensive testing is required to adequately predict flowsheet behavior for these variances within the fission products.
Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method
Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.
2015-01-01
The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up the SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.
Optimization of Micro Metal Injection Molding By Using Grey Relational Grade
Ibrahim, M. H. I. [Dept. Of Mechanical Engineering, Universiti Tun Hussein Onn Malaysia (UTHM), 86400 Parit Raja, Batu Pahat, Johor (Malaysia); Precision Process Research Group, Dept. of Mechanical and Materials Engineering, Faculty of Engineering, Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor (Malaysia); Muhamad, N.; Sulong, A. B.; Nor, N. H. M.; Harun, M. R.; Murtadhahadi [Precision Process Research Group, Dept. of Mechanical and Materials Engineering, Faculty of Engineering, Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor (Malaysia); Jamaludin, K. R. [UTM Razak School of Engineering and Advanced Technology, UTM International Campus, 54100 Jalan Semarak, Kuala Lumpur (Malaysia)
2011-01-17
Micro metal injection molding (µMIM), a variant of the MIM process, is a promising method for producing near net-shape metallic micro components of complex geometry. In this paper, µMIM is applied to produce 316L stainless steel micro components. Because of the highly stringent requirements on µMIM properties, the study emphasizes optimization of the process parameters, where the Taguchi method associated with Grey Relational Analysis (GRA) is implemented as a novel approach to investigating multiple performance characteristics. The basic idea of GRA is to find a grey relational grade (GRG) that converts the multi-objective problem, here density and strength, into a single-objective one. Considering the form 'the larger the better', results show that injection time (D) is the most significant parameter, followed by injection pressure (A), holding time (E), mold temperature (C) and injection temperature (B). Analysis of variance (ANOVA) is also employed to confirm the significance of each parameter in this study.
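The GRG conversion described above can be sketched as follows, assuming min-max normalization under the larger-the-better criterion and the conventional distinguishing coefficient ζ = 0.5 (the textbook GRA recipe; the paper's exact normalization may differ):

```python
def grey_relational_grade(runs, zeta=0.5):
    """runs: one list of responses (e.g. [density, strength]) per
    experiment, all larger-the-better.  Returns one GRG per run."""
    n_resp = len(runs[0])
    cols = list(zip(*runs))
    # Step 1: normalize each response column to [0, 1].
    norm = []
    for col in cols:
        lo, hi = min(col), max(col)
        if hi == lo:          # constant column: treat as ideal
            norm.append([1.0] * len(col))
        else:
            norm.append([(v - lo) / (hi - lo) for v in col])
    # Step 2: deviation from the ideal (1.0), grey relational
    # coefficient, then the grade as the mean over responses.
    grades = []
    for i in range(len(runs)):
        coeffs = []
        for j in range(n_resp):
            delta = 1.0 - norm[j][i]
            coeffs.append(zeta / (delta + zeta))  # delta_min=0, delta_max=1
        grades.append(sum(coeffs) / n_resp)
    return grades
```

The run with the highest GRG is the best compromise across all responses, which is what Taguchi-plus-GRA then optimizes level by level.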
Trace metal levels and partitioning in Wisconsin rivers: Results of background trace metals study
Shafer, M.M.; Overdier, J.T.; Armstrong, D.E.; Hurley, J.P.; Webb, D.A.
1994-12-31
Levels of total and filtrable Ag, Al, Cd, Cu, Pb, and Zn in 41 Wisconsin rivers draining watersheds of distinct homogeneous characteristics (land use/cover, soil type, surficial geology) were quantified. Levels, fluxes, and yields of trace metals are interpreted in terms of principal geochemical controls. The study samples were also used to evaluate the capability of modern ICP-MS techniques for 'background' level quantification of metals. Order-of-magnitude variations in levels of a given metal between sites were measured. This large natural variance reflects influences of soil type, dissolved organic matter (DOC), ionic strength, and suspended particulate matter (SPM) on metal levels. Significant positive correlations between DOC levels and filtrable metal concentrations were observed, demonstrating the important role that DOC plays in metal speciation and behavior. Systematic, chemically consistent differences in behavior between the metals are evident, with partition coefficients (K_d) and fraction in particulate forms ranking in the order: Al > Pb > Zn > Cr > Cd > Cu. Total metal yields correlate well with SPM yields, especially for highly partitioned elements, whereas filtrable metal yields reflect the interplay of partitioning and water yield. The State of Wisconsin will use these data in a re-evaluation of regulatory limits and in the development of water effects ratio criteria.
Hazardous waste identification: A guide to changing regulations
Stults, R.G. )
1993-03-01
The Resource Conservation and Recovery Act (RCRA) was enacted in 1976 and amended in 1984 by the Hazardous and Solid Waste Amendments (HSWA). Since then, federal regulations have generated a profusion of terms to identify and describe hazardous wastes. Regulations that define and govern management of hazardous wastes are codified in Title 40 of the Code of Federal Regulations, 'Protection of the Environment'. Title 40 regulations are divided into chapters, subchapters and parts. To be defined as hazardous, a waste must satisfy the definition of a solid waste: any discarded material not specifically excluded from regulation or granted a regulatory variance by the EPA Administrator. Some wastes and other materials have been identified as non-hazardous and are listed in 40 CFR 261.4(a) and 261.4(b). Certain wastes that satisfy the definition of hazardous waste nevertheless are excluded from regulation as hazardous if they meet specific criteria. Definitions and criteria for their exclusion are found in 40 CFR 261.4(c)-(f) and 40 CFR 261.5.
Pre-test CFD Calculations for a Bypass Flow Standard Problem
Rich Johnson
2011-11-01
The bypass flow in a prismatic high temperature gas-cooled reactor (HTGR) is the flow that occurs between adjacent graphite blocks. Gaps exist between blocks due to variances in their manufacture and installation and because of the expansion and shrinkage of the blocks from heating and irradiation. Although the temperature of fuel compacts and graphite is sensitive to the presence of bypass flow, there is great uncertainty in the level and effects of the bypass flow. The Next Generation Nuclear Plant (NGNP) program at the Idaho National Laboratory has undertaken to produce experimental data of isothermal bypass flow between three adjacent graphite blocks. These data are intended to provide validation for computational fluid dynamic (CFD) analyses of the bypass flow. Such validation data sets are called Standard Problems in the nuclear safety analysis field. Details of the experimental apparatus as well as several pre-test calculations of the bypass flow are provided. Pre-test calculations are useful in examining the nature of the flow and to see if there are any problems associated with the flow and its measurement. The apparatus is designed to be able to provide three different gap widths in the vertical direction (the direction of the normal coolant flow) and two gap widths in the horizontal direction. It is expected that the vertical bypass flow will range from laminar to transitional to turbulent flow for the different gap widths that will be available.
Waliser, D; Sperber, K; Hendon, H; Kim, D; Maloney, E; Wheeler, M; Weickmann, K; Zhang, C; Donner, L; Gottschalck, J; Higgins, W; Kang, I; Legler, D; Moncrieff, M; Schubert, S; Stern, W; Vitart, F; Wang, B; Wang, W; Woolnough, S
2008-06-02
The Madden-Julian Oscillation (MJO) interacts with, and influences, a wide range of weather and climate phenomena (e.g., monsoons, ENSO, tropical storms, mid-latitude weather), and represents an important, and as yet unexploited, source of predictability at the subseasonal time scale. Despite the important role of the MJO in our climate and weather systems, current global circulation models (GCMs) exhibit considerable shortcomings in representing this phenomenon. These shortcomings have been documented in a number of multi-model comparison studies over the last decade. However, diagnosis of model performance has been challenging, and model progress has been difficult to track, due to the lack of a coherent and standardized set of MJO diagnostics. One of the chief objectives of the US CLIVAR MJO Working Group is the development of observation-based diagnostics for objectively evaluating global model simulations of the MJO in a consistent framework. Motivation for this activity is reviewed, and the intent and justification for a set of diagnostics is provided, along with specification for their calculation, and illustrations of their application. The diagnostics range from relatively simple analyses of variance and correlation, to more sophisticated space-time spectral and empirical orthogonal function analyses. These diagnostic techniques are used to detect MJO signals, to construct composite life-cycles, to identify associations of MJO activity with the mean state, and to describe interannual variability of the MJO.
Extragalactic foreground contamination in temperature-based CMB lens reconstruction
Osborne, Stephen J.; Hanson, Duncan; Doré, Olivier E-mail: dhanson@physics.mcgill.ca
2014-03-01
We discuss the effect of unresolved point source contamination on estimates of the CMB lensing potential, from components such as the thermal Sunyaev-Zel'dovich effect, radio point sources, and the Cosmic Infrared Background. We classify the possible trispectra associated with such source populations, and construct estimators for the amplitude and scale-dependence of several of the major trispectra. We show how to propagate analytical models for these source trispectra to biases for lensing. We also construct a 'source-hardened' lensing estimator which experiences significantly smaller biases when exposed to unresolved point sources than the standard quadratic lensing estimator. We demonstrate these ideas in practice using the sky simulations of Sehgal et al., for cosmic-variance-limited experiments designed to mimic ACT, SPT, and Planck. We find that for radio sources and SZ the bias is significantly reduced, but for CIB it is essentially unchanged. However, by using the high-frequency, all-sky CIB measurements from Planck and Herschel it may be possible to suppress this contribution.
Chou, Wen-Chi; Ma, Qin; Yang, Shihui; Cao, Sha; Klingeman, Dawn M.; Brown, Steven D.; Xu, Ying
2015-03-12
The identification of transcription units (TUs) encoded in a bacterial genome is essential to elucidation of transcriptional regulation of the organism. To gain a detailed understanding of the dynamically composed TU structures, we have used four strand-specific RNA-seq (ssRNA-seq) datasets collected under two experimental conditions to derive the genomic TU organization of Clostridium thermocellum using a machine-learning approach. Our method accurately predicted the genomic boundaries of individual TUs based on two sets of parameters measuring the RNA-seq expression patterns across the genome: expression-level continuity and variance. A total of 2590 distinct TUs are predicted based on the four RNA-seq datasets. Moreover, among the predicted TUs, 44% have multiple genes. We assessed our prediction method on an independent set of RNA-seq data with longer reads. The evaluation confirmed the high quality of the predicted TUs. Functional enrichment analyses on a selected subset of the predicted TUs revealed interesting biology. To demonstrate the generality of the prediction method, we have also applied the method to RNA-seq data collected on Escherichia coli and achieved high prediction accuracies. The TU prediction program named SeqTU is publicly available at https://code.google.com/p/seqtu/. We expect that the predicted TUs can serve as the baseline information for studying transcriptional and post-transcriptional regulation in C. thermocellum and other bacteria.
STUDIES IN ASTRONOMICAL TIME SERIES ANALYSIS. VI. BAYESIAN BLOCK REPRESENTATIONS
Scargle, Jeffrey D.; Norris, Jay P.; Jackson, Brad; Chiang, James
2013-02-20
This paper addresses the problem of detecting and characterizing local variability in time series and other forms of sequential data. The goal is to identify and characterize statistically significant variations while suppressing the inevitable corrupting observational errors. We present a simple nonparametric modeling technique and an algorithm implementing it, an improved and generalized version of Bayesian Blocks, that finds the optimal segmentation of the data in the observation interval. The structure of the algorithm allows it to be used in either a real-time trigger mode or a retrospective mode. Maximum likelihood or marginal posterior functions to measure model fitness are presented for events, binned counts, and measurements at arbitrary times with known error distributions. Problems addressed include those connected with data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multivariate time series data, analysis of variance, data on the circle, other data modes, and dispersed data. Simulations provide evidence that the detection efficiency for weak signals is close to a theoretical asymptotic limit derived by Arias-Castro et al. In the spirit of Reproducible Research, all of the code and data necessary to reproduce all of the figures in this paper are included as supplementary material.
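The optimal-partition dynamic program at the heart of Bayesian Blocks can be sketched in a few lines. The block fitness below is a least-squares cost with a constant per-block penalty, an illustrative stand-in for the maximum-likelihood fitness functions of the paper; `penalty` plays the role of the prior on the number of blocks, and the function name is ours:

```python
def optimal_blocks(x, penalty=1.0):
    """O(N^2) optimal partition of the sequence x into blocks.
    Fitness of a partition = sum over blocks of
    [-(within-block sum of squared deviations) - penalty].
    Returns the change points (start index of each block)."""
    n = len(x)
    # Prefix sums give O(1) within-block sum of squares.
    s = [0.0] * (n + 1)
    s2 = [0.0] * (n + 1)
    for i, v in enumerate(x):
        s[i + 1] = s[i] + v
        s2[i + 1] = s2[i] + v * v

    def cost(a, b):  # sum of squared deviations of x[a:b]
        m = b - a
        mean = (s[b] - s[a]) / m
        return (s2[b] - s2[a]) - m * mean * mean

    best = [0.0] * (n + 1)  # best[r]: best fitness of x[:r]
    last = [0] * (n + 1)    # last[r]: start of the final block
    for r in range(1, n + 1):
        cand = [(best[a] - cost(a, r) - penalty, a) for a in range(r)]
        best[r], last[r] = max(cand)
    # Backtrack the change points.
    cps, r = [], n
    while r > 0:
        cps.append(last[r])
        r = last[r]
    return cps[::-1]
```

Swapping `cost` for the event, binned-count, or known-error fitness functions of the paper recovers the real algorithm; the dynamic-programming skeleton (and its real-time trigger use, since `best` grows one point at a time) is unchanged.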
Not Available
1980-05-01
This study is an effort to determine legal and technical constraints on the introduction of single entry longwall systems to US coal mining. US mandatory standards governing underground mining are compared and contrasted with regulations of certain foreign countries, mainly continental Europe, relating to the employment of longwall mining. Particular attention is paid to the planning and development of entries, the mining of longwall panels and consequent retrieval operations. Sequential mining of adjacent longwall panels is considered. Particular legal requirements, which constrain or prohibit single entry longwall mining in the US, are identified, and certain variances or exemptions from the regulations are described. The costs of single entry systems and of currently employed multiple entry systems are compared. Under prevailing US conditions multiple entry longwall is preferable because of safety, marginal economic benefit and compliance with US laws and regulations. However, where physical conditions become hazardous for the multiple entry method, for instance, in greater depth or in rockburst prone ground, mandatory standards, which now constrain or prohibit single entry workings, are of doubtful benefit. European methods would then provide single entry operation with improved strata control.
India's pulp and paper industry: Productivity and energy efficiency
Schumacher, Katja
1999-07-01
Historical estimates of productivity growth in India's pulp and paper sector vary from indicating an improvement to a decline in the sector's productivity. The variance may be traced to the time period of study, the source of data for analysis, and the type of indices and econometric specifications used for reporting productivity growth. The authors derive both statistical and econometric estimates of productivity growth for this sector. Their results show that productivity declined over the observed period from 1973-74 to 1993-94 by 1.1% p.a. Using a translog specification, the econometric analysis reveals that technical progress in India's pulp and paper sector has been biased towards the use of energy and material, while it has been capital and labor saving. The decline in productivity was caused largely by the protection afforded by high tariffs on imported paper products and other policies, which allowed inefficient, small plants to enter the market and flourish. Will these trends continue into the future, particularly where energy use is concerned? The authors examine the changes in structure and energy efficiency currently under way in the sector. Their analysis shows that with liberalization of the sector and tighter environmental controls, the industry is moving towards higher efficiency and productivity. However, the analysis also shows that because these improvements are hampered by significant financial and other barriers, the industry may have a long way to go.
Density impact on performance of composite Si/graphite electrodes
Dufek, Eric J.; Picker, Michael; Petkovic, Lucia M.
2016-01-27
The ability of alkali-substituted binders for composite Si and graphite negative electrodes to minimize capacity fade in lithium ion batteries is investigated. Polymer films and electrodes are described and characterized by FTIR following immersion in electrolyte (1:2 EC:DMC) for 24 h. FTIR analysis following electrode formation displayed similar alkali-ion-dependent shifts in peak location, suggesting that changes in the vibrational structure of the binder are maintained after electrode formation. The Si and graphite composite electrodes prepared using the alkali-substituted polyacrylates were also subjected to electrochemical cycling, and the performance of the Na-substituted binder was found to be superior to that of a comparable-density K-substituted system. However, in comparing performance across many different electrode densities, attention needs to be paid to making comparisons at similar densities, as low-density electrodes tend to exhibit lower capacity fade over cycling. This is highlighted by a 6% difference between a low-density K-substituted electrode and a high-density Na-substituted sample. This low variance between the two systems makes it difficult to make a quick, direct evaluation of binder performance unless electrode density is tightly controlled.
India's iron and steel industry: Productivity, energy efficiency and carbon emissions
Schumacher, Katja; Sathaye, Jayant
1998-10-01
Historical estimates of productivity growth in India's iron and steel sector range from indicating an improvement to indicating a decline in the sector's productivity. The variance may be traced to the time period of study, the source of data for analysis, and the type of indices and econometric specifications used for reporting productivity growth. The authors derive both growth accounting and econometric estimates of productivity growth for this sector. Their results show that over the observed period from 1973-74 to 1993-94 productivity declined by 1.71% as indicated by the Translog index; calculations of the Kendrick and Solow indices support this finding. Using a translog specification, the econometric analysis reveals that technical progress in India's iron and steel sector has been biased towards the use of energy and material, while it has been capital and labor saving. The decline in productivity was caused largely by the protective policy regarding the price and distribution of iron and steel, as well as by large inefficiencies in public-sector integrated steel plants. Will these trends continue into the future, particularly where energy use is concerned? Most likely they will not. The authors examine the structural and energy-efficiency changes currently under way in the sector. Their analysis shows that with the liberalization of the iron and steel sector, the industry is rapidly moving towards world-best technology, which will result in fewer carbon emissions and more efficient energy use in existing and future plants.
India's Fertilizer Industry: Productivity and Energy Efficiency
Schumacher, K.; Sathaye, J.
1999-07-01
Historical estimates of productivity growth in India's fertilizer sector range from indicating an improvement to indicating a decline in the sector's productivity. The variance may be traced to the time period of study, the source of data for analysis, and the type of indices and econometric specifications used for reporting productivity growth. Our analysis shows that over the twenty-year period from 1973 to 1993, productivity in the fertilizer sector increased by 2.3% per annum. An econometric analysis reveals that technical progress in India's fertilizer sector has been biased towards the use of energy, while it has been capital and labor saving. The increase in productivity took place during the era of total control, when a retention price system and distribution control were in effect. With liberalization of the fertilizer sector and the reduction of subsidies, productivity has declined substantially since the early 1990s. Industrial policies and fiscal incentives still play a major role in the Indian fertilizer sector. As substantial energy-saving and carbon-reduction potential exists, energy policies can help overcome barriers to the adoption of these measures by giving proper incentives and correcting distorted prices.
Sensitivity testing and analysis
Neyer, B.T.
1991-01-01
New methods of sensitivity testing and analysis are proposed. The new test method uses Maximum Likelihood Estimates to pick the next test level so as to maximize knowledge of both the mean, μ, and the standard deviation, σ, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both μ and σ than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence, and can calculate confidence regions for μ, σ, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT, which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods are explained and compared to the presently used methods. 19 refs., 12 figs.
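The maximum-likelihood machinery underlying such sensitivity tests can be illustrated with a short sketch. Assuming the usual probit model, in which the probability of response at stimulus level x is Φ((x − μ)/σ), the code below fits μ and σ to go/no-go data by a brute-force grid search. It illustrates only the likelihood being maximized; it is not a reproduction of Neyer's adaptive test-level algorithm, and the data and grid ranges are illustrative assumptions.

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_likelihood(mu, sigma, levels, outcomes):
    # Probit (normal threshold) model: P(respond at level x) = Phi((x - mu)/sigma).
    ll = 0.0
    for x, y in zip(levels, outcomes):
        p = norm_cdf((x - mu) / sigma)
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard against log(0)
        ll += y * math.log(p) + (1 - y) * math.log(1.0 - p)
    return ll

def mle_grid(levels, outcomes, mu_range, sigma_range, steps=200):
    # Brute-force maximum likelihood estimate of (mu, sigma) on a grid.
    best = (None, None, -math.inf)
    for i in range(steps + 1):
        mu = mu_range[0] + (mu_range[1] - mu_range[0]) * i / steps
        for j in range(1, steps + 1):
            sigma = sigma_range[0] + (sigma_range[1] - sigma_range[0]) * j / steps
            ll = log_likelihood(mu, sigma, levels, outcomes)
            if ll > best[2]:
                best = (mu, sigma, ll)
    return best
```

Note that for perfectly separated go/no-go data (as in the toy example below) the MLE of σ does not exist and the grid search returns the smallest σ on the grid; real sensitivity tests are designed to produce overlapping responses precisely to avoid this.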
SIMPLIFIED PHYSICS-BASED MODELS: RESEARCH TOPICAL REPORT ON TASK #2
Mishra, Srikanta; Ganesh, Priya
2014-10-31
We present a simplified-physics approach, in which only the most important physical processes are modeled, to develop and validate simplified predictive models of CO2 sequestration in deep saline formations. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. We use a set of well-designed full-physics compositional simulations to understand the key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of the fractional-flow curve, the variance of layer permeability values, and the nature of the vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Similar correlations are also developed to predict the average pressure within the injection reservoir and the pressure buildup within the caprock.
An Evaluation of Monte Carlo Simulations of Neutron Multiplicity Measurements of Plutonium Metal
Mattingly, John; Miller, Eric; Solomon, Clell J. Jr.; Dennis, Ben; Meldrum, Amy; Clarke, Shaun; Pozzi, Sara
2012-06-21
In January 2009, Sandia National Laboratories conducted neutron multiplicity measurements of a polyethylene-reflected plutonium metal sphere. Over the past 3 years, those experiments have been collaboratively analyzed using Monte Carlo simulations conducted by the University of Michigan (UM), Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and North Carolina State University (NCSU). Monte Carlo simulations of the experiments consistently overpredict the mean and variance of the measured neutron multiplicity distribution. This paper presents a sensitivity study conducted to evaluate the potential sources of the observed errors. MCNPX-PoliMi simulations of plutonium neutron multiplicity measurements exhibited systematic over-prediction of the neutron multiplicity distribution, and the over-prediction tended to increase with increasing multiplication. MCNPX-PoliMi had previously been validated against only very low multiplication benchmarks. We conducted sensitivity studies to try to identify the cause(s) of the simulation errors; we eliminated all the potential causes we identified except for Pu-239 ν̄. A very small change (-1.1%) in the Pu-239 ν̄ dramatically improved the accuracy of the MCNPX-PoliMi simulation for all 6 measurements. This observation is consistent with the trend observed in the bias exhibited by the MCNPX-PoliMi simulations: a very small error in ν̄ is 'magnified' by increasing multiplication. We applied a scalar adjustment to Pu-239 ν̄ (independent of neutron energy); an adjustment that depends on energy is probably more appropriate.
Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.
2015-11-19
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations, offering an alternative to histogram tallies for obtaining global solutions. With KDEs, a single event, either a collision or a particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. KDEs therefore show potential for obtaining estimates of a global solution with reduced variance compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications, but little work has been done on obtaining reaction rates with KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies into the solution; an ad hoc correction is introduced that reduces the errors at material interfaces to below 4%.
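The contrast between a histogram tally and a kernel estimate can be sketched in a few lines. The example below is a generic one-dimensional Gaussian KDE applied to synthetic samples, not the MFP KDE developed in the paper; it merely illustrates the key property described above, that every sample contributes to the score at every tally point.

```python
import math
import random

def gaussian_kernel(u):
    # Standard normal kernel.
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(samples, points, bandwidth):
    # Evaluate a Gaussian KDE at arbitrary tally points: each sample
    # contributes to every point, weighted by its scaled distance.
    n = len(samples)
    return [
        sum(gaussian_kernel((x - s) / bandwidth) for s in samples) / (n * bandwidth)
        for x in points
    ]

# Synthetic "Monte Carlo" samples from a standard normal distribution.
random.seed(1)
samples = [random.gauss(0.0, 1.0) for _ in range(2000)]
points = [-2.0, -1.0, 0.0, 1.0, 2.0]
density = kde(samples, points, bandwidth=0.3)
```

Unlike a histogram, the tally points here can be placed at any resolution without subdividing the samples into ever-emptier bins, which is the source of the variance advantage noted in the abstract.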
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte Carlo-deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3 - 10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated. Given the favorable results obtained in this work, we believe the applicability of the MC method to reactor analysis calculations could be realized in the near future.
Griffin, Joshua D.; Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Simulation of winds as seen by a rotating vertical axis wind turbine blade
George, R.L.
1984-02-01
The objective of this report is to provide turbulent wind analyses relevant to the design and testing of Vertical Axis Wind Turbines (VAWT). A technique was developed for utilizing high-speed turbulence wind data from a line of seven anemometers at a single level to simulate the wind seen by a rotating VAWT blade. Twelve data cases, representing a range of wind speeds and stability classes, were selected from the large volume of data available from the Clayton, New Mexico, Vertical Plane Array (VPA) project. Simulations were run of the rotationally sampled wind speed relative to the earth, as well as the tangential and radial wind speeds, which are relative to the rotating wind turbine blade. Spectral analysis is used to compare and assess wind simulations from the different wind regimes, as well as from alternate wind measurement techniques. The variance in the wind speed at frequencies at or above the blade rotation rate is computed for all cases, and is used to quantitatively compare the VAWT simulations with Horizontal Axis Wind Turbine (HAWT) simulations. Qualitative comparisons are also made with direct wind measurements from a VAWT blade.
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.; Ross, Kyle; Cardoni, Jeffrey N; Kalinich, Donald A.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie; Ghosh, S. Tina
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the safety relief valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependence of the effect of cesium chemical form on the accident progression.
Optimizing weak lensing mass estimates for cluster profile uncertainty
Gruen, D.; Bernstein, G. M.; Lam, T. Y.; Seitz, S.
2011-09-11
Weak lensing measurements of cluster masses are necessary for calibrating mass-observable relations (MORs) to investigate the growth of structure and the properties of dark energy. However, the measured cluster shear signal varies at fixed mass M_{200m} due to the inherent ellipticity of background galaxies, intervening structures along the line of sight, and variations in cluster structure due to scatter in concentrations, asphericity, and substructure. We use N-body simulated halos to derive and evaluate a weak lensing circular aperture mass measurement M_{ap} that minimizes the mass estimate variance <(M_{ap} - M_{200m})^{2}> in the presence of all these forms of variability. Depending on halo mass and observational conditions, the resulting mass estimator improves on M_{ap} filters optimized for circular NFW-profile clusters in the presence of uncorrelated large-scale structure (LSS) about as much as the latter improve on an estimator that only minimizes the influence of shape noise. Optimizing for uncorrelated LSS while ignoring the variation of internal cluster structure puts too much weight on the profile near the cores of halos, and under some circumstances can even be worse than not accounting for LSS at all. We conclude by discussing the impact of variability in cluster structure and correlated structures on the design and performance of weak lensing surveys intended to calibrate cluster MORs.
Fuel cycle cost uncertainty from nuclear fuel cycle comparison
Li, J.; McNelis, D.; Yim, M.S.
2013-07-01
This paper examines the uncertainty in fuel cycle cost (FCC) calculations by considering both model and parameter uncertainty. Four fuel cycle options were compared in the analysis: the once-through cycle (OT), the DUPIC cycle, the MOX cycle, and a closed fuel cycle with fast reactors (FR). Model uncertainty was addressed by using three different FCC modeling approaches, with and without consideration of the time value of money. The relative ratios of FCC in comparison to OT did not change much across modeling approaches, an observation consistent with the results of the sensitivity study for the discount rate. Two different sets of data with uncertainty ranges on unit costs were used to address the parameter uncertainty of the FCC calculation. The sensitivity study showed that the dominant contributor to the total variance of FCC is the uranium price. In general, the FCC of OT was found to be the lowest, followed by FR, MOX, and DUPIC; however, depending on the uranium price, the FR cycle can have a lower FCC than OT. The reprocessing cost was also found to have a major impact on FCC.
MM-Estimator and Adjusted Super Smoother Based Simultaneous Prediction Confidence Intervals
Energy Science and Technology Software Center (OSTI)
2002-07-19
A novel application of regression analysis (MM-estimator) with simultaneous prediction confidence intervals is proposed to detect up- or down-regulated genes, which appear as outliers in scatter plots of log-transformed red (Cy5 fluorescent dye) versus green (Cy3 fluorescent dye) intensities. Advantages of the application: (1) the robust and resistant MM-estimator is a reliable method for building a linear regression in the presence of outliers; (2) exploratory data analysis tools (boxplots, averaged shifted histograms, quantile-quantile normal plots, and scatter plots) are used to test visually the underlying assumptions of linearity and contaminated normality in microarray data; (3) simultaneous prediction confidence intervals (SPCIs) guarantee a desired confidence level across the whole range of data points used for the scatter plots. The result of the outlier detection procedure is a set of significantly differentially expressed genes extracted from the employed microarray data set. A scatter plot smoother (super smoother or locally weighted regression) is used to quantify heteroscedasticity in residual variance, which commonly occurs in the lower- and higher-intensity areas. The set of differentially expressed genes is quantified using interval estimates for P-values as a probabilistic measure of being an outlier by chance. Monte Carlo simulations are used to adjust the super-smoother-based SPCIs.
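The robust-regression idea can be illustrated with a minimal sketch. The code below fits a straight line by iteratively reweighted least squares with Huber weights, a simpler relative of the MM-estimator used in the work above; the data, tuning constant, and MAD-based residual scale are illustrative assumptions, not the package's implementation.

```python
def huber_weight(r, k=1.345):
    # Huber weight: full weight inside the threshold, downweighted outside.
    r = abs(r)
    return 1.0 if r <= k else k / r

def robust_line_fit(x, y, iters=20):
    # Iteratively reweighted least squares for y = a + b*x with Huber
    # weights -- a simple robust stand-in for an MM-estimator.
    n = len(x)
    w = [1.0] * n
    a = b = 0.0
    for _ in range(iters):
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
        b = sxy / sxx
        a = my - b * mx
        resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        # Scale residuals by a robust spread estimate (MAD) before weighting,
        # so a gross outlier cannot inflate its own threshold.
        mad = sorted(abs(r) for r in resid)[n // 2] + 1e-12
        w = [huber_weight(r / (1.4826 * mad)) for r in resid]
    return a, b
```

A single gross outlier, which would badly bias an ordinary least squares fit, is progressively downweighted and the fitted line converges to the bulk of the data.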
Nazarian, Dalar; Ganesh, P.; Sholl, David S.
2015-09-30
We compiled a test set of chemically and topologically diverse metal-organic frameworks (MOFs) with high-accuracy, experimentally derived crystallographic structure data. The test set was used to benchmark the performance of Density Functional Theory (DFT) functionals (M06L, PBE, PW91, PBE-D2, PBE-D3, and vdW-DF2) for predicting lattice parameters, unit cell volume, bonded parameters, and pore descriptors. On average, PBE-D2, PBE-D3, and vdW-DF2 predict more accurate structures, but all functionals predicted pore diameters within 0.5 Å of the experimental diameter for every MOF in the test set. The test set was also used to assess the variance in the performance of DFT functionals for elastic properties and atomic partial charges. DFT-predicted elastic properties such as the minimum shear modulus and Young's modulus can differ by an average of 3 and 9 GPa, respectively, for rigid MOFs such as those in the test set. Moreover, the partial charges calculated by vdW-DF2 deviate the most from the other functionals, while there is no significant difference between the partial charges calculated by M06L, PBE, PW91, PBE-D2, and PBE-D3 for the MOFs in the test set. We find that while there are differences in the magnitude of the properties predicted by the various functionals, these discrepancies are small compared to the accuracy necessary for most practical applications.
Water Velocity Measurements on a Vertical Barrier Screen at the Bonneville Dam Second Powerhouse
Hughes, James S.; Deng, Zhiqun; Weiland, Mark A.; Martinez, Jayson J.; Yuan, Yong
2011-11-22
Fish screens at hydroelectric dams help to protect rearing and migrating fish by preventing them from passing through the turbines and directing them towards the bypass channels by providing a sweeping flow parallel to the screen. However, fish screens may actually be harmful to fish if they become impinged on the surface of the screen or become disoriented due to poor flow conditions near the screen. Recent modifications to the vertical barrier screens (VBS) at the Bonneville Dam second powerhouse (B2) intended to increase the guidance of juvenile salmonids into the juvenile bypass system (JBS) have resulted in high mortality and descaling rates of hatchery subyearling Chinook salmon during the 2008 juvenile salmonid passage season. To investigate the potential cause of the high mortality and descaling rates, an in situ water velocity measurement study was conducted using acoustic Doppler velocimeters (ADV) in the gatewell slot at Units 12A and 14A of B2. From the measurements collected the average approach velocity, sweep velocity, and the root mean square (RMS) value of the velocity fluctuations were calculated. The approach velocities measured across the face of the VBS varied but were mostly less than 0.3 m/s. The sweep velocities also showed large variances across the face of the VBS with most measurements being less than 1.5 m/s. This study revealed that the approach velocities exceeded criteria recommended by NOAA Fisheries and Washington State Department of Fish and Wildlife intended to improve fish passage conditions.
Alternative disposal options for alpha-mixed low-level waste
Loomis, G.G.; Sherick, M.J.
1995-12-31
This paper presents several disposal options for Department of Energy alpha-mixed low-level waste. The mixed nature of the waste favors thermally treating it to either an iron-enriched basalt or a glass waste form, at which point a multitude of reasonable disposal options, including in-state disposal, become possible. Most notably, these waste forms will meet the land-ban restrictions. However, the thermal treatment of this waste involves considerable waste handling and complicated, expensive offgas systems with secondary waste management problems, and in the United States public perception of offgas systems in the radioactive incinerator area is unfavorable. The alternatives presented here are nonthermal in nature and involve homogenizing the waste with cryogenic techniques, followed by complete encapsulation with a variety of chemical/grouting agents into retrievable waste forms. Once encapsulated, the waste forms are suitable for transport out of state or for in-state disposal. This paper investigates the variances that would have to be obtained and contrasts the alternative encapsulation idea with the thermal treatment option.
Guo, Zhun; Wang, Minghuai; Qian, Yun; Larson, Vincent E.; Ghan, Steven J.; Ovchinnikov, Mikhail; Bogenschutz, Peter; Zhao, Chun; Lin, Guang; Zhou, Tianjun
2014-09-01
In this study, we investigate the sensitivity of simulated shallow cumulus and stratocumulus clouds to selected tunable parameters of Cloud Layers Unified by Binormals (CLUBB) in the single-column version of Community Atmosphere Model version 5 (SCAM5). A quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space, and a generalized linear model is adopted to study the responses of simulated cloud fields to tunable parameters. One stratocumulus and two shallow convection cases are configured at both coarse and fine vertical resolutions. Our results show that most of the variance in the simulated cloud fields can be explained by a small number of tunable parameters. The parameters related to the Newtonian and buoyancy-damping terms of the total water flux are found to be the most influential for stratocumulus. For shallow cumulus, the most influential parameters are those related to the skewness of vertical velocity, reflecting the strong coupling between cloud properties and dynamics in this regime. The influential parameters in the stratocumulus case are sensitive to the choice of vertical resolution, while little sensitivity is found for the shallow convection cases, as the eddy mixing length (or dissipation time scale) plays a more important role and depends more strongly on vertical resolution in stratocumulus than in shallow convection. The influential parameters remain almost unchanged when the number of tunable parameters increases from 16 to 35. This study improves understanding of CLUBB behavior associated with parameter uncertainties.
MAVTgsa: An R Package for Gene Set (Enrichment) Analysis
Chien, Chih-Yi; Chang, Ching-Wei; Tsai, Chen-An; Chen, James J.
2014-01-01
Gene set analysis methods aim to determine whether an a priori defined set of genes shows a statistically significant difference in expression for either categorical or continuous outcomes. Although many methods for gene set analysis have been proposed, a systematic analysis tool for identifying different types of gene set significance modules had not previously been developed. This work presents an R package, called MAVTgsa, which includes three different methods for integrated gene set enrichment analysis. (1) The one-sided OLS (ordinary least squares) test detects coordinated changes of genes in a gene set in one direction, either up- or down-regulation. (2) The two-sided MANOVA (multivariate analysis of variance) detects changes in both directions for studies with two or more experimental conditions. (3) A random-forests-based procedure identifies gene sets that can accurately predict samples from different experimental conditions or that are associated with continuous phenotypes. MAVTgsa computes the P-values and FDR (false discovery rate) q-values for all gene sets in the study. Furthermore, MAVTgsa provides several visualization outputs to support and interpret the enrichment results. This package is available online.
Sailer, S.J.
1996-08-01
This Quality Assurance Project Plan (QAPjP) specifies the quality of data necessary and the characterization techniques employed at the Idaho National Engineering Laboratory (INEL) to meet the objectives of the Department of Energy (DOE) Waste Isolation Pilot Plant (WIPP) Transuranic Waste Characterization Quality Assurance Program Plan (QAPP) requirements. This QAPjP is written to conform with the requirements and guidelines specified in the QAPP and the associated documents referenced in the QAPP. It is one of a set of five interrelated QAPjPs that describe the INEL Transuranic Waste Characterization Program (TWCP); each of the five facilities participating in the TWCP has a QAPjP that describes the activities applicable to that particular facility. This QAPjP describes the roles and responsibilities of the Idaho Chemical Processing Plant (ICPP) Analytical Chemistry Laboratory (ACL) in the TWCP. Data quality objectives and quality assurance objectives are explained. Sample analysis procedures and associated quality assurance measures are also addressed, including sample chain of custody; data validation, usability, and reporting; documentation and records; audits and assessments; laboratory QC samples; and instrument testing, inspection, maintenance, and calibration. Finally, administrative quality control measures, such as document control, control of nonconformances, variances, and QA status reporting, are described.
Not Available
1982-07-01
This manual provides general guidance for Department of Energy (DOE) officials for complying with Sect. 402 of the Clean Water Act (CWA) of 1977 and amendments. Section 402 authorizes the US Environmental Protection Agency (EPA) or states with EPA approved programs to issue National Pollutant Discharge Elimination System (NPDES) permits for the direct discharge of waste from a point source into waters of the United States. Although the nature of a project dictates the exact information requirements, every project has similar information requirements on the environmental setting, type of discharge(s), characterization of effluent, and description of operations and wastewater treatment. Additional information requirements for projects with ocean discharges, thermal discharges, and cooling water intakes are discussed. Guidance is provided in this manual on general methods for collecting, analyzing, and presenting information for an NPDES permit application. The NPDES program interacts with many sections of the CWA; therefore, background material on pertinent areas such as effluent limitations, water quality standards, toxic substances, and nonpoint source pollutants is included in this manual. Modifications, variances, and extensions applicable to NPDES permits are also discussed.
EXPECTED LARGE SYNOPTIC SURVEY TELESCOPE (LSST) YIELD OF ECLIPSING BINARY STARS
Prsa, Andrej; Pepper, Joshua; Stassun, Keivan G.
2011-08-15
In this paper, we estimate the Large Synoptic Survey Telescope (LSST) yield of eclipsing binary stars. LSST will survey ~20,000 deg² of the southern sky over a period of 10 years in six photometric passbands to r ~ 24.5. We generate a set of 10,000 eclipsing binary light curves sampled at the LSST time cadence across the whole sky, with noise added as a function of apparent magnitude. This set is passed to the analysis-of-variance period finder to assess the recoverability rate for the periods, and the successfully phased light curves are passed to the artificial-intelligence-based pipeline ebai to assess the recoverability rate in terms of the eclipsing binaries' physical and geometric parameters. We find that, out of ~24 million eclipsing binaries observed by LSST with a signal-to-noise ratio >10 over the mission lifetime, ~28%, or 6.7 million, can be fully characterized by the pipeline. Of those, ~25%, or 1.7 million, will be double-lined binaries, a true treasure trove for stellar astrophysics.
Single-qubit tests of Bell-like inequalities
Zela, F. de
2007-10-15
This paper discusses some tests of Bell-like inequalities not requiring entangled states. The proposed tests are based on consecutive measurements on a single qubit. Available hidden-variable models for a single qubit [see, e.g., J. S. Bell, Rev. Mod. Phys. 38, 447 (1966)] reproduce the predictions of quantum mechanics and hence violate the Bell-like inequalities addressed in this paper. It is shown how this fact is connected with the state 'collapse' and with its random nature. Thus, it becomes possible to test truly realistic and deterministic hidden-variable models. In this way, it can be shown that a hidden-variable model should entail at least one of the following features: (i) nonlocality, (ii) contextuality, or (iii) discontinuous measurement-dependent probability functions. The last two features are put to the test with the experiments proposed in this paper. A hidden-variable model that is noncontextual and deterministic would be at variance with some predictions of quantum mechanics. Furthermore, the proposed tests are more likely to be loophole-free, as compared to former ones.
Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment
Greg J. Shott, Vefa Yucel, Lloyd Desotell; Non-Nstec Authors: G. Pyles and Jon Carilli
2007-06-01
Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
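The uncertainty-propagation machinery described above (Latin hypercube sampling plus sample-based sensitivity measures) can be sketched compactly. The flux model below is a deliberately crude stand-in with made-up parameter ranges, not the Regulatory Guide 3.64 method or either numerical model; it only shows the mechanics of stratified sampling and rank-correlation sensitivity.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in d dimensions with one point in each of n equal
    strata per dimension, strata paired randomly across dimensions."""
    u = (np.arange(n)[:, None] + rng.uniform(size=(n, d))) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]
    return u

def rank(x):
    r = np.empty(x.size)
    r[np.argsort(x)] = np.arange(x.size)
    return r

def spearman(a, b):
    """Rank correlation: a simple sample-based sensitivity measure."""
    return np.corrcoef(rank(a), rank(b))[0, 1]

rng = np.random.default_rng(1)
n = 2000
u = latin_hypercube(n, 3, rng)

# Deliberately crude stand-in for a radon flux density model (NOT the
# RG 3.64 equations): flux ~ inventory * emanation * sqrt(diffusion)
inventory = 10.0 ** (2.0 * u[:, 0])       # invented range: two decades
emanation = 0.10 + 0.30 * u[:, 1]         # invented range
diffusion = 1e-6 * 10.0 ** u[:, 2]        # invented range: one decade
flux = inventory * emanation * np.sqrt(diffusion)

sensitivity = {name: abs(spearman(v, flux))
               for name, v in [("inventory", inventory),
                               ("emanation", emanation),
                               ("diffusion", diffusion)]}
```

Because the toy inventory spans two decades while the other inputs vary far less, the rank correlation singles it out as the dominant input; the study's ranking of the diffusion-coefficient model, emanation coefficient, and inventory comes from the same kind of comparison on the real models.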
Tracking stochastic resonance curves using an assisted reference model
Calderón Ramírez, Mario; Rico Martínez, Ramiro; Parmananda, P.
2015-06-15
The optimal noise amplitude for Stochastic Resonance (SR) is located by employing an Artificial Neural Network (ANN) reference model with a nonlinear predictive capability. A modified Kalman Filter (KF) was coupled to this reference model in order to compensate for semi-quantitative forecast errors. Three manifestations of stochastic resonance, namely Periodic Stochastic Resonance (PSR), Aperiodic Stochastic Resonance (ASR), and Coherence Resonance (CR), were considered. Using noise amplitude as the control parameter, the cross-correlation curve between the sub-threshold input signal and the system response is tracked for PSR and ASR, whereas the Normalized Variance curve is tracked for CR. The goal of the present work is to track these curves and converge to their respective extremal points. The ANN reference model strategy captures and subsequently predicts the nonlinear features of the model system, while the KF compensates for the perturbations inherent to the superimposed noise. This technique, implemented in the FitzHugh-Nagumo model, enabled us to track the resonance curves and eventually locate their optimal (extremal) values, yielding the optimal value of noise for all three manifestations of SR.
David Muth, Jr.; Jared Abodeely; Richard Nelson; Douglas McCorkle; Joshua Koch; Kenneth Bryden
2011-08-01
Agricultural residues have significant potential as a feedstock for bioenergy production, but removing these residues can have negative impacts on soil health. Models and datasets that can support decisions about sustainable agricultural residue removal are available; however, no tools currently exist that are capable of simultaneously addressing all of the environmental factors that can limit the availability of residue. The VE-Suite model integration framework has been used to couple a set of environmental process models to support agricultural residue removal decisions. The RUSLE2, WEPS, and Soil Conditioning Index models have been integrated. A disparate set of databases providing the soils, climate, and management practice data required to run these models has also been integrated. The integrated system has been demonstrated for two example cases. First, an assessment using high-spatial-fidelity crop yield data has been run for a single farm. This analysis shows the significant variance in sustainably accessible residue across a single farm and crop year. The second example is an aggregate assessment of the agricultural residues available in the state of Iowa. This implementation of the integrated systems model demonstrates the capability to run the vast range of scenarios required to represent a large geographic region.
Fermentation and Hydrogen Metabolism Affect Uranium Reduction by Clostridia
Gao, Weimin; Francis, Arokiasamy J.
2013-01-01
Previously, it has been shown not only that uranium reduction under fermentation conditions is common among clostridia species, but also that the strains differ in the extent of their capability and that the pH of the culture significantly affects uranium(VI) reduction. In this study, using HPLC and GC techniques, the metabolic properties of those clostridial strains active in uranium reduction under fermentation conditions have been characterized and their effects on the variance in uranium-reduction capability discussed. The relationship between hydrogen metabolism and uranium reduction has then been further explored, and the important role played by hydrogenase in uranium(VI) and iron(III) reduction by clostridia demonstrated. When hydrogen was provided as the headspace gas, uranium(VI) reduction occurred in the presence of whole cells of clostridia, in contrast to the case with nitrogen as the headspace gas. Without clostridia cells, hydrogen alone could not bring about uranium(VI) reduction. In line with this observation, it was also found that either copper(II) addition or iron depletion in the medium could compromise uranium reduction by clostridia. Finally, a comprehensive model was proposed to explain uranium reduction by clostridia and its relationship to the overall metabolism, especially hydrogen (H{sub 2}) production.
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane
2014-05-01
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
2D stochastic-integral models for characterizing random grain noise in titanium alloys
Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Cherry, Matthew; Pilchak, Adam; Knopp, Jeremy S.; Blodgett, Mark P.
2014-02-18
We extend our previous work, in which we applied high-dimensional model representation (HDMR) and analysis of variance (ANOVA) concepts to the characterization of a metallic surface that has undergone a shot-peening treatment to reduce residual stresses, and has, therefore, become a random conductivity field. That example was treated as a one-dimensional problem, because those were the only data available. In this study, we develop a more rigorous two-dimensional model for characterizing random, anisotropic grain noise in titanium alloys. Such a model is necessary if we are to accurately capture the 'clumping' of crystallites into long chains that appear during the processing of the metal into a finished product. The mathematical model starts with an application of the Karhunen-Loève (K-L) expansion for the random Euler angles that characterize the orientation of each crystallite in the sample. The random orientation of each crystallite then defines the stochastic nature of the electrical conductivity tensor of the metal. We study two possible covariances, Gaussian and double-exponential, which serve as the kernel of the K-L integral equation, and find that, of the two, the double-exponential matches the measurements more closely. Results based on data from a Ti-7Al sample will be given, and further applications of HDMR and ANOVA will be discussed.
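The discrete analogue of the K-L machinery described above is easy to exhibit: build the covariance matrix of the random field on a grid, eigendecompose it, and synthesize realizations from the leading modes. The sketch below is a 1D illustration with an arbitrary correlation length, not the paper's 2D anisotropic model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, L = 100, 0.2                         # grid size and correlation length (assumed)
x = np.linspace(0.0, 1.0, n)

# Double-exponential covariance kernel on a 1D grid; the paper's 2D
# anisotropic case would use different correlation lengths per axis
C = np.exp(-np.abs(x[:, None] - x[None, :]) / L)

# Discrete Karhunen-Loeve expansion = eigendecomposition of the covariance
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]      # sort modes by decreasing variance

# Keep the leading modes that carry 95% of the total variance
k = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95)) + 1

# Synthesize realizations of the random field (e.g., an orientation-angle field)
xi = rng.standard_normal((k, 5000))
fields = phi[:, :k] @ (np.sqrt(lam[:k])[:, None] * xi)

reconstruction_error = np.abs(np.cov(fields) - C).max()
```

Truncating at 95% of the trace keeps far fewer modes than grid points, yet the sample covariance of the synthesized fields reproduces the double-exponential kernel to within truncation and sampling error.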
A Two-Stage Kalman Filter Approach for Robust and Real-Time Power System State Estimation
Zhang, Jinghe; Welch, Greg; Bishop, Gary; Huang, Zhenyu
2014-04-01
As electricity demand continues to grow and renewable energy increases its penetration in the power grid, real-time state estimation becomes essential for system monitoring and control. Recent developments in phasor technology make this possible with high-speed time-synchronized data provided by Phasor Measurement Units (PMUs). In this paper we present a two-stage Kalman filter approach to estimate the static state of voltage magnitudes and phase angles, as well as the dynamic state of generator rotor angles and speeds. Kalman filters achieve optimal performance only when the system noise characteristics have known statistical properties (zero-mean, Gaussian, and spectrally white). In practice, however, the process and measurement noise models are usually difficult to obtain. We have therefore developed the Adaptive Kalman Filter with Inflatable Noise Variances (AKF with InNoVa), an algorithm that can efficiently identify and reduce the impact of incorrect system modeling and/or erroneous measurements. In stage one, we estimate the static state from raw PMU measurements using the AKF with InNoVa; in stage two, the estimated static state is fed into an extended Kalman filter to estimate the dynamic state. Simulations demonstrate its robustness to sudden changes in system dynamics and to erroneous measurements.
Daily diaries of respiratory symptoms and air pollution: Methodological issues and results
Schwartz, J.; Wypij, D.; Dockery, D.; Ware, J.; Spengler, J.; Ferris, B. Jr.; Zeger, S.
1991-01-01
Daily diaries of respiratory symptoms are a powerful technique for detecting acute effects of air pollution exposure. While conceptually simple, these diary studies can be difficult to analyze. The daily symptom rates are highly correlated, even after adjustment for covariates, and this lack of independence must be considered in the analysis. Possible approaches include the use of incidence instead of prevalence rates and autoregressive models. Heterogeneity among subjects also induces dependencies in the data. These can be addressed by stratification and by two-stage models such as those developed by Korn and Whittemore. These approaches have been applied to two data sets: a cohort of school children participating in the Harvard Six Cities Study and a cohort of student nurses in Los Angeles. Both data sets provide evidence of autocorrelation and heterogeneity. Controlling for autocorrelation corrects the precision estimates, and because diary data are usually positively autocorrelated, this leads to larger variance estimates. Controlling for heterogeneity among subjects appears to increase the effect sizes for air pollution exposure. Preliminary results indicate associations between sulfur dioxide and cough incidence in children and between nitrogen dioxide and phlegm incidence in student nurses.
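The variance-inflation point in this abstract (positively autocorrelated daily data make naive precision estimates too optimistic) can be checked numerically. Below is a small simulation with an AR(1) series standing in for a subject's daily symptom series; the lag-1 correlation of 0.6 is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def ar1(n, rho, rng):
    """Stationary AR(1) series with unit marginal variance."""
    e = rng.normal(size=n) * np.sqrt(1.0 - rho ** 2)
    x = np.empty(n)
    x[0] = rng.normal()
    for i in range(1, n):
        x[i] = rho * x[i - 1] + e[i]
    return x

rho, n, reps = 0.6, 200, 2000

# Each replicate mimics one positively autocorrelated daily diary series
means = np.array([ar1(n, rho, rng).mean() for _ in range(reps)])

naive_var = 1.0 / n                                   # i.i.d. formula: sigma^2 / n
corrected_var = (1.0 / n) * (1 + rho) / (1 - rho)     # AR(1) large-sample correction
empirical_var = means.var()
```

With rho = 0.6 the true variance of the mean is about four times the i.i.d. formula, which is exactly the direction of the correction the authors describe: accounting for autocorrelation enlarges the variance estimates.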
Dynamics of dispersive photon-number QND measurements in a micromaser
Kozlovskii, A. V. [Russian Academy of Sciences, Lebedev Physical Institute (Russian Federation)], E-mail: kozlovsk@sci.lebedev.ru
2007-04-15
A numerical analysis of dispersive quantum nondemolition measurement of the photon number of a microwave cavity field is presented. Simulations show that a key property of the dispersive atom-field interaction used in Ramsey interferometry is the extremely high sensitivity of the dynamics of atomic and field states to basic parameters of the system. When a monokinetic atomic beam is sent through a microwave cavity, a qualitative change in the field state can be caused by an uncontrollably small deviation of parameters (such as atom path length through the cavity, atom velocity, cavity mode frequency detuning, or atom-field coupling constants). The resulting cavity field can be either in a Fock state or in a super-Poissonian state (characterized by a large photon-number variance). When the atoms have a random velocity spread, the field is squeezed to a Fock state for arbitrary values of the system's parameters. However, this makes detection of Ramsey fringes impossible, because the probability of detecting an atom in the upper or lower electronic state becomes a random quantity almost uniformly distributed over the interval between zero and unity, irrespective of the cavity photon number.
Technologies for Production of Heat and Electricity
Jacob J. Jacobson; Kara G. Cafferty
2014-04-01
Biomass is a desirable source of energy because it is renewable, sustainable, widely available throughout the world, and amenable to conversion. Biomass is composed of cellulose, hemicellulose, and lignin components. Cellulose is generally the dominant fraction, representing about 40 to 50% of the material by weight, with hemicellulose representing 20 to 50% of the material, and lignin making up the remaining portion [4,5,6]. Although the outward appearance of the various forms of cellulosic biomass, such as wood, grass, municipal solid waste (MSW), or agricultural residues, is different, all of these materials have a similar cellulosic composition. Elementally, however, biomass varies considerably, thereby presenting technical challenges at virtually every phase of its conversion to useful energy forms and products. Despite the variances among cellulosic sources, there are a variety of technologies for converting biomass into energy. These technologies are generally divided into two groups: biochemical (biological-based) and thermochemical (heat-based) conversion processes. This chapter reviews the specific technologies that can be used to convert biomass to energy. Each technology review includes a description of the process and its positive and negative aspects.
Chemical composition of Hanford Tank SY-102
Birnbaum, E.; Agnew, S.; Jarvinen, G.; Yarbro, S.
1993-12-01
The US Department of Energy established the Tank Waste Remediation System (TWRS) to safely manage and dispose of the radioactive waste, both current and future, stored in double-shell and single-shell tanks at the Hanford Site. One major program element in TWRS is pretreatment, which was established to process the waste prior to disposal using the Hanford Waste Vitrification Plant. In support of this program, Los Alamos National Laboratory has developed a conceptual process flow sheet which will remediate the entire contents of a selected double-shelled underground waste tank, including supernatant and sludge, into forms that allow storage and final disposal in a safe, cost-effective and environmentally sound manner. The specific tank selected for remediation is 241-SY-102, located in the 200 West Area. As part of the flow sheet development effort, the composition of the tank was defined and documented. This database was built by examining the history of liquid waste transfers to the tank and by performing careful analysis of all of the analytical data that have been gathered during the tank's lifetime. In order to more completely understand the variances in analytical results, material and charge balances were done to help define the chemistry of the various components in the tank. This methodology of defining the tank composition and the final results are documented in this report.
Wind Measurements from Arc Scans with Doppler Wind Lidar
Wang, H.; Barthelmie, R. J.; Clifton, Andy; Pryor, S. C.
2015-11-25
Defining optimal scanning geometries for scanning lidars in wind energy applications remains an active field of research. Our paper evaluates uncertainties associated with arc scan geometries and presents recommendations regarding optimal configurations in the atmospheric boundary layer. The analysis is based on arc scan data from a Doppler wind lidar with one elevation angle and seven azimuth angles spanning 30° and focuses on the estimation of 10-min mean wind speed and direction. When the flow is horizontally uniform, this approach can provide the accurate wind measurements required for wind resource assessments, in part because of its high resampling rate. Retrieved wind velocities at a single range gate exhibit good correlation with data from a sonic anemometer on a nearby meteorological tower, and vertical profiles of horizontal wind speed, though derived from range gates located on a conical surface, match those measured by mast-mounted cup anemometers. Uncertainties in the retrieved wind velocity are related to high turbulent wind fluctuations and an inhomogeneous horizontal wind field. Moreover, the radial velocity variance is found to be a robust measure of the uncertainty of the retrieved wind speed because of its relationship to turbulence properties. It is further shown that the standard error of wind speed estimates can be minimized by increasing the azimuthal range beyond 30° and using five to seven azimuth angles.
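The retrieval underlying these arc scans is a small least-squares problem: each line-of-sight velocity is a projection of the horizontal wind onto the beam. Here is a numpy sketch under assumed geometry (one 15° elevation, seven azimuths spanning 30°, invented wind and noise values; vertical velocity taken as zero).

```python
import numpy as np

rng = np.random.default_rng(5)

elev = np.deg2rad(15.0)                         # assumed elevation angle
az = np.deg2rad(np.linspace(255.0, 285.0, 7))   # seven azimuths over 30 deg
u_true, v_true = 8.0, -3.0                      # horizontally uniform wind (m/s)

# Radial velocities: projection of (u, v) on each beam (w assumed zero),
# plus turbulent fluctuations
vr = (u_true * np.sin(az) + v_true * np.cos(az)) * np.cos(elev)
vr = vr + rng.normal(0.0, 0.3, az.size)

# Least-squares retrieval of the mean wind from the arc scan
A = np.column_stack([np.sin(az), np.cos(az)]) * np.cos(elev)
(u_est, v_est), *_ = np.linalg.lstsq(A, vr, rcond=None)
speed = np.hypot(u_est, v_est)

def v_standard_error(span_deg, n_az=7, sigma=0.3, elev_deg=15.0):
    """Predicted standard error of the cross-arc wind component for a
    given azimuthal span, from the least-squares covariance."""
    a = np.deg2rad(np.linspace(270 - span_deg / 2, 270 + span_deg / 2, n_az))
    G = np.column_stack([np.sin(a), np.cos(a)]) * np.cos(np.deg2rad(elev_deg))
    cov = sigma ** 2 * np.linalg.inv(G.T @ G)
    return np.sqrt(cov[1, 1])
```

The closed-form standard error also reproduces the abstract's recommendation: widening the azimuthal span improves the conditioning of the cross-arc wind component, so a 60° arc yields a smaller standard error than a 30° arc.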
Edie, P.C.
1981-01-01
This report is intended to supply the electric vehicle manufacturer with performance data on the General Electric 5BT 2366C10 series wound dc motor and EV-1 chopper controller. Data are provided for both straight and chopped dc input to the motor, at 2 motor temperature levels. Testing was done at 6 voltage increments to the motor, and 2 voltage increments to the controller. Data results are presented in both tabular and graphical forms. Tabular information includes motor voltage and current input data, motor speed and torque output data, power data and temperature data. Graphical information includes torque-speed, motor power output-speed, torque-current, and efficiency-speed plots under the various operating conditions. The data resulting from this testing show the speed-torque plots to have the most variance with operating temperature. The maximum motor efficiency is between 86% and 87%, regardless of temperature or mode of operation. When the chopper is utilized, maximum motor efficiency occurs when the chopper duty cycle approaches 100%. At low duty cycles the motor efficiency may be considerably less than the efficiency for straight dc. Chopper efficiency may be assumed to be 95% under all operating conditions. For equal speeds at a given voltage level, the motor operated in the chopped mode develops slightly more torque than it does in the straight dc mode. System block diagrams are included, along with test setup and procedure information.
Potosnak, M.; LeStourgeon, Lauren; Pallardy, Stephen G.; Hosman, Kevin P.; Gu, Lianghong; Karl, Thomas; Geron, Chris; Guenther, Alex B.
2014-02-19
Ecosystem fluxes of isoprene emission were measured during the majority of the 2011 growing season at the University of Missouri's Baskett Wildlife Research and Education Area in central Missouri, USA (38.7° N, 92.2° W). This broadleaf deciduous forest is typical of forests common in the Ozarks region of the central United States. The goal of the isoprene flux measurements was to test our understanding of the controls on isoprene emission from the hourly to the seasonal timescale using a state-of-the-art emission model, MEGAN (Model of Emissions of Gases and Aerosols from Nature). Isoprene emission rates were very high from the forest with a maximum of 50.9 mg m{sup -2} hr{sup -1} (208 nmol m{sup -2} s{sup -1}), which to our knowledge exceeds all other reports of canopy-scale isoprene emission. The fluxes showed a clear dependence on the previous temperature and light regimes which was successfully captured by the existing algorithms in MEGAN. During a period of drought, MEGAN was unable to reproduce the time-dependent response of isoprene emission to water stress. Overall, the performance of MEGAN was robust and could explain 87% of the observed variance in the measured fluxes, but the response of isoprene emission to drought stress is a major source of uncertainty.
Characteristics of surface current flow inferred from a global ocean current data set
Meehl, G.A.
1982-06-01
A seasonal global ocean-current data set (OCDS) digitized on a 5° grid from long-term mean ship-drift-derived currents from pilot charts is presented and described. Annual zonal means of v-component currents show subtropical convergence zones which moved closest to the equator during the respective winters in each hemisphere. Net annual v-component surface flow at the equator is northward. Zonally averaged u-component currents have the greatest seasonal variance in the tropics, with the strongest westward currents in the winter hemisphere. An ensemble of ocean currents measured by buoys and current meters compares favorably with OCDS data in spite of widely varying time and space scales. The OCDS currents and directly measured currents are about twice as large as computed geostrophic currents. An analysis of equatorial Pacific currents suggests that dynamic topography and sea-level change indicative of the geostrophic flow component cannot be relied on solely to infer the absolute strength of surface currents, which include a strong Ekman component. Comparison of OCDS v-component currents with meridional transports predicted by Ekman theory shows agreement in the sign of transports in the midlatitudes and tropics in both hemispheres. Ekman depths required to scale OCDS v-component currents to computed Ekman transports are reasonable at most latitudes, with layer depths deepening closer to the equator.
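The Ekman-transport comparison in the last two sentences rests on the textbook bulk relation M_y = -tau_x / f (meridional mass transport per unit width from the zonal wind stress), and a layer depth follows from dividing the transport by an assumed surface current. The stress, latitude, and current values below are illustrative only, not values from the OCDS.

```python
import numpy as np

RHO = 1025.0                       # seawater density, kg m^-3
OMEGA = 7.292e-5                   # Earth's rotation rate, s^-1

def coriolis(lat_deg):
    return 2.0 * OMEGA * np.sin(np.deg2rad(lat_deg))

def ekman_meridional_transport(tau_x, lat_deg):
    """Meridional Ekman mass transport per unit width: M_y = -tau_x / f."""
    return -tau_x / coriolis(lat_deg)

# Illustrative values: easterly (westward) trade-wind stress at 15 N
tau_x = -0.05                      # zonal wind stress, N m^-2 (invented)
My = ekman_meridional_transport(tau_x, 15.0)     # kg m^-1 s^-1

# Ekman depth implied by scaling a surface v-current to this transport
v_surface = 0.05                   # m s^-1 (invented)
ekman_depth = My / (RHO * v_surface)             # m
```

For these numbers the transport is northward (90° to the right of an easterly wind in the northern hemisphere) and the implied Ekman depth is a few tens of meters, the order of magnitude the abstract calls reasonable.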
Evans, K.C.
1981-01-01
Rock and water samples were collected from the Morey Peak 15' quadrangle in the northern Hot Creek Range, Nye County, Nevada. The water was analyzed for trace element content. The rock samples were analyzed for oxide composition, selected trace element composition, and K-U-Th content. Water sampled from springs, hot springs, and creeks had temperatures ranging between 8 and 60°C, conductance between 100 and 1800 mhos, pH between 5.4 and 8.7, and Eh between -106 and -301 mV. Factor analysis of the sample set reveals a U-Li-Se association and also identifies oxidized galena in solution. This analysis suggests that water need only be analyzed for copper, lithium, selenium, uranium, and vanadium in a uranium exploration program. Oxide analyses of these samples show gross similarities to other volcanic suites in central Nevada. Correlation coefficients indicate that in this area uranium is independent of any of the tested variables. Factor analyses suggest that there are associations between beryllium and uranium, arsenic and uranium, and selenium and uranium. The K-U-Th analyses yielded a wide variance of measurements. Potassium content ranges between 0 and 7.66%, thorium between 0 and 39.09 ppm, and uranium between 0.04 and 1744 ppm. Disequilibrium plots suggest that the mineralization present in the Corral Canyon area is late.
X chromosome aneuploidy in infertile women: Analysis by interphase fluorescent in situ hybridization
Morris, M.A.; Moix, I.; Mermillod, B.
1994-09-01
Up to 1 in 3 couples have a problem of infertility at some time in their lives. Sex chromosome anomalies are found in 5-10% of couples, with mosaic aneuploidy being a common finding in primary infertility. Recurrent spontaneous abortion (RSA), in contrast, is frequently associated with autosomal structural anomalies. We hypothesized that low-level mosaic X chromosome aneuploidy was associated with primary infertility but not with RSA. Three groups were studied: women from couples with primary infertility (n=26); women with three or more spontaneous abortions (n=22); and age-matched normally fertile women (at least two pregnancies; n=28). Interphase fluorescent in situ hybridization (FISH) was used to determine X chromosome ploidy in 100 nuclei per patient, using a contig of three cosmids from the MAO locus (kindly donated by W. Berger, Nijmegen). A control probe (chr. 15 centromere) was simultaneously hybridized, and only nuclei containing two control signals were scored for the X chromosome. The mean numbers of nuclei with two X chromosome signals were the same in all groups (Welch equality-of-means test: p>0.97). However, there is a significant difference between the variances of the primary infertile and RSA groups (Levene's test: p=0.025 after Bonferroni correction for multiple testing). This provides preliminary support for the hypothesis of an association between primary infertility and low-level mosaic X chromosome aneuploidy.
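Levene's test, used above to compare group variances, is just a one-way ANOVA on absolute deviations from each group's center. Here is a self-contained sketch (the Brown-Forsythe median variant; the simulated counts are invented and only mimic the design of "same mean, different spread").

```python
import numpy as np

def levene_stat(*groups):
    """Brown-Forsythe variant of Levene's statistic: one-way ANOVA F on
    absolute deviations from each group's median. Large values indicate
    unequal variances even when the group means agree."""
    z = [np.abs(g - np.median(g)) for g in groups]
    n = np.array([g.size for g in groups])
    k, N = len(groups), n.sum()
    zbar = np.array([zi.mean() for zi in z])
    grand = sum(zi.sum() for zi in z) / N
    between = (n * (zbar - grand) ** 2).sum() / (k - 1)
    within = sum(((zi - m) ** 2).sum() for zi, m in zip(z, zbar)) / (N - k)
    return between / within

rng = np.random.default_rng(6)
# Invented counts of nuclei with two X signals: same mean, different spread,
# mimicking the abstract's primary-infertility vs. RSA comparison
infertile = rng.normal(95.0, 6.0, 26)
rsa = rng.normal(95.0, 1.5, 22)
f_unequal = levene_stat(infertile, rsa)
f_equal = levene_stat(rng.normal(95.0, 1.5, 26), rng.normal(95.0, 1.5, 22))
```

The statistic is large when the groups' spreads differ even though their means agree, which is the pattern the abstract reports between the primary-infertility and RSA groups.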
Dissipation and Fluctuation at the Chiral Phase Transition
Biro, T.S.; Greiner, C.
1997-10-01
Utilizing the Langevin equation for the linear {sigma} model we investigate the interplay of friction and white noise on the evolution and stability of collective pionic fields in energetic heavy ion collisions. We find that the smaller the volume, the more stable transverse (pionic) fluctuations become on a homogeneous disoriented chiral field background (the average transverse mass {l_angle}m{sup 2}{sub t}{r_angle} increases). On the other hand the variance of m{sup 2}{sub t} increases even more, so for a system thermalized in an initial volume of 10 fm{sup 3} about 96% and even in 1000 fm{sup 3} about 60% of the individual trajectories enter into unstable regions (m{sup 2}{sub t} < 0) for a while during a rapid one-dimensional expansion ({tau}{sub 0}=1 fm/c). In contrast the ensemble-averaged solution in this case remains stable. This result supports the idea of looking for disoriented chiral condensate (DCC) formation in individual events. © 1997 The American Physical Society.
Genetic studies of DRD4 and clinical response to neuroleptic medications
Kennedy, J.L.; Petronis, A.; Gao, J.
1994-09-01
Clozapine is an atypical antipsychotic drug that, like most other medications, is effective for some people and not for others. This variable response across individuals is likely significantly determined by genetic factors. An important candidate gene to investigate in clozapine response is the dopamine D4 receptor gene (DRD4). The D4 receptor has a higher affinity for clozapine than any of the other dopamine receptors. Furthermore, recent work by our consortium has shown a remarkable level of variability in the part of the gene coding for the third cytoplasmic loop. We have also identified polymorphisms in the upstream 5{prime} putative regulatory region and at two other sites. These polymorphisms were typed in a group of treatment-resistant schizophrenia subjects who were subsequently placed on clozapine (n = 60). In a logistic regression analysis, we compared genotype at each DRD4 polymorphism with response versus non-response to clozapine. Neither the exon-III nor any of the 5{prime} polymorphisms alone significantly predicted response; however, when the information from these polymorphisms was combined, more predictive power was obtained. In a correspondence analysis of the four DRD4 polymorphisms vs. response, we were able to predict 76% of the variance in response. Refinement of the analyses will include assessment of subfactors involved in the clinical response phenotype and incorporation of the debrisoquine-metabolizing locus (CYP2D6) into the prediction algorithm.
The primordial helium abundance from updated emissivities
Aver, Erik; Olive, Keith A.; Skillman, Evan D.; Porter, R.L. E-mail: olive@umn.edu E-mail: skillman@astro.umn.edu
2013-11-01
Observations of metal-poor extragalactic H II regions allow the determination of the primordial helium abundance, Y{sub p}. The He I emissivities are the foundation of the model of the H II region's emission. Porter, Ferland, Storey, and Detisch (2012) have recently published updated He I emissivities based on improved photoionization cross-sections. We incorporate these new atomic data and update our recent Markov Chain Monte Carlo analysis of the dataset published by Izotov, Thuan, and Stasińska (2007). As before, cuts are made to promote quality and reliability, and only solutions which fit the data within the 95% confidence level are used to determine the primordial He abundance. The previously qualifying dataset is almost entirely retained, with strong concordance between the physical parameters. Overall, an upward bias from the new emissivities leads to a decrease in Y{sub p}. In addition, we find a general trend toward larger uncertainties in individual objects (due to changes in the emissivities) and an increased variance (due to additional objects included). From a regression to zero metallicity, we determine Y{sub p} = 0.2465 ± 0.0097, in good agreement with the BBN result, Y{sub p} = 0.2485 ± 0.0002, based on the Planck determination of the baryon density. In the future, a better understanding of why a large fraction of spectra are not well fit by the model will be crucial to achieving an increase in the precision of the primordial helium abundance determination.
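The final step quoted above, a regression to zero metallicity, amounts to a weighted linear fit of helium abundance against O/H whose intercept is Y{sub p}. Here is a sketch with synthetic data; the slope, metallicity range, and error bars are invented, and only the intercept is seeded at the paper's value.

```python
import numpy as np

rng = np.random.default_rng(7)

yp_true, slope = 0.2465, 50.0                  # intercept seeded at the paper's Y_p
oh = rng.uniform(0.5e-4, 1.5e-4, 20)           # invented metal-poor O/H values
sigma = rng.uniform(0.005, 0.015, 20)          # invented per-object Y errors
y = yp_true + slope * oh + rng.normal(0.0, sigma)

# Weighted least squares: beta = (A^T W A)^-1 A^T W y, cov = (A^T W A)^-1
w = 1.0 / sigma ** 2
A = np.column_stack([np.ones_like(oh), oh])
Aw = A * w[:, None]
cov = np.linalg.inv(A.T @ Aw)
beta = cov @ (Aw.T @ y)

yp_est, slope_est = beta
yp_err = np.sqrt(cov[0, 0])                    # 1-sigma error on the intercept
```

The intercept error grows with the extrapolation distance from the sampled metallicities down to zero, which is one reason the observational uncertainty on Y{sub p} is so much larger than that of the BBN prediction.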
RELAXATION OF WARPED DISKS: THE CASE OF PURE HYDRODYNAMICS
Sorathia, Kareem A.; Krolik, Julian H.; Hawley, John F.
2013-05-10
Orbiting disks may exhibit bends due to a misalignment between the angular momentum of the inner and outer regions of the disk. We begin a systematic simulational inquiry into the physics of warped disks with the simplest case: the relaxation of an unforced warp under pure fluid dynamics, i.e., with no internal stresses other than Reynolds stress. We focus on the nonlinear regime in which the bend rate is large compared to the disk aspect ratio. When warps are nonlinear, strong radial pressure gradients drive transonic radial motions along the disk's top and bottom surfaces that efficiently mix angular momentum. The resulting nonlinear decay rate of the warp increases with the warp rate and the warp width, but, at least in the parameter regime studied here, is independent of the sound speed. The characteristic magnitude of the associated angular momentum fluxes likewise increases with both the local warp rate and the radial range over which the warp extends; it also increases with increasing sound speed, but more slowly than linearly. The angular momentum fluxes respond to the warp rate after a delay that scales with the square root of the time for sound waves to cross the radial extent of the warp. These behaviors are at variance with a number of the assumptions commonly used in analytic models to describe linear warp dynamics.
Spoil handling and reclamation costs at a contour surface mine in steep slope Appalachian topography
Zipper, C.E.; Hall, A.T.; Daniels, W.L.
1985-12-09
Accurate overburden handling cost estimation methods are essential to effective pre-mining planning for post-mining landforms and land uses. With the aim of developing such methods, the authors have been monitoring costs at a contour surface mine in Wise County, Virginia since January 1, 1984. Early in the monitoring period, the land was being returned to its Approximate Original Contour (AOC) in a manner common to the Appalachian region since implementation of the Surface Mining Control and Reclamation Act of 1977 (SMCRA). More recently, mining has been conducted under an experimental variance from the AOC provisions of SMCRA which allowed a near-level bench to be constructed across the upper surface of two mined points and an intervening filled hollow. All mining operations are being recorded by location. The cost of spoil movement is calculated for each block of coal mined between January 1, 1984, and August 1, 1985. Per-cubic-yard spoil handling and reclamation costs are compared by mining block. The average cost of spoil handling was $1.90 per bank cubic yard; however, these costs varied widely between blocks. The reasons for those variations included the landscape positions of the mining blocks and the spoil handling practices used. Average reclamation costs ranged from $0.08 per bank cubic yard for spoil placed in the near-level bench on the mined point to $0.20 per bank cubic yard for spoil placed in the hollow fill. 2 references, 4 figures.
Dwivedi, Gopal; Viswanathan, Vaishak; Sampath, Sanjay; Shyam, Amit; Lara-Curzio, Edgar
2014-06-09
Fracture toughness has become one of the dominant design parameters that dictates the selection of materials and their microstructure to obtain durable thermal barrier coatings (TBCs). Much progress has been made in characterizing the fracture toughness of relevant TBC compositions in bulk form, and it has become apparent that this property is significantly affected by process-induced microstructural defects. In this investigation, a systematic study of the influence of coating microstructure on the fracture toughness of atmospheric plasma sprayed (APS) TBCs has been carried out. Yttria partially stabilized zirconia (YSZ) coatings were fabricated under different spray process conditions inducing different levels of porosity and interfacial defects. Fracture toughness was measured on free standing coatings in as-processed and thermally aged conditions using the double torsion technique. Results indicate significant variance in fracture toughness among coatings with different microstructures including changes induced by thermal aging. Comparative studies were also conducted on an alternative TBC composition, Gd_{2}Zr_{2}O_{7} (GDZ), which as anticipated shows significantly lower fracture toughness compared to YSZ. Furthermore, the results from these studies not only point towards a need for process and microstructure optimization for enhanced TBC performance but also a framework for establishing performance metrics for promising new TBC compositions.
On the equivalence of the RTI and SVM approaches to time correlated analysis
Croft, S.; Favalli, A.; Henzlova, D.; Santi, P. A.
2014-11-21
Recently two papers on how to perform passive neutron auto-correlation analysis on time gated histograms formed from pulse train data, generically called time correlation analysis (TCA), have appeared in this journal [1,2]. For those of us working in international nuclear safeguards these treatments are of particular interest because passive neutron multiplicity counting is a widely deployed technique for the quantification of plutonium. The purpose of this letter is to show that the skewness-variance-mean (SVM) approach developed in [1] is equivalent in terms of assay capability to the random trigger interval (RTI) analysis laid out in [2]. Mathematically, we could also use other numerical ways to extract the time correlated information from the histogram data, including, for example, what we might call the mean, mean square, and mean cube approach. The important feature, however, from the perspective of real-world applications, is that the correlated information extracted is the same and is subsequently interpreted in the same way, based on the same underlying physics model.
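The "mean, mean square, and mean cube" route mentioned above amounts to taking the first three central moments of the time-gated multiplicity histogram. A minimal illustrative sketch, not drawn from either referenced paper (the function name and toy histogram are assumptions):

```python
import numpy as np

def central_moments(counts):
    """First three central moments of a multiplicity histogram.

    counts[n] = number of gates in which exactly n neutrons were recorded.
    Returns (mean, variance, third central moment), from which correlated
    singles/doubles/triples information can be derived.
    """
    counts = np.asarray(counts, dtype=float)
    n = np.arange(counts.size)
    p = counts / counts.sum()          # probability of observing n counts in a gate
    mean = np.sum(n * p)
    var = np.sum((n - mean) ** 2 * p)
    mu3 = np.sum((n - mean) ** 3 * p)
    return mean, var, mu3

# toy histogram: 100 gates, mostly 0 or 1 counts per gate
m, v, s = central_moments([60, 30, 8, 2])
```

Because the histogram is the same in both treatments, any such set of moments carries the same correlated information, in line with the equivalence argued in the letter.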
Features in the primordial power spectrum? A frequentist analysis
Hamann, Jan; Shafieloo, Arman; Souradeep, Tarun E-mail: a.shafieloo1@physics.ox.ac.uk
2010-04-01
Features in the primordial power spectrum have been suggested as an explanation for glitches in the angular power spectrum of temperature anisotropies measured by the WMAP satellite. However, these glitches might just as well be artifacts of noise or cosmic variance. Using the effective Δχ{sup 2} between the best-fit power-law spectrum and a deconvolved primordial spectrum as a measure of ''featureness'' of the data, we perform a full Monte-Carlo analysis to address the question of how significant the recovered features are. We find that in 26% of the simulated data sets the reconstructed spectrum yields a greater improvement in the likelihood than for the actually observed data. While features cannot be categorically ruled out by this analysis, and the possibility remains that simple theoretical models which predict some of the observed features might stand up to rigorous statistical testing, our results suggest that WMAP data are consistent with the assumption of a featureless power-law primordial spectrum.
Measurement of damping and temperature: Precision bounds in Gaussian dissipative channels
Monras, Alex; Illuminati, Fabrizio
2011-01-15
We present a comprehensive analysis of the performance of different classes of Gaussian states in the estimation of Gaussian phase-insensitive dissipative channels. In particular, we investigate the optimal estimation of the damping constant and reservoir temperature. We show that, for two-mode squeezed vacuum probe states, the quantum-limited accuracy of both parameters can be achieved simultaneously. Moreover, we show that for both parameters two-mode squeezed vacuum states are more efficient than coherent, thermal, or single-mode squeezed states. This suggests that in high-energy regimes, two-mode squeezed vacuum states are optimal within the Gaussian setup. This optimality result indicates a stronger form of compatibility for the estimation of the two parameters: not only can the minimum variance be achieved at fixed probe states, but the optimal state is also common to both parameters. Additionally, we explore numerically the performance of non-Gaussian states for particular parameter values and find that maximally entangled states within d-dimensional cutoff subspaces (d{<=}6) perform better than any randomly sampled states with similar energy. However, we also find that states with very similar performance and energy exist with much less entanglement than the maximally entangled ones.
Kinetics of heavy oil/coal coprocessing
Szladow, A.J.; Chan, R.K.; Fouda, S.; Kelly, J.F.
1988-01-01
A number of studies have been reported on coprocessing of coal with oil sand bitumen, petroleum residues, and distillate fractions in catalytic and non-catalytic processes. The studies described the effects of feedstock characteristics, process chemistry, and operating variables on the product yield and distribution; however, very few kinetic data were reported in these investigations. This paper presents the kinetic data and modeling of the CANMET coal/heavy oil coprocessing process. A number of reaction networks were evaluated for CANMET coprocessing. The final choice of model was a parallel model with some sequential characteristics. The model explained 90.0 percent of the total variance, which was considered satisfactory in view of the difficulties of modeling preasphaltenes. The models that were evaluated showed that the kinetic approach successfully applied to coal liquefaction and heavy oil upgrading can also be applied to coprocessing. The coal conversion networks and heavy oil upgrading networks are interrelated via the forward reaction paths of preasphaltenes, asphaltenes, and THFI and via the reverse kinetic paths of adduct formation between preasphaltenes and heavy oil.
Seong W. Lee
2004-04-01
The systematic tests of the gasifier simulator were conducted in this reporting period. Two factors were considered as experimental parameters, air injection rate and water injection rate, each at two levels. A special water-feeding device was designed and installed on the gasifier simulator. Analysis of variance (ANOVA) was applied to the results of the systematic tests. The ANOVA shows that the air injection rate had a significant impact on the temperature measurements in the gasifier simulator, while the water injection rate did not. The analysis also indicates that the proposed thermocouple assembly was immune to the moist environment: the temperature measurement remained accurate under moisture. Within this reporting period, the use of vibration for cleaning purposes was explored, considering both ultrasonic and sub-sonic vibration. A feasibility test, a 2{sup 2} factorial design with temperature level and motor speed each at two levels, showed that thermocouple vibration did not have a significant impact on the temperature measurements in the gasifier simulator. The sub-sonic vibration tests were applied to the thermocouple to remove the concrete cover layer (used to simulate the solid condensate in gasifiers) on the thermocouple tip. Both frequency and amplitude were found to have significant impacts on removal performance of the concrete cover layer.
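The 2{sup 2} factorial analysis described above can be sketched as a hand-rolled two-factor ANOVA. The layout below (two levels per factor, r replicates per cell, interaction term omitted for brevity) and the toy data are illustrative assumptions, not the report's actual measurements:

```python
import numpy as np

def factorial_2x2_anova(y):
    """F statistics for the main effects in a 2x2 factorial design.

    y has shape (2, 2, r): two levels of factor A, two levels of
    factor B, r replicates per cell.  Returns (F_A, F_B); each main
    effect has 1 degree of freedom, the error has 4*(r-1).
    """
    y = np.asarray(y, dtype=float)
    r = y.shape[2]
    grand = y.mean()
    ss_a = 2 * r * np.sum((y.mean(axis=(1, 2)) - grand) ** 2)   # factor A sum of squares
    ss_b = 2 * r * np.sum((y.mean(axis=(0, 2)) - grand) ** 2)   # factor B sum of squares
    ss_e = np.sum((y - y.mean(axis=2, keepdims=True)) ** 2)     # within-cell (error) SS
    mse = ss_e / (4 * (r - 1))
    return ss_a / mse, ss_b / mse

# toy data: factor A (rows) shifts the response strongly, factor B barely
f_a, f_b = factorial_2x2_anova([[[10, 11], [10, 12]],
                                [[20, 21], [20, 22]]])
```

Comparing each F statistic against the F(1, 4(r-1)) critical value then gives the significance calls reported in the abstract.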
Time lagged ordinal partition networks for capturing dynamics of continuous dynamical systems
McCullough, Michael; Iu, Herbert Ho-Ching; Small, Michael; Stemler, Thomas
2015-05-15
We investigate a generalised version of the recently proposed ordinal partition time series to network transformation algorithm. First, we introduce a fixed time lag for the elements of each partition that is selected using techniques from traditional time delay embedding. The resulting partitions define regions in the embedding phase space that are mapped to nodes in the network space. Edges are allocated between nodes based on temporal succession, thus creating a Markov chain representation of the time series. We then apply this new transformation algorithm to time series generated by the Rössler system and find that periodic dynamics translate to ring structures whereas chaotic time series translate to band or tube-like structures, thereby indicating that our algorithm generates networks whose structure is sensitive to system dynamics. Furthermore, we demonstrate that simple network measures, including the mean out degree and variance of out degrees, can track changes in the dynamical behaviour in a manner comparable to the largest Lyapunov exponent. We also apply the same analysis to experimental time series generated by a diode resonator circuit and show that the network size, mean shortest path length, and network diameter are highly sensitive to the interior crisis captured in this particular data set.
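The transformation above can be sketched in a few lines: each time-lagged embedding vector is replaced by the permutation that sorts it, and consecutive permutations define directed edges. This sketch uses unweighted edges and returns only the out-degree statistics mentioned in the abstract; the function name and interface are assumptions, not the authors' code:

```python
import numpy as np

def ordinal_network_degrees(x, dim=3, lag=2):
    """Map a time series to an ordinal-partition transition network.

    Each vector (x[t], x[t+lag], ..., x[t+(dim-1)*lag]) is replaced by
    the permutation that sorts it; consecutive patterns define directed
    edges.  Returns (mean, variance) of the node out-degrees.
    """
    n = len(x) - (dim - 1) * lag
    patterns = [tuple(np.argsort([x[t + k * lag] for k in range(dim)]))
                for t in range(n)]
    edges = set(zip(patterns[:-1], patterns[1:]))   # unique temporal successions
    out = {p: 0 for p in patterns}
    for a, _ in edges:
        out[a] += 1
    deg = np.array(list(out.values()), dtype=float)
    return deg.mean(), deg.var()
```

For a strictly periodic series the pattern sequence cycles through a small ring of nodes, so the out-degree distribution is narrow; chaotic input visits many patterns and broadens it, which is why these simple measures can track dynamical change.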
Fondeur, F.; Fink, S.
2012-08-01
During processing of Salt Batches 3 and 4 in the Modular Caustic-Side Solvent Extraction Unit (MCU), the decontamination efficiency for cesium declined from historical values and from expectations based on laboratory testing. This report documents efforts to analyze samples of solvent and process solutions from MCU in an attempt to understand the cause of the reduced performance and to recommend mitigations. CWT solutions from MCU, covering the period of variable decontamination factor (DF) performance from April 2011 to September 2011 (during processing of Salt Batch 4), were examined for impurities using chromatography and spectroscopy. The impurities found were of two types: aromatic-containing impurities, most likely from Modifier degradation, and aliphatic impurities, most likely from Isopar{reg_sign} L and tri-n-octylamine (TOA) degradation. Caustic washing the Solvent Hold Tank (SHT) solution with 1M NaOH improved its extraction ability as determined from {sup 22}Na uptake tests. Evidence from this work showed that pH variance in the aqueous solutions that contacted the solvent samples, within the range of 1M nitric acid to 1.91M NaOH, does not influence the analytical determination of the TOA concentration by GC-MS.
Evaluation of bulk paint worker exposure to solvents at household hazardous waste collection events
Cameron, M.
1995-09-01
In fiscal year 93/94, over 250 governmental agencies were involved in the collection of household hazardous wastes in the State of California. During that time, over 3,237,000 lbs. of oil-based paint were collected in 9,640 drums. Most of this was in lab pack drums, which can only hold up to 20 one-gallon cans. Cost for disposal of such drums is approximately $1000. In contrast, during the same year, 1,228,000 lbs. of flammable liquid were collected in bulk form in 2,098 drums. Incineration of bulked flammable liquids costs approximately $135 per drum. Clearly, it is most cost effective to bulk flammable liquids at household hazardous waste events. Currently, this is the procedure used at most Temporary Household Hazardous Waste Collection Facilities (THHWCFs). THHWCFs are regulated by the Department of Toxic Substances Control (DTSC) under the new Permit-by-Rule regulations. These regulations specify certain requirements regarding traffic flow, emergency response notifications, and prevention of exposure to the public. The regulations require that THHWCF operators bulk wastes only when the public is not present [22 CCR, section 67450.4 (e) (2) (A)]. Santa Clara County Environmental Health Department sponsors local THHWCFs and does its own bulking. In order to save time and money, a variance from the regulation was requested, and an employee monitoring program was initiated to determine actual exposure to workers. Results are presented.
The Role of Landscape in the Distribution of Deer-Vehicle Collisions in South Mississippi
McKee, Jacob J; Cochran, David
2012-01-01
Deer-vehicle collisions (DVCs) have a negative impact on the economy, traffic safety, and the general well-being of otherwise healthy deer. To mitigate DVCs, it is imperative to gain a better understanding of factors that play a role in their spatial distribution. Much of the existing research on DVCs in the United States has been inconclusive, pointing to a variety of causal factors that seem more specific to study site and region than indicative of broad patterns. Little DVC research has been conducted in the southern United States, making the region particularly important with regard to this issue. In this study, we evaluate landscape factors that contributed to the distribution of 347 DVCs that occurred in Forrest and Lamar Counties of south Mississippi, from 2006 to 2009. Using nearest-neighbor and discriminant analysis, we demonstrate that DVCs in south Mississippi are not random spatial phenomena. We also develop a classification model that identified seven landscape metrics, explained 100% of the variance, and distinguished DVCs from control sites with an accuracy of 81.3%.
Stepan, D.J.; Fraley, R.H.; Charlton, D.S.
1994-02-01
The release of elemental mercury into the environment from manometers that are used in the measurement of natural gas flow through pipelines has created a potentially serious problem for the gas industry. Regulations, particularly the Land Disposal Restrictions (LDR), have had a major impact on gas companies dealing with mercury-contaminated soils. After the May 8, 1993, LDR deadline extension, gas companies were required to treat mercury-contaminated soils by designated methods to specified levels prior to disposal in landfills. In addition, gas companies must comply with various state regulations that are often more stringent than the LDR. The gas industry is concerned that the LDRs do not allow enough viable options for dealing with their mercury-related problems. The US Environmental Protection Agency has specified the Best Demonstrated Available Technology (BDAT) as thermal roasting or retorting. However, the Agency recognizes that treatment of certain wastes to the LDR standards may not always be achievable and that the BDAT used to set the standard may be inappropriate. Therefore, a Treatability Variance Process for remedial actions was established (40 Code of Federal Regulations 268.44) for the evaluation of alternative remedial technologies. This report presents evaluations of demonstrations for three different remedial technologies: a pilot-scale portable thermal treatment process, a pilot-scale physical separation process in conjunction with chemical leaching, and a bench-scale chemical leaching process.
Goldman, A.S.
1985-05-01
This report documents and reviews the measurement control program (MCP) over a 27-month period for four solution assay instruments (SAIs) at the Facility. SAI measurement data collected during the period January 1982 through March 1984 were analyzed. The sources of these data included computer listings of measurements emanating from operator entries on computer terminals, logbook entries of measurements transcribed by operators, and computer listings of measurements recorded internally in the instruments. Data were also obtained from control charts that are available as part of the MCP. As a result of our analyses, we observed agreement between propagated and historical variances and concluded that the instruments were functioning properly from a precision standpoint. We noticed small, persistent biases indicating slight instrument inaccuracies. We suggest that statistical tests for bias be incorporated in the MCP on a monthly basis and that, if an instrument's bias is significantly greater than zero, the instrument undergo maintenance. We propose that the weekly precision test be replaced by a daily test to provide more timely detection of possible problems. We observed that one instrument showed a trend of increasing bias during the past six months and recommend that a randomness test be incorporated to detect trends in a more timely fashion. We detected operator transcription errors during data transmissions and advise direct instrument transmission to the MCP to eliminate these errors. A transmission error rate, based on those errors that affected decisions in the MCP, was estimated as 1%. 11 refs., 10 figs., 4 tabs.
Narlesky, Joshua Edward; Kelly, Elizabeth J.
2015-09-10
This report documents the new PG calibration regression equations. These calibration equations incorporate new data that have become available since revision 1 of “A Calibration to Predict the Concentrations of Impurities in Plutonium Oxide by Prompt Gamma Analysis” was issued [3]. The calibration equations are based on a weighted least squares (WLS) approach for the regression. The WLS method gives each data point its proper amount of influence over the parameter estimates. This gives two big advantages: more precise parameter estimates and better, more defensible estimates of uncertainties. The WLS approach makes sense both statistically and experimentally because the variances increase with concentration, and there are physical reasons why the higher measurements are less reliable and should be less influential. The new magnesium calibration includes a correction for sodium and separate calibration equations for items with and without chlorine. These additional calibration equations allow for better predictions and smaller uncertainties for sodium in materials with and without chlorine. Chlorine and sodium have separate equations for RICH materials; again, these equations give better predictions and smaller uncertainties for chlorine and sodium in RICH materials.
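The WLS step described above reduces to solving the normal equations with inverse-variance weights so that noisier high-concentration points carry less influence. A minimal sketch for a straight-line calibration (function name and example data are illustrative, not the report's calibration):

```python
import numpy as np

def wls_fit(x, y, w):
    """Weighted least squares fit of y = b0 + b1*x.

    w are inverse-variance weights: points with larger variance get
    smaller weight and hence less influence on the parameter estimates.
    Returns (beta, unscaled parameter covariance).
    """
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    xtwx = X.T @ W @ X
    beta = np.linalg.solve(xtwx, X.T @ W @ y)
    cov = np.linalg.inv(xtwx)          # scale by the estimated variance for real errors
    return beta, cov

# exact line y = 2 + 3x is recovered for any positive weights
beta, cov = wls_fit(np.array([0., 1, 2, 3]),
                    np.array([2., 5, 8, 11]),
                    np.array([1., .5, .25, .125]))
```

With weights proportional to 1/variance, the fit is the minimum-variance linear unbiased estimator, which is the statistical motivation the report cites for adopting WLS.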
Kneitel, Terri; Rocco, Diane
2012-07-01
When conducting environmental cleanup or decommissioning projects, characterization of the material to be removed is often performed when the material is in-situ. The actual demolition or excavation and removal of the material can result in individual containers that vary significantly from the original bulk characterization profile. This variance, if not detected, can result in individual containers exceeding Department of Transportation regulations or waste disposal site acceptance criteria. Bulk waste characterization processes were performed to initially characterize the Brookhaven Graphite Research Reactor (BGRR) graphite pile and this information was utilized to characterize all of the containers of graphite. When the last waste container was generated containing graphite dust from the bottom of the pile, but no solid graphite blocks, the material contents were significantly different in composition from the bulk waste characterization. This error resulted in exceedance of the disposal site waste acceptance criteria. Brookhaven Science Associates initiated an in-depth investigation to identify the root causes of this failure and to develop appropriate corrective actions. The lessons learned at BNL have applicability to other cleanup and demolition projects which characterize their wastes in bulk or in-situ and then extend that characterization to individual containers. (authors)
Guba, O.; Taylor, M. A.; Ullrich, P. A.; Overfelt, J. R.; Levy, M. N.
2014-11-27
We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance, implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.
Webb-Robertson, Bobbie-Jo M.; Wiberg, Holli K.; Matzke, Melissa M.; Brown, Joseph N.; Wang, Jing; McDermott, Jason E.; Smith, Richard D.; Rodland, Karin D.; Metz, Thomas O.; Pounds, Joel G.; et al
2015-04-09
In this review, we apply selected imputation strategies to label-free liquid chromatography–mass spectrometry (LC–MS) proteomics datasets to evaluate the accuracy with respect to metrics of variance and classification. We evaluate several commonly used imputation approaches for individual merits and discuss the caveats of each approach with respect to the example LC–MS proteomics data. In general, local similarity-based approaches, such as the regularized expectation maximization and least-squares adaptive algorithms, yield the best overall performance with respect to metrics of accuracy and robustness. However, no single algorithm consistently outperforms the remaining approaches, and in some cases performing classification without imputation yielded the most accurate classification. Thus, because the mechanisms of missing data in proteomics are complex and vary from peptide to protein, no individual method is a single solution for imputation. In summary, on the basis of the observations in this review, the goal for imputation in the field of computational proteomics should be to develop new approaches that work generically for this data type and new strategies to guide users in the selection of the best imputation for their dataset and analysis objectives.
Alternative disposal options for alpha-mixed low-level waste
Loomis, G.G.; Sherick, M.J.
1995-12-01
This paper presents several disposal options for the Department of Energy's alpha-mixed low-level waste. The mixed nature of the waste favors thermally treating it to either an iron-enriched basalt or a glass waste form, at which point a multitude of reasonable disposal options, including in-state disposal, become possible. Most notably, these waste forms will meet the land-ban restrictions. However, the thermal treatment of this waste involves considerable waste handling and complicated, expensive offgas systems with secondary waste management problems. In the United States, public perception of offgas systems in the radioactive incinerator area is unfavorable. The alternatives presented here are nonthermal in nature and involve homogenizing the waste with cryogenic techniques followed by complete encapsulation with a variety of chemical/grouting agents into retrievable waste forms. Once encapsulated, the waste forms are suitable for transport out of the state or for actual in-state disposal. This paper investigates variances that would have to be obtained and contrasts the alternative encapsulation idea with the thermal treatment option.
A fast contour descriptor algorithm for supernova imageclassification
Aragon, Cecilia R.; Aragon, David Bradburn
2006-07-16
We describe a fast contour descriptor algorithm and its application to a distributed supernova detection system (the Nearby Supernova Factory) that processes 600,000 candidate objects in 80 GB of image data per night. Our shape-detection algorithm reduced the number of false positives generated by the supernova search pipeline by 41% while producing no measurable impact on running time. Fourier descriptors are an established method of numerically describing the shapes of object contours, but transform-based techniques are ordinarily avoided in this type of application due to their computational cost. We devised a fast contour descriptor implementation for supernova candidates that meets the tight processing budget of the application. Using the lowest-order descriptors (F{sub 1} and F{sub -1}) and the total variance in the contour, we obtain one feature representing the eccentricity of the object and another denoting its irregularity. Because the number of Fourier terms to be calculated is fixed and small, the algorithm runs in linear time, rather than the O(n log n) time of an FFT. Constraints on object size allow further optimizations so that the total cost of producing the required contour descriptors is about 4n addition/subtraction operations, where n is the length of the contour.
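Because only F{sub 1} and F{sub -1} are needed, the two Fourier sums can be evaluated directly in linear time with no FFT. A minimal sketch of that idea; the function name and the 1/n normalization are assumptions, not the Nearby Supernova Factory pipeline:

```python
import numpy as np

def low_order_descriptors(contour):
    """Compute only the lowest-order Fourier descriptors F_1 and F_-1
    of a closed contour, directly in linear time (no FFT).

    contour: complex array z[k] = x[k] + 1j*y[k] of boundary points.
    """
    z = np.asarray(contour, dtype=complex)
    n = z.size
    k = np.arange(n)
    f_pos = np.sum(z * np.exp(-2j * np.pi * k / n)) / n   # F_1
    f_neg = np.sum(z * np.exp(+2j * np.pi * k / n)) / n   # F_-1
    return f_pos, f_neg

# a circle traversed counter-clockwise: |F_1| ~ radius, |F_-1| ~ 0,
# so the ratio |F_-1| / |F_1| serves as a simple eccentricity proxy
k = np.arange(64)
circle = 5 + 5j + 2 * np.exp(2j * np.pi * k / 64)
f1, fm1 = low_order_descriptors(circle)
```

Each descriptor costs one pass over the contour, consistent with the abstract's point that fixing the number of Fourier terms keeps the cost linear rather than O(n log n).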
Standard Methods of Characterizing Performance of Fan Filter Units, Version 3.0
Xu, Tengfang
2007-01-01
Talamo, A.; Gohar, Y.; Sadovich, S.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.
2013-07-01
MCNP6, the general-purpose Monte Carlo N-Particle code, has the capability to perform time-dependent calculations by tracking the time interval between successive events of the neutron random walk. In fixed-source calculations for a subcritical assembly, the zero time value is assigned at the moment the neutron is emitted by the external neutron source. The PTRAC and F8 cards of MCNP allow tallying the time when a neutron is captured by {sup 3}He(n, p) reactions in the neutron detector. From this information, it is possible to build three different time distributions: neutron counts, Rossi-{alpha}, and Feynman-{alpha}. The neutron counts time distribution represents the number of neutrons captured as a function of time. The Rossi-{alpha} distribution represents the number of neutron pairs captured as a function of the time interval between two capture events. The Feynman-{alpha} distribution represents the variance-to-mean ratio, minus one, of the neutron counts array as a function of a fixed time interval. The MCNP6 results for these three time distributions have been compared with the experimental data of the YALINA Thermal facility and have been found to be in quite good agreement. (authors)
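For a single fixed gate width, the Feynman-{alpha} statistic described above is just the variance-to-mean ratio of the per-gate neutron counts minus one. A minimal sketch (the function name is an assumption, and this is not MCNP's implementation):

```python
import numpy as np

def feynman_y(counts_per_gate):
    """Feynman Y = var/mean - 1 of neutron counts for one gate width.

    Y = 0 for a pure Poisson (uncorrelated) source; correlated fission
    chains in a multiplying assembly give Y > 0.
    """
    c = np.asarray(counts_per_gate, dtype=float)
    return c.var() / c.mean() - 1.0
```

Repeating this for a range of gate widths yields the Feynman-{alpha} curve as a function of the time interval, as built from the MCNP6 capture-time tallies.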
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
Farlotti, M.; Larsen, E. W.
2013-07-01
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
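Once Monte Carlo transport has tallied the fission matrix, the eigenvalue step reduces to a power iteration for the dominant eigenpair. A minimal sketch of that step, assuming the matrix entries have already been estimated (the test-code details in the paper are not reproduced here):

```python
def fission_matrix_eigen(F, iters=200):
    """Power iteration on a fission matrix F, where F[i][j] is the
    expected number of fission neutrons born in region i per fission
    neutron born in region j. Returns the dominant eigenvalue (k_eff)
    and the normalized fission source shape."""
    n = len(F)
    s = [1.0 / n] * n          # flat initial source guess
    k = 1.0
    for _ in range(iters):
        s_new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(s_new)         # since s is normalized to sum to 1
        s = [x / k for x in s_new]
    return k, s
```

Because the matrix is small (one entry per assembly pair), this iteration is essentially free compared with the transport sweeps, and its residual gives the eigenvalue error estimate the method exploits.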
Measuring kinetic energy changes in the mesoscale with low acquisition rates
Roldán, É.; Martínez, I. A.; Rica, R. A.; Dinis, L.
2014-06-09
We report on the measurement of the average kinetic energy changes in isothermal and non-isothermal quasistatic processes in the mesoscale, realized with a Brownian particle trapped with optical tweezers. Our estimation of the kinetic energy change allows access to the full energetic description of the Brownian particle. Kinetic energy estimates are obtained from measurements of the mean square velocity of the trapped bead sampled at frequencies several orders of magnitude smaller than the momentum relaxation frequency. The velocity is tuned by applying a noisy electric field that modulates the amplitude of the fluctuations of the position and velocity of the Brownian particle, whose motion is equivalent to that of a particle in a higher temperature reservoir. Additionally, we show that the dependence of the variance of the time-averaged velocity on the sampling frequency can be used to quantify properties of the electrophoretic mobility of a charged colloid. Our method could be applied to detect temperature gradients in inhomogeneous media and to characterize the complete thermodynamics of biological motors and of artificial micro- and nanoscopic heat engines.
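The key dependence — the variance of the time-averaged velocity changing with the sampling interval — can be illustrated on a toy model. Assuming the trapped bead's position follows an overdamped Ornstein-Uhlenbeck process (a standard model for an optical trap; parameters here are arbitrary), the variance of the finite-difference velocity (x(t+Δt)-x(t))/Δt is 2σ²(1-e^(-Δt/τ))/Δt², so it encodes the relaxation time τ even when Δt is far longer than the microscopic timescales:

```python
import math
import random

def ou_positions(n, dt, tau, sigma, seed=1):
    """Exact discrete-time samples of an Ornstein-Uhlenbeck position
    process (overdamped bead in a harmonic trap), sampled every dt."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)
    b = sigma * math.sqrt(1.0 - a * a)
    x, xs = 0.0, []
    for _ in range(n):
        x = a * x + b * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def time_avg_velocity_variance(xs, dt):
    """Sample variance of the finite-difference ('time-averaged')
    velocity (x[k+1] - x[k]) / dt, which depends on dt."""
    vs = [(xs[k + 1] - xs[k]) / dt for k in range(len(xs) - 1)]
    m = sum(vs) / len(vs)
    return sum((v - m) ** 2 for v in vs) / (len(vs) - 1)
```

This is only a sketch of the statistical idea; the paper's actual inference additionally accounts for the electric-field driving and extracts the mean square velocity rather than the trap parameters.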
Parameters affecting resin-anchored cable bolt performance: Results of in situ evaluations
Zelanko, J.C.; Mucho, T.P.; Compton, C.S.; Long, L.E.; Bailey, P.E.
1995-11-01
Cable bolt support techniques, including hardware and anchorage systems, continue to evolve to meet US mining requirements. For cable support systems to be successfully implemented into new ground control areas, the mechanics of this support and the potential range of performance need to be better understood. To contribute to this understanding, a series of 36 pull tests were performed on 10 ft long cable bolts using various combinations of hole diameters, resin formulations, anchor types, and with and without resin dams. These tests provided insight into the influence of these four parameters on cable system performance. Performance was assessed in terms of support capacity (maximum load attained in a pull test), system stiffness (assessed from two intervals of load-deformation), and the general load-deformation response. Three characteristic load-deformation responses were observed. An Analysis of Variance identified a number of main effects and interactions of significance to support capacity and stiffness. The factorial experiment performed in this study provides insight into the effects of several design parameters associated with resin-anchored cable bolts.
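The core of the Analysis of Variance is the F statistic comparing between-group and within-group variability. The one-way case can be sketched as follows (the study itself used a factorial design with interaction terms; the helper name and any data passed to it are hypothetical):

```python
def one_way_anova_F(groups):
    """F statistic for a one-way Analysis of Variance: the ratio of the
    between-group mean square to the within-group mean square.
    `groups` is a list of lists, e.g. pull-test capacities grouped by
    one factor level such as resin formulation."""
    means = [sum(g) / len(g) for g in groups]
    n_total = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (m - grand) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((v - m) ** 2 for v in g)
                    for g, m in zip(groups, means))
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F relative to the F-distribution critical value flags a factor (or interaction, in the factorial case) as significant for capacity or stiffness.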
Fowler, Michael J.; Howard, Marylesa; Luttman, Aaron; Mitchell, Stephen E.; Webb, Timothy J.
2015-06-03
One of the primary causes of blur in a high-energy X-ray imaging system is the shape and extent of the radiation source, or ‘spot’. It is important to be able to quantify the size of the spot as it provides a lower bound on the recoverable resolution for a radiograph, and penumbral imaging methods – which involve the analysis of blur caused by a structured aperture – can be used to obtain the spot’s spatial profile. We present a Bayesian approach for estimating the spot shape that, unlike variational methods, is robust to the initial choice of parameters. The posterior is obtained from a normal likelihood, which was constructed from a weighted least squares approximation to a Poisson noise model, and prior assumptions that enforce both smoothness and non-negativity constraints. A Markov chain Monte Carlo algorithm is used to obtain samples from the target posterior, and the reconstruction and uncertainty estimates are the computed mean and variance of the samples, respectively. Lastly, synthetic datasets are used to demonstrate accurate reconstruction, while real data taken with high-energy X-ray imaging systems are used to demonstrate applicability and feasibility.
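The sampling step can be illustrated with a textbook random-walk Metropolis sampler on a one-dimensional toy posterior with a non-negativity constraint. The paper's actual likelihood, priors, and dimensionality are far richer; this sketch shows only the skeleton: draw samples, then report their mean and variance as the reconstruction and its uncertainty.

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=0.5, seed=7):
    """Random-walk Metropolis sampler returning the chain. The
    reconstruction and its uncertainty are taken as the mean and
    variance of the retained samples, mirroring the paper's use of
    MCMC output."""
    rng = random.Random(seed)
    x = x0
    lp = log_post(x)
    chain = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        # Accept with probability min(1, exp(lpp - lp)); a -inf
        # log-posterior (non-negativity violated) is never accepted.
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp
        chain.append(x)
    return chain
```

In practice one would discard a burn-in segment and monitor convergence diagnostics before trusting the sample moments.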
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developer's manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Griffin, Joshua D. (Sandia National Laboratory, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratory, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane (Sandia National Laboratory, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developer's manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Le, Peisi; Fratini, Emiliano; Ito, Kanae; Wang, Zhe; Mamontov, Eugene; Baglioni, Piero; Chen, Sow-Hsin
2016-01-28
We present the hypothesis that the mechanical properties of cement pastes depend strongly on their porosities. In a saturated paste, the porosity links to the free water volume after hydration. Structural water, constrained water, and free water have different dynamical behavior. Hence, it should be possible to extract information on the pore system by exploiting the water dynamics. With our experiments we investigated the slow dynamics of hydration water confined in calcium- and magnesium-silicate-hydrate (C-S-H and M-S-H) gels using the high-resolution quasi-elastic neutron scattering (QENS) technique. C-S-H and M-S-H are the chemical binders present in calcium-rich and magnesium-rich cements. We measured three M-S-H samples: pure M-S-H, M-S-H with aluminum-silicate nanotubes (ASN), and M-S-H with carboxyl group functionalized ASN (ASN-COOH). A C-S-H sample with the same water content (i.e., 0.3) was also studied for comparison. We found that structural water in the gels contributes to the elastic component of the QENS spectrum, while constrained water and free water contribute to the quasi-elastic component. The quantitative analysis suggests that the three components vary for different samples and indicate the variance in the system porosity, which controls the mechanical properties of cement pastes.
Perpinan, O.; Lorenzo, E.
2011-01-15
The irradiance fluctuations and the subsequent variability of the power output of a PV system are analysed with some mathematical tools based on the wavelet transform. It can be shown that the irradiance and power time series are nonstationary processes whose behaviour resembles that of a long memory process. In addition, the long memory spectral exponent {alpha} is a useful indicator of the fluctuation level of an irradiance time series. On the other hand, a time series of global irradiance on the horizontal plane can be simulated by means of the wavestrapping technique on the clearness index, and the fluctuation behaviour of this simulated time series correctly resembles the original series. Moreover, a time series of global irradiance on the inclined plane can be simulated with the wavestrapping procedure applied over a signal previously detrended by a partial reconstruction with a wavelet multiresolution analysis, and, once again, the fluctuation behaviour of this simulated time series is correct. This procedure is a suitable tool for the simulation of irradiance incident over a group of distant PV plants. Finally, a wavelet variance analysis and the long memory spectral exponent show that a PV plant behaves as a low-pass filter. (author)
Statistical Analysis of Variation in the Human Plasma Proteome
Corzett, Todd H.; Fodor, Imola K.; Choi, Megan W.; Walsworth, Vicki L.; Turteltaub, Kenneth W.; McCutchen-Maloney, Sandra L.; Chromy, Brett A.
2010-01-01
Quantifying the variation in the human plasma proteome is an essential prerequisite for disease-specific biomarker detection. We report here on the longitudinal and individual variation in human plasma characterized by two-dimensional difference gel electrophoresis (2-D DIGE) using plasma samples from eleven healthy subjects collected three times over a two week period. Fixed-effects modeling was used to remove dye and gel variability. Mixed-effects modeling was then used to quantitate the sources of proteomic variation. The subject-to-subject variation represented the largest variance component, while the time-within-subject variation was comparable to the experimental variation found in a previous technical variability study where one human plasma sample was processed eight times in parallel and each was then analyzed by 2-D DIGE in triplicate. Here, 21 protein spots had CVs larger than 50%, suggesting that these proteins may not be appropriate as biomarkers and should be carefully scrutinized in future studies. Seventy-eight protein spots showing differential protein levels between different individuals or individual collections were identified by mass spectrometry and further characterized using hierarchical clustering. The results present a first step toward understanding the complexity of longitudinal and individual variation in the human plasma proteome, and provide a baseline for improved biomarker discovery.
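The variance-component decomposition at the heart of the analysis — subject-to-subject versus within-subject variation — can be illustrated with the balanced one-way random-effects layout and method-of-moments estimators. The paper fits richer fixed- and mixed-effects models per protein spot; this is the minimal analogue, with hypothetical input data.

```python
def variance_components(subjects):
    """Method-of-moments estimates of between-subject and
    within-subject variance from balanced repeated measures:
    `subjects` is a list of per-subject measurement lists, all of
    equal length. Returns (var_between, var_within)."""
    a = len(subjects)        # number of subjects
    n = len(subjects[0])     # repeats per subject
    grand = sum(sum(s) for s in subjects) / (a * n)
    means = [sum(s) / n for s in subjects]
    ms_between = n * sum((m - grand) ** 2 for m in means) / (a - 1)
    ms_within = sum(sum((v - m) ** 2 for v in s)
                    for s, m in zip(subjects, means)) / (a * (n - 1))
    var_within = ms_within
    # E[MS_between] = var_within + n * var_between; clip at zero.
    var_between = max(0.0, (ms_between - ms_within) / n)
    return var_between, var_within
```

A spot whose within-subject component rivals its between-subject component carries little individual signal, which is the logic behind flagging high-CV spots as poor biomarker candidates.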
D'Addato, Sergio; Spadaro, Maria Chiara; Luches, Paola; Valeri, Sergio; Grillo, Vincenzo; Rotunno, Enzo; Roldan Gutierrez, Manuel A.; Pennycook, Stephen J.; Ferretti, Anna Maria; Capetti, Elena; et al
2015-01-01
Films of magnetic Ni@NiO core–shell nanoparticles (NPs, core diameter d ≅ 12 nm, nominal shell thickness variable between 0 and 6.5 nm) obtained with sequential layer deposition were investigated, to gain insight into the relationships between shell thickness/morphology, core-shell interface, and magnetic properties. Different values of NiO shell thickness ts could be obtained while keeping the Ni core size fixed, at variance with conventional oxidation procedures where the oxide shell is grown at the expense of the core. Chemical composition, morphology of the as-produced samples and structural features of the Ni/NiO interface were investigated with x-ray photoelectron spectroscopy and microscopy (scanning electron microscopy, transmission electron microscopy) techniques, and related with results from magnetic measurements obtained with a superconducting quantum interference device. The effect of the shell thickness on the magnetic properties could be studied. The exchange bias (EB) field Hbias is small and almost constant for ts up to 1.6 nm; then it rapidly grows, with no sign of saturation. This behavior is clearly related to the morphology of the top NiO layer, and is mostly due to the thickness dependence of the NiO anisotropy constant. The ability to tune the EB effect by varying the thickness of the last NiO layer represents a step towards the rational design and synthesis of core–shell NPs with desired magnetic properties.
Detailed design report for an operational phase panel-closure system
1996-01-11
Under contract to Westinghouse Electric Corporation (Westinghouse), Waste Isolation Division (WID), IT Corporation has prepared a detailed design of a panel-closure system for the Waste Isolation Pilot Plant (WIPP). This detailed design of an operational-phase closure system is required to support a Resource Conservation and Recovery Act (RCRA) Part B permit application and a non-migration variance petition. This report describes the detailed design for a panel-closure system specific to the WIPP site and provides the design and material engineering specifications for the construction, emplacement, and interface grouting associated with it. The recommended panel-closure system will adequately isolate the waste-emplacement panels for at least 35 years. Through its inherent flexibility, the panel-closure system provides assurance that the limit for the migration of volatile organic compounds (VOCs) will be met at the point of compliance, the WIPP site boundary.
Sensitivity Analysis of OECD Benchmark Tests in BISON
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.; Williamson, Richard
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
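The correlation-based part of such a sensitivity analysis is straightforward to sketch: Pearson measures linear association between an input sample and a response, while Spearman is Pearson applied to ranks and so captures any monotone relationship. A minimal sketch (no tie handling in the rank step; Dakota's implementation is more general):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def spearman(x, y):
    """Spearman rank correlation: Pearson on the ranks.
    Note: ties would need average ranks; omitted for brevity."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for i, j in enumerate(order):
            r[j] = i
        return r
    return pearson(ranks(x), ranks(y))
```

For a nonlinear but monotone input-response relationship, Spearman stays at 1 while Pearson drops below it, which is why reporting both helps distinguish linear from monotone sensitivity.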
Wind Measurements from Arc Scans with Doppler Wind Lidar
Wang, H.; Barthelmie, R. J.; Clifton, Andy; Pryor, S. C.
2015-11-25
When defining optimal scanning geometries for scanning lidars for wind energy applications, we found that it is still an active field of research. Our paper evaluates uncertainties associated with arc scan geometries and presents recommendations regarding optimal configurations in the atmospheric boundary layer. The analysis is based on arc scan data from a Doppler wind lidar with one elevation angle and seven azimuth angles spanning 30° and focuses on an estimation of 10-min mean wind speed and direction. When flow is horizontally uniform, this approach can provide accurate wind measurements required for wind resource assessments in part because of its high resampling rate. Retrieved wind velocities at a single range gate exhibit good correlation to data from a sonic anemometer on a nearby meteorological tower, and vertical profiles of horizontal wind speed, though derived from range gates located on a conical surface, match those measured by mast-mounted cup anemometers. Uncertainties in the retrieved wind velocity are related to high turbulent wind fluctuation and an inhomogeneous horizontal wind field. Moreover, the radial velocity variance is found to be a robust measure of the uncertainty of the retrieved wind speed because of its relationship to turbulence properties. It is further shown that the standard error of wind speed estimates can be minimized by increasing the azimuthal range beyond 30° and using five to seven azimuth angles.