National Library of Energy BETA

Sample records for variance stratum stratum3

  1. Variance Fact Sheet

    Broader source: Energy.gov [DOE]

    Variance Fact Sheet. A variance is an exception to compliance with some part of a safety and health standard granted by the Department of Energy (DOE) to a contractor

  2. Occupational Medicine Variance Request

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Justification of Variance Sought: B&W Y-12 requests a permanent variance from certain sections of 10 CFR Part 851's occupational medicine requirements for B&W Y-12 subcontractors. The specific requirements within Appendix A.8 that B&W Y-12 seeks relief from flowing down to subcontractors are 10 CFR Part 851, Appendix A.8(a) and A.8(d) through A.8(k), inclusive. If the variance is granted, B&W Y-12 will continue to place the highest priority on establishing and maintaining a safe

  3. A COSMIC VARIANCE COOKBOOK

    SciTech Connect (OSTI)

    Moster, Benjamin P.; Rix, Hans-Walter [Max-Planck-Institut fuer Astronomie, Koenigstuhl 17, 69117 Heidelberg (Germany); Somerville, Rachel S. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Newman, Jeffrey A., E-mail: moster@mpia.de, E-mail: rix@mpia.de, E-mail: somerville@stsci.edu, E-mail: janewman@pitt.edu [Department of Physics and Astronomy, University of Pittsburgh, 3941 O'Hara Street, Pittsburgh, PA 15260 (United States)

    2011-04-20

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is less serious.
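
    A minimal sketch of the linear-regime recipe quoted above (cosmic variance of a galaxy sample = galaxy bias x dark-matter cosmic variance). The bias and dark-matter values below are illustrative placeholders, not the paper's tabulated fitting-function output; in practice they would come from the authors' tables or software tool.

        def galaxy_cosmic_variance(sigma_dm, bias):
            """Relative cosmic variance of a galaxy sample in the linear regime."""
            return bias * sigma_dm

        # Illustrative numbers only, chosen so the product matches the quoted ~38%
        # for massive (m* > 10^11 M_sun) galaxies in a GOODS-like field at z ~ 2.
        sigma_dm_goods = 0.09   # assumed dark-matter cosmic variance for the field geometry
        bias_massive = 4.2      # assumed bias of the massive galaxy sample
        print(galaxy_cosmic_variance(sigma_dm_goods, bias_massive))  # ~0.38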

  4. Nuclear Material Variance Calculation

    Energy Science and Technology Software Center (OSTI)

    1995-01-01

    MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet that significantly reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM). The user is required to enter information into one of four data tables depending on the type of term in the materials balance (MB) equation. The four data tables correspond to input transfers, output transfers, and two types of inventory terms, one for nondestructive assay (NDA) measurements and one for measurements made by chemical analysis. Each data entry must contain an identification number and a short description, as well as values for the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements during an accounting period. The user must also specify the type of error model (additive or multiplicative) associated with each measurement, and possible correlations between transfer terms. Predefined spreadsheet macros are used to perform the variance and covariance calculations for each term based on the corresponding set of entries. MAVARIC has been used for sensitivity studies of chemical separation facilities, fuel processing and fabrication facilities, and gas centrifuge and laser isotope enrichment facilities.
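
    A minimal sketch (not the MAVARIC spreadsheet itself) of the kind of propagation it automates for a single transfer term, assuming independent multiplicative errors on concentration and bulk mass; MAVARIC additionally handles additive error models and correlations between terms.

        import math

        def transfer_term_variance(conc, bulk_mass, rsd_conc, rsd_mass, n_items):
            """SNM total and variance for n independent, identical transfer items."""
            snm_per_item = conc * bulk_mass
            var_per_item = snm_per_item ** 2 * (rsd_conc ** 2 + rsd_mass ** 2)
            return n_items * snm_per_item, n_items * var_per_item

        total, var = transfer_term_variance(conc=0.005, bulk_mass=200.0,
                                            rsd_conc=0.01, rsd_mass=0.002, n_items=50)
        print(total, math.sqrt(var))   # total SNM and its standard deviation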

  5. Cosmology without cosmic variance

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Bernstein, Gary M.; Cai, Yan -Chuan

    2011-10-01

    The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.

  6. Memorandum Approval of a Permanent Variance Regarding Static Magnetic Fields at Brookhaven National Laboratory (Variance 1021)

    Broader source: Energy.gov [DOE]

    Approval of a Permanent Variance Regarding Static Magnetic Fields at Brookhaven National Laboratory (Variance 1021)

  7. SWS Variance Request Form | Department of Energy

    Energy Savers [EERE]

    Variance Request Form SWS Variance Request Form As Grantees update and revise their field standards to align with the SWS, they may discover certain specifications that cannot be implemented precisely as described in the relevant SWS. In such cases, Grantees may request a variance from the relevant SWS. To be granted a variance, the attached request form must be completed in full. Complete one form for each variance requested, which may on occasion include more than one SWS. For example,

  8. Memorandum, Approval of a Permanent Variance Regarding Static Magnetic Fields at Brookhaven National Laboratory (Variance 1021)

    Broader source: Energy.gov [DOE]

    Approval of a Permanent Variance Regarding Static Magnetic Fields at Brookhaven National Laboratory (Variance 1021)

  9. U.S. Energy Information Administration (EIA) Indexed Site

    File 3: Operating Hours (cb86f03.csv). Columns: Questionnaire item, Variable Description, Variable Name, Position, Format. Building identifier BLDGID3 1-5; Adjusted weight ADJWT3 7-14; Variance stratum STRATUM3 16-17; Pair member PAIR3 19-19; Census region REGION3 21-21 $REGION.; Census division CENDIV3 23-23 $CENDIV.; B-2 Square footage SQFTC3 25-26 $SQFTC.; Principal building activity PBA3 28-29 $ACTIVTY.; Regular operating hours REGHRS3 31-31 $YESNO.; C-5 Monday thru Friday opening
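
    If the positions listed above describe a fixed-width layout (the .csv extension suggests the public-use file may instead be comma-delimited), a hedged pandas sketch for reading the first few File 3 variables, converting the 1-based inclusive column ranges to 0-based half-open intervals, could look like this:

        import pandas as pd

        # Column positions taken from the layout above (e.g. STRATUM3 occupies columns 16-17).
        colspecs = [(0, 5), (6, 14), (15, 17), (18, 19), (20, 21), (22, 23), (24, 26), (27, 29)]
        names = ["BLDGID3", "ADJWT3", "STRATUM3", "PAIR3", "REGION3", "CENDIV3", "SQFTC3", "PBA3"]

        df = pd.read_fwf("cb86f03.csv", colspecs=colspecs, names=names)
        print(df[["BLDGID3", "ADJWT3", "STRATUM3", "PAIR3"]].head())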

  10. U.S. Energy Information Administration (EIA) Indexed Site

    File 4: Building Shell, Equipment, Energy Audits, and "Other" Conservation Features (cb86f04.csv). Columns: Questionnaire item, Variable Description, Variable Name, Position, Format. Building identifier BLDGID3 1-5; Adjusted weight ADJWT3 7-14; Variance stratum STRATUM3 16-17; Pair member PAIR3 19-19; Census region REGION3 21-21 $REGION.; Census division CENDIV3 23-23 $CENDIV.; B-2 Square footage SQFTC3 25-26 $SQFTC.; Principal building activity PBA3 28-29 $ACTIVTY.; D-2 Year

  11. U.S. Energy Information Administration (EIA) Indexed Site

    File 6: End Uses of Minor Energy Sources (cb86f06.csv). Columns: Questionnaire item, Variable Description, Variable Name, Position, Format. Building identifier BLDGID3 1-5; Adjusted weight ADJWT3 7-14; Variance stratum STRATUM3 16-17; Pair member PAIR3 19-19; Census region REGION3 21-21 $REGION.; Census division CENDIV3 23-23 $CENDIV.; B-2 Square footage SQFTC3 25-26 $SQFTC.; Principal building activity PBA3 28-29 $ACTIVTY.; D-2 Year construction was completed YRCONC3 31-32 $YRCONC.

  12. U.S. Energy Information Administration (EIA) Indexed Site

    File 12: Imputation Flags for Summary Data, Building Activity, Operating Hours, Shell and Equipment (cb86f12.csv). Columns: Questionnaire item, Variable Description, Variable Name, Position, Format. Building identifier BLDGID3 1-5; Adjusted weight ADJWT3 7-14; Variance stratum STRATUM3 16-17; Pair member PAIR3 19-19; Census region REGION3 21-21 $REGION.; Census division CENDIV3 23-23 $CENDIV.; B-2 Square footage SQFTC3 25-26 $SQFTC.; Principal building activity PBA3 28-29 $ACTIVTY.; D-2

  13. T:\ClearanceEMEUConsumption\cbecs\pubuse86\txt\cb86sasfmt&layout.txt

    U.S. Energy Information Administration (EIA) Indexed Site

    File 1: Summary File (cb86f01.csv). Columns: Questionnaire item, Variable Description, Variable Name, Position, Format. Building identifier BLDGID3 1-5; Adjusted weight ADJWT3 7-14; Variance stratum STRATUM3 16-17; Pair member PAIR3 19-19; Census region REGION3 21-21 $REGION.; Census division CENDIV3 23-23 $CENDIV.; Metropolitan statistical area MSA3 25-25 $MSA.; Climate zone CLIMATE3 27-27 $CLIMAT.; B-1 Square footage SQFT3 29-35

  14. The Theory of Variances in Equilibrium Reconstruction

    SciTech Connect (OSTI)

    Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren

    2008-01-14

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.

  15. Reduction of Emission Variance by Intelligent Air Path Control | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    This poster describes an air path control concept, which minimizes NOx and PM emission variance while having the ability to run reliably with many different sensor configurations. PDF: p-17_nanjundaswamy.pdf

  16. Hawaii Application for Community Noise Variance (DOH Form) |...

    Open Energy Info (EERE)

    Application for Community Noise Variance. Organization: State of Hawaii Department of Health. Publisher: Not Provided. Published: 07/2013. DOI: Not Provided.

  17. Hawaii Variance from Pollution Control Permit Packet (Appendix...

    Open Energy Info (EERE)

    Variance from Pollution Control Permit Packet (Appendix S-13). OpenEI Reference Library: Permitting/Regulatory Guidance - Supplemental...

  18. A Clock Synchronization Strategy for Minimizing Clock Variance...

    Office of Scientific and Technical Information (OSTI)

    The technique is designed to minimize variance from a reference chimer during runtime and with minimal time-request latency. Our scheme permits initial unbounded variations in time ...

  19. Hawaii Guide for Filing Community Noise Variance Applications...

    Open Energy Info (EERE)

    Applications. State of Hawaii. Guide for Filing Community Noise Variance Applications. 4p. Guide/Handbook sent to Retrieved from "http://en.openei.org/w/index.php?title=HawaiiGu...

  20. A Clock Synchronization Strategy for Minimizing Clock Variance at Runtime in High-end Computing Environments

    Office of Scientific and Technical Information (OSTI)

    We present a new software-based clock synchronization scheme designed to provide high precision time agreement among distributed memory nodes. The technique is designed

  1. Smoothing method aids gas-inventory variance trending

    SciTech Connect (OSTI)

    Mason, R.G.

    1992-03-23

    This paper reports on a method for determining gas-storage inventory and variance in a natural-gas storage field which uses the equations developed to determine gas-in-place in a production field. The calculations use acquired data for shut-in pressures, reservoir pore volume, and storage gas properties. These calculations are then graphed and trends are developed. Evaluating trends in inventory variance can be enhanced by use of a technique, described here, that smooths the peaks and valleys of an inventory-variance curve. Calculations using the acquired data determine inventory for a storage field whose drive mechanism is gas expansion (that is, volumetric). When used for a dry gas, condensate, or gas-condensate reservoir, the formulas require no further modification. Inventory in depleted oil fields can be determined in this same manner, as well. Some additional calculations, however, must be made to assess the influence of oil production on the gas-storage process.
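
    A minimal sketch of the smoothing idea: a centered moving average damps the peaks and valleys of an inventory-variance series so the underlying trend is easier to read. The window length and the variance series are illustrative assumptions, not values from the paper.

        def moving_average(series, window=3):
            """Centered moving average; the window shrinks near the ends of the series."""
            half = window // 2
            out = []
            for i in range(len(series)):
                lo, hi = max(0, i - half), min(len(series), i + half + 1)
                out.append(sum(series[lo:hi]) / (hi - lo))
            return out

        inventory_variance = [1.2, -0.4, 2.1, 0.3, -1.5, 0.8, 1.9, -0.2, 0.5, 1.1]  # e.g. Bcf
        print(moving_average(inventory_variance, window=3))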

  2. Fringe biasing: A variance reduction technique for optically thick meshes

    SciTech Connect (OSTI)

    Smedley-Stevenson, R. P.

    2013-07-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
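
    An illustrative sketch of the stratified sourcing idea (not the code analysed in the paper): emission in an optically thick cell is split into an interior stratum and a fringe stratum, the fringe receives a disproportionate share of the particles, and weights are scaled so the total emitted energy is preserved. All numbers are assumptions.

        import random

        def source_particles(total_energy, fringe_energy_frac, n_particles, fringe_particle_frac):
            particles = []
            strata = [
                ("interior", total_energy * (1.0 - fringe_energy_frac),
                 int(n_particles * (1.0 - fringe_particle_frac))),
                ("fringe", total_energy * fringe_energy_frac,
                 int(n_particles * fringe_particle_frac)),
            ]
            for name, energy, count in strata:
                weight = energy / count   # weights in each stratum sum to the stratum energy
                particles += [{"stratum": name, "weight": weight, "xi": random.random()}
                              for _ in range(count)]
            return particles

        parts = source_particles(1.0, fringe_energy_frac=0.2, n_particles=1000,
                                 fringe_particle_frac=0.8)
        print(sum(p["weight"] for p in parts))   # ~1.0: emitted energy is conserved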

  3. A Hybrid Variance Reduction Method Based on Gaussian Process for Core Simulation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Hybrid Variance Reduction Method Based on Gaussian Process for Core Simulation Zeyun Wu, Qiong Zhang and Hany S. Abdel-Khalik Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 {zwu3, qzhang7, abdelkhalik}@ncsu.edu INTRODUCTION Variance reduction techniques are usually employed to accelerate the convergence of Monte Carlo (MC) simulation. Hybrid deterministic-MC methods [1, 2, 3] have been recently developed to achieve the goal of global variance reduction.

  4. Development of a treatability variance guidance document for US DOE mixed-waste streams

    SciTech Connect (OSTI)

    Scheuer, N.; Spikula, R.; Harms, T. (Environmental Guidance Div.); Triplett, M.B.

    1990-03-01

    In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a treatability variance guidance document was prepared. The guidance manual is for use by DOE facilities and operations offices. The manual was prepared as a part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA (LDRs). A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. The manual, Guidance For Obtaining Variance From the Treatment Standards of the RCRA Land Disposal Restrictions (1), leads the reader through the process of evaluating whether a variance from the treatment standard is a viable approach and through the data-gathering and data-evaluation processes required to develop a petition requesting a variance. The DOE review and coordination process is also described and model language for use in petitions for DOE radioactive mixed waste (RMW) is provided. The guidance manual focuses on RMW streams, however the manual also is applicable to nonmixed, hazardous waste streams. 4 refs.

  5. EVMS Training Snippet: 5.4 PARSII Analysis: Variance Reports | Department of Energy

    Office of Environmental Management (EM)

    This EVMS Training Snippet, sponsored by the Office of Project Management (PM), is one in a series regarding PARS II Analysis reports. PARS II offers direct insight into EVM project data from the contractor's internal systems. The reports were developed with the users in mind, organized and presented in an easy to follow manner, with analysis results and key information to determine the

  6. Development of guidance for variances from the RCRA Land Disposal Restrictions for US DOE mixed-waste streams

    SciTech Connect (OSTI)

    Scheuer, N.; Spikula, R.; Harms, T. (Environmental Guidance Div.); Triplett, M.B.

    1990-02-01

    In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a guidance manual was prepared. The guidance manual is for use by DOE facilities and operations offices in obtaining variances from the RCRA LDR treatment standards. The manual was prepared as a part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA LDRs. The manual addresses treatability variances and equivalent treatment variances. A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. An equivalent treatment variance is granted by EPA that allows treatment of a restricted waste by a process that differs from that specified in the standards, but achieves a level of performance equivalent to the technology specified in the standard. 4 refs.

  7. Waste Isolation Pilot Plant no-migration variance petition. Executive summary

    SciTech Connect (OSTI)

    Not Available

    1990-12-31

    Section 3004 of RCRA allows EPA to grant a variance from the land disposal restrictions when a demonstration can be made that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the disposal unit for as long as the waste remains hazardous. Specific requirements for making this demonstration are found in 40 CFR 268.6, and EPA has published a draft guidance document to assist petitioners in preparing a variance request. Throughout the course of preparing this petition, technical staff from DOE, EPA, and their contractors have met frequently to discuss and attempt to resolve issues specific to radioactive mixed waste and the WIPP facility. The DOE believes it meets or exceeds all requirements set forth for making a successful "no-migration" demonstration. The petition presents information under five general headings: (1) waste information; (2) site characterization; (3) facility information; (4) assessment of environmental impacts, including the results of waste mobility modeling; and (5) analysis of uncertainties. Additional background and supporting documentation is contained in the 15 appendices to the petition, as well as in an extensive addendum published in October 1989.

  8. Microsoft PowerPoint - Snippet 5.4 PARS II Analysis-Variance Analysis Reports 20140627 [Compatibility Mode]

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    variances at both the Performance Measurement Baseline level and at the Work Breakdown Structure levels. In PARS II under the SSS Reports selection on the left, there are folders to the right. The reports being discussed are in the Analysis Reports folder. That folder is broken down into various subfolders pertaining to OAPM's EVMS Project Analysis Standard Operating Procedure (EPASOP). This Snippet covers the subfolder named Variance Analysis. These reports are useful for anyone responsible for

  9. Memorandum, Request for Concurrence on Three Temporary Variance Applications Regarding Fire Protection and Pressure Safety at the Oak Ridge National Laboratory

    Office of Energy Efficiency and Renewable Energy (EERE)

    Request for Concurrence on Three Temporary Variance Applications Regarding Fire Protection and Pressure Safety at the Oak Ridge National Laboratory

  10. No-migration variance petition: Draft. Volume 4, Appendices DIF, GAS, GCR (Volume 1)

    SciTech Connect (OSTI)

    1995-05-31

    The Department of Energy is responsible for the disposition of transuranic (TRU) waste generated by national defense-related activities. Approximately 2.6 million cubic feet of these wastes have been generated and are stored at various facilities across the country. The Waste Isolation Pilot Plant (WIPP) was sited and constructed to meet stringent disposal requirements. In order to permanently dispose of TRU waste, the DOE has elected to petition the US EPA for a variance from the Land Disposal Restrictions of RCRA. This document fulfills the reporting requirements for the petition. This report is volume 4 of the petition, which presents details about the transport characteristics across drum filter vents and polymer bags; gas generation reactions and rates during long-term WIPP operation; and geological characterization of the WIPP site.

  11. ACCOUNTING FOR COSMIC VARIANCE IN STUDIES OF GRAVITATIONALLY LENSED HIGH-REDSHIFT GALAXIES IN THE HUBBLE FRONTIER FIELD CLUSTERS

    SciTech Connect (OSTI)

    Robertson, Brant E.; Stark, Dan P.; Ellis, Richard S.; Dunlop, James S.; McLure, Ross J.; McLeod, Derek

    2014-12-01

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ~35% at redshift z ~ 7 to ~65% at z ~ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.

  12. Variance Analysis of Wind and Natural Gas Generation under Different Market Structures: Some Observations

    SciTech Connect (OSTI)

    Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.

    2012-01-01

    Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.
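
    A hedged two-asset sketch of the variance-reduction argument in the abstract: if wind costs are (nearly) uncorrelated with gas costs, a mixed portfolio has a lower cost variance than gas alone. The cost statistics below are invented for illustration and are not from the PJM analysis.

        def portfolio_variance(w_wind, var_wind, var_gas, cov_wind_gas):
            """Variance of total cost for a share w_wind of wind and (1 - w_wind) of gas."""
            w_gas = 1.0 - w_wind
            return (w_wind ** 2 * var_wind + w_gas ** 2 * var_gas
                    + 2.0 * w_wind * w_gas * cov_wind_gas)

        var_gas, var_wind, cov = 25.0, 4.0, 0.0   # ($/MWh)^2; zero correlation assumed
        print(portfolio_variance(0.0, var_wind, var_gas, cov))   # all gas: 25.0
        print(portfolio_variance(0.3, var_wind, var_gas, cov))   # 30% wind: ~12.6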

  13. Waste Isolation Pilot Plant No-Migration Variance Petition. Revision 1, Volume 1

    SciTech Connect (OSTI)

    Hunt, Arlen

    1990-03-01

    The purpose of the WIPP No-Migration Variance Petition is to demonstrate, according to the requirements of RCRA §3004(d) and 40 CFR §268.6, that to a reasonable degree of certainty, there will be no migration of hazardous constituents from the facility for as long as the wastes remain hazardous. The DOE submitted the petition to the EPA in March 1989. Upon completion of its initial review, the EPA provided to DOE a Notice of Deficiencies (NOD). DOE responded to the EPA's NOD and met with the EPA's reviewers of the petition several times during 1989. In August 1989, EPA requested that DOE submit significant additional information addressing a variety of topics including: waste characterization, ground water hydrology, geology and dissolution features, monitoring programs, the gas generation test program, and other aspects of the project. This additional information was provided to EPA in January 1990 when DOE submitted Revision 1 of the Addendum to the petition. For clarity and ease of review, this document includes all of these submittals, and the information has been updated where appropriate. This document is divided into the following sections: Introduction, 1.0; Facility Description, 2.0; Waste Description, 3.0; Site Characterization, 4.0; Environmental Impact Analysis, 5.0; Prediction and Assessment of Infrequent Events, 6.0; and References, 7.0.

  14. Horizontal-Velocity and Variance Measurements in the Stable Boundary Layer Using Doppler Lidar: Sensitivity to Averaging Procedures

    SciTech Connect (OSTI)

    Pichugina, Yelena L.; Banta, Robert M.; Kelley, Neil D.; Jonkman, Bonnie J.; Tucker, Sara C.; Newsom, Rob K.; Brewer, W. A.

    2008-08-01

    Quantitative data on turbulence variables aloft--above the region of the atmosphere conveniently measured from towers--have been an important but difficult measurement need for advancing understanding and modeling of the stable boundary layer (SBL). Vertical profiles of streamwise velocity variances obtained from NOAA's High Resolution Doppler Lidar (HRDL), which have been shown to be numerically equivalent to turbulence kinetic energy (TKE) for stable conditions, are a measure of the turbulence in the SBL. In the present study, the mean horizontal wind component U and variance σ²_u were computed from HRDL measurements of the line-of-sight (LOS) velocity using a technique described in Banta et al. (2002). The technique was tested on datasets obtained during the Lamar Low-Level Jet Project (LLLJP) carried out in early September 2003, near the town of Lamar in southeastern Colorado. This paper compares U with mean wind speed obtained from sodar and sonic anemometer measurements. It then describes several series of averaging tests that produced the best correlation between TKE calculated from sonic anemometer data at several tower levels and lidar measurements of horizontal velocity variance σ²_u. The results show high correlation (0.71-0.97) of the mean U and average wind speed measured by sodar and in-situ instruments, independent of sampling strategies and averaging procedures. Comparison of estimates of variance, on the other hand, proved sensitive to both the spatial and temporal averaging techniques.

  15. No-migration variance petition for the Waste Isolation Pilot Plant

    SciTech Connect (OSTI)

    Carnes, R.G.; Hart, J.S.; Knudtsen, K.

    1990-01-01

    The Waste Isolation Pilot Plant (WIPP) is a US Department of Energy (DOE) project to provide a research and development facility to demonstrate the safe disposal of radioactive waste resulting from US defense activities and programs. The DOE is developing the WIPP facility as a deep geologic repository in bedded salt for transuranic (TRU) waste currently stored at or generated by DOE defense installations. Approximately 60 percent of the wastes proposed to be emplaced in the WIPP are radioactive mixed wastes. Because such mixed wastes contain a hazardous chemical component, the WIPP is subject to requirements of the Resource Conservation and Recovery Act (RCRA). In 1984 Congress amended the RCRA with passage of the Hazardous and Solid Waste Amendments (HSWA), which established a stringent regulatory program to prohibit the land disposal of hazardous waste unless (1) the waste is treated to meet treatment standards or other requirements established by the Environmental Protection Agency (EPA) under §3004(n), or (2) the EPA determines that compliance with the land disposal restrictions is not required in order to protect human health and the environment. The DOE WIPP Project Office has prepared and submitted to the EPA a no-migration variance petition for the WIPP facility. The purpose of the petition is to demonstrate, according to the requirements of RCRA §3004(d) and 40 CFR §268.6, that to a reasonable degree of certainty, there will be no migration of hazardous constituents from the WIPP facility for as long as the wastes remain hazardous. This paper provides an overview of the petition and describes the EPA review process, including key issues that have emerged during the review. 5 refs.

  16. Memorandum Request for Concurrence on Three Temporary Variance Applications Regarding Fire Protection and Pressure Safety at the Oak Ridge National Laboratory

    Office of Energy Efficiency and Renewable Energy (EERE)

    Memorandum Request for Concurrence on Three Temporary Variance Applications Regarding Fire Protection and Pressure Safety at the Oak Ridge National Laboratory

  17. Memorandum CH2M WG Idaho, LLC, Request for Variance to Title 10, Code of Federal Regulations Part 851, "Worker Safety and Health Program"

    Broader source: Energy.gov [DOE]

    Memorandum CH2M WG Idaho, LLC, Request for Variance to Title 10, Code of Federal Regulations Part 851, "Worker Safety and Health Program"

  18. Memorandum Approval of a Permanent Variance Regarding Sprinklers and Fire Boundaries in Selected Areas of 221-H Canyon at the Savannah River Site

    Broader source: Energy.gov [DOE]

    Approval of a Permanent Variance Regarding Sprinklers and Fire Boundaries in Selected Areas of 221-H Canyon at the Savannah River Site

  19. Memorandum, Approval of a Permanent Variance Regarding Sprinklers and Fire Boundaries in Selected Areas of 221-H Canyon at the Savannah River Site

    Broader source: Energy.gov [DOE]

    Approval of a Permanent Variance Regarding Fire Safety in Selected Areas of 221-H Canyon at the Savannah River Site UNDER SECRETARY OF ENERGY

  20. Memorandum, CH2M WG Idaho, LLC, Request for Variance to Title 10, Code of Federal Regulations Part 851, "Worker Safety and Health"

    Broader source: Energy.gov [DOE]

    CH2M WG Idaho, LLC, Request for Variance to Title 10, Code of Federal Regulations Part 851, "Worker Safety and Health"

  1. Horizontal Velocity and Variance Measurements in the Stable Boundary Layer Using Doppler Lidar: Sensitivity to Averaging Procedures

    SciTech Connect (OSTI)

    Pichugina, Y. L.; Banta, R. M.; Kelley, N. D.; Jonkman, B. J.; Tucker, S. C.; Newsom, R. K.; Brewer, W. A.

    2008-08-01

    Quantitative data on turbulence variables aloft--above the region of the atmosphere conveniently measured from towers--have been an important but difficult measurement need for advancing understanding and modeling of the stable boundary layer (SBL). Vertical profiles of streamwise velocity variances obtained from NOAA's high-resolution Doppler lidar (HRDL), which have been shown to be approximately equal to turbulence kinetic energy (TKE) for stable conditions, are a measure of the turbulence in the SBL. In the present study, the mean horizontal wind component U and variance σ²_u were computed from HRDL measurements of the line-of-sight (LOS) velocity using a method described by Banta et al., which uses an elevation (vertical slice) scanning technique. The method was tested on datasets obtained during the Lamar Low-Level Jet Project (LLLJP) carried out in early September 2003, near the town of Lamar in southeastern Colorado. This paper compares U with mean wind speed obtained from sodar and sonic anemometer measurements. The results for the mean U and mean wind speed measured by sodar and in situ instruments for all nights of LLLJP show high correlation (0.71-0.97), independent of sampling strategies and averaging procedures, and correlation coefficients consistently >0.9 for four high-wind nights, when the low-level jet speeds exceeded 15 m s⁻¹ at some time during the night. Comparison of estimates of variance, on the other hand, proved sensitive to both the spatial and temporal averaging parameters. Several series of averaging tests are described, to find the best correlation between TKE calculated from sonic anemometer data at several tower levels and lidar measurements of horizontal-velocity variance σ²_u. Because of the nonstationarity of the SBL data, the best results were obtained when the velocity data were first averaged over intervals of 1 min, and then further averaged over 3-15 consecutive 1-min intervals, with best results for the 10- and 15-min averaging periods. For these cases, correlation coefficients exceeded 0.9. As a part of the analysis, Eulerian integral time scales (τ) were estimated for the four high-wind nights. Time series of τ through each night indicated erratic behavior consistent with the nonstationarity. Histograms of τ showed a mode at 4-5 s, but frequent occurrences of larger τ values, mostly between 10 and 100 s.
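
    One possible reading of the two-stage averaging described above, as a hedged sketch: raw velocities are block-averaged into 1-min means, and the horizontal-velocity variance is then formed over windows of several consecutive 1-min blocks (10-15 min gave the best agreement with sonic TKE in the paper). The sampling rate and synthetic data are assumptions.

        import numpy as np

        def two_stage_variance(u, samples_per_min, blocks_per_window=10):
            n_min = len(u) // samples_per_min
            one_min_means = u[:n_min * samples_per_min].reshape(n_min, samples_per_min).mean(axis=1)
            variances = []
            for start in range(0, n_min - blocks_per_window + 1, blocks_per_window):
                window = one_min_means[start:start + blocks_per_window]
                variances.append(window.var(ddof=1))
            return np.array(variances)

        rng = np.random.default_rng(0)
        u = 10.0 + rng.normal(0.0, 1.0, size=3600)          # one hour of 1 Hz velocities, m/s
        print(two_stage_variance(u, samples_per_min=60))    # one variance value per 10-min window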

  2. ARM - Publications: Science Team Meeting Documents: Variance...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    in shallow cumulus topped mixed layers is studied using large-eddy simulation (LES) results. The simulations are based on a range of different shallow cumulus cases,...

  3. ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1

    SciTech Connect (OSTI)

    Mosher, Scott W.; Johnson, Seth R.; Bevill, Aaron M.; Ibrahim, Ahmad M.; Daily, Charles R.; Evans, Thomas M.; Wagner, John C.; Johnson, Jeffrey O.; Grove, Robert E.

    2015-08-01

    The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.
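
    For context, the tally figure of merit used in MCNP-style comparisons like the one above is FOM = 1 / (R^2 T), where R is the tally relative error and T the computer time; the illustrative numbers below are not taken from the report.

        def figure_of_merit(relative_error, minutes):
            return 1.0 / (relative_error ** 2 * minutes)

        analog_fom = figure_of_merit(relative_error=0.10, minutes=600.0)
        advantg_fom = figure_of_merit(relative_error=0.01, minutes=600.0)
        print(advantg_fom / analog_fom)   # implied speedup factor, here 100x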

  4. Estimating pixel variances in the scenes of staring sensors

    DOE Patents [OSTI]

    Simonson, Katherine M. (Cedar Crest, NM); Ma, Tian J. (Albuquerque, NM)

    2012-01-24

    A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
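
    A hedged sketch of the change-detection idea in the abstract: build a raw difference frame, estimate a per-pixel error that grows with the local spatial intensity gradient (where small jitter produces large spurious differences), and flag changes only where the difference exceeds a multiple of that estimate. The jitter scale, noise level, and threshold are assumptions, not the patented algorithm's actual parameters.

        import numpy as np

        def detect_changes(reference, current, jitter_px=0.5, noise_sigma=2.0, k=4.0):
            diff = current.astype(float) - reference.astype(float)
            gy, gx = np.gradient(reference.astype(float))
            pixel_error = np.sqrt(noise_sigma ** 2 + (jitter_px * np.hypot(gx, gy)) ** 2)
            return np.abs(diff) > k * pixel_error

        rng = np.random.default_rng(1)
        ref = rng.normal(100.0, 2.0, size=(64, 64))
        cur = ref + rng.normal(0.0, 2.0, size=(64, 64))
        cur[30:34, 30:34] += 40.0                      # an actual change in the scene
        print(int(detect_changes(ref, cur).sum()))     # number of pixels flagged as changed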

  5. Permits and Variances for Solar Panels, Calculation of Impervious...

    Broader source: Energy.gov (indexed) [DOE]

    construction, or stormwater may only include the foundation or base supporting the solar panel. The law generally applies statewide, including charter counties and Baltimore...

  6. EVMS Training Snippet: 5.4 PARSII Analysis: Variance Reports...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    sponsored by the Office of Project Management (PM) is one in a series regarding PARS II Analysis reports. PARS II offers direct insight into EVM project data from the...

  7. A BASIS FOR MODIFYING THE TANK 12 COMPOSITE SAMPLING DESIGN

    SciTech Connect (OSTI)

    Shine, G.

    2014-11-25

    The SRR sampling campaign to obtain residual solids material from the Savannah River Site (SRS) Tank Farm Tank 12 primary vessel resulted in obtaining appreciable material in all 6 planned source samples from the mound strata but only in 5 of the 6 planned source samples from the floor stratum. Consequently, the design of the compositing scheme presented in the Tank 12 Sampling and Analysis Plan, Pavletich (2014a), must be revised. Analytical Development of SRNL statistically evaluated the sampling uncertainty associated with using various compositing arrays and splitting one or more samples for compositing. The variance of the simple mean of composite sample concentrations is a reasonable standard to investigate the impact of the following sampling options.
    Composite Sample Design Option (a): Assign only 1 source sample from the floor stratum and 1 source sample from each of the mound strata to each of the composite samples. Each source sample contributes material to only 1 composite sample. Two source samples from the floor stratum would not be used.
    Composite Sample Design Option (b): Assign 2 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that one source sample from the floor must be used twice, with 2 composite samples sharing material from this particular source sample. All five source samples from the floor would be used.
    Composite Sample Design Option (c): Assign 3 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that several of the source samples from the floor stratum must be assigned to more than one composite sample. All 5 source samples from the floor would be used.
    Using fewer than 12 source samples will increase the sampling variability over that of the Basic Composite Sample Design, Pavletich (2013). Considering the impact to the variance of the simple mean of the composite sample concentrations, the recommendation is to construct each sample composite using four or five source samples. Although the variance using 5 source samples per composite sample (Option (c)) was slightly less than the variance using 4 source samples per composite sample (Option (b)), there is no practical difference between those variances. This does not consider that the measurement error variance, which is the same for all composite sample design options considered in this report, will further dilute any differences. Composite Sample Design Option (a) had the largest variance for the mean concentration in the three composite samples and should be avoided. These results are consistent with Pavletich (2014b), which utilizes a low elevation and a high elevation mound source sample and two floor source samples for each composite sample. Utilizing the four source samples per composite design, Pavletich (2014b) utilizes aliquots of Floor Sample 4 for two composite samples.
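
    A hedged sketch of the comparison criterion discussed above. Treating every source-sample concentration as independent with a common sampling variance σ², the variance of the simple mean of the composite concentrations is σ² times the sum of squared weights each source sample receives; reusing floor samples concentrates weight and so changes that sum. The specific sample-to-composite assignments below are hypothetical stand-ins, not those in Pavletich (2014a/b), and under this simplified model options (b) and (c) come out very close, consistent with the report's conclusion of no practical difference.

        def variance_factor(composites):
            """Sum of squared weights of the source samples in the mean of the composites."""
            weights = {}
            for comp in composites:
                for s in comp:
                    weights[s] = weights.get(s, 0.0) + 1.0 / (len(composites) * len(comp))
            return sum(w * w for w in weights.values())

        # Floor samples F1..F5, mound samples M1..M6; three composite samples per design.
        option_a = [["F1", "M1", "M2"], ["F2", "M3", "M4"], ["F3", "M5", "M6"]]
        option_b = [["F1", "F2", "M1", "M2"], ["F3", "F4", "M3", "M4"], ["F5", "F1", "M5", "M6"]]
        option_c = [["F1", "F2", "F3", "M1", "M2"], ["F3", "F4", "F5", "M3", "M4"],
                    ["F5", "F1", "F2", "M5", "M6"]]

        for name, design in (("(a)", option_a), ("(b)", option_b), ("(c)", option_c)):
            print(name, round(variance_factor(design), 4))   # smaller = lower variance of the mean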

  8. Orthogonal control of expression mean and variance by epigenetic features at different genomic loci

    SciTech Connect (OSTI)

    Dey, Siddharth S.; Foley, Jonathan E.; Limsirichai, Prajit; Schaffer, David V.; Arkin, Adam P.

    2015-05-05

    While gene expression noise has been shown to drive dramatic phenotypic variations, the molecular basis for this variability in mammalian systems is not well understood. Gene expression has been shown to be regulated by promoter architecture and the associated chromatin environment. However, the exact contribution of these two factors in regulating expression noise has not been explored. Using a dual-reporter lentiviral model system, we deconvolved the influence of the promoter sequence to systematically study the contribution of the chromatin environment at different genomic locations in regulating expression noise. By integrating a large-scale analysis to quantify mRNA levels by smFISH and protein levels by flow cytometry in single cells, we found that mean expression and noise are uncorrelated across genomic locations. Furthermore, we showed that this independence could be explained by the orthogonal control of mean expression by the transcript burst size and noise by the burst frequency. Finally, we showed that genomic locations displaying higher expression noise are associated with more repressed chromatin, thereby indicating the contribution of the chromatin environment in regulating expression noise.
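
    A hedged sketch using the standard bursty-transcription relations (not equations from the paper): with burst frequency f, mean burst size b, and decay rate g, the mean is f*b/g while the squared coefficient of variation is roughly (1 + b)/mean, which for b >> 1 is set mainly by f. This is one way to rationalize the reported orthogonal control of mean (by burst size) and noise (by burst frequency).

        def bursty_moments(burst_freq, burst_size, decay_rate=1.0):
            mean = burst_freq * burst_size / decay_rate
            cv2 = (1.0 + burst_size) / mean     # noise, as squared coefficient of variation
            return mean, cv2

        print(bursty_moments(burst_freq=0.5, burst_size=20.0))   # baseline
        print(bursty_moments(burst_freq=0.5, burst_size=40.0))   # mean doubles, CV^2 barely moves
        print(bursty_moments(burst_freq=1.0, burst_size=20.0))   # CV^2 halves (mean doubles too)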

  9. No-migration variance petition. Appendices C--J: Volume 5, Revision 1

    SciTech Connect (OSTI)

    Not Available

    1990-03-01

    Volume V contains the appendices for: closure and post-closure plans; RCRA ground water monitoring waiver; Waste Isolation Division Quality Program Manual; water quality sampling plan; WIPP Environmental Procedures Manual; sample handling and laboratory procedures; data analysis; and Annual Site Environmental Monitoring Report for the Waste Isolation Pilot Plant.

  10. No-migration variance petition. Appendix B, Attachments E--Q: Volume 4, Revision 1

    SciTech Connect (OSTI)

    Not Available

    1990-03-01

    Volume IV contains the following attachments: TRU mixed waste characterization database; hazardous constituents of Rocky Flats transuranic waste; summary of waste components in TRU waste sampling program at INEL; total volatile organic compounds (VOC) analyses at Rocky Flats Plant; total metals analyses from Rocky Flats Plant; results of toxicity characteristic leaching procedure (TCLP) analyses; results of extraction procedure (EP) toxicity data analyses; summary of headspace gas analysis in Rocky Flats Plant (RFP) -- sampling program FY 1988; waste drum gas generation -- sampling program at Rocky Flats Plant during FY 1988; TRU waste sampling program -- volume one; TRU waste sampling program -- volume two; summary of headspace gas analyses in TRU waste sampling program; and summary of volatile organic compounds (VOC) analyses in TRU waste sampling program.

  11. No-migration variance petition. Appendices A--B: Volume 2, Revision 1

    SciTech Connect (OSTI)

    Not Available

    1990-03-01

    Volume II contains Appendix A, emergency plan and Appendix B, waste analysis plan. The Waste Isolation Pilot Plant (WIPP) Emergency plan and Procedures (WP 12-9, Rev. 5, 1989) provides an organized plan of action for dealing with emergencies at the WIPP. A contingency plan is included which is in compliance with 40 CFR Part 265, Subpart D. The waste analysis plan provides a description of the chemical and physical characteristics of the wastes to be emplaced in the WIPP underground facility. A detailed discussion of the WIPP Waste Acceptance Criteria and the rationale for its established units are also included.

  12. Orthogonal control of expression mean and variance by epigenetic features at different genomic loci

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Dey, Siddharth S.; Foley, Jonathan E.; Limsirichai, Prajit; Schaffer, David V.; Arkin, Adam P.

    2015-05-05

    While gene expression noise has been shown to drive dramatic phenotypic variations, the molecular basis for this variability in mammalian systems is not well understood. Gene expression has been shown to be regulated by promoter architecture and the associated chromatin environment. However, the exact contribution of these two factors in regulating expression noise has not been explored. Using a dual-reporter lentiviral model system, we deconvolved the influence of the promoter sequence to systematically study the contribution of the chromatin environment at different genomic locations in regulating expression noise. By integrating a large-scale analysis to quantify mRNA levels by smFISH and protein levels by flow cytometry in single cells, we found that mean expression and noise are uncorrelated across genomic locations. Furthermore, we showed that this independence could be explained by the orthogonal control of mean expression by the transcript burst size and noise by the burst frequency. Finally, we showed that genomic locations displaying higher expression noise are associated with more repressed chromatin, thereby indicating the contribution of the chromatin environment in regulating expression noise.

  13. Illinois Waiver letter on variances from UL ruling on E85 dispensers

    Alternative Fuels and Advanced Vehicles Data Center [Office of Energy Efficiency and Renewable Energy (EERE)]

  14. No-migration variance petition. Appendices K--O, Response to notice of deficiencies: Volume 6, Revision 1

    SciTech Connect (OSTI)

    Fischer, N.T.

    1990-03-01

    This document reports data collected as part of the Ecological Monitoring Program (EMP) at the Waste Isolation Pilot Plant near Carlsbad, New Mexico, for Calendar Year 1987. Also included are data from the last quarter (October through December) of 1986. This report divides data collection activities into two parts. Part A covers general environmental monitoring which includes meteorology, aerial photography, air quality monitoring, water quality monitoring, and wildlife population surveillance. Part B focuses on the special studies being performed to evaluate the impacts of salt dispersal from the site on the surrounding ecosystem. The fourth year of salt impact monitoring was completed in 1987. These studies involve the monitoring of soil chemistry, soil microbiota, and vegetation in permanent study plots. None of the findings indicate that the WIPP project is adversely impacting environmental quality at the site. As in 1986, breeding bird censuses completed this year indicate changes in the local bird fauna associated with the WIPP site. The decline in small mammal populations noted in the 1986 census is still evident in the 1987 data; however, populations are showing signs of recovery. There is no indication that this decline is related to WIPP activities. Rather, the evidence indicates that natural population fluctuations may be common in this ecosystem. The salt impact studies continue to reveal some short-range transport of salt dust from the saltpiles. This material accumulates at or near the soil surface during the dry seasons in areas near the saltpiles, but is flushed deeper into the soil during the rainy season. Microbial activity does not appear to be affected by this salt importation. Vegetation coverage and density data from 1987 also do not show any detrimental effect associated with aerial dispersal of salt.

  15. Status of Entire 10 CFR 851 as a New Safety and Health Standard that Qualifies for a Temporary Variance

    Broader source: Energy.gov [DOE]

    Letter to Joseph N. Herndon from Bruce M. Diamond, Assistant General Counsel for Environment, dated September 19, 2008.

  16. THE END OF HELIUM REIONIZATION AT z ≈ 2.7 INFERRED FROM COSMIC VARIANCE IN HST/COS He II Lyα ABSORPTION SPECTRA

    SciTech Connect (OSTI)

    Worseck, Gabor; Xavier Prochaska, J. [Department of Astronomy and Astrophysics, UCO/Lick Observatory, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); McQuinn, Matthew [Department of Astronomy, University of California, 601 Campbell Hall, Berkeley, CA 94720 (United States); Dall'Aglio, Aldo; Wisotzki, Lutz [Astrophysikalisches Institut Potsdam, An der Sternwarte 16, 14482 Potsdam (Germany); Fechner, Cora; Richter, Philipp [Institut fuer Physik und Astronomie, Universitaet Potsdam, Karl-Liebknecht-Str. 24/25, 14476 Potsdam (Germany); Hennawi, Joseph F. [Max-Planck-Institut fuer Astronomie, Koenigstuhl 17, 69117 Heidelberg (Germany); Reimers, Dieter, E-mail: gworseck@ucolick.org [Hamburger Sternwarte, Universitaet Hamburg, Gojenbergsweg 112, 21029 Hamburg (Germany)

    2011-06-01

    We report on the detection of strongly varying intergalactic He II absorption in HST/COS spectra of two z_em ≈ 3 quasars. From our homogeneous analysis of the He II absorption in these and three archival sightlines, we find a marked increase in the mean He II effective optical depth from <τ_eff,HeII> ≈ 1 at z ≈ 2.3 to <τ_eff,HeII> ≳ 5 at z ≈ 3.2, but with a large scatter of 2 ≲ τ_eff,HeII ≲ 5 at 2.7 < z < 3 on scales of ~10 proper Mpc. This scatter is primarily due to fluctuations in the He II fraction and the He II-ionizing background, rather than density variations that are probed by the coeval H I forest. Semianalytic models of He II absorption require a strong decrease in the He II-ionizing background to explain the strong increase of the absorption at z ≳ 2.7, probably indicating He II reionization was incomplete at z_reion ≳ 2.7. Likewise, recent three-dimensional numerical simulations of He II reionization qualitatively agree with the observed trend only if He II reionization completes at z_reion ≈ 2.7 or even below, as suggested by a large τ_eff,HeII ≳ 3 in two of our five sightlines at z < 2.8. By doubling the sample size at 2.7 ≲ z ≲ 3, our newly discovered He II sightlines for the first time probe the diversity of the second epoch of reionization when helium became fully ionized.
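
    For reference, the standard definition behind the quoted numbers (a sketch, not the authors' pipeline): the effective optical depth over a redshift bin is τ_eff = -ln(<F>), where <F> is the mean transmitted flux, so τ_eff ≈ 1 corresponds to ~37% mean He II transmission and τ_eff > 5 to less than ~1%. The flux values below are illustrative.

        import numpy as np

        def effective_optical_depth(transmitted_flux):
            return -np.log(np.mean(transmitted_flux))

        flux_bin = np.array([0.42, 0.35, 0.30, 0.44, 0.33])   # illustrative normalized fluxes
        print(effective_optical_depth(flux_bin))               # ~1.0, like the z ~ 2.3 sightlines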

  17. --No Title--

    U.S. Energy Information Administration (EIA) Indexed Site

    File02: (file02cb83.csv) BLDGID2 Building ID STR402 Half-sample stratum PAIR402 Half-sample pair number SQFTC2 Square footage SQFTC17. BCWM2C Principal activity BCWOM25. ...

  18. Y-12's Training and Technology instructor's story - Terry...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    stories about things that took place at TAT. There were people from every race, every religion and every social stratum there, so you can imagine. Most of them, however, can't be...

  19. Method for in situ heating of hydrocarbonaceous formations

    DOE Patents [OSTI]

    Little, William E. (Morgantown, WV); McLendon, Thomas R. (Laramie, WY)

    1987-01-01

    A method for extracting valuable constituents from underground hydrocarbonaceous deposits such as heavy crude tar sands and oil shale is disclosed. Initially, a stratum containing a rich deposit is hydraulically fractured to form a horizontally extending fracture plane. A conducting liquid and proppant is then injected into the fracture plane to form a conducting plane. Electrical excitations are then introduced into the stratum adjacent the conducting plane to retort the rich stratum along the conducting plane. The valuable constituents from the stratum adjacent the conducting plane are then recovered. Subsequently, the remainder of the deposit is also combustion retorted to further recover valuable constituents from the deposit. Various R.F. heating systems are also disclosed for use in the present invention.

  20. RAPID/Roadmap/18-HI-d | Open Energy Information

    Open Energy Info (EERE)

    Variance from Pollution Control (18-HI-d). A variance is required to discharge water pollutants in excess of applicable...

  1. EVENT TREE ANALYSIS AT THE SAVANNAH RIVER SITE: A CASE HISTORY

    SciTech Connect (OSTI)

    Williams, R

    2009-05-25

    At the Savannah River Site (SRS), a Department of Energy (DOE) installation in west-central South Carolina, there is a unique geologic stratum that exists at depth and has the potential to cause surface settlement resulting from a seismic event. In the past, the particular stratum in question has been remediated via pressure grouting; however, the benefits of remediation have always been debatable. Recently the SRS has attempted to frame the issue in terms of risk via an event tree or logic tree analysis. This paper describes that analysis, including the input data required.

  2. GPU Acceleration of Mean Free Path Based Kernel Density Estimators in Monte Carlo Neutronics Simulations with Curvilinear Geometries

    SciTech Connect (OSTI)

    Burke, Timothy Patrick; Kiedrowski, Brian; Martin, William R.; Brown, Forrest B.

    2015-08-27

    KDEs show potential for reducing variance in global solutions (flux, reaction rates) when compared to histogram solutions.

  3. Determination of Dusty Particle Charge Taking into Account Ion Drag

    SciTech Connect (OSTI)

    Ramazanov, T. S.; Dosbolayev, M. K.; Jumabekov, A. N.; Amangaliyeva, R. Zh.; Orazbayev, S. A.; Petrov, O. F.; Antipov, S. N.

    2008-09-07

    This work is devoted to the experimental estimation of the charge of a dust particle levitating in a stratum of a dc glow discharge. The particle charge is determined from the balance among the ion drag, gravitational, and electric forces. The electric force is obtained from the axial distribution of the light intensity of the strata.

  4. Breakthrough antibacterial approach could resolve serious skin infections

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Like a protective tent over a colony of harmful bacteria, biofilms make the treatment of skin infections especially difficult. August 26, 2014 Artist's rendition of a cross section of skin layers (stratum corneum, epidermis and dermis) showing topical application of an ionic liquid for combating a skin-borne bacterial infection. The ionic liquid can be formulated with

  5. Gauging apparatus and method, particularly for controlling mining by a mining machine

    SciTech Connect (OSTI)

    Campbell, J.A.; Moynihan, D.J.

    1980-04-29

    An apparatus and method are claimed for controlling the mining, by a mining machine, of a seam of material (e.g., coal) overlying or underlying a stratum of undesired material (e.g., clay) to reduce the quantity of undesired material mined with the desired material, the machine comprising a cutter movable up and down and adapted to cut down into a seam of coal on being lowered. The control apparatus comprises a first electrical signal constituting a slow-down signal adapted to be automatically operated to signal when the cutter has cut down into a seam of desired material generally to a predetermined depth short of the interface between the seam and the underlying stratum, for slowing down the cutting rate as the cutter approaches the interface, and a second electrical signal adapted to be automatically operated subsequent to the first signal for signalling when the cutter has cut down through the seam to the interface, for stopping the cutting operation, thereby to avoid mining undesired material with the desired material. Similar signalling may be provided on an upward cut to avoid cutting into the overlying stratum.

  6. Parameters Covariance in Neutron Time of Flight Analysis Explicit Formulae

    SciTech Connect (OSTI)

    Odyniec, M.; Blair, J.

    2014-12-01

    We present a method that estimates the parameter variances in a parametric model for neutron time of flight (NToF). The analytical formulae for the parameter variances, obtained independently of the calculation of parameter values from measured data, express the variances in terms of the choice, settings, and placement of the detector and the oscilloscope. Consequently, the method can serve as a tool in planning a measurement setup.

  7. Clock Synchronization in High-end Computing Environments: A Strategy for

    Office of Scientific and Technical Information (OSTI)

    Minimizing Clock Variance at Runtime (Journal Article) | SciTech Connect Clock Synchronization in High-end Computing Environments: A Strategy for Minimizing Clock Variance at Runtime Citation Details In-Document Search Title: Clock Synchronization in High-end Computing Environments: A Strategy for Minimizing Clock Variance at Runtime We present a new software-based clock synchronization scheme that provides high precision time agreement among distributed memory nodes. The technique is

  8. International Conference

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A GENERAL TRANSFORM FOR VARIANCE REDUCTION IN MONTE CARLO SIMULATIONS T.L. Becker Knolls Atomic Power Laboratory Schenectady, New York 12301 troy.becker@unnpp.gov E.W. Larsen Department of Nuclear Engineering University of Michigan Ann Arbor, Michigan 48109-2104 edlarsen@umich.edu ABSTRACT This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance

  9. Slide 1

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Layer-averaged vertical velocity, 5-pt smooth Examples: Dynamical properties Derived from radar Doppler spectra and time-variance of radar velocity. A few summary statistics for...

  10. Fuel cell stack monitoring and system control

    DOE Patents [OSTI]

    Keskula, Donald H.; Doan, Tien M.; Clingerman, Bruce J.

    2004-02-17

    A control method for monitoring a fuel cell stack in a fuel cell system in which the actual voltage and actual current from the fuel cell stack are monitored. A relationship between voltage and current over the operating range of the fuel cell is established in advance. A variance value between the actual measured voltage and the expected voltage magnitude for a given actual measured current is calculated and compared with a predetermined allowable variance. An output is generated if the calculated variance value exceeds the predetermined variance. The predetermined voltage-current relationship for the fuel cell is represented as a polarization curve at given operating conditions of the fuel cell.
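
    The comparison logic described in this abstract can be sketched in a few lines; the linear polarization-curve coefficients, the allowable-variance threshold, and the function names below are invented for illustration and are not taken from the patent.

```python
# Minimal sketch of the monitoring logic described above (illustrative only;
# the polarization-curve coefficients and the allowable variance are made up).
def expected_voltage(current_a, coeffs=(0.95, -0.002)):
    """Expected cell voltage from a simple linear polarization curve V = a + b*I."""
    a, b = coeffs
    return a + b * current_a

def check_stack(measured_voltage, measured_current, allowed_variance=0.05):
    """Return True if the deviation from the polarization curve exceeds the limit."""
    variance = abs(measured_voltage - expected_voltage(measured_current))
    return variance > allowed_variance

# Example: 0.71 V measured at 100 A against an expected 0.75 V -> within limits.
print(check_stack(0.71, 100.0))   # False
```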

  11. Module 6 - Metrics, Performance Measurements and Forecasting...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    This module focuses on the metrics and performance measurement tools used in Earned Value. This module reviews metrics such as cost and schedule variance along with cost and ...

  12. Do financial investors destabilize the oil price?

    Gasoline and Diesel Fuel Update (EIA)

    ... Moreover, we test whether the inefficient financial trading shock (iii) increased the ... To test for this, we generate the variance decomposition and the historical decomposition ...

  13. A Post-Monte-Carlo Sensitivity Analysis Code

    Energy Science and Technology Software Center (OSTI)

    2000-04-04

    SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e., with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
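
    As an illustration of what such a post-Monte-Carlo ranking can look like, the hedged sketch below ranks inputs by the share of output variance each explains (squared correlation). This is a common screening technique, not necessarily SATOOL's exact algorithm, and the toy model is invented.

```python
import numpy as np

# Hedged sketch: rank Monte Carlo inputs by the fraction of output variance they
# explain (squared Pearson correlation); a common post-MC screening technique.
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))                      # three independent inputs
y = 3.0 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.1, size=n)

r2 = np.array([np.corrcoef(x[:, j], y)[0, 1] ** 2 for j in range(x.shape[1])])
for j in np.argsort(r2)[::-1]:
    print(f"input {j}: ~{r2[j]:.2%} of output variance explained")
```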

  14. Nevada State Environmental Commission | Open Energy Information

    Open Energy Info (EERE)

    variance requests in selected program areas administered by NDEP as well as ratify air pollution enforcement actions (settlement agreements). Nevada State Environmental...

  15. ARM - Publications: Science Team Meeting Documents

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    vertical and horizontal components, variance and vertical flux of the prognostic thermodynamic variables as well as momentum flux are also presented. The most interesting aspect...

  16. Section 83

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    spectrum (exponent 5/3). 2. A WP model, in which the upper boundary is an independent Gaussian process (mean and variance) with exponential correlation function (correlation...

  17. Untitled

    U.S. Energy Information Administration (EIA) Indexed Site

    Proceedings from the ACEEE Summer Study on Energy Efficiency in Buildings, 1992 17. Error terms are heteroscedastic when the variance of the error terms is not constant but,...

  18. 2015 Annual Merit Review, Vehicle Technologies Office

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    The criteria means and average of the project variance values for each subprogram area (e.g., Hybrid and Vehicle Systems Technologies, Advanced Combustion Engine Technologies, ...

  19. X:\\ARM_19~1\\P113-137.WPD

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Research Community Climate Model (CCM2). The CSU eterized in terms of the grid cell mean and subgrid RAMS cloud microphysics parameterization predicts mass variance of...

  20. Boundary Layer Cloud Turbulence Characteristics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    (10) Modeling Need (10) Cloud Boundaries 9 9 Cloud Fraction Variance Skewness UpDowndraft coverage Dominant Freq. signal Dissipation rate ??? Observation-Modeling Interface...

  1. TITLE AUTHORS SUBJECT SUBJECT RELATED DESCRIPTION PUBLISHER AVAILABILI...

    Office of Scientific and Technical Information (OSTI)

    and variance that was accurate within 1% for all variables except atmospheric pressure, wind speed, and precipitation. Correlations between downscaled output and the expected...

  2. "Title","Creator/Author","Publication Date","OSTI Identifier...

    Office of Scientific and Technical Information (OSTI)

    and variance that was accurate within 1% for all variables except atmospheric pressure, wind speed, and precipitation. Correlations between downscaled output and the expected...

  3. Mercury In Soils Of The Long Valley, California, Geothermal System...

    Open Energy Info (EERE)

    Additional samples were collected in an analysis of variance design to evaluate natural variability in soil composition with sampling interval distance. The primary...

  4. Slide 1

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    X Cum CPI 3 Period Moving Average 06302013 07312013 08312013 09302013 1031... format * Advantage of this report is Excel Sort feature to view variances from ...

  5. MSC Monthly Performance Report

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... F-1 G GENERAL AND ADMINISTRATIVE STATUS ......Cost Variance D&D Deactivation and Decommissioning DAFW Days Away from Work DBT Design ...

  6. MSC Monthly Performance Report

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    10 5.0 RELIABILITY PROJECT STATUS ......cost variance D&D Deactivation and Decommissioning DAFW Days Away from Work DBT Design ...

  7. Monthly Performance Report

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... CV cost variance. D&D Deactivation and Decommissioning. FY fiscal year. SV ... of Project L-685. November Performance will give a better picture of the project status. ...

  8. Monthly Performance Report

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... CV cost variance. D&D Deactivation and Decommissioning. FY fiscal year. EAC ... August 9, 2010. Energy Management Project Status Report - In accordance with the Mission ...

  9. MSC Monthly Performance Report

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... G-1 H CONTINUITY OF SERVICE ABSENCE ADDER STATUS ......Cost Variance D&D Deactivation and Decommissioning DAFW Days Away from Work DBT Design ...

  10. Quantum Monte Carlo for

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Avoids Born-Oppenheimer approximation Distinctive feature of QMC Theory is straightforward but needs good wave functions. Sidesteps variance problem associated with...

  11. Natural Gas Weekly Update

    Gasoline and Diesel Fuel Update (EIA)

    Gas Company, for example, on Tuesday, October 21, issued a system overrun limitation (SOL) that allows for penalties on variances between flows and nominations. The SOL is in...

  12. Natural Gas Weekly Update, Printer-Friendly Version

    Annual Energy Outlook [U.S. Energy Information Administration (EIA)]

    Gas Company, for example, on Tuesday, October 21, issued a system overrun limitation (SOL) that allows for penalties on variances between flows and nominations. The SOL is in...

  13. Clock Synchronization in High-end Computing Environments: A Strategy...

    Office of Scientific and Technical Information (OSTI)

    The technique is designed to minimize variance from a reference chimer during runtime and with minimal time-request latency. Our scheme permits initial unbounded variations in time ...

  14. Sub-daily Statistical Downscaling of Meteorological Variables...

    Office of Scientific and Technical Information (OSTI)

    and variance that was accurate within 1% for all variables except atmospheric pressure, wind speed, and precipitation. Correlations between downscaled output and the expected...

  15. U.S. Energy Information Administration (EIA) Indexed Site

    File02: (file02_cb83.csv) BLDGID2 Building ID STR402 Half-sample stratum PAIR402 Half-sample pair number SQFTC2 Square footage $SQFTC17. BCWM2C Principal activity $BCWOM25. YRCONC2C Year constructed $YRCONC15 REGION2 Census region $REGION13 XSECWT2 Cross-sectional weight ELSUPL2N Supplier reported electricity use $YESNO15. NGSUPL2N Supplier reported natural gas use $YESNO15. FKSUPL2N Supplier reported fuel oil use $YESNO15. STSUPL2N Supplier reported steam use $YESNO15. PRSUPL2N Supplier

  16. Geothermal Glossary | Department of Energy

    Office of Environmental Management (EM)

    Geothermal Glossary. This list contains terms related to geothermal energy and technologies. Ambient: Natural condition of the environment at any given time. Aquifer: Water-bearing stratum of permeable sand, rock, or gravel. Baseload Plants: Electricity-generating units that are operated to meet the constant or minimum load on the system. The cost of energy from such

  17. Evaluation of three lidar scanning strategies for turbulence measurements

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Newman, J. F.; Klein, P. M.; Wharton, S.; Sathe, A.; Bonin, T. A.; Chilson, P. B.; Muschinski, A.

    2015-11-24

    Several errors occur when a traditional Doppler-beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, a WindCube v2 pulsed lidar, and a ZephIR continuous-wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60% under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20% at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  18. Structure and method for controlling band offset and alignment at a crystalline oxide-on-semiconductor interface

    DOE Patents [OSTI]

    McKee, Rodney A.; Walker, Frederick J.

    2003-11-25

    A crystalline oxide-on-semiconductor structure and a process for constructing the structure involves a substrate of silicon, germanium or a silicon-germanium alloy and an epitaxial thin film overlying the surface of the substrate wherein the thin film consists of a first epitaxial stratum of single atomic plane layers of an alkaline earth oxide designated generally as (AO).sub.n and a second stratum of single unit cell layers of an oxide material designated as (A'BO.sub.3).sub.m so that the multilayer film arranged upon the substrate surface is designated (AO).sub.n (A'BO.sub.3).sub.m wherein n is an integer repeat of single atomic plane layers of the alkaline earth oxide AO and m is an integer repeat of single unit cell layers of the A'BO.sub.3 oxide material. Within the multilayer film, the values of n and m have been selected to provide the structure with a desired electrical structure at the substrate/thin film interface that can be optimized to control band offset and alignment.

  19. Fuel cell stack monitoring and system control

    DOE Patents [OSTI]

    Keskula, Donald H.; Doan, Tien M.; Clingerman, Bruce J.

    2005-01-25

    A control method for monitoring a fuel cell stack in a fuel cell system in which the actual voltage and actual current from the fuel cell stack are monitored. A relationship between voltage and current over the operating range of the fuel cell is established in advance. A variance value between the actual measured voltage and the expected voltage magnitude for a given actual measured current is calculated and compared with a predetermined allowable variance. An output is generated if the calculated variance value exceeds the predetermined variance. The predetermined voltage-current relationship for the fuel cell is represented as a polarization curve at given operating conditions of the fuel cell. Other polarization curves may be generated and used for fuel cell stack monitoring based on different operating pressures, temperatures, and hydrogen quantities.

  20. Twin Groves Wind Energy Facility Cut-in Speeds

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... turbines were compared using a one-way analysis of variance (ANOVA) and a Tukey's test. ...turbine, idling turbines (8.1 +/- 3.1 bats/turbine; ANOVA, F(2, 26) = 6.34, P = 0.006). ...

  1. Using RSI format

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    spread, the energy spectrum of atoms scattered by angle theta{sub 1} is approximately Gaussian, with a variance and a centroid E{sub c} given in terms of E{sub 0} and T{sub i} ...

  2. ARM - Publications: Science Team Meeting Documents

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    best represented by power laws in the scale-parameter; 2) "intermittency" hence non-Gaussian statistics, i.e., not reducible to means, variances and covariances; and 3)...

  3. Factors Controlling The Geochemical Evolution Of Fumarolic Encrustatio...

    Open Energy Info (EERE)

    Smokes (VTTS). The six-factor solution model explains a large proportion (low of 74% for Ni to high of 99% for Si) of the individual element data variance. Although the primary...

  4. Posters A Stratiform Cloud Parameterization for General Circulation...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    P(w) is the probability distribution of vertical velocity, determined from the predicted mean and variance of vertical velocity. Application to a Single-Column Model To test the...

  5. Solar and Wind Easements & Local Option Rights Laws

    Broader source: Energy.gov [DOE]

    Minnesota law also allows local zoning boards to restrict development for the purpose of protecting access to sunlight. In addition, zoning bodies may create variances in zoning rules in...

  6. August 2012 Electrical Safety Occurrences

    Energy Savers [EERE]

    was the path of the light circuit as depicted on the site map. The locate did give a true signal of depth and variance of an underground utility. When the excavation, which was...

  7. Microsoft PowerPoint - Snippet 1.4 EVMS Stage 2 Surveillance...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... standard readable format (e.g., X12, XML format), EVMS monthly reports; EVM variance ... standard readable format (e.g., X12, XML format); risk management plans; the EVM ...

  8. Audit Resolution

    Broader source: Energy.gov (indexed) [DOE]

    ... The following chart represents the variance between prime ... Solutions Hanford 200 87 287 Battelle Memorial Institute PNNL 27 114 141 UT-Battelle ORNL 41 161 202 Bechtel Jacobs ...

  9. kasyanov-98.pdf

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    figure, please see http:www.arm.govdocsdocumentstechnicalconf9803kasyanov-98.pdf.) Fluxes F and their variances Var(F) for cases C3D and C2D. As...

  10. davis-99.PDF

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Gaussian solution has a specified mean and variance, so its probability density function (PDF) is given by Eq. (4a). Another solution is the...

  11. Research Highlight

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    the measured and derived moments of the SD, with the variance on the fit parameters controlled by the uncertainty in the measured SD and on how well a gamma distribution matches...

  12. Search for: All records | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    ... variance from a reference chimer during runtime and with minimal time-request latency. ... Providing Runtime Clock Synchronization With Minimal Node-to-Node Time Deviation on XT4s ...

  13. Solar Water Heating Requirement for New Residential Construction

    Broader source: Energy.gov [DOE]

    As of January 1, 2010, building permits may not be issued for new single-family homes that do not include a SWH system. The state energy resources coordinator may provide a variance for this...

  14. Detailed Studies of Hydrocarbon Radicals: C2H Dissociation

    SciTech Connect (OSTI)

    Wittig, Curt

    2014-10-06

    A novel experimental technique was examined whose goal was the ejection of radical species into the gas phase from a platform (film) of cold non-reactive material. The underlying principle was one of photo-initiated heat release in a stratum that lies below a layer of CO2 or a layer of amorphous solid water (ASW) and CO2. A molecular precursor to the radical species of interest is deposited near or on the film's surface, where it can be photo-dissociated. It proved unfeasible to avoid the rampant formation of fissures, as opposed to large "flakes." This led to many interesting results, but resulted in our aborting the scheme as a means of launching cold C2H radical into the gas phase. A journal article resulted that is germane to astrophysics but not combustion chemistry.

  15. Fuel injector system

    DOE Patents [OSTI]

    Hsu, Bertrand D. (Erie, PA); Leonard, Gary L. (Schenectady, NY)

    1988-01-01

    A fuel injection system particularly adapted for injecting coal slurry fuels at high pressures includes an accumulator-type fuel injector which utilizes high-pressure pilot fuel as a purging fluid to prevent hard particles in the fuel from impeding the opening and closing movement of a needle valve, and as a hydraulic medium to hold the needle valve in its closed position. A fluid passage in the injector delivers an appropriately small amount of the ignition-aiding pilot fuel to an appropriate region of a chamber in the injector's nozzle so that at the beginning of each injection interval the first stratum of fuel to be discharged consists essentially of pilot fuel and thereafter mostly slurry fuel is injected.

  16. Gate fidelity fluctuations and quantum process invariants

    SciTech Connect (OSTI)

    Magesan, Easwar; Emerson, Joseph [Institute for Quantum Computing and Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada); Blume-Kohout, Robin [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)

    2011-07-15

    We characterize the quantum gate fidelity in a state-independent manner by giving an explicit expression for its variance. The method we provide can be extended to calculate all higher order moments of the gate fidelity. Using these results, we obtain a simple expression for the variance of a single-qubit system and deduce the asymptotic behavior for large-dimensional quantum systems. Applications of these results to quantum chaos and randomized benchmarking are discussed.

  17. Microsoft PowerPoint - Snippet 6.1 Predictive Analysis 20140630 [Compatibility Mode]

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    various methods to predict future performance. The focus is on the Federal management and oversight perspective of reviewing project performance when schedule and cost performance indices are near 1.0. Alternative indicators may be used to predict future cost or schedule growth when CPI and SPI appear favorable. Predictive analysis involves much more than monthly reviews of the contractor's performance report's schedule and cost variances and variances at completion. It also involves reviewing

  18. ARM - Data Announcements Article

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    7, 2015 [Data Announcements] Doppler Lidar Profile VAPs Now Available Top: Seen here are height-time display of wind speed and direction. Bottom: Height-time displays are shown here of vertical velocity variance, skewness, and kurtosis. The ARM Facility currently operates several scanning coherent Doppler lidar

  19. Impact on the steam electric power industry of deleting Section 316(a) of the Clean Water Act: Energy and environmental impacts

    SciTech Connect (OSTI)

    Veil, J.A.; VanKuiken, J.C.; Folga, S.; Gillette, J.L.

    1993-01-01

    Many power plants discharge large volumes of cooling water. In some cases, the temperature of the discharge exceeds state thermal requirements. Section 316(a) of the Clean Water Act (CWA) allows a thermal discharger to demonstrate that less stringent thermal effluent limitations would still protect aquatic life. About 32% of the total steam electric generating capacity in the United States operates under Section 316(a) variances. In 1991, the US Senate proposed legislation that would delete Section 316(a) from the CWA. This study, presented in two companion reports, examines how this legislation would affect the steam electric power industry. This report quantitatively and qualitatively evaluates the energy and environmental impacts of deleting the variance. No evidence exists that Section 316(a) variances have caused any widespread environmental problems. Conversion from once-through cooling to cooling towers would result in a loss of plant output of 14.7-23.7 billion kilowatt-hours. The cost to make up the lost energy is estimated at $12.8-$23.7 billion (in 1992 dollars). Conversion to cooling towers would increase emission of pollutants to the atmosphere and water loss through evaporation. The second report describes alternatives available to plants that currently operate under the variance and estimates the national cost of implementing such alternatives. Little justification has been found for removing the 316(a) variance from the CWA.

  20. System level analysis and control of manufacturing process variation

    DOE Patents [OSTI]

    Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.

    2005-05-31

    A computer-implemented method determines the variability of a manufacturing system having a plurality of subsystems. Each subsystem of the plurality of subsystems is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are fitted to each subsystem to determine the unknown coefficients that characterize the relationship between the signal factors, noise factors, and control factors and the corresponding output response, whose mean and variance values are related to those factors. The response models for each subsystem are coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems, and values of the signal factors and control factors are found that optimize the output of the manufacturing system to meet a specified criterion.
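
    A minimal sketch of the variance-propagation idea follows, assuming two invented linear response models coupled in series; the patent's fitted models, factor settings, and noise levels would replace these placeholders.

```python
import numpy as np

# Hedged sketch: propagate noise-factor variance through two coupled subsystem
# response models by Monte Carlo. The linear models and settings are invented.
rng = np.random.default_rng(1)
n = 20000

signal, control = 1.0, 0.5                        # fixed factor settings
noise1 = rng.normal(0.0, 0.2, n)                  # subsystem 1 noise factor
noise2 = rng.normal(0.0, 0.1, n)                  # subsystem 2 noise factor

y1 = 2.0 * signal + 1.5 * control + noise1        # subsystem 1 response model
y2 = 0.8 * y1 + 0.3 * control + noise2            # subsystem 2 fed by subsystem 1

print(f"system output mean     {y2.mean():.3f}")
print(f"system output variance {y2.var():.3f}")   # ~ (0.8*0.2)**2 + 0.1**2
```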

  1. Variation and correlation of hydrologic properties

    SciTech Connect (OSTI)

    Wang, J.S.Y. [Lawrence Berkeley Lab., CA (United States)]

    1991-06-01

    Hydrological properties vary within a given geological formation and even more so among different soil and rock media. The variance of the saturated permeability is shown to be related to the variance of the pore-size distribution index of a given medium by a simple equation. This relationship is deduced by comparison of the data from Yucca Mountain, Nevada (Peters et al., 1984), Las Cruces, New Mexico (Wierenga et al., 1989), and Apache Leap, Arizona (Rasmussen et al., 1990). These and other studies in different soils and rocks also support the Poiseuille-Carmen relationship between the mean value of saturated permeability and the mean value of capillary radius. Correlations of the mean values and variances between permeability and pore-geometry parameters can lead us to better quantification of heterogeneous flow fields and better understanding of the scaling laws of hydrological properties.

  2. Water Vapor Turbulence Profiles in Stationary Continental Convective Mixed Layers

    SciTech Connect (OSTI)

    Turner, D. D.; Wulfmeyer, Volker; Berg, Larry K.; Schween, Jan

    2014-10-08

    The U.S. Department of Energy Atmospheric Radiation Measurement (ARM) program’s Raman lidar at the ARM Southern Great Plains (SGP) site in north-central Oklahoma has collected water vapor mixing ratio (q) profile data more than 90% of the time since October 2004. Three hundred (300) cases were identified where the convective boundary layer was quasi-stationary and well-mixed for a 2-hour period, and q mean, variance, third-order moment, and skewness profiles were derived from the 10-s, 75-m resolution data. These cases span the entire calendar year, and demonstrate that the q variance profiles at the mixed layer (ML) top change seasonally but are more closely related to the gradient of q across the interfacial layer. The q variance at the top of the ML shows only weak correlations (r < 0.3) with sensible heat flux, Deardorff convective velocity scale, and turbulence kinetic energy measured at the surface. The median q skewness profile is most negative at 0.85 zi, zero at approximately zi, and positive above zi, where zi is the depth of the convective ML. The spread in the q skewness profiles is smallest between 0.95 zi and zi. The q skewness at altitudes between 0.6 zi and 1.2 zi is correlated with the magnitude of the q variance at zi, with increasingly negative values of skewness observed lower down in the ML as the variance at zi increases, suggesting that in cases with larger variance at zi there is deeper penetration of the warm, dry free tropospheric air into the ML.
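
    For reference, the profile statistics named above (variance, third-order moment, skewness) are central moments of the mixing-ratio time series; the sketch below applies the standard definitions to a synthetic series, not to ARM data.

```python
import numpy as np

# Sketch of the moment definitions used for profiles like those above, applied
# to a synthetic water-vapor mixing-ratio time series (g/kg), e.g. 2 h of 10-s data.
rng = np.random.default_rng(2)
q = 8.0 + 0.5 * rng.standard_normal(720)

mean = q.mean()
var = ((q - mean) ** 2).mean()                    # second central moment
third = ((q - mean) ** 3).mean()                  # third central moment
skew = third / var ** 1.5                         # dimensionless skewness

print(f"mean {mean:.2f}, variance {var:.3f}, third moment {third:.4f}, skewness {skew:.3f}")
```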

  3. Estimation of the mixing layer height over a high altitude site in Central Himalayan region by using Doppler lidar

    SciTech Connect (OSTI)

    Shukla, K. K.; Phanikumar, D. V.; Newsom, Rob K.; Kumar, Niranjan; Ratnam, Venkat; Naja, M.; Singh, Narendra

    2014-03-01

    A Doppler lidar was installed at Manora Peak, Nainital (29.4 N; 79.2 E, 1958 m amsl) to estimate the mixing layer height for the first time, using the vertical velocity variance as the basic measurement parameter, for the period September-November 2011. The mixing layer height is found to be located at ~0.57 +/- 0.10 and 0.45 +/- 0.05 km AGL during day and nighttime, respectively. The estimates of mixing layer height show good correlation (R > 0.8) between different instruments and between different methods. Our results show that the wavelet covariance transform is a robust method for mixing layer height estimation.

  4. SUPERIMPOSED MESH PLOTTING IN MCNP

    SciTech Connect (OSTI)

    J. HENDRICKS

    2001-02-01

    The capability to plot superimposed meshes has been added to MCNP{trademark}. MCNP4C featured a superimposed mesh weight window generator which enabled users to set up geometries without having to subdivide geometric cells for variance reduction. The variance reduction was performed with weight windows on a rectangular or cylindrical mesh superimposed over the physical geometry. Experience with the new capability was favorable but also indicated that a number of enhancements would be very beneficial, particularly a means of visualizing the mesh and its values. The mathematics for plotting the mesh and its values is described here along with a description of other upgrades.

  5. Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    SciTech Connect (OSTI)

    Anderson, Dale; Selby, Neil

    2012-08-14

    Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event screening hypothesis test (Fisher's and Tippett's tests). The commonly used standard error in the Ms:mb event screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis, correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope ({beta} = 1, Selby et al.), while the improved standard error 'fails to reject' H0.
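
    Fisher's combined-probability test, one of the two tests named above, pools k independent p-values through -2*sum(ln p), which is chi-square distributed with 2k degrees of freedom under the joint null; Tippett's test instead uses the minimum p-value. The sketch below uses made-up p-values purely for illustration.

```python
from math import log
from scipy.stats import chi2

# Hedged sketch of Fisher's combined-probability test for joining independent
# single-phenomenology screening p-values (the values below are made up).
p_values = [0.08, 0.15]                 # e.g., an Ms:mb test and a depth test
statistic = -2.0 * sum(log(p) for p in p_values)
combined_p = chi2.sf(statistic, df=2 * len(p_values))
print(f"Fisher statistic {statistic:.2f}, combined p-value {combined_p:.3f}")

# Tippett's test rejects on the smallest individual p-value instead:
print(f"Tippett statistic (min p) {min(p_values):.2f}")
```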

  6. Absorption of ethanol, acetone, benzene and 1,2-dichloroethane through human skin in vitro: a test of diffusion model predictions

    SciTech Connect (OSTI)

    Gajjar, Rachna M.; Kasting, Gerald B.

    2014-11-15

    The overall goal of this research was to further develop and improve an existing skin diffusion model by experimentally confirming the predicted absorption rates of topically-applied volatile organic compounds (VOCs) based on their physicochemical properties, the skin surface temperature, and the wind velocity. In vitro human skin permeation of two hydrophilic solvents (acetone and ethanol) and two lipophilic solvents (benzene and 1,2-dichloroethane) was studied in Franz cells placed in a fume hood. Four doses of each {sup 14}C-radiolabeled compound were tested: 5, 10, 20, and 40 {mu}L cm{sup -2}, corresponding to specific doses ranging in mass from 5.0 to 63 mg cm{sup -2}. The maximum percentage of radiolabel absorbed into the receptor solutions for all test conditions was 0.3%. Although the absolute absorption of each solvent increased with dose, percentage absorption decreased. This decrease was consistent with the concept of a stratum corneum deposition region, which traps small amounts of solvent in the upper skin layers, decreasing the evaporation rate. The diffusion model satisfactorily described the cumulative absorption of ethanol; however, values for the other VOCs were underpredicted in a manner related to their ability to disrupt or solubilize skin lipids. In order to more closely describe the permeation data, significant increases in the stratum corneum/water partition coefficients, K{sub sc}, and modest changes to the diffusion coefficients, D{sub sc}, were required. The analysis provided strong evidence for both skin swelling and barrier disruption by VOCs, even by the minute amounts absorbed under these in vitro test conditions. - Highlights: Human skin absorption of small doses of VOCs was measured in vitro in a fume hood. The VOCs tested were ethanol, acetone, benzene and 1,2-dichloroethane. Fraction of dose absorbed for all compounds at all doses tested was less than 0.3%. The more aggressive VOCs absorbed at higher levels than diffusion model predictions. We conclude that even small exposures to VOCs temporarily alter skin permeability.

  7. Statistical Analysis of Tank 5 Floor Sample Results

    SciTech Connect (OSTI)

    Shine, E. P.

    2013-01-31

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed, and the results of this analysis are reported. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.
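
    For normally distributed results, a standard one-sided UCL95 on the mean is xbar + t(0.95, n-1) * s / sqrt(n). The sketch below applies that textbook formula to made-up replicate data; it is not a reproduction of the report's full EPA-guidance procedure, which also handles non-normal and below-detection cases.

```python
import numpy as np
from scipy.stats import t

# Hedged sketch: one-sided 95% upper confidence limit (UCL95) on the mean for
# normally distributed results (three replicate measurements, made-up data).
results = np.array([1.21, 1.34, 1.27])            # e.g., mg/kg for one analyte
n = results.size
mean = results.mean()
s = results.std(ddof=1)
ucl95 = mean + t.ppf(0.95, df=n - 1) * s / np.sqrt(n)
print(f"mean {mean:.3f}, UCL95 {ucl95:.3f}")
```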

  8. North-South non-Gaussian asymmetry in Planck CMB maps

    SciTech Connect (OSTI)

    Bernui, A.; Oliveira, A.F.; Pereira, T.S. E-mail: adhimar@unifei.edu.br

    2014-10-01

    We report the results of a statistical analysis performed with the four foreground-cleaned Planck maps by means of a suitably defined local-variance estimator. Our analysis shows a clear dipolar structure in Planck's variance map pointing in the direction (l,b) {approx_equal} (220,-32), thus consistent with the North-South asymmetry phenomenon. Surprisingly, and contrary to previous findings, removing the CMB quadrupole and octopole makes the asymmetry stronger. Our results show a maximal statistical significance, of 98.1% CL, in the scales ranging from multipole ℓ = 4 to ℓ = 500. Additionally, through exhaustive analyses of the four foreground-cleaned and individual frequency Planck maps, we find unlikely that residual foregrounds could be causing this dipole variance asymmetry. Moreover, we find that the dipole gets lower amplitudes for larger masks, evidencing that most of the contribution to the variance dipole comes from a region near the galactic plane. Finally, our results are robust against different foreground cleaning procedures, different Planck masks, pixelization parameters, and the addition of inhomogeneous real noise.

  9. Entropic uncertainty relations in multidimensional position and momentum spaces

    SciTech Connect (OSTI)

    Huang Yichen

    2011-05-15

    Commutator-based entropic uncertainty relations in multidimensional position and momentum spaces are derived, twofold generalizing previous entropic uncertainty relations for one-mode states. They provide optimal lower bounds and imply the multidimensional variance-based uncertainty principle. The article concludes with an open conjecture.

  10. Effect of wettability on scale-up of multiphase flow from core-scale to reservoir fine-grid-scale

    SciTech Connect (OSTI)

    Chang, Y.C.; Mani, V.; Mohanty, K.K.

    1997-08-01

    Typical field simulation grid-blocks are internally heterogeneous. The objective of this work is to study how the wettability of the rock affects its scale-up of multiphase flow properties from core-scale to fine-grid reservoir simulation scale ({approximately} 10{prime} x 10{prime} x 5{prime}). Reservoir models need another level of upscaling to coarse-grid simulation scale, which is not addressed here. Heterogeneity is modeled here as a correlated random field parameterized in terms of its variance and two-point variogram. Variogram models of both finite (spherical) and infinite (fractal) correlation length are included as special cases. Local core-scale porosity, permeability, capillary pressure function, relative permeability functions, and initial water saturation are assumed to be correlated. Water injection is simulated and effective flow properties and flow equations are calculated. For strongly water-wet media, capillarity has a stabilizing/homogenizing effect on multiphase flow. For small variance in permeability, and for small correlation length, effective relative permeability can be described by capillary equilibrium models. At higher variance and moderate correlation length, the average flow can be described by a dynamic relative permeability. As the oil wettability increases, the capillary stabilizing effect decreases and the deviation from this average flow increases. For fractal fields with large variance in permeability, effective relative permeability is not adequate in describing the flow.

  11. Latin-square three-dimensional gage master

    DOE Patents [OSTI]

    Jones, L.

    1981-05-12

    A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.

  12. On the relationship among cloud turbulence, droplet formation and drizzle as viewed by Doppler radar, microwave radiometer and lidar

    SciTech Connect (OSTI)

    Feingold, G.; Frisch, A.S.; Cotton, W.R.

    1999-09-01

    Cloud radar, microwave radiometer, and lidar remote sensing data acquired during the Atlantic Stratocumulus Transition Experiment (ASTEX) are analyzed to address the relationship between (1) drop number concentration and cloud turbulence as represented by vertical velocity and vertical velocity variance and (2) drizzle formation and cloud turbulence. Six cases, each of about 12 hours duration, are examined; three of these cases are characteristic of nondrizzling boundary layers and three of drizzling boundary layers. In all cases, microphysical retrievals are only performed when drizzle is negligible (radar reflectivity{lt}{minus}17dBZ). It is shown that for the cases examined, there is, in general, no correlation between drop concentration and cloud base updraft strength, although for two of the nondrizzling cases exhibiting more classical stratocumulus features, these two parameters are correlated. On drizzling days, drop concentration and cloud-base vertical velocity were either not correlated or negatively correlated. There is a significant positive correlation between drop concentration and mean in-cloud vertical velocity variance for both nondrizzling boundary layers (correlation coefficient r=0.45) and boundary layers that have experienced drizzle (r=0.38). In general, there is a high correlation (r{gt}0.5) between radar reflectivity and in-cloud vertical velocity variance, although one of the boundary layers that experienced drizzle exhibited a negative correlation between these parameters. However, in the subcloud region, all boundary layers that experienced drizzle exhibit a negative correlation between radar reflectivity and vertical velocity variance. {copyright} 1999 American Geophysical Union

  13. Stochastic Inversion of Seismic Amplitude-Versus-Angle Data (Stinv-AVA)

    Energy Science and Technology Software Center (OSTI)

    2008-04-03

    The software was developed to invert seismic amplitude-versus-angle (AVA) data using a Bayesian framework. The posterior probability distribution function is sampled by effective Markov chain Monte Carlo (MCMC) methods. The software provides not only estimates of the unknown variables but also a variety of information about uncertainty, such as the mean, mode, median, variance, and even the probability density of each unknown.
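
    A hedged sketch of the kind of sampling involved follows: a random-walk Metropolis MCMC chain for a toy one-parameter posterior. The actual code inverts an AVA forward model; the data, prior, and proposal width below are invented for illustration.

```python
import numpy as np

# Hedged sketch: random-walk Metropolis sampling of a toy 1-D posterior,
# illustrating the MCMC idea (the real code uses an AVA forward model).
rng = np.random.default_rng(3)
data = rng.normal(0.3, 0.1, size=50)              # synthetic "observations"

def log_posterior(theta):
    # Flat prior; Gaussian likelihood with known noise sigma = 0.1.
    return -0.5 * np.sum((data - theta) ** 2) / 0.1 ** 2

theta, samples = 0.0, []
for _ in range(5000):
    proposal = theta + rng.normal(scale=0.05)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

post = np.array(samples[1000:])                   # drop burn-in
print(f"posterior mean {post.mean():.3f}, variance {post.var():.5f}")
```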

  14. BPA-2015-00596-FOIA Request

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Spreadsheet excerpt with column headings including Tract, Variance, Comment, TractADNC, and TractFromStruct (row and column indices omitted).

  15. Energy dependence of multiplicity fluctuations in heavy ion collisions at 20A to 158A GeV

    SciTech Connect (OSTI)

    Alt, C.; Blume, C.; Bramm, R.; Dinkelaker, P.; Flierl, D.; Kliemant, M.; Kniege, S.; Lungwitz, B.; Mitrovski, M.; Renfordt, R.; Schuster, T.; Stock, R.; Strabel, C.; Stroebele, H.; Utvic, M.; Wetzler, A.; Anticic, T.; Kadija, K.; Nicolic, V.; Susa, T.

    2008-09-15

    Multiplicity fluctuations of positively, negatively, and all charged hadrons in the forward hemisphere were studied in central Pb+Pb collisions at 20A, 30A, 40A, 80A, and 158A GeV. The multiplicity distributions and their scaled variances {omega} are presented as functions of collision energy as well as of rapidity and transverse momentum. The distributions have bell-like shapes, and their scaled variances are in the range from 0.8 to 1.2 without any significant structure in their energy dependence. No indication of the critical point is observed in the fluctuations. The string-hadronic ultrarelativistic quantum molecular dynamics (UrQMD) model significantly overpredicts the mean, but it approximately reproduces the scaled variance of the multiplicity distributions. The predictions of the statistical hadron-resonance gas model obtained within the grand-canonical and canonical ensembles disagree with the measured scaled variances. The narrower-than-Poissonian multiplicity fluctuations measured in numerous cases may be explained by the impact of conservation laws on fluctuations in relativistic systems.
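
    The scaled variance quoted above is {omega} = Var(N)/<N>, which equals 1 for a Poisson distribution; the short sketch below computes it for a synthetic multiplicity sample, purely as a reference for the definition.

```python
import numpy as np

# Sketch: scaled variance omega = Var(N) / <N> of a multiplicity distribution.
# A Poisson distribution gives omega = 1; the sample below is synthetic.
rng = np.random.default_rng(4)
multiplicities = rng.poisson(lam=120, size=100000)
omega = multiplicities.var() / multiplicities.mean()
print(f"scaled variance omega ~ {omega:.3f}")     # close to 1 for Poisson
```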

  16. Gas-storage calculations yield accurate cavern, inventory data

    SciTech Connect (OSTI)

    Mason, R.G.

    1990-07-02

    This paper discusses how determining gas-storage cavern size and inventory variance is now possible with calculations based on shut-in cavern surveys. The method is the least expensive of three major methods and is quite accurate when recorded over a period of time.

  17. Latin square three dimensional gage master

    DOE Patents [OSTI]

    Jones, Lynn L. (Lenexa, KS)

    1982-01-01

    A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.

  18. Atmospheric Radiation Measurement Program Climate Research Facility Operations Quarterly Report January 1–March 31, 2010

    SciTech Connect (OSTI)

    Sisterson, DL

    2010-04-08

    The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 – (ACTUAL/OPSMAX)], which accounts for unplanned downtime.
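
    A small worked example of the quoted reporting formula follows; the hour values are invented for illustration.

```python
# Worked example of the reporting formula quoted above (hour values invented).
actual = 2050.0     # hours actually operated in the quarter
opsmax = 2160.0     # maximum planned operating hours (uptime goal)
variance = 1.0 - actual / opsmax
print(f"VARIANCE = {variance:.3f}")   # ~0.051, i.e., ~5.1% unplanned downtime
```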

  19. Portable measurement system for soil resistivity and application to Quaternary clayey sediment

    SciTech Connect (OSTI)

    Nakagawa, Koichi; Morii, Takeo

    1999-07-01

    A simple device to measure electrical resistivity has been developed for field and laboratory use. The measurement system comprises a probe unit, current wave generator, amplifier, A/D converter, data acquisition unit with RS-232C interface, and notebook personal computer. The system is applicable to soils and soft rocks as long as the probe needles can pierce into them. The frequency range of the measurement system extends from 100 Hz to 10 MHz. The total error of the system is less than 5%. In situ measurements of the resistivity, together with shear resistance measured with a pocket-sized penetrometer, were applied to Pleistocene clayey beds. Some laboratory tests were also conducted to examine the interpretation of the in situ resistivity. Marine and non-marine clayey sediments differ in the resistivity of the stratum measured in situ and of the clay suspension sampled from the strata. Physical and mechanical properties were compared with the resistivity, and general relationships among them were explored to clarify the characteristics of inter-particle bonding. A possible mechanism for the peculiar weathering of clayey sediment or mudstone beds, which is conspicuous especially near the ground surface, is discussed from the viewpoint of physico-chemical processes.

  20. Conversion of borehole Stoneley waves to channel waves in coal

    SciTech Connect (OSTI)

    Johnson, P.A.; Albright, J.N.

    1987-01-01

    Evidence for the mode conversion of borehole Stoneley waves to stratigraphically guided channel waves was discovered in data from a crosswell acoustic experiment conducted between wells penetrating thin coal strata located near Rifle, Colorado. Traveltime moveout observations show that borehole Stoneley waves, excited by a transmitter positioned at substantial distances in one well above and below a coal stratum at 2025 m depth, underwent partial conversion to a channel wave propagating away from the well through the coal. In an adjacent well the channel wave was detected at receiver locations within the coal, and borehole Stoneley waves, arising from a second partial conversion of channel waves, were detected at locations above and below the coal. The observed channel wave is inferred to be the third-higher Rayleigh mode based on comparison of the measured group velocity with theoretically derived dispersion curves. The identification of the mode conversion between borehole and stratigraphically guided waves is significant because coal penetrated by multiple wells may be detected without placing an acoustic transmitter or receiver within the waveguide. 13 refs., 6 figs., 1 tab.

  1. A low dose simulation tool for CT systems with energy integrating detectors

    SciTech Connect (OSTI)

    Zabic, Stanislav; Morton, Thomas; Brown, Kevin M.; Wang Qiu

    2013-03-15

    Purpose: This paper introduces a new strategy for simulating low-dose computed tomography (CT) scans using real scans of a higher dose as an input. The tool is verified against simulations and real scans and compared to other approaches found in the literature. Methods: The conditional variance identity is used to properly account for the variance of the input high-dose data, and a formula is derived for generating a new Poisson noise realization which has the same mean and variance as the true low-dose data. The authors also derive a formula for the inclusion of real samples of detector noise, properly scaled according to the level of the simulated x-ray signals. Results: The proposed method is shown to match real scans in a number of experiments. Noise standard deviation measurements in simulated low-dose reconstructions of a 35 cm water phantom match real scans in a range from 500 to 10 mA with less than 5% error. Mean and variance of individual detector channels are shown to match closely across the detector array. Finally, the visual appearance of noise and streak artifacts is shown to match in real scans even under conditions of photon-starvation (with tube currents as low as 10 and 80 mA). Additionally, the proposed method is shown to be more accurate than previous approaches (1) in achieving the correct mean and variance in reconstructed images from pure-Poisson noise simulations (with no detector noise) under photon-starvation conditions, and (2) in simulating the correct noise level and detector noise artifacts in real low-dose scans. Conclusions: The proposed method can accurately simulate low-dose CT data starting from high-dose data, including effects from photon starvation and detector noise. This is potentially a very useful tool in helping to determine minimum dose requirements for a wide range of clinical protocols and advanced reconstruction algorithms.
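
    A simplified sketch of the basic idea of simulating a lower dose from measured counts follows: scale the high-dose photon counts and draw a Poisson realization. The paper's actual method additionally uses the conditional variance identity to account for the noise already present in the high-dose input and for detector noise, both of which this naive version omits.

```python
import numpy as np

# Simplified sketch: scale high-dose photon counts to a target dose fraction and
# draw a Poisson realization. The paper's corrected formula is not reproduced here.
rng = np.random.default_rng(5)
high_dose_counts = rng.poisson(lam=10000, size=1000).astype(float)

dose_fraction = 0.1                                # e.g., simulate 10% of the tube current
low_dose_counts = rng.poisson(high_dose_counts * dose_fraction)

print(f"high-dose mean {high_dose_counts.mean():.0f}, "
      f"low-dose mean {low_dose_counts.mean():.0f}")
```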

  2. Dimensionality and noise in energy selective x-ray imaging

    SciTech Connect (OSTI)

    Alvarez, Robert E.

    2013-11-15

    Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 x 10{sup 3}. With the soft tissue component, it is 2.7 x 10{sup 4}. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three dimension processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two dimension processing for adipose tissue is a factor of two and with the contrast agent as the third material for two or three dimensions is also a factor of two for both components. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems.

  3. Resonant activation in a colored multiplicative thermal noise driven closed system

    SciTech Connect (OSTI)

    Ray, Somrita; Bag, Bidhan Chandra; Mondal, Debasish

    2014-05-28

    In this paper, we have demonstrated that resonant activation (RA) is possible even in a thermodynamically closed system where the particle experiences a random force and a spatio-temporal frictional coefficient from the thermal bath. For this stochastic process, we have observed a hallmark of RA phenomena in terms of a turnover behavior of the barrier-crossing rate as a function of noise correlation time at a fixed noise variance. Variance can be fixed either by changing temperature or damping strength as a function of noise correlation time. We also observe that the barrier-crossing rate passes through a maximum as the coupling strength of the multiplicative noise increases. If the damping strength is appreciably large, the maximum may disappear. Finally, we compare simulation results with the analytical calculation and find good agreement between the two.

  4. A simple method to estimate interwell autocorrelation

    SciTech Connect (OSTI)

    Pizarro, J.O.S.; Lake, L.W.

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
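
    For reference, two of the three semivariogram models named above have simple closed forms; a sketch follows (the truncated fractal model and the construction of the estimation charts themselves are not reproduced here, and the parameter values are illustrative).

        import numpy as np

        def spherical_variogram(h, sill, a):
            """Spherical semivariogram: rises to `sill` at range `a`, constant beyond."""
            h = np.asarray(h, float)
            g = sill * (1.5 * (h / a) - 0.5 * (h / a) ** 3)
            return np.where(h < a, g, sill)

        def exponential_variogram(h, sill, a):
            """Exponential semivariogram: approaches `sill` asymptotically (practical range ~ 3a)."""
            return sill * (1.0 - np.exp(-np.asarray(h, float) / a))

        lags = np.linspace(0.0, 3.0, 7)
        print(spherical_variogram(lags, sill=1.0, a=2.0))
        print(exponential_variogram(lags, sill=1.0, a=1.0))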

  5. Sparse matrix transform for fast projection to reduced dimension

    SciTech Connect (OSTI)

    Theiler, James P; Cao, Guangzhi; Bouman, Charles A

    2010-01-01

    We investigate three algorithms that use the sparse matrix transform (SMT) to produce variance-maximizing linear projections to a lower-dimensional space. The SMT expresses the projection as a sequence of Givens rotations and this enables computationally efficient implementation of the projection operator. The baseline algorithm uses the SMT to directly approximate the optimal solution that is given by principal components analysis (PCA). A variant of the baseline begins with a standard SMT solution, but prunes the sequence of Givens rotations to only include those that contribute to the variance maximization. Finally, a simpler and faster third algorithm is introduced; this also estimates the projection operator with a sequence of Givens rotations, but in this case, the rotations are chosen to optimize a criterion that more directly expresses the dimension reduction criterion.
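
    A minimal illustration of the underlying idea, expressing a variance-maximizing projection as a short sequence of Givens rotations, can be written as a greedy Jacobi-style sweep over the sample covariance; this is only a sketch in the spirit of the SMT, not the published algorithm or its pruned and faster variants.

        import numpy as np

        def givens_rotation_sequence(S, n_rot):
            """Greedy Jacobi-style Givens rotations applied to a covariance matrix S.

            Each rotation zeroes the currently largest off-diagonal element.  Returns
            the accumulated rotation Q and the rotated covariance Q.T @ S @ Q.
            Illustrative only; not the published SMT algorithm.
            """
            S = S.copy()
            p = S.shape[0]
            Q = np.eye(p)
            for _ in range(n_rot):
                off = np.abs(S - np.diag(np.diag(S)))
                i, j = np.unravel_index(np.argmax(off), off.shape)
                if off[i, j] == 0.0:
                    break
                theta = 0.5 * np.arctan2(2.0 * S[i, j], S[i, i] - S[j, j])
                c, s = np.cos(theta), np.sin(theta)
                G = np.eye(p)
                G[i, i] = G[j, j] = c
                G[i, j], G[j, i] = -s, s
                S = G.T @ S @ G
                Q = Q @ G
            return Q, S

        rng = np.random.default_rng(2)
        X = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 8))
        S = np.cov(X, rowvar=False)
        Q, D = givens_rotation_sequence(S, n_rot=20)
        order = np.argsort(np.diag(D))[::-1]
        P = Q[:, order[:3]]                      # variance-maximizing 3-D projection
        print(np.diag(D)[order[:3]])             # variances captured by the projection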

  6. Decision support for operations and maintenance (DSOM) system

    DOE Patents [OSTI]

    Jarrell, Donald B. (Kennewick, WA); Meador, Richard J. (Richland, WA); Sisk, Daniel R. (Richland, WA); Hatley, Darrel D. (Kennewick, WA); Brown, Daryl R. (Richland, WA); Keibel, Gary R. (Richland, WA); Gowri, Krishnan (Richland, WA); Reyes-Spindola, Jorge F. (Richland, WA); Adams, Kevin J. (San Bruno, CA); Yates, Kenneth R. (Lake Oswego, OR); Eschbach, Elizabeth J. (Fort Collins, CO); Stratton, Rex C. (Richland, WA)

    2006-03-21

    A method for minimizing the life cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of identifying a set of optimal operating conditions for the process, identifying and measuring parameters necessary to characterize the actual operating condition of the process, validating data generated by measuring those parameters, characterizing the actual condition of the process, identifying an optimal condition corresponding to the actual condition, comparing said optimal condition with the actual condition and identifying variances between the two, and drawing from a set of pre-defined algorithms created using best engineering practices, an explanation of at least one likely source and at least one recommended remedial action for selected variances, and providing said explanation as an output to at least one user.

  7. Self-Calibrated Cluster Counts as a Probe of Primordial Non-Gaussianity

    SciTech Connect (OSTI)

    Oguri, Masamune; /KIPAC, Menlo Park

    2009-05-07

    We show that the ability to probe primordial non-Gaussianity with cluster counts is drastically improved by adding the excess variance of counts, which contains information on the clustering. The conflicting dependences of changing the mass threshold and including primordial non-Gaussianity on the mass function and biasing indicate that self-calibrated cluster counts effectively break the degeneracy between primordial non-Gaussianity and the observable-mass relation. Based on the Fisher matrix analysis, we show that the count variance improves constraints on f{sub NL} by more than an order of magnitude. It exhibits little degeneracy with the dark energy equation of state. We forecast that upcoming Hyper Suprime-cam cluster surveys and the Dark Energy Survey will constrain primordial non-Gaussianity at the level {sigma}(f{sub NL}) {approx} 8, which is competitive with forecasted constraints from next-generation cosmic microwave background experiments.

  8. Clock Agreement Among Parallel Supercomputer Nodes

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Jones, Terry R.; Koenig, Gregory A.

    2014-04-30

    This dataset presents measurements that quantify the clock synchronization time-agreement characteristics among several high performance computers, including the current world's most powerful machine for open science, the U.S. Department of Energy's Titan machine sited at Oak Ridge National Laboratory. These ultra-fast machines derive much of their computational capability from extreme node counts (over 18000 nodes in the case of the Titan machine). Time-agreement is commonly utilized by parallel programming applications and tools, distributed programming applications and tools, and system software. Our time-agreement measurements detail the degree of time variance between nodes and how that variance changes over time. The dataset includes empirical measurements and the accompanying spreadsheets.

  9. Clock Agreement Among Parallel Supercomputer Nodes

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Jones, Terry R.; Koenig, Gregory A.

    This dataset presents measurements that quantify the clock synchronization time-agreement characteristics among several high performance computers, including the current world's most powerful machine for open science, the U.S. Department of Energy's Titan machine sited at Oak Ridge National Laboratory. These ultra-fast machines derive much of their computational capability from extreme node counts (over 18000 nodes in the case of the Titan machine). Time-agreement is commonly utilized by parallel programming applications and tools, distributed programming applications and tools, and system software. Our time-agreement measurements detail the degree of time variance between nodes and how that variance changes over time. The dataset includes empirical measurements and the accompanying spreadsheets.

  10. A Comparative Evaluation of Elasticity in Pentaerythritol tetranitrate using Brillouin Scattering and Resonant Ultrasound Spectroscopy

    SciTech Connect (OSTI)

    Stevens, L.; Hooks, D; Migliori, A

    2010-01-01

    Elastic tensors for organic molecular crystals vary significantly among different measurements. To understand better the origin of these differences, Brillouin scattering and resonant ultrasound spectroscopy measurements were made on the same specimen for single crystal pentaerythritol tetranitrate. The results differ significantly despite mitigation of sample-dependent contributions to errors. The frequency dependence and vibrational modes probed for both measurements are discussed in relation to the observed tensor variance.

  11. ARM - Publications: Science Team Meeting Documents

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Photon Pathlength Distributions Inferred from the RSS at the ARM SGP Site. Min, Q. and Harrison, L.C., ASRC, SUNY at Albany. Eleventh Atmospheric Radiation Measurement (ARM) Science Team Meeting. A retrieval method of photon pathlength distribution using Rotating Shadowband Spectroradiometer (RSS) measurements in the oxygen A-band and water vapor band is presented. Given the resolution of the new generation RSS, we are able to retrieve both the mean and variance of photon pathlength distributions.

  12. 2013 Annual Merit Review Results Report - Project and Program Statistical Calculations Overview

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    12. Project and Program Statistics Calculations Overview. A numerical evaluation of each project within each subprogram area and a comparison to the other projects within the subprogram area necessitates a statistical comparison of the projects utilizing specific criteria. For each project, a representative set of experts in the project's field was selected to evaluate the project based upon the criteria indicated in the Introduction. Each evaluation criterion's sample mean and variance were

  13. 2015 Annual Merit Review, Vehicle Technologies Office

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    numerical evaluation of each project within each subprogram area and a comparison to the other projects within the subprogram area necessitates a statistical comparison of the projects utilizing specific criteria. For each project, a representative set of experts in the project's field was selected to evaluate the project based upon the criteria indicated in the Introduction. Each evaluation criterion's sample mean and variance were calculated utilizing the following formulas respectively:

  14. Effect of noise on the standard mapping

    SciTech Connect (OSTI)

    Karney, C.F.F.; Rechester, A.B.; White, R.B.

    1981-03-01

    The effect of a small amount of noise on the standard mapping is considered. Whenever the standard mapping possesses accelerator modes (where the action increases approximately linearly with time), the diffusion coefficient contains a term proportional to the reciprocal of the variance of the noise term. At large values of the stochasticity parameter, the accelerator modes exhibit a universal behavior. As a result, the dependence of the diffusion coefficient on the stochasticity parameter also shows some universal behavior.
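
    A direct numerical check of this behavior is straightforward: iterate the standard map with a small additive noise term and estimate the action diffusion coefficient from the growth of the action variance. The sketch below is illustrative only; the parameter values and the Gaussian noise model are assumptions, not those of the paper.

        import numpy as np

        def diffusion_coefficient(K, sigma, n_steps=2000, n_traj=2000, seed=0):
            """Estimate the action diffusion coefficient of the noisy standard map.

            I' = I + K*sin(theta) + xi,  theta' = theta + I'  (mod 2*pi),
            with xi ~ N(0, sigma^2).  D is estimated as var(I) / (2 * n_steps).
            """
            rng = np.random.default_rng(seed)
            theta = rng.uniform(0.0, 2.0 * np.pi, n_traj)
            I = np.zeros(n_traj)
            for _ in range(n_steps):
                I = I + K * np.sin(theta) + sigma * rng.standard_normal(n_traj)
                theta = (theta + I) % (2.0 * np.pi)
            return np.var(I) / (2.0 * n_steps)

        # K near 2*pi, where accelerator modes exist, for a few noise strengths
        for sigma in (0.0, 0.01, 0.1):
            print(sigma, diffusion_coefficient(K=6.28, sigma=sigma))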

  15. Microsoft PowerPoint - Terry EIA [Compatibility Mode]

    Gasoline and Diesel Fuel Update (EIA)

    Biofuel Outlook. Terrence Higgins, EIA: August 1, 2012. Outline: I. Global Overview; II. Focus on Americas (A. Brazilian Ethanol Supply; B. U.S. Biofuel: 1. RFS Requirements, 2. Ethanol Limitations, 3. Advanced Biofuel). All rights reserved (2012). Global Outlook: Africa: Countries beginning to set mandates; Asia Pacific: High variance in blend levels; Europe: RED implementation, sustainability and GHG savings; North America: RFS2, LCFS, intermediate blends. Biofuel Mandates in 2012. Source: Hart

  16. Part II - The effect of data on waste behaviour: The South African waste information system

    SciTech Connect (OSTI)

    Godfrey, Linda; Scott, Dianne; Difford, Mark; Trois, Cristina

    2012-11-15

    Highlights: • This empirical study explores the relationship between data and resultant waste knowledge. • The study shows that 'Experience, Data and Theory' account for 54.1% of the variance in knowledge. • A strategic framework for Municipalities emerged from this study. - Abstract: Combining the process of learning and the theory of planned behaviour into a new theoretical framework provides an opportunity to explore the impact of data on waste behaviour, and consequently on waste management, in South Africa. Fitting the data to the theoretical framework shows that there are only three constructs which have a significant effect on behaviour, viz. experience, knowledge, and perceived behavioural control (PBC). Knowledge has a significant influence on all three of the antecedents to behavioural intention (attitude, subjective norm and PBC). However, it is PBC, and not intention, that has the greatest influence on waste behaviour. While respondents may have an intention to act, this intention does not always manifest as actual waste behaviour, suggesting limited volitional control. The theoretical framework accounts for 53.7% of the variance in behaviour, suggesting significant external influences on behaviour not accounted for in the framework. While the theoretical model remains the same, respondents in public and private organisations represent two statistically significant sub-groups in the data set. The theoretical framework accounts for 47.8% of the variance in behaviour of respondents in public waste organisations and 57.6% of the variance in behaviour of respondents in private organisations. The results suggest that respondents in public and private waste organisations are subject to different structural forces that shape knowledge, intention, and resultant waste behaviour.

  17. MEASURING X-RAY VARIABILITY IN FAINT/SPARSELY SAMPLED ACTIVE GALACTIC NUCLEI

    SciTech Connect (OSTI)

    Allevato, V.; Paolillo, M.; Papadakis, I.; Pinto, C.

    2013-07-01

    We study the statistical properties of the normalized excess variance of variability processes characterized by a 'red-noise' power spectral density (PSD), as in the case of active galactic nuclei (AGNs). We perform Monte Carlo simulations of light curves, assuming both a continuous and a sparse sampling pattern and various signal-to-noise ratios (S/Ns). We show that the normalized excess variance is a biased estimate of the variance even in the case of continuously sampled light curves. The bias depends on the PSD slope and on the sampling pattern, but not on the S/N. We provide a simple formula to account for the bias, which yields unbiased estimates with an accuracy better than 15%. We show that the normalized excess variance estimates based on single light curves (especially for sparse sampling and S/N < 3) are highly uncertain (even if corrected for bias) and we propose instead the use of an 'ensemble estimate', based on multiple light curves of the same object, or on the use of light curves of many objects. These estimates have symmetric distributions, known errors, and can also be corrected for biases. We use our results to estimate the ability to measure the intrinsic source variability in current data, and show that they could also be useful in the planning of the observing strategy of future surveys such as those provided by X-ray missions studying distant and/or faint AGN populations and, more generally, in the estimation of the variability amplitude of sources that will result from future surveys such as Pan-STARRS and LSST.
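
    The basic estimator under discussion, the normalized excess variance of a light curve, has a simple closed form; a sketch follows (the bias correction derived in the paper, which depends on the PSD slope and sampling pattern, is not included, and the toy light curve is an assumption for illustration).

        import numpy as np

        def normalized_excess_variance(flux, flux_err):
            """sigma_NXS^2 = (S^2 - <sigma_err^2>) / <x>^2, where S^2 is the sample
            variance of the flux and <sigma_err^2> the mean squared measurement error."""
            flux = np.asarray(flux, float)
            err = np.asarray(flux_err, float)
            return (flux.var(ddof=1) - np.mean(err ** 2)) / flux.mean() ** 2

        rng = np.random.default_rng(3)
        intrinsic = 10.0 + 0.5 * rng.standard_normal(200)      # 5% intrinsic variability
        err = 0.2 * np.ones(200)                                # 2% measurement errors
        observed = intrinsic + err * rng.standard_normal(200)
        print(normalized_excess_variance(observed, err))        # roughly (0.5/10)^2 = 0.0025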

  18. A Comparison of Image Quality Evaluation Techniques for Transmission X-Ray Microscopy

    SciTech Connect (OSTI)

    Bolgert, Peter J; /Marquette U. /SLAC

    2012-08-31

    Beamline 6-2c at Stanford Synchrotron Radiation Lightsource (SSRL) is capable of Transmission X-ray Microscopy (TXM) at 30 nm resolution. Raw images from the microscope must undergo extensive image processing before publication. Since typical data sets normally contain thousands of images, it is necessary to automate the image processing workflow as much as possible, particularly for the aligning and averaging of similar images. Currently we align images using the 'phase correlation' algorithm, which calculates the relative offset of two images by multiplying them in the frequency domain. For images containing high frequency noise, this algorithm will align noise with noise, resulting in a blurry average. To remedy this we multiply the images by a Gaussian function in the frequency domain, so that the algorithm ignores the high frequency noise while properly aligning the features of interest (FOI). The shape of the Gaussian is manually tuned by the user until the resulting average image is sharpest. To automatically optimize this process, it is necessary for the computer to evaluate the quality of the average image by quantifying its sharpness. In our research we explored two image sharpness metrics, the variance method and the frequency threshold method. The variance method uses the variance of the image as an indicator of sharpness while the frequency threshold method sums up the power in a specific frequency band. These metrics were tested on a variety of test images, containing both real and artificial noise. To apply these sharpness metrics, we designed and built a MATLAB graphical user interface (GUI) called 'Blur Master.' We found that it is possible for blurry images to have a large variance if they contain high amounts of noise. On the other hand, we found the frequency method to be quite reliable, although it is necessary to manually choose suitable limits for the frequency band. Further research must be performed to design an algorithm which automatically selects these parameters.
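
    The two sharpness metrics compared in this work can be sketched in a few lines; the band limits and test images below are arbitrary choices for illustration, not the settings used in Blur Master.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def sharpness_variance(img):
            """Variance-based sharpness metric (can be fooled by high-frequency noise)."""
            return np.var(img)

        def sharpness_band_power(img, f_lo=0.05, f_hi=0.25):
            """Sum of spectral power inside a chosen band of normalized spatial frequency."""
            F = np.fft.fftshift(np.fft.fft2(img))
            fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
            fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
            r = np.hypot(*np.meshgrid(fx, fy))
            return np.sum(np.abs(F[(r >= f_lo) & (r <= f_hi)]) ** 2)

        rng = np.random.default_rng(4)
        sharp = np.zeros((128, 128)); sharp[32:96, 32:96] = 1.0
        blurred_noisy = uniform_filter(sharp, size=9) + 0.4 * rng.standard_normal(sharp.shape)
        print(sharpness_variance(blurred_noisy) > sharpness_variance(sharp))      # noise can fool the variance metric
        print(sharpness_band_power(sharp) > sharpness_band_power(blurred_noisy))  # band power still prefers the sharp image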

  19. Measuring skewness of red blood cell deformability distribution by laser ektacytometry

    SciTech Connect (OSTI)

    Nikitin, S Yu; Priezzhev, A V; Lugovtsov, A E; Ustinov, V D

    2014-08-31

    An algorithm is proposed for measuring the parameters of red blood cell deformability distribution based on laser diffractometry of red blood cells in shear flow (ektacytometry). The algorithm is tested on specially prepared samples of rat blood. In these experiments we succeeded in measuring the mean deformability, deformability variance and skewness of red blood cell deformability distribution with errors of 10%, 15% and 35%, respectively. (laser biophotonics)

  20. Display of Hi-Res Data | Princeton Plasma Physics Lab

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Display of Hi-Res Data This invention enables plotting a very large number of data points relative to the number of display pixels without losing significant information about the data. A user operating the system can set the threshold for highlighting locations on the plot that exceed a specific variance or range. Highlighted areas can be dynamically explored at the full resolution of the data. No.: M-874 Inventor(s): Eliot A Feibush

  1. Statistical assessment of Monte Carlo distributional tallies

    SciTech Connect (OSTI)

    Kiedrowski, Brian C; Solomon, Clell J

    2010-12-09

    Four tests are developed to assess the statistical reliability of distributional or mesh tallies. To this end, the relative variance density function is developed and its moments are studied using simplified, non-transport models. The statistical tests are performed upon the results of MCNP calculations of three different transport test problems and appear to show that the tests are appropriate indicators of global statistical quality.

  2. Slide 1

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    FFP Subcontracting and Prime EVM Office of Acquisition and Project Management (OAPM) MA-60 U. S. Department of Energy July 2014 Achieving Management and Operational Excellence FFP Subcontractor Effort in the Prime's EVMS and Data * Is the FFP work integrated into Prime's EVMS - Schedules - PMB - Invoices / Payments * Does the CPR / IPMR include - FFP payments without booking lag - FFP subcontractor schedule variance Page 2 Organizing Subcontracted Effort * Work Breakdown Structure *

  3. Slide 1

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Trend Reports Office of Acquisition and Project Management (OAPM) MA-60 U. S. Department of Energy July 2014 Achieving Management and Operational Excellence Analysis Reports - Project Analysis SOP Page 2 PARSII * Analysis Reports - Report use further explained in OAPM's EVMS Project Analysis Standard Operating Procedure (EPASOP) - Trend Analysis Subfolder * Variance Analysis Cumulative (WBS Level) * MR Balance v. SV, VAC, & EAC Trends * Management Reserve (MR) Log * Performance Index Trends

  4. Microsoft PowerPoint - Snippet 1.8 DOE Common EVMS Findings 20140703 [Compatibility Mode]

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    common surveillance findings identified during DOE EVMS Reviews. The purpose of this snippet is to share the most common areas of non-compliance. The preponderance of non-compliances falls into these areas: lack of cost, schedule and scope integration; lack of schedule integrity; inadequate variance analysis; inadequate Estimate at Completion (EAC) implementation; improper use of Management Reserve; and lack of proper control of the baseline. Each of these areas is discussed in more detail in

  5. Microsoft PowerPoint - Snippet 4.9 High Level EVM Expectations 20140711 [Compatibility Mode]

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    focuses on the DOE Federal Project Director's expectations of the contractor's earned value management system and the resultant EVM data. The high-level EVM expectations presented in this Snippet will cover these areas: EVM concepts and objectives, the scheduling and budgeting process, work authorization, level of effort concerns, variance analysis and reporting, evaluation of the contractor's estimate at completion, baseline control and revisions, and a synopsis of expectations. The requirement

  6. ARM - Measurement - Atmospheric turbulence

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Measurement: Atmospheric turbulence. High frequency velocity fluctuations that lead to turbulent transport of momentum, heat, moisture, and passive scalars, often expressed in terms of variances and covariances. Categories: Atmospheric State, Surface Properties. Instruments: The above measurement is considered scientifically relevant for the following

  7. Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; Chowdhary, Kenny; Debusschere, Bert; Swiler, Laura P.; Eldred, Michael S.

    2015-01-01

    In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
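
    As a concrete example of the variance-based global sensitivity analysis mentioned here, first-order Sobol indices can be estimated with a pick-and-freeze Monte Carlo scheme; the toy model below is an assumption for illustration and is unrelated to the NASA challenge problem itself.

        import numpy as np

        def sobol_first_order(model, dim, n=100_000, seed=0):
            """First-order Sobol (variance-based) sensitivity indices by Monte Carlo.

            Inputs are assumed independent U(0,1); `model` maps an (n, dim) array of
            samples to an (n,) array of outputs.  Saltelli-style pick-and-freeze
            estimator; illustrative only.
            """
            rng = np.random.default_rng(seed)
            A, B = rng.random((n, dim)), rng.random((n, dim))
            fA, fB = model(A), model(B)
            var_y = np.var(np.concatenate([fA, fB]))
            S = np.empty(dim)
            for i in range(dim):
                ABi = A.copy()
                ABi[:, i] = B[:, i]              # freeze all inputs except the i-th
                S[i] = np.mean(fB * (model(ABi) - fA)) / var_y
            return S

        # toy additive model: expected indices are about (0.79, 0.20, 0.01)
        print(sobol_first_order(lambda x: x[:, 0] + 0.5 * x[:, 1] + 0.1 * x[:, 2], dim=3))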

  8. Module 6 - Metrics, Performance Measurements and Forecasting | Department

    Energy Savers [EERE]

    of Energy 6 - Metrics, Performance Measurements and Forecasting Module 6 - Metrics, Performance Measurements and Forecasting This module focuses on the metrics and performance measurement tools used in Earned Value. This module reviews metrics such as cost and schedule variance along with cost and schedule performance indices. In addition, this module will outline forecasting tools such as estimate to complete (ETC) and estimate at completion (EAC)

  9. Microsoft PowerPoint - FinalModule6.ppt

    Office of Environmental Management (EM)

    Module 6: Metrics, Performance Measurements and Forecasting. Prepared by: Booz Allen Hamilton. Welcome to Module 6. The objective of this module is to introduce you to the Metrics and Performance Measurement tools used, along with Forecasting, in Earned Value Management. The Topics that will be addressed in this Module include: * Define Cost and Schedule Variances * Define

  10. Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

    SciTech Connect (OSTI)

    Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; Chowdhary, Kenny; Debusschere, Bert; Swiler, Laura P.; Eldred, Michael S.

    2015-01-01

    In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.

  11. Microsoft Word - SEC J_Appendix O - Program Mgt and Cost Reports and Government_Negotiated

    National Nuclear Security Administration (NNSA)

    O, Page 1 SECTION J APPENDIX O PROGRAM MANAGEMENT AND COST REPORTS The Contractor shall submit periodic cost, schedule, and technical performance plans and reports in such form and substance as required by the Contracting Officer. Reference Section J, Appendix A, Statement of Work, Chapter I, 4.2. Cost reports will include at a minimum: 1. Monthly general management reports to summarize schedule, labor, and cost plans and status, and provide explanations of status variances from plans. The

  12. Relationship of adiposity to the population distribution of plasma triglyceride concentrations in vigorously active men and women

    SciTech Connect (OSTI)

    Williams, Paul T.

    2002-12-21

    Context and Objective: Vigorous exercise, alcohol and weight loss are all known to increase HDL-cholesterol, however, it is not known whether these interventions raise low HDL as effectively as has been demonstrated for normal HDL. Design: Physician-supplied medical data from 7,288 male and 2,359 female runners were divided into five strata according to their self-reported usual running distance, reported alcohol intake, body mass index (BMI) or waist circumference. Within each stratum, the 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentiles for HDL-cholesterol were then determined. Bootstrap resampling of least-squares regression was applied to determine the cross-sectional relationships between these factors and each percentile of the HDL-cholesterol distribution. Results: In both sexes, the rise in HDL-cholesterol per unit of vigorous exercise or alcohol intake was at least twice as great at the 95th percentile as at the 5th percentile of the HDL-distribution. There was also a significant graded increase in the slopes relating exercise (km run) and alcohol intake to HDL between the 5th and the 95th percentile. Men's HDL-cholesterol decreased in association with fatness (BMI and waist circumference) more sharply at the 95th than at the 5th percentile of the HDL-distribution. Conclusions: Although exercise, alcohol and adiposity were all related to HDL-cholesterol, the elevation in HDL per km run or ounce of alcohol consumed, and reduction in HDL per kg of body weight (men only), was least when HDL was low and greatest when HDL was high. These cross-sectional relationships support the hypothesis that men and women who have low HDL-cholesterol will be less responsive to exercise and alcohol (and weight loss in men) as compared to those who have high HDL-cholesterol.

  13. Teleportation of squeezing: Optimization using non-Gaussian resources

    SciTech Connect (OSTI)

    Dell'Anno, Fabio; De Siena, Silvio; Illuminati, Fabrizio; Adesso, Gerardo

    2010-12-15

    We study the continuous-variable quantum teleportation of states, statistical moments of observables, and scale parameters such as squeezing. We investigate the problem both in ideal and imperfect Vaidman-Braunstein-Kimble protocol setups. We show how the teleportation fidelity is maximized and the difference between output and input variances is minimized by using suitably optimized entangled resources. Specifically, we consider the teleportation of coherent squeezed states, exploiting squeezed Bell states as entangled resources. This class of non-Gaussian states, introduced by Illuminati and co-workers [F. Dell'Anno, S. De Siena, L. Albano, and F. Illuminati, Phys. Rev. A 76, 022301 (2007); F. Dell'Anno, S. De Siena, and F. Illuminati, ibid. 81, 012333 (2010)], includes photon-added and photon-subtracted squeezed states as special cases. At variance with the case of entangled Gaussian resources, the use of entangled non-Gaussian squeezed Bell resources allows one to choose different optimization procedures that lead to inequivalent results. Performing two independent optimization procedures, one can either maximize the state teleportation fidelity, or minimize the difference between input and output quadrature variances. The two different procedures are compared depending on the degrees of displacement and squeezing of the input states and on the working conditions in ideal and nonideal setups.

  14. Entropy vs. energy waveform processing: A comparison based on the heat equation

    SciTech Connect (OSTI)

    Hughes, Michael S.; McCarthy, John E.; Bruillard, Paul J.; Marsh, Jon N.; Wickline, Samuel A.

    2015-05-25

    Virtually all modern imaging devices collect electromagnetic or acoustic waves and use the energy carried by these waves to determine pixel values to create what is basically an “energy” picture. However, waves also carry “information”, as quantified by some form of entropy, and this may also be used to produce an “information” image. Numerous published studies have demonstrated the advantages of entropy, or “information imaging”, over conventional methods. The most sensitive information measure appears to be the joint entropy of the collected wave and a reference signal. The sensitivity of repeated experimental observations of a slowly-changing quantity may be defined as the mean variation (i.e., observed change) divided by mean variance (i.e., noise). Wiener integration permits computation of the required mean values and variances as solutions to the heat equation, permitting estimation of their relative magnitudes. There always exists a reference, such that joint entropy has larger variation and smaller variance than the corresponding quantities for signal energy, matching observations of several studies. Moreover, a general prescription for finding an “optimal” reference for the joint entropy emerges, which also has been validated in several studies.

  15. Entropy vs. energy waveform processing: A comparison based on the heat equation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Hughes, Michael S.; McCarthy, John E.; Bruillard, Paul J.; Marsh, Jon N.; Wickline, Samuel A.

    2015-05-25

    Virtually all modern imaging devices collect electromagnetic or acoustic waves and use the energy carried by these waves to determine pixel values to create what is basically an “energy” picture. However, waves also carry “information”, as quantified by some form of entropy, and this may also be used to produce an “information” image. Numerous published studies have demonstrated the advantages of entropy, or “information imaging”, over conventional methods. The most sensitive information measure appears to be the joint entropy of the collected wave and a reference signal. The sensitivity of repeated experimental observations of a slowly-changing quantity may be defined as the mean variation (i.e., observed change) divided by mean variance (i.e., noise). Wiener integration permits computation of the required mean values and variances as solutions to the heat equation, permitting estimation of their relative magnitudes. There always exists a reference, such that joint entropy has larger variation and smaller variance than the corresponding quantities for signal energy, matching observations of several studies. Moreover, a general prescription for finding an “optimal” reference for the joint entropy emerges, which also has been validated in several studies.

  16. Lifestyle Factors in U.S. Residential Electricity Consumption

    SciTech Connect (OSTI)

    Sanquist, Thomas F.; Orr, Heather M.; Shui, Bin; Bittner, Alvah C.

    2012-03-30

    A multivariate statistical approach to lifestyle analysis of residential electricity consumption is described and illustrated. Factor analysis of selected variables from the 2005 U.S. Residential Energy Consumption Survey (RECS) identified five lifestyle factors reflecting social and behavioral choices associated with air conditioning, laundry usage, personal computer usage, climate zone of residence, and TV use. These factors were also estimated for 2001 RECS data. Multiple regression analysis using the lifestyle factors yields solutions accounting for approximately 40% of the variance in electricity consumption for both years. By adding the associated household and market characteristics of income, local electricity price and access to natural gas, variance accounted for is increased to approximately 54%. Income contributed only {approx}1% unique variance to the 2005 and 2001 models, indicating that lifestyle factors reflecting social and behavioral choices better account for consumption differences than income. This was not surprising given the 4-fold range of energy use at differing income levels. Geographic segmentation of factor scores is illustrated, and shows distinct clusters of consumption and lifestyle factors, particularly in suburban locations. The implications for tailored policy and planning interventions are discussed in relation to lifestyle issues.

  17. The annual cycle in the tropical Pacific Ocean based on assimilated ocean data from 1983 to 1992

    SciTech Connect (OSTI)

    Smith, T.M.; Chelliah, M.

    1995-06-01

    An analysis of the tropical Pacific Ocean from January 1983 to December 1992 is used to describe the annual cycle, with the main focus on subsurface temperature variations. Some analyses of ocean-current variations are also considered. Monthly mean fields are generated by assimilation of surface and subsurface temperature observations from ships and buoys. Comparisons with observations show that the analysis reasonably describes large-scale ocean thermal variations. Ocean currents are not assimilated and do not compare as well with observations. However, the ocean-current variations in the analysis are qualitatively similar to the known variations given by others. The authors use harmonic analysis to separate the mean annual cycle and estimate its contribution to total variance. The analysis shows that in most regions the annual cycle of subsurface thermal variations is larger than surface variations and that these variations are associated with changes in the depth of the thermocline. The annual cycle accounts for most of the total surface variance poleward of about 10{degrees} latitude but accounts for much less surface and subsurface total variance near the equator. Large subsurface annual cycles occur near 10{degrees}N associated with shifts of the intertropical convergence zone and along the equator associated with the annual cycle of equatorial wind stress. The hemispherically asymmetric depths of the 20{degrees}C isotherms indicate that the large Southern Hemisphere warm pool, which extends to near the equator, may play an important role in thermal variations on the equator. 51 refs., 18 figs., 1 tab.
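
    The harmonic-analysis step described here, isolating the annual cycle and computing its share of the total variance, can be sketched for a single monthly time series as follows (illustrative only; the actual analysis operates on assimilated three-dimensional fields, and the toy series is an assumption).

        import numpy as np

        def annual_cycle_variance_fraction(x):
            """Fraction of total variance explained by the annual (12-month) harmonic,
            fit to a monthly time series by least squares."""
            x = np.asarray(x, float)
            t = np.arange(x.size)
            w = 2.0 * np.pi / 12.0
            A = np.column_stack([np.ones(x.size), np.cos(w * t), np.sin(w * t)])
            coef, *_ = np.linalg.lstsq(A, x, rcond=None)
            fit = A @ coef
            return np.var(fit) / np.var(x)

        rng = np.random.default_rng(5)
        months = np.arange(120)
        series = 2.0 * np.cos(2.0 * np.pi * months / 12.0) + rng.standard_normal(120)
        print(annual_cycle_variance_fraction(series))   # close to 2/3 for this toy series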

  18. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    SciTech Connect (OSTI)

    Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133: 1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
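
    A minimal sketch of the EM training step referred to above, estimating BMA weights and a common Gaussian spread for an ensemble of bias-corrected forecasts, is shown below; the DREAM/MCMC alternative examined in the paper is not reproduced, and the function and variable names are hypothetical.

        import numpy as np
        from scipy.stats import norm

        def bma_em(forecasts, obs, n_iter=200):
            """EM estimation of BMA weights and a common Gaussian spread.

            forecasts: (n, K) ensemble member forecasts (assumed bias-corrected),
            obs: (n,) verifying observations.  Returns (weights, sigma).
            """
            n, K = forecasts.shape
            w = np.full(K, 1.0 / K)
            sigma = np.std(obs - forecasts.mean(axis=1))
            for _ in range(n_iter):
                # E-step: responsibility of member k for each observation
                dens = w * norm.pdf(obs[:, None], loc=forecasts, scale=sigma)
                z = dens / dens.sum(axis=1, keepdims=True)
                # M-step: update weights and the common variance
                w = z.mean(axis=0)
                sigma = np.sqrt(np.sum(z * (obs[:, None] - forecasts) ** 2) / n)
            return w, sigma

        rng = np.random.default_rng(6)
        truth = rng.standard_normal(500)
        fcst = np.column_stack([truth + 0.3 * rng.standard_normal(500),
                                truth + 1.0 * rng.standard_normal(500)])
        print(bma_em(fcst, truth))   # weight concentrates on the more skilful member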

  19. Consequences of proposed changes to Clean Water Act thermal discharge requirements

    SciTech Connect (OSTI)

    Veil, J.A.; Moses, D.O.

    1995-12-31

    This paper summarizes three studies that examined the economic and environmental impact on the power industry of (1) limiting thermal mixing zones to 1,000 feet, and (2) eliminating the Clean Water Act (CWA) {section}316(a) variance. Both of these proposed changes were included in S. 1081, a 1991 Senate bill to reauthorize the CWA. The bill would not have provided for grandfathering plants already using the variance or mixing zones larger than 1000 feet. Each of the two changes to the existing thermal discharge requirements was independently evaluated. Power companies were asked what they would do if these two changes were imposed. Most plants affected by the proposed changes would retrofit cooling towers and some would retrofit diffusers. Assuming that all affected plants would proportionally follow the same options as the surveyed plants, the estimated capital cost of retrofitting cooling towers or diffusers at all affected plants ranges from $21.4 to 24.4 billion. Both cooling towers and diffusers exert a 1%-5.8% energy penalty on a plant's output. Consequently, the power companies must generate additional power if they install those technologies. The estimated cost of the additional power ranges from $10 to 18.4 billion over 20 years. Generation of the extra power would emit over 8 million tons per year of additional carbon dioxide. Operation of the new cooling towers would cause more than 1.5 million gallons per minute of additional evaporation. Neither the restricted mixing zone size nor the elimination of the {section}316(a) variance was adopted into law. More recent proposed changes to the Clean Water Act have not included either of these provisions, but in the future, other Congresses might attempt to reintroduce these types of changes.

  20. Demonstration of Data Center Energy Use Prediction Software

    SciTech Connect (OSTI)

    Coles, Henry; Greenberg, Steve; Tschudi, William

    2013-09-30

    This report documents a demonstration of a software modeling tool from Romonet that was used to predict energy use and forecast energy use improvements in an operating data center. The demonstration was conducted in a conventional data center with a 15,500 square foot raised floor and an IT equipment load of 332 kilowatts. It was cooled using traditional computer room air handlers and a compressor-based chilled water system. The data center also utilized an uninterruptible power supply system for power conditioning and backup. Electrical energy monitoring was available at a number of locations within the data center. The software modeling tool predicted the energy use of the data center's cooling and electrical power distribution systems, as well as electrical energy use and heat removal for the site. The actual energy used by the computer equipment was recorded from power distribution devices located at each computer equipment row. The model simulated the total energy use in the data center and supporting infrastructure and predicted energy use at energy-consuming points throughout the power distribution system. The initial predicted power levels were compared to actual meter readings and were found to be within approximately 10 percent at a particular measurement point, resulting in a site overall variance of 4.7 percent. Some variances were investigated, and more accurate information was entered into the model. In this case the overall variance was reduced to approximately 1.2 percent. The model was then used to predict energy use for various modification opportunities to the data center in successive iterations. These included increasing the IT equipment load, adding computer room air handler fan speed controls, and adding a water-side economizer. The demonstration showed that the software can be used to simulate data center energy use and create a model that is useful for investigating energy efficiency design changes.

  1. Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Qian, Yun; Yan, Huiping; Hou, Zhangshuan; Johannesson, G.; Klein, Stephen A.; Lucas, Donald; Neale, Richard; Rasch, Philip J.; Swiler, Laura P.; Tannahill, John; et al

    2015-04-10

    We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that for the 22 parameters perturbed in the cloud ensemble, the six having the greatest influences on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to a fewer number of parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approaches or concomitant parameters selected. Generally the GLM is able to explain more of the parametric sensitivity of global precipitation than local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows a significant seasonal variability in the mid-latitude continental regions, but very small in tropical continental regions.

  2. Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5

    SciTech Connect (OSTI)

    Qian, Yun; Yan, Huiping; Hou, Zhangshuan; Johannesson, G.; Klein, Stephen A.; Lucas, Donald; Neale, Richard; Rasch, Philip J.; Swiler, Laura P.; Tannahill, John; Wang, Hailong; Wang, Minghuai; Zhao, Chun

    2015-04-10

    We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that for the 22 parameters perturbed in the cloud ensemble, the six having the greatest influences on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to a fewer number of parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approaches or concomitant parameters selected. Generally the GLM is able to explain more of the parametric sensitivity of global precipitation than local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows a significant seasonal variability in the mid-latitude continental regions, but very small in tropical continental regions.

  3. U.S. DEPARTMENT OF ENERGY

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    2# SUMMARY REPORT FORM APPROVED (11-84) OMB NO. 1910-1400 1. IDENTIFICATION NUMBER 2. PROGRAM/ PROJECT TITLE 3. REPORTING PERIOD 4. PARTICIPANT NAME AND ADDRESS 5. START DATE 6. COMPLETION DATE 7. FY 8. MONTHS 9. COST STATUS a. $ Expressed in: b. Budget and Reporting No. c. Cost Plan Date d. Actual Costs Prior Years e. Planned Costs Prior Years f. Total Estimated Cost for Contract g. Total Contract Value h. Estimated Subsequent Reporting Period Accrued Costs g. Planned h. Actual i. Variance j.

  4. Transport Test Problems for Hybrid Methods Development

    SciTech Connect (OSTI)

    Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.; McDonald, Benjamin S.

    2011-12-28

    This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.

  5. Analysis of turbulent transport and mixing in transitional Rayleigh/Taylor unstable flow using direct numerical simulation data

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Schilling, Oleg; Mueschke, Nicholas J.

    2010-10-18

    Data from a 1152 x 760 x 1280 direct numerical simulation (DNS) of a transitional Rayleigh-Taylor mixing layer modeled after a small Atwood number water channel experiment is used to comprehensively investigate the structure of mean and turbulent transport and mixing. The simulation had physical parameters and initial conditions approximating those in the experiment. The budgets of the mean vertical momentum, heavy-fluid mass fraction, turbulent kinetic energy, turbulent kinetic energy dissipation rate, heavy-fluid mass fraction variance, and heavy-fluid mass fraction variance dissipation rate equations are constructed using Reynolds averaging applied to the DNS data. The relative importance of mean and turbulent production, turbulent dissipation and destruction, and turbulent transport are investigated as a function of Reynolds number and across the mixing layer to provide insight into the flow dynamics not presently available from experiments. The analysis of the budgets supports the assumption for small Atwood number, Rayleigh/Taylor driven flows that the principal transport mechanisms are buoyancy production, turbulent production, turbulent dissipation, and turbulent diffusion (shear and mean field production are negligible). As the Reynolds number increases, the turbulent production in the turbulent kinetic energy dissipation rate equation becomes the dominant production term, while the buoyancy production plateaus. Distinctions between momentum and scalar transport are also noted, where the turbulent kinetic energy and its dissipation rate both grow in time and are peaked near the center plane of the mixing layer, while the heavy-fluid mass fraction variance and its dissipation rate initially grow and then begin to decrease as mixing progresses and reduces density fluctuations. All terms in the transport equations generally grow or decay, with no qualitative change in their profile, except for the pressure flux contribution to the total turbulent kinetic energy flux, which changes sign early in time (a countergradient effect). The production-to-dissipation ratios corresponding to the turbulent kinetic energy and heavy-fluid mass fraction variance are large and vary strongly at small evolution times, decrease with time, and nearly asymptote as the flow enters a self-similar regime. The late-time turbulent kinetic energy production-to-dissipation ratio is larger than observed in shear-driven turbulent flows. The order of magnitude estimates of the terms in the transport equations are shown to be consistent with the DNS at late-time, and also confirms both the dominant terms and their evolutionary behavior. These results are useful for identifying the dynamically important terms requiring closure, and assessing the accuracy of the predictions of Reynolds-averaged Navier-Stokes and large-eddy simulation models of turbulent transport and mixing in transitional Rayleigh-Taylor instability-generated flow.

  6. Convergence of Legendre Expansion of Doppler-Broadened Double Differential Elastic Scattering Cross Section

    SciTech Connect (OSTI)

    Arbanas, Goran; Dunn, Michael E; Larson, Nancy M; Leal, Luiz C; Williams, Mark L

    2012-01-01

    Convergence properties of Legendre expansion of a Doppler-broadened double-differential elastic neutron scattering cross section of {sup 238}U near the 6.67 eV resonance at temperature 10{sup 3} K are studied. A variance of Legendre expansion from a reference Monte Carlo computation is used as a measure of convergence and is computed for as many as 15 terms in the Legendre expansion. When the outgoing energy equals the incoming energy, it is found that the Legendre expansion converges very slowly. Therefore, a supplementary method of computing many higher-order terms is suggested and employed for this special case.

  7. Safety criteria for organic watch list tanks at the Hanford Site

    SciTech Connect (OSTI)

    Meacham, J.E., Westinghouse Hanford

    1996-08-01

    This document reviews the hazards associated with the storage of organic complexant salts in Hanford Site high-level waste single-shell tanks. The results of this analysis were used to categorize tank wastes as safe, conditionally safe, or unsafe. Sufficient data were available to categorize 67 tanks; 63 tanks were categorized as safe, and four tanks were categorized as conditionally safe. No tanks were categorized as unsafe. The remaining 82 SSTs lack sufficient data to be categorized. Historic tank data and an analysis of variance model were used to prioritize the remaining tanks for characterization.

  8. Element Agglomeration Algebraic Multilevel Monte-Carlo Library

    Energy Science and Technology Software Center (OSTI)

    2015-02-19

    ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables Multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski and currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.
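
    The variance-reduction idea behind multilevel Monte Carlo is the telescoping sum E[Q_L] = E[Q_0] + sum over levels of E[Q_l - Q_(l-1)], estimated with many cheap coarse samples and few expensive fine ones. The sketch below illustrates that generic idea on a toy problem; it bears no relation to the ElagMC interface or its element-agglomeration coarse spaces.

        import numpy as np

        def mlmc_estimate(sampler, n_per_level):
            """Multilevel Monte Carlo telescoping estimator of E[Q_L].

            sampler(level, n) returns (coarse, fine) arrays sampled with the same
            underlying randomness on consecutive levels (coarse is None on level 0).
            """
            est = 0.0
            for level, n in enumerate(n_per_level):
                coarse, fine = sampler(level, n)
                est += np.mean(fine) if coarse is None else np.mean(fine - coarse)
            return est

        rng = np.random.default_rng(7)

        def toy_sampler(level, n):
            # Level-l approximation of Q = U**2: round U to a grid of spacing 2**-(l+2).
            u = rng.random(n)
            q = lambda l: (np.round(u / 2.0 ** -(l + 2)) * 2.0 ** -(l + 2)) ** 2
            return (None if level == 0 else q(level - 1)), q(level)

        # Many cheap coarse samples, few expensive fine ones
        print(mlmc_estimate(toy_sampler, [20000, 4000, 800, 160]))   # approx E[U**2] = 1/3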

  9. In-Situ Real Time Monitoring and Control of Mold Making and Filling Processes: Final Report

    SciTech Connect (OSTI)

    Mohamed Abdelrahman; Kenneth Currie

    2010-12-22

    This project presents a model for addressing several objectives envisioned by the metal casting industries through the integration of research and educational components. It provides an innovative approach to introduce technologies for real time characterization of sand molds, lost foam patterns and monitoring of the mold filling process. The technology developed will enable better control over the casting process. It is expected to reduce scrap and variance in the casting quality. A strong educational component is integrated into the research plan to increase awareness among industry professionals of the potential benefits of the developed technology and of cross-cutting technologies.

  10. Modelling of volatility in monetary transmission mechanism

    SciTech Connect (OSTI)

    Dobešová, Anna; Klepáč, Václav; Kolman, Pavel; Bednářová, Petra

    2015-03-10

    The aim of this paper is to compare different approaches to modeling volatility in the monetary transmission mechanism. For this purpose we built a time-varying parameter VAR (TVP-VAR) model with stochastic volatility and a VAR-DCC-GARCH model with conditional variance. The data from three European countries are included in the analysis: the Czech Republic, Germany and Slovakia. Results show that the VAR-DCC-GARCH system captures higher volatility of the observed variables, but the main trends and detected breaks are generally identical in both approaches.
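
    For context, the conditional-variance recursion at the heart of the GARCH(1,1) building block used in such models is shown below; the parameter values are placeholders, and the DCC step that couples several return series is omitted.

        import numpy as np

        def garch11_variance(returns, omega, alpha, beta):
            """Conditional variance of a GARCH(1,1) model:
            h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}.
            Parameters are assumed given (e.g. from maximum likelihood)."""
            r = np.asarray(returns, float)
            h = np.empty_like(r)
            h[0] = r.var()
            for t in range(1, r.size):
                h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
            return h

        rng = np.random.default_rng(9)
        r = 0.01 * rng.standard_normal(1000)                     # toy return series
        print(garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.9)[:5])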

  11. Methods for recalibration of mass spectrometry data

    DOE Patents [OSTI]

    Tolmachev, Aleksey V. (Richland, WA); Smith, Richard D. (Richland, WA)

    2009-03-03

    Disclosed are methods for recalibrating mass spectrometry data that provide improvement in both mass accuracy and precision by adjusting for experimental variance in parameters that have a substantial impact on mass measurement accuracy. Optimal coefficients are determined using correlated pairs of mass values compiled by matching sets of measured and putative mass values that minimize overall effective mass error and mass error spread. Coefficients are subsequently used to correct mass values for peaks detected in the measured dataset, providing recalibration thereof. Sub-ppm mass measurement accuracy has been demonstrated on a complex fungal proteome after recalibration, providing improved confidence for peptide identifications.

  12. Energy-selective optical excitation and detection in InAs/InP quantum dot ensembles using a one-dimensional optical microcavity

    SciTech Connect (OSTI)

    Gamouras, A.; Britton, M.; Khairy, M. M.; Mathew, R.; Hall, K. C.; Dalacu, D.; Poole, P.; Poitras, D.; Williams, R. L.

    2013-12-16

    We demonstrate the selective optical excitation and detection of subsets of quantum dots (QDs) within an InAs/InP ensemble using a SiO{sub 2}/Ta{sub 2}O{sub 5}-based optical microcavity. The low variance of the exciton transition energy and dipole moment tied to the narrow linewidth of the microcavity mode is expected to facilitate effective qubit encoding and manipulation in a quantum dot ensemble with ease of quantum state readout relative to qubits encoded in single quantum dots.

  13. DOE F 1332.8

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    LABOR MANAGEMENT REPORT Page of DOE F 1332.8# FORM APPROVED (11-84) OMB NO. 1910-1400 1. TITLE 2. REPORTING PERIOD 3. IDENTIFICATION NUMBER 4. PARTICIPANT NAME AND ADDRESS 5. LABOR PLAN DATE 6. START DATE 7. COMPLETION DATE 8. ELEMENT 9. REPORTING ELEMENT 10. LABOR EXPENDED 11. ESTIMATED LABOR EXPENDITURES 12. 13. CODE Total Contract Variance Labor Reporting Period Cumulative to Date Balance c. e. a. Subse- quent Reporting Period Total of Fiscal Year (1) (2) (3) d. Subse- quent Fiscal Years to

  14. DOE F 1332.9

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    COST MANAGEMENT REPORT Page of DOE F 1332.9# FORM APPROVED (11-84) OMB NO. 1910-1400 1. TITLE 2. REPORTING PERIOD 3. IDENTIFICATION NUMBER 4. PARTICIPANT NAME AND ADDRESS 5. COST PLAN DATE 6. START DATE 7. COMPLETION DATE 8. ELEMENT 9. REPORTING ELEMENT 10. ACCRUED COSTS 11. ESTIMATED ACCRUED COSTS 12. 13. CODE Total Contract Variance Labor Reporting Period Cumulative to Date Balance c. d. Fiscal e. a. Subse- quent Reporting Period Total of Fiscal Year (1) (2) (3) Years to Completion a. Actual

  15. ARM - VAP Product - dlprofwstats4news

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    VAP Output: DLPROFWSTATS4NEWS (Doppler lidar vertical velocity VAP). Active Dates: 2010.10.22 - 2016.03.09. Originating VAP Process: Doppler Lidar Profiles (DLPROF). Description: Height-time displays are shown here of vertical velocity variance (a), skewness (b), and

  16. Machine protection system for rotating equipment and method

    DOE Patents [OSTI]

    Lakshminarasimha, Arkalgud N. (Marietta, GA); Rucigay, Richard J. (Marietta, GA); Ozgur, Dincer (Kennesaw, GA)

    2003-01-01

    A machine protection system and method for rotating equipment introduces new alarming features and makes use of full proximity probe sensor information, including amplitude and phase. Baseline vibration amplitude and phase data are estimated and tracked according to operating modes of the rotating equipment. Baseline vibration and phase data can be determined using a rolling average and variance and stored in a unit circle, or tracked using short-term average and long-term average baselines. The sensed vibration amplitude and phase are compared with the baseline vibration amplitude and phase data. Operation of the rotating equipment can be controlled based on the vibration amplitude and phase.

  17. EGR Distribution in Engine Cylinders Using Advanced Virtual Simulation

    SciTech Connect (OSTI)

    Fan, Xuetong

    2000-08-20

    Exhaust Gas Recirculation (EGR) is a well-known technology for reduction of NOx in diesel engines. With the demand for extremely low engine out NOx emissions, it is important to have a consistently balanced EGR flow to individual engine cylinders. Otherwise, the variation in the cylinders' NOx contribution to the overall engine emissions will produce unacceptable variability. This presentation will demonstrate the effective use of advanced virtual simulation in the development of a balanced EGR distribution in engine cylinders. An initial design is analyzed reflecting the variance in the EGR distribution, quantitatively and visually. Iterative virtual lab tests result in an optimized system.

  18. Initial Evidence for Self-Organized Criticality in Electric Power System Blackouts

    SciTech Connect (OSTI)

    Carreras, B.A.; Dobson, I.; Newman, D.E.; Poole, A.B.

    2000-01-04

    We examine correlations in a time series of electric power system blackout sizes using scaled window variance analysis and R/S statistics. The data show some evidence of long-time correlations, with a Hurst exponent near 0.7. Large blackouts tend to correlate with further large blackouts after a long time interval. Similar effects are also observed in many other complex systems exhibiting self-organized criticality. We discuss this initial evidence and possible explanations for self-organized criticality in power system blackouts. Self-organized criticality, if fully confirmed in power systems, would suggest new approaches to understanding and possibly controlling blackouts.
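
    For readers unfamiliar with the R/S statistic mentioned here, the following is a minimal rescaled-range sketch: it estimates a Hurst exponent from the slope of log(R/S) versus log(window size). The blackout time series itself is not reproduced; `x` is any one-dimensional series.

    ```python
    import numpy as np

    def hurst_rs(x, window_sizes):
        """Estimate the Hurst exponent via rescaled-range (R/S) analysis."""
        x = np.asarray(x, dtype=float)
        log_n, log_rs = [], []
        for n in window_sizes:
            rs_vals = []
            for start in range(0, len(x) - n + 1, n):
                w = x[start:start + n]
                z = np.cumsum(w - w.mean())      # cumulative deviation from the window mean
                r = z.max() - z.min()            # range of the cumulative deviations
                s = w.std(ddof=1)
                if s > 0:
                    rs_vals.append(r / s)
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
        hurst, _ = np.polyfit(log_n, log_rs, 1)  # slope ~ Hurst exponent
        return hurst

    # Uncorrelated noise should give H near 0.5; persistent series give H > 0.5.
    print(hurst_rs(np.random.default_rng(1).normal(size=4096), [16, 32, 64, 128, 256]))
    ```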

  19. Quality Work Plan Checklist and Resources - Section 1

    Energy Savers [EERE]

    Quality Work Plan Checklist and Resources - Section 1 State staff can use this list of questions and related resources to help implement the WAP Quality Work Plan. Each question includes reference to where in 15-4 the guidance behind the question is found, and where in the 2015 Application Package you will describe the answers to DOE. App Section 15-4 Section Question Yes No Resources V.5.1 1 Are you on track to submit current field guides and standards, including any necessary variance

  20. Studying methane migration mechanisms at Walker Ridge, Gulf of Mexico, via 3D methane hydrate reservoir modeling

    SciTech Connect (OSTI)

    Nole, Michael; Daigle, Hugh; Mohanty, Kishore; Cook, Ann; Hillman, Jess

    2015-12-15

    We have developed a 3D methane hydrate reservoir simulator to model marine methane hydrate systems. Our simulator couples highly nonlinear heat and mass transport equations and includes heterogeneous sedimentation, in-situ microbial methanogenesis, the influence of pore size contrast on solubility gradients, and the impact of salt exclusion from the hydrate phase on dissolved methane equilibrium in pore water. Using environmental parameters from Walker Ridge in the Gulf of Mexico, we first simulate hydrate formation in and around a thin, dipping, planar sand stratum surrounded by clay lithology as it is buried to 295 mbsf. We find that with sufficient methane being supplied by organic methanogenesis in the clays, a 200× pore size contrast between clays and sands allows for a strong enough concentration gradient to significantly drop the concentration of methane hydrate in clays immediately surrounding a thin sand layer, a phenomenon that is observed in well log data. Building upon previous work, our simulations account for the increase in sand-clay solubility contrast with depth from about 1.6% near the top of the sediment column to 8.6% at depth, which leads to a progressive strengthening of the diffusive flux of methane with time. By including an exponentially decaying organic methanogenesis input to the clay lithology with depth, we see a decrease in the aqueous methane supplied to the clays surrounding the sand layer with time, which works to further enhance the contrast in hydrate saturation between the sand and surrounding clays. Significant diffusive methane transport is observed in a clay interval of about 11 m above the sand layer and about 4 m below it, which matches well log observations. The clay-sand pore size contrast alone is not enough to completely eliminate hydrate (as observed in logs), because the diffusive flux of aqueous methane due to a contrast in pore size occurs more slowly than the rate at which methane is supplied via organic methanogenesis. Therefore, it is likely that additional mechanisms are at play, notably bound water activity reduction in clays. Three-dimensionality allows for inclusion of lithologic heterogeneities, which focus fluid flow and subsequently allow for heterogeneity in the methane migration mechanisms that dominate in marine sediments at a local scale. Incorporating recently acquired 3D seismic data from Walker Ridge to inform the lithologic structure of our modeled reservoir, we show that even with deep advective sourcing of methane along highly permeable pathways, local hydrate accumulations can be sourced either by diffusive or advective methane flux; advectively-sourced hydrates accumulate evenly in highly permeable strata, while diffusively-sourced hydrates are characterized by thin strata-bound intervals with high clay-sand pore size contrasts.

  1. Statistical techniques for characterizing residual waste in single-shell and double-shell tanks

    SciTech Connect (OSTI)

    Jensen, L., Fluor Daniel Hanford

    1997-02-13

    A primary objective of the Hanford Tank Initiative (HTI) project is to develop methods to estimate the inventory of residual waste in single-shell and double-shell tanks. A second objective is to develop methods to determine the boundaries of waste that may be in the waste plume in the vadose zone. This document presents statistical sampling plans that can be used to estimate the inventory of analytes within the residual waste within a tank. Sampling plans for estimating the inventory of analytes within the waste plume in the vadose zone are also presented. Inventory estimates can be used to classify the residual waste with respect to chemical and radiological hazards. Based on these estimates, it will be possible to make decisions regarding the final disposition of the residual waste. Four sampling plans for the residual waste in a tank are presented. The first plan is based on the assumption that, based on some physical characteristic, the residual waste can be divided into disjoint strata, and waste samples are obtained from randomly selected locations within each stratum. In the second plan, waste samples are obtained from randomly selected locations within the waste. The third and fourth plans are similar to the first two, except that composite samples are formed from multiple samples. Common to the four plans is that, in the laboratory, replicate analytical measurements are obtained from homogenized waste samples. The statistical sampling plans for the residual waste are similar to the statistical sampling plans developed for the tank waste characterization program. In that program, the statistical sampling plans required multiple core samples of waste, and replicate analytical measurements from homogenized core segments. A statistical analysis of the analytical data, obtained from use of the statistical sampling plans developed for the characterization program or from the HTI project, provides estimates of mean analyte concentrations and confidence intervals on the mean. In addition, the statistical analysis provides estimates of spatial and measurement variabilities. The magnitudes of these sources of variability are used to determine how well the inventory of the analytes in the waste has been estimated. This document provides statistical sampling plans that can be used to estimate the inventory of the analytes in the residual waste in single-shell and double-shell tanks and in the waste plume in the vadose zone.
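
    As a concrete illustration of the first (stratified) plan, the sketch below combines per-stratum sample means into a tank-level concentration estimate with the standard stratified-sampling variance. The stratum weights and analyte data are placeholders; the actual HTI designs also separate spatial and measurement variance components.

    ```python
    import numpy as np

    def stratified_inventory(strata):
        """Estimate a mean analyte concentration and its variance from stratified samples.

        strata: list of (weight, samples) where weight is the stratum's share of the
        waste volume (weights sum to 1) and samples are analytical measurements.
        """
        mean = sum(w * np.mean(s) for w, s in strata)
        # Variance of the stratified estimator, ignoring finite-population corrections.
        var = sum(w**2 * np.var(s, ddof=1) / len(s) for w, s in strata)
        return mean, var

    # Hypothetical two-stratum tank: a thin heel layer and a bulk sludge layer.
    heel = [12.1, 11.4, 13.0]
    sludge = [4.2, 3.9, 4.6, 4.1]
    m, v = stratified_inventory([(0.2, heel), (0.8, sludge)])
    print(f"estimated concentration {m:.2f}, standard error {v**0.5:.2f}")
    ```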

  2. Effects of radiative heat transfer on the turbulence structure in inert and reacting mixing layers

    SciTech Connect (OSTI)

    Ghosh, Somnath; Friedrich, Rainer

    2015-05-15

    We use large-eddy simulation to study the interaction between turbulence and radiative heat transfer in low-speed inert and reacting plane temporal mixing layers. An explicit filtering scheme based on approximate deconvolution is applied to treat the closure problem arising from quadratic nonlinearities of the filtered transport equations. In the reacting case, the working fluid is a mixture of ideal gases where the low-speed stream consists of hydrogen and nitrogen and the high-speed stream consists of oxygen and nitrogen. Both streams are premixed in a way that the free-stream densities are the same and the stoichiometric mixture fraction is 0.3. The filtered heat release term is modelled using equilibrium chemistry. In the inert case, the low-speed stream consists of nitrogen at a temperature of 1000 K and the high-speed stream is pure water vapour at 2000 K, when radiation is turned off. Simulations assuming the gas mixtures as gray gases with artificially increased Planck mean absorption coefficients are performed in which the large-eddy simulation code and the radiation code PRISSMA are fully coupled. In both cases, radiative heat transfer is found to clearly affect fluctuations of thermodynamic variables, Reynolds stresses, and Reynolds stress budget terms like pressure-strain correlations. Source terms in the transport equation for the variance of temperature are used to explain the decrease of this variance in the reacting case and its increase in the inert case.

  3. Early-warning process/control for anaerobic digestion and biological nitrogen transformation processes: Batch, semi-continuous, and/or chemostat experiments. Final report

    SciTech Connect (OSTI)

    Hickey, R.

    1992-09-01

    The objective of this project was to develop and test an early-warning/process control model for anaerobic sludge digestion (AD). The approach was to use batch and semi-continuously fed systems and to assemble system parameter data on a real-time basis. Specific goals were to produce a real-time early warning control model and computer code, tested for internal and external validity; to determine the minimum rate of data collection for maximum lag time to predict failure with a prescribed accuracy and confidence in the prediction; and to determine and characterize any trends in the real-time data collected in response to particular perturbations to feedstock quality. Trends in the response of the trace gases carbon monoxide and hydrogen in batch experiments were found to depend on toxicant type. For example, these trace gases respond differently for organic substances vs. heavy metals. In both batch and semi-continuously fed experiments, increased organic loading led to proportionate increases in gas production rates as well as increases in CO and H{sub 2} concentration. An analysis of variance of gas parameters confirmed that CO was the most sensitive indicator variable by virtue of its relatively larger variance compared to the others. The other parameters evaluated included gas production, methane production, and hydrogen, carbon monoxide, carbon dioxide, and methane concentrations. In addition, a relationship was hypothesized between gaseous CO concentration and acetate concentrations in the digester. The data from semi-continuous feed experiments were supportive.

  4. Comfort and HVAC Performance for a New Construction Occupied Test House in Roseville, California

    SciTech Connect (OSTI)

    Burdick, A.

    2013-10-01

    K. Hovnanian(R) Homes(R) constructed a 2,253-ft2 single-story slab-on-grade ranch house for an occupied test house (new construction) in Roseville, California. One year of monitoring and analysis focused on the effectiveness of the space conditioning system at maintaining acceptable temperature and relative humidity levels in several rooms of the home, as well as room-to-room differences and the actual measured energy consumption by the space conditioning system. In this home, the air handler unit (AHU) and ducts were relocated to inside the thermal boundary. The AHU was relocated from the attic to a mechanical closet, and the ductwork was located inside an insulated and air-sealed bulkhead in the attic. To describe the performance and comfort in the home, the research team selected representative design days and extreme days from the annual data for analysis. To ensure that temperature differences were within reasonable occupant expectations, the team followed Air Conditioning Contractors of America guidance. At the end of the monitoring period, the occupant of the home had no comfort complaints in the home. Any variance between the modeled heating and cooling energy and the actual amounts used can be attributed to the variance in temperatures at the thermostat versus the modeled inputs.

  5. Comfort and HVAC Performance for a New Construction Occupied Test House in Roseville, California

    SciTech Connect (OSTI)

    Burdick, A.

    2013-10-01

    K. Hovnanian Homes constructed a 2,253-ft2 single-story slab-on-grade ranch house for an occupied test house (new construction) in Roseville, California. One year of monitoring and analysis focused on the effectiveness of the space conditioning system at maintaining acceptable temperature and relative humidity levels in several rooms of the home, as well as room-to-room differences and the actual measured energy consumption by the space conditioning system. In this home, the air handler unit (AHU) and ducts were relocated to inside the thermal boundary. The AHU was relocated from the attic to a mechanical closet, and the ductwork was located inside an insulated and air-sealed bulkhead in the attic. To describe the performance and comfort in the home, the research team selected representative design days and extreme days from the annual data for analysis. To ensure that temperature differences were within reasonable occupant expectations, the team followed Air Conditioning Contractors of America guidance. At the end of the monitoring period, the occupant of the home had no comfort complaints in the home. Any variance between the modeled heating and cooling energy and the actual amounts used can be attributed to the variance in temperatures at the thermostat versus the modeled inputs.

  6. Verification of theoretically computed spectra for a point rotating in a vertical plane

    SciTech Connect (OSTI)

    Powell, D.C.; Connell, J.R.; George, R.L.

    1985-03-01

    A theoretical model is modified and tested that produces the power spectrum of the alongwind component of turbulence as experienced by a point rotating in a vertical plane perpendicular to the mean wind direction. The ability to generate such a power spectrum, independent of measurement, is important in wind turbine design and testing. The radius of the circle of rotation, its height above the ground, and the rate of rotation are typical of those for a MOD-OA wind turbine. Verification of this model is attempted by comparing two sets of variances that correspond to individual harmonic bands of spectra of turbulence in the rotational frame. One set of variances is calculated by integrating the theoretically generated rotational spectra; the other is calculated by integrating rotational spectra from real data analysis. The theoretical spectrum is generated by Fourier transformation of an autocorrelation function taken from von Karman and modified for the rotational frame. The autocorrelation is based on dimensionless parameters, each of which incorporates both atmospheric and wind turbine parameters. The real data time series are formed by sampling around the circle of anemometers of the Vertical Plane Array at the former MOD-OA site at Clayton, New Mexico.

  7. Fractional frequency instability in the 10{sup -14} range with a thermal beam optical frequency reference

    SciTech Connect (OSTI)

    McFerran, John J.; Luiten, Andre N. [School of Physics, University of Western Australia, 35 Stirling Highway, Crawley 6009, W.A. (Australia)

    2010-02-15

    We demonstrate a means of increasing the signal-to-noise ratio in a Ramsey-Bordé interferometer with spatially separated oscillatory fields on a thermal atomic beam. The {sup 1}S{sub 0} ↔ {sup 3}P{sub 1} intercombination line in neutral {sup 40}Ca is used as a frequency discriminator, with an extended cavity diode laser at 423 nm probing the ground state population after a Ramsey-Bordé sequence of 657 nm light-field interactions with the atoms. Evaluation of the instability of the Ca frequency reference is carried out by comparison with (i) a hydrogen maser and (ii) a cryogenic sapphire oscillator. In the latter case the Ca reference exhibits a square-root {Lambda} variance of 9.2x10{sup -14} at 1 s and 2.0x10{sup -14} at 64 s. This is an order-of-magnitude improvement for optical beam frequency references, to our knowledge. The shot noise of the readout fluorescence produces a limiting square-root {Lambda} variance of 7x10{sup -14}/{radical}({tau}), highlighting the potential for improvement. This work demonstrates the feasibility of a portable frequency reference in the optical domain with 10{sup -14} range frequency instability.
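
    The square-root {Lambda} variance quoted here is a specific statistic for characterizing frequency instability versus averaging time; as a flavor of how such quantities are computed, the sketch below evaluates a plain (non-overlapping) Allan deviation of simulated fractional-frequency data, which likewise averages down as 1/√τ for white frequency noise. It is not the {Lambda}-variance estimator used in the paper.

    ```python
    import numpy as np

    def allan_deviation(y, m):
        """Non-overlapping Allan deviation of fractional frequency data y
        for averaging factor m (tau = m * tau0)."""
        n_blocks = len(y) // m
        block_means = np.asarray(y[:n_blocks * m]).reshape(n_blocks, m).mean(axis=1)
        diffs = np.diff(block_means)
        return np.sqrt(0.5 * np.mean(diffs**2))

    # White frequency noise averages down as 1/sqrt(tau), as in the quoted shot-noise limit.
    y = np.random.default_rng(2).normal(scale=7e-14, size=65536)
    for m in (1, 4, 16, 64):
        print(m, allan_deviation(y, m))
    ```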

  8. Method and system for turbomachinery surge detection

    DOE Patents [OSTI]

    Faymon, David K.; Mays, Darrell C.; Xiong, Yufei

    2004-11-23

    A method and system for surge detection within a gas turbine engine comprises: measuring the compressor discharge pressure (CDP) of the gas turbine over a period of time; determining a time derivative (CDP.sub.D) of the measured CDP; correcting the CDP.sub.D for altitude (CDP.sub.DCOR); estimating a short-term average of CDP.sub.DCOR.sup.2; estimating a short-term average of CDP.sub.DCOR; and determining a short-term variance of corrected CDP rate of change (CDP.sub.roc) based upon the short-term average of CDP.sub.DCOR and the short-term average of CDP.sub.DCOR.sup.2. The method and system then compares the short-term variance of corrected CDP rate of change with a pre-determined threshold (CDP.sub.proc) and signals an output when CDP.sub.roc >CDP.sub.proc. The method and system provides a signal of a surge within the gas turbine engine when CDP.sub.roc remains>CDP.sub.proc for a pre-determined period of time.
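
    The running-variance test described in the claim can be sketched as follows. The window length, threshold, and hold time below are hypothetical placeholders, not figures from the patent, and the altitude correction is assumed to have been applied upstream.

    ```python
    import numpy as np
    from collections import deque

    class SurgeDetector:
        """Flag a surge when the short-term variance of the corrected CDP rate of
        change stays above a threshold for a minimum number of samples."""

        def __init__(self, window=20, threshold=1.0e4, hold_samples=5):
            self.rates = deque(maxlen=window)   # recent corrected CDP derivatives
            self.threshold = threshold
            self.hold_samples = hold_samples
            self._exceed_count = 0

        def update(self, cdp_rate_corrected):
            self.rates.append(cdp_rate_corrected)
            if len(self.rates) < self.rates.maxlen:
                return False                    # not enough history yet
            x = np.asarray(self.rates)
            # Short-term variance from running averages of x and x**2.
            variance = np.mean(x**2) - np.mean(x) ** 2
            self._exceed_count = self._exceed_count + 1 if variance > self.threshold else 0
            return self._exceed_count >= self.hold_samples

    # Usage: feed each new corrected CDP derivative sample to detector.update(value).
    ```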

  9. ACCELERATING FUSION REACTOR NEUTRONICS MODELING BY AUTOMATIC COUPLING OF HYBRID MONTE CARLO/DETERMINISTIC TRANSPORT ON CAD GEOMETRY

    SciTech Connect (OSTI)

    Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E

    2015-01-01

    Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid (MC)/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).

  10. Time-variability of NO{sub x} emissions from Portland cement kilns

    SciTech Connect (OSTI)

    Walters, L.J. Jr.; May, M.S. III [PSM International, Dallas, TX (United States)] [PSM International, Dallas, TX (United States); Johnson, D.E. [Kansas State Univ., Manhattan, KS (United States). Dept. of Statistics] [Kansas State Univ., Manhattan, KS (United States). Dept. of Statistics; MacMann, R.S. [Penta Engineering, St. Louis, MO (United States)] [Penta Engineering, St. Louis, MO (United States); Woodward, W.A. [Southern Methodist Univ., Dallas, TX (United States). Dept. of Statistics] [Southern Methodist Univ., Dallas, TX (United States). Dept. of Statistics

    1999-03-01

    Due to the presence of autocorrelation between sequentially measured nitrogen oxide (NO{sub x}) concentrations in stack gas from portland cement kilns, average emission rates and the uncertainty of those averages have been improperly calculated by the industry and regulatory agencies. Documentation of permit compliance, establishment of permit levels, and the development and testing of control techniques for reducing NO{sub x} emissions at specific cement plants require accurate and precise statistical estimates of parameters such as means, standard deviations, and variances. Usual statistical formulas, such as that for the variance of the sample mean, only apply if sequential measurements of NO{sub x} emissions are independent. Significant autocorrelation of NO{sub x} emission measurements revealed that NO{sub x} concentration values measured by continuous emission monitors are not independent but can be represented by an autoregressive, moving average time series. Three orders of time-variability of NO{sub x} emission rates were determined from examination of continuous emission measurements from several cement kilns.
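
    The autocorrelation point can be made concrete with the standard variance-of-the-mean correction for an AR(1)-like series: the naive s²/n underestimates the uncertainty of the average when successive monitor readings are positively correlated. A minimal sketch, assuming equally spaced measurements and using only the lag-1 autocorrelation:

    ```python
    import numpy as np

    def variance_of_mean_ar1(x):
        """Variance of the sample mean allowing for lag-1 autocorrelation.

        For an AR(1) process with lag-1 autocorrelation rho, the effective number
        of independent observations is roughly n * (1 - rho) / (1 + rho).
        """
        x = np.asarray(x, dtype=float)
        n = len(x)
        rho = np.corrcoef(x[:-1], x[1:])[0, 1]
        naive = x.var(ddof=1) / n
        n_eff = n * (1 - rho) / (1 + rho)
        return naive, x.var(ddof=1) / n_eff

    # Strongly autocorrelated synthetic "emissions" series: the corrected variance
    # of the mean is several times the naive value.
    rng = np.random.default_rng(3)
    x = np.zeros(2000)
    for i in range(1, len(x)):
        x[i] = 0.8 * x[i - 1] + rng.normal()
    print(variance_of_mean_ar1(x))
    ```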

  11. Sources of Technical Variability in Quantitative LC-MS Proteomics: Human Brain Tissue Sample Analysis.

    SciTech Connect (OSTI)

    Piehowski, Paul D.; Petyuk, Vladislav A.; Orton, Daniel J.; Xie, Fang; Moore, Ronald J.; Ramirez Restrepo, Manuel; Engel, Anzhelika; Lieberman, Andrew P.; Albin, Roger L.; Camp, David G.; Smith, Richard D.; Myers, Amanda J.

    2013-05-03

    To design a robust quantitative proteomics study, an understanding of both the inherent heterogeneity of the biological samples being studied as well as the technical variability of the proteomics methods and platform is needed. Additionally, accurately identifying the technical steps associated with the largest variability would provide valuable information for the improvement and design of future processing pipelines. We present an experimental strategy that allows for a detailed examination of the variability of the quantitative LC-MS proteomics measurements. By replicating analyses at different stages of processing, various technical components can be estimated and their individual contribution to technical variability can be dissected. This design can be easily adapted to other quantitative proteomics pipelines. Herein, we applied this methodology to our label-free workflow for the processing of human brain tissue. For this application, the pipeline was divided into four critical components: Tissue dissection and homogenization (extraction), protein denaturation followed by trypsin digestion and SPE clean-up (digestion), short-term run-to-run instrumental response fluctuation (instrumental variance), and long-term drift of the quantitative response of the LC-MS/MS platform over the 2 week period of continuous analysis (instrumental stability). From this analysis, we found the following contributions to variability: extraction (72%) >> instrumental variance (16%) > instrumental stability (8.4%) > digestion (3.1%). Furthermore, the stability of the platform and its suitability for discovery proteomics studies is demonstrated.

  12. Numerical solution of the Stratonovich- and Itô-Euler equations: Application to the stochastic piston problem

    SciTech Connect (OSTI)

    Zhang, Zhongqiang; Yang, Xiu; Lin, Guang; Karniadakis, George Em

    2013-03-01

    We consider a piston with a velocity perturbed by Brownian motion moving into a straight tube filled with a perfect gas at rest. The shock generated ahead of the piston can be located by solving the one-dimensional Euler equations driven by white noise using the Stratonovich or Itô formulations. We approximate the Brownian motion with its spectral truncation and subsequently apply stochastic collocation using either sparse grid or the quasi-Monte Carlo (QMC) method. In particular, we first transform the Euler equations with an unsteady stochastic boundary into stochastic Euler equations over a fixed domain with a time-dependent stochastic source term. We then solve the transformed equations by splitting them up into two parts, i.e., a deterministic part and a stochastic part. Numerical results verify the Stratonovich-Euler and Itô-Euler models against stochastic perturbation results, and demonstrate the efficiency of sparse grid and QMC for small and large random piston motions, respectively. The variance of shock location of the piston grows cubically in the case of white noise in contrast to colored noise reported in [1], where the variance of shock location grows quadratically with time for short times and linearly for longer times.

  13. A model system for QTL analysis: Effects of alcohol dehydrogenase genotype on alcohol pharmacokinetics

    SciTech Connect (OSTI)

    Martin, N.G.; Nightingale, B.; Whitfield, J.B.

    1994-09-01

    There is much interest in the detection of quantitative trait loci (QTL) - major genes which affect quantitative phenotypes. The relationship of polymorphism at known alcohol metabolizing enzyme loci to alcohol pharmacokinetics is a good model system. The three class I alcohol dehydrogenase genes are clustered on chromosome 4 and protein electrophoresis has revealed polymorphisms at the ADH2 and ADH3 loci. While different activities of the isozymes have been demonstrated in vitro, little work has been done in trying to relate ADH polymorphism to variation in ethanol metabolism in vivo. We previously measured ethanol metabolism and psychomotor reactivity in 206 twin pairs and demonstrated that most of the repeatable variation was genetic. We have now recontacted the twins to obtain DNA samples and used PCR with allele specific primers to type the ADH2 and ADH3 polymorphisms in 337 individual twins. FISHER has been used to estimate fixed effects of typed polymorphisms simultaneously with remaining linked and unlinked genetic variance. The ADH2*1-2 genotypes metabolize ethanol faster and attain a lower peak blood alcohol concentration than the more common ADH2*1-1 genotypes, although less than 3% of the variance is accounted for. There is no effect of ADH3 genotype. However, sib-pair linkage analysis suggests that there is a linked polymorphism which has a much greater effect on alcohol metabolism that those typed here.

  14. URBAN WOOD/COAL CO-FIRING IN THE BELLEFIELD BOILERPLANT

    SciTech Connect (OSTI)

    James T. Cobb, Jr.; Gene E. Geiger; William W. Elder III; William P. Barry; Jun Wang; Hongming Li

    2001-08-21

    During the third quarter, important preparatory work was continued so that the experimental activities can begin early in the fourth quarter. Authorization was awaited in response to the letter that was submitted to the Allegheny County Health Department (ACHD) seeking an R&D variance for the air permit at the Bellefield Boiler Plant (BBP). Verbal authorizations were received from the Pennsylvania Department of Environmental Protection (PADEP) for R&D variances for solid waste permits at the J. A. Rutter Company (JARC), and Emery Tree Service (ETS). Construction wood was acquired from Thompson Properties and Seven D Corporation. Forty tons of pallet and construction wood were ground to produce BioGrind Wood Chips at JARC and delivered to Mon Valley Transportation Company (MVTC). Five tons of construction wood were milled at ETS and half of the product delivered to MVTC. Discussions were held with BBP and Energy Systems Associates (ESA) about the test program. Material and energy balances on Boiler No.1 and a plan for data collection were prepared. Presentations describing the University of Pittsburgh Wood/Coal Co-Firing Program were provided to the Pittsburgh Chapter of the Pennsylvania Society of Professional Engineers, and the Upgraded Coal Interest Group and the Biomass Interest Group (BIG) of the Electric Power Research Institute (EPRI). An article describing the program appeared in the Pittsburgh Post-Gazette. An application was submitted for authorization for a Pennsylvania Switchgrass Energy and Conservation Program.

  15. Optimal Solar PV Arrays Integration for Distributed Generation

    SciTech Connect (OSTI)

    Omitaomu, Olufemi A; Li, Xueping

    2012-01-01

    Solar photovoltaic (PV) systems hold great potential for distributed energy generation by installing PV panels on rooftops of residential and commercial buildings. Yet challenges arise from the variability and non-dispatchability of PV systems, which affect the stability of the grid and the economics of the PV system. This paper investigates the integration of PV arrays for distributed generation applications by identifying a combination of buildings that will maximize solar energy output and minimize system variability. Particularly, we propose mean-variance optimization models to choose suitable rooftops for PV integration based on the Markowitz mean-variance portfolio selection model. We further introduce quantity and cardinality constraints, resulting in a mixed-integer quadratic programming problem. Case studies based on real data are presented. An efficient frontier is obtained for sample data that allows decision makers to choose a desired solar energy generation level with a comfortable variability tolerance level. Sensitivity analysis is conducted to show the tradeoffs between solar PV energy generation potential and variability.
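
    A stripped-down version of the rooftop-selection idea, without the quantity and cardinality constraints, is the classic mean-variance trade-off over candidate sites. The sketch below uses the closed-form budget-constrained solution on simulated per-building output; building data and the risk-aversion value are hypothetical, and the paper's full model would instead require a MIQP solver.

    ```python
    import numpy as np

    def mean_variance_weights(mu, cov, risk_aversion=1.0):
        """Closed-form mean-variance weights: minimize w'Σw/2 - λ μ'w subject to sum(w) = 1.

        Negative weights are allowed in this simplified version; quantity and
        cardinality constraints would turn this into a mixed-integer problem.
        """
        inv = np.linalg.inv(cov)
        ones = np.ones(len(mu))
        gamma = (1.0 - risk_aversion * ones @ inv @ mu) / (ones @ inv @ ones)
        return inv @ (risk_aversion * mu + gamma * ones)

    # Hypothetical hourly PV output (kW) for three candidate rooftops.
    rng = np.random.default_rng(4)
    output = rng.normal(loc=[5.0, 4.0, 4.5], scale=[1.5, 0.5, 1.0], size=(500, 3))
    mu, cov = output.mean(axis=0), np.cov(output, rowvar=False)
    w = mean_variance_weights(mu, cov, risk_aversion=0.5)
    print("allocation:", np.round(w, 2), "expected output:", round(mu @ w, 2))
    ```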

  16. Climate Change Projections of the North American Regional Climate Change Assessment Program (NARCCAP)

    SciTech Connect (OSTI)

    Mearns, L. O.; Sain, Steve; Leung, Lai-Yung R.; Bukovsky, M. S.; McGinnis, Seth; Biner, S.; Caya, Daniel; Arritt, R.; Gutowski, William; Takle, Eugene S.; Snyder, Mark A.; Jones, Richard; Nunes, A M B.; Tucker, S.; Herzmann, D.; McDaniel, Larry; Sloan, Lisa

    2013-10-01

    We investigate major results of the NARCCAP multiple regional climate model (RCM) experiments driven by multiple global climate models (GCMs) regarding climate change for seasonal temperature and precipitation over North America. We focus on two major questions: How do the RCM simulated climate changes differ from those of the parent GCMs and thus affect our perception of climate change over North America, and how important are the relative contributions of RCMs and GCMs to the uncertainty (variance explained) for different seasons and variables? The RCMs tend to produce stronger climate changes for precipitation: larger increases in the northern part of the domain in winter and greater decreases across a swath of the central part in summer, compared to the four GCMs driving the regional models as well as to the full set of CMIP3 GCM results. We pose some possible process-level mechanisms for the difference in intensity of change, particularly for summer. Detailed process-level studies will be necessary to establish mechanisms and credibility of these results. The GCMs explain more variance for winter temperature and the RCMs for summer temperature. The same is true for precipitation patterns. Thus, we recommend that future RCM-GCM experiments over this region include a balanced number of GCMs and RCMs.

  17. Inflationary power asymmetry from primordial domain walls

    SciTech Connect (OSTI)

    Jazayeri, Sadra; Akrami, Yashar; Firouzjahi, Hassan; Solomon, Adam R.; Wang, Yi E-mail: yashar.akrami@astro.uio.no E-mail: a.r.solomon@damtp.cam.ac.uk

    2014-11-01

    We study the asymmetric primordial fluctuations in a model of inflation in which translational invariance is broken by a domain wall. We calculate the corrections to the power spectrum of curvature perturbations; they are anisotropic and contain dipole, quadrupole, and higher multipoles with non-trivial scale-dependent amplitudes. Inspired by observations of these multipole asymmetries in terms of two-point correlations and variance in real space, we demonstrate that this model can explain the observed anomalous power asymmetry of the cosmic microwave background (CMB) sky, including its characteristic feature that the dipole dominates over higher multipoles. We test the viability of the model and place approximate constraints on its parameters by using observational values of dipole, quadrupole, and octopole amplitudes of the asymmetry measured by a local-variance estimator. We find that a configuration of the model in which the CMB sphere does not intersect the domain wall during inflation provides a good fit to the data. We further derive analytic expressions for the corrections to the CMB temperature covariance matrix, or angular power spectra, which can be used in future statistical analysis of the model in spherical harmonic space.

  18. Fingerprints of anomalous primordial Universe on the abundance of large scale structures

    SciTech Connect (OSTI)

    Baghram, Shant; Abolhasani, Ali Akbar; Firouzjahi, Hassan; Namjoo, Mohammad Hossein E-mail: abolhasani@ipm.ir E-mail: MohammadHossein.Namjoo@utdallas.edu

    2014-12-01

    We study the predictions of anomalous inflationary models on the abundance of structures in large scale structure observations. The anomalous features encoded in the primordial curvature perturbation power spectrum are (a) a localized feature in momentum space, (b) hemispherical asymmetry, and (c) statistical anisotropies. We present a model-independent expression relating the number density of structures to the changes in the matter density variance. Models with a localized feature can alleviate the tension between observations and numerical simulations of cold dark matter structures on galactic scales as a possible solution to the missing satellite problem. In models with hemispherical asymmetry we show that the abundance of structures becomes asymmetric depending on the direction of observation on the sky. In addition, we study the effects of scale-dependent dipole amplitude on the abundance of structures. Using the quasar data and adopting the power-law scaling k{sup n{sub A}-1} for the amplitude of the dipole we find the upper bound n{sub A}<0.6 for the spectral index of the dipole asymmetry. In all cases there is a critical mass scale M{sub c} such that for M < M{sub c} (M > M{sub c}) the enhancement in variance induced by the anomalous feature decreases (increases) the abundance of dark matter structures in the Universe.

  19. Characterization and estimation of permeability correlation structure from performance data

    SciTech Connect (OSTI)

    Ershaghi, I.; Al-Qahtani, M.

    1997-08-01

    In this study, the influence of permeability structure and correlation length on the system effective permeability and recovery factors of 2-D cross-sectional reservoir models, under waterflood, is investigated. Reservoirs with identical statistical representation of permeability attributes are shown to exhibit different system effective permeability and production characteristics which can be expressed by a mean and variance. The mean and variance are shown to be significantly influenced by the correlation length. Detailed quantification of the influence of horizontal and vertical correlation lengths for different permeability distributions is presented. The effect of capillary pressure, P{sub c}, on the production characteristics and saturation profiles at different correlation lengths is also investigated. It is observed that neglecting P{sub c} causes considerable error at large horizontal and short vertical correlation lengths. The effect of using constant as opposed to variable relative permeability attributes is also investigated at different correlation lengths. Next we studied the influence of correlation anisotropy in 2-D reservoir models. For a reservoir under a five-spot waterflood pattern, it is shown that the ratios of breakthrough times and recovery factors of the wells in each direction of correlation are greatly influenced by the degree of anisotropy. In fully developed fields, performance data can aid in the recognition of reservoir anisotropy. Finally, a procedure for estimating the spatial correlation length from performance data is presented. Both the production performance data and the system's effective permeability are required in estimating the correlation length.

  20. Exploiting Genetic Variation of Fiber Components and Morphology in Juvenile Loblolly Pine

    SciTech Connect (OSTI)

    Chang, Hou-Min; Kadia, John F.; Li, Bailian; Sederoff, Ron

    2005-06-30

    In order to ensure the global competitiveness of the Pulp and Paper Industry in the Southeastern U.S., more wood with targeted characteristics has to be produced more efficiently on less land. The objective of the research project is to provide a molecular genetic basis for tree breeding of desirable traits in juvenile loblolly pine, using a multidisciplinary research approach. We developed micro analytical methods for determining the cellulose and lignin content, average fiber length, and coarseness of a single ring in a 12 mm increment core. These methods allow rapid determination of these traits at the micro scale. Genetic variation and genotype by environment interaction (GxE) were studied in several juvenile wood traits of loblolly pine (Pinus taeda L.). Over 1000 wood samples of 12 mm increment cores were collected from 14 full-sib families generated by a 6-parent half-diallel mating design (11-year-old) in four progeny tests. Juvenile (ring 3) and transition (ring 8) wood for each increment core were analyzed for cellulose and lignin content, average fiber length, and coarseness. Transition wood had higher cellulose content, longer fiber and higher coarseness, but lower lignin than juvenile wood. General combining ability variance for the traits in juvenile wood explained 3 to 10% of the total variance, whereas the specific combining ability variance was negligible or zero. There were noticeable full-sib family rank changes between sites for all the traits. This was reflected in very high specific combining ability by site interaction variances, which explained from 5% (fiber length) to 37% (lignin) of the total variance. Weak individual-tree heritabilities were found for cellulose, lignin content and fiber length at the juvenile and transition wood, except for lignin at the transition wood (0.23). Coarseness had moderately high individual-tree heritabilities at both the juvenile (0.39) and transition wood (0.30). Favorable genetic correlations of volume and stem straightness were found with cellulose content, fiber length and coarseness, suggesting that selection on growth or stem straightness would result in a favorable response in chemical wood traits. We have developed a series of methods for application of functional genomics to understanding the molecular basis of traits important to tree breeding for improved chemical and physical properties of wood. Two types of technologies were used: microarray analysis of gene expression, and profiling of soluble metabolites from wood forming tissues. We were able to correlate wood property phenotypes with expression of specific genes and with the abundance of specific metabolites using a new database and appropriate statistical tools. These results implicate a series of candidate genes for cellulose content, lignin content, hemicellulose content and specific extractible metabolites. Future work should integrate such studies in mapping populations and genetic maps to make more precise associations of traits with gene locations in order to increase the predictive power of molecular markers, and to distinguish between different candidate genes associated by linkage or by function. This study has found that loblolly pine families differed significantly for cellulose yield, fiber length, fiber coarseness, and less so for lignin content. The implication for the forest industry is that genetic testing and selection for these traits is possible and practical.
With sufficient genetic variation, we could improve cellulose yield, fiber length, fiber coarseness, and reduce lignin content in Loblolly pine. With the continued progress in molecular research, some candidate genes may be used for selecting cellulose content, lignin content, hemicellulose content and specific extractible metabolites. This would accelerate current breeding and testing program significantly, and produce pine plantations with not only high productivity, but desirable wood properties as well.

  1. Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG

    SciTech Connect (OSTI)

    Risner, Joel M; Johnson, Seth R; Remec, Igor; Bekar, Kursat B

    2015-01-01

    Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep penetration shielding and an unshielded beamline, and a desire to obtain well-converged nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time, and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. We also tested importance function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with the support for parallel Denovo calculations using the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.

  2. Real-Time Active Cosmic Neutron Background Reduction Methods

    SciTech Connect (OSTI)

    Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ronald; Mitchell, Stephen; Guss, Paul

    2013-09-01

    Neutron counting using large arrays of pressurized 3He proportional counters from an aerial system or in a maritime environment suffers from the background counts from the primary cosmic neutrons and secondary neutrons caused by cosmic ray-induced mechanisms like spallation and charge-exchange reactions. This paper reports the work performed at the Remote Sensing Laboratory-Andrews (RSL-A) and results obtained when using two different methods to reduce the cosmic neutron background in real time. Both methods used shielding materials with a high concentration (up to 30% by weight) of neutron-absorbing materials, such as natural boron, to remove the low-energy neutron flux from the cosmic background as the first step of the background reduction process. Our first method was to design, prototype, and test an up-looking plastic scintillator (BC-400, manufactured by Saint Gobain Corporation) to tag the cosmic neutrons and then create a logic pulse of a fixed time duration (~120 μs) to block the data taken by the neutron counter (pressurized 3He tubes running in a proportional counter mode). The second method examined the time correlation between the arrival of two successive neutron signals to the counting array and calculated the excess of variance (Feynman variance Y2F){sup 1} in the neutron count distribution from the Poisson distribution. The dilution of this variance from cosmic background values ideally would signal the presence of man-made neutrons.{sup 2} The first method has been technically successful in tagging the neutrons in the cosmic-ray flux and preventing them from being counted in the 3He tube array by electronic veto; field measurement work shows the efficiency of the electronic veto counter to be about 87%. The second method has successfully derived an empirical relationship between the percentile non-cosmic component in a neutron flux and the Y2F of the measured neutron count distribution. By using shielding materials alone, approximately 55% of the neutron flux from man-made sources like 252Cf or Am-Be was removed.
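
    The excess-of-variance statistic mentioned here compares the variance of counts in a fixed gate to the Poisson expectation. A minimal Feynman-Y style calculation from a list of neutron arrival times is sketched below; the gate width and the synthetic Poisson train are illustrative only and are not tied to the RSL-A hardware.

    ```python
    import numpy as np

    def feynman_y(arrival_times, gate_width):
        """Excess of variance over the Poisson value for counts in fixed gates:
        Y = var(c)/mean(c) - 1, which is zero for a pure Poisson (uncorrelated) source."""
        arrival_times = np.asarray(arrival_times)
        edges = np.arange(0.0, arrival_times.max() + gate_width, gate_width)
        counts, _ = np.histogram(arrival_times, bins=edges)
        return counts.var(ddof=1) / counts.mean() - 1.0

    # A Poisson background gives Y near 0; correlated (e.g., fission-chain) arrivals push Y above 0.
    rng = np.random.default_rng(5)
    background = np.cumsum(rng.exponential(scale=1e-3, size=20000))  # ~1 kHz Poisson train
    print(feynman_y(background, gate_width=0.01))
    ```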

  3. A Sensitivity Study of Radiative Fluxes at the Top of Atmosphere to Cloud-Microphysics and Aerosol Parameters in the Community Atmosphere Model CAM5

    SciTech Connect (OSTI)

    Zhao, Chun; Liu, Xiaohong; Qian, Yun; Yoon, Jin-Ho; Hou, Zhangshuan; Lin, Guang; McFarlane, Sally A.; Wang, Hailong; Yang, Ben; Ma, Po-Lun; Yan, Huiping; Bao, Jie

    2013-11-08

    In this study, we investigated the sensitivity of net radiative fluxes (FNET) at the top of atmosphere (TOA) to 16 selected uncertain parameters mainly related to the cloud microphysics and aerosol schemes in the Community Atmosphere Model version 5 (CAM5). We adopted a quasi-Monte Carlo (QMC) sampling approach to effectively explore the high dimensional parameter space. The output response variables (e.g., FNET) were simulated using CAM5 for each parameter set, and then evaluated using generalized linear model analysis. In response to the perturbations of these 16 parameters, the CAM5-simulated global annual mean FNET ranges from -9.8 to 3.5 W m-2 compared to the CAM5-simulated FNET of 1.9 W m-2 with the default parameter values. Variance-based sensitivity analysis was conducted to show the relative contributions of individual parameter perturbation to the global FNET variance. The results indicate that the changes in the global mean FNET are dominated by those of cloud forcing (CF) within the parameter ranges being investigated. The size threshold parameter related to auto-conversion of cloud ice to snow is confirmed as one of the most influential parameters for FNET in the CAM5 simulation. The strong heterogeneous geographic distribution of FNET variation shows parameters have a clear localized effect over regions where they are acting. However, some parameters also have non-local impacts on FNET variance. Although external factors, such as perturbations of anthropogenic and natural emissions, largely affect FNET variations at the regional scale, their impact is weaker than that of model internal parameters in terms of simulating global mean FNET in this study. The interactions among the 16 selected parameters contribute a relatively small portion of the total FNET variations over most regions of the globe. This study helps us better understand the CAM5 model behavior associated with parameter uncertainties, which will aid the next step of reducing model uncertainty via calibration of uncertain model parameters with the largest sensitivity.
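
    Variance-based sensitivity of the kind described (the relative contribution of each parameter to the FNET variance) can be illustrated with first-order Sobol'-style indices estimated by conditional-mean binning of sampled runs. The toy response below stands in for the CAM5 output; nothing in it reproduces the actual parameters or the generalized linear model analysis used in the paper.

    ```python
    import numpy as np

    def first_order_indices(params, response, n_bins=20):
        """Approximate first-order variance-based sensitivity indices by binning:
        S_i = Var[E(Y | X_i)] / Var(Y), estimated from existing samples."""
        response = np.asarray(response, dtype=float)
        total_var = response.var()
        indices = []
        for x in np.asarray(params, dtype=float).T:
            bins = np.quantile(x, np.linspace(0, 1, n_bins + 1))
            which = np.clip(np.digitize(x, bins[1:-1]), 0, n_bins - 1)
            cond_means = np.array([response[which == b].mean() for b in range(n_bins)])
            weights = np.array([(which == b).mean() for b in range(n_bins)])
            indices.append(np.sum(weights * (cond_means - response.mean())**2) / total_var)
        return indices

    # Toy model: Y depends strongly on x0, weakly on x1, not at all on x2.
    rng = np.random.default_rng(6)
    X = rng.uniform(size=(4000, 3))
    Y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=4000)
    print([round(s, 2) for s in first_order_indices(X, Y)])
    ```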

  4. Scaling impacts on environmental controls and spatial heterogeneity of soil organic carbon stocks

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mishra, U.; Riley, W. J.

    2015-01-27

    The spatial heterogeneity of land surfaces affects energy, moisture, and greenhouse gas exchanges with the atmosphere. However, representing heterogeneity of terrestrial hydrological and biogeochemical processes in earth system models (ESMs) remains a critical scientific challenge. We report the impact of spatial scaling on environmental controls, spatial structure, and statistical properties of soil organic carbon (SOC) stocks across the US state of Alaska. We used soil profile observations and environmental factors such as topography, climate, land cover types, and surficial geology to predict the SOC stocks at a 50 m spatial scale. These spatially heterogeneous estimates provide a dataset with reasonable fidelity to the observations at a sufficiently high resolution to examine the environmental controls on the spatial structure of SOC stocks. We upscaled both the predicted SOC stocks and environmental variables from finer to coarser spatial scales (s = 100, 200, 500 m, 1, 2, 5, 10 km) and generated various statistical properties of SOC stock estimates. We found different environmental factors to be statistically significant predictors at different spatial scales. Only elevation, temperature, potential evapotranspiration, and scrub land cover types were significant predictors at all scales. The strengths of control (the median value of geographically weighted regression coefficients) of these four environmental variables on SOC stocks decreased with increasing scale and were accurately represented using mathematical functions (R2 = 0.83–0.97). The spatial structure of SOC stocks across Alaska changed with spatial scale. Although the variance (sill) and unstructured variability (nugget) of the calculated variograms of SOC stocks decreased exponentially with scale, the correlation length (range) remained relatively constant across scale. The variance of predicted SOC stocks decreased with spatial scale over the range of 50 to ~ 500 m, and remained constant beyond this scale. The fitted exponential function accounted for 98% of variability in the variance of SOC stocks. We found moderately accurate linear relationships between mean and higher-order moments of predicted SOC stocks (R2 ~ 0.55–0.63). Current ESMs operate at coarse spatial scales (50–100 km), and are therefore unable to represent environmental controllers and spatial heterogeneity of high-latitude SOC stocks consistent with observations. We conclude that improved understanding of the scaling behavior of environmental controls and statistical properties of SOC stocks can improve ESM land model benchmarking and perhaps allow representation of spatial heterogeneity of biogeochemistry at scales finer than those currently resolved by ESMs.

  5. Scaling impacts on environmental controls and spatial heterogeneity of soil organic carbon stocks

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mishra, U.; Riley, W. J.

    2015-07-02

    The spatial heterogeneity of land surfaces affects energy, moisture, and greenhouse gas exchanges with the atmosphere. However, representing the heterogeneity of terrestrial hydrological and biogeochemical processes in Earth system models (ESMs) remains a critical scientific challenge. We report the impact of spatial scaling on environmental controls, spatial structure, and statistical properties of soil organic carbon (SOC) stocks across the US state of Alaska. We used soil profile observations and environmental factors such as topography, climate, land cover types, and surficial geology to predict the SOC stocks at a 50 m spatial scale. These spatially heterogeneous estimates provide a data set with reasonable fidelity to the observations at a sufficiently high resolution to examine the environmental controls on the spatial structure of SOC stocks. We upscaled both the predicted SOC stocks and environmental variables from finer to coarser spatial scales (s = 100, 200, and 500 m and 1, 2, 5, and 10 km) and generated various statistical properties of SOC stock estimates. We found different environmental factors to be statistically significant predictors at different spatial scales. Only elevation, temperature, potential evapotranspiration, and scrub land cover types were significant predictors at all scales. The strengths of control (the median value of geographically weighted regression coefficients) of these four environmental variables on SOC stocks decreased with increasing scale and were accurately represented using mathematical functions (R2 = 0.83–0.97). The spatial structure of SOC stocks across Alaska changed with spatial scale. Although the variance (sill) and unstructured variability (nugget) of the calculated variograms of SOC stocks decreased exponentially with scale, the correlation length (range) remained relatively constant across scale. The variance of predicted SOC stocks decreased with spatial scale over the range of 50 m to ~ 500 m, and remained constant beyond this scale. The fitted exponential function accounted for 98 % of variability in the variance of SOC stocks. We found moderately accurate linear relationships between mean and higher-order moments of predicted SOC stocks (R2 ∼ 0.55–0.63). Current ESMs operate at coarse spatial scales (50–100 km), and are therefore unable to represent environmental controllers and spatial heterogeneity of high-latitude SOC stocks consistent with observations. We conclude that improved understanding of the scaling behavior of environmental controls and statistical properties of SOC stocks could improve ESM land model benchmarking and perhaps allow representation of spatial heterogeneity of biogeochemistry at scales finer than those currently resolved by ESMs.
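
    The sill, nugget, and range statistics quoted in both versions of this record come from an empirical (semi)variogram. A minimal one-dimensional sketch of how such a variogram is computed from regularly spaced predictions is given below; the synthetic transect and 50 m spacing are placeholders for the actual gridded SOC estimates.

    ```python
    import numpy as np

    def empirical_variogram(values, spacing, max_lag_steps=20):
        """Empirical semivariogram of a regularly spaced 1-D transect:
        gamma(h) = 0.5 * mean[(z(x) - z(x+h))^2] for each lag h."""
        values = np.asarray(values, dtype=float)
        lags, gammas = [], []
        for k in range(1, max_lag_steps + 1):
            diffs = values[k:] - values[:-k]
            lags.append(k * spacing)
            gammas.append(0.5 * np.mean(diffs**2))
        return np.array(lags), np.array(gammas)

    # Synthetic transect with short-range correlation: gamma rises toward a sill.
    rng = np.random.default_rng(7)
    signal = np.convolve(rng.normal(size=600), np.ones(10) / 10, mode="same") \
             + rng.normal(scale=0.1, size=600)
    lags, gammas = empirical_variogram(signal, spacing=50.0)  # 50 m grid spacing
    print(np.round(gammas[:5], 3))
    ```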

  6. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    SciTech Connect (OSTI)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-11-15

    Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also necessary to have accurate spectrum information about the source-detector system. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, less than 5% separation errors are observed on the estimated decomposition fractions. Conclusions: The proposed approach is a statistical reconstruction approach based on a nonlinear forward model that accounts for the full beam polychromaticity and is applied directly to the projections without taking the negative log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
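
    As a rough illustration of the kind of nonlinear, polychromatic forward model this approach works with, the toy sketch below fits water and bone thicknesses for a single ray directly from non-log projections by minimizing a Gaussian data-misfit term. It is my own simplification, not the paper's algorithm: the spectra and attenuation curves are fabricated placeholders, the prior and the adaptive variance estimation are omitted, and a generic simplex optimizer stands in for the monotone conjugate gradient scheme.

```python
import numpy as np
from scipy.optimize import minimize

E = np.linspace(20, 140, 61)                    # keV grid
spec_lo = np.exp(-0.5 * ((E - 50) / 15) ** 2)   # placeholder low-kVp spectrum
spec_hi = np.exp(-0.5 * ((E - 70) / 20) ** 2)   # placeholder high-kVp spectrum
mu_w = 0.4 * (30.0 / E)                         # placeholder water attenuation (1/cm)
mu_b = 1.2 * (30.0 / E) ** 1.5                  # placeholder bone attenuation (1/cm)

def transmitted(thicknesses, spectrum):
    """Polychromatic transmitted fraction for given water/bone thicknesses (cm)."""
    t_w, t_b = thicknesses
    return np.sum(spectrum * np.exp(-(t_w * mu_w + t_b * mu_b))) / np.sum(spectrum)

truth = (8.0, 1.5)                              # true water and bone thicknesses
rng = np.random.default_rng(0)
y = np.array([transmitted(truth, s) for s in (spec_lo, spec_hi)])
y += rng.normal(0.0, 1e-3, size=2)              # Gaussian noise directly on projections

def data_misfit(thicknesses):
    r = y - np.array([transmitted(thicknesses, s) for s in (spec_lo, spec_hi)])
    return 0.5 * np.sum(r ** 2)                  # likelihood term only; prior omitted here

estimate = minimize(data_misfit, x0=(5.0, 0.5), method="Nelder-Mead").x
print("estimated (water, bone) thicknesses:", np.round(estimate, 2))
```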

  7. Efficacy of fixed filtration for rapid kVp-switching dual energy x-ray systems

    SciTech Connect (OSTI)

    Yao, Yuan; Wang, Adam S.; Pelc, Norbert J.; Department of Radiology, Stanford University, Stanford, California 94305; Department of Electrical Engineering, Stanford University, Stanford, California 94305

    2014-03-15

    Purpose: Dose efficiency of dual kVp imaging can be improved if the two beams are filtered to remove photons in the common part of their spectra, thereby increasing spectral separation. While there are a number of advantages to rapid kVp-switching for dual energy, it may not be feasible to have two different filters for the two spectra. Therefore, the authors are interested in whether a fixed added filter can improve the dose efficiency of kVp-switching dual energy x-ray systems. Methods: The authors hypothesized that a K-edge filter would provide the energy selectivity needed to remove overlap of the spectra and hence increase the precision of material separation at constant dose. Preliminary simulations were done using calcium and water basis materials and 80 and 140 kVp x-ray spectra. Precision of the decomposition was evaluated based on the propagation of the Poisson noise through the decomposition function. Considering availability and cost, the authors chose a commercial Gd{sub 2}O{sub 2}S screen as the filter for their experimental validation. Experiments were conducted on a table-top system using a phantom with various thicknesses of acrylic and copper and 70 and 125 kVp x-ray spectra. The authors kept the phantom exposure roughly constant with and without filtration by adjusting the tube current. The filtered and unfiltered raw data of both low and high energy were decomposed into basis materials and the variance of the decomposition for each thickness pair was calculated. To evaluate the filtration performance, the authors measured the ratio of material decomposition variance with and without filtration. Results: Simulation results show that the ideal filter material depends on the object composition and thickness, and ranges across the lanthanide series, with higher atomic number filters being preferred for more attenuating objects. Variance reduction increases with filter thickness, and substantial reductions (40%) can be achieved with a 2× loss in intensity. The authors' experimental results validate the simulations, yet were overall slightly worse than expected. For large objects, conventional (non-K-edge) beam hardening filters perform well. Conclusions: This study demonstrates the potential of fixed K-edge filtration to improve the dose efficiency and material decomposition precision for rapid kVp-switching dual energy systems.

  8. Factors controlling physico-chemical characteristics in the coastal waters off Mangalore-A multivariate approach

    SciTech Connect (OSTI)

    Shirodkar, P.V.; Mesquita, A.; Pradhan, U.K.; Verlekar, X.N.; Babu, M.T.; Vethamony, P.

    2009-04-15

    Water quality parameters (temperature, pH, salinity, DO, BOD, suspended solids, nutrients, PHc, phenols, trace metals-Pb, Cd and Hg, chlorophyll-a (chl-a) and phaeopigments) and the sediment quality parameters (total phosphorus, total nitrogen, organic carbon and trace metals) were analysed from samples collected at 15 stations along 3 transects off the Karnataka coast (Mangalore harbour in the south to Suratkal in the north), west coast of India during 2007. The analyses showed high ammonia off Suratkal, high nitrite (NO{sub 2}-N) and nitrate (NO{sub 3}-N) in the nearshore waters off Kulai and high nitrite (NO{sub 2}-N) and ammonia (NH{sub 3}-N) in the harbour area. Similarly, high petroleum hydrocarbon (PHc) values were observed near the harbour, while phenols remained high in the nearshore waters of Kulai and Suratkal. Significantly high concentrations of cadmium and mercury, relative to earlier studies, were observed off Kulai and the harbour region, respectively. R-mode varimax factor analyses were applied separately to surface and bottom water data sets due to existing stratification in the water column caused by riverine inflow and to sediment data. This helped to understand the interrelationships between the variables and to identify probable source components for explaining the environmental status of the area. Six factors (each for surface and bottom waters) were found responsible for variance (86.9% in surface and 82.4% in bottom) in the coastal waters between Mangalore and Suratkal. In sediments, 4 factors explained 86.8% of the observed total variance. The variances indicated addition of nutrients and suspended solids to the coastal waters due to weathering and riverine transport and are categorized as natural sources. The observed contamination of coastal waters indicated anthropogenic inputs of Cd and phenol from industrial effluent sources at Kulai and Suratkal, ammonia from wastewater discharges off Kulai and harbour, PHc and Hg from boat traffic and harbour activities of New Mangalore harbour. However, the strong seasonal currents and the seasonal winds keep the coastal waters well mixed and aerated, which help to disperse the contaminants, without significantly affecting chlorophyll-a concentrations. The interrelationships between the stations, as shown by cluster analyses and depicted in dendrograms, categorize the contamination levels sector-wise.
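
    The R-mode varimax factor analysis step can be sketched generically as below (this is not the study's code); a random matrix stands in for the stations-by-parameters data table, and the variance shares are only an approximation based on squared loadings.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(15, 10))              # 15 stations x 10 water-quality parameters (placeholder)
Xz = StandardScaler().fit_transform(X)     # standardize before factoring

fa = FactorAnalysis(n_components=4, rotation="varimax").fit(Xz)
loadings = fa.components_.T                # parameters x factors loading matrix
variance_share = np.sum(loadings ** 2, axis=0) / Xz.shape[1]
print("approximate variance share per factor:", np.round(variance_share, 2))
```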

  9. Distribution of Polycyclic Aromatic Hydrocarbons in Soils and Terrestrial Biota After a Spill of Crude Oil in Trecate, Italy

    SciTech Connect (OSTI)

    Brandt, Charles A.; Becker, James M.; Porta, Augusto C.

    2001-12-01

    Following a large blowout of crude oil in northern Italy in 1994, the distribution of polyaromatic hydrocarbons (PAHs) was examined over time and space in soils, uncultivated wild vegetation, insects, mice, and frogs in the area. Within 2 y of the blowout, PAH concentrations declined to background levels over much of the area where initial concentrations were within an order of magnitude above background, but had not declined to background in areas where starting concentrations exceeded background by two orders of magnitude. Octanol-water partitioning and extent of alkylation explained much of the variance in uptake of PAHs by plants and animals. Lower Kow PAHs and higher-alkylated PAHs had higher soil-to-biota accumulation factors (BSAFs) than did high-Kow and unalkylated forms. BSAFs for higher Kow PAHs were very low for plants, but much higher for animals, with frogs accumulating more of these compounds than other species.
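
    The soil-to-biota accumulation factor itself is a simple ratio, illustrated below with hypothetical concentrations; the lipid and organic-carbon normalization shown is one common convention and is assumed here rather than taken from the paper.

```python
def bsaf(c_biota, lipid_fraction, c_soil, soil_oc_fraction):
    """Biota-soil accumulation factor with lipid and organic-carbon normalization."""
    return (c_biota / lipid_fraction) / (c_soil / soil_oc_fraction)

# Hypothetical PAH concentrations (ug/g) in tissue and soil
print(bsaf(0.8, 0.05, 12.0, 0.03))   # lower-Kow, alkylated PAH: higher BSAF
print(bsaf(0.1, 0.05, 40.0, 0.03))   # higher-Kow, unalkylated PAH: lower BSAF
```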

  10. DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis:version 4.0 reference manual

    SciTech Connect (OSTI)

    Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Giunta, Anthony A.; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

  11. Assessment of global warming effect on the level of extremes and intra-annual structure

    SciTech Connect (OSTI)

    Lobanov, V.A.

    1997-12-31

    In this research a new approach for the parametrization of intra-annual variations has been developed that is based on poly-linear decomposition and relationships with average climate conditions. This method allows the complex intra-annual variations of every year to be divided into two main parts: climate and synoptic processes. In this case, the climate process is represented by two coefficients (B1, B0) of a linear function between a particular year's data and the average intra-year conditions over the long-term period. Coefficient B1 is connected with the amplitude of the intra-annual function and characterizes extreme events, while coefficient B0 captures the level of climate conditions realized in the particular year. The synoptic process is determined as the remainders, or errors, of each year's linear function, or as their generalized parameter, such as the variance.
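
    A minimal sketch of the poly-linear idea, under the assumption that it amounts to regressing one year's intra-annual values on the long-term mean annual cycle: the slope plays the role of B1, the intercept of B0, and the residual variance stands in for the synoptic component. The monthly values are illustrative, not observed data.

```python
import numpy as np

# Long-term mean annual cycle and one particular year (illustrative monthly values)
climatology = np.array([-12, -10, -4, 4, 11, 16, 18, 17, 11, 4, -4, -10], float)
year = np.array([-15, -9, -2, 6, 13, 18, 21, 18, 12, 3, -6, -12], float)

b1, b0 = np.polyfit(climatology, year, 1)        # year ≈ B1 * climatology + B0
residuals = year - (b1 * climatology + b0)        # synoptic remainder
print("B1 (amplitude):", round(b1, 2),
      " B0 (level):", round(b0, 2),
      " synoptic variance:", round(residuals.var(ddof=1), 2))
```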

  12. MEASUREMENT OF THE SHOCK-HEATED MELT CURVE OF LEAD USING PYROMETRY AND REFLECTOMETRY

    SciTech Connect (OSTI)

    D. Partouche-Sebban and J. L. Pelissier, Commissariat à l'Energie Atomique; F. G. Abeyta, Los Alamos National Laboratory; W. W. Anderson, Los Alamos National Laboratory; M. E. Byers, Los Alamos National Laboratory; D. Dennis-Koller, Los Alamos National Laboratory; J. S. Esparza, Los Alamos National Laboratory; S. D. Borror, Bechtel Nevada; C. A. Kruschwitz, Bechtel Nevada

    2004-01-01

    Data on the high-pressure melting temperatures of metals are of great interest in several fields of physics, including geophysics. Measuring melt curves is difficult but can be done in static experiments (with laser-heated diamond-anvil cells, for instance) or dynamically (i.e., using shock experiments). However, at the present time, experimental and theoretical results for the melt curve of lead are too much at variance with one another to be considered definitive. As a result, we decided to perform a series of shock experiments designed to provide a measurement of the melt curve of lead up to about 50 GPa in pressure. At the same time, we developed and fielded a new reflectivity diagnostic, using it to make measurements on tin. The results show that the melt curve of lead is somewhat higher than the one previously obtained with static compression and heating techniques.

  13. LIFE ESTIMATION OF HIGH LEVEL WASTE TANK STEEL FOR F-TANK FARM CLOSURE PERFORMANCE ASSESSMENT - 9310

    SciTech Connect (OSTI)

    Subramanian, K.; Wiersma, Bruce; Harris, Stephen

    2009-01-12

    High level radioactive waste (HLW) is stored in underground carbon steel storage tanks at the Savannah River Site. The underground tanks will be closed by removing the bulk of the waste, chemical cleaning, heel removal, stabilizing remaining residuals with tailored grout formulations, and severing/sealing external penetrations. A life estimation of the carbon steel materials of construction, in support of the performance assessment, has been completed. The estimation considered general and localized corrosion mechanisms of the tank steel exposed to grouted conditions. A stochastic approach was followed to estimate the distributions of failures based upon mechanisms of corrosion, accounting for variances in each of the independent variables. The methodology and results for one type of tank are presented.

  14. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    SciTech Connect (OSTI)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B.

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
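
    One of the simpler sensitivity procedures listed above, rank (Spearman) correlation between sampled inputs and the model output, can be sketched as follows; the model and input distributions are placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 500
x1 = rng.uniform(0, 1, n)                              # uncertain input 1
x2 = rng.normal(5, 2, n)                               # uncertain input 2
y = 3 * x1 ** 2 + 0.1 * x2 + rng.normal(0, 0.05, n)    # toy model response

for name, x in (("x1", x1), ("x2", x2)):
    rho, p = spearmanr(x, y)
    print(f"{name}: rank correlation = {rho:.2f}, p = {p:.1e}")
```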

  15. Cyberspace Security Econometrics System (CSES)

    Energy Science and Technology Software Center (OSTI)

    2012-07-27

    Information security continues to evolve in response to disruptive changes with a persistent focus on information-centric controls and a healthy debate about balancing endpoint and network protection, with a goal of improved enterprise/business risk management. Economic uncertainty, intensively collaborative styles of work, virtualization, increased outsourcing and ongoing compliance pressures require careful consideration and adaptation. The CSES provides a measure (i.e. a quantitative indication) of reliability, performance, and/or safety of a system that accounts for the criticality of each requirement as a function of one or more stakeholders' interests in that requirement. For a given stakeholder, CSES accounts for the variance that may exist among the stakes one attaches to meeting each requirement.
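
    A hedged sketch of the stake-weighted idea described above: if each stakeholder attaches a cost (stake) to each requirement, a mean-failure-cost-style measure is the stake matrix applied to the probabilities that the requirements are violated. The numbers are hypothetical and the formulation is a simplification, not the CSES implementation.

```python
import numpy as np

stakes = np.array([[900.0, 100.0, 10.0],     # stakeholder A's stake in each requirement ($)
                   [ 50.0, 400.0, 80.0]])    # stakeholder B's stake in each requirement ($)
p_violation = np.array([0.01, 0.05, 0.20])   # probability each requirement is not met

mean_failure_cost = stakes @ p_violation     # expected loss per stakeholder
print(dict(zip(["A", "B"], np.round(mean_failure_cost, 2))))
```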

  16. Sub-Poissonian statistics in order-to-chaos transition

    SciTech Connect (OSTI)

    Kryuchkyan, Gagik Yu. [Yerevan State University, Manookyan 1, Yerevan 375049, (Armenia); Institute for Physical Research, National Academy of Sciences, Ashtarak-2 378410, (Armenia); Manvelyan, Suren B. [Institute for Physical Research, National Academy of Sciences, Ashtarak-2 378410, (Armenia)

    2003-07-01

    We study the phenomena at the overlap of quantum chaos and nonclassical statistics for a time-dependent model of a nonlinear oscillator. It is shown in the framework of the Mandel Q parameter and the Wigner function that the statistics of oscillatory excitation numbers is drastically changed in the order-to-chaos transition. An essential improvement of sub-Poissonian statistics, in comparison with the analogous statistics for the standard model of a driven anharmonic oscillator, is observed for the regular operational regime. It is shown that in the chaotic regime, the system exhibits ranges of sub-Poissonian and super-Poissonian statistics which alternate with each other depending on the time interval. An unusual dependence of the variance of the oscillatory number on the external noise level is observed for the chaotic dynamics. The scaling invariance of the quantum statistics is demonstrated and its relation to dissipation and decoherence is studied.
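
    The Mandel Q parameter used above has a compact empirical form, Q = (var(n) - mean(n)) / mean(n), with Q < 0 indicating sub-Poissonian and Q > 0 super-Poissonian statistics. The sketch below evaluates it on synthetic counts, not on the oscillator model of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_poisson = rng.poisson(10.0, 10000)                 # reference Poissonian counts
n_super = rng.poisson(rng.gamma(5.0, 2.0, 10000))    # noisier, super-Poissonian counts

def mandel_q(n):
    """Q = (variance - mean) / mean of the excitation-number distribution."""
    return (n.var(ddof=1) - n.mean()) / n.mean()

print("Poissonian Q ~", round(mandel_q(n_poisson), 3))
print("super-Poissonian Q ~", round(mandel_q(n_super), 3))
```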

  17. A Reassessment of the Integrated Impact of Tropical Cyclones on Surface Chlorophyll in the Western Subtropical North Atlantic

    SciTech Connect (OSTI)

    Foltz, Gregory R.; Balaguru, Karthik; Leung, Lai-Yung R.

    2015-02-28

    The impact of tropical cyclones on surface chlorophyll concentration is assessed in the western subtropical North Atlantic Ocean during 1998–2011. Previous studies in this area focused on individual cyclones and gave mixed results regarding the importance of tropical cyclone-induced mixing for changes in surface chlorophyll. Using a more integrated and comprehensive approach that includes quantification of cyclone-induced changes in mixed layer depth, here it is shown that accumulated cyclone energy explains 22% of the interannual variability in seasonally averaged (June–November) chlorophyll concentration in the western subtropical North Atlantic, after removing the influence of the North Atlantic Oscillation (NAO). The variance explained by tropical cyclones is thus about 70% of that explained by the NAO, which has well-known impacts in this region. It is therefore likely that tropical cyclones contribute significantly to interannual variations of primary productivity in the western subtropical North Atlantic during the hurricane season.
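
    The statistical step described above can be sketched schematically (this is not the authors' code): regress the NAO index out of the seasonal chlorophyll series, then compute how much of the residual interannual variance accumulated cyclone energy (ACE) explains. All series below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
years = 14                                   # 1998-2011
nao = rng.normal(size=years)                 # synthetic NAO index
ace = rng.normal(size=years)                 # synthetic accumulated cyclone energy
chl = 0.5 * nao + 0.35 * ace + rng.normal(scale=0.3, size=years)   # toy chlorophyll series

resid = chl - np.polyval(np.polyfit(nao, chl, 1), nao)             # remove NAO influence
r = np.corrcoef(ace, resid)[0, 1]
print("fraction of residual variance explained by ACE:", round(r ** 2, 2))
```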

  18. Spin and orbital ordering in Y1-xLaxVO3

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yan, J.-Q.; Zhou, J.-S.; Cheng, J. G.; Goodenough, J. B.; Ren, Y.; Llobet, A.; McQueeney, R. J.

    2011-12-02

    The spin and orbital ordering in Y1-xLaxVO3 (0.30 ≤ x ≤ 1.0) has been studied to map out the phase diagram over the whole doping range 0 ≤ x ≤ 1. The phase diagram is compared with that for RVO3 (R = rare earth or Y) perovskites without A-site variance. For x > 0.20, no long-range orbital ordering was observed above the magnetic ordering temperature TN; the magnetic order is accompanied by a lattice anomaly at a Tt ≤ TN as in LaVO3. The magnetic ordering below Tt ≤ TN is G type in the compositional range 0.20 ≤ x ≤ 0.40 and C type in the range 0.738 ≤ x ≤ 1.0. Magnetization and neutron powder diffraction measurements point to the coexistence below TN of the two magnetic phases in the compositional range around x = 0.4.

  19. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  20. Depth of maximum of air-shower profiles at the Pierre Auger Observatory. I. Measurements at energies above $$10^{17.8}$$ eV

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Aab, Alexander

    2014-12-31

    We report a study of the distributions of the depth of maximum, Xmax, of extensive air-shower profiles with energies above 10^17.8 eV as observed with the fluorescence telescopes of the Pierre Auger Observatory. The analysis method for selecting a data sample with minimal sampling bias is described in detail as well as the experimental cross-checks and systematic uncertainties. Furthermore, we discuss the detector acceptance and the resolution of the Xmax measurement and provide parametrizations thereof as a function of energy. Finally, the energy dependence of the mean and standard deviation of the Xmax distributions is compared to air-shower simulations for different nuclear primaries and interpreted in terms of the mean and variance of the logarithmic mass distribution at the top of the atmosphere.

  1. COMMENT ON TRITIUM ABSORPTION-DESORPTION CHARACTERISTICS OF LANI4.25AL0.75

    SciTech Connect (OSTI)

    Walters, T

    2007-04-10

    The thermodynamic data for LaNi{sub 4.25}Al{sub 0.75} tritide, reported by Wang et al. (W.-d. Wang et al., J. Alloys Compd. (2006) doi:10.1016/j.jallcom.206.09.122), are at variance with our published data. The plateau pressures for the P-C-T isotherms at all temperatures are significantly lower than the published data. As a result, the derived thermodynamic parameters, {Delta}H{sup o} and {Delta}S{sup o}, are questionable. Using the thermodynamic parameters derived from the data reported by Wang et al. will result in underestimating the expected pressures, and therefore will not provide the desired performance for storing and processing tritium.

  2. Deconstructing Solar Photovoltaic Pricing: The Role of Market Structure, Technology and Policy

    Broader source: Energy.gov [DOE]

    Solar photovoltaic (PV) system prices in the United States are considerably different both across geographic locations and within a given location. Variances in price may arise due to state and federal policies, differences in market structure, and other factors that influence demand and costs. This paper examines the relative importance of such factors on the stability of solar PV system prices in the United States using a detailed dataset of roughly 100,000 recent residential and small commercial installations. The paper finds that PV system prices differ based on characteristics of the systems. More interestingly, evidence suggests that search costs and imperfect competition affect solar PV pricing. Installer density substantially lowers prices, while regions with relatively generous financial incentives for solar PV are associated with higher prices.

  3. An optical beam frequency reference with 10{sup -14} range frequency instability

    SciTech Connect (OSTI)

    McFerran, J. J.; Hartnett, J. G.; Luiten, A. N. [School of Physics, University of Western Australia, 35 Stirling Highway, Crawley, 6009 Western Australia (Australia)

    2009-07-20

    The authors report on a thermal beam optical frequency reference with a fractional frequency instability of 9.2x10{sup -14} at 1 s reducing to 2.0x10{sup -14} at 64 s before slowly rising. The {sup 1}S{sub 0}{r_reversible}{sup 3}P{sub 1} intercombination line in neutral {sup 40}Ca is used as a frequency discriminator. A diode laser at 423 nm probes the ground state population after a Ramsey-Borde sequence of 657 nm light-field interactions on the atoms. The measured fractional frequency instability is an order of magnitude improvement on previously reported thermal beam optical clocks. The photon shot-noise of the read-out produces a limiting square root {lambda}-variance of 7x10{sup -14}/{radical}({tau})

  4. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    SciTech Connect (OSTI)

    Kreinovich, Vladik; Oberkampf, William Louis; Ginzburg, Lev; Ferson, Scott; Hajagos, Janos

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
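
    For one of the computable statistics mentioned above, the sample mean of interval data, the bounds are simply the means of the endpoints; a minimal sketch with hypothetical measurements follows (bounds on the variance are harder and can require combinatorial search).

```python
# Hypothetical measurements known only as intervals [lo, hi]
intervals = [(1.0, 1.4), (2.1, 2.3), (0.8, 1.9), (3.0, 3.2)]

lo_mean = sum(lo for lo, _ in intervals) / len(intervals)
hi_mean = sum(hi for _, hi in intervals) / len(intervals)
print(f"sample mean lies in [{lo_mean:.3f}, {hi_mean:.3f}]")
```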

  5. Challenging the Mean Time to Failure: Measuring Dependability as a Mean Failure Cost

    SciTech Connect (OSTI)

    Sheldon, Frederick T; Mili, Ali

    2009-01-01

    The traditional measure of dependability, mean time to failure (MTTF), falls short on many fronts: it ignores the variance in stakes among stakeholders; it fails to recognize the structure of complex specifications as the aggregate of overlapping requirements; it fails to recognize that different components of the specification carry different stakes, even for the same stakeholder; and it fails to recognize that V&V actions have different impacts with respect to the different components of the specification. Similar metrics of security, such as MTTD (Mean Time to Detection) and MTTE (Mean Time to Exploitation), suffer from the same shortcomings. In this paper we advocate a measure of dependability that acknowledges the aggregate structure of complex system specifications, and takes into account variations by stakeholder, by specification component, and by V&V impact.

  6. Dynamical mass generation in unquenched QED using the Dyson-Schwinger equations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Kızılersü, Ayse; Sizer, Tom; Pennington, Michael R.; Williams, Anthony G.; Williams, Richard

    2015-03-13

    We present a comprehensive numerical study of dynamical mass generation for unquenched QED in four dimensions, in the absence of four-fermion interactions, using the Dyson-Schwinger approach. We begin with an overview of previous investigations of criticality in the quenched approximation. To this we add an analysis using a new fermion-antifermion-boson interaction ansatz, the Kizilersu-Pennington (KP) vertex, developed for an unquenched treatment. After surveying criticality in previous unquenched studies, we investigate the performance of the KP vertex in dynamical mass generation using a renormalized fully unquenched system of equations. This we compare with the results for two hybrid vertices incorporating the Curtis-Pennington vertex in the fermion equation. We conclude that the KP vertex is as yet incomplete, and its relative gauge-variance is due to its lack of massive transverse components in its design.

  7. FLUOR HANFORD SAFETY MANAGEMENT PROGRAMS

    SciTech Connect (OSTI)

    GARVIN, L J; JENSEN, M A

    2004-04-13

    This document summarizes safety management programs used within the scope of the ''Project Hanford Management Contract''. The document has been developed to meet the format and content requirements of DOE-STD-3009-94, ''Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses''. This document provides summary descriptions of Fluor Hanford safety management programs, which Fluor Hanford nuclear facilities may reference and incorporate into their safety basis when producing facility- or activity-specific documented safety analyses (DSA). Facility- or activity-specific DSAs will identify any variances to the safety management programs described in this document and any specific attributes of these safety management programs that are important for controlling potentially hazardous conditions. In addition, facility- or activity-specific DSAs may identify unique additions to the safety management programs that are needed to control potentially hazardous conditions.

  8. Unconventional Fermi surface in an insulating state

    SciTech Connect (OSTI)

    Harrison, Neil; Tan, B. S.; Hsu, Y. -T.; Zeng, B.; Hatnean, M. Ciomaga; Zhu, Z.; Hartstein, M.; Kiourlappou, M.; Srivastava, A.; Johannes, M. D.; Murphy, T. P.; Park, J. -H.; Balicas, L.; Lonzarich, G. G.; Balakrishnan, G.; Sebastian, Suchitra E.

    2015-07-17

    Insulators occur in more than one guise; a recent finding was a class of topological insulators, which host a conducting surface juxtaposed with an insulating bulk. Here, we report the observation of an unusual insulating state with an electrically insulating bulk that simultaneously yields bulk quantum oscillations with characteristics of an unconventional Fermi liquid. We present quantum oscillation measurements of magnetic torque in high-purity single crystals of the Kondo insulator SmB6, which reveal quantum oscillation frequencies characteristic of a large three-dimensional conduction electron Fermi surface similar to the metallic rare earth hexaborides such as PrB6 and LaB6. As a result, the quantum oscillation amplitude strongly increases at low temperatures, appearing strikingly at variance with conventional metallic behavior.

  9. Capabilities, Implementation, and Benchmarking of Shift, a Massively Parallel Monte Carlo Radiation Transport Code

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Pandya, Tara M; Johnson, Seth R; Evans, Thomas M; Davidson, Gregory G; Hamilton, Steven P; Godfrey, Andrew T

    2016-01-01

    This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  10. On the local variation of the Hubble constant

    SciTech Connect (OSTI)

    Odderskov, Io; Hannestad, Steen [Department of Physics and Astronomy, University of Aarhus, DK-8000 Aarhus C (Denmark); Haugbølle, Troels, E-mail: isho07@phys.au.dk, E-mail: sth@phys.au.dk, E-mail: troels.haugboelle@snm.ku.dk [Centre for Star and Planet Formation, Natural History Museum of Denmark and Niels Bohr Institute, University of Copenhagen, DK-1350 Copenhagen (Denmark)]

    2014-10-01

    We have carefully studied how local measurements of the Hubble constant, H{sub 0}, can be influenced by a variety of different parameters related to survey depth, size, and fraction of the sky observed, as well as observer position in space. Our study is based on N-body simulations of structure in the standard ΛCDM model and our conclusion is that the expected variance in measurements of H{sub 0} is far too small to explain the current discrepancy between the low value of H{sub 0} inferred from measurements of the cosmic microwave background (CMB) by the Planck collaboration and the value measured directly in the local universe by use of Type Ia supernovae. This conclusion is very robust and does not change with different assumptions about effective sky coverage and depth of the survey or observer position in space.

  11. Method and computer product to increase accuracy of time-based software verification for sensor networks

    DOE Patents [OSTI]

    Foo Kune, Denis; Mahadevan, Karthikeyan

    2011-01-25

    A recursive verification protocol to reduce the time variance due to delays in the network by putting the subject node at most one hop from the verifier node provides for an efficient manner to test wireless sensor nodes. Since the software signatures are time based, recursive testing will give a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, who in turn checks its neighbor, and continuing this process until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Utilizing techniques well known in the art, having a node tested twice, or not at all, can be avoided.

  12. Economic and environmental impacts of proposed changes to Clean Water Act thermal discharge requirements

    SciTech Connect (OSTI)

    Veil, J.A.

    1994-06-01

    This paper examines the economic and environmental impact to the power industry of limiting thermal mixing zones to 1000 feet and eliminating the Clean Water Act {section}316(a) variance. Power companies were asked what they would do if these two conditions were imposed. Most affected plants would retrofit cooling towers and some would retrofit diffusers. Assuming that all affected plants would proportionally follow the same options as the surveyed plants, the estimated capital cost of retrofitting cooling towers or diffusers at all affected plants exceeds $20 billion. Since both cooling towers and diffusers exert an energy penalty on a plant's output, the power companies must generate additional power. The estimated cost of the additional power exceeds $10 billion over 20 years. Generation of the extra power would emit over 8 million tons per year of additional carbon dioxide. Operation of the new cooling towers would cause more than 1.5 million gallons per minute of additional evaporation.

  13. Conduction band offset at GeO{sub 2}/Ge interface determined by internal photoemission and charge-corrected x-ray photoelectron spectroscopies

    SciTech Connect (OSTI)

    Zhang, W. F.; Nishimura, T.; Nagashio, K.; Kita, K.; Toriumi, A.

    2013-03-11

    We report a consistent conduction band offset (CBO) at a GeO{sub 2}/Ge interface determined by internal photoemission spectroscopy (IPE) and charge-corrected X-ray photoelectron spectroscopy (XPS). IPE results showed that the CBO value was larger than 1.5 eV irrespective of metal electrode and substrate type variance, while an accurate determination of valence band offset (VBO) by XPS requires a careful correction of differential charging phenomena. The VBO value was determined to be 3.60 {+-} 0.2 eV by XPS after charge correction, thus yielding a CBO (1.60 {+-} 0.2 eV) in excellent agreement with the IPE results. Such a large CBO (>1.5 eV) confirmed here is promising in terms of using GeO{sub 2} as a potential passivation layer for future Ge-based scaled CMOS devices.

  14. Detection limits for real-time source water monitoring using indigenous freshwater microalgae

    SciTech Connect (OSTI)

    Rodriguez Jr, Miguel; Greenbaum, Elias

    2009-01-01

    This research identified toxin detection limits using the variable fluorescence of naturally occurring microalgae in source drinking water for five chemical toxins with different molecular structures and modes of toxicity. The five chemicals investigated were atrazine, Diuron, paraquat, methyl parathion, and potassium cyanide. Absolute threshold sensitivities of the algae for detection of the toxins in unmodified source drinking water were measured. Differential kinetics between the rate of action of the toxins and natural changes in algal physiology, such as diurnal photoinhibition, are significant enough that effects of the toxin can be detected and distinguished from the natural variance. This is true even for physiologically impaired algae where diminished photosynthetic capacity may arise from uncontrollable external factors such as nutrient starvation. Photoinhibition induced by high levels of solar radiation is a predictable and reversible phenomenon that can be dealt with using a period of dark adaptation of 30 minutes or more.

  15. Kalman filter data assimilation: Targeting observations and parameter estimation

    SciTech Connect (OSTI)

    Bellsky, Thomas; Kostelich, Eric J.; Mahalov, Alex

    2014-06-15

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
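
    The targeting criterion itself can be sketched compactly: pick the location where the forecast ensemble variance is largest. The ensemble below is a random placeholder, not the LETKF experiment from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_members, n_gridpoints = 20, 40
ensemble = rng.normal(size=(n_members, n_gridpoints))         # placeholder forecast ensemble
ensemble[:, 17] += rng.normal(scale=3.0, size=n_members)      # inflate spread at one location

spread = ensemble.var(axis=0, ddof=1)
target = int(np.argmax(spread))
print("observe grid point", target, "with ensemble variance", round(float(spread[target]), 2))
```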

  16. High-precision calculation of the strange nucleon electromagnetic form factors

    SciTech Connect (OSTI)

    Green, Jeremy; Meinel, Stefan; Engelhardt, Michael G.; Krieg, Stefan; Laeuchli, Jesse; Negele, John W.; Orginos, Kostas; Pochinsky, Andrew; Syritsyn, Sergey

    2015-08-01

    We report a direct lattice QCD calculation of the strange nucleon electromagnetic form factors GsE and GsM in the kinematic range 0 ≤ Q{sup 2} ≲ 1.2 GeV{sup 2}. For the first time, both GsE and GsM are shown to be nonzero with high significance. This work uses closer-to-physical lattice parameters than previous calculations, and achieves an unprecedented statistical precision by implementing a recently proposed variance reduction technique called hierarchical probing. We perform model-independent fits of the form factor shapes using the z-expansion and determine the strange electric and magnetic radii and magnetic moment. We compare our results to parity-violating electron-proton scattering data and to other theoretical studies.

  17. Radium/Barium Waste Project

    SciTech Connect (OSTI)

    McDowell, Allen K.; Ellefson, Mark D.; McDonald, Kent M.

    2015-06-25

    The treatment, shipping, and disposal of a highly radioactive radium/barium waste stream have presented a complex set of challenges requiring several years of effort. The project illustrates the difficulty and high cost of managing even small quantities of highly radioactive Resource Conservation and Recovery Act (RCRA)-regulated waste. Pacific Northwest National Laboratory (PNNL) research activities produced a Type B quantity of radium chloride low-level mixed waste (LLMW) in a number of small vials in a facility hot cell. The resulting waste management project involved a mock-up RCRA stabilization treatment, a failed in-cell treatment, a second, alternative RCRA treatment approach, coordinated regulatory variances and authorizations, alternative transportation authorizations, additional disposal facility approvals, and a final radiological stabilization process.

  18. Mixing in thermally stratified nonlinear spin-up with uniform boundary fluxes

    SciTech Connect (OSTI)

    Baghdasarian, Meline; Pacheco-Vega, Arturo; Pacheco, J. Rafael; Verzicco, Roberto

    2014-09-15

    Studies of stratified spin-up experiments in enclosed cylinders have reported the presence of small pockets of well-mixed fluids, but quantitative measurements of the mixedness of the fluid have been lacking. Previous numerical simulations have not addressed these measurements. Here we present numerical simulations that explain how the combined effect of spin-up and thermal boundary conditions enhances or hinders mixing of a fluid in a cylinder. The energy of the system is characterized by splitting the potential energy into diabatic and adiabatic components, and measurements of the efficiency of mixing are based on both the ratio of dissipation of available potential energy to forcing and the variance of temperature. The numerical simulations of the Navier-Stokes equations for the problem with different sets of thermal boundary conditions at the horizontal walls helped shed some light on the physical mechanisms of mixing, for which a clear explanation was absent.

  19. Thermal properties of Ni-substituted LaCoO{sub 3} perovskite

    SciTech Connect (OSTI)

    Thakur, Rasna; Thakur, Rajesh K.; Gaur, N. K.; Srivastava, Archana

    2014-04-24

    With the objective of exploring the unknown thermodynamic behavior of the LaCo{sub 1-x}Ni{sub x}O{sub 3} family, we present here an investigation of the temperature-dependent (10 K ≤ T ≤ 300 K) thermodynamic properties of LaCo{sub 1-x}Ni{sub x}O{sub 3} (x = 0.1, 0.3, 0.5). The specific heat of LaCoO{sub 3} with Ni doping at the B-site of the perovskite structure has been studied by means of a Modified Rigid Ion Model (MRIM). This replacement introduces large cation variance at the B-site; hence the specific heat increases appreciably. We report here, probably for the first time, the cohesive energy, Reststrahlen frequency, and Debye temperature (θ{sub D}) of LaCo{sub 1-x}Ni{sub x}O{sub 3} compounds.

  20. Cyberspace Security Econometrics System (CSES) - U.S. Copyright TXu 1-901-039

    SciTech Connect (OSTI)

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T; Lantz, Margaret W; Hauser, Katie R

    2014-01-01

    Information security continues to evolve in response to disruptive changes with a persistent focus on information-centric controls and a healthy debate about balancing endpoint and network protection, with a goal of improved enterprise/business risk management. Economic uncertainty, intensively collaborative styles of work, virtualization, increased outsourcing and ongoing compliance pressures require careful consideration and adaptation. The Cyberspace Security Econometrics System (CSES) provides a measure (i.e., a quantitative indication) of reliability, performance, and/or safety of a system that accounts for the criticality of each requirement as a function of one or more stakeholders' interests in that requirement. For a given stakeholder, CSES accounts for the variance that may exist among the stakes one attaches to meeting each requirement. The basis, objectives, and capabilities of the CSES, including inputs/outputs as well as the structural and mathematical underpinnings, are contained in this copyright.

  1. Analysis of turbulent transport and mixing in transitional Rayleigh–Taylor unstable flow using direct numerical simulation data

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Schilling, Oleg; Mueschke, Nicholas J.

    2010-10-18

    Data from a 1152 × 760 × 1280 direct numerical simulation (DNS) of a transitional Rayleigh-Taylor mixing layer modeled after a small Atwood number water channel experiment is used to comprehensively investigate the structure of mean and turbulent transport and mixing. The simulation had physical parameters and initial conditions approximating those in the experiment. The budgets of the mean vertical momentum, heavy-fluid mass fraction, turbulent kinetic energy, turbulent kinetic energy dissipation rate, heavy-fluid mass fraction variance, and heavy-fluid mass fraction variance dissipation rate equations are constructed using Reynolds averaging applied to the DNS data. The relative importance of mean and turbulent production, turbulent dissipation and destruction, and turbulent transport are investigated as a function of Reynolds number and across the mixing layer to provide insight into the flow dynamics not presently available from experiments. The analysis of the budgets supports the assumption for small Atwood number, Rayleigh/Taylor driven flows that the principal transport mechanisms are buoyancy production, turbulent production, turbulent dissipation, and turbulent diffusion (shear and mean field production are negligible). As the Reynolds number increases, the turbulent production in the turbulent kinetic energy dissipation rate equation becomes the dominant production term, while the buoyancy production plateaus. Distinctions between momentum and scalar transport are also noted, where the turbulent kinetic energy and its dissipation rate both grow in time and are peaked near the center plane of the mixing layer, while the heavy-fluid mass fraction variance and its dissipation rate initially grow and then begin to decrease as mixing progresses and reduces density fluctuations. All terms in the transport equations generally grow or decay, with no qualitative change in their profile, except for the pressure flux contribution to the total turbulent kinetic energy flux, which changes sign early in time (a countergradient effect). The production-to-dissipation ratios corresponding to the turbulent kinetic energy and heavy-fluid mass fraction variance are large and vary strongly at small evolution times, decrease with time, and nearly asymptote as the flow enters a self-similar regime. The late-time turbulent kinetic energy production-to-dissipation ratio is larger than observed in shear-driven turbulent flows. The order of magnitude estimates of the terms in the transport equations are shown to be consistent with the DNS at late-time, and also confirms both the dominant terms and their evolutionary behavior. Thus, these results are useful for identifying the dynamically important terms requiring closure, and assessing the accuracy of the predictions of Reynolds-averaged Navier-Stokes and large-eddy simulation models of turbulent transport and mixing in transitional Rayleigh-Taylor instability-generated flow.

  2. Model for spectral and chromatographic data

    DOE Patents [OSTI]

    Jarman, Kristin [Richland, WA]; Willse, Alan [Richland, WA]; Wahl, Karen [Richland, WA]; Wahl, Jon [Richland, WA]

    2002-11-26

    A method and apparatus using a spectral analysis technique are disclosed. In one form of the invention, probabilities are selected to characterize the presence (and in another form, also a quantification of a characteristic) of peaks in an indexed data set for samples that match a reference species, and other probabilities are selected for samples that do not match the reference species. An indexed data set is acquired for a sample, and a determination is made according to techniques exemplified herein as to whether the sample matches or does not match the reference species. When quantification of peak characteristics is undertaken, the model is appropriately expanded, and the analysis accounts for the characteristic model and data. Further techniques are provided to apply the methods and apparatuses to process control, cluster analysis, hypothesis testing, analysis of variance, and other procedures involving multiple comparisons of indexed data.

  3. Enhanced pinning in mixed rare earth-123 films

    DOE Patents [OSTI]

    Driscoll, Judith L. (Los Alamos, NM); Foltyn, Stephen R. (Los Alamos, NM)

    2009-06-16

    A superconductive article and a method of forming such an article are disclosed, the article including a substrate and a layer of a rare earth barium cuprate film upon the substrate, the rare earth barium cuprate film including two or more rare earth metals capable of yielding a superconductive composition where the ion size variance between the two or more rare earth metals is characterized as greater than zero and less than about 10.times.10.sup.-4, and the rare earth barium cuprate film including two or more rare earth metals is further characterized as having an enhanced critical current density in comparison to a standard YBa.sub.2Cu.sub.3O.sub.y composition under identical testing conditions.

  4. Intrinsic fluctuations of dust grain charge in multi-component plasmas

    SciTech Connect (OSTI)

    Shotorban, B.

    2014-03-15

    A master equation is formulated to model the states of the grain charge in a general multi-component plasma, where there are electrons and various kinds of positive or negative ions that are singly or multiply charged. A Fokker-Planck equation is developed from the master equation through the system-size expansion method. The Fokker-Planck equation has a Gaussian solution with a mean and variance governed by two initial-value differential equations involving the rates of the attachment of ions and electrons to the dust grain. Also, a Langevin equation and a discrete stochastic method are developed to model the time variation of the grain charge. Grain charging in a plasma containing electrons, protons, and alpha particles with Maxwellian distributions is considered as an example problem. The Gaussian solution is in very good agreement with the master equation solution numerically obtained for this problem.
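
    The structure of the moment equations can be illustrated with a small sketch: the mean charge evolves with the net attachment rate, and the variance with the total rate plus a feedback term from the charge dependence of the rates. The rate laws below are stand-ins chosen only to make the example self-contained; they are not the plasma model of the paper.

```python
import numpy as np

def rates(q):
    """Placeholder electron/ion attachment rates (1/s) as functions of grain charge number q."""
    r_electron = 50.0 * np.exp(min(q, 0.0) * 0.05)   # electrons repelled once q < 0
    r_ion = 30.0 * (1.0 + max(-q, 0.0) * 0.05)       # positive ions attracted once q < 0
    return r_electron, r_ion

mean, var, dt = 0.0, 0.0, 1e-4
for _ in range(20000):                               # simple Euler integration of the moment ODEs
    re, ri = rates(mean)
    re2, ri2 = rates(mean + 1e-3)
    drift_slope = ((ri2 - re2) - (ri - re)) / 1e-3   # d(net rate)/dq near the mean
    mean += dt * (ri - re)                           # d<q>/dt = ion rate - electron rate
    var += dt * ((re + ri) + 2.0 * drift_slope * var)
print("steady-state mean charge ~", round(mean, 1), " variance ~", round(var, 1))
```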

  5. System for monitoring non-coincident, nonstationary process signals

    DOE Patents [OSTI]

    Gross, Kenneth C.; Wegerich, Stephan W.

    2005-01-04

    An improved system for monitoring non-coincident, non-stationary process signals. The mean, variance, and length of a reference signal are defined by an automated system, followed by the identification of the leading and falling edges of a monitored signal and the length of the monitored signal. The monitored signal is compared to the reference signal, and the monitored signal is resampled in accordance with the reference signal. The reference signal is then correlated with the resampled monitored signal such that the reference signal and the resampled monitored signal are coincident in time with each other. The resampled monitored signal is then compared to the reference signal to determine whether the resampled monitored signal is within a set of predesignated operating conditions.
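
    The resample-then-correlate step can be sketched in a few lines: interpolate the monitored signal onto the reference grid, align it by cross-correlation, and compare. The signals below are synthetic, and the comparison metric (RMS deviation) is an assumed placeholder for the patented decision logic.

```python
import numpy as np

rng = np.random.default_rng(6)
reference = np.sin(np.linspace(0, 2 * np.pi, 200))
monitored = np.sin(np.linspace(0, 2 * np.pi, 140)) + rng.normal(0, 0.05, 140)

# Resample the monitored signal onto the reference's sample grid
resampled = np.interp(np.linspace(0, 1, reference.size),
                      np.linspace(0, 1, monitored.size), monitored)

# Align by cross-correlation, then compare against the reference
xcorr = np.correlate(resampled - resampled.mean(), reference - reference.mean(), mode="full")
lag = int(np.argmax(xcorr)) - (reference.size - 1)
aligned = np.roll(resampled, -lag)
rms_deviation = np.sqrt(np.mean((aligned - reference) ** 2))
print("lag:", lag, " RMS deviation:", round(float(rms_deviation), 3))
```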

  6. Studies of Cosmic Ray Composition and Air Shower Structure with the Pierre Auger Observatory

    SciTech Connect (OSTI)

    Abraham, J.; Abreu, P.; Aglietta, M.; Aguirre, C.; Ahn, E.J.; Allard, D.; Allekotte, I.; Allen, J.; Alvarez-Muniz, J.; Ambrosio, M.; Anchordoqui, L.

    2009-06-01

    These are presentations to be presented at the 31st International Cosmic Ray Conference, in Lodz, Poland during July 2009. It consists of the following presentations: (1) Measurement of the average depth of shower maximum and its fluctuations with the Pierre Auger Observatory; (2) Study of the nuclear mass composition of UHECR with the surface detectors of the Pierre Auger Observatory; (3) Comparison of data from the Pierre Auger Observatory with predictions from air shower simulations: testing models of hadronic interactions; (4) A Monte Carlo exploration of methods to determine the UHECR composition with the Pierre Auger Observatory; (5) The delay of the start-time measured with the Pierre Auger Observatory for inclined showers and a comparison of its variance with models; (6) UHE neutrino signatures in the surface detector of the Pierre Auger Observatory; and (7) The electromagnetic component of inclined air showers at the Pierre Auger Observatory.

  7. Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media

    SciTech Connect (OSTI)

    Müller, Florian; Jenny, Patrick; Meyer, Daniel W.

    2013-10-01

    Monte Carlo (MC) is a well known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
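
    A generic two-level illustration of the MLMC idea (not the streamline solver itself): combine many cheap coarse-level samples with a few fine-level correction samples evaluated on the same random inputs. The "solver" below is a stand-in function, and the level resolutions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def solve(permeability_sample, resolution):
    """Stand-in for a flow/transport quantity of interest computed at a given grid resolution."""
    return np.tanh(permeability_sample) + (0.5 / resolution) * rng.normal()

def mlmc_two_level(n_coarse=2000, n_fine=100):
    coarse = [solve(rng.normal(), resolution=16) for _ in range(n_coarse)]
    corrections = []
    for _ in range(n_fine):
        k = rng.normal()                              # same random input on both levels
        corrections.append(solve(k, resolution=64) - solve(k, resolution=16))
    return np.mean(coarse) + np.mean(corrections)     # E[fine] = E[coarse] + E[fine - coarse]

print("two-level MLMC estimate of the expected quantity of interest:", round(mlmc_two_level(), 3))
```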

  8. Fabrication of FCC-SiO{sub 2} colloidal crystals using the vertical convective self-assemble method

    SciTech Connect (OSTI)

    Castañeda-Uribe, O. A.; Salcedo-Reyes, J. C.; Méndez-Pinzón, H. A.; Pedroza-Rodríguez, A. M.

    2014-05-15

    In order to determine the optimal conditions for the growth of high-quality 250 nm-SiO{sub 2} colloidal crystals by the vertical convective self-assemble method, the Design of Experiments (DoE) methodology is applied. The influence of the evaporation temperature, the volume fraction, and the pH of the colloidal suspension is studied by means of an analysis of variance (ANOVA) in a 3{sup 3} factorial design. Characteristics of the stacking lattice of the resulting colloidal crystals are determined by scanning electron microscopy and angle-resolved transmittance spectroscopy. Quantitative results from the statistical test show that the temperature is the most critical factor influencing the quality of the colloidal crystal, obtaining highly ordered structures with FCC stacking lattice at a growth temperature of 40 °C.
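
    The factorial ANOVA step can be sketched generically with statsmodels; the response values for temperature, volume fraction, and pH below are fabricated for illustration and do not reproduce the study's measurements.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(8)
rows = []
for t, vf, ph in itertools.product([25, 40, 55], [0.1, 0.3, 0.5], [4, 7, 10]):
    quality = 5.0 + 0.15 * t - 2.0 * abs(vf - 0.3) + rng.normal(0, 0.5)   # fabricated response
    rows.append({"T": t, "VF": vf, "pH": ph, "quality": quality})
df = pd.DataFrame(rows)

# Treat each 3-level factor as categorical and test its effect on crystal quality
model = ols("quality ~ C(T) + C(VF) + C(pH)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```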

  9. ShowMe3D

    Energy Science and Technology Software Center (OSTI)

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  10. Influence of quasiparticle multi-tunneling on the energy flow through the superconducting tunnel junction

    SciTech Connect (OSTI)

    Samedov, V. V.; Tulinov, B. M.

    2011-07-01

    A superconducting tunnel junction (STJ) detector consists of two layers of superconducting material separated by a thin insulating barrier. An incident particle produces excess nonequilibrium quasiparticles in the superconductor. Each quasiparticle in a superconductor should be considered as a quantum superposition of electron-like and hole-like excitations. This dual nature of the quasiparticle leads to the effect of multi-tunneling: the quasiparticle starts to tunnel back and forth through the insulating barrier. After tunneling from the biased electrode, the quasiparticle loses its energy via phonon emission. Eventually, the energy that equals the difference in quasiparticle energy between the two electrodes is deposited in the signal electrode. Because of the process of multi-tunneling, one quasiparticle can deposit energy more than once. In this work, the theory of branching cascade processes was applied to the process of energy deposition caused by quasiparticle multi-tunneling. The formulae for the mean value and variance of the energy transferred by one quasiparticle into heat were derived. (authors)

  11. Characterizing cemented TRU waste for RCRA hazardous constituents

    SciTech Connect (OSTI)

    Yeamans, D.R.; Betts, S.E.; Bodenstein, S.A. [and others]

    1996-06-01

    Los Alamos National Laboratory (LANL) has characterized drums of solidified transuranic (TRU) waste from four major waste streams. The data will help the State of New Mexico determine whether or not to issue a no-migration variance for the Waste Isolation Pilot Plant (WIPP) so that WIPP can receive and dispose of waste. The need to characterize TRU waste stored at LANL is driven by two additional factors: (1) the LANL RCRA Waste Analysis Plan for EPA-compliant safe storage of hazardous waste; and (2) the WIPP Waste Acceptance Criteria (WAC). The LANL characterization program includes headspace gas analysis, radioassay, and radiography for all drums, and solids sampling on a random selection of drums from each waste stream. Data are presented showing that the only identified non-metal RCRA hazardous component of the waste is methanol.

  12. Using star tracks to determine the absolute pointing of the Fluorescence Detector telescopes of the Pierre Auger Observatory

    SciTech Connect (OSTI)

    De Donato, Cinzia; Sanchez, Federico; Santander, Marcos; Natl.Tech.U., San Rafael; Camin, Daniel; Garcia, Beatriz; Grassi, Valerio; /Milan U. /INFN, Milan

    2005-05-01

    To accurately reconstruct a shower axis from the Fluorescence Detector data, it is essential to establish with high precision the absolute pointing of the telescopes. To do that, they calculate the absolute pointing of a telescope using sky background data acquired during regular data-taking periods. The method is based on the knowledge of bright stars' coordinates, which provide a reliable and stable coordinate system. It can be used to check the telescopes' absolute pointing and its long-term stability during the whole life of the project, estimated at 20 years. They have analyzed background data taken from January to October 2004 to determine the absolute pointing of the 12 telescopes installed at both Los Leones and Coihueco. The method is based on the determination of the mean-time of the variance signal left by a star traversing a PMT's photocathode, which is compared with the mean-time obtained by simulating the track of that star on the same pixel.

  13. Method and apparatus for detection of chemical vapors

    DOE Patents [OSTI]

    Mahurin, Shannon Mark (Knoxville, TN); Dai, Sheng (Knoxville, TN); Caja, Josip (Knoxville, TN)

    2007-05-15

    The present invention is a gas detector and a method for using the gas detector for detecting and identifying volatile organic and/or volatile inorganic substances present in unknown vapors in an environment. The gas detector comprises a sensing means and a detecting means for detecting the electrical capacitance variance of the sensing means and for further identifying the volatile organic and volatile inorganic substances. The sensing means comprises at least one sensing unit and a sensing material allocated within the sensing unit. The sensing material is an ionic liquid which is exposed to the environment and is capable of dissolving a quantity of said volatile substance upon exposure thereto. The sensing means constitutes an electrochemical capacitor, and the detecting means is in electrical communication with the sensing means.

  14. Application of Entry-Time Processes to Asset Management in Nuclear Power Plants

    SciTech Connect (OSTI)

    Nelson, Paul; Wang, Shuwen; Kee, Ernie J.

    2006-07-01

    The entry-time approach to dynamic reliability is based upon computational solution of the Chapman-Kolmogorov (generalized state-transition) equations underlying a certain class of marked point processes. Previous work has verified a particular finite-difference approach to computational solution of these equations. The objective of this work is to illustrate the potential application of the entry-time approach to risk-informed asset management (RIAM) decisions regarding maintenance or replacement of major systems within a plant. Results are presented in the form of plots, with replacement/maintenance period as a parameter, of expected annual revenue, along with annual variance and annual skewness as indicators of associated risks. Present results are for a hypothetical system, to illustrate the capability of the approach, but some considerations related to potential application of this approach to nuclear power plants are discussed. (authors)

  15. Validated Models for Radiation Response and Signal Generation in Scintillators: Final Report

    SciTech Connect (OSTI)

    Kerisit, Sebastien N.; Gao, Fei; Xie, YuLong; Campbell, Luke W.; Van Ginhoven, Renee M.; Wang, Zhiguo; Prange, Micah P.; Wu, Dangxin

    2014-12-01

    This Final Report presents work carried out at Pacific Northwest National Laboratory (PNNL) under the project entitled "Validated Models for Radiation Response and Signal Generation in Scintillators" (project number PL10-Scin-theor-PD2Jf), led by Drs. Fei Gao and Sebastien N. Kerisit. This project was divided into four tasks: (1) electronic response functions (ab initio data model); (2) electron-hole yield, variance, and spatial distribution; (3) ab initio calculations of information carrier properties; and (4) transport of electron-hole pairs and scintillation efficiency. Detailed information on the results obtained in each of the four tasks is provided in this Final Report. Furthermore, published peer-reviewed articles based on the work carried out under this project are included in the appendix. This work was supported by the National Nuclear Security Administration, Office of Nuclear Nonproliferation Research and Development (DNN R&D/NA-22), of the U.S. Department of Energy (DOE).

  16. Seasonal cycle dependence of temperature fluctuations in the atmosphere. Master's thesis

    SciTech Connect (OSTI)

    Tobin, B.F.

    1994-08-01

    The correlation statistics of meteorological fields have been of interest in weather forecasting for many years and are also of interest in climate studies. A better understanding of the seasonal variation of correlation statistics can be used to determine how the seasonal cycle of temperature fluctuations should be simulated in noise-forced energy balance models. It is shown that the length scale does have a seasonal dependence and will have to be handled through the seasonal modulation of other coefficients in noise-forced energy balance models. The temperature field variance and spatial correlation fluctuations exhibit seasonality with fluctuation amplitudes larger in the winter hemisphere and over land masses. Another factor contributing to seasonal differences is the larger solar heating gradient in the winter.

  17. Doppler Lidar Vertical Velocity Statistics Value-Added Product

    SciTech Connect (OSTI)

    Newsom, RK; Sivaraman, C; Shippert, TR; Riihimaki, LD

    2015-07-01

    Measurements of vertical velocity fluctuations are crucial for improved understanding of turbulent mixing and diffusion, convective initiation, and cloud life cycles. The Atmospheric Radiation Measurement (ARM) Climate Research Facility operates coherent Doppler lidar systems at several sites around the globe. These instruments provide measurements of clear-air vertical velocity profiles in the lower troposphere with a nominal temporal resolution of 1 sec and height resolution of 30 m. The purpose of the Doppler lidar vertical velocity statistics (DLWSTATS) value-added product (VAP) is to produce height- and time-resolved estimates of vertical velocity variance, skewness, and kurtosis from these raw measurements. The VAP also produces estimates of cloud properties, including cloud-base height (CBH), cloud frequency, cloud-base vertical velocity, and cloud-base updraft fraction.

  18. Temporary Cementitious Sealers in Enhanced Geothermal Systems

    SciTech Connect (OSTI)

    Sugama T.; Pyatina, T.; Butcher, T.; Brothers, L.; Bour, D.

    2011-12-31

    Unlike conventional hydrothermal geothermal technology, which utilizes hot water as the energy conversion resource tapped from a natural hydrothermal reservoir located {approx}10 km below the ground surface, an Enhanced Geothermal System (EGS) must create a hydrothermal reservoir in a hot rock stratum at temperatures {ge}200 °C, present {approx}5 km deep underground, by employing hydraulic fracturing. This is the process of initiating and propagating a fracture, as well as opening pre-existing fractures, in a rock layer. In this operation, considerable attention is paid to the pre-existing fractures and the pressure-generated ones made in the underground foundation during drilling and logging. These fractures, in terms of lost circulation zones, often cause the wastage of a substantial amount of the circulated water-based drilling fluid or mud. Thus, such lost circulation zones must be plugged by sealing materials so that the drilling operation can resume and continue. Next, one important consideration is the fact that the sealers must be disintegrated by highly pressured water to reopen the plugged fractures and to promote the propagation of the reopened fractures. In response to this need, the objective of this Phase I project in FYs 2009-2011 was to develop temporary cementitious fracture-sealing materials possessing self-degradable properties generated when the {ge}200 °C-heated sealers came in contact with water. At BNL, we formulated two types of non-Portland cementitious systems using inexpensive industrial by-products with pozzolanic properties, such as granulated blast-furnace slag from the steel industries and fly ashes from coal-combustion power plants. These by-products were activated by sodium silicate to initiate their pozzolanic reactions and to create a cementitious structure. One developed system was sodium silicate alkali-activated slag/Class C fly ash (AASC); the other was sodium silicate alkali-activated slag/Class F fly ash (AASF) as the binder of the temporary sealers. In addition to sodium silicate as the alkaline activator, two specific additives were developed in this project: one was sodium carboxymethyl cellulose (CMC) as a self-degradation-promoting additive; the other was hard-burned magnesium oxide (MgO), made by calcining at 1,000-1,500 °C, as an expansive additive. The AASC and AASF cementitious sealers made by incorporating an appropriate amount of these additives met the following six criteria: 1) one dry-mix component product; 2) plastic viscosity of 20 to 70 cP at 300 rpm; 3) maintenance of pumpability for at least 1 hour at 85 °C; 4) compressive strength >2000 psi; 5) self-degradable by injection with water at a certain pressure; and 6) expandable and swelling properties, {ge}0.5% of the total volume of the sealer.

  19. Dependence of liquefaction behavior on coal characteristics. Part VI. Relationship of liquefaction behavior of a set of high sulfur coals to chemical structural characteristics. Final technical report, March 1981 to February 1984

    SciTech Connect (OSTI)

    Neill, P. H.; Given, P. H.

    1984-09-01

    The initial aim of this research was to use empirical mathematical relationships to formulate a better understanding of the processes involved in the liquefaction of a set of medium rank high sulfur coals. In all, just over 50 structural parameters and yields of product classes were determined. In order to gain a more complete understanding of the empirical relationships between the various properties, a number of relatively complex statistical procedures and tests were applied to the data, mostly selected from the field of multivariate analysis. These can be broken down into two groups. The first group included grouping techniques such as non-linear mapping, hierarchical and tree clustering, and linear discriminant analyses. These techniques were utilized in determining if more than one statistical population was present in the data set; it was concluded that there was not. The second group of techniques included factor analysis and stepwise multivariate linear regressions. Linear discriminant analyses were able to show that five distinct groups of coals were represented in the data set. However, only seven of the properties seemed to follow this trend. The chemical property that appeared to follow the trend most closely was the aromaticity, where a series of five parallel straight lines was observed for a plot of f{sub a} versus carbon content. The factor patterns for each of the product classes indicated that although each of the individual product classes tended to load on factors defined by specific chemical properties, the yields of the broader product classes, such as total conversion to liquids + gases and conversion to asphaltenes, tended to load largely on factors defined by rank. The variance explained and the communalities tended to be relatively low. Evidently important sources of variance have still to be found.

  20. Observations of the scale-dependent turbulence and evaluation of the flux-gradient relationship for sensible heat for a closed Douglas-Fir canopy in very weak wind conditions

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Vickers, D.; Thomas, C.

    2014-05-13

    Observations of the scale-dependent turbulent fluxes and variances above, within and beneath a tall closed Douglas-Fir canopy in very weak winds are examined. The daytime subcanopy vertical velocity spectra exhibit a double-peak structure with peaks at time scales of 0.8 s and 51.2 s. A double-peak structure is also observed in the daytime subcanopy heat flux cospectra. The daytime momentum flux cospectra inside the canopy and in the subcanopy are characterized by a relatively large cross-wind component, likely due to the extremely light and variable winds, such that the definition of a mean wind direction, and subsequent partitioning of the momentum flux into along- and cross-wind components, has little physical meaning. Positive values of both momentum flux components in the subcanopy contribute to upward transfer of momentum, consistent with the observed mean wind speed profile. In the canopy at night at the smallest resolved scales, we find relatively large momentum fluxes (compared to at larger scales), and increasing vertical velocity variance with decreasing time scale, consistent with very small eddies likely generated by wake shedding from the canopy elements that transport momentum but not heat. We find unusually large values of the velocity aspect ratio within the canopy, consistent with enhanced suppression of the horizontal wind components compared to the vertical by the canopy. The flux-gradient approach for sensible heat flux is found to be valid for the subcanopy and above-canopy layers when considered separately; however, single source approaches that ignore the canopy fail because they make the heat flux appear to be counter-gradient when in fact it is aligned with the local temperature gradient in both the subcanopy and above-canopy layers. Modeled sensible heat fluxes above dark warm closed canopies are likely underestimated using typical values of the Stanton number.

  1. Reconstruction of signals with unknown spectra in information field theory with parameter uncertainty

    SciTech Connect (OSTI)

    Ensslin, Torsten A.; Frommert, Mona [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2011-05-15

    The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with an unknown power spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loeve and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.

  2. Transit light curves with finite integration time: Fisher information analysis

    SciTech Connect (OSTI)

    Price, Ellen M.; Rogers, Leslie A.

    2014-10-10

    Kepler has revolutionized the study of transiting planets with its unprecedented photometric precision on more than 150,000 target stars. Most of the transiting planet candidates detected by Kepler have been observed as long-cadence targets with 30 minute integration times, and the upcoming Transiting Exoplanet Survey Satellite will record full frame images with a similar integration time. Integrations of 30 minutes affect the transit shape, particularly for small planets and in cases of low signal to noise. Using the Fisher information matrix technique, we derive analytic approximations for the variances and covariances on the transit parameters obtained from fitting light curve photometry collected with a finite integration time. We find that binning the light curve can significantly increase the uncertainties and covariances on the inferred parameters when comparing scenarios with constant total signal to noise (constant total integration time in the absence of read noise). Uncertainties on the transit ingress/egress time increase by a factor of 34 for Earth-size planets and 3.4 for Jupiter-size planets around Sun-like stars for integration times of 30 minutes compared to instantaneously sampled light curves. Similarly, uncertainties on the mid-transit time for Earth and Jupiter-size planets increase by factors of 3.9 and 1.4. Uncertainties on the transit depth are largely unaffected by finite integration times. While correlations among the transit depth, ingress duration, and transit duration all increase in magnitude with longer integration times, the mid-transit time remains uncorrelated with the other parameters. We provide code in Python and Mathematica for predicting the variances and covariances at www.its.caltech.edu/~eprice.
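
    The core of the Fisher-matrix machinery in this abstract is generic: for Gaussian white noise, F{sub ij} = Σ{sub k} (∂m{sub k}/∂p{sub i})(∂m{sub k}/∂p{sub j})/σ{sup 2} and the parameter covariance is F{sup -1}. The sketch below applies this to a toy trapezoidal transit that is box-car averaged over a 30-minute exposure; the model and the numbers are stand-ins, not the analytic expressions of the paper.

        import numpy as np

        def transit_model(params, t, t_exp=30.0 / 1440.0, n_sub=15):
            # Toy trapezoidal transit: mid-time t0, total duration T, ingress
            # duration tau, depth d, box-car averaged over the exposure t_exp (days).
            t0, T, tau, d = params
            offsets = (np.arange(n_sub) - (n_sub - 1) / 2.0) / n_sub * t_exp
            flux = np.zeros(t.size)
            for dt in offsets:
                x = np.abs(t + dt - t0)
                depth_frac = np.clip((T / 2.0 - x) / tau, 0.0, 1.0)
                flux += 1.0 - d * depth_frac
            return flux / n_sub

        def fisher_covariance(params, t, sigma):
            # F_ij = sum_k dm_k/dp_i * dm_k/dp_j / sigma^2;  Cov = inv(F).
            eps = 1e-7
            grads = []
            for i in range(len(params)):
                dp = np.zeros(len(params))
                dp[i] = eps
                grads.append((transit_model(params + dp, t) -
                              transit_model(params - dp, t)) / (2.0 * eps))
            grads = np.asarray(grads)
            return np.linalg.inv(grads @ grads.T / sigma ** 2)

        t = np.linspace(-0.3, 0.3, 600)               # days around mid-transit
        params = np.array([0.0, 0.12, 0.01, 0.01])    # t0, T, tau, depth
        cov = fisher_covariance(params, t, sigma=1e-4)
        print(np.sqrt(np.diag(cov)))                  # 1-sigma errors on t0, T, tau, d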

  3. Probabilistic cost estimation methods for treatment of water extracted during CO2 storage and EOR

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Graham, Enid J. Sullivan; Chu, Shaoping; Pawar, Rajesh J.

    2015-08-08

    Extraction and treatment of in situ water can minimize risk for large-scale CO2 injection in saline aquifers during carbon capture, utilization, and storage (CCUS), and for enhanced oil recovery (EOR). Additionally, treatment and reuse of oil and gas produced waters for hydraulic fracturing will conserve scarce fresh-water resources. Each treatment step, including transportation and waste disposal, generates economic and engineering challenges and risks; these steps should be factored into a comprehensive assessment. We expand the water treatment model (WTM) coupled within the sequestration system model CO2-PENS and use chemistry data from seawater and proposed injection sites in Wyoming, to demonstrate the relative importance of different water types on costs, including little-studied effects of organic pretreatment and transportation. We compare the WTM with an engineering water treatment model, utilizing energy costs and transportation costs. Specific energy costs for treatment of Madison Formation brackish and saline base cases and for seawater compared closely between the two models, with moderate differences for scenarios incorporating energy recovery. Transportation costs corresponded for all but low flow scenarios (<5000 m{sup 3}/d). Some processes that have high costs (e.g., truck transportation) do not contribute the most variance to overall costs. Other factors, including feed-water temperature and water storage costs, are more significant contributors to variance. These results imply that the WTM can provide good estimates of treatment and related process costs (AACEI equivalent level 5, concept screening, or level 4, study or feasibility), and the complex relationships between processes when extracted waters are evaluated for use during CCUS and EOR site development.

  4. Investigation of advanced UQ for CRUD prediction with VIPRE.

    SciTech Connect (OSTI)

    Eldred, Michael Scott

    2011-09-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L{sup 2} (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10{sup 0})-O(10{sup 1}) random variables to O(10{sup 2}) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
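
    For a single Gaussian random input, the non-intrusive polynomial chaos idea reduces to projecting the model response onto Hermite polynomials with a quadrature rule and reading the mean and variance off the coefficients. The sketch below does exactly that for a placeholder response function; the dimension-adaptive p-refinement and the VIPRE models themselves are not reproduced here.

        import numpy as np
        from math import factorial
        from numpy.polynomial.hermite_e import hermegauss, hermeval

        def response(x):
            # Placeholder model response of one standard-normal input variable.
            return np.exp(0.3 * x) + 0.1 * x ** 2

        order = 6                                  # expansion order
        nodes, weights = hermegauss(order + 1)     # probabilists' Gauss-Hermite rule
        weights = weights / np.sqrt(2.0 * np.pi)   # normalize to the N(0,1) density

        # Project onto He_k: c_k = E[f(X) He_k(X)] / k!  (since E[He_k^2] = k!).
        coeffs = []
        for k in range(order + 1):
            basis = hermeval(nodes, [0.0] * k + [1.0])
            coeffs.append(np.sum(weights * response(nodes) * basis) / factorial(k))

        mean = coeffs[0]
        variance = sum(c ** 2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
        print(mean, variance)                      # statistics implied by the expansion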

  5. Probabilistic cost estimation methods for treatment of water extracted during CO2 storage and EOR

    SciTech Connect (OSTI)

    Graham, Enid J. Sullivan; Chu, Shaoping; Pawar, Rajesh J.

    2015-08-08

    Extraction and treatment of in situ water can minimize risk for large-scale CO2 injection in saline aquifers during carbon capture, utilization, and storage (CCUS), and for enhanced oil recovery (EOR). Additionally, treatment and reuse of oil and gas produced waters for hydraulic fracturing will conserve scarce fresh-water resources. Each treatment step, including transportation and waste disposal, generates economic and engineering challenges and risks; these steps should be factored into a comprehensive assessment. We expand the water treatment model (WTM) coupled within the sequestration system model CO2-PENS and use chemistry data from seawater and proposed injection sites in Wyoming, to demonstrate the relative importance of different water types on costs, including little-studied effects of organic pretreatment and transportation. We compare the WTM with an engineering water treatment model, utilizing energy costs and transportation costs. Specific energy costs for treatment of Madison Formation brackish and saline base cases and for seawater compared closely between the two models, with moderate differences for scenarios incorporating energy recovery. Transportation costs corresponded for all but low flow scenarios (<5000 m{sup 3}/d). Some processes that have high costs (e.g., truck transportation) do not contribute the most variance to overall costs. Other factors, including feed-water temperature and water storage costs, are more significant contributors to variance. These results imply that the WTM can provide good estimates of treatment and related process costs (AACEI equivalent level 5, concept screening, or level 4, study or feasibility), and the complex relationships between processes when extracted waters are evaluated for use during CCUS and EOR site development.

  6. Quality by design in the nuclear weapons complex

    SciTech Connect (OSTI)

    Ikle, D.N.

    1988-04-01

    Modern statistical quality control has evolved beyond the point at which control charts and sampling plans are sufficient to maintain a competitive position. The work of Genichi Taguchi in the early 1970's has inspired a renewed interest in the application of statistical methods of experimental design at the beginning of the manufacturing cycle. While there has been considerable debate over the merits of some of Taguchi's statistical methods, there is increasing agreement that his emphasis on cost and variance reduction is sound. The key point is that manufacturing processes can be optimized in development before they get to production by identifying a region in the process parameter space in which the variance of the process is minimized. Therefore, for performance characteristics having a convex loss function, total product cost is minimized without substantially increasing the cost of production. Numerous examples of the use of this approach in the United States and elsewhere are available in the literature. At the Rocky Flats Plant, where there are severe constraints on the resources available for development, a systematic development strategy has been developed to make efficient use of those resources to statistically characterize critical production processes before they are introduced into production. This strategy includes the sequential application of fractional factorial and response surface designs to model the features of critical processes as functions of both process parameters and production conditions. This strategy forms the basis for a comprehensive quality improvement program that emphasizes prevention of defects throughout the product cycle. It is currently being implemented on weapons programs in development at Rocky Flats and is in the process of being applied at other production facilities in the DOE weapons complex. 63 refs.

  7. TU-F-18A-02: Iterative Image-Domain Decomposition for Dual-Energy CT

    SciTech Connect (OSTI)

    Niu, T; Dong, X; Petrongolo, M; Zhu, L

    2014-06-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.

  8. SU-F-18C-15: Model-Based Multiscale Noise Reduction On Low Dose Cone Beam Projection

    SciTech Connect (OSTI)

    Yao, W; Farr, J

    2014-06-15

    Purpose: To improve the image quality of low-dose cone beam CT for patient positioning in radiation therapy. Methods: In low-dose cone beam CT (CBCT) imaging systems, a Poisson process governs the randomness of photon fluence at the x-ray source and at the detector because of the independent binomial process of photon absorption in the medium. On a CBCT projection, the variance of the fluence consists of the variance of the noiseless imaging structure and that of the Poisson noise, which is proportional to the mean (noiseless) fluence at the detector. This requires multiscale filters to smooth noise while keeping the structure information of the imaged object. We used a mathematical model of the Poisson process to design multiscale filters and established the balance between noise correction and structure blurring. The algorithm was checked with low-dose kilovoltage CBCT projections acquired from a Varian OBI system. Results: The investigation of low-dose CBCT of a Catphan phantom and patients showed that our model-based multiscale technique could efficiently reduce noise while keeping the fine structure of the imaged object. After the image processing, the number of visible line pairs in the Catphan phantom scanned with a 4 ms pulse time was similar to that scanned with 32 ms, and soft tissue structure from simulated 4 ms patient head-and-neck images was also comparable with scanned 20 ms ones. Compared with the fixed-scale technique, the image quality from the multiscale one was improved. Conclusion: Use of projection-specific multiscale filters can reach a better balance between noise reduction and structure information loss. The image quality of low-dose CBCT can be improved by using multiscale filters.

  9. On the reliability of microvariability tests in quasars

    SciTech Connect (OSTI)

    De Diego, José A.

    2014-11-01

    Microvariations probe the physics and internal structure of quasars. Unpredictability and small flux variations make this phenomenon elusive and difficult to detect. Variance-based probes such as the C and F tests, or a combination of both, are popular methods to compare the light curves of the quasar and a comparison star. Recently, detection claims in some studies have depended on the agreement of the results of the C and F tests, or of two instances of the F-test, for rejecting the non-variation null hypothesis. However, the C-test is a non-reliable statistical procedure, the F-test is not robust, and the combination of tests with concurrent results is anything but a straightforward methodology. A priori power analysis calculations and post hoc analysis of Monte Carlo simulations show excellent agreement for the analysis of variance test to detect microvariations as well as the limitations of the F-test. Additionally, the combined tests yield correlated probabilities that make the assessment of statistical significance unworkable. However, it is possible to include data from several field stars to enhance the power in a single F-test, increasing the reliability of the statistical analysis. This would be the preferred methodology when several comparison stars are available. An example using two stars and the enhanced F-test is presented. These results show the importance of using adequate methodologies and avoiding inappropriate procedures that can jeopardize microvariability detections. Power analysis and Monte Carlo simulations are useful tools for research planning, as they can demonstrate the robustness and reliability of different research approaches.
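
    A sketch of the variance-ratio test being discussed, including a simplified version of the 'enhanced' F-test in which several comparison stars contribute to the denominator and to its degrees of freedom; the light curves are synthetic and the pooling below is an illustrative reading of the idea, not the paper's exact prescription.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n = 60                                           # points per light curve
        quasar = rng.normal(scale=1.2, size=n)           # differential quasar light curve
        stars = [rng.normal(scale=1.0, size=n) for _ in range(2)]   # comparison stars

        # Classical F-test: quasar variance against a single comparison star.
        f_single = quasar.var(ddof=1) / stars[0].var(ddof=1)
        p_single = stats.f.sf(f_single, n - 1, n - 1)

        # Enhanced variant: pool the comparison-star variances, which raises the
        # denominator degrees of freedom and stabilizes the reference variance.
        pooled = np.mean([s.var(ddof=1) for s in stars])
        f_enh = quasar.var(ddof=1) / pooled
        p_enh = stats.f.sf(f_enh, n - 1, len(stars) * (n - 1))

        print(p_single, p_enh)          # smaller p-value -> stronger variability evidence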

  10. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    SciTech Connect (OSTI)

    Arampatzis, Georgios; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 ; Katsoulakis, Markos A.

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated (coupled) stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.

  11. Bulk Data Mover

    Energy Science and Technology Software Center (OSTI)

    2011-01-03

    Bulk Data Mover (BDM) is a high-level data transfer management tool. BDM handles the issue of a large variance in file sizes and a big portion of small files by managing the file transfers with optimized transfer queue and concurrency management algorithms. For example, climate simulation data sets are characterized by a large volume of files with extreme variance in file sizes. The BDM achieves high performance using a variety of techniques, including multi-threaded concurrent transfer connections, data channel caching, load balancing over multiple transfer servers, and storage I/O pre-fetching. Logging information from the BDM is collected and analyzed to study the effectiveness of the transfer management algorithms. The BDM can accept a request composed of multiple files or an entire directory. The request also contains the target site and directory where the replicated files will reside. If a directory is provided at the source, then the BDM will replicate the structure of the source directory at the target site. The BDM is capable of transferring multiple files concurrently as well as using parallel TCP streams. The optimal level of concurrency or parallel streams depends on the bandwidth capacity of the storage systems at both ends of the transfer as well as the achievable bandwidth of the wide-area network. Hardware req.: PC, MAC, multi-platform & workstation; software req.: compile/version - Java 1.50_x or above; type of files: source code, executable modules, installation instructions, other, user guide; URL: http://sdm.lbl.gov/bdm/
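
    The queue-plus-concurrency idea (many small files mixed with a few huge ones) can be caricatured in a few lines: sort so that the large transfers start early and bound concurrency with a worker pool. The file list, the fake transfer function, and the concurrency level below are placeholders, not the BDM implementation.

        from concurrent.futures import ThreadPoolExecutor
        import time

        def transfer(name, size_mb):
            # Placeholder for a real transfer call (e.g., a parallel-stream copy).
            time.sleep(size_mb / 1000.0)     # pretend ~1 GB/s per connection
            return name

        # Hypothetical request: a few large files plus a long tail of small ones.
        files = [("sim_big_%d.nc" % i, 800) for i in range(3)] + \
                [("diag_%04d.nc" % i, 5) for i in range(200)]

        # Largest first, so big transfers overlap with the tail of small files
        # instead of serializing at the end of the request.
        files.sort(key=lambda f: f[1], reverse=True)

        with ThreadPoolExecutor(max_workers=8) as pool:   # concurrency limit
            for name in pool.map(lambda f: transfer(*f), files):
                pass                                      # collect/log results here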

  12. TYPE Ia SUPERNOVA REMNANT SHELL AT z = 3.5 SEEN IN THE THREE SIGHTLINES TOWARD THE GRAVITATIONALLY LENSED QSO B1422+231

    SciTech Connect (OSTI)

    Hamano, Satoshi; Kobayashi, Naoto [Institute of Astronomy, University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan); Kondo, Sohei [Koyama Astronomical Observatory, Kyoto-Sangyo University, Motoyama, Kamigamo, Kita-Ku, Kyoto 603-8555 (Japan); Tsujimoto, Takuji [National Astronomical Observatory of Japan and Department of Astronomical Science, Graduate University for Advanced Studies, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan); Okoshi, Katsuya [Faculty of Industrial Science and Technology, Tokyo University of Science, 102-1 Tomino, Oshamanbe, Hokkaido 049-3514 (Japan); Shigeyama, Toshikazu, E-mail: hamano@ioa.s.u-tokyo.ac.jp [Research Center for the Early Universe, University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan)

    2012-08-01

    Using the Subaru 8.2 m Telescope with the IRCS Echelle spectrograph, we obtained high-resolution (R = 10,000) near-infrared (1.01-1.38 {mu}m) spectra of images A and B of the gravitationally lensed QSO B1422+231 (z = 3.628) consisting of four known lensed images. We detected Mg II absorption lines at z = 3.54, which show a large variance of column densities ({approx}0.3 dex) and velocities ({approx}10 km s{sup -1}) between sightlines A and B with a projected separation of only 8.4h{sup -1}{sub 70} pc at that redshift. This is the smallest spatial structure of the high-z gas clouds ever detected after Rauch et al. found a 20 pc scale structure for the same z = 3.54 absorption system using optical spectra of images A and C. The observed systematic variances imply that the system is an expanding shell as originally suggested by Rauch et al. By combining the data for three sightlines, we managed to constrain the radius and expansion velocity of the shell ({approx}50-100 pc, 130 km s{sup -1}), concluding that the shell is truly a supernova remnant (SNR) rather than other types of shell objects, such as a giant H II region. We also detected strong Fe II absorption lines for this system, but with much broader Doppler width than that of {alpha}-element lines. We suggest that this Fe II absorption line originates in a localized Fe II-rich gas cloud that is not completely mixed with plowed ambient interstellar gas clouds showing other {alpha}-element low-ion absorption lines. Along with the Fe richness, we conclude that the SNR is produced by an SN Ia explosion.

  13. Coordinating Garbage Collection for Arrays of Solid-state Drives

    SciTech Connect (OSTI)

    Kim, Youngjae; Lee, Junghee; Oral, H Sarp; Dillow, David A; Wang, Feiyi; Shipman, Galen M

    2014-01-01

    Although solid-state drives (SSDs) offer significant performance improvements over hard disk drives (HDDs) for a number of workloads, they can exhibit substantial variance in request latency and throughput as a result of garbage collection (GC). When GC conflicts with an I/O stream, the stream can make no forward progress until the GC cycle completes. GC cycles are scheduled by logic internal to the SSD based on several factors such as the pattern, frequency, and volume of write requests. When SSDs are used in a RAID with currently available technology, the lack of coordination of the SSD-local GC cycles amplifies this performance variance. We propose a global garbage collection (GGC) mechanism to improve response times and reduce performance variability for a RAID of SSDs. We include a high-level design of an SSD-aware RAID controller and GGC-capable SSD devices, along with algorithms to coordinate the GGC cycles. We develop reactive and proactive GC coordination algorithms and evaluate their I/O performance and block erase counts for various workloads. Our simulations show that GC coordination by a reactive scheme improves average response time and reduces performance variability for a wide variety of enterprise workloads. For bursty, write-dominated workloads, response time was improved by 69% and performance variability was reduced by 71%. We show that a proactive GC coordination algorithm can further improve the I/O response times by up to 9% and the performance variability by up to 15%. We also observe that it could increase the lifetimes of SSDs with some workloads (e.g., Financial) by reducing the number of block erase counts by up to 79% relative to a reactive algorithm for write-dominant enterprise workloads.
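
    In spirit, the reactive scheme can be reduced to: when any device in the array signals that it needs garbage collection, trigger GC on every device in the same window so the stalls overlap rather than striping across time. The toy controller below captures only that idea; the device interface and thresholds are invented, and the paper's controller design is considerably more detailed.

        class SSD:
            def __init__(self, name, gc_threshold=0.75):
                self.name = name
                self.stale_fraction = 0.0
                self.gc_threshold = gc_threshold

            def needs_gc(self):
                return self.stale_fraction >= self.gc_threshold

            def run_gc(self):
                self.stale_fraction = 0.0     # pretend GC reclaimed the stale blocks
                print("GC on", self.name)

        class ReactiveGGCController:
            # Reactive global GC: if any device needs GC, schedule GC on all of them.
            def __init__(self, ssds):
                self.ssds = ssds

            def after_write(self, device_index, amount):
                self.ssds[device_index].stale_fraction += amount
                if any(d.needs_gc() for d in self.ssds):
                    for d in self.ssds:
                        d.run_gc()            # overlap the stalls across devices

        array = ReactiveGGCController([SSD("ssd%d" % i) for i in range(4)])
        for i in range(40):
            array.after_write(i % 4, 0.1)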

  14. A stochastic extension of the explicit algebraic subgrid-scale models

    SciTech Connect (OSTI)

    Rasam, A.; Brethouwer, G.; Johansson, A. V.

    2014-05-15

    The explicit algebraic subgrid-scale (SGS) stress model (EASM) of Marstorp et al. [Explicit algebraic subgrid stress models with application to rotating channel flow, J. Fluid Mech. 639, 403-432 (2009)] and explicit algebraic SGS scalar flux model (EASFM) of Rasam et al. [An explicit algebraic model for the subgrid-scale passive scalar flux, J. Fluid Mech. 721, 541-577 (2013)] are extended with stochastic terms based on the Langevin equation formalism for the subgrid-scales by Marstorp et al. [A stochastic subgrid model with application to turbulent flow and scalar mixing, Phys. Fluids 19, 035107 (2007)]. The EASM and EASFM are nonlinear mixed and tensor eddy-diffusivity models, which improve large eddy simulation (LES) predictions of the mean flow, Reynolds stresses, and scalar fluxes of wall-bounded flows compared to isotropic eddy-viscosity and eddy-diffusivity SGS models, especially at coarse resolutions. The purpose of the stochastic extension of the explicit algebraic SGS models is to further improve the characteristics of the kinetic energy and scalar variance SGS dissipation, which are key quantities that govern the small-scale mixing and dispersion dynamics. LES of turbulent channel flow with passive scalar transport shows that the stochastic terms enhance SGS dissipation statistics such as length scale, variance, and probability density functions and introduce a significant amount of backscatter of energy from the subgrid to the resolved scales without causing numerical stability problems. The improvements in the SGS dissipation predictions in turn enhance the predicted resolved statistics such as the mean scalar, scalar fluxes, Reynolds stresses, and correlation lengths. Moreover, the nonalignment between the SGS stress and resolved strain-rate tensors predicted by the EASM with stochastic extension is in much closer agreement with direct numerical simulation data.

  15. Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation

    SciTech Connect (OSTI)

    Hines, J. Wesley; Upadhyaya, Belle; Sharp, Michael; Ramuhalli, Pradeep; Jeffries, Brien; Nam, Alan; Strong, Eric; Tong, Matthew; Welz, Zachary; Barbieri, Federico; Langford, Seth; Meinweiser, Gregory; Weeks, Matthew

    2014-11-06

    On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning of component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds. The ultimate goal of prognostics is to provide an accurate assessment for RUL predictions, with as little uncertainty as possible. From a reliability and maintenance standpoint, there would be improved safety by avoiding all failures. Calculated risk would decrease, saving money by avoiding unnecessary maintenance. One major bottleneck for data-driven prognostics is the availability of run-to-failure degradation data. Without enough degradation data leading to failure, prognostic models can yield RUL distributions with large uncertainty or mathematically unsound predictions. To address these issues a "Lifecycle Prognostics" method was developed to create RUL distributions from Beginning of Life (BOL) to End of Life (EOL). This employs established Type I, II, and III prognostic methods, and Bayesian transitioning between each Type. Bayesian methods, as opposed to classical frequency statistics, show how an expected value, a priori, changes with new data to form a posterior distribution. For example, when you purchase a component you have a prior belief, or estimation, of how long it will operate before failing. As you operate it, you may collect information related to its condition that will allow you to update your estimated failure time. Bayesian methods are best used when limited data are available. The use of a prior also means that information is conserved when new data are available. 
The weightings of the prior belief and information contained in the sampled data are dependent on the variance (uncertainty) of the prior, the variance (uncertainty) of the data, and the amount of measured data (number of samples). If the variance of the prior is small compared to the uncertainty of the data, the prior will be weighed more heavily. However, as more data are collected, the data will be weighted more heavily and will eventually swamp out the prior in calculating the posterior distribution of model parameters. Fundamentally Bayesian analysis updates a prior belief with new data to get a posterior belief. The general approach to applying the Bayesian method to lifecycle prognostics consisted of identifying the prior, which is the RUL es
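
    The precision weighting described in this paragraph has a simple closed form in the conjugate normal-normal case; the sketch below (not tied to the project's specific RUL models) shows how the posterior mean moves from the prior mean toward the sample mean as more data arrive.

        import numpy as np

        def normal_posterior(prior_mean, prior_var, data, data_var):
            # Conjugate normal-normal update: precisions (1/variances) add, and the
            # posterior mean is the precision-weighted average of prior and data.
            n = len(data)
            prior_prec = 1.0 / prior_var
            data_prec = n / data_var
            post_var = 1.0 / (prior_prec + data_prec)
            post_mean = post_var * (prior_prec * prior_mean + data_prec * np.mean(data))
            return post_mean, post_var

        rng = np.random.default_rng(4)
        prior_mean, prior_var = 1000.0, 200.0 ** 2      # prior belief about RUL (hours)
        for n in (1, 5, 50):                            # more data -> data swamp the prior
            data = rng.normal(700.0, 100.0, size=n)     # hypothetical degradation-based estimates
            print(n, normal_posterior(prior_mean, prior_var, data, 100.0 ** 2))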

  16. Iterative image-domain decomposition for dual-energy CT

    SciTech Connect (OSTI)

    Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. 
By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
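
    A compact sketch of the penalized weighted least-squares idea: decompose directly with the matrix inverse, estimate the variance-covariance of the decomposed pixels from the measurement noise, and then minimize a fidelity term weighted by its inverse plus a quadratic smoothness penalty. Plain gradient descent is used here instead of the authors' conjugate-gradient solver, and the 2x2 decomposition matrix, phantom, and noise levels are made up for illustration.

        import numpy as np

        # Hypothetical 2x2 decomposition matrix (high/low-kVp signals -> two materials).
        A = np.array([[1.0, 0.6],
                      [0.4, 1.0]])
        A_inv = np.linalg.inv(A)

        # Synthetic noisy high/low-energy images of a disk phantom.
        rng = np.random.default_rng(5)
        yy, xx = np.mgrid[0:64, 0:64]
        disk = ((yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2).astype(float)
        clean = np.stack([0.8 * disk + 0.2, 0.5 * disk + 0.3])
        sigma_m = np.array([0.02, 0.03])                     # per-channel noise std
        meas = clean + sigma_m[:, None, None] * rng.normal(size=clean.shape)

        # Direct decomposition and the variance-covariance of its (amplified) noise.
        x_direct = np.einsum('ij,jyx->iyx', A_inv, meas)
        w = np.linalg.inv(A_inv @ np.diag(sigma_m ** 2) @ A_inv.T)
        w /= np.linalg.eigvalsh(w).max()     # only the relative weighting matters here

        def laplacian(x):
            # Gradient of the quadratic smoothness penalty (4-neighbour differences).
            g = 4.0 * x
            g = g - np.roll(x, 1, axis=1) - np.roll(x, -1, axis=1)
            g = g - np.roll(x, 1, axis=2) - np.roll(x, -1, axis=2)
            return g

        # Minimize sum_pixels (x - x_direct)^T W (x - x_direct) + beta * ||grad x||^2.
        x, beta = x_direct.copy(), 1.0
        step = 1.0 / (2.0 + 16.0 * beta)     # safe step from the quadratic's curvature
        for _ in range(300):
            fid = np.einsum('ij,jyx->iyx', w, x - x_direct)
            x -= step * (2.0 * fid + 2.0 * beta * laplacian(x))

        print(x_direct[0, :10, :10].std(), x[0, :10, :10].std())   # noise in a flat region drops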

  17. Observations of the scale-dependent turbulence and evaluation of the flux–gradient relationship for sensible heat for a closed Douglas-fir canopy in very weak wind conditions

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Vickers, D.; Thomas, C. K.

    2014-09-16

    Observations of the scale-dependent turbulent fluxes, variances, and the bulk transfer parameterization for sensible heat above, within, and beneath a tall closed Douglas-fir canopy in very weak winds are examined. The daytime sub-canopy vertical velocity spectra exhibit a double-peak structure with peaks at timescales of 0.8 s and 51.2 s. A double-peak structure is also observed in the daytime sub-canopy heat flux co-spectra. The daytime momentum flux co-spectra in the upper bole space and in the sub-canopy are characterized by a relatively large cross-wind component, likely due to the extremely light and variable winds, such that the definition of a mean wind direction, and subsequent partitioning of the momentum flux into along- and cross-wind components, has little physical meaning. Positive values of both momentum flux components in the sub-canopy contribute to upward transfer of momentum, consistent with the observed sub-canopy secondary wind speed maximum. For the smallest resolved scales in the canopy at nighttime, we find increasing vertical velocity variance with decreasing timescale, consistent with very small eddies possibly generated by wake shedding from the canopy elements that transport momentum, but not heat. Unusually large values of the velocity aspect ratio within the canopy were observed, consistent with enhanced suppression of the horizontal wind components compared to the vertical by the very dense canopy. The flux–gradient approach for sensible heat flux is found to be valid for the sub-canopy and above-canopy layers when considered separately in spite of the very small fluxes on the order of a few W m−2 in the sub-canopy. However, single-source approaches that ignore the canopy fail because they make the heat flux appear to be counter-gradient when in fact it is aligned with the local temperature gradient in both the sub-canopy and above-canopy layers. While sub-canopy Stanton numbers agreed well with values typically reported in the literature, our estimates for the above-canopy Stanton number were much larger, which likely leads to underestimated modeled sensible heat fluxes above dark warm closed canopies.

  18. Combining weak-lensing tomography and spectroscopic redshift surveys

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Cai, Yan-Chuan; Bernstein, Gary

    2012-05-11

    Redshift space distortion (RSD) is a powerful way of measuring the growth of structure and testing General Relativity, but it is limited by cosmic variance and the degeneracy between galaxy bias b and the growth rate factor f. The cross-correlation of lensing shear with the galaxy density field can in principle measure b in a manner free from cosmic variance limits, breaking the f-b degeneracy and allowing inference of the matter power spectrum from the galaxy survey. We analyze the growth constraints from a realistic tomographic weak lensing photo-z survey combined with a spectroscopic galaxy redshift survey over the same sky area. For sky coverage f{sub sky} = 0.5, analysis of the transverse modes measures b to 2-3% accuracy per Δz = 0.1 bin at z < 1 when ~10 galaxies arcmin{sup -2} are measured in the lensing survey and all halos with M > M{sub min} = 10{sup 13} h{sup -1} M⊙ have spectra. For the gravitational growth parameter γ (f = Ω{sub m}{sup γ}), combining the lensing information with RSD analysis of non-transverse modes yields accuracy σ(γ) ≈ 0.01. Adding lensing information to the RSD survey improves σ(γ) by an amount equivalent to a 3x (10x) increase in RSD survey area when the spectroscopic survey extends down to halo mass 10{sup 13.5} (10{sup 14}) h{sup -1} M⊙. We also find that the σ(γ) of overlapping surveys is equivalent to that of surveys 1.5-2 times larger if they are separated on the sky. This gain is greatest when the spectroscopic mass threshold is 10{sup 13}-10{sup 14} h{sup -1} M⊙, similar to LRG surveys. The gain of overlapping surveys is reduced for very deep or very shallow spectroscopic surveys, but any practical surveys are more powerful when overlapped than when separated. As a result, the gain of overlapped surveys is larger in the case when the primordial power spectrum normalization is uncertain by > 0.5%.

  19. Groundwater Monitoring Plan for the Hanford Site 216-B-3 Pond RCRA Facility

    SciTech Connect (OSTI)

    Barnett, D. Brent; Smith, Ronald M.; Chou, Charissa J.; McDonald, John P.

    2005-11-01

    The 216-B-3 Pond system was a series of ponds used for disposal of liquid effluent from past Hanford production facilities. In operation from 1945 to 1997, the B Pond System has been a Resource Conservation and Recovery Act (RCRA) facility since 1986, with RCRA interim-status groundwater monitoring in place since 1988. In 1994 the expansion ponds of the facility were clean closed, leaving only the main pond and a portion of the 216-B-3-3 ditch as the currently regulated facility. In 2001, the Washington State Department of Ecology (Ecology) issued a letter providing guidance for a two-year, trial evaluation of an alternate, intrawell statistical approach to contaminant detection monitoring at the B Pond system. This temporary variance was allowed because the standard indicator-parameters evaluation (pH, specific conductance, total organic carbon, and total organic halides) and accompanying interim status statistical approach is ineffective for detecting potential B-Pond-derived contaminants in groundwater, primarily because this method fails to account for variability in the background data and because B Pond leachate is not expected to affect the indicator parameters. In July 2003, the final samples were collected for the two-year variance period. An evaluation of the results of the alternate statistical approach is currently in progress. While Ecology evaluates the efficacy of the alternate approach (and/or until B Pond is incorporated into the Hanford Facility RCRA Permit), the B Pond system will return to contamination-indicator detection monitoring. Total organic carbon and total organic halides were added to the constituent list beginning with the January 2004 samples. Under this plan, the following wells will be monitored for B Pond: 699-42-42B, 699-43-44, 699-43-45, and 699-44-39B. The wells will be sampled semi-annually for the contamination indicator parameters (pH, specific conductance, total organic carbon, and total organic halides) and annually for water quality parameters (chloride, iron, manganese, phenols, sodium, and sulfate). This plan will remain in effect until superseded by another plan or until B Pond is incorporated into the Hanford Facility RCRA Permit.

  20. What do correlations tell us about anthropogenic-biogenic interactions and SOA formation in the Sacramento Plume during CARES?

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Kleinman, L.; Kuang, C.; Sedlacek, A.; Senum, G.; Springston, S.; Wang, J.; Zhang, Q.; Jayne, J.; Fast, J.; Hubbe, J.; et al

    2015-09-17

    During the Carbonaceous Aerosols and Radiative Effects Study (CARES) the DOE G-1 aircraft was used to sample aerosol and gas phase compounds in the Sacramento, CA plume and surrounding region. We present data from 66 plume transects obtained during 13 flights in which southwesterly winds transported the plume towards the foothills of the Sierra Nevada Mountains. Plume transport occurred partly over land with high isoprene emission rates. Our objective is to empirically determine whether organic aerosol (OA) can be attributed to anthropogenic or biogenic sources, and to determine whether there is a synergistic effect whereby OA concentrations are enhanced by the simultaneous presence of high concentrations of CO and either isoprene, MVK+MACR (sum of methyl vinyl ketone and methacrolein) or methanol, which are taken as tracers of anthropogenic and biogenic emissions, respectively. Linear and bilinear correlations between OA, CO, and each of three biogenic tracers, "Bio", for individual plume transects indicate that most of the variance in OA over short time and distance scales can be explained by CO. For each transect and species a plume perturbation (i.e., ΔOA, defined as the difference between the 90th and 10th percentiles) was defined and regressions done among Δ values in order to probe day-to-day and location-dependent variability. Species that predicted the largest fraction of the variance in ΔOA were ΔO3 and ΔCO. Background OA was highly correlated with background methanol and poorly correlated with other tracers. Because background OA was ~60% of peak OA in the urban plume, peak OA should be primarily biogenic and therefore non-fossil. Transects were split into subsets according to the percentile rankings of ΔCO and ΔBio, similar to an approach used by Setyan et al. (2012) and Shilling et al. (2013) to determine if anthropogenic-biogenic interactions enhance OA production. As found earlier, ΔOA in the data subset having high ΔCO and high ΔBio was several-fold greater than in other subsets. Part of this difference is consistent with a synergistic interaction between anthropogenic and biogenic precursors, and part is attributable to an independent linear dependence of ΔOA on each precursor. The highest values of ΔO3 also occur in the high-ΔCO/high-ΔBio data set, raising the possibility that the coincidence of high concentrations of anthropogenic and biogenic tracers as well as OA and O3 may be associated with high temperatures, clear skies, and poor ventilation in addition to specific interactions between anthropogenic and biogenic compounds.
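
    As a rough illustration of the transect statistics described in this record (a minimal sketch, not code from the study; the synthetic data and variable names are hypothetical), the plume perturbation Δ and a bilinear fit of ΔOA against ΔCO and ΔBio could be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def delta(x):
    """Plume perturbation: 90th minus 10th percentile of a transect time series."""
    return np.percentile(x, 90) - np.percentile(x, 10)

# Hypothetical per-transect perturbations (CO in ppb, OA in ug m-3).
d_co, d_bio, d_oa = [], [], []
for _ in range(20):                       # 20 synthetic plume transects
    co = 100 + 80 * rng.random(300)
    bio = 0.5 + 1.0 * rng.random(300)
    oa = 0.02 * co + 1.0 * bio + rng.normal(0, 0.3, 300)
    d_co.append(delta(co))
    d_bio.append(delta(bio))
    d_oa.append(delta(oa))

# Bilinear fit  dOA ~ a*dCO + b*dBio + c  by ordinary least squares.
X = np.column_stack([d_co, d_bio, np.ones(len(d_oa))])
coef, *_ = np.linalg.lstsq(X, np.asarray(d_oa), rcond=None)
print("a (per ppb CO), b (per ppb Bio), intercept:", coef)
```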

  1. URBAN WOOD/COAL CO-FIRING IN THE BELLEFIELD BOILERPLANT

    SciTech Connect (OSTI)

    James T. Cobb Jr.; Gene E. Geiger; William W. Elder III; William P. Barry; Jun Wang; Hongming Li

    2004-04-08

    An Environmental Questionnaire for the demonstration at the Bellefield Boiler Plant (BBP) was submitted to the National Energy Technology Laboratory. An R&D variance for the air permit at the BBP was sought from the Allegheny County Health Department (ACHD). R&D variances for the solid waste permits at the J. A. Rutter Company (JARC) and Emery Tree Service (ETS) were sought from the Pennsylvania Department of Environmental Protection (PADEP). Construction wood was acquired from Thompson Properties and Seven D Corporation. Verbal authorizations were received in all cases. Memoranda of understanding were executed by the University of Pittsburgh with BBP, JARC and ETS. Construction wood was collected from Thompson Properties and from Seven D Corporation. Forty tons of pallet and construction wood were ground to produce BioGrind Wood Chips at JARC and delivered to Mon Valley Transportation Company (MVTC). Five tons of construction wood were hammer milled at ETS and half of the product delivered to MVTC. Blends of wood and coal, produced at MVTC by staff of JARC and MVTC, were shipped by rail to BBP. The experimental portion of the project was carried out at BBP in late March and early April 2001. Several preliminary tests were successfully conducted using blends of 20% and 33% wood by volume. Four one-day tests using a blend of 40% wood by volume were then carried out. Problems of feeding and slagging were experienced with the 40% blend. Light-colored fly ash was observed coming from the stack during all four tests. Emissions of SO{sub 2}, NO{sub x} and total particulates, measured by Energy Systems Associates, decreased when compared with combusting coal alone. A procedure for calculating material and energy balances on BBP's Boiler No. 1 was developed, using the results of an earlier compliance test at the plant. Material and energy balances were then calculated for the four test periods. Boiler efficiency was found to decrease slightly when the fuel was shifted from coal to the 40% blend. Neither commercial production of sized urban waste wood for the energy market in Pittsburgh nor commercial cofiring of wood/coal blends at BBP is anticipated in the near future.

  2. Consistent quantification of climate impacts due to biogenic carbon storage across a range of bio-product systems

    SciTech Connect (OSTI)

    Guest, Geoffrey; Bright, Ryan M.; Cherubini, Francesco; Strømman, Anders H.

    2013-11-15

    Temporary and permanent carbon storage from biogenic sources is seen as a way to mitigate climate change. The aim of this work is to illustrate the need to harmonize the quantification of such mitigation across all possible storage pools in the bio- and anthroposphere. We investigate nine alternative storage cases and a wide array of bio-resource pools: from annual crops, short rotation woody crops, medium rotation temperate forests, and long rotation boreal forests. For each feedstock type and biogenic carbon storage pool, we quantify the carbon cycle climate impact due to the skewed time distribution between emission and sequestration fluxes in the bio- and anthroposphere. Additional consideration of the climate impact from albedo changes in forests is also illustrated for the boreal forest case. When characterizing climate impact with global warming potentials (GWP), we find a large variance in results which is attributed to different combinations of biomass storage and feedstock systems. The storage of biogenic carbon in any storage pool does not always confer climate benefits: even when biogenic carbon is stored long-term in durable product pools, the climate outcome may still be undesirable when the carbon is sourced from slow-growing biomass feedstock. For example, when biogenic carbon from Norway Spruce from Norway is stored in furniture with a mean lifetime of 43 years, a climate change impact of 0.08 kg CO{sub 2}eq per kg CO{sub 2} stored (100 year time horizon (TH)) would result. It was also found that when biogenic carbon is stored in a pool with negligible leakage to the atmosphere, the resulting GWP factor is not necessarily -1 kg CO{sub 2}eq per kg CO{sub 2} stored. As an example, when biogenic CO{sub 2} from Norway Spruce biomass is stored in geological reservoirs with no leakage, we estimate a GWP of -0.56 kg CO{sub 2}eq per kg CO{sub 2} stored (100 year TH) when albedo effects are also included. The large variance in GWPs across the range of resource and carbon storage options considered indicates that more accurate accounting will require case-specific factors derived following the methodological guidelines provided in this and recent manuscripts. -- Highlights: Climate impacts of stored biogenic carbon (bio-C) are consistently quantified. Temporary storage of bio-C does not always equate to a climate cooling impact. 1 unit of bio-C stored over a time horizon does not always equate to -1 unit CO{sub 2}eq. Discrepancies of climate change impact quantification in literature are clarified.

  3. Effect of uncertain hydraulic conductivity on the fate and transport of BTEX compounds at a field site

    SciTech Connect (OSTI)

    Lu, Guoping; Zheng, Chunmiao; Wolfsberg, Andrew

    2002-01-05

    A Monte Carlo analysis was conducted to investigate the effect of uncertain hydraulic conductivity on the fate and transport of BTEX compounds (benzene, toluene, ethyl benzene, and xylene) at a field site on Hill Air Force Base, Utah. Microbially mediated BTEX degradation has occurred at the site through multiple terminal electron-accepting processes, including aerobic respiration, denitrification, Fe(III) reduction, sulfate reduction, and methanogenesis. Multiple realizations of the hydraulic conductivity field were generated and substituted into a multispecies reactive transport model developed and calibrated for the Hill AFB site in a previous study. Simulation results show that the calculated total BTEX masses (released from a constant-concentration source) that remain in the aquifer at the end of the simulation period statistically follow a lognormal distribution. In the first analysis (base case), the calculated total BTEX mass ranges from 12% less to 60% more than that of the previously calibrated model. This suggests that the uncertainty in hydraulic conductivity can lead to significant uncertainties in modeling the fate and transport of BTEX. Geometric analyses of calculated plume configurations show that a higher BTEX mass is associated with wider lateral spreading, while a lower mass is associated with longer longitudinal extension. More BTEX mass in the aquifer causes either a large depletion of dissolved oxygen (DO) and NO{sub 3}{sup -}, or a large depletion of DO and a large production of Fe{sup 2+}, with moderately depleted NO{sub 3}{sup -}. In an additional analysis, the effect of varying degrees of aquifer heterogeneity and associated uncertainty is examined by considering hydraulic conductivity with different variances and correlation lengths. An increase in variance leads to a higher average BTEX mass in the aquifer, while an increase in correlation length results in a lower average. This observation is explained by the relevant partitioning of BTEX into the aquifer from the LNAPL source. Although these findings may only be applicable to the field conditions considered in this study, the methodology used and insights gained are of general interest and relevance to other fuel-hydrocarbon natural-attenuation sites.
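
    A minimal sketch (not the study's model; the grid, variance, and correlation-length values are hypothetical) of how correlated lognormal hydraulic conductivity realizations with a prescribed variance and correlation length can be generated for such a Monte Carlo analysis:

```python
import numpy as np

# 1-D illustration: ln(K) is Gaussian with an exponential covariance model.
x = np.linspace(0.0, 500.0, 200)          # grid coordinates (m), hypothetical
mean_lnK = np.log(1.0e-4)                 # mean ln-conductivity (K in m/s)
var_lnK = 1.0                             # variance of ln(K)
corr_len = 50.0                           # correlation length (m)

lag = np.abs(x[:, None] - x[None, :])
cov = var_lnK * np.exp(-lag / corr_len)   # exponential covariance matrix

# Factor the covariance (Cholesky with a small jitter for numerical stability).
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))

rng = np.random.default_rng(42)
n_realizations = 100
z = rng.standard_normal((len(x), n_realizations))
lnK = mean_lnK + L @ z                    # correlated Gaussian realizations
K = np.exp(lnK)                           # lognormal conductivity fields

# Each column of K would be substituted into the flow/transport model; the
# spread of simulated BTEX mass across columns quantifies the uncertainty.
```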

  4. Strain-dependent Damage in Mouse Lung After Carbon Ion Irradiation

    SciTech Connect (OSTI)

    Moritake, Takashi; Proton Medical Research Center, University of Tsukuba, Tsukuba ; Fujita, Hidetoshi; Yanagisawa, Mitsuru; Nakawatari, Miyako; Imadome, Kaori; Nakamura, Etsuko; Iwakawa, Mayumi; Imai, Takashi

    2012-09-01

    Purpose: To examine whether inherent factors produce differences in lung morbidity in response to carbon ion (C-ion) irradiation, and to identify the molecules that have a key role in strain-dependent adverse effects in the lung. Methods and Materials: Three strains of female mice (C3H/He Slc, C57BL/6J Jms Slc, and A/J Jms Slc) were locally irradiated in the thorax with either C-ion beams (290 MeV/n, in 6 cm spread-out Bragg peak) or with {sup 137}Cs {gamma}-rays as a reference beam. We performed survival assays and histologic examination of the lung with hematoxylin-eosin and Masson's trichrome staining. In addition, we performed immunohistochemical staining for hyaluronic acid (HA), CD44, and Mac3 and assayed for gene expression. Results: The survival data in mice showed a between-strain variance after C-ion irradiation with 10 Gy. The median survival time of C3H/He was significantly shortened after C-ion irradiation at the higher dose of 12.5 Gy. Histologic examination revealed early-phase hemorrhagic pneumonitis in C3H/He and late-phase focal fibrotic lesions in C57BL/6J after C-ion irradiation with 10 Gy. Pleural effusion was apparent in C57BL/6J and A/J mice, 168 days after C-ion irradiation with 10 Gy. Microarray analysis of irradiated lung tissue in the three mouse strains identified differential expression changes in growth differentiation factor 15 (Gdf15), which regulates macrophage function, and hyaluronan synthase 1 (Has1), which plays a role in HA metabolism. Immunohistochemistry showed that the number of CD44-positive cells, a surrogate marker for HA accumulation, and Mac3-positive cells, a marker for macrophage infiltration in irradiated lung, varied significantly among the three mouse strains during the early phase. Conclusions: This study demonstrated a strain-dependent differential response in mice to C-ion thoracic irradiation. Our findings identified candidate molecules that could be implicated in the between-strain variance to early hemorrhagic pneumonitis after C-ion irradiation.

  5. TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma

    SciTech Connect (OSTI)

    Sisniega, A; Zbijewski, W; Stayman, J; Yorkston, J; Aygun, N; Koliatsos, V; Siewerdsen, J

    2014-06-15

    Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate uniformity improvement of 18 HU and contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to “oracle” constant fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
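
    As a loose illustration of the smoothing step described above (a sketch only, using a Gaussian filter as a stand-in for the kernel smoother; array sizes and smoothing widths are not from the record), a noisy, angularly downsampled MC scatter estimate could be denoised and interpolated across gantry angles like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.interpolate import interp1d

rng = np.random.default_rng(1)

# Hypothetical noisy scatter estimates: 180 simulated angles (4-fold downsample
# of 720 projections), 64 x 96 detector bins each.
sim_angles = np.arange(0, 720, 4)
scatter_mc = 50.0 + 10.0 * rng.standard_normal((len(sim_angles), 64, 96))

# Smoothing in the projection plane and across gantry angle
# (sigma values are illustrative, not from the paper).
scatter_smooth = gaussian_filter(scatter_mc, sigma=(2.0, 4.0, 4.0))

# Estimate the skipped projections by interpolating along the angle axis.
interp = interp1d(sim_angles, scatter_smooth, axis=0, kind="linear",
                  fill_value="extrapolate")
scatter_all = interp(np.arange(720))      # (720, 64, 96) scatter estimate

# scatter_all would then be subtracted from the measured projections
# before reconstruction.
```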

  6. What Is the Largest Einstein Radius in the Universe?

    SciTech Connect (OSTI)

    Oguri, Masamune; Blandford, Roger D.

    2008-08-05

    The Einstein radius plays a central role in lens studies as it characterizes the strength of gravitational lensing. In particular, the distribution of Einstein radii near the upper cutoff should probe the probability distribution of the largest mass concentrations in the universe. Adopting a triaxial halo model, we compute expected distributions of large Einstein radii. To assess the cosmic variance, we generate a number of Monte-Carlo realizations of all-sky catalogues of massive clusters. We find that the expected largest Einstein radius in the universe is sensitive to parameters characterizing the cosmological model, especially {sigma}{sub 8}: for a source redshift of unity, they are 42{sub -7}{sup +9}, 35{sub -6}{sup +8}, and 54{sub -7}{sup +12} arcseconds (errors denote 1{sigma} cosmic variance), assuming best-fit cosmological parameters of the Wilkinson Microwave Anisotropy Probe five-year (WMAP5), three-year (WMAP3) and one-year (WMAP1) data, respectively. These values are broadly consistent with current observations given their incompleteness. The mass of the largest lens cluster can be as small as {approx} 10{sup 15} M{sub {circle_dot}}. For the same source redshift, we expect, over the full sky, {approx} 35 (WMAP5), {approx} 15 (WMAP3), and {approx} 150 (WMAP1) clusters that have Einstein radii larger than 20 arcseconds. For a larger source redshift of 7, the largest Einstein radii grow approximately twice as large. While the values of the largest Einstein radii are almost unaffected by the level of the primordial non-Gaussianity currently of interest, the measurement of the abundance of moderately large lens clusters should probe non-Gaussianity competitively with cosmic microwave background experiments, but only if other cosmological parameters are well-measured. These semi-analytic predictions are based on a rather simple representation of clusters, and hence calibrating them with N-body simulations will help to improve the accuracy. We also find that these 'superlens' clusters constitute a highly biased population. For instance, a substantial fraction of these superlens clusters have major axes preferentially aligned with the line-of-sight. As a consequence, the projected mass distributions of the clusters are rounder by an ellipticity of {approx} 0.2 and have {approx} 40%-60% larger concentrations compared with typical clusters with similar redshifts and masses. We argue that the large concentration measured in A1689 is consistent with our model prediction at the 1.2{sigma} level. A combined analysis of several clusters will be needed to see whether or not the observed concentrations conflict with predictions of the flat {Lambda}-dominated cold dark matter model.

  7. Quantifying the Uncertainties of Aerosol Indirect Effects and Impacts on Decadal-Scale Climate Variability in NCAR CAM5 and CESM1

    SciTech Connect (OSTI)

    Park, Sungsu

    2014-12-12

    The main goal of this project is to systematically quantify the major uncertainties in aerosol indirect effects due to the treatment of the moist turbulent processes that drive aerosol activation, cloud macrophysics, and microphysics in response to anthropogenic aerosol perturbations, using CAM5/CESM1. To achieve this goal, the P.I. hired a postdoctoral research scientist (Dr. Anna Fitch), who started work on November 1, 2012. The first task the postdoc and the P.I. undertook was to quantify the role of subgrid vertical velocity variance in the activation and nucleation of cloud liquid droplets and ice crystals and its impact on the aerosol indirect effect in CAM5. First, we analyzed various LES cases (from dry stable to cloud-topped PBL) to check whether the isotropic turbulence assumption used in CAM5 is valid. It turned out that this assumption is not universally valid. Consequently, from the analysis of the LES, we derived an empirical formulation relaxing the isotropic turbulence assumption used for CAM5 aerosol activation and ice nucleation, implemented it in CAM5/CESM1, tested it in single-column and global simulation modes, and examined how it changed aerosol indirect effects in CAM5/CESM1. These results were reported in the poster session of the 18th Annual CESM Workshop, held in Breckenridge, CO, June 17-20, 2013. Because the empirical formulation from the first task was derived from a limited number of LES simulations, its general applicability was questionable. The second task was therefore to derive a more fundamental analytical formulation relating vertical velocity variance to TKE, starting from basic physical principles. This was a challenging subject, but if done successfully it could be implemented directly in CAM5 as a practical parameterization and would contribute substantially to achieving the project goal. Through about one year of intensive research, we found an appropriate mathematical formulation and began implementing it in the CAM5 PBL and activation routines as a practical parameterized numerical code. During this period, however, the postdoc accepted a position in Sweden and left NCAR in August 2014. Dr. Fitch continues to work on this subject part time and plans to finalize the research and write the paper in the near future.
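
    For context on the isotropic-turbulence assumption discussed in this record (a standard relation, shown here as a sketch; the anisotropy factor and TKE value are illustrative and not CAM5 code):

```python
import numpy as np

tke = 0.4  # turbulent kinetic energy, m^2 s^-2 (illustrative value)

# Isotropic assumption: the three velocity-variance components share the TKE
# equally, so sigma_w^2 = (2/3) * TKE.
sigma_w_iso = np.sqrt(2.0 / 3.0 * tke)

# A relaxed (anisotropic) form scales the vertical share by an empirical
# factor a_w that may differ from 1 depending on the boundary-layer regime.
a_w = 0.7                                  # hypothetical anisotropy factor
sigma_w_aniso = np.sqrt(a_w * 2.0 / 3.0 * tke)

# sigma_w is the subgrid vertical-velocity scale fed to the aerosol
# activation and ice nucleation parameterizations.
print(sigma_w_iso, sigma_w_aniso)
```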

  8. Atmospheric Radiation Measurement program climate research facility operations quarterly report October 1 - December 31, 2008.

    SciTech Connect (OSTI)

    Sisterson, D. L.

    2009-01-15

    Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, the ratio of the actual number of data records received daily at the Archive to the expected number of data records is calculated. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The US Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1-(ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the first quarter of FY 2009 for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 x 2,208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 x 2,208), and for the Tropical Western Pacific (TWP) locale is 1,876.80 hours (0.85 x 2,208). The OPSMAX time for the ARM Mobile Facility (AMF) is not reported this quarter because the data have not yet been released from China to the DMF for processing. The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period October 1-December 31, 2008, for the fixed sites. The AMF has been deployed to China, but the data have not yet been released. The first quarter comprises a total of 2,208 hours. The average data availability exceeded the goal this quarter.
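
    A small worked example of the operating metrics defined in this record (the quarterly hours and uptime goals are taken from the text; the ACTUAL hours below are hypothetical):

```python
# Quarterly hours and uptime goals as stated in the report.
hours_in_quarter = 24 * 92              # 2,208 hours (Oct 1 - Dec 31)

opsmax_sgp = 0.95 * hours_in_quarter    # 2,097.60 h, Southern Great Plains
opsmax_nsa = 0.90 * hours_in_quarter    # 1,987.20 h, North Slope Alaska
opsmax_twp = 0.85 * hours_in_quarter    # 1,876.80 h, Tropical Western Pacific

# VARIANCE = 1 - (ACTUAL / OPSMAX): the unplanned-downtime fraction.
actual_sgp = 2000.0                     # hypothetical actual operating hours
variance_sgp = 1.0 - actual_sgp / opsmax_sgp
print(f"SGP variance (unplanned downtime fraction): {variance_sgp:.3f}")
```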

  9. EXPECTED LARGE SYNOPTIC SURVEY TELESCOPE (LSST) YIELD OF ECLIPSING BINARY STARS

    SciTech Connect (OSTI)

    Prsa, Andrej; Pepper, Joshua; Stassun, Keivan G.

    2011-08-15

    In this paper, we estimate the Large Synoptic Survey Telescope (LSST) yield of eclipsing binary stars, which will survey {approx}20,000 deg{sup 2} of the southern sky during a period of 10 years in six photometric passbands to r {approx} 24.5. We generate a set of 10,000 eclipsing binary light curves sampled to the LSST time cadence across the whole sky, with added noise as a function of apparent magnitude. This set is passed to the analysis-of-variance period finder to assess the recoverability rate for the periods, and the successfully phased light curves are passed to the artificial-intelligence-based pipeline ebai to assess the recoverability rate in terms of the eclipsing binaries' physical and geometric parameters. We find that, out of {approx}24 million eclipsing binaries observed by LSST with a signal-to-noise ratio >10 in mission lifetime, {approx}28% or 6.7 million can be fully characterized by the pipeline. Of those, {approx}25% or 1.7 million will be double-lined binaries, a true treasure trove for stellar astrophysics.
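
    To illustrate the idea behind an analysis-of-variance period search like the one mentioned in this record (a minimal sketch of the general technique, not the pipeline used in the paper; the synthetic light curve, binning, and trial-period grid are arbitrary choices):

```python
import numpy as np

def aov_statistic(t, y, period, n_bins=10):
    """Analysis-of-variance statistic: ratio of between-bin to within-bin
    variance of the phase-folded light curve (large = good trial period)."""
    phase = (t / period) % 1.0
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    ybar = y.mean()
    s_between, s_within, used = 0.0, 0.0, 0
    for b in range(n_bins):
        yb = y[bins == b]
        if yb.size == 0:
            continue
        used += 1
        s_between += yb.size * (yb.mean() - ybar) ** 2
        s_within += ((yb - yb.mean()) ** 2).sum()
    return (s_between / (used - 1)) / (s_within / (y.size - used))

# Hypothetical sparsely sampled eclipsing-binary light curve (true period 2.5 d).
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 3650.0, 800))              # ~10 yr of epochs, days
in_eclipse = np.abs(((t / 2.5) % 1.0) - 0.5) < 0.05
y = 1.0 - 0.3 * in_eclipse + rng.normal(0.0, 0.02, t.size)

# Narrow trial grid for brevity; a real search spans a dense frequency grid.
trial_periods = np.linspace(2.0, 3.0, 4000)
scores = np.array([aov_statistic(t, y, p) for p in trial_periods])
print("best trial period [d]:", trial_periods[scores.argmax()])
```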

  10. DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 user's manual.

    SciTech Connect (OSTI)

    Griffin, Joshua D.; Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  11. Technologies for Production of Heat and Electricity

    SciTech Connect (OSTI)

    Jacob J. Jacobson; Kara G. Cafferty

    2014-04-01

    Biomass is a desirable source of energy because it is renewable, sustainable, widely available throughout the world, and amenable to conversion. Biomass is composed of cellulose, hemicellulose, and lignin components. Cellulose is generally the dominant fraction, representing about 40 to 50% of the material by weight, with hemicellulose representing 20 to 50% of the material, and lignin making up the remaining portion [4,5,6]. Although the outward appearance of the various forms of cellulosic biomass, such as wood, grass, municipal solid waste (MSW), or agricultural residues, is different, all of these materials have a similar cellulosic composition. Elementally, however, biomass varies considerably, thereby presenting technical challenges at virtually every phase of its conversion to useful energy forms and products. Despite the variances among cellulosic sources, there are a variety of technologies for converting biomass into energy. These technologies are generally divided into two groups: biochemical (biological-based) and thermochemical (heat-based) conversion processes. This chapter reviews the specific technologies that can be used to convert biomass to energy. Each technology review includes the description of the process, and the positive and negative aspects.

  12. Single-entry Longwall study. Volume I: report. Final report, May 1982. [195 references; single vs multiple entry]

    SciTech Connect (OSTI)

    Not Available

    1980-05-01

    This study is an effort to determine legal and technical constraints on the introduction of single entry longwall systems to US coal mining. US mandatory standards governing underground mining are compared and contrasted with regulations of certain foreign countries, mainly continental Europe, relating to the employment of longwall mining. Particular attention is paid to the planning and development of entries, the mining of longwall panels and consequent retrieval operations. Sequential mining of adjacent longwall panels is considered. Particular legal requirements, which constrain or prohibit single entry longwall mining in the US, are identified, and certain variances or exemptions from the regulations are described. The costs of single entry systems and of currently employed multiple entry systems are compared. Under prevailing US conditions multiple entry longwall is preferable because of safety, marginal economic benefit and compliance with US laws and regulations. However, where physical conditions become hazardous for the multiple entry method, for instance, in greater depth or in rockburst prone ground, mandatory standards, which now constrain or prohibit single entry workings, are of doubtful benefit. European methods would then provide single entry operation with improved strata control.

  13. Fermentation and Hydrogen Metabolism Affect Uranium Reduction by Clostridia

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gao, Weimin; Francis, Arokiasamy J.

    2013-01-01

    Previously, it has been shown that not only is uranium reduction under fermentation conditions common among clostridia species, but also that the strains differed in the extent of their capability and that the pH of the culture significantly affected uranium(VI) reduction. In this study, using HPLC and GC techniques, the metabolic properties of those clostridial strains active in uranium reduction under fermentation conditions have been characterized and their effects on the variance in uranium reduction capability discussed. Then, the relationship between hydrogen metabolism and uranium reduction has been further explored and the important role played by hydrogenase in uranium(VI) and iron(III) reduction by clostridia demonstrated. When hydrogen was provided as the headspace gas, uranium(VI) reduction occurred in the presence of whole cells of clostridia. This is in contrast to that of nitrogen as the headspace gas. Without clostridia cells, hydrogen alone could not result in uranium(VI) reduction. In alignment with this observation, it was also found that either copper(II) addition or iron depletion in the medium could compromise uranium reduction by clostridia. In the end, a comprehensive model was proposed to explain uranium reduction by clostridia and its relationship to the overall metabolism, especially hydrogen (H2) production.

  14. Analysis of Strand-Specific RNA-Seq Data Using Machine Learning Reveals the Structures of Transcription Units in Clostridium thermocellum

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Chou, Wen-Chi; Ma, Qin; Yang, Shihui; Cao, Sha; Klingeman, Dawn M.; Brown, Steven D.; Xu, Ying

    2015-03-12

    The identification of transcription units (TUs) encoded in a bacterial genome is essential to elucidation of transcriptional regulation of the organism. To gain a detailed understanding of the dynamically composed TU structures, we have used four strand-specific RNA-seq (ssRNA-seq) datasets collected under two experimental conditions to derive the genomic TU organization of Clostridium thermocellum using a machine-learning approach. Our method accurately predicted the genomic boundaries of individual TUs based on two sets of parameters measuring the RNA-seq expression patterns across the genome: expression-level continuity and variance. A total of 2590 distinct TUs are predicted based on the four RNA-seq datasets. Moreover, among the predicted TUs, 44% have multiple genes. We assessed our prediction method on an independent set of RNA-seq data with longer reads. The evaluation confirmed the high quality of the predicted TUs. Functional enrichment analyses on a selected subset of the predicted TUs revealed interesting biology. To demonstrate the generality of the prediction method, we have also applied the method to RNA-seq data collected on Escherichia coli and achieved high prediction accuracies. The TU prediction program named SeqTU is publicly available at https://code.google.com/p/seqtu/. We expect that the predicted TUs can serve as the baseline information for studying transcriptional and post-transcriptional regulation in C. thermocellum and other bacteria.

  15. The development and testing of technologies for the remediation of mercury-contaminated soils, Task 7.52. Topical report, December 1992--December 1993

    SciTech Connect (OSTI)

    Stepan, D.J.; Fraley, R.H.; Charlton, D.S.

    1994-02-01

    The release of elemental mercury into the environment from manometers that are used in the measurement of natural gas flow through pipelines has created a potentially serious problem for the gas industry. Regulations, particularly the Land Disposal Restrictions (LDR), have had a major impact on gas companies dealing with mercury-contaminated soils. After the May 8, 1993, LDR deadline extension, gas companies were required to treat mercury-contaminated soils by designated methods to specified levels prior to disposal in landfills. In addition, gas companies must comply with various state regulations that are often more stringent than the LDR. The gas industry is concerned that the LDRs do not allow enough viable options for dealing with their mercury-related problems. The US Environmental Protection Agency has specified the Best Demonstrated Available Technology (BDAT) as thermal roasting or retorting. However, the Agency recognizes that treatment of certain wastes to the LDR standards may not always be achievable and that the BDAT used to set the standard may be inappropriate. Therefore, a Treatability Variance Process for remedial actions was established (40 Code of Federal Regulations 268.44) for the evaluation of alternative remedial technologies. This report presents evaluations of demonstrations for three different remedial technologies: a pilot-scale portable thermal treatment process, a pilot-scale physical separation process in conjunction with chemical leaching, and a bench-scale chemical leaching process.

  16. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    SciTech Connect (OSTI)

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-05-20

    The development of hybrid Monte Carlo-Deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3 to 10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  17. Developing an Integrated Model Framework for the Assessment of Sustainable Agricultural Residue Removal Limits for Bioenergy Systems

    SciTech Connect (OSTI)

    David Muth, Jr.; Jared Abodeely; Richard Nelson; Douglas McCorkle; Joshua Koch; Kenneth Bryden

    2011-08-01

    Agricultural residues have significant potential as a feedstock for bioenergy production, but removing these residues can have negative impacts on soil health. Models and datasets that can support decisions about sustainable agricultural residue removal are available; however, no tools currently exist capable of simultaneously addressing all environmental factors that can limit availability of residue. The VE-Suite model integration framework has been used to couple a set of environmental process models to support agricultural residue removal decisions. The RUSLE2, WEPS, and Soil Conditioning Index models have been integrated. A disparate set of databases providing the soils, climate, and management practice data required to run these models have also been integrated. The integrated system has been demonstrated for two example cases. First, an assessment using high spatial fidelity crop yield data has been run for a single farm. This analysis shows the significant variance in sustainably accessible residue across a single farm and crop year. A second example is an aggregate assessment of agricultural residues available in the state of Iowa. This implementation of the integrated systems model demonstrates the capability to run a vast range of scenarios required to represent a large geographic region.

  18. Tracking stochastic resonance curves using an assisted reference model

    SciTech Connect (OSTI)

    Calderón Ramírez, Mario; Rico Martínez, Ramiro; Parmananda, P.

    2015-06-15

    The optimal noise amplitude for Stochastic Resonance (SR) is located employing an Artificial Neural Network (ANN) reference model with a nonlinear predictive capability. A modified Kalman Filter (KF) was coupled to this reference model in order to compensate for semi-quantitative forecast errors. Three manifestations of stochastic resonance, namely, Periodic Stochastic Resonance (PSR), Aperiodic Stochastic Resonance (ASR), and finally Coherence Resonance (CR) were considered. Using noise amplitude as the control parameter, for the case of PSR and ASR, the cross-correlation curve between the sub-threshold input signal and the system response is tracked. However, using the same parameter the Normalized Variance curve is tracked for the case of CR. The goal of the present work is to track these curves and converge to their respective extremal points. The ANN reference model strategy captures and subsequently predicts the nonlinear features of the model system while the KF compensates for the perturbations inherent to the superimposed noise. This technique, implemented in the FitzHugh-Nagumo model, enabled us to track the resonance curves and eventually locate their optimal (extremal) values. This would yield the optimal value of noise for the three manifestations of the SR phenomena.

  19. Measurement of damping and temperature: Precision bounds in Gaussian dissipative channels

    SciTech Connect (OSTI)

    Monras, Alex; Illuminati, Fabrizio

    2011-01-15

    We present a comprehensive analysis of the performance of different classes of Gaussian states in the estimation of Gaussian phase-insensitive dissipative channels. In particular, we investigate the optimal estimation of the damping constant and reservoir temperature. We show that, for two-mode squeezed vacuum probe states, the quantum-limited accuracy of both parameters can be achieved simultaneously. Moreover, we show that for both parameters two-mode squeezed vacuum states are more efficient than coherent, thermal, or single-mode squeezed states. This suggests that at high-energy regimes, two-mode squeezed vacuum states are optimal within the Gaussian setup. This optimality result indicates a stronger form of compatibility for the estimation of the two parameters. Indeed, not only the minimum variance can be achieved at fixed probe states, but also the optimal state is common to both parameters. Additionally, we explore numerically the performance of non-Gaussian states for particular parameter values to find that maximally entangled states within d-dimensional cutoff subspaces (d{<=}6) perform better than any randomly sampled states with similar energy. However, we also find that states with very similar performance and energy exist with much less entanglement than the maximally entangled ones.

  20. MPACT Fast Neutron Multiplicity System Prototype Development

    SciTech Connect (OSTI)

    D.L. Chichester; S.A. Pozzi; J.L. Dolan; M.T. Kinlaw; S.J. Thompson; A.C. Kaplan; M. Flaska; A. Enqvist; J.T. Johnson; S.M. Watson

    2013-09-01

    This document serves as both an FY2013 End-of-Year and End-of-Project report on efforts that resulted in the design of a prototype fast neutron multiplicity counter leveraging the findings of previous project efforts. The prototype design includes 32 liquid scintillator detectors with cubic volumes 7.62 cm in dimension configured into 4 stacked rings of 8 detectors. Detector signal collection for the system is handled with a pair of Struck Innovative Systeme 16-channel digitizers controlled by in-house developed software with built-in multiplicity analysis algorithms. Initial testing and familiarization of the currently obtained prototype components is underway; however, full prototype construction is required for further optimization. Monte Carlo models of the prototype system were performed to estimate die-away and efficiency values. Analysis of these models resulted in the development of a software package capable of determining the effects of nearest-neighbor rejection methods for elimination of detector cross talk. A parameter study was performed using previously developed analytical methods for the estimation of assay mass variance for use as a figure-of-merit for system performance. A software package was developed to automate these calculations and ensure accuracy. The results of the parameter study show that the prototype fast neutron multiplicity counter design is very nearly optimized under the constraints of the parameter space.

  1. Experimental and Monte Carlo evaluation of Eclipse treatment planning system for effects on dose distribution of the hip prostheses

    SciTech Connect (OSTI)

    Çatlı, Serap; Tanır, Güneş

    2013-10-01

    The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS) at 6 and 18 MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and ionization chamber measurements. The present findings show that if high-Z material is used in a prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased, backscattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method. The findings show that for accurate dose calculation the Monte Carlo-based TPS should be used in patients with hip prostheses.

  2. Straight and chopped dc performance data for a General Electric 5BT 2366C10 motor and an EV-1 controller. Final report

    SciTech Connect (OSTI)

    Edie, P.C.

    1981-01-01

    This report is intended to supply the electric vehicle manufacturer with performance data on the General Electric 5BT 2366C10 series wound dc motor and EV-1 chopper controller. Data are provided for both straight and chopped dc input to the motor, at 2 motor temperature levels. Testing was done at 6 voltage increments to the motor, and 2 voltage increments to the controller. Data results are presented in both tabular and graphical forms. Tabular information includes motor voltage and current input data, motor speed and torque output data, power data and temperature data. Graphical information includes torque-speed, motor power output-speed, torque-current, and efficiency-speed plots under the various operating conditions. The data resulting from this testing show the speed-torque plots to have the most variance with operating temperature. The maximum motor efficiency is between 86% and 87%, regardless of temperature or mode of operation. When the chopper is utilized, maximum motor efficiency occurs when the chopper duty cycle approaches 100%. At low duty cycles the motor efficiency may be considerably less than the efficiency for straight dc. Chopper efficiency may be assumed to be 95% under all operating conditions. For equal speeds at a given voltage level, the motor operated in the chopped mode develops slightly more torque than it does in the straight dc mode. System block diagrams are included, along with test setup and procedure information.

  3. Localization-Delocalization Transition in a System of Quantum Kicked Rotors

    SciTech Connect (OSTI)

    Creffield, C.E.; Hur, G.; Monteiro, T.S. [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom)

    2006-01-20

    The quantum dynamics of atoms subjected to pairs of closely spaced {delta} kicks from optical potentials are shown to be quite different from the well-known paradigm of quantum chaos, the single {delta}-kick system. We find the unitary matrix has a new oscillating band structure corresponding to a cellular structure of phase space and observe a spectral signature of a localization-delocalization transition from one cell to several. We find that the eigenstates have localization lengths which scale with a fractional power L{approx}({Dirac_h}/2{pi}){sup -0.75} and obtain a regime of near-linear spectral variances which approximate the 'critical statistics' relation {sigma}{sub 2}(L){approx_equal}{chi}L{approx_equal}(1/2)(1-{nu})L, where {nu}{approx_equal}0.75 is related to the fractal classical phase-space structure. The origin of the {nu}{approx_equal}0.75 exponent is analyzed.

  4. LENSING NOISE IN MILLIMETER-WAVE GALAXY CLUSTER SURVEYS

    SciTech Connect (OSTI)

    Hezaveh, Yashar; Vanderlinde, Keith; Holder, Gilbert; De Haan, Tijmen

    2013-08-01

    We study the effects of gravitational lensing by galaxy clusters of the background of dusty star-forming galaxies (DSFGs) and the cosmic microwave background (CMB), and examine the implications for Sunyaev-Zel'dovich-based (SZ) galaxy cluster surveys. At the locations of galaxy clusters, gravitational lensing modifies the probability distribution of the background flux of the DSFGs as well as the CMB. We find that, in the case of a single-frequency 150 GHz survey, lensing of DSFGs leads both to a slight increase ({approx}10%) in detected cluster number counts (due to a {approx}50% increase in the variance of the DSFG background, and hence an increased Eddington bias) and a rare (occurring in {approx}2% of clusters) 'filling-in' of SZ cluster signals by bright strongly lensed background sources. Lensing of the CMB leads to a {approx}55% reduction in CMB power at the location of massive galaxy clusters in a spatially matched single-frequency filter, leading to a net decrease in detected cluster number counts. We find that the increase in DSFG power and decrease in CMB power due to lensing at cluster locations largely cancel, such that the net effect on cluster number counts for current SZ surveys is subdominant to Poisson errors.

  5. Optimizing weak lensing mass estimates for cluster profile uncertainty

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gruen, D.; Bernstein, G. M.; Lam, T. Y.; Seitz, S.

    2011-09-11

    Weak lensing measurements of cluster masses are necessary for calibrating mass-observable relations (MORs) to investigate the growth of structure and the properties of dark energy. However, the measured cluster shear signal varies at fixed mass M200m due to inherent ellipticity of background galaxies, intervening structures along the line of sight, and variations in the cluster structure due to scatter in concentrations, asphericity and substructure. We use N-body simulated halos to derive and evaluate a weak lensing circular aperture mass measurement Map that minimizes the mass estimate variance <(Map - M200m)^2> in the presence of all these forms of variability. Depending on halo mass and observational conditions, the resulting mass estimator improves on Map filters optimized for circular NFW-profile clusters in the presence of uncorrelated large scale structure (LSS) about as much as the latter improve on an estimator that only minimizes the influence of shape noise. Optimizing for uncorrelated LSS while ignoring the variation of internal cluster structure puts too much weight on the profile near the cores of halos, and under some circumstances can even be worse than not accounting for LSS at all. As a result, we discuss the impact of variability in cluster structure and correlated structures on the design and performance of weak lensing surveys intended to calibrate cluster MORs.

  6. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.

  7. Non-parametric transformation for data correlation and integration: From theory to practice

    SciTech Connect (OSTI)

    Datta-Gupta, A.; Xue, Guoping; Lee, Sang Heon

    1997-08-01

    The purpose of this paper is two-fold. First, we introduce the use of non-parametric transformations for correlating petrophysical data during reservoir characterization. Such transformations are completely data driven and do not require a priori functional relationship between response and predictor variables which is the case with traditional multiple regression. The transformations are very general, computationally efficient and can easily handle mixed data types for example, continuous variables such as porosity, permeability and categorical variables such as rock type, lithofacies. The power of the non-parametric transformation techniques for data correlation has been illustrated through synthetic and field examples. Second, we utilize these transformations to propose a two-stage approach for data integration during heterogeneity characterization. The principal advantages of our approach over traditional cokriging or cosimulation methods are: (1) it does not require a linear relationship between primary and secondary data, (2) it exploits the secondary information to its fullest potential by maximizing the correlation between the primary and secondary data, (3) it can be easily applied to cases where several types of secondary or soft data are involved, and (4) it significantly reduces variance function calculations and thus, greatly facilitates non-Gaussian cosimulation. We demonstrate the data integration procedure using synthetic and field examples. The field example involves estimation of pore-footage distribution using well data and multiple seismic attributes.

  8. The principal component analysis method used with polynomial Chaos expansion to propagate uncertainties through critical transport problems

    SciTech Connect (OSTI)

    Rising, M. E.; Prinja, A. K.

    2012-07-01

    A critical neutron transport problem with random material properties is introduced. The total cross section and the average neutron multiplicity are assumed to be uncertain, characterized by the mean and variance with a log-normal distribution. The average neutron multiplicity and the total cross section are assumed to be uncorrelated, and the material properties for differing materials are also assumed to be uncorrelated. The principal component analysis method is used to decompose the covariance matrix into eigenvalues and eigenvectors, and then 'realizations' of the material properties can be computed. A simple Monte Carlo brute force sampling of the decomposed covariance matrix is employed to obtain a benchmark result for each test problem. In order to save computational time and to characterize the moments and probability density function of the multiplication factor, the polynomial chaos expansion method is employed along with the stochastic collocation method. A Gauss-Hermite quadrature set is convolved into a multidimensional tensor product quadrature set and is successfully used to compute the polynomial chaos expansion coefficients of the multiplication factor. Finally, for a particular critical fuel pin assembly the appropriate number of random variables and polynomial expansion order are investigated. (authors)
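
    A minimal sketch of the sampling step described above (illustrative covariance values and dimensions only, not the authors' code), generating correlated lognormal realizations of the material properties from an eigendecomposition of the covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical ln-space means and covariance for two uncertain properties of
# one material: total cross section and average neutron multiplicity.
mu = np.log(np.array([0.35, 2.4]))        # ln of nominal values
cov = np.array([[0.02, 0.0],              # ln-space covariance (uncorrelated
                [0.0, 0.005]])            # properties -> diagonal matrix)

# Principal component (eigen) decomposition of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(cov)

# Realizations: x = mu + V * sqrt(L) * xi, with xi standard normal.
n_realizations = 1000
xi = rng.standard_normal((2, n_realizations))
ln_x = mu[:, None] + eigvecs @ (np.sqrt(eigvals)[:, None] * xi)
x = np.exp(ln_x)                          # lognormal property realizations

# Each column of x defines one 'realization' to feed the transport solve;
# the spread of the resulting multiplication factors gives its statistics.
```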

  9. MM-Estimator and Adjusted Super Smoother based Simultaneous Prediction Confidence Intervals

    Energy Science and Technology Software Center (OSTI)

    2002-07-19

    A novel application of regression analysis (MM-estimator) with simultaneous prediction confidence intervals is proposed to detect up- or down-regulated genes, which are outliers in scatter plots based on log-transformed red (Cy5 fluorescent dye) versus green (Cy3 fluorescent dye) intensities. Advantages of the application: 1) the robust and resistant MM-estimator is a reliable method to build a linear regression in the presence of outliers; 2) exploratory data analysis tools (boxplots, averaged shifted histograms, quantile-quantile normal plots, and scatter plots) are used to test visually the underlying assumptions of linearity and contaminated normality in microarray data; 3) simultaneous prediction confidence intervals (SPCIs) guarantee a desired confidence level across the whole range of the data points used for the scatter plots. The result of the outlier detection procedure is a set of significantly differentially expressed genes extracted from the employed microarray data set. A scatter plot smoother (super smoother or locally weighted regression) is used to quantify heteroscedasticity in the residual variance (which commonly occurs in the lower- and higher-intensity areas). The set of differentially expressed genes is quantified using interval estimates for P-values as a probabilistic measure of being an outlier by chance. Monte Carlo simulations are used to adjust the super smoother-based SPCIs.

  10. SIMPLIFIED PREDICTIVE MODELS FOR CO₂ SEQUESTRATION PERFORMANCE ASSESSMENT RESEARCH TOPICAL REPORT ON TASK #3 STATISTICAL LEARNING BASED MODELS

    SciTech Connect (OSTI)

    Mishra, Srikanta; Schuetter, Jared

    2014-11-01

    We compare two approaches for building a statistical proxy model (metamodel) for CO₂ geologic sequestration from the results of full-physics compositional simulations. The first approach involves a classical Box-Behnken or Augmented Pairs experimental design with a quadratic polynomial response surface. The second approach uses a space-filling maximin Latin Hypercube sampling or maximum entropy design with the choice of five different meta-modeling techniques: quadratic polynomial, kriging with constant and quadratic trend terms, multivariate adaptive regression spline (MARS), and additivity and variance stabilization (AVAS). Simulation results for CO₂ injection into a reservoir-caprock system with 9 design variables (and 97 samples) were used to generate the data for developing the proxy models. The fitted models were validated using an independent data set and a cross-validation approach for three different performance metrics: total storage efficiency, CO₂ plume radius, and average reservoir pressure. The Box-Behnken–quadratic polynomial metamodel performed the best, followed closely by the maximin LHS–kriging metamodel.
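
    A minimal sketch of the second workflow (a space-filling Latin Hypercube design followed by a kriging metamodel) is shown below, using scipy and scikit-learn with a stand-in response in place of the full-physics simulator; the bounds, sample counts, and test function are assumptions.

      # Latin Hypercube design + Gaussian-process (kriging) metamodel, sketched
      # with a stand-in response function. Illustrative only.
      import numpy as np
      from scipy.stats import qmc
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      n_design_vars, n_samples = 9, 97
      sampler = qmc.LatinHypercube(d=n_design_vars, seed=1)
      unit_design = sampler.random(n=n_samples)
      lower, upper = np.zeros(n_design_vars), 10.0 * np.ones(n_design_vars)
      X = qmc.scale(unit_design, lower, upper)

      # Stand-in for the simulator output (e.g. total storage efficiency)
      y = np.sum(np.sin(X), axis=1) + 0.1 * np.random.default_rng(1).standard_normal(n_samples)

      kernel = ConstantKernel() * RBF(length_scale=np.ones(n_design_vars))
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
      gp.fit(X, y)
      y_pred, y_std = gp.predict(X[:5], return_std=True)   # metamodel predictions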

  11. Fuel cycle cost uncertainty from nuclear fuel cycle comparison

    SciTech Connect (OSTI)

    Li, J.; McNelis, D.; Yim, M.S.

    2013-07-01

    This paper examined the uncertainty in fuel cycle cost (FCC) calculation by considering both model and parameter uncertainty. Four different fuel cycle options were compared in the analysis, including the once-through cycle (OT), the DUPIC cycle, the MOX cycle, and a closed fuel cycle with fast reactors (FR). The model uncertainty was addressed by using three different FCC modeling approaches with and without the time value of money consideration. The relative ratios of FCC in comparison to OT did not change much by using different modeling approaches. This observation was consistent with the results of the sensitivity study for the discount rate. Two different sets of data with uncertainty ranges of unit costs were used to address the parameter uncertainty of the FCC calculation. The sensitivity study showed that the dominating contributor to the total variance of FCC is the uranium price. In general, the FCC of OT was found to be the lowest, followed by FR, MOX, and DUPIC. However, depending on the uranium price, the FR cycle was found to have a lower FCC than OT. The reprocessing cost was also found to have a major impact on FCC.
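
    The "time value of money" modeling choice amounts to discounting each component cost to a reference date before summing, as in the minimal sketch below; the component names, timings, and costs are illustrative, not the paper's data.

      # Discounted vs. undiscounted fuel cycle cost for one cycle option.
      # Component costs and timings are made-up illustrative values.
      def discounted_fcc(components, discount_rate):
          """components: list of (cost, years relative to the reference date)."""
          return sum(cost / (1.0 + discount_rate) ** years for cost, years in components)

      once_through = [
          (5.0, -3.0),   # uranium purchase, 3 years before the reference date
          (2.0, -2.0),   # conversion and enrichment
          (1.5, -1.0),   # fuel fabrication
          (1.0, +5.0),   # storage and disposal, after discharge
      ]
      fcc_discounted = discounted_fcc(once_through, discount_rate=0.05)
      fcc_undiscounted = discounted_fcc(once_through, discount_rate=0.0)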

  12. The spectral element method (SEM) on variable-resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Guba, O.; Taylor, M. A.; Ullrich, P. A.; Overfelt, J. R.; Levy, M. N.

    2014-11-27

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance, implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.

  13. The spectral element method on variable resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Guba, O.; Taylor, M. A.; Ullrich, P. A.; Overfelt, J. R.; Levy, M. N.

    2014-06-25

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable resolution grids using the shallow water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution dependent coefficient. For the spectral element method with variable resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that for regions of uniform resolution it matches the traditional constant coefficient hyperviscosity. With the tensor hyperviscosity the large scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications where long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.

  14. DAKOTA Design Analysis Kit for Optimization and Terascale

    Energy Science and Technology Software Center (OSTI)

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  15. Simulation of winds as seen by a rotating vertical axis wind turbine blade

    SciTech Connect (OSTI)

    George, R.L.

    1984-02-01

    The objective of this report is to provide turbulent wind analyses relevant to the design and testing of Vertical Axis Wind Turbines (VAWT). A technique was developed for utilizing high-speed turbulence wind data from a line of seven anemometers at a single level to simulate the wind seen by a rotating VAWT blade. Twelve data cases, representing a range of wind speeds and stability classes, were selected from the large volume of data available from the Clayton, New Mexico, Vertical Plane Array (VPA) project. Simulations were run of the rotationally sampled wind speed relative to the earth, as well as the tangential and radial wind speeds, which are relative to the rotating wind turbine blade. Spectral analysis is used to compare and assess wind simulations from the different wind regimes, as well as from alternate wind measurement techniques. The variance in the wind speed at frequencies at or above the blade rotation rate is computed for all cases, and is used to quantitatively compare the VAWT simulations with Horizontal Axis Wind Turbine (HAWT) simulations. Qualitative comparisons are also made with direct wind measurements from a VAWT blade.

  16. Trace metal levels and partitioning in Wisconsin rivers: Results of background trace metals study

    SciTech Connect (OSTI)

    Shafer, M.M.; Overdier, J.T.; Armstrong, D.E.; Hurley, J.P.; Webb, D.A.

    1994-12-31

    Levels of total and filtrable Ag, Al, Cd, Cu, Pb, and Zn in 41 Wisconsin rivers draining watersheds of distinct homogeneous characteristics (land use/cover, soil type, surficial geology) were quantified. Levels, fluxes, and yields of trace metals are interpreted in terms of principal geochemical controls. The study samples were also used to evaluate the capability of modern ICP-MS techniques for 'background' level quantification of metals. Order-of-magnitude variations in levels of a given metal between sites were measured. This large natural variance reflects influences of soil type, dissolved organic matter (DOC), ionic strength, and suspended particulate matter (SPM) on metal levels. Significant positive correlations between DOC levels and filtrable metal concentrations were observed, demonstrating the important role that DOC plays in metal speciation and behavior. Systematic, chemically consistent differences in behavior between the metals are evident, with partition coefficients (K{sub d}) and fraction in particulate forms ranking in the order: Al > Pb > Zn > Cr > Cd > Cu. Total metal yields correlate well with SPM yields, especially for highly partitioned elements, whereas filtrable metal yields reflect the interplay of partitioning and water yield. The State of Wisconsin will use these data in a re-evaluation of regulatory limits and in the development of water effects ratio criteria.

  17. Hazardous waste identification: A guide to changing regulations

    SciTech Connect (OSTI)

    Stults, R.G.

    1993-03-01

    The Resource Conservation and Recovery Act (RCRA) was enacted in 1976 and amended in 1984 by the Hazardous and Solid Waste Amendments (HSWA). Since then, federal regulations have generated a profusion of terms to identify and describe hazardous wastes. Regulations that define and govern management of hazardous wastes are codified in Title 40 of the Code of Federal Regulations, 'Protection of the Environment'. Title 40 regulations are divided into chapters, subchapters and parts. To be defined as hazardous, a waste must satisfy the definition of solid waste: any discarded material not specifically excluded from regulation or granted a regulatory variance by the EPA Administrator. Some wastes and other materials have been identified as non-hazardous and are listed in 40 CFR 261.4(a) and 261.4(b). Certain wastes that satisfy the definition of hazardous waste nevertheless are excluded from regulation as hazardous if they meet specific criteria. Definitions and criteria for their exclusion are found in 40 CFR 261.4(c)-(f) and 40 CFR 261.5.

  18. Dynamics of dispersive photon-number QND measurements in a micromaser

    SciTech Connect (OSTI)

    Kozlovskii, A. V. [Russian Academy of Sciences, Lebedev Physical Institute (Russian Federation)], E-mail: kozlovsk@sci.lebedev.ru

    2007-04-15

    A numerical analysis of dispersive quantum nondemolition measurement of the photon number of a microwave cavity field is presented. Simulations show that a key property of the dispersive atom-field interaction used in Ramsey interferometry is the extremely high sensitivity of the dynamics of atomic and field states to basic parameters of the system. When a monokinetic atomic beam is sent through a microwave cavity, a qualitative change in the field state can be caused by an uncontrollably small deviation of parameters (such as atom path length through the cavity, atom velocity, cavity mode frequency detuning, or atom-field coupling constants). The resulting cavity field can be either in a Fock state or in a super-Poissonian state (characterized by a large photon-number variance). When the atoms have a random velocity spread, the field is squeezed to a Fock state for arbitrary values of the system's parameters. However, this makes detection of Ramsey fringes impossible, because the probability of detecting an atom in the upper or lower electronic state becomes a random quantity almost uniformly distributed over the interval between zero and unity, irrespective of the cavity photon number.

  19. Assessment of the measurement control program for solution assay instruments at the Los Alamos National Laboratory Plutonium Facility

    SciTech Connect (OSTI)

    Goldman, A.S.

    1985-05-01

    This report documents and reviews the measurement control program (MCP) over a 27-month period for four solution assay instruments (SAIs) at the Los Alamos National Laboratory Plutonium Facility. SAI measurement data collected during the period January 1982 through March 1984 were analyzed. The sources of these data included computer listings of measurements emanating from operator entries on computer terminals, logbook entries of measurements transcribed by operators, and computer listings of measurements recorded internally in the instruments. Data were also obtained from control charts that are available as part of the MCP. As a result of our analyses we observed agreement between propagated and historical variances and concluded the instruments were functioning properly from a precision aspect. We noticed small, persistent biases indicating slight instrument inaccuracies. We suggest that statistical tests for bias be incorporated in the MCP on a monthly basis and, if the instrument bias is significantly greater than zero, the instrument should undergo maintenance. We propose the weekly precision test be replaced by a daily test to provide more timely detection of possible problems. We observed that one instrument showed a trend of increasing bias during the past six months and recommend a randomness test be incorporated to detect trends in a more timely fashion. We detected operator transcription errors during data transmissions and advise direct instrument transmission to the MCP to eliminate these errors. A transmission error rate based on those errors that affected decisions in the MCP was estimated as 1%. 11 refs., 10 figs., 4 tabs.
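
    The suggested monthly bias test could be implemented, for example, as a one-sample t-test of the measured-minus-reference differences against zero; the sketch below is a generic illustration with made-up data, not the facility's MCP software.

      # One-sided t-test for a bias significantly greater than zero.
      # Illustrative data only; requires scipy >= 1.6 for the 'alternative' keyword.
      import numpy as np
      from scipy import stats

      differences = np.array([0.4, 0.6, 0.3, 0.5, 0.7, 0.2, 0.4])  # measured - reference
      t_stat, p_value = stats.ttest_1samp(differences, popmean=0.0, alternative="greater")
      if p_value < 0.05:
          print("Bias significantly greater than zero: schedule instrument maintenance")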

  20. Wind Measurements from Arc Scans with Doppler Wind Lidar

    SciTech Connect (OSTI)

    Wang, H.; Barthelmie, R. J.; Clifton, Andy; Pryor, S. C.

    2015-11-25

    Defining optimal scanning geometries for scanning lidars for wind energy applications is still an active field of research. Our paper evaluates uncertainties associated with arc scan geometries and presents recommendations regarding optimal configurations in the atmospheric boundary layer. The analysis is based on arc scan data from a Doppler wind lidar with one elevation angle and seven azimuth angles spanning 30° and focuses on an estimation of 10-min mean wind speed and direction. When flow is horizontally uniform, this approach can provide accurate wind measurements required for wind resource assessments, in part because of its high resampling rate. Retrieved wind velocities at a single range gate exhibit good correlation to data from a sonic anemometer on a nearby meteorological tower, and vertical profiles of horizontal wind speed, though derived from range gates located on a conical surface, match those measured by mast-mounted cup anemometers. Uncertainties in the retrieved wind velocity are related to high turbulent wind fluctuation and an inhomogeneous horizontal wind field. Moreover, the radial velocity variance is found to be a robust measure of the uncertainty of the retrieved wind speed because of its relationship to turbulence properties. It is further shown that the standard error of wind speed estimates can be minimized by increasing the azimuthal range beyond 30° and using five to seven azimuth angles.
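
    For horizontally uniform flow, the 10-min mean horizontal wind can be retrieved from the arc-scan radial velocities by least squares, as in the sketch below; the elevation angle, azimuths, and radial velocities are illustrative values, not data from the paper.

      # Least-squares retrieval of (u, v) from arc-scan radial velocities,
      # assuming uniform flow and negligible vertical velocity. Synthetic data.
      import numpy as np

      elevation = np.deg2rad(15.0)
      azimuths = np.deg2rad(np.array([75., 80., 85., 90., 95., 100., 105.]))  # 30 deg arc
      v_radial = np.array([6.1, 6.4, 6.6, 6.7, 6.6, 6.4, 6.1])                # m/s

      # Model: v_r = u*sin(az)*cos(el) + v*cos(az)*cos(el)
      A = np.column_stack([np.sin(azimuths) * np.cos(elevation),
                           np.cos(azimuths) * np.cos(elevation)])
      (u, v), residuals, *_ = np.linalg.lstsq(A, v_radial, rcond=None)

      wind_speed = np.hypot(u, v)
      wind_dir = np.degrees(np.arctan2(-u, -v)) % 360.0   # meteorological convention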

  1. The effect of large-scale model time step and multiscale coupling frequency on cloud climatology, vertical structure, and rainfall extremes in a superparameterized GCM

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yu, Sungduk; Pritchard, Michael S.

    2015-12-17

    The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model ver 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m2) and longwave cloud forcing (~5 W/m2) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune superparameterized GCMs.

  2. Three-dimensional hydrodynamics of the deceleration stage in inertial confinement fusion

    SciTech Connect (OSTI)

    Weber, C. R.; Clark, D. S.; Cook, A. W.; Eder, D. C.; Haan, S. W.; Hammel, B. A.; Hinkel, D. E.; Jones, O. S.; Marinak, M. M.; Milovich, J. L.; Patel, P. K.; Robey, H. F.; Salmonson, J. D.; Sepke, S. M.; Thomas, C. A.

    2015-03-15

    The deceleration stage of inertial confinement fusion implosions is modeled in detail using three-dimensional simulations designed to match experiments at the National Ignition Facility. In this final stage of the implosion, shocks rebound from the center of the capsule, forming the high-temperature, low-density hot spot and slowing the incoming fuel. The flow field that results from this process is highly three-dimensional and influences many aspects of the implosion. The interior of the capsule has high-velocity motion, but viscous effects limit the range of scales that develop. The bulk motion of the hot spot shows qualitative agreement with experimental velocity measurements, while the variance of the hot spot velocity would broaden the DT neutron spectrum, increasing the inferred temperature by 400-800 eV. Jets of ablator material are broken apart and redirected as they enter this dynamic hot spot. Deceleration stage simulations using two fundamentally different rad-hydro codes are compared and the flow field is found to be in good agreement.

  3. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's manual.

    SciTech Connect (OSTI)

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  4. Dakota :

    SciTech Connect (OSTI)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  5. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis.

    SciTech Connect (OSTI)

    Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.

    2011-12-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.

  6. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    SciTech Connect (OSTI)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell; Non-NSTec Authors: G. Pyles and Jon Carilli

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
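
    Variance-based sensitivity indices of the kind used above (for example, first-order Sobol indices) can be estimated with a pick-and-freeze Monte Carlo scheme, as in the sketch below; the stand-in response function is not any of the radon flux density models.

      # Pick-and-freeze (Saltelli-style) estimate of first-order Sobol indices
      # for a stand-in model. Illustrative only.
      import numpy as np

      def model(x):
          # Stand-in response: increases with the first two inputs, decreases
          # with the third. Not a radon flux model.
          return x[:, 0] * x[:, 1] / (1.0 + x[:, 2])

      rng = np.random.default_rng(0)
      n, k = 100_000, 3
      A = rng.random((n, k))
      B = rng.random((n, k))

      yA, yB = model(A), model(B)
      var_y = np.var(np.concatenate([yA, yB]))

      first_order = []
      for i in range(k):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                        # replace only column i
          yABi = model(ABi)
          Si = np.mean(yB * (yABi - yA)) / var_y     # Saltelli (2010) estimator
          first_order.append(Si)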

  7. Water Velocity Measurements on a Vertical Barrier Screen at the Bonneville Dam Second Powerhouse

    SciTech Connect (OSTI)

    Hughes, James S.; Deng, Zhiqun; Weiland, Mark A.; Martinez, Jayson J.; Yuan, Yong

    2011-11-22

    Fish screens at hydroelectric dams help to protect rearing and migrating fish by preventing them from passing through the turbines and directing them towards the bypass channels by providing a sweeping flow parallel to the screen. However, fish screens may actually be harmful to fish if they become impinged on the surface of the screen or become disoriented due to poor flow conditions near the screen. Recent modifications to the vertical barrier screens (VBS) at the Bonneville Dam second powerhouse (B2) intended to increase the guidance of juvenile salmonids into the juvenile bypass system (JBS) have resulted in high mortality and descaling rates of hatchery subyearling Chinook salmon during the 2008 juvenile salmonid passage season. To investigate the potential cause of the high mortality and descaling rates, an in situ water velocity measurement study was conducted using acoustic Doppler velocimeters (ADV) in the gatewell slot at Units 12A and 14A of B2. From the measurements collected the average approach velocity, sweep velocity, and the root mean square (RMS) value of the velocity fluctuations were calculated. The approach velocities measured across the face of the VBS varied but were mostly less than 0.3 m/s. The sweep velocities also showed large variances across the face of the VBS with most measurements being less than 1.5 m/s. This study revealed that the approach velocities exceeded criteria recommended by NOAA Fisheries and Washington State Department of Fish and Wildlife intended to improve fish passage conditions.

  8. On the equivalence of the RTI and SVM approaches to time correlated analysis

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Croft, S.; Favalli, A.; Henzlova, D.; Santi, P. A.

    2014-11-21

    Recently two papers on how to perform passive neutron auto-correlation analysis on time gated histograms formed from pulse train data, generically called time correlation analysis (TCA), have appeared in this journal [1,2]. For those of us working in international nuclear safeguards these treatments are of particular interest because passive neutron multiplicity counting is a widely deployed technique for the quantification of plutonium. The purpose of this letter is to show that the skewness-variance-mean (SVM) approach developed in [1] is equivalent in terms of assay capability to the random trigger interval (RTI) analysis laid out in [2]. Mathematically we could also use other numerical ways to extract the time correlated information from the histogram data including for example what we might call the mean, mean square, and mean cube approach. The important feature however, from the perspective of real world applications, is that the correlated information extracted is the same, and subsequently gets interpreted in the same way based on the same underlying physics model.
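
    The moment-extraction step of an SVM-style analysis reduces the time-gated count histogram to its mean, variance, and third central moment, as in the sketch below; the histogram values are synthetic, and the physics model that interprets these moments is not shown.

      # Mean, variance, and third central moment of a gate-count histogram.
      # Synthetic histogram; interpretation model not included.
      import numpy as np

      counts = np.array([0, 1, 2, 3, 4, 5, 6])             # counts observed in a gate
      frequency = np.array([520, 310, 110, 40, 15, 4, 1])   # how often each count occurred

      p = frequency / frequency.sum()
      mean = np.sum(counts * p)
      variance = np.sum((counts - mean) ** 2 * p)
      third_moment = np.sum((counts - mean) ** 3 * p)        # "skewness" in the SVM sense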

  9. Extragalactic foreground contamination in temperature-based CMB lens reconstruction

    SciTech Connect (OSTI)

    Osborne, Stephen J.; Hanson, Duncan; Doré, Olivier E-mail: dhanson@physics.mcgill.ca

    2014-03-01

    We discuss the effect of unresolved point source contamination on estimates of the CMB lensing potential, from components such as the thermal Sunyaev-Zel'dovich effect, radio point sources, and the Cosmic Infrared Background. We classify the possible trispectra associated with such source populations, and construct estimators for the amplitude and scale-dependence of several of the major trispectra. We show how to propagate analytical models for these source trispectra to biases for lensing. We also construct a ''source-hardened'' lensing estimator which experiences significantly smaller biases when exposed to unresolved point sources than the standard quadratic lensing estimator. We demonstrate these ideas in practice using the sky simulations of Sehgal et al., for cosmic-variance limited experiments designed to mimic ACT, SPT, and Planck. We find that for radio sources and SZ the bias is significantly reduced, but for CIB it is essentially unchanged. However, by using the high-frequency, all-sky CIB measurements from Planck and Herschel it may be possible to suppress this contribution.

  10. Estimations of atmospheric boundary layer fluxes and other turbulence parameters from Doppler lidar data

    SciTech Connect (OSTI)

    Tzvi Galchen; Mei Xu; Eberhard, W.L.

    1992-11-30

    This work is part of the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE), an international land-surface-atmosphere experiment aimed at improving the way climate models represent energy, water, heat, and carbon exchanges, and improving the utilization of satellite based remote sensing to monitor such parameters. Here the authors present results on Doppler lidar measurements used to measure a range of turbulence parameters in the region of the unstable planetary boundary layer (PBL). The parameters include averaged velocities, Cartesian velocities, variances in velocities, parts of the covariance associated with vertical fluxes of horizontal momentum, and third moments of the vertical velocity. They explain their analysis technique, especially as it relates to error reduction of the averaged turbulence parameters from individual measurements with relatively large errors. The scales studied range from 150 m to 12 km. With this new diagnostic they address questions about the behavior of the convectively unstable PBL, as well as the stable layer which overlies it.

  11. Long-term Observations of the Convective Boundary Layer Using Insect Radar Returns at the SGP ARM Climate Research Facility

    SciTech Connect (OSTI)

    Chandra, A S; Kollias, P; Giangrande, S E; Klein, S A

    2009-08-20

    A long-term study of the turbulent structure of the convective boundary layer (CBL) at the U.S. Department of Energy Atmospheric Radiation Measurement Program (ARM) Southern Great Plains (SGP) Climate Research Facility is presented. Doppler velocity measurements from insects occupying the lowest 2 km of the boundary layer during summer months are used to map the vertical velocity component in the CBL. The observations cover four summer periods (2004-08) and are classified into cloudy and clear boundary layer conditions. Profiles of vertical velocity variance, skewness, and mass flux are estimated to study the daytime evolution of the convective boundary layer during these conditions. A conditional sampling method is applied to the original Doppler velocity dataset to extract coherent vertical velocity structures and to examine plume dimension and contribution to the turbulent transport. Overall, the derived turbulent statistics are consistent with previous aircraft and lidar observations. The observations provide unique insight into the daytime evolution of the convective boundary layer and the role of increased cloudiness in the turbulent budget of the subcloud layer. Coherent structures (plumes-thermals) are found to be responsible for more than 80% of the total turbulent transport resolved by the cloud radar system. The extended dataset is suitable for evaluating boundary layer parameterizations and testing large-eddy simulations (LESs) for a variety of surface and cloud conditions.
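
    A minimal sketch of the profile statistics and the conditional-sampling step (variance and skewness of vertical velocity at each height, plus a simple updraft 'plume' flag) is given below with synthetic data; the actual thresholds and plume definitions used in the study may differ.

      # Variance and skewness profiles plus simple conditional sampling of updrafts.
      # Synthetic time x height vertical-velocity data.
      import numpy as np
      from scipy.stats import skew

      rng = np.random.default_rng(0)
      w = rng.normal(0.0, 1.0, size=(5000, 40))     # time x height, m/s

      w_var = np.var(w, axis=0)                      # variance profile
      w_skew = skew(w, axis=0)                       # skewness profile

      # Conditional sampling: flag coherent updrafts stronger than one standard deviation
      threshold = np.std(w, axis=0)
      plume = w > threshold
      plume_fraction = plume.mean(axis=0)                      # fractional coverage
      plume_mass_flux = np.where(plume, w, 0.0).mean(axis=0)   # plume contribution to <w>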

  12. Land Disposal Restrictions (LDR) program overview

    SciTech Connect (OSTI)

    Not Available

    1993-04-01

    The Hazardous and Solid Waste Amendments (HSWA) to the Resource Conservation and Recovery Act (RCRA) enacted in 1984 required the Environmental Protection Agency (EPA) to evaluate all listed and characteristic hazardous wastes according to a strict schedule and to develop requirements by which disposal of these wastes would be protective of human health and the environment. The implementing regulations for accomplishing this statutory requirement are established within the Land Disposal Restrictions (LDR) program. The LDR regulations (40 CFR Part 268) impose significant requirements on waste management operations and environmental restoration activities at DOE sites. For hazardous wastes restricted by statute from land disposal, EPA is required to set levels or methods of treatment that substantially reduce the waste's toxicity or the likelihood that the waste's hazardous constituents will migrate. Upon the specified LDR effective dates, restricted wastes that do not meet treatment standards are prohibited from land disposal unless they qualify for certain variances or exemptions. This document provides an overview of the LDR Program.

  13. Out-of-plane ultrasonic velocity measurement

    DOE Patents [OSTI]

    Hall, Maclin S.; Brodeur, Pierre H.; Jackson, Theodore G.

    1998-01-01

    A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated.
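
    The same-sample-period cross-correlation can be illustrated with the short numpy sketch below, which estimates a time of flight as the lag that maximizes the correlation between a transmitted pulse and its echo; the waveform and sampling rate are synthetic, not taken from the patent.

      # Time-of-flight estimate from cross-correlation of a transmitted pulse
      # and its echo recorded in the same sample period. Synthetic signals.
      import numpy as np

      fs = 10e6                                       # sampling rate (Hz)
      t = np.arange(0, 200e-6, 1.0 / fs)
      pulse = np.sin(2 * np.pi * 1e6 * t) * np.exp(-((t - 10e-6) / 4e-6) ** 2)

      true_delay_samples = 650
      echo = 0.4 * np.roll(pulse, true_delay_samples)
      echo += 0.02 * np.random.default_rng(0).standard_normal(echo.size)

      xcorr = np.correlate(echo, pulse, mode="full")
      lag = np.argmax(xcorr) - (pulse.size - 1)       # lag in samples
      time_of_flight = lag / fs                        # seconds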

  14. Out-of-plane ultrasonic velocity measurement

    DOE Patents [OSTI]

    Hall, M.S.; Brodeur, P.H.; Jackson, T.G.

    1998-07-14

    A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated. 20 figs.

  15. Statistical Analysis of Variation in the Human Plasma Proteome

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Corzett, Todd H.; Fodor, Imola K.; Choi, Megan W.; Walsworth, Vicki L.; Turteltaub, Kenneth W.; McCutchen-Maloney, Sandra L.; Chromy, Brett A.

    2010-01-01

    Quantifying the variation in the human plasma proteome is an essential prerequisite for disease-specific biomarker detection. We report here on the longitudinal and individual variation in human plasma characterized by two-dimensional difference gel electrophoresis (2-D DIGE) using plasma samples from eleven healthy subjects collected three times over a two week period. Fixed-effects modeling was used to remove dye and gel variability. Mixed-effects modeling was then used to quantitate the sources of proteomic variation. The subject-to-subject variation represented the largest variance component, while the time-within-subject variation was comparable to the experimental variation found in a previous technical variability study where one human plasma sample was processed eight times in parallel and each was then analyzed by 2-D DIGE in triplicate. Here, 21 protein spots had larger than 50% CV, suggesting that these proteins may not be appropriate as biomarkers and should be carefully scrutinized in future studies. Seventy-eight protein spots showing differential protein levels between different individuals or individual collections were identified by mass spectrometry and further characterized using hierarchical clustering. The results present a first step toward understanding the complexity of longitudinal and individual variation in the human plasma proteome, and provide a baseline for improved biomarker discovery.

  16. Measuring kinetic energy changes in the mesoscale with low acquisition rates

    SciTech Connect (OSTI)

    Roldán, É.; Martínez, I. A.; Rica, R. A.; Dinis, L.

    2014-06-09

    We report on the measurement of the average kinetic energy changes in isothermal and non-isothermal quasistatic processes in the mesoscale, realized with a Brownian particle trapped with optical tweezers. Our estimation of the kinetic energy change allows access to the full energetic description of the Brownian particle. Kinetic energy estimates are obtained from measurements of the mean square velocity of the trapped bead sampled at frequencies several orders of magnitude smaller than the momentum relaxation frequency. The velocity is tuned applying a noisy electric field that modulates the amplitude of the fluctuations of the position and velocity of the Brownian particle, whose motion is equivalent to that of a particle in a higher temperature reservoir. Additionally, we show that the dependence of the variance of the time-averaged velocity on the sampling frequency can be used to quantify properties of the electrophoretic mobility of a charged colloid. Our method could be applied to detect temperature gradients in inhomogeneous media and to characterize the complete thermodynamics of biological motors and of artificial micro and nanoscopic heat engines.

  17. Cosmic Shear Measurements with DES Science Verification Data

    SciTech Connect (OSTI)

    Becker, M. R.

    2015-07-20

    We present measurements of weak gravitational lensing cosmic shear two-point statistics using Dark Energy Survey Science Verification data. We demonstrate that our results are robust to the choice of shear measurement pipeline, either ngmix or im3shape, and robust to the choice of two-point statistic, including both real and Fourier-space statistics. Our results pass a suite of null tests including tests for B-mode contamination and direct tests for any dependence of the two-point functions on a set of 16 observing conditions and galaxy properties, such as seeing, airmass, galaxy color, galaxy magnitude, etc. We use a large suite of simulations to compute the covariance matrix of the cosmic shear measurements and assign statistical significance to our null tests. We find that our covariance matrix is consistent with the halo model prediction, indicating that it has the appropriate level of halo sample variance. We also compare the same jackknife procedure applied to the data and the simulations in order to search for additional sources of noise not captured by the simulations. We find no statistically significant extra sources of noise in the data. The overall detection significance with tomography for our highest source density catalog is 9.7σ. Cosmological constraints from the measurements in this work are presented in a companion paper (DES et al. 2015).
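
    A delete-one jackknife covariance estimate of the sort applied to both the data and the simulations can be sketched as follows; the number of patches, bins, and the synthetic data vector are assumptions made only for illustration.

      # Delete-one jackknife covariance of a binned two-point statistic measured
      # in N patches. Synthetic inputs.
      import numpy as np

      rng = np.random.default_rng(0)
      n_patches, n_bins = 100, 20
      xi_patches = rng.normal(1.0, 0.1, size=(n_patches, n_bins))

      jack_means = np.array([np.delete(xi_patches, i, axis=0).mean(axis=0)
                             for i in range(n_patches)])
      jack_mean = jack_means.mean(axis=0)
      cov = ((n_patches - 1) / n_patches) * (jack_means - jack_mean).T @ (jack_means - jack_mean)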

  18. A fast contour descriptor algorithm for supernova imageclassification

    SciTech Connect (OSTI)

    Aragon, Cecilia R.; Aragon, David Bradburn

    2006-07-16

    We describe a fast contour descriptor algorithm and its application to a distributed supernova detection system (the Nearby Supernova Factory) that processes 600,000 candidate objects in 80 GB of image data per night. Our shape-detection algorithm reduced the number of false positives generated by the supernova search pipeline by 41% while producing no measurable impact on running time. Fourier descriptors are an established method of numerically describing the shapes of object contours, but transform-based techniques are ordinarily avoided in this type of application due to their computational cost. We devised a fast contour descriptor implementation for supernova candidates that meets the tight processing budget of the application. Using the lowest-order descriptors (F{sub 1} and F{sub -1}) and the total variance in the contour, we obtain one feature representing the eccentricity of the object and another denoting its irregularity. Because the number of Fourier terms to be calculated is fixed and small, the algorithm runs in linear time, rather than the O(n log n) time of an FFT. Constraints on object size allow further optimizations so that the total cost of producing the required contour descriptors is about 4n addition/subtraction operations, where n is the length of the contour.
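
    A minimal sketch of computing only the lowest-order descriptors by direct summation (hence linear time in the contour length), together with an eccentricity-like and an irregularity-like feature, is shown below; the exact feature definitions used by the pipeline may differ.

      # Lowest-order Fourier descriptors of a closed contour by direct summation.
      # Feature definitions are illustrative, not necessarily the pipeline's.
      import numpy as np

      def low_order_descriptors(x, y):
          z = np.asarray(x, dtype=float) + 1j * np.asarray(y, dtype=float)
          n = z.size
          m = np.arange(n)
          f_plus1 = np.sum(z * np.exp(-2j * np.pi * m / n)) / n    # F_{+1}
          f_minus1 = np.sum(z * np.exp(+2j * np.pi * m / n)) / n   # F_{-1}
          total_variance = np.mean(np.abs(z - z.mean()) ** 2)
          a, b = abs(f_plus1), abs(f_minus1)
          eccentricity = min(a, b) / max(a, b)              # elongation-type feature
          irregularity = total_variance - a**2 - b**2        # power not in the ellipse terms
          return eccentricity, irregularity

      theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
      ecc, irr = low_order_descriptors(3 * np.cos(theta), 1.5 * np.sin(theta))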

  19. Standard Methods of Characterizing Performance of Fan FilterUnits, Version 3.0

    SciTech Connect (OSTI)

    Xu, Tengfang

    2007-01-01


  20. Shear wall ultimate drift limits

    SciTech Connect (OSTI)

    Duffey, T.A.; Goldman, A.; Farrar, C.R.

    1994-04-01

    Drift limits for reinforced-concrete shear walls are investigated by reviewing the open literature for appropriate experimental data. Drift values at ultimate are determined for walls with aspect ratios ranging up to a maximum of 3.53 and undergoing different types of lateral loading (cyclic static, monotonic static, and dynamic). Based on the geometry of actual nuclear power plant structures exclusive of containments and concerns regarding their response during seismic (i.e., cyclic) loading, data are obtained from pertinent references for which the wall aspect ratio is less than or equal to approximately 1, and for which testing is cyclic in nature (typically displacement controlled). In particular, lateral deflections at ultimate load, and at points in the softening region beyond ultimate for which the load has dropped to 90, 80, 70, 60, and 50 percent of its ultimate value, are obtained and converted to drift information. The statistical nature of the data is also investigated. These data are shown to be lognormally distributed, and an analysis of variance is performed. The use of statistics to estimate Probability of Failure for a shear wall structure is illustrated.

  1. SU-E-J-16: A Review of the Magnitude of Patient Imaging Shifts in Relation to Departmental Policy Changes

    SciTech Connect (OSTI)

    O'Connor, M; Sansourekidou, P

    2014-06-01

    Purpose: To evaluate how changes in imaging policy affect the magnitude of shifts applied to patients. Methods: In June 2012, the department's imaging policy was altered to require that any shifts derived from imaging throughout the course of treatment shall be considered systematic only after they were validated with two data points that are consistent in the same direction. Multiple additions and clarifications to the imaging policy were implemented throughout the course of the data collection, but they were mostly of an administrative nature. Entered shifts were documented in MOSAIQ (Elekta AB) through the localization offset. The MOSAIQ database was queried to identify a possible trend. A total of 25,670 entries were analyzed, including four linear accelerators with a combination of MV planar, kV planar and kV three dimensional imaging. The monthly average of the magnitude of the shift vector was used. Plan relative offsets were excluded. During the evaluated period of time, one of the satellite facilities acquired and implemented Vision RT (AlignRT Inc). Results: After the new policy was implemented, the variance and standard deviation of the shifts decreased. The decrease is linear with time elapsed. Vision RT implementation at one satellite facility reduced the number of overall shifts, specifically for breast patients. Conclusion: Changes in imaging policy have a significant effect on the magnitude of shifts applied to patients. Requiring two consistent data points before treating a shift as systematic decreased the overall magnitude of the shifts applied to patients.

  2. Lessons Learned from the Application of Bulk Characterization to Individual Containers on the Brookhaven Graphite Research Reactor Decommissioning Project at Brookhaven National Laboratory - 12056

    SciTech Connect (OSTI)

    Kneitel, Terri; Rocco, Diane

    2012-07-01

    When conducting environmental cleanup or decommissioning projects, characterization of the material to be removed is often performed when the material is in-situ. The actual demolition or excavation and removal of the material can result in individual containers that vary significantly from the original bulk characterization profile. This variance, if not detected, can result in individual containers exceeding Department of Transportation regulations or waste disposal site acceptance criteria. Bulk waste characterization processes were performed to initially characterize the Brookhaven Graphite Research Reactor (BGRR) graphite pile and this information was utilized to characterize all of the containers of graphite. When the last waste container was generated containing graphite dust from the bottom of the pile, but no solid graphite blocks, the material contents were significantly different in composition from the bulk waste characterization. This error resulted in exceedance of the disposal site waste acceptance criteria. Brookhaven Science Associates initiated an in-depth investigation to identify the root causes of this failure and to develop appropriate corrective actions. The lessons learned at BNL have applicability to other cleanup and demolition projects which characterize their wastes in bulk or in-situ and then extend that characterization to individual containers. (authors)

  3. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    SciTech Connect (OSTI)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  4. Analysis and synthesis of the variability of irradiance and PV power time series with the wavelet transform

    SciTech Connect (OSTI)

    Perpinan, O.; Lorenzo, E.

    2011-01-15

    The irradiance fluctuations and the subsequent variability of the power output of a PV system are analysed with some mathematical tools based on the wavelet transform. It can be shown that the irradiance and power time series are nonstationary processes whose behaviour resembles that of a long memory process. Besides, the long memory spectral exponent {alpha} is a useful indicator of the fluctuation level of an irradiance time series. On the other hand, a time series of global irradiance on the horizontal plane can be simulated by means of the wavestrapping technique on the clearness index, and the fluctuation behaviour of this simulated time series correctly resembles the original series. Moreover, a time series of global irradiance on the inclined plane can be simulated with the wavestrapping procedure applied over a signal previously detrended by a partial reconstruction with a wavelet multiresolution analysis, and, once again, the fluctuation behaviour of this simulated time series is correct. This procedure is a suitable tool for the simulation of irradiance incident over a group of distant PV plants. Finally, a wavelet variance analysis and the long memory spectral exponent show that a PV plant behaves as a low-pass filter. (author)
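
    A wavelet variance analysis of the kind mentioned above can be sketched with PyWavelets: decompose a clearness-index series and compute the variance carried at each detail level; the series below is synthetic and the wavelet choice is an assumption.

      # Per-scale wavelet variance of a synthetic clearness-index series.
      # Wavelet family and decomposition depth are illustrative choices.
      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      t = np.arange(4096)
      kt = 0.6 + 0.1 * np.sin(2 * np.pi * t / 512) + 0.05 * rng.standard_normal(t.size)

      coeffs = pywt.wavedec(kt, "db4", level=6)           # [cA6, cD6, ..., cD1]
      detail_variance = [np.var(c) for c in coeffs[1:]]    # variance per detail level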

  5. One-electron reduced density matrices of strongly correlated harmonium atoms

    SciTech Connect (OSTI)

    Cioslowski, Jerzy

    2015-03-21

    Explicit asymptotic expressions are derived for the reduced one-electron density matrices (the 1-matrices) of strongly correlated two- and three-electron harmonium atoms in the ground and first excited states. These expressions, which are valid at the limit of small confinement strength {omega}, yield electron densities and kinetic energies in agreement with the published values. In addition, they reveal the {omega}{sup 5/6} asymptotic scaling of the exchange components of the electron-electron repulsion energies that differs from the {omega}{sup 2/3} scaling of their Coulomb and correlation counterparts. The natural orbitals of the totally symmetric ground state of the two-electron harmonium atom are found to possess collective occupancies that follow a mixed power/Gaussian dependence on the angular momentum, at variance with the simple power-law prediction of Hill's asymptotics. Providing rigorous constraints on energies as functionals of 1-matrices, these results are expected to facilitate development of approximate implementations of the density matrix functional theory and ensure their proper description of strongly correlated systems.

  6. Species interactions differ in their genetic robustness

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Chubiz, Lon M.; Granger, Brian R.; Segre, Daniel; Harcombe, William R.

    2015-04-14

    Conflict and cooperation between bacterial species drive the composition and function of microbial communities. Stability of these emergent properties will be influenced by the degree to which species' interactions are robust to genetic perturbations. We use genome-scale metabolic modeling to computationally analyze the impact of genetic changes when Escherichia coli and Salmonella enterica compete, or cooperate. We systematically knocked out in silico each reaction in the metabolic network of E. coli to construct all 2583 mutant stoichiometric models. Then, using a recently developed multi-scale computational framework, we simulated the growth of each mutant E. coli in the presence of S. enterica. The type of interaction between species was set by modulating the initial metabolites present in the environment. We found that the community was most robust to genetic perturbations when the organisms were cooperating. Species ratios were more stable in the cooperative community, and community biomass had equal variance in the two contexts. Additionally, the number of mutations that have a substantial effect is lower when the species cooperate than when they are competing. In contrast, when mutations were added to the S. enterica network the system was more robust when the bacteria were competing. These results highlight the utility of connecting metabolic mechanisms and studies of ecological stability. Cooperation and conflict alter the connection between genetic changes and properties that emerge at higher levels of biological organization.

  7. 2D stochastic-integral models for characterizing random grain noise in titanium alloys

    SciTech Connect (OSTI)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Cherry, Matthew; Pilchak, Adam; Knopp, Jeremy S.; Blodgett, Mark P.

    2014-02-18

    We extend our previous work, in which we applied high-dimensional model representation (HDMR) and analysis of variance (ANOVA) concepts to the characterization of a metallic surface that has undergone a shot-peening treatment to reduce residual stresses, and has, therefore, become a random conductivity field. That example was treated as a one-dimensional problem, because those were the only data available. In this study, we develop a more rigorous two-dimensional model for characterizing random, anisotropic grain noise in titanium alloys. Such a model is necessary if we are to accurately capture the 'clumping' of crystallites into long chains that appear during the processing of the metal into a finished product. The mathematical model starts with an application of the Karhunen-Loève (K-L) expansion for the two random Euler angles that characterize the orientation of each crystallite in the sample. The random orientation of each crystallite then defines the stochastic nature of the electrical conductivity tensor of the metal. We study two possible covariances, Gaussian and double-exponential, which serve as the kernel of the K-L integral equation, and find that, of the two, the double-exponential appears to match the measurements more closely. Results based on data from a Ti-7Al sample are given, and further applications of HDMR and ANOVA are discussed.

  8. A Two-Stage Kalman Filter Approach for Robust and Real-Time Power System State Estimation

    SciTech Connect (OSTI)

    Zhang, Jinghe; Welch, Greg; Bishop, Gary; Huang, Zhenyu

    2014-04-01

    As electricity demand continues to grow and renewable energy increases its penetration in the power grid, real-time state estimation becomes essential for system monitoring and control. Recent developments in phasor technology make this possible using high-speed time-synchronized data provided by Phasor Measurement Units (PMUs). In this paper we present a two-stage Kalman filter approach to estimate the static state of voltage magnitudes and phase angles, as well as the dynamic state of generator rotor angles and speeds. Kalman filters achieve optimal performance only when the system noise characteristics have known statistical properties (zero-mean, Gaussian, and spectrally white). However, in practice the process and measurement noise models are usually difficult to obtain. Thus we have developed the Adaptive Kalman Filter with Inflatable Noise Variances (AKF with InNoVa), an algorithm that can efficiently identify and reduce the impact of incorrect system modeling and/or erroneous measurements. In stage one, we estimate the static state from raw PMU measurements using the AKF with InNoVa; then in stage two, the estimated static state is fed into an extended Kalman filter to estimate the dynamic state. Simulations demonstrate its robustness to sudden changes of system dynamics and erroneous measurements.
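
    The idea behind "inflatable" noise variances can be illustrated with a scalar filter: when an innovation is statistically inconsistent with the predicted covariance, the measurement-noise variance is temporarily inflated so the suspect sample carries less weight. The sketch below is a simplified stand-in, not the AKF with InNoVa of the paper; the random-walk state model, chi-square gate, and inflation factor are illustrative assumptions.

      # Minimal sketch of an adaptive scalar Kalman filter with inflated measurement noise.
      import numpy as np

      def adaptive_kalman(z, q=1e-4, r=1e-2, gate=9.0, inflate=10.0):
          """Filter measurements z of a slowly varying scalar state."""
          x, p = z[0], 1.0                      # initial state estimate and variance
          estimates = []
          for zk in z[1:]:
              p = p + q                         # predict (random-walk state model)
              nu = zk - x                       # innovation
              s = p + r                         # nominal innovation variance
              # inflate the noise variance for samples that fail a chi-square gate
              r_eff = r * inflate if nu**2 / s > gate else r
              s = p + r_eff
              k = p / s                         # update
              x = x + k * nu
              p = (1.0 - k) * p
              estimates.append(x)
          return np.array(estimates)

      rng = np.random.default_rng(1)
      meas = 1.0 + 0.1 * rng.standard_normal(200)
      meas[100] += 3.0                          # an erroneous PMU-like sample
      print(adaptive_kalman(meas)[[98, 99, 100]])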

  9. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    SciTech Connect (OSTI)

    Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.

    2015-11-19

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo simulations. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work has been done on obtaining reaction rates using KDEs. This paper introduces a new form of the mean free path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies into the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
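
    The contrast with a histogram tally can be seen in a toy 1-D example: instead of incrementing the bin a collision falls in, every collision spreads a kernel-weighted score over a set of fixed tally points. The sketch below uses a plain Gaussian kernel in space, not the MFP-based kernel of the paper; the bandwidth and the exponential collision-site distribution are illustrative assumptions.

      # Minimal sketch of a collision KDE tally in 1-D (illustrative, not the paper's estimator).
      import numpy as np

      def kde_tally(collision_x, weights, tally_x, bandwidth=0.05):
          """Score every collision at every tally point with a Gaussian kernel."""
          dx = tally_x[:, None] - collision_x[None, :]
          kernel = np.exp(-0.5 * (dx / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
          return kernel @ weights / weights.size

      rng = np.random.default_rng(2)
      collisions = rng.exponential(0.3, size=10_000)      # placeholder collision sites
      weights = np.ones_like(collisions)
      points = np.linspace(0.0, 2.0, 41)
      flux_estimate = kde_tally(collisions, weights, points)
      print(flux_estimate[:5])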

  10. IMPROVED VARIABLE STAR SEARCH IN LARGE PHOTOMETRIC DATA SETS: NEW VARIABLES IN CoRoT FIELD LRa02 DETECTED BY BEST II

    SciTech Connect (OSTI)

    Fruth, T.; Cabrera, J.; Csizmadia, Sz.; Eigmueller, P.; Erikson, A.; Kirste, S.; Pasternacki, T.; Rauer, H.; Titz-Weider, R.; Kabath, P.; Chini, R.; Lemke, R.; Murphy, M.

    2012-06-15

    The CoRoT field LRa02 has been observed with the Berlin Exoplanet Search Telescope II (BEST II) during the southern summer 2007/2008. A first analysis of stellar variability led to the publication of 345 newly discovered variable stars. Now, a deeper analysis of this data set was used to optimize the variability search procedure. Several methods and parameters have been tested in order to improve the selection process compared to the widely used J index for variability ranking. This paper describes an empirical approach to treat systematic trends in photometric data based upon the analysis of variance statistics that can significantly decrease the rate of false detections. Finally, the process of reanalysis and method improvement has virtually doubled the number of variable stars compared to the first analysis by Kabath et al. A supplementary catalog of 272 previously unknown periodic variables plus 52 stars with suspected variability is presented. Improved ephemerides are given for 19 known variables in the field. In addition, the BEST II results are compared with CoRoT data and its automatic variability classification.

  11. Fission matrix-based Monte Carlo criticality analysis of fuel storage pools

    SciTech Connect (OSTI)

    Farlotti, M.; Larsen, E. W.

    2013-07-01

    Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
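
    Once the Monte Carlo transport step has produced a fission matrix, the eigenvalue and fission-source shape follow from standard linear algebra. The sketch below is a generic power iteration on a small hypothetical fission matrix, where element F[i, j] is taken as the expected number of fission neutrons born in region i per fission neutron born in region j; it is not the authors' test code, and the matrix values are invented for illustration.

      # Minimal sketch: dominant eigenpair of a fission matrix by power iteration.
      import numpy as np

      def dominant_mode(F, tol=1e-10, max_iter=10_000):
          s = np.ones(F.shape[0]) / F.shape[0]      # initial fission-source guess
          k = 1.0
          for _ in range(max_iter):
              s_new = F @ s
              k_new = s_new.sum() / s.sum()         # eigenvalue estimate
              s_new /= s_new.sum()                  # renormalize the source shape
              if abs(k_new - k) < tol:
                  return k_new, s_new
              k, s = k_new, s_new
          return k, s

      # hypothetical 4-region fission matrix for two weakly coupled assembly pairs
      F = np.array([[0.55, 0.25, 0.02, 0.00],
                    [0.20, 0.60, 0.00, 0.02],
                    [0.02, 0.00, 0.50, 0.30],
                    [0.00, 0.02, 0.25, 0.58]])
      k_eff, source = dominant_mode(F)
      print(k_eff, source)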

  12. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, developers manual.

    SciTech Connect (OSTI)

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

  13. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 developers manual.

    SciTech Connect (OSTI)

    Griffin, Joshua D. (Sandia National Laboratories, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

  14. Fluctuation statistics of mesoscopic Bose-Einstein condensates: Reconciling the master equation with the partition function to reexamine the Uhlenbeck-Einstein dilemma

    SciTech Connect (OSTI)

    Jordan, Andrew N.; Ooi, C. H. Raymond; Svidzinsky, Anatoly A.

    2006-09-15

    The atom fluctuation statistics of an ideal, mesoscopic, Bose-Einstein condensate are investigated from several different perspectives. By generalizing the grand canonical analysis (applied to the canonical ensemble problem), we obtain a self-consistent equation for the mean condensate particle number that coincides with the microscopic result calculated from the laser master equation approach. For the case of a harmonic trap, we obtain an analytic expression for the condensate particle number that is very accurate at all temperatures, when compared with numerical canonical ensemble results. Applying a similar generalized grand canonical treatment to the variance, we obtain an accurate result only below the critical temperature. Analytic results are found for all higher moments of the fluctuation distribution by employing the stochastic path integral formalism, with excellent accuracy. We further discuss a hybrid treatment, which combines the master equation and stochastic path integral analysis with results obtained based on the canonical ensemble quasiparticle formalism [Kocharovsky et al., Phys. Rev. A 61, 053606 (2000)], producing essentially perfect agreement with numerical simulation at all temperatures.

  15. Nonstationary stochastic charge fluctuations of a dust particle in plasmas

    SciTech Connect (OSTI)

    Shotorban, B.

    2011-06-15

    Stochastic charge fluctuations of a dust particle that are due to discreteness of electrons and ions in plasmas can be described by a one-step process master equation [T. Matsoukas and M. Russell, J. Appl. Phys. 77, 4285 (1995)] with no exact solution. In the present work, using the system size expansion method of Van Kampen along with the linear noise approximation, a Fokker-Planck equation with an exact Gaussian solution is developed by expanding the master equation. The Gaussian solution has time-dependent mean and variance governed by two ordinary differential equations modeling the nonstationary process of dust particle charging. The model is tested via the comparison of its results to the results obtained by solving the master equation numerically. The electron and ion currents are calculated through the orbital motion limited theory. At various times of the nonstationary process of charging, the model results are in a very good agreement with the master equation results. The deviation is more significant when the standard deviation of the charge is comparable to the mean charge in magnitude.

  16. Optimization of Micro Metal Injection Molding By Using Grey Relational Grade

    SciTech Connect (OSTI)

    Ibrahim, M. H. I. [Dept. Of Mechanical Engineering, Universiti Tun Hussein Onn Malaysia (UTHM), 86400 Parit Raja, Batu Pahat, Johor (Malaysia); Precision Process Research Group, Dept. of Mechanical and Materials Engineering, Faculty of Engineering, Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor (Malaysia); Muhamad, N.; Sulong, A. B.; Nor, N. H. M.; Harun, M. R.; Murtadhahadi [Precision Process Research Group, Dept. of Mechanical and Materials Engineering, Faculty of Engineering, Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor (Malaysia); Jamaludin, K. R. [UTM Razak School of Engineering and Advanced Technology, UTM International Campus, 54100 Jalan Semarak, Kuala Lumpur (Malaysia)

    2011-01-17

    Micro metal injection molding ({mu}MIM), a variant of the MIM process, is a promising method for producing near-net-shape metallic micro components of complex geometry. In this paper, {mu}MIM is applied to produce 316L stainless steel micro components. Because of the highly stringent property requirements of {mu}MIM, the study emphasizes optimization of the process parameters, where the Taguchi method associated with Grey Relational Analysis (GRA) is implemented as a novel approach to investigating multiple performance characteristics. The basic idea of GRA is to compute a grey relational grade (GRG) that converts the multi-objective case (density and strength) into a single objective for optimization. Considering the 'larger the better' criterion, results show that the injection time (D) is the most significant parameter, followed by injection pressure (A), holding time (E), mold temperature (C), and injection temperature (B). Analysis of variance (ANOVA) is also employed to confirm the significance of each parameter involved in this study.
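
    The conversion from two "larger-the-better" responses into a single grey relational grade can be written compactly. The sketch below uses invented density and strength values for four runs, equal objective weights, and the customary distinguishing coefficient of 0.5; it illustrates the GRG calculation only and is not the paper's data or code.

      # Minimal sketch of a grey relational grade for larger-the-better responses.
      import numpy as np

      def grey_relational_grade(responses, zeta=0.5):
          """responses: runs x objectives array, all larger-the-better."""
          # normalize each objective to [0, 1]
          norm = (responses - responses.min(axis=0)) / (responses.max(axis=0) - responses.min(axis=0))
          # deviation from the ideal (all-ones) reference sequence
          delta = 1.0 - norm
          coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
          return coeff.mean(axis=1)            # equal weights on the objectives

      # hypothetical density (g/cm^3) and strength (MPa) for four runs
      data = np.array([[7.61, 410.0],
                       [7.70, 455.0],
                       [7.55, 390.0],
                       [7.68, 470.0]])
      grg = grey_relational_grade(data)
      print("best run:", int(np.argmax(grg)), grg.round(3))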

  17. Wind Measurements from Arc Scans with Doppler Wind Lidar

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Wang, H.; Barthelmie, R. J.; Clifton, Andy; Pryor, S. C.

    2015-11-25

    Defining optimal scanning geometries for scanning lidars in wind energy applications remains an active field of research. Our paper evaluates uncertainties associated with arc scan geometries and presents recommendations regarding optimal configurations in the atmospheric boundary layer. The analysis is based on arc scan data from a Doppler wind lidar with one elevation angle and seven azimuth angles spanning 30° and focuses on an estimation of 10-min mean wind speed and direction. When flow is horizontally uniform, this approach can provide accurate wind measurements required for wind resource assessments, in part because of its high resampling rate. Retrieved wind velocities at a single range gate exhibit good correlation to data from a sonic anemometer on a nearby meteorological tower, and vertical profiles of horizontal wind speed, though derived from range gates located on a conical surface, match those measured by mast-mounted cup anemometers. Uncertainties in the retrieved wind velocity are related to high turbulent wind fluctuation and an inhomogeneous horizontal wind field. Moreover, the radial velocity variance is found to be a robust measure of the uncertainty of the retrieved wind speed because of its relationship to turbulence properties. It is further shown that the standard error of wind speed estimates can be minimized by increasing the azimuthal range beyond 30° and using five to seven azimuth angles.
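
    Retrieving a horizontal wind vector from an arc scan reduces to a small least-squares problem across the azimuth angles. The sketch below fits u and v to synthetic radial velocities from seven beams over a 30° arc at a single elevation angle, with vertical velocity neglected; the angles, wind vector, and noise level are illustrative assumptions rather than the authors' configuration.

      # Minimal sketch of a least-squares arc-scan wind retrieval.
      import numpy as np

      elev = np.deg2rad(6.0)
      azimuths = np.deg2rad(np.linspace(0.0, 30.0, 7))      # seven beams over 30 degrees

      # synthetic "true" wind and the radial velocities a lidar would observe
      u_true, v_true = 3.0, 8.0
      vr = (u_true * np.sin(azimuths) + v_true * np.cos(azimuths)) * np.cos(elev)
      vr += 0.1 * np.random.default_rng(3).standard_normal(vr.size)   # turbulence/noise

      # linear model: vr = u sin(az) cos(el) + v cos(az) cos(el)
      A = np.column_stack([np.sin(azimuths) * np.cos(elev),
                           np.cos(azimuths) * np.cos(elev)])
      (u_est, v_est), *_ = np.linalg.lstsq(A, vr, rcond=None)

      speed = np.hypot(u_est, v_est)
      bearing = np.degrees(np.arctan2(u_est, v_est)) % 360.0  # direction the flow points toward
      print(f"retrieved speed {speed:.2f} m/s, flow bearing {bearing:.1f} deg")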

  18. A Calibration to Predict the Concentrations of Impurities in Plutonium Oxide by Prompt Gamma Analysis Revision 2

    SciTech Connect (OSTI)

    Narlesky, Joshua Edward; Kelly, Elizabeth J.

    2015-09-10

    This report documents the new PG calibration regression equations. These calibration equations incorporate new data that have become available since revision 1 of A Calibration to Predict the Concentrations of Impurities in Plutonium Oxide by Prompt Gamma Analysis was issued [3]. The calibration equations are based on a weighted least squares (WLS) approach to the regression. The WLS method gives each data point its proper amount of influence over the parameter estimates. This provides two major advantages: more precise parameter estimates and better, more defensible estimates of uncertainties. The WLS approach makes sense both statistically and experimentally because the variances increase with concentration, and there are physical reasons that the higher measurements are less reliable and should be less influential. The new magnesium calibration includes a correction for sodium and separate calibration equations for items with and without chlorine. These additional calibration equations allow for better predictions and smaller uncertainties for sodium in materials with and without chlorine. Chlorine and sodium have separate equations for RICH materials. Again, these equations give better predictions and smaller uncertainties for chlorine and sodium in RICH materials.
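
    A weighted least-squares calibration of the kind described here, with weights proportional to the inverse measurement variance so that noisier high-concentration points carry less influence, can be prototyped as follows. The data, the linear calibration form, and the assumed variance model are synthetic illustrations, not the report's equations.

      # Minimal sketch: WLS straight-line calibration with variance growing with concentration.
      import numpy as np

      rng = np.random.default_rng(4)
      conc = np.linspace(5.0, 100.0, 20)                    # impurity concentration (synthetic)
      sigma = 0.02 * conc + 0.5                             # assumed noise model: grows with concentration
      signal = 1.8 * conc + 3.0 + sigma * rng.standard_normal(conc.size)

      # Scaling each row by 1/sigma turns the WLS problem into ordinary least squares.
      A = np.column_stack([conc, np.ones_like(conc)])
      w = 1.0 / sigma
      (slope, intercept), *_ = np.linalg.lstsq(A * w[:, None], signal * w, rcond=None)

      # Parameter covariance from the weighted normal equations, (A^T W A)^-1 with W = diag(1/sigma^2).
      cov = np.linalg.inv(A.T @ (A * (w**2)[:, None]))
      print(slope, intercept, np.sqrt(np.diag(cov)))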

  19. SIMPLIFIED PHYSICS BASED MODELS - RESEARCH TOPICAL REPORT ON TASK #2

    SciTech Connect (OSTI)

    Mishra, Srikanta; Ganesh, Priya

    2014-10-31

    We present a simplified-physics based approach, in which only the most important physical processes are modeled, to develop and validate simplified predictive models of CO2 sequestration in deep saline formations. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. We use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of the fractional-flow curve, the variance of the layer permeability values, and the nature of the vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Similar correlations are also developed to predict the average pressure within the injection reservoir and the pressure buildup within the caprock.

  20. Pre-test CFD Calculations for a Bypass Flow Standard Problem

    SciTech Connect (OSTI)

    Rich Johnson

    2011-11-01

    The bypass flow in a prismatic high temperature gas-cooled reactor (HTGR) is the flow that occurs between adjacent graphite blocks. Gaps exist between blocks due to variances in their manufacture and installation and because of the expansion and shrinkage of the blocks from heating and irradiation. Although the temperature of fuel compacts and graphite is sensitive to the presence of bypass flow, there is great uncertainty in the level and effects of the bypass flow. The Next Generation Nuclear Plant (NGNP) program at the Idaho National Laboratory has undertaken to produce experimental data on isothermal bypass flow between three adjacent graphite blocks. These data are intended to provide validation for computational fluid dynamic (CFD) analyses of the bypass flow. Such validation data sets are called Standard Problems in the nuclear safety analysis field. Details of the experimental apparatus as well as several pre-test calculations of the bypass flow are provided. Pre-test calculations are useful for examining the nature of the flow and for determining whether there are any problems associated with the flow and its measurement. The apparatus is designed to provide three different gap widths in the vertical direction (the direction of the normal coolant flow) and two gap widths in the horizontal direction. It is expected that the vertical bypass flow will range from laminar to transitional to turbulent flow for the different gap widths that will be available.

  1. A study on dependence of the structural, optical and electrical properties of cadmium lead sulphide thin films on Cd/Pb ratio

    SciTech Connect (OSTI)

    Nair, Sinitha B. E-mail: anithakklm@gmail.com; Abraham, Anitha E-mail: anithakklm@gmail.com; Philip, Rachel Reena; Pradeep, B.; Shripathi, T. E-mail: vganesancsr@gmail.com; Ganesan, V. E-mail: vganesancsr@gmail.com

    2014-10-15

    Cadmium Lead Sulphide thin films with systematic variation in Cd/Pb ratio are prepared at 333 K by CBD, adjusting the reagent molarity, deposition time, and pH. XRD exhibits a crystalline-amorphous transition as Cd% exceeds Pb%. AFM shows agglomeration of crystallites of size {approx}505 nm. EDAX assesses the composition, whereas XPS ascertains the ternary formation, with binding energies of Pb4f{sub 7/2} and 4f{sub 5/2}, Cd3d{sub 5/2} and 3d{sub 3/2}, and S2p at 137.03, 141.606, 404.667, 412.133, and 160.218 eV, respectively. The optical absorption spectra reveal the variance in the direct allowed band gaps, from 1.57 eV to 2.42 eV as the Cd/Pb ratio increases from 0.2 to 2.7, suggesting the possibility of band gap engineering in the n-type films.

  2. Testing of nuclear grade lubricants and their effects on A540 B24 and A193 B7 bolting materials

    SciTech Connect (OSTI)

    Czajkowski, C.J.

    1985-01-01

    An investigation was performed on eleven lubricants commonly used by the nuclear power industry. The investigation included EDS analysis of the lubricants, notched-tensile constant extension rate testing of bolting materials with the lubricants, frictional testing of the lubricants, and weight loss testing of a bonded solid film lubricant. The report generally concludes that there is a significant amount of variance in the mechanical properties of common bolting materials; that MoS/sub 2/ can hydrolyze to form H/sub 2/S at 100/sup 0/C and cause stress corrosion cracking (SCC) of bolting materials; and that the use of copper-containing lubricants can be potentially detrimental to high strength steels in an aqueous environment. Additionally, the testing of various lubricants disclosed that some lubricants contain potentially detrimental elements (e.g. S, Sb) which can promote SCC of the common bolting materials. One of the most significant findings of this report is the observation that both A193 B7 and A540 B24 bolting materials are susceptible to transgranular stress corrosion cracking in demineralized H/sub 2/O at 280/sup 0/C in notched tensile tests.

  3. Characteristics of surface current flow inferred from a global ocean current data set

    SciTech Connect (OSTI)

    Meehl, G.A.

    1982-06-01

    A seasonal global ocean-current data set (OCDS) digitized on a 5/sup 0/ grid from long-term mean shipdrift-derived currents from pilot charts is presented and described. Annual zonal means of v-component currents show subtropical convergence zones which moved closest to the equator during the respective winters in each hemisphere. Net annual v-component surface flow at the equator is northward. Zonally averaged u-component currents have the greatest seasonal variance in the tropics, with the strongest westward currents in the winter hemisphere. An ensemble of ocean currents measured by buoys and current meters compares favorably with OCDS data in spite of widely varying time and space scales. The OCDS currents and directly measured currents are about twice as large as computed geostrophic currents. An analysis of equatorial Pacific currents suggests that dynamic topography and sea-level change indicative of the geostrophic flow component cannot be relied on solely to infer absolute strength of surface currents which include a strong Ekman component. Comparison of OCDS v-component currents and meridional transports predicted by Ekman theory shows agreement in the sign of transports in the midlatitudes and tropics in both hemispheres. Ekman depths required to scale OCDS v-component currents to computed Ekman transports are reasonable at most latitudes with layer depths deepening closer to the equator.

  4. Analytical Chemistry Laboratory Quality Assurance Project Plan for the Transuranic Waste Characterization Program

    SciTech Connect (OSTI)

    Sailer, S.J.

    1996-08-01

    This Quality Assurance Project Plan (QAPJP) specifies the quality of data necessary and the characterization techniques employed at the Idaho National Engineering Laboratory (INEL) to meet the objectives of the Department of Energy (DOE) Waste Isolation Pilot Plant (WIPP) Transuranic Waste Characterization Quality Assurance Program Plan (QAPP) requirements. This QAPJP is written to conform with the requirements and guidelines specified in the QAPP and the associated documents referenced in the QAPP. This QAPJP is one of a set of five interrelated QAPJPs that describe the INEL Transuranic Waste Characterization Program (TWCP). Each of the five facilities participating in the TWCP has a QAPJP that describes the activities applicable to that particular facility. This QAPJP describes the roles and responsibilities of the Idaho Chemical Processing Plant (ICPP) Analytical Chemistry Laboratory (ACL) in the TWCP. Data quality objectives and quality assurance objectives are explained. Sample analysis procedures and associated quality assurance measures are also addressed; these include: sample chain of custody; data validation; usability and reporting; documentation and records; audits and assessments; laboratory QC samples; and instrument testing, inspection, maintenance and calibration. Finally, administrative quality control measures, such as document control, control of nonconformances, variances and QA status reporting are described.

  5. Computational Modeling of the Stability of Crevice Corrosion of Wetted SS316L

    SciTech Connect (OSTI)

    F. Cui; F.J. Presuel-Moreno; R.G. Kelly

    2006-04-17

    The stability of localized corrosion sites on SS 316L exposed to atmospheric conditions was studied computationally. The localized corrosion system was decoupled computationally by considering the wetted cathode and the crevice anode separately and linking them via a constant potential boundary condition at the mouth of the crevice. The potential of interest for stability was the repassivation potential. The limitations on the ability of the cathode that are inherent in the restricted geometry were assessed in terms of the dependence on physical and electrochemical parameters. Physical parameters studied include temperature, electrolyte layer thickness, solution conductivity, and the size of the cathode, as well as the crevice gap for the anode. The current demand of the crevice was determined considering a constant crevice solution composition that simulates the critical crevice solution as described in the literature. An analysis of variance showed that the solution conductivity and the length of the cathode were the most important parameters in determining the total cathodic current capacity of the external surface. A semi-analytical equation was derived for the total current from a restricted geometry held at a constant potential at one end. The equation was able to reproduce all the model computation results, both for the wetted external cathode and the crevice, and gives a good explanation of the effects of physicochemical and kinetic parameters.

  6. The Role of Landscape in the Distribution of Deer-Vehicle Collisions in South Mississippi

    SciTech Connect (OSTI)

    McKee, Jacob J; Cochran, David

    2012-01-01

    Deer-vehicle collisions (DVCs) have a negative impact on the economy, traffic safety, and the general well-being of otherwise healthy deer. To mitigate DVCs, it is imperative to gain a better understanding of factors that play a role in their spatial distribution. Much of the existing research on DVCs in the United States has been inconclusive, pointing to a variety of causal factors that seem more specific to study site and region than indicative of broad patterns. Little DVC research has been conducted in the southern United States, making the region particularly important with regard to this issue. In this study, we evaluate landscape factors that contributed to the distribution of 347 DVCs that occurred in Forrest and Lamar Counties of south Mississippi, from 2006 to 2009. Using nearest-neighbor and discriminant analysis, we demonstrate that DVCs in south Mississippi are not random spatial phenomena. We also developed a classification model that identified seven landscape metrics, explained 100% of the variance, and could distinguish DVCs from control sites with an accuracy of 81.3%.

  7. Bryan Mound SPR cavern 113 remedial leach stage 1 analysis.

    SciTech Connect (OSTI)

    Rudeen, David Keith; Weber, Paula D.; Lord, David L.

    2013-08-01

    The U.S. Strategic Petroleum Reserve implemented the first stage of a leach plan in 2011-2012 to expand storage volume in the existing Bryan Mound 113 cavern from a starting volume of 7.4 million barrels (MMB) to its design volume of 11.2 MMB. The first stage was terminated several months earlier than expected in August 2012, as the upper section of the leach zone expanded outward more quickly than designed. The oil-brine interface was then re-positioned with the intent to resume leaching in the second stage configuration. This report evaluates the as-built configuration of the cavern at the end of the first stage and recommends changes to the second stage plan to accommodate the variance between the first stage plan and the as-built cavern. SANSMIC leach code simulations are presented and compared with sonar surveys in order to aid in the analysis and offer projections of likely outcomes from the revised plan for the second stage leach.

  8. Daily diaries of respiratory symptoms and air pollution: Methodological issues and results

    SciTech Connect (OSTI)

    Schwartz, J.; Wypij, D.; Dockery, D.; Ware, J.; Spengler, J.; Ferris, B. Jr.; Zeger, S.

    1991-01-01

    Daily diaries of respiratory symptoms are a powerful technique for detecting acute effects of air pollution exposure. While conceptually simple, these diary studies can be difficult to analyze. The daily symptom rates are highly correlated, even after adjustment for covariates, and this lack of independence must be considered in the analysis. Possible approaches include the use of incidence instead of prevalence rates and autoregressive models. Heterogeneity among subjects also induces dependencies in the data. These can be addressed by stratification and by two-stage models such as those developed by Korn and Whittemore. These approaches have been applied to two data sets: a cohort of school children participating in the Harvard Six Cities Study and a cohort of student nurses in Los Angeles. Both data sets provide evidence of autocorrelation and heterogeneity. Controlling for autocorrelation corrects the precision estimates, and because diary data are usually positively autocorrelated, this leads to larger variance estimates. Controlling for heterogeneity among subjects appears to increase the effect sizes for air pollution exposure. Preliminary results indicate associations between sulfur dioxide and cough incidence in children and between nitrogen dioxide and phlegm incidence in student nurses.

  9. Optimization of the pyrolysis process of empty fruit bunch (EFB) in a fixed-bed reactor through a central composite design (CCD)

    SciTech Connect (OSTI)

    Mohamed, Alina Rahayu; Hamzah, Zainab; Daud, Mohamed Zulkali Mohamed

    2014-07-10

    The production of crude palm oil from the processing of palm fresh fruit bunches in the palm oil mills in Malaysia has resulted in a huge quantity of empty fruit bunch (EFB) accumulating. The EFB was used as a feedstock in the pyrolysis process using a fixed-bed reactor in the present study. The optimization of process parameters such as pyrolysis temperature (factor A), biomass particle size (factor B), and holding time (factor C) was investigated through Central Composite Design (CCD) using Stat-Ease Design Expert software version 7, with bio-oil yield considered as the response. Twenty experimental runs were conducted. The results were completely analyzed by Analysis of Variance (ANOVA). The model was statistically significant. All factors studied were significant with p-values < 0.05. The pyrolysis temperature (factor A) was considered the most significant parameter because its F-value of 116.29 was the highest. The value of R{sup 2} was 0.9564, which indicated that the selected factors and their levels showed high correlation to the production of bio-oil from the EFB pyrolysis process. A quadratic model equation was developed and employed to predict the highest theoretical bio-oil yield. The maximum bio-oil yield of 46.2% was achieved at a pyrolysis temperature of 442.15 °C using an EFB particle size of 866 {mu}m, which corresponded to the EFB particle size range of 710-1000 {mu}m, and a holding time of 483 seconds.

  10. Parameters affecting resin-anchored cable bolt performance: Results of in situ evaluations

    SciTech Connect (OSTI)

    Zelanko, J.C.; Mucho, T.P.; Compton, C.S.; Long, L.E.; Bailey, P.E.

    1995-11-01

    Cable bolt support techniques, including hardware and anchorage systems, continue to evolve to meet US mining requirements. For cable support systems to be successfully implemented into new ground control areas, the mechanics of this support and the potential range of performance need to be better understood. To contribute to this understanding, a series of 36 pull tests were performed on 10 ft long cable bolts using various combinations of hole diameters, resin formulations, anchor types, and with and without resin dams. These tests provided insight as to the influence of these four parameters on cable system performance. Performance was assessed in terms of support capacity (maximum load attained in a pull test), system stiffness (assessed from two intervals of load-deformation), and the general load-deformation response. Three characteristic load-deformation responses were observed. An Analysis of Variance identified a number of main effects and interactions of significance to support capacity and stiffness. The factorial experiment performed in this study provides insight into the effects of several design parameters associated with resin-anchored cable bolts.

  11. Closing Rocky Flats by 2006

    SciTech Connect (OSTI)

    Tuor, N. R.; Schubert, A. L.

    2002-02-26

    Safely accelerating the closure of Rocky Flats to 2006 is a goal shared by many: the State of Colorado, the communities surrounding the site, the U.S. Congress, the Department of Energy, Kaiser-Hill and its team of subcontractors, the site's employees, and taxpayers across the country. On June 30, 2000, Kaiser-Hill (KH) submitted to the Department of Energy (DOE), KH's plan to achieve closure of Rocky Flats by December 15, 2006, for a remaining cost of $3.96 billion (February 1, 2000, to December 15, 2006). The Closure Project Baseline (CPB) is the detailed project plan for accomplishing this ambitious closure goal. This paper will provide a status report on the progress being made toward the closure goal. This paper will: provide a summary of the closure contract completion criteria; give the current cost and schedule variance of the project and the status of key activities; detail important accomplishments of the past year; and discuss the challenges ahead.

  12. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    SciTech Connect (OSTI)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.

  13. A stochastic approach to quantifying the blur with uncertainty estimation for high-energy X-ray imaging systems

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Fowler, Michael J.; Howard, Marylesa; Luttman, Aaron; Mitchell, Stephen E.; Webb, Timothy J.

    2015-06-03

    One of the primary causes of blur in a high-energy X-ray imaging system is the shape and extent of the radiation source, or ‘spot’. It is important to be able to quantify the size of the spot as it provides a lower bound on the recoverable resolution for a radiograph, and penumbral imaging methods – which involve the analysis of blur caused by a structured aperture – can be used to obtain the spot’s spatial profile. We present a Bayesian approach for estimating the spot shape that, unlike variational methods, is robust to the initial choice of parameters. The posterior is obtained from a normal likelihood, which was constructed from a weighted least squares approximation to a Poisson noise model, and prior assumptions that enforce both smoothness and non-negativity constraints. A Markov chain Monte Carlo algorithm is used to obtain samples from the target posterior, and the reconstruction and uncertainty estimates are the computed mean and variance of the samples, respectively. Lastly, synthetic data-sets are used to demonstrate accurate reconstruction, while real data taken with high-energy X-ray imaging systems are used to demonstrate applicability and feasibility.
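
    The sampling step can be demonstrated in miniature: a random-walk Metropolis chain for a single non-negative blur-width parameter of a Gaussian spot, with the reconstruction and uncertainty reported as the mean and variance of the posterior samples. This toy sketch is not the paper's smoothness-constrained, high-dimensional model; the spot shape, noise level, and proposal width are illustrative assumptions.

      # Minimal sketch: random-walk Metropolis for one non-negative width parameter.
      import numpy as np

      rng = np.random.default_rng(5)
      x = np.linspace(-1.0, 1.0, 200)
      data = np.exp(-0.5 * (x / 0.15) ** 2) + 0.05 * rng.standard_normal(x.size)

      def log_post(width, noise_sd=0.05):
          if width <= 0.0:                           # non-negativity prior
              return -np.inf
          model = np.exp(-0.5 * (x / width) ** 2)
          return -0.5 * np.sum(((data - model) / noise_sd) ** 2)

      samples, w = [], 0.1
      lp = log_post(w)
      for _ in range(20_000):
          prop = w + 0.01 * rng.standard_normal()    # random-walk proposal
          lp_prop = log_post(prop)
          if rng.uniform() < np.exp(min(0.0, lp_prop - lp)):
              w, lp = prop, lp_prop
          samples.append(w)

      post = np.array(samples[5_000:])               # discard burn-in
      print(f"posterior mean {post.mean():.3f}, variance {post.var():.2e}")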

  14. Time lagged ordinal partition networks for capturing dynamics of continuous dynamical systems

    SciTech Connect (OSTI)

    McCullough, Michael; Iu, Herbert Ho-Ching; Small, Michael; Stemler, Thomas

    2015-05-15

    We investigate a generalised version of the recently proposed ordinal partition time series to network transformation algorithm. First, we introduce a fixed time lag for the elements of each partition that is selected using techniques from traditional time delay embedding. The resulting partitions define regions in the embedding phase space that are mapped to nodes in the network space. Edges are allocated between nodes based on temporal succession, thus creating a Markov chain representation of the time series. We then apply this new transformation algorithm to time series generated by the Rössler system and find that periodic dynamics translate to ring structures whereas chaotic time series translate to band or tube-like structures, thereby indicating that our algorithm generates networks whose structure is sensitive to system dynamics. Furthermore, we demonstrate that simple network measures including the mean out degree and variance of out degrees can track changes in the dynamical behaviour in a manner comparable to the largest Lyapunov exponent. We also apply the same analysis to experimental time series generated by a diode resonator circuit and show that the network size, mean shortest path length, and network diameter are highly sensitive to the interior crisis captured in this particular data set.
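
    The transformation itself is short to write down: embed the series with a fixed lag, replace each delay vector by its ordinal pattern (the permutation ordering its elements), and connect patterns that succeed one another in time. The sketch below is an illustrative reimplementation rather than the authors' code, with an arbitrary embedding dimension and lag and simple sine-based test series in place of the Rössler system.

      # Minimal sketch of a time-lagged ordinal partition network and its out-degree statistics.
      import numpy as np
      from collections import defaultdict

      def ordinal_out_degrees(series, dim=4, lag=5):
          # map each delay vector to the permutation (ordinal pattern) of its ranks
          patterns = []
          for i in range(len(series) - (dim - 1) * lag):
              window = series[i:i + dim * lag:lag]
              patterns.append(tuple(np.argsort(window)))
          # edges by temporal succession between consecutive patterns
          edges = defaultdict(set)
          for a, b in zip(patterns[:-1], patterns[1:]):
              edges[a].add(b)
          return np.array([len(v) for v in edges.values()])

      rng = np.random.default_rng(6)
      t = np.arange(0, 500, 0.1)
      periodic = np.sin(t)                                     # stand-in for a periodic regime
      noisy = np.sin(t) + 0.5 * rng.standard_normal(t.size)    # stand-in for irregular dynamics

      for name, series in [("periodic", periodic), ("noisy", noisy)]:
          deg = ordinal_out_degrees(series)
          print(name, "mean out-degree", deg.mean().round(2), "variance", deg.var().round(2))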

  15. Comparison of MCNP6 and experimental results for neutron counts, Rossi-{alpha}, and Feynman-{alpha} distributions

    SciTech Connect (OSTI)

    Talamo, A.; Gohar, Y.; Sadovich, S.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2013-07-01

    MCNP6, the general-purpose Monte Carlo N-Particle code, has the capability to perform time-dependent calculations by tracking the time interval between successive events of the neutron random walk. In fixed-source calculations for a subcritical assembly, the zero time value is assigned at the moment the neutron is emitted by the external neutron source. The PTRAC and F8 cards of MCNP make it possible to tally the time when a neutron is captured by {sup 3}He(n, p) reactions in the neutron detector. From this information, it is possible to build three different time distributions: neutron counts, Rossi-{alpha}, and Feynman-{alpha}. The neutron counts time distribution represents the number of neutrons captured as a function of time. The Rossi-{alpha} distribution represents the number of neutron pairs captured as a function of the time interval between two capture events. The Feynman-{alpha} distribution represents the variance-to-mean ratio, minus one, of the neutron counts array as a function of a fixed time interval. The MCNP6 results for these three time distributions have been compared with the experimental data of the YALINA Thermal facility and have been found to be in quite good agreement. (authors)
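
    The Feynman-{alpha} statistic described here, the variance-to-mean ratio of detector counts minus one as a function of gate width, is straightforward to compute once capture times are available. The sketch below substitutes uniformly random synthetic detection times (for which the statistic should hover near zero) for PTRAC output; the time span, event count, and gate widths are illustrative assumptions.

      # Minimal sketch of a Feynman-alpha (variance-to-mean minus one) curve versus gate width.
      import numpy as np

      rng = np.random.default_rng(7)
      # placeholder capture times in seconds; a real analysis would read these from PTRAC output
      times = np.sort(rng.uniform(0.0, 10.0, size=50_000))

      def feynman_y(times, gate):
          edges = np.arange(times[0], times[-1], gate)     # counting gates of fixed width
          counts, _ = np.histogram(times, bins=edges)
          return counts.var() / counts.mean() - 1.0

      for gate in np.logspace(-4, -1, 10):
          print(f"gate {gate:.1e} s  Y = {feynman_y(times, gate):+.4f}")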

  16. CHARACTERIZATION OF TRANSITIONS IN THE SOLAR WIND PARAMETERS

    SciTech Connect (OSTI)

    Perri, S.; Balogh, A. E-mail: a.balogh@imperial.ac.u

    2010-02-20

    The distinction between fast and slow solar wind streams and the dynamically evolved interaction regions is reflected in the characteristic fluctuations of both the solar wind and the embedded magnetic field. High-resolution magnetic field data from the Ulysses spacecraft have been analyzed. The observations show rapid variations in the magnetic field components and in the magnetic field strength, suggesting a structured nature of the solar wind at small scales. The typical sizes of fluctuations cover a broad range. If translated to the solar surface, the scales span from the size of granules ({approx}10{sup 3} km) and supergranules ({approx}10{sup 4} km) on the Sun down to {approx}10{sup 2} km and less. The properties of the short time structures change in the different types of solar wind. While fluctuations in fast streams are more homogeneous, slow streams present a bursty behavior in the magnetic field variances, and the regions of transition are characterized by high levels of power in narrow structures around the transitions. The probability density functions of the magnetic field increments at several scales reveal a higher level of intermittency in the mixed streams, which is related to the presence of well localized features. It is concluded that, apart from the differences in the nature of fluctuations in flows of different coronal origin, there is a small-scale structuring that depends on the origin of streams themselves but it is also related to a bursty generation of the fluctuations.

  17. RELAXATION OF WARPED DISKS: THE CASE OF PURE HYDRODYNAMICS

    SciTech Connect (OSTI)

    Sorathia, Kareem A.; Krolik, Julian H.; Hawley, John F.

    2013-05-10

    Orbiting disks may exhibit bends due to a misalignment between the angular momentum of the inner and outer regions of the disk. We begin a systematic simulational inquiry into the physics of warped disks with the simplest case: the relaxation of an unforced warp under pure fluid dynamics, i.e., with no internal stresses other than Reynolds stress. We focus on the nonlinear regime in which the bend rate is large compared to the disk aspect ratio. When warps are nonlinear, strong radial pressure gradients drive transonic radial motions along the disk's top and bottom surfaces that efficiently mix angular momentum. The resulting nonlinear decay rate of the warp increases with the warp rate and the warp width, but, at least in the parameter regime studied here, is independent of the sound speed. The characteristic magnitude of the associated angular momentum fluxes likewise increases with both the local warp rate and the radial range over which the warp extends; it also increases with increasing sound speed, but more slowly than linearly. The angular momentum fluxes respond to the warp rate after a delay that scales with the square root of the time for sound waves to cross the radial extent of the warp. These behaviors are at variance with a number of the assumptions commonly used in analytic models to describe linear warp dynamics.

  18. Sensitivity testing and analysis

    SciTech Connect (OSTI)

    Neyer, B.T.

    1991-01-01

    New methods of sensitivity testing and analysis are proposed. The new test method utilizes Maximum Likelihood Estimates to pick the next test level in order to maximize knowledge of both the mean, {mu}, and the standard deviation, {sigma}, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both {mu} and {sigma} than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for {mu}, {sigma}, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.
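
    The maximum-likelihood machinery underlying both the test design and the analysis can be sketched for the basic case: binary go/no-go outcomes at chosen stimulus levels, with a latent normal threshold whose mean and standard deviation are estimated by maximizing a probit likelihood. The sketch below uses synthetic data and SciPy's general-purpose optimizer; it is a simplified illustration, not the proposed test procedure itself.

      # Minimal sketch: probit maximum-likelihood estimates of mu and sigma from go/no-go data.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(8)
      levels = rng.uniform(0.0, 10.0, size=60)                           # stimulus levels tested
      responses = (levels + rng.normal(0.0, 1.5, 60) > 5.0).astype(int)  # 1 = "go"

      def neg_log_lik(params):
          mu, log_sigma = params
          p = norm.cdf((levels - mu) / np.exp(log_sigma))                # probability of "go"
          p = np.clip(p, 1e-12, 1 - 1e-12)
          return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

      fit = minimize(neg_log_lik, x0=[np.median(levels), 0.0], method="Nelder-Mead")
      mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
      print(f"mu ~ {mu_hat:.2f}, sigma ~ {sigma_hat:.2f}")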

  19. SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry

    SciTech Connect (OSTI)

    Kotoku, J; Nakabayashi, S; Kumagai, S; Ishibashi, T; Kobayashi, T; Haga, A; Saotome, N; Arai, N

    2014-06-01

    Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple view stereo technique, which is known from 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction was built by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error by use of the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin was reconstructed well, although the reconstruction is too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128).
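
    The feature-matching and fundamental-matrix step of such a pipeline can be sketched with OpenCV. The file names below are hypothetical placeholders, the ratio-test threshold and RANSAC settings are illustrative choices, and this is not the authors' implementation; it only shows the eight-point-style RANSAC estimation that precedes bundle adjustment and dense reconstruction.

      # Minimal sketch, assuming OpenCV (>= 4.4, with SIFT) is installed.
      import cv2
      import numpy as np

      img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical image files
      img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      # brute-force matching with Lowe's ratio test
      matcher = cv2.BFMatcher()
      good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

      pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
      pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

      # fundamental matrix with RANSAC; the mask flags inlier correspondences
      F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
      print("inlier correspondences:", int(mask.sum()), "of", len(good))
      print(F)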

  20. SOARCA Peach Bottom Atomic Power Station Long-Term Station Blackout Uncertainty Analysis: Knowledge Advancement.

    SciTech Connect (OSTI)

    Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.; Ross, Kyle; Cardoni, Jeffrey N; Kalinich, Donald A.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie; Ghosh, S. Tina

    2014-02-01

    This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the Safety Relief Valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions. (auth)

  1. Evaluation of bulk paint worker exposure to solvents at household hazardous waste collection events

    SciTech Connect (OSTI)

    Cameron, M.

    1995-09-01

    In fiscal year 93/94, over 250 governmental agencies were involved in the collection of household hazardous wastes in the State of California. During that time, over 3,237,000 lbs. of oil based paint were collected in 9,640 drums. Most of this was in lab pack drums, which can only hold up to 20 one gallon cans. Cost for disposal of such drums is approximately $1000. In contrast, during the same year, 1,228,000 lbs. of flammable liquid were collected in 2,098 drums in bulk form. Incineration of bulked flammable liquids is approximately $135 per drum. Clearly, it is most cost effective to bulk flammable liquids at household hazardous waste events. Currently, this is the procedure used at most Temporary Household Hazardous Waste Collection Facilities (THHWCFs). THHWCFs are regulated by the Department of Toxic Substances Control (DTSC) under the new Permit-by-Rule Regulations. These regulations specify certain requirements regarding traffic flow, emergency response notifications, and prevention of exposure to the public. The regulations require that THHWCF operators bulk wastes only when the public is not present [22 CCR, section 67450.4 (e) (2) (A)]. Santa Clara County Environmental Health Department sponsors local THHWCFs and does its own bulking. In order to save time and money, a variance from the regulation was requested and an employee monitoring program was initiated to determine actual exposure to workers. Results are presented.

  2. MAVTgsa: An R Package for Gene Set (Enrichment) Analysis

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Chien, Chih-Yi; Chang, Ching-Wei; Tsai, Chen-An; Chen, James J.

    2014-01-01

    Gene set analysis methods aim to determine whether an a priori defined set of genes shows statistically significant difference in expression on either categorical or continuous outcomes. Although many methods for gene set analysis have been proposed, a systematic analysis tool for identification of different types of gene set significance modules has not been developed previously. This work presents an R package, called MAVTgsa, which includes three different methods for integrated gene set enrichment analysis. (1) The one-sided OLS (ordinary least squares) test detects coordinated changes of genes in a gene set in one direction, either up- or downregulation. (2) The two-sided MANOVA (multivariate analysis of variance) detects changes in both up- and downregulation for studying two or more experimental conditions. (3) A random forests-based procedure identifies gene sets that can accurately predict samples from different experimental conditions or that are associated with continuous phenotypes. MAVTgsa computes the P values and FDR (false discovery rate) q-values for all gene sets in the study. Furthermore, MAVTgsa provides several visualization outputs to support and interpret the enrichment results. This package is available online.

  3. PHOTOSPHERIC EMISSION FROM STRATIFIED JETS

    SciTech Connect (OSTI)

    Ito, Hirotaka; Nagataki, Shigehiro; Ono, Masaomi; Lee, Shiu-Hang; Mao, Jirong; Yamada, Shoichi; Pe'er, Asaf; Mizuta, Akira; Harikae, Seiji

    2013-11-01

    We explore photospheric emissions from stratified two-component jets, wherein a highly relativistic spine outflow is surrounded by a wider and less relativistic sheath outflow. Thermal photons are injected in regions of high optical depth and propagated until the photons escape at the photosphere. Because of the presence of shear in velocity (Lorentz factor) at the boundary of the spine and sheath region, a fraction of the injected photons are accelerated using a Fermi-like acceleration mechanism such that a high-energy power-law tail is formed in the resultant spectrum. We show, in particular, that if a velocity shear with a considerable variance in the bulk Lorentz factor is present, the high-energy part of observed gamma-ray bursts (GRBs) photon spectrum can be explained by this photon acceleration mechanism. We also show that the accelerated photons might also account for the origin of the extra-hard power-law component above the bump of the thermal-like peak seen in some peculiar bursts (e.g., GRB 090510, 090902B, 090926A). We demonstrate that time-integrated spectra can also reproduce the low-energy spectrum of GRBs consistently using a multi-temperature effect when time evolution of the outflow is considered. Last, we show that the empirical E{sub p}-L{sub p} relation can be explained by differences in the outflow properties of individual sources.

  4. Spoil handling and reclamation costs at a contour surface mine in steep slope Appalachian topography

    SciTech Connect (OSTI)

    Zipper, C.E.; Hall, A.T.; Daniels, W.L.

    1985-12-09

    Accurate overburden handling cost estimation methods are essential to effective pre-mining planning for post-mining landforms and land uses. With the aim of developing such methods, the authors have been monitoring costs at a contour surface mine in Wise County, Virginia since January 1, 1984. Early in the monitoring period, the land was being returned to its Approximate Original Contour (AOC) in a manner common to the Appalachian region since implementation of the Surface Mining Control and Reclamation Act of 1977 (SMCRA). More recently, mining has been conducted under an experimental variance from the AOC provisions of SMCRA which allowed a near-level bench to be constructed across the upper surface of two mined points and an intervening filled hollow. All mining operations are being recorded by location. The cost of spoil movement is calculated for each block of coal mined between January 1, 1984, and August 1, 1985. Per cubic yard spoil handling and reclamation costs are compared by mining block. The average cost of spoil handling was $1.90 per bank cubic yard; however, these costs varied widely between blocks. The reasons for those variations included the landscape positions of the mining blocks and spoil handling practices. The average reclamation cost ranged from $0.08 per bank cubic yard for spoil placed in the near-level bench on the mined point to $0.20 for spoil placed in the hollow fill. 2 references, 4 figures.

  5. A Semi-Preemptive Garbage Collector for Solid State Drives

    SciTech Connect (OSTI)

    Lee, Junghee; Kim, Youngjae; Shipman, Galen M; Oral, H Sarp; Wang, Feiyi; Kim, Jongman

    2011-01-01

    NAND flash memory is a preferred storage medium for various platforms ranging from embedded systems to enterprise-scale systems. Flash devices do not have any mechanical moving parts and provide low-latency access. They also require less power compared to rotating media. Unlike hard disks, flash devices use out-of-place updates, and they require a garbage collection (GC) process to reclaim invalid pages and create free blocks. This GC process is a major cause of performance degradation when running concurrently with other I/O operations, as internal bandwidth is consumed to reclaim these invalid pages. The invocation of the GC process is generally governed by a low watermark on free blocks and other internal device metrics that different workloads meet at different intervals. This results in I/O performance that is highly dependent on workload characteristics. In this paper, we examine the GC process and propose a semi-preemptive GC scheme that can preempt on-going GC processing and service pending I/O requests in the queue. Moreover, we further enhance flash performance by pipelining internal GC operations and merging them with pending I/O requests whenever possible. Our experimental evaluation of this semi-preemptive GC scheme with realistic workloads demonstrates both improved performance and reduced performance variability. Write-dominant workloads show up to a 66.56% improvement in average response time with an 83.30% reduced variance in response time compared to the non-preemptive GC scheme.
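    A conceptual sketch of the semi-preemption idea follows: between individual page moves the controller drains any pending host I/O before resuming GC. The function and queue names are hypothetical placeholders, not the paper's implementation, and the real firmware would also handle request arrival asynchronously.

```python
from collections import deque

# Hypothetical illustration of semi-preemptive garbage collection:
# pending host I/O is serviced at preemption points between page copies.
io_queue = deque()              # pending host I/O requests (filled elsewhere)

def service(request): ...       # placeholder: handle one host I/O request
def copy_valid_page(page): ...  # placeholder: relocate one valid flash page

def semi_preemptive_gc(victim_block_pages):
    for page in victim_block_pages:
        # Preemption point: serve any I/O that arrived while GC was running
        while io_queue:
            service(io_queue.popleft())
        copy_valid_page(page)   # then continue reclaiming the victim block
```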

  6. Daytime turbulent exchange between the Amazon forest and the atmosphere

    SciTech Connect (OSTI)

    Fitzjarrald, D.R.; Moore, K.E.; Cabral, M.R.; Scolar, J.; Manzi, A.O.; de Abreau Sa, L.D.

    1990-09-20

    Detailed observations of turbulence just above and below the crown of the Amazon rain forest during the wet season are presented. The forest canopy is shown to remove high-frequency turbulent fluctuations while passing lower frequencies. Filter characteristics of turbulent transfer into the Amazon rain forest canopy are quantified. In spite of the ubiquitous presence of clouds and frequent rain during this season, the average horizontal wind speed spectrum and the relationship between the horizontal wind speed and its standard deviation are well described by dry convective boundary layer similarity hypotheses originally found to apply in flat terrain. Diurnal changes in the sign of the vertical velocity skewness observed above and inside the canopy are shown to be plausibly explained by considering the skewness budget. Simple empirical formulas that relate observed turbulent heat fluxes to horizontal wind speed and variance are presented. Changes in the amount of turbulent coupling between the forest and the boundary layer associated with deep convective clouds are presented in three case studies. Even small raining clouds are capable of evacuating the canopy of substances normally trapped by persistent static stability near the forest floor. Recovery from these events can take more than an hour, even during midday.

  7. Detection and Production of Methane Hydrate

    SciTech Connect (OSTI)

    George Hirasaki; Walter Chapman; Gerald Dickens; Colin Zelt; Brandon Dugan; Kishore Mohanty; Priyank Jaiswal

    2011-12-31

    This project seeks to understand regional differences in gas hydrate systems from the perspective of an energy resource, geohazard, and long-term climate influence. Specifically, the effort will: (1) collect data and conceptual models that target causes of gas hydrate variance, (2) construct numerical models that explain and predict regional-scale gas hydrate differences in 2-dimensions with minimal 'free parameters', (3) simulate hydrocarbon production from various gas hydrate systems to establish promising resource characteristics, (4) perturb different gas hydrate systems to assess potential impacts of hot fluids on seafloor stability and well stability, and (5) develop geophysical approaches that enable remote quantification of gas hydrate heterogeneities so that they can be characterized with minimal costly drilling. Our integrated program takes advantage of the fact that we have a close working team comprised of experts in distinct disciplines. The expected outcomes of this project are improved exploration and production technology for production of natural gas from methane hydrates and improved safety through understanding of seafloor and well bore stability in the presence of hydrates. The scope of this project was to more fully characterize, understand, and appreciate fundamental differences in the amount and distribution of gas hydrate and how this would affect the production potential of a hydrate accumulation in the marine environment. The effort combines existing information from locations in the ocean that are dominated by low permeability sediments with small amounts of high permeability sediments, one permafrost location where extensive hydrates exist in reservoir quality rocks, and other locations deemed by mutual agreement of DOE and Rice to be appropriate. The initial ocean locations were Blake Ridge, Hydrate Ridge, Peru Margin and GOM. The permafrost location was Mallik. Although the ultimate goal of the project was to understand processes that control production potential of hydrates in marine settings, Mallik was included because of the extensive data collected in a producible hydrate accumulation. To date, such a location had not been studied in the oceanic environment. The project worked closely with ongoing projects (e.g. GOM JIP and offshore India) that are actively investigating potentially economic hydrate accumulations in marine settings. The overall approach was fivefold: (1) collect key data concerning hydrocarbon fluxes which are currently missing at all locations to be included in the study, (2) use this and existing data to build numerical models that can explain gas hydrate variance at all four locations, (3) simulate how natural gas could be produced from each location with different production strategies, (4) collect new sediment property data at these locations that are required for constraining fluxes, production simulations and assessing sediment stability, and (5) develop a method for remotely quantifying heterogeneities in gas hydrate and free gas distributions. While we generally restricted our efforts to the locations where key parameters can be measured or constrained, our ultimate aim was to make our efforts universally applicable to any hydrate accumulation.

  8. Simulating a Nationally Representative Housing Sample Using EnergyPlus

    SciTech Connect (OSTI)

    Hopkins, Asa S.; Lekov, Alex; Lutz, James; Rosenquist, Gregory; Gu, Lixing

    2011-03-04

    This report presents a new simulation tool under development at Lawrence Berkeley National Laboratory (LBNL). This tool uses EnergyPlus to simulate each single-family home in the Residential Energy Consumption Survey (RECS), and generates a calibrated, nationally representative set of simulated homes whose energy use is statistically indistinguishable from the energy use of the single-family homes in the RECS sample. This research builds upon earlier work by Ritchard et al. for the Gas Research Institute and Huang et al. for LBNL. A representative national sample allows us to evaluate the variance in energy use between individual homes, regions, or other subsamples; using this tool, we can also evaluate how that variance affects the impacts of potential policies. The RECS contains information regarding the construction and location of each sampled home, as well as its appliances and other energy-using equipment. We combined this data with the home simulation prototypes developed by Huang et al. to simulate homes that match the RECS sample wherever possible. Where data was not available, we used distributions, calibrated using the RECS energy use data. Each home was assigned a best-fit location for the purposes of weather and some construction characteristics. RECS provides some detail on the type and age of heating, ventilation, and air-conditioning (HVAC) equipment in each home; we developed EnergyPlus models capable of reproducing the variety of technologies and efficiencies represented in the national sample. This includes electric, gas, and oil furnaces, central and window air conditioners, central heat pumps, and baseboard heaters. We also developed a model of duct system performance, based on in-home measurements, and integrated this with fan performance to capture the energy use of single- and variable-speed furnace fans, as well as the interaction of duct and fan performance with the efficiency of heating and cooling equipment. Comparison with RECS revealed that EnergyPlus did not capture the heating-side behavior of heat pumps particularly accurately, and that our simple oil furnace and boiler models needed significant recalibration to fit with RECS. Simulating the full RECS sample on a single computer would take many hours, so we used the 'cloud computing' services provided by Amazon.com to simulate dozens of homes at once. This enabled us to simulate the full RECS sample, including multiple versions of each home to evaluate the impact of marginal changes, in less than 3 hours. Once the tool was calibrated, we were able to address several policy questions. We made a simple measurement of the heat replacement effect and showed that the net effect of heat replacement on primary energy use is likely to be less than 5%, relative to appliance-only measures of energy savings. Fuel switching could be significant, however. We also evaluated the national and regional impacts of a variety of 'overnight' changes in building characteristics or occupant behavior, including lighting, home insulation and sealing, HVAC system efficiency, and thermostat settings. For example, our model shows that the combination of increased home insulation and better sealed building shells could reduce residential natural gas use by 34.5% and electricity use by 6.5%, and a 1 degree rise in summer thermostat settings could save 2.1% of home electricity use. These results vary by region, and we present results for each U.S. Census division. We conclude by offering proposals for future work to improve the tool. 
Some proposed future work includes: comparing the simulated energy use data with the monthly RECS bill data; better capturing the variation in behavior between households, especially as it relates to occupancy and schedules; improving the characterization of recent construction and its regional variation; and extending the general framework of this simulation tool to capture multifamily housing units, such as apartment buildings.

  9. Atmospheric Radiation Measurement Program Climate Research Facility Operations Quarterly Report. October 1 - December 31, 2010.

    SciTech Connect (OSTI)

    Sisterson, D. L.

    2011-02-01

    Individual raw datastreams from instrumentation at the Atmospheric Radiation Measurement (ARM) Climate Research Facility fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near-real time. Raw and processed data are then sent approximately daily to the ARM Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of processed data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual datastream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the first quarter of FY2011 for the Southern Great Plains (SGP) site is 2097.60 hours (0.95 x 2208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1987.20 hours (0.90 x 2208) and for the Tropical Western Pacific (TWP) locale is 1876.80 hours (0.85 x 2208). The first ARM Mobile Facility (AMF1) deployment in Graciosa Island, the Azores, Portugal, continued through this quarter, so the OPSMAX time this quarter is 2097.60 hours (0.95 x 2208). The second ARM Mobile Facility (AMF2) began deployment this quarter to Steamboat Springs, Colorado. The experiment officially began November 15, but most of the instruments were up and running by November 1. Therefore, the OPSMAX time for the AMF2 was 1390.80 hours (0.95 x 1464 hours) for November and December (61 days). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or datastream. Data availability reported here refers to the average of the individual, continuous datastreams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. Summary. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period October 1-December 31, 2010, for the fixed sites. Because the AMFs operate episodically, the AMF statistics are reported separately and not included in the aggregate average with the fixed sites. This first quarter comprises a total of 2,208 possible hours for the fixed sites and the AMF1 and 1,464 possible hours for the AMF2. The average of the fixed sites exceeded our goal this quarter. The AMF1 has essentially completed its mission and is shutting down to pack up for its next deployment to India. Although all the raw data from the operational instruments are in the Archive for the AMF2, only the processed data are tabulated.
Approximately half of the AMF2 instruments had their data fully processed, resulting in 46% of all possible data being made available to users through the Archive for this first quarter. Typically, raw data are not made available to users unless specifically requested.
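    The operating metrics quoted in these quarterly reports follow directly from the formula stated above; a short worked example using the SGP numbers from this report is shown below (the ACTUAL value is a made-up placeholder, used only to show how VARIANCE is formed).

```python
# DOE time-based operating metrics as defined in the report:
# OPSMAX is the uptime goal; VARIANCE = 1 - (ACTUAL / OPSMAX).
quarter_hours = 2208                 # 92 days x 24 hours (Oct-Dec quarter)
opsmax_sgp = 0.95 * quarter_hours    # 2097.60 hours for the SGP site

actual = 2000.0                      # hypothetical actual hours of operation
variance = 1 - actual / opsmax_sgp
print(f"OPSMAX = {opsmax_sgp:.2f} h, VARIANCE = {variance:.3f}")
```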

  10. Atmospheric Radiation Measurement program climate research facility operations quarterly report July 1 - Sep. 30, 2009.

    SciTech Connect (OSTI)

    Sisterson, D. L.

    2009-10-15

    Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near-real time. Raw and processed data are then sent approximately daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the fourth quarter of FY 2009 for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 x 2,208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 x 2,208) and for the Tropical Western Pacific (TWP) locale is 1,876.80 hours (0.85 x 2,208). The ARM Mobile Facility (AMF) was officially operational May 1 in Graciosa Island, the Azores, Portugal, so the OPSMAX time this quarter is 2,097.60 hours (0.95 x 2,208). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive result from downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period July 1 - September 30, 2009, for the fixed sites. Because the AMF operates episodically, the AMF statistics are reported separately and not included in the aggregate average with the fixed sites. The fourth quarter comprises a total of 2,208 hours for the fixed and mobile sites. The average of the fixed sites well exceeded our goal this quarter. The AMF data statistic requires explanation. Since the AMF radar data ingest software is being modified, the data are being stored in the DMF for data processing. Hence, the data are not at the Archive; they are anticipated to become available by the next report.

  11. Atmospheric Radiation Measurement program climate research facility operations quarterly report January 1 - March 31, 2009.

    SciTech Connect (OSTI)

    Sisterson, D. L.

    2009-04-23

    Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the second quarter of FY 2009 for the Southern Great Plains (SGP) site is 2,052.00 hours (0.95 x 2,160 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,944.00 hours (0.90 x 2,160), and for the Tropical Western Pacific (TWP) locale is 1,836.00 hours (0.85 x 2,160). The OPSMAX time for the ARM Mobile Facility (AMF) is not reported this quarter because not all of the metadata have been acquired that are used to generate this metric. The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 90 days for this quarter) the instruments were operating this quarter. Summary. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period January 1 - March 31, 2009, for the fixed sites. The AMF has completed its mission in China but not all of the data can be released to the public at the time of this report. The second quarter comprises a total of 2,160 hours. The average exceeded our goal this quarter.

  12. Atmospheric Radiation Measurement program climate research facility operations quarterly report April 1 - June 30, 2007.

    SciTech Connect (OSTI)

    Sisterson, D. L.

    2007-07-26

    Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the third quarter of FY 2007 for the Southern Great Plains (SGP) site is 2,074.8 hours (0.95 x 2,184 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,965.6 hours (0.90 x 2,184), and that for the Tropical Western Pacific (TWP) locale is 1,856.4 hours (0.85 x 2,184). The OPSMAX time for the ARM Mobile Facility (AMF) is 2,074.8 hours (0.95 x 2,184). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percent of data in the Archive represents the average percent of the time (24 hours per day, 91 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), the actual hours of operation, and the variance (unplanned downtime) for the period April 1 through June 30, 2007, for the fixed sites only. The AMF has been deployed to Germany and is operational this quarter. The third quarter comprises a total of 2,184 hours. Although the average exceeded our goal this quarter, cash flow issues resulting from the Continuing Resolution early in the period did not allow for timely instrument repairs, which kept our statistics lower than in past quarters at all sites. The low NSA numbers resulted from missing MFRSR data this spring, which appear to be recoverable but were not available at the Archive at the time of this report.

  13. Atmospheric Radiation Measurement program climate research facilities quarterly report April 1 - June 30, 2009.

    SciTech Connect (OSTI)

    Sisterson, D. L.

    2009-07-14

    Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near-real time. Raw and processed data are then sent approximately daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the third quarter of FY 2009 for the Southern Great Plains (SGP) site is 2,074.80 hours (0.95 x 2,184 hours this quarter); for the North Slope Alaska (NSA) locale it is 1,965.60 hours (0.90 x 2,184); and for the Tropical Western Pacific (TWP) locale it is 1,856.40 hours (0.85 x 2,184). The ARM Mobile Facility (AMF) was officially operational May 1 in Graciosa Island, the Azores, Portugal, so the OPSMAX time this quarter is 1390.80 hours (0.95 x 1464). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 91 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for April 1 - June 30, 2009, for the fixed sites. Because the AMF operates episodically, the AMF statistics are reported separately and are not included in the aggregate average with the fixed sites. The AMF statistics for this reporting period were not available at the time of this report. The third quarter comprises a total of 2,184 hours for the fixed sites. The average well exceeded our goal this quarter.

  14. Atmospheric Radiation Measurement program climate research facility operations quarterly report.

    SciTech Connect (OSTI)

    Sisterson, D. L.; Decision and Information Sciences

    2006-09-06

    Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year dating back to 1998. The U.S. Department of Energy requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1-(ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the third quarter for the Southern Great Plains (SGP) site is 2,074.80 hours (0.95 x 2,184 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,965.60 hours (0.90 x 2,184), and that for the Tropical Western Pacific (TWP) locale is 1,856.40 hours (0.85 x 2,184). The OPSMAX time for the ARM Mobile Facility (AMF) is 2,074.80 hours (0.95 x 2,184). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percent of data in the Archive represents the average percent of the time (24 hours per day, 91 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), the actual hours of operation, and the variance (unplanned downtime) for the period April 1 through June 30, 2006, for the fixed and mobile sites. Although the AMF is currently up and running in Niamey, Niger, Africa, the AMF statistics are reported separately and not included in the aggregate average with the fixed sites. The third quarter comprises a total of 2,184 hours. For all fixed sites (especially the TWP locale) and the AMF, the actual data availability (and therefore actual hours of operation) exceeded the individual (as well as the aggregate average of the fixed sites) operational goal for the third quarter of fiscal year (FY) 2006.

  15. Robustness analysis of an air heating plant and control law by using polynomial chaos

    SciTech Connect (OSTI)

    Colón, Diego; Ferreira, Murillo A. S.; Bueno, Átila M.; Balthazar, José M.; Rosa, Suélia S. R. F. de

    2014-12-10

    This paper presents a robustness analysis of an air heating plant with a multivariable closed-loop control law by using the polynomial chaos methodology (MPC). The plant consists of a PVC tube with a fan in the air input (that forces the air through the tube) and a mass flux sensor in the output. A heating resistance warms the air as it flows inside the tube, and a thermo-couple sensor measures the air temperature. The plant thus has two inputs (the fan's rotation intensity and the heat generated by the resistance, both measured in percent of the maximum value) and two outputs (air temperature and air mass flux, also in percent of the maximum value). The mathematical model is obtained by System Identification techniques. The mass flux sensor, which is nonlinear, is linearized, and the delays in the transfer functions are properly approximated by non-minimum phase transfer functions. The resulting model is transformed to a state-space model, which is used for control design purposes. The multivariable robust control design technique used is LQG/LTR, and the controllers are validated in simulation software and in the real plant. Finally, the MPC is applied by considering some of the system's parameters as random variables (one at a time), and the system's stochastic differential equations are solved by expanding the solution (a stochastic process) in an orthogonal basis of polynomial functions of the basic random variables. This method transforms the stochastic equations into a set of deterministic differential equations, which can be solved by traditional numerical methods (this is the MPC). Statistical data for the system (like expected values and variances) are then calculated. The effects of randomness in the parameters are evaluated through the open-loop and closed-loop pole positions.
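    For reference, the generic form of a truncated polynomial chaos expansion of a response u depending on a random parameter ξ is sketched below; the specific orthogonal basis, truncation order, and parameter distributions used by the authors are not stated in the abstract, so this is only the standard textbook form.

```latex
% Generic truncated polynomial chaos expansion of a response u(t,\xi):
% u_k(t) are deterministic coefficients, \Psi_k orthogonal polynomials in \xi.
u(t,\xi) \approx \sum_{k=0}^{P} u_k(t)\,\Psi_k(\xi), \qquad
\mathbb{E}[u] = u_0(t), \qquad
\mathrm{Var}[u] = \sum_{k=1}^{P} u_k^2(t)\,\langle \Psi_k^2 \rangle .
```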

  16. 2-D Coda and Direct Wave Attenuation Tomography in Northern Italy

    SciTech Connect (OSTI)

    Morasca, P; Mayeda, K; Gok, R; Phillips, W S; Malagnini, L

    2007-10-17

    A 1-D coda method was proposed by Mayeda et al. (2003) in order to obtain stable seismic source moment-rate spectra using narrowband coda envelope measurements. That study took advantage of the averaging nature of coda waves to derive stable amplitude measurements taking into account all propagation, site, and S-to-coda transfer function effects. Recently, this methodology was applied to microearthquake data sets from three sub-regions of northern Italy (i.e., western Alps, northern Apennines and eastern Alps). Since the study regions were small, ranging between local-to-near-regional distances, the simple 1-D path assumptions used in the coda method worked very well. The lateral complexity of this region would suggest, however, that a 2-D path correction might provide even better results if the datasets were combined, especially when paths traverse larger distances and complicated regions. The structural heterogeneity of northern Italy makes the region ideal to test the extent to which coda variance can be reduced further by using a 2-D Q tomography technique. The approach we use has been developed by Phillips et al. (2005) and is an extension of previous amplitude ratio techniques to remove source effects from the inversion. The method requires some assumptions, such as isotropic source radiation, which is generally true for coda waves. Our results are compared against direct S-wave inversions for 1/Q, and the results from both share very similar attenuation features that coincide with known geologic structures. We compare our results with those derived from direct waves as well as some recent results from northern California obtained by Mayeda et al. (2005), which tested the same tomographic methodology applied in this study to invert for 1/Q. We find that 2-D coda path corrections for this region significantly improve upon the 1-D corrections, in contrast to California where only a marginal improvement was observed. We attribute this difference to stronger lateral variations in Q for northern Italy relative to California.

  17. Multiple-point statistical prediction on fracture networks at Yucca Mountain

    SciTech Connect (OSTI)

    Liu, X.Y; Zhang, C.Y.; Liu, Q.S.; Birkholzer, J.T.

    2009-05-01

    In many underground nuclear waste repository systems, such as at Yucca Mountain, the water flow rate and the amount of water seepage into the waste emplacement drifts are mainly determined by hydrological properties of the fracture network in the surrounding rock mass. A natural fracture network system is not easy to describe, especially with respect to its connectivity, which is critically important for simulating the water flow field. In this paper, we introduce a new method for fracture network description and prediction, termed multiple-point statistics (MPS). The process of the MPS method is to record multiple-point statistics concerning the connectivity patterns of a fracture network from a known fracture map, and to reproduce multiple-scale training fracture patterns in a stochastic manner, implicitly and directly. It is applied to fracture data to study flow field behavior at the Yucca Mountain waste repository system. First, the MPS method is used to create a fracture network with an original fracture training image from the Yucca Mountain dataset. After we adopt a harmonic and arithmetic average method to upscale the permeability to a coarse grid, a coupled THM (thermal-hydrological-mechanical) simulation is carried out to study near-field water flow around the waste emplacement drifts. Our study shows that the connectivity or patterns of fracture networks can be grasped and reconstructed by MPS methods. In theory, this will lead to better prediction of fracture system characteristics and flow behavior. Meanwhile, we can obtain the variance of the flow field, which gives us a way to quantify model uncertainty even in complicated coupled THM simulations. This indicates that MPS can potentially characterize and reconstruct natural fracture networks in a fractured rock mass, with the advantages of quantifying the connectivity of the fracture system and its simulation uncertainty simultaneously.

  18. A global warning for global warming

    SciTech Connect (OSTI)

    Paepe, R.

    1996-12-31

    The problem of global warming is complex, not only because it is affecting desert areas such as the Sahel, leading to famine disasters in poor rural societies, but because it is an even greater threat to modern, well-established industrial societies. Global warming is a combined problem of geographical, economic, and societal factors, which are strongly biased by local environmental parameters. There is an absolute need to increase the knowledge of such parameters, especially to understand their limits of variance. The greenhouse effect is a global mechanism, which means that changing conditions at one point of the Earth will affect all other regions of the globe. Industrial pollution and devastation of the forest are quoted as similar polluting anthropogenic activities in far-apart regions of the world with totally different societies and industrial compounds. The other important factor is climatic cyclicity, which means that droughts are bound to natural cycles. These natural cycles are numerous, as is reflected in the study of geo-proxy data from several sequential geological series on land, ice, and the deep sea. Each of these cycles reveals a drought cycle, and these occasionally interfere at the same time. It is believed that the present drought might well be a point of interference between the natural cycles of 2,500 and 1,000 years and the man-induced cycle of the last century's warming. If the latter is the only cycle involved, man will be able to remediate. If not, global warming will become even more disastrous beyond the 21st century.

  19. Computation of probabilistic hazard maps and source parameter estimation for volcanic ash transport and dispersion

    SciTech Connect (OSTI)

    Madankan, R.; Pouget, S.; Singla, P.; Bursik, M.; Dehn, J.; Jones, M.; Patra, A.; Pavolonis, M.; Pitman, E.B.; Singh, T.; Webley, P.

    2014-08-15

    Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes, for aviation, health and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However, initial plume conditions (height, profile of particle location, volcanic vent parameters) are known only approximately at best, and other features of the governing system, such as the windfield, are stochastic. These uncertainties make forecasting plume motion difficult. As a result of these uncertainties, ash advisories based on a deterministic approach tend to be conservative, and many times overestimate or underestimate the extent of a plume. This paper presents an end-to-end framework for generating a probabilistic approach to ash plume forecasting. This framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply, to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition, to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14-16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions, and a probabilistic spatial-temporal estimate of ash presence, are computed.

  20. HIERARCHICAL STRUCTURE OF MAGNETOHYDRODYNAMIC TURBULENCE IN POSITION-POSITION-VELOCITY SPACE

    SciTech Connect (OSTI)

    Burkhart, Blakesley; Lazarian, A.; Goodman, Alyssa; Rosolowsky, Erik

    2013-06-20

    Magnetohydrodynamic turbulence is able to create hierarchical structures in the interstellar medium (ISM) that are correlated on a wide range of scales via the energy cascade. We use hierarchical tree diagrams known as dendrograms to characterize structures in synthetic position-position-velocity (PPV) emission cubes of isothermal magnetohydrodynamic turbulence. We show that the structures and degree of hierarchy observed in PPV space are related to the presence of self-gravity and the global sonic and Alfvenic Mach numbers. Simulations with higher Alfvenic Mach number, self-gravity and supersonic flows display enhanced hierarchical structure. We observe a strong dependency on the sonic and Alfvenic Mach numbers and self-gravity when we apply the statistical moments (i.e., mean, variance, skewness, kurtosis) to the leaf and node distribution of the dendrogram. Simulations with self-gravity, larger magnetic field and higher sonic Mach number have dendrogram distributions with higher statistical moments. Application of the dendrogram to three-dimensional density cubes, also known as position-position-position (PPP) cubes, reveals that the dominant emission contours in PPP and PPV are related for supersonic gas but not for subsonic. We also explore the effects of smoothing, thermal broadening, and velocity resolution on the dendrograms in order to make our study more applicable to observational data. These results all point to hierarchical tree diagrams as being a promising additional tool for studying ISM turbulence and star forming regions for obtaining information on the degree of self-gravity, the Mach numbers and the complicated relationship between PPV and PPP data.
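    Computing the four statistical moments of a dendrogram's leaf and node distribution, as described above, is straightforward; a minimal sketch follows. The input array is a hypothetical set of leaf intensities (in practice they would come from a dendrogram package applied to a PPV cube), and the dendrogram construction itself is not shown.

```python
import numpy as np
from scipy import stats

# Hypothetical leaf/node intensities extracted from a dendrogram of a PPV cube.
leaf_values = np.array([1.2, 3.4, 2.2, 5.1, 0.9, 4.3, 2.8])

moments = {
    "mean": np.mean(leaf_values),
    "variance": np.var(leaf_values),
    "skewness": stats.skew(leaf_values),
    "kurtosis": stats.kurtosis(leaf_values),  # excess kurtosis
}
print(moments)
```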

  1. Smoothed particle hydrodynamics model for Landau-Lifshitz Navier-Stokes and advection-diffusion equations

    SciTech Connect (OSTI)

    Kordilla, Jannes; Pan, Wenxiao; Tartakovsky, Alexandre M.

    2014-12-14

    We propose a novel Smoothed Particle Hydrodynamics (SPH) discretization of the fully coupled Landau-Lifshitz-Navier-Stokes (LLNS) and advection-diffusion equations. The accuracy of the SPH solution of the LLNS equations is demonstrated by comparing the scaling of velocity variance and self-diffusion coefficient with kinetic temperature and particle mass obtained from the SPH simulations and analytical solutions. The spatial covariance of pressure and velocity fluctuations is found to be in good agreement with theoretical models. To validate the accuracy of the SPH method for the coupled LLNS and advection-diffusion equations, we simulate the interface between two miscible fluids. We study the formation of the so-called giant fluctuations of the front between light and heavy fluids with and without gravity, where the light fluid lies on top of the heavy fluid. We find that the power spectrum of the simulated concentration field is in good agreement with the experiments and analytical solutions. In the absence of gravity, the power spectrum decays as the power -4 of the wave number, except for small wave numbers, which diverge from this power-law behavior due to the effect of finite domain size. Gravity suppresses the fluctuations, resulting in a much weaker dependence of the power spectrum on the wave number. Finally, the model is used to study the effect of thermal fluctuations on the Rayleigh-Taylor instability, the unstable dynamics of a front in which a heavy fluid overlies a light fluid. The front dynamics is shown to agree well with the analytical solutions.

  2. End-of-life flows of multiple cycle consumer products

    SciTech Connect (OSTI)

    Tsiliyannis, C.A.

    2011-11-15

    Explicit expressions for the end-of-life (EOL) flows of single and multiple cycle products (MCPs) are presented, including deterministic and stochastic EOL exit. The expressions are given in terms of the physical parameters (maximum lifetime, T, annual cycling frequency, f, number of cycles, N, and early discard or usage loss). EOL flows are also obtained for hi-tech products, which are rapidly renewed and thus may not attain steady state (e.g. electronic products, passenger cars). A ten-step recursive procedure for obtaining the dynamic EOL flow evolution is proposed. Applications of the EOL expressions and the ten-step procedure are given for electric household appliances, industrial machinery, tyres, vehicles and buildings, both for deterministic and stochastic EOL exit (normal, Weibull and uniform exit distributions). The effect of the physical parameters and the stochastic characteristics on the EOL flow is investigated in the examples: it is shown that the EOL flow profile is determined primarily by the early discard dynamics; it also depends strongly on longevity and cycling frequency: higher lifetime or early discard/loss imply lower dynamic and steady state EOL flows. The stochastic exit shapes the overall EOL dynamic profile: under a symmetric EOL exit distribution, as the variance of the distribution increases (uniform to normal to deterministic), the initial EOL flow rise becomes steeper but the steady state or maximum EOL flow level is lower. The steepest EOL flow profile, featuring the highest steady state or maximum level as well, corresponds to a skewed, earlier-shifted EOL exit (e.g. Weibull). Since the EOL flow of returned products constitutes the sink of the reuse/remanufacturing cycle (sink to recycle), the results may be used in closed-loop product lifecycle management operations for scheduling and sizing reverse manufacturing and for planning recycle logistics. Decoupling and quantification of both the full-age EOL flows and the early discard flows is useful, the latter being the target of enacted legislation aiming at increasing reuse.

  3. PUFF-III: A Code for Processing ENDF Uncertainty Data Into Multigroup Covariance Matrices

    SciTech Connect (OSTI)

    Dunn, M.E.

    2000-06-01

    PUFF-III is an extension of the previous PUFF-II code that was developed in the 1970s and early 1980s. The PUFF codes process the Evaluated Nuclear Data File (ENDF) covariance data and generate multigroup covariance matrices on a user-specified energy grid structure. Unlike its predecessor, PUFF-III can process the new ENDF/B-VI data formats. In particular, PUFF-III has the capability to process the spontaneous fission covariances for fission neutron multiplicity. With regard to the covariance data in File 33 of the ENDF system, PUFF-III has the capability to process short-range variance formats, as well as the lumped reaction covariance data formats that were introduced in ENDF/B-V. In addition to the new ENDF formats, a new directory feature is now available that allows the user to obtain a detailed directory of the uncertainty information in the data files without visually inspecting the ENDF data. Following the correlation matrix calculation, PUFF-III also evaluates the eigenvalues of each correlation matrix and tests each matrix for positive definiteness. Additional new features are discussed in the manual. PUFF-III has been developed for implementation in the AMPX code system, and several modifications were incorporated to improve memory allocation tasks and input/output operations. Consequently, the resulting code has a structure that is similar to other modules in the AMPX code system. With the release of PUFF-III, a new and improved covariance processing code is available to process ENDF covariance formats through Version VI.
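    The eigenvalue check described above is a standard linear-algebra test; a minimal sketch of testing a correlation matrix for positive (semi-)definiteness is given below. This is illustrative only and is not PUFF-III's internal implementation.

```python
import numpy as np

def is_positive_semidefinite(corr, tol=1e-10):
    """Test a symmetric correlation matrix via its eigenvalues.
    Illustrative only; PUFF-III performs an analogous check internally."""
    eigvals = np.linalg.eigvalsh(corr)   # eigenvalues of a symmetric matrix
    return eigvals.min() >= -tol, eigvals

# Example: a small, valid 3x3 correlation matrix
corr = np.array([[1.0, 0.2, 0.1],
                 [0.2, 1.0, 0.3],
                 [0.1, 0.3, 1.0]])
ok, eigvals = is_positive_semidefinite(corr)
print(ok, eigvals)
```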

  4. Investigating wind turbine impacts on near-wake flow using profiling Lidar data and large-eddy simulations with an actuator disk model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mirocha, Jeffrey D.; Rajewski, Daniel A.; Marjanovic, Nikola; Lundquist, Julie K.; Kosovic, Branko; Draxl, Caroline; Churchfield, Matthew J.

    2015-08-27

    In this study, wind turbine impacts on the atmospheric flow are investigated using data from the Crop Wind Energy Experiment (CWEX-11) and large-eddy simulations (LESs) utilizing a generalized actuator disk (GAD) wind turbine model. CWEX-11 employed velocity-azimuth display (VAD) data from two Doppler lidar systems to sample vertical profiles of flow parameters across the rotor depth both upstream and in the wake of an operating 1.5 MW wind turbine. Lidar and surface observations obtained during four days of July 2011 are analyzed to characterize the turbine impacts on wind speed and flow variability, and to examine the sensitivity of these changes to atmospheric stability. Significant velocity deficits (VD) are observed at the downstream location during both convective and stable portions of four diurnal cycles, with large, sustained deficits occurring during stable conditions. Variances of the streamwise velocity component, σu, likewise show large increases downstream during both stable and unstable conditions, with stable conditions supporting sustained small increases of σu, while convective conditions featured both larger magnitudes and increased variability, due to the large coherent structures in the background flow. Two representative case studies, one stable and one convective, are simulated using LES with a GAD model at 6 m resolution to evaluate the compatibility of the simulation framework with validation using vertically profiling lidar data in the near wake region. Virtual lidars were employed to sample the simulated flow field in a manner consistent with the VAD technique. Simulations reasonably reproduced aggregated wake VD characteristics, albeit with smaller magnitudes than observed, while σu values in the wake are more significantly underestimated. The results illuminate the limitations of using a GAD in combination with coarse model resolution in the simulation of near wake physics, and validation thereof using VAD data.

  5. Industrial advanced turbine systems: Development and demonstration. Annual report, October 1, 1996--September 30, 1997

    SciTech Connect (OSTI)

    1997-12-31

    The US DOE has initiated a program for advanced turbine systems (ATS) that will serve industrial power generation markets. The ATS will provide ultra-high efficiency, environmental superiority, and cost competitiveness. The ATS will foster (1) early market penetration that enhances the global competitiveness of US industry, (2) public health benefits resulting from reduced exhaust gas emissions of target pollutants, (3) reduced cost of power used in the energy-intensive industrial marketplace and (4) the retention and expansion of the skilled US technology base required for the design, development and maintenance of state-of-the-art advanced turbine products. The Industrial ATS Development and Demonstration program is a multi-phased effort. Solar Turbines Incorporated (Solar) has participated in Phases 1 and 2 of the program. On September 14, 1995 Solar was awarded a Cooperative Agreement for Phases 3 and 4 of the program. Phase 3 of the work is separated into two subphases: Phase 3A entails Component Design and Development, and Phase 3B will involve Integrated Subsystem Testing. Phase 4 will cover Host Site Testing. Forecasts call for completion of the program within budget as originally estimated. Scheduled completion is forecasted to be approximately 3 years later than the original plan. This delay has been intentionally planned in order to better match program tasks to the anticipated availability of DOE funds. To ensure the timely realization of DOE/Solar program goals, the development schedule for the smaller system (Mercury 50) and enabling technologies has been maintained, and commissioning of the field test unit is scheduled for May of 2000. As of the end of the reporting period, work on the program is 22.80% complete based upon milestones completed. This measurement is considered quite conservative, as numerous drawings for the Mercury 50 are near release. Variance information is provided in Section 4.0, Program Management.

  6. SU-E-T-602: Beryllium Seeds Implant for Photo-Neutron Yield Using External Beam Therapy

    SciTech Connect (OSTI)

    Koren, S; Veltchev, I; Furhang, E

    2014-06-01

    Purpose: To evaluate the neutron yield obtained during prostate external beam irradiation. Methods: Neutrons, which are commonly a radiation safety concern for photon beams with energy above 10 MV, are induced inside a PTV from implanted Beryllium seeds. A high megavoltage photon beam delivered to a prostate will yield neutrons via the reaction Be-9(γ,n)2α. Beryllium was chosen for its low (γ,n) reaction cross-section threshold (1.67 MeV), to be combined with a feasible high-energy 25 MV photon beam. This beam spectrum has a most probable photon energy of 2.5 to 3.0 MeV and an average photon energy of about 5.8 MeV. For this feasibility study we simulated a Beryllium seed of common dimensions (0.1 cm diameter and 0.5 cm height) without taking encapsulation into account. We created a 0.5 cm grid loading pattern excluding the urethra, using VariSeed (Varian Inc.). A total of 156 seeds were exported to a 4 cm diameter prostate sphere, created in FLUKA, a particle transport Monte Carlo code. Two opposed 25 MV beams were simulated. The evaluation of the neutron dose was done by adjusting the simulated photon dose to a common prostate prescription (e.g. 7560 cGy in 42 fractions) and finding the corresponding neutron dose yield from the simulation. A variance reduction technique was applied for the neutrons yielded and transported. Results: An effective dose of 3.65 cGy due to neutrons was found in the prostate volume. The dose to central areas of the prostate was found to be about 10 cGy. Conclusion: The neutron dose yielded does not justify a clinical implant of Beryllium seeds. Nevertheless, one should investigate the neutron dose obtained when a larger Beryllium loading is combined with commercially available 40 MeV linacs.

  7. AnalyzeHOLE: An Integrated Wellbore Flow Analysis Tool

    SciTech Connect (OSTI)

    Keith J. Halford

    2009-10-01

    Conventional interpretation of flow logs assumes that hydraulic conductivity is directly proportional to flow change with depth. However, well construction can significantly alter the expected relation between changes in fluid velocity and hydraulic conductivity. Strong hydraulic conductivity contrasts between lithologic intervals can be masked in continuously screened wells. Alternating intervals of screen and blank casing also can greatly complicate the relation between flow and hydraulic properties. More permeable units are not necessarily associated with rapid fluid-velocity increases. Thin, highly permeable units can be misinterpreted as thick and less permeable intervals or not identified at all. These conditions compromise standard flow-log interpretation because vertical flow fields are induced near the wellbore. AnalyzeHOLE, an integrated wellbore analysis tool for simulating flow and transport in wells and aquifer systems, provides a better alternative for simulating and evaluating complex well-aquifer system interaction. A pumping well and adjacent aquifer system are simulated with an axisymmetric, radial geometry in a two-dimensional MODFLOW model. Hydraulic conductivities are distributed by depth and estimated with PEST by minimizing squared differences between simulated and measured flows and drawdowns. Hydraulic conductivity can vary within a lithology but variance is limited with regularization. Transmissivity of the simulated system also can be constrained to estimates from single-well, pumping tests. Water-quality changes in the pumping well are simulated with simple mixing models between zones of differing water quality. These zones are differentiated by backtracking thousands of particles from the well screens with MODPATH. An Excel spreadsheet is used to interface the various components of AnalyzeHOLE by (1) creating model input files, (2) executing MODFLOW, MODPATH, PEST, and supporting FORTRAN routines, and (3) importing and graphically displaying pertinent results.

  8. A NEW METHOD TO CORRECT FOR FIBER COLLISIONS IN GALAXY TWO-POINT STATISTICS

    SciTech Connect (OSTI)

    Guo Hong; Zehavi, Idit; Zheng Zheng

    2012-09-10

    In fiber-fed galaxy redshift surveys, the finite size of the fiber plugs prevents two fibers from being placed too close to one another, limiting the ability to study galaxy clustering on all scales. We present a new method for correcting such fiber collision effects in galaxy clustering statistics based on spectroscopic observations. The target galaxy sample is divided into two distinct populations according to the targeting algorithm of fiber placement, one free of fiber collisions and the other consisting of collided galaxies. The clustering statistics are a combination of the contributions from these two populations. Our method makes use of observations in tile overlap regions to measure the contributions from the collided population, and to therefore recover the full clustering statistics. The method is rooted in solid theoretical ground and is tested extensively on mock galaxy catalogs. We demonstrate that our method can well recover the projected and the full three-dimensional (3D) redshift-space two-point correlation functions (2PCFs) on scales both below and above the fiber collision scale, superior to the commonly used nearest neighbor and angular correction methods. We discuss potential systematic effects in our method. The statistical correction accuracy of our method is only limited by sample variance, which scales down with (the square root of) the volume probed. For a sample similar to the final SDSS-III BOSS galaxy sample, the statistical correction error is expected to be at the level of 1% on scales {approx}0.1-30 h {sup -1} Mpc for the 2PCFs. The systematic error only occurs on small scales, caused by imperfect correction of collision multiplets, and its magnitude is expected to be smaller than 5%. Our correction method, which can be generalized to other clustering statistics as well, enables more accurate measurements of full 3D galaxy clustering on all scales with galaxy redshift surveys.
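
    The sketch below is a toy illustration of the population-splitting bookkeeping described above, not the authors' estimator: pair counts are decomposed into contributions from the collision-free and collided populations, with the collided-population contributions measured only in tile-overlap regions and rescaled by an assumed overlap fraction.

    ```python
    import numpy as np

    # Schematic decomposition of a pair count into contributions from the
    # collision-free population (D1) and the collided population (D2).
    # Numbers are synthetic; the real estimator works on survey pair counts.
    rng = np.random.default_rng(0)
    n_bins = 10
    dd_11 = rng.poisson(1000, n_bins).astype(float)    # D1-D1 pairs, full footprint
    dd_12_ov = rng.poisson(120, n_bins).astype(float)  # D1-D2 pairs, overlap regions only
    dd_22_ov = rng.poisson(15, n_bins).astype(float)   # D2-D2 pairs, overlap regions only
    f_overlap = 0.3                                    # assumed fraction of collided galaxies observed in overlaps

    # Scale the overlap-region measurements up to the full collided population,
    # then combine the three contributions into the total pair count per bin.
    dd_total = dd_11 + dd_12_ov / f_overlap + dd_22_ov / f_overlap**2
    print(dd_total)
    ```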

  9. Toward a new parameterization of hydraulic conductivity in climate models: Simulation of rapid groundwater fluctuations in Northern California: HYDRAULIC CONDUCTIVITY IN CLIMATE MODELS

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Vrettas, Michail D.; Fung, Inez Y.

    2015-12-01

    Preferential flow through weathered bedrock leads to rapid rise of the water table after the first rainstorms and significant water storage (also known as “rock moisture”) in the fractures. We present a new parameterization of hydraulic conductivity that captures the preferential flow and is easy to implement in global climate models. To mimic the naturally varying heterogeneity with depth in the subsurface, the model represents the hydraulic conductivity as a product of the effective saturation and a background hydraulic conductivity Kbkg, drawn from a lognormal distribution. The mean of the background Kbkg decreases monotonically with depth, while its variance reduces with the effective saturation. Model parameters are derived by assimilating into Richards’ equation 6 years of 30 min observations of precipitation (mm) and water table depths (m), from seven wells along a steep hillslope in the Eel River watershed in Northern California. The results show that the observed rapid penetration of precipitation and the fast rise of the water table from the well locations, after the first winter rains, are well captured with the new stochastic approach in contrast to the standard van Genuchten model of hydraulic conductivity, which requires significantly higher levels of saturated soils to produce the same results. “Rock moisture,” the moisture between the soil mantle and the water table, comprises 30% of the moisture because of the great depth of the weathered bedrock layer and could be a potential source of moisture to sustain trees through extended dry periods. Furthermore, storage of moisture in the soil mantle is smaller, implying less surface runoff and less evaporation, with the proposed new model.
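
    A minimal sketch of the parameterization described above, with all constants assumed for illustration: the background conductivity is drawn from a lognormal distribution whose mean decays with depth and whose spread shrinks with effective saturation, and the effective conductivity is the product of saturation and background value.

    ```python
    import numpy as np

    # Minimal sketch of the stochastic conductivity parameterization described
    # above: K = Se * Kbkg, with Kbkg lognormal, its mean decaying with depth and
    # its spread shrinking as the column saturates. All constants are illustrative.
    rng = np.random.default_rng(42)
    depth = np.linspace(0.0, 10.0, 50)          # m below surface
    Se = 0.6                                    # effective saturation (0..1)

    mu_surface, decay_len = -4.0, 3.0           # log10 K at surface, e-folding depth (assumed)
    mu = mu_surface - depth / decay_len         # mean of log10(Kbkg) decreases with depth
    sigma = 1.0 * (1.0 - Se)                    # spread shrinks as saturation rises (assumed form)

    K_bkg = 10.0 ** rng.normal(mu, sigma)       # lognormal background conductivity (m/s)
    K_eff = Se * K_bkg                          # effective hydraulic conductivity
    print(K_eff[:5])
    ```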

  10. Waste Isolation Pilot Plant Safety Analysis Report

    SciTech Connect (OSTI)

    1995-11-01

    The following provides a summary of the specific issues addressed in this FY-95 Annual Update as they relate to the CH TRU safety bases: Executive Summary; Site Characteristics; Principal Design and Safety Criteria; Facility Design and Operation; Hazards and Accident Analysis; Derivation of Technical Safety Requirements; Radiological and Hazardous Material Protection; Institutional Programs; Quality Assurance; and Decontamination and Decommissioning. The "System Design Descriptions" (SDDs) for the WIPP were reviewed and incorporated into Chapter 3, Principal Design and Safety Criteria, and Chapter 4, Facility Design and Operation. This provides the most currently available final engineering design information on waste emplacement operations throughout the disposal phase up to the point of permanent closure. Also, the criteria which define the TRU waste to be accepted for disposal at the WIPP facility were summarized in Chapter 3 based on the "WAC for the Waste Isolation Pilot Plant." This Safety Analysis Report (SAR) documents the safety analyses that develop and evaluate the adequacy of the Waste Isolation Pilot Plant Contact-Handled Transuranic Wastes (WIPP CH TRU) safety bases necessary to ensure the safety of workers, the public and the environment from the hazards posed by WIPP waste handling and emplacement operations during the disposal phase and hazards associated with the decommissioning and decontamination phase. The analyses of the hazards associated with the long-term (10,000 year) disposal of TRU and TRU mixed waste, and demonstration of compliance with the requirements of 40 CFR 191, Subpart B and 40 CFR 268.6 will be addressed in detail in the WIPP Final Certification Application scheduled for submittal in October 1996 (40 CFR 191) and the No-Migration Variance Petition (40 CFR 268.6) scheduled for submittal in June 1996. Section 5.4, Long-Term Waste Isolation Assessment summarizes the current status of the assessment.

  11. Hydroacoustic Evaluation of Fish Passage Through Bonneville Dam in 2005

    SciTech Connect (OSTI)

    Ploskey, Gene R.; Weiland, Mark A.; Zimmerman, Shon A.; Hughes, James S.; Bouchard, Kyle E.; Fischer, Eric S.; Schilt, Carl R.; Hanks, Michael E.; Kim, Jina; Skalski, John R.; Hedgepeth, J.; Nagy, William T.

    2006-12-04

    The Portland District of the U.S. Army Corps of Engineers requested that the Pacific Northwest National Laboratory (PNNL) conduct fish-passage studies at Bonneville Dam in 2005. These studies support the Portland District's goal of maximizing fish-passage efficiency (FPE) and obtaining 95% survival for juvenile salmon passing Bonneville Dam. Major passage routes include 10 turbines and a sluiceway at Powerhouse 1 (B1), an 18-bay spillway, and eight turbines and a sluiceway at Powerhouse 2 (B2). In this report, we present results of two studies related to juvenile salmonid passage at Bonneville Dam. The studies were conducted between April 16 and July 15, 2005, encompassing most of the spring and summer migrations. Studies included evaluations of (1) Project fish passage efficiency and other major passage metrics, and (2) smolt approach and fate at B1 Sluiceway Outlet 3C from the B1 forebay. Some of the large appendices are only presented on the compact disk (CD) that accompanies the final report. Examples include six large comma-separated-variable (.CSV) files of hourly fish passage, hourly variances, and Project operations for spring and summer from Appendix E, and large Audio Video Interleave (AVI) files with DIDSON-movie clips of the area upstream of B1 Sluiceway Outlet 3C (Appendix H). Those video clips show smolts approaching the outlet, predators feeding on smolts, and vortices that sometimes entrained approaching smolts into turbines. The CD also includes Adobe Acrobat Portable Document Files (PDF) of the entire report and appendices.

  12. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    SciTech Connect (OSTI)

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-11-01

    Continuous responses (e.g., body weight) are widely used in risk assessment for determining the benchmark dose (BMD), which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose-response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean and standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the hybrid method and relative deviation approach, we first evaluate six representative continuous dose-response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption influences BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within-dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: We investigate to what extent the distribution assumption can affect BMD estimates. Both real data analysis and a simulation study are conducted. BMDs estimated using the hybrid method are more sensitive to the distribution assumption. Summarized continuous data are adequate for BMD estimation.
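
    As a worked illustration of a relative-deviation style BMD calculation (a simplified stand-in, not the EPA hybrid method or the authors' exact procedure), the sketch below fits an assumed exponential dose-response model to synthetic group means and solves for the dose at which the mean shifts 10% from control.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit, brentq

    # Illustrative summarized data: dose groups with mean body-weight response.
    dose = np.array([0.0, 10.0, 30.0, 100.0])
    mean = np.array([100.0, 97.0, 92.0, 80.0])

    def model(d, a, b):
        # Simple exponential dose-response model (an assumed form, for illustration).
        return a * np.exp(-b * d)

    (a, b), _ = curve_fit(model, dose, mean, p0=[100.0, 0.001])

    # Relative-deviation BMD: dose where the mean drops 10% below the control mean.
    bmr = 0.10
    bmd = brentq(lambda d: model(d, a, b) - a * (1.0 - bmr), 1e-6, dose.max())
    print(f"BMD (10% relative deviation): {bmd:.1f}")
    ```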

  13. Diversity combining in laser Doppler vibrometry for improved signal reliability

    SciTech Connect (OSTI)

    Dräbenstedt, Alexander

    2014-05-27

    Because of the speckle nature of the light reflected from rough surfaces, the signal quality of a vibrometer suffers from varying signal power. Deep signal outages manifest themselves as noise bursts and spikes in the demodulated velocity signal. Here we show that the signal quality of a single point vibrometer can be substantially improved by diversity reception. This concept is widely used in RF communication and can be transferred into optical interferometry. When two statistically independent measurement channels are available which measure the same motion on the same spot, the probability for both channels to see a signal drop-out at the same time is very low. We built a prototype instrument that uses polarization diversity to constitute two independent reception channels that are separately demodulated into velocity signals. Send and receive beams go through different parts of the aperture so that the beams can be spatially separated. The two velocity channels are mixed into one more reliable signal by a PC program in real time with the help of the signal power information. An algorithm has been developed that ensures a mixing of two or more channels with minimum resulting variance. The combination algorithm also delivers an equivalent signal power for the combined signal. The combined signal lacks the vast majority of spikes that are present in the raw signals and it extracts the true vibration information present in both channels. A statistical analysis shows that the probability for deep signal outages is largely decreased. A 60-fold improvement can be shown. The reduction of spikes and noise bursts also reduces the noise in the spectral analysis of vibrations. Over certain frequency bands a reduction of the noise density by a factor above 10 can be shown.
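
    The weighting scheme described above can be illustrated with a short sketch: assuming the noise variance of each channel is inversely proportional to its measured signal power, the variance-minimizing combination weights each sample by its relative power. The signals and power traces below are synthetic.

    ```python
    import numpy as np

    # Minimal sketch of minimum-variance combining of two vibrometer channels.
    # Assuming channel noise variance is inversely proportional to the measured
    # signal power p_i, the variance-minimizing weights are w_i proportional to p_i.
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 1.0, 5000)
    true_velocity = 1e-3 * np.sin(2 * np.pi * 80 * t)       # m/s, illustrative vibration

    p1 = 0.5 + 0.5 * np.abs(np.sin(2 * np.pi * 3 * t))      # slowly varying signal power, channel 1
    p2 = 0.5 + 0.5 * np.abs(np.cos(2 * np.pi * 3 * t))      # channel 2 (drop-outs rarely coincide)
    v1 = true_velocity + rng.normal(0, 1e-4 / np.sqrt(p1))  # noisier where power is low
    v2 = true_velocity + rng.normal(0, 1e-4 / np.sqrt(p2))

    w1 = p1 / (p1 + p2)                                     # per-sample minimum-variance weights
    w2 = p2 / (p1 + p2)
    combined = w1 * v1 + w2 * v2

    for name, v in (("ch1", v1), ("ch2", v2), ("combined", combined)):
        print(name, "residual RMS:", np.sqrt(np.mean((v - true_velocity) ** 2)))
    ```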

  14. Calculation of large scale relative permeabilities from stochastic properties of the permeability field and fluid properties

    SciTech Connect (OSTI)

    Lenormand, R.; Thiele, M.R.

    1997-08-01

    The paper describes the method and presents preliminary results for the calculation of homogenized relative permeabilities using stochastic properties of the permeability field. In heterogeneous media, the spreading of an injected fluid is mainly due to permeability heterogeneity and viscous fingering. At large scale, when the heterogeneous medium is replaced by a homogeneous one, we need to introduce a homogenized (or pseudo) relative permeability to obtain the same spreading. Generally, the pseudo relative permeability is derived using fine-grid numerical simulations (Kyte and Berry). However, this operation is time consuming and cannot be performed for all the meshes of the reservoir. We propose an alternate method which uses the information given by the stochastic properties of the field without any numerical simulation. The method is based on recent developments on homogenized transport equations (the “MHD” equation, Lenormand SPE 30797). The MHD equation accounts for the three basic mechanisms of spreading of the injected fluid: (1) dispersive spreading due to small-scale randomness, characterized by a macrodispersion coefficient D; (2) convective spreading due to large-scale heterogeneities (layers), characterized by a heterogeneity factor H; (3) viscous fingering, characterized by an apparent viscosity ratio M. In the paper, we first derive the parameters D and H as functions of the variance and correlation length of the permeability field. The results are shown to be in good agreement with fine-grid simulations. The pseudo relative permeabilities are then derived as functions of D, H and M. The main result is that this approach leads to time-dependent pseudo relative permeabilities. Finally, the calculated pseudo relative permeabilities are compared to the values derived by history matching using fine-grid numerical simulations.

  15. An a priori DNS study of the shadow-position mixing model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zhao, Xin-Yu; Bhagatwala, Ankit; Chen, Jacqueline H.; Haworth, Daniel C.; Pope, Stephen B.

    2016-01-15

    The modeling of mixing by molecular diffusion is a central aspect of transported probability density function (tPDF) methods. In this paper, the newly-proposed shadow position mixing model (SPMM) is examined, using a DNS database for a temporally evolving di-methyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database. An approach for a priori analysis of the mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient of the shadow displacement and mixture fraction is higher than that of the shadow displacement and velocity. This suggests that the composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates that are derived by matching the mixture fraction scalar dissipation rates. Good qualitative agreement is found for the predicted locations of zero and maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional diffusion iso-contour similarities and global normalized residual levels. It is found that a suitable value for the model constant c that controls the mixing frequency can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model. A higher-Reynolds-number test case is anticipated to be more appropriate to evaluate the mixing models, and stand-alone transported PDF simulations are required to more fully enforce localness and to assess model performance.

  16. Early Site Permit Demonstration Program: Guidelines for determining design basis ground motions. Volume 2, Appendices

    SciTech Connect (OSTI)

    Not Available

    1993-03-18

    This report develops and applies a methodology for estimating strong earthquake ground motion. The motivation was to develop a much needed tool for use in developing the seismic requirements for structural designs. An earthquake's ground motion is a function of the earthquake's magnitude, and the physical properties of the earth through which the seismic waves travel from the earthquake fault to the site of interest. The emphasis of this study is on ground motion estimation in Eastern North America (east of the Rocky Mountains), with particular emphasis on the Eastern United States and southeastern Canada. Eastern North America is a stable continental region, having sparse earthquake activity with rare occurrences of large earthquakes. While large earthquakes are of interest for assessing seismic hazard, little data exists from the region to empirically quantify their effects. The focus of the report is on the attributes of ground motion in Eastern North America that are of interest for the design of facilities such as nuclear power plants. This document, Volume II, contains Appendices 2, 3, 5, 6, and 7 covering the following topics: Eastern North American Empirical Ground Motion Data; Examination of Variance of Seismographic Network Data; Soil Amplification and Vertical-to-Horizontal Ratios from Analysis of Strong Motion Data From Active Tectonic Regions; Revision and Calibration of Ou and Herrmann Method; Generalized Ray Procedure for Modeling Ground Motion Attenuation; Crustal Models for Velocity Regionalization; Depth Distribution Models; Development of Generic Site Effects Model; Validation and Comparison of One-Dimensional Site Response Methodologies; Plots of Amplification Factors; Assessment of Coupling Between Vertical & Horizontal Motions in Nonlinear Site Response Analysis; and Modeling of Dynamic Soil Properties.

  17. Species interactions differ in their genetic robustness

    SciTech Connect (OSTI)

    Chubiz, Lon M.; Granger, Brian R.; Segre, Daniel; Harcombe, William R.

    2015-04-14

    Conflict and cooperation between bacterial species drive the composition and function of microbial communities. Stability of these emergent properties will be influenced by the degree to which species' interactions are robust to genetic perturbations. We use genome-scale metabolic modeling to computationally analyze the impact of genetic changes when Escherichia coli and Salmonella enterica compete, or cooperate. We systematically knocked out in silico each reaction in the metabolic network of E. coli to construct all 2583 mutant stoichiometric models. Then, using a recently developed multi-scale computational framework, we simulated the growth of each mutant E. coli in the presence of S. enterica. The type of interaction between species was set by modulating the initial metabolites present in the environment. We found that the community was most robust to genetic perturbations when the organisms were cooperating. Species ratios were more stable in the cooperative community, and community biomass had equal variance in the two contexts. Additionally, the number of mutations that have a substantial effect is lower when the species cooperate than when they are competing. In contrast, when mutations were added to the S. enterica network the system was more robust when the bacteria were competing. These results highlight the utility of connecting metabolic mechanisms and studies of ecological stability. Cooperation and conflict alter the connection between genetic changes and properties that emerge at higher levels of biological organization.

  18. Differentiation of Microbial Species and Strains in Coculture Biofilms by Multivariate Analysis of Laser Desorption Postionization Mass Spectra

    SciTech Connect (OSTI)

    University of Illinois at Chicago; Montana State University; Bhardwaj, Chhavi; Cui, Yang; Hofstetter, Theresa; Liu, Suet Yi; Bernstein, Hans C.; Carlson, Ross P.; Ahmed, Musahid; Hanley, Luke

    2013-04-01

    7.87 to 10.5 eV vacuum ultraviolet (VUV) photon energies were used in laser desorption postionization mass spectrometry (LDPI-MS) to analyze biofilms comprised of binary cultures of interacting microorganisms. The effect of photon energy was examined using both tunable synchrotron and laser sources of VUV radiation. Principal components analysis (PCA) was applied to the MS data to differentiate species in Escherichia coli-Saccharomyces cerevisiae coculture biofilms. PCA of LDPI-MS also differentiated individual E. coli strains in a biofilm comprised of two interacting gene deletion strains, even though these strains differed from the wild type K-12 strain by no more than four gene deletions each out of approximately 2000 genes. PCA treatment of 7.87 eV LDPI-MS data separated the E. coli strains into three distinct groups: two “pure” groups and a mixed region. Furthermore, the “pure” regions of the E. coli cocultures showed greater variance by PCA when analyzed by 7.87 eV photon energies than by 10.5 eV radiation. Comparison of the 7.87 and 10.5 eV data is consistent with the expectation that the lower photon energy selects a subset of low ionization energy analytes while 10.5 eV is more inclusive, detecting a wider range of analytes. These two VUV photon energies therefore give different spreads via PCA, and their respective use in LDPI-MS constitutes an additional experimental parameter to differentiate strains and species.
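
    A minimal sketch of the PCA step on a spectra-by-m/z intensity matrix, using synthetic data in place of the LDPI-MS measurements; the scaling and component count are illustrative choices.

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA

    # Minimal sketch of PCA-based differentiation of strains from mass spectra.
    # X would be a (spectra x m/z channels) intensity matrix; here it is synthetic.
    rng = np.random.default_rng(0)
    n_per_strain, n_channels = 20, 300
    strain_a = rng.normal(0.0, 1.0, (n_per_strain, n_channels)) + rng.normal(0, 2, n_channels)
    strain_b = rng.normal(0.0, 1.0, (n_per_strain, n_channels)) + rng.normal(0, 2, n_channels)
    X = np.vstack([strain_a, strain_b])
    labels = ["A"] * n_per_strain + ["B"] * n_per_strain

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
    for label in ("A", "B"):
        mask = np.array([l == label for l in labels])
        print(label, "centroid in PC space:", scores[mask].mean(axis=0).round(2))
    ```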

  19. Uncertainty quantification for evaluating impacts of caprock and reservoir properties on pressure buildup and ground surface displacement during geological CO2 sequestration

    SciTech Connect (OSTI)

    Bao, Jie; Hou, Zhangshuan; Fang, Yilin; Ren, Huiying; Lin, Guang

    2013-08-12

    A series of numerical test cases reflecting broad and realistic ranges of geological formation properties was developed to systematically evaluate and compare the impacts of those properties on geomechanical responses to CO2 injection. A coupled hydro-geomechanical subsurface transport simulator, STOMP (Subsurface Transport over Multiple Phases), was adopted to simulate the CO2 migration process and the geomechanical behavior of the surrounding geological formations. A quasi-Monte Carlo sampling method was applied to efficiently sample a high-dimensional parameter space consisting of injection rate and 14 subsurface formation properties, including porosity, permeability, entry pressure, irreducible gas and aqueous saturation, Young's modulus, and Poisson's ratio for both reservoir and caprock. Generalized cross-validation and analysis of variance methods were used to quantitatively measure the significance of the 15 input parameters. Reservoir porosity, permeability, and injection rate were found to be among the most significant factors affecting the geomechanical responses to the CO2 injection. We used a quadrature generalized linear model to build a reduced-order model that can estimate the geomechanical response instantly instead of running computationally expensive numerical simulations. The injection pressure and ground surface displacement are often monitored for injection well safety and are believed to partially reflect the risk of fault reactivation and seismicity. Based on the reduced-order model and response surface, the input parameters can be screened to control the risk of induced seismicity. Because of the uncertainty in the subsurface structural properties, numerical simulations based on a single sample or a few samples do not accurately estimate the geomechanical response at an actual injection site. Probability of risk can be used to evaluate and predict the risk of injection when there is great uncertainty in the subsurface properties and operating conditions.
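
    The quasi-Monte Carlo sampling step can be sketched with SciPy's Sobol sequence generator; the parameter names and bounds below are placeholders rather than the ranges used in the study.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Sketch of quasi-Monte Carlo sampling of the 15-dimensional input space
    # (injection rate plus 14 formation properties). Bounds are placeholders,
    # not the values used in the study.
    names = ["injection_rate"] + [f"formation_property_{i}" for i in range(1, 15)]
    lower = np.full(15, 0.0)      # placeholder lower bounds
    upper = np.full(15, 1.0)      # placeholder upper bounds

    sampler = qmc.Sobol(d=15, scramble=True, seed=0)
    unit_samples = sampler.random_base2(m=8)          # 2**8 = 256 low-discrepancy points
    samples = qmc.scale(unit_samples, lower, upper)   # map to physical parameter ranges

    print(samples.shape)          # (256, 15); each row would drive one simulator run
    ```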

  20. Improved flywheel materials: characterization of nanofiber modified flywheel test specimen.

    SciTech Connect (OSTI)

    Boyle, Timothy J.; Bell, Nelson Simmons; Ehlen, Mark Andrew; Anderson, Benjamin John; Miller, William Kenneth

    2013-09-01

    As alternative energy generating devices (i.e., solar, wind, etc.) are added onto the electrical energy grid (AC grid), irregularities in the available electricity due to natural occurrences (i.e., clouds reducing solar input or wind bursts increasing wind-powered turbine output) will be dramatically increased. Due to their almost instantaneous response, modern flywheel-based energy storage devices can act as a mechanical mechanism to regulate the AC grid; however, improved spin speeds will be required to meet the necessary energy levels to balance these 'green' energy variances. Focusing on composite flywheels, we have investigated methods for improving the spin speeds based on materials needs. The so-called composite flywheels are composed of carbon fiber (C-fiber), glass fiber, and a 'glue' (resin) to hold them together. For this effort, we have focused on the addition of fillers to the resin in order to improve its properties. Based on the high loads required for standard meso-sized fillers, this project investigated the utility of ceramic nanofillers since they can be added at very low load levels due to their high surface area. The impact that TiO2 nanowires had on the final strength of the flywheel material was determined by a 'three-point-bend' test. The results of the introduction of nanomaterials demonstrated an increase in 'strength' of the flywheel's C-fiber-resin moiety, with an upper limit of a 30% increase being reported. An analysis of the economic impact concerning the utilization of the nanowires was undertaken and, after accounting for new-technology and additional production costs, the return on improved-nanocomposite investment was approximated at 4-6% per year over the 20-year expected service life. Further, it was determined that, based on the 30% improvement in strength, this change may enable a 20-30% reduction in flywheel energy storage cost ($/kW-h).

  1. Predicting Individual Fuel Economy

    SciTech Connect (OSTI)

    Lin, Zhenhong; Greene, David L

    2011-01-01

    To make informed decisions about travel and vehicle purchase, consumers need unbiased and accurate information of the fuel economy they will actually obtain. In the past, the EPA fuel economy estimates based on its 1984 rules have been widely criticized for overestimating on-road fuel economy. In 2008, EPA adopted a new estimation rule. This study compares the usefulness of the EPA's 1984 and 2008 estimates based on their prediction bias and accuracy and attempts to improve the prediction of on-road fuel economies based on consumer and vehicle attributes. We examine the usefulness of the EPA fuel economy estimates using a large sample of self-reported on-road fuel economy data and develop an Individualized Model for more accurately predicting an individual driver's on-road fuel economy based on easily determined vehicle and driver attributes. Accuracy rather than bias appears to have limited the usefulness of the EPA 1984 estimates in predicting on-road MPG. The EPA 2008 estimates appear to be equally inaccurate and substantially more biased relative to the self-reported data. Furthermore, the 2008 estimates exhibit an underestimation bias that increases with increasing fuel economy, suggesting that the new numbers will tend to underestimate the real-world benefits of fuel economy and emissions standards. By including several simple driver and vehicle attributes, the Individualized Model reduces the unexplained variance by over 55% and the standard error by 33% based on an independent test sample. The additional explanatory variables can be easily provided by the individuals.
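
    A sketch of an attribute-based prediction model in the spirit described above (not the authors' Individualized Model): a linear regression on the EPA label value plus simple driver and vehicle attributes, fitted and scored on a synthetic sample.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    # Sketch of an attribute-based correction model for on-road fuel economy.
    # Feature names and data are synthetic stand-ins for the self-reported sample.
    rng = np.random.default_rng(0)
    n = 2000
    epa_label_mpg = rng.uniform(15, 50, n)
    highway_share = rng.uniform(0.2, 0.9, n)      # fraction of highway driving (assumed attribute)
    aggressive_driver = rng.integers(0, 2, n)     # 0/1 driver-style flag (assumed attribute)

    onroad_mpg = (0.85 * epa_label_mpg + 4.0 * highway_share
                  - 2.0 * aggressive_driver + rng.normal(0, 2.0, n))

    X = np.column_stack([epa_label_mpg, highway_share, aggressive_driver])
    X_tr, X_te, y_tr, y_te = train_test_split(X, onroad_mpg, test_size=0.3, random_state=0)

    model = LinearRegression().fit(X_tr, y_tr)
    # Baseline: use the label value itself as the prediction of on-road MPG.
    baseline_r2 = 1 - np.sum((y_te - X_te[:, 0]) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
    print("label-only R^2:", round(baseline_r2, 3), " attribute-model R^2:", round(model.score(X_te, y_te), 3))
    ```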

  2. Sensitivity of MJO to the CAPE lapse time in the NCAR CAM3

    SciTech Connect (OSTI)

    Liu, P.; Wang, B.; Meehl, Gerald A.

    2007-09-05

    The weak and irregular boreal winter MJO in the NCAR CAM3 corresponds to a very low CAPE background, which is caused by deep convection that occurs too easily and is over-dominant, indicating that the deep convective scheme uses either too low a CAPE threshold as its triggering function or too large a CAPE consumption rate to close the scheme. Raising the CAPE threshold from the default 70 J/kg to ten times that value only enhances the CAPE background and fails to noticeably improve the mean wind state and the MJO. However, lengthening the CAPE lapse time from one to eight hours significantly improves the CAPE and wind background and the salient features of the MJO. Variances, dominant periods and zonal wave numbers, power spectra, and the coherent propagating structure in winds and convection associated with the MJO are improved and comparable to the observations. Lengthening the CAPE lapse time to eight hours dramatically reduces the cloud-base mass flux, which effectively prevents the deep convection from occurring prematurely. In this case, the partitioning of deep to shallow convection in the MJO active area is about 5:4.5, compared to over 9:0.5 in the control run. Latent heating is significantly enhanced below 600 hPa over the central Indian Ocean and the western Pacific. Such partitioning of deep and shallow convection is argued to be necessary for simulating realistic MJO features. Although the universal eight hours lies at the upper limit of that required by quasi-equilibrium theory, a local CAPE lapse time for the parameterized cumulus convection would be more realistic.

  3. Kalman-filtered compressive sensing for high resolution estimation of anthropogenic greenhouse gas emissions from sparse measurements.

    SciTech Connect (OSTI)

    Ray, Jaideep; Lee, Jina; Lefantzi, Sophia; Yadav, Vineet; Michalak, Anna M.; van Bloemen Waanders, Bart Gustaaf; McKenna, Sean Andrew

    2013-09-01

    The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, scalable inversion algorithms and the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels and covariance structures derived from easily-observed proxies of human activity. In doing so, we constructed a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of Gaussian assumptions inherent in them. We find that the assumption does not impact the estimates of mean ffCO2 source strengths appreciably, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study if the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion, e.g., CO. We find that this is possible during the winter months, though the errors can be as large as 50%.
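
    The sparse-reconstruction idea can be sketched as an l1-regularized inversion of an underdetermined linear system; the transport matrix, sparsity pattern, and noise level below are synthetic stand-ins for the real footprints and observations.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    # Sketch of a sparse-reconstruction inversion for an underdetermined problem:
    # far fewer concentration measurements than emission grid cells. The transport
    # operator and emission field are random stand-ins for the real footprints.
    rng = np.random.default_rng(0)
    n_obs, n_cells = 60, 500

    true_emissions = np.zeros(n_cells)
    true_emissions[rng.choice(n_cells, 25, replace=False)] = rng.uniform(1, 5, 25)  # sparse sources

    H = rng.normal(0, 1, (n_obs, n_cells)) / np.sqrt(n_obs)  # stand-in transport/footprint matrix
    y = H @ true_emissions + rng.normal(0, 0.05, n_obs)      # noisy concentration data

    estimate = Lasso(alpha=0.01, positive=True, max_iter=50000).fit(H, y).coef_
    print("nonzero cells recovered:", np.sum(estimate > 1e-3), "of", np.sum(true_emissions > 0))
    ```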

  4. Towards an Optimal Gradient-dependent Energy Functional of the PZ-SIC Form

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jónsson, Elvar Örn; Lehtola, Susi; Jónsson, Hannes

    2015-06-01

    Results of Perdew–Zunger self-interaction corrected (PZ-SIC) density functional theory calculations of the atomization energy of 35 molecules are compared to those of high-level quantum chemistry calculations. While the PBE functional, which is commonly used in calculations of condensed matter, is known to predict on average too high atomization energy (overbinding of the molecules), the application of PZ-SIC gives a large overcorrection and leads to significant underestimation of the atomization energy. The exchange enhancement factor that is optimal for the generalized gradient approximation within the Kohn-Sham (KS) approach may not be optimal for the self-interaction corrected functional. The PBEsol functional, where the exchange enhancement factor was optimized for solids, gives poor results for molecules in KS but turns out to work better than PBE in PZ-SIC calculations. The exchange enhancement is weaker in PBEsol and the functional is closer to the local density approximation. Furthermore, the drop in the exchange enhancement factor for increasing reduced gradient in the PW91 functional gives more accurate results than the plateaued enhancement in the PBE functional. A step towards an optimal exchange enhancement factor for a gradient dependent functional of the PZ-SIC form is taken by constructing an exchange enhancement factor that mimics PBEsol for small values of the reduced gradient, and PW91 for large values. The average atomization energy is then in closer agreement with the high-level quantum chemistry calculations, but the variance is still large, the F2 molecule being a notable outlier.

  5. Non-Gaussianity and CMB aberration and Doppler

    SciTech Connect (OSTI)

    Catena, Riccardo; Liguori, Michele; Renzi, Alessandro; Notari, Alessio E-mail: michele.liguori@pd.infn.it E-mail: arenzi@pd.infn.it

    2013-09-01

    The peculiar motion of an observer with respect to the CMB rest frame induces a deflection in the arrival direction of the observed photons (also known as CMB aberration) and a Doppler shift in the measured photon frequencies. As a consequence, aberration and Doppler effects induce nontrivial correlations between the harmonic coefficients of the observed CMB temperature maps. In this paper we investigate whether these correlations generate a bias on non-Gaussianity estimators f{sub NL}. We perform this analysis simulating a large number of temperature maps with Planck-like resolution (lmax = 2000) as different realizations of the same cosmological fiducial model (WMAP7yr). We then add to these maps aberration and Doppler effects employing a modified version of the HEALPix code. We finally evaluate a generalization of the Komatsu, Spergel and Wandelt non-Gaussianity estimator for all the simulated maps, both when peculiar velocity effects have been considered and when these phenomena have been neglected. Using the value v/c = 1.23 × 10{sup −3} for our peculiar velocity, we found that the aberration/Doppler induced non-Gaussian signal is at most about half of the cosmic variance σ for f{sub NL} both in a full-sky and in a cut-sky experimental configuration, for local, equilateral and orthogonal estimators. We conclude therefore that when estimating f{sub NL} it is safe to ignore aberration and Doppler effects if the primordial map is already Gaussian. More work is necessary however to assess whether a map which contains non-Gaussianity can be significantly distorted by a peculiar velocity.
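
    For reference, the deflection referred to as CMB aberration follows the standard relativistic aberration relation cos θ' = (cos θ + β)/(1 + β cos θ); the short sketch below evaluates it for the quoted peculiar velocity (this is the textbook formula, not code taken from the paper).

    ```python
    import numpy as np

    # Standard relativistic aberration of the photon arrival direction for an
    # observer moving with speed beta = v/c; the peak deflection is roughly beta
    # radians (about 4 arcminutes for beta = 1.23e-3), occurring near theta = 90 deg.
    beta = 1.23e-3
    theta = np.linspace(0.0, np.pi, 1801)                     # angle from the velocity direction
    theta_obs = np.arccos((np.cos(theta) + beta) / (1.0 + beta * np.cos(theta)))

    deflection_arcmin = np.degrees(theta - theta_obs) * 60.0
    print("maximum deflection: %.2f arcmin" % deflection_arcmin.max())
    ```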

  6. Proposed first-generation WSQ bit allocation procedure

    SciTech Connect (OSTI)

    Bradley, J.N.; Brislawn, C.M.

    1993-09-08

    The Wavelet/Scalar Quantization (WSQ) gray-scale fingerprint image compression algorithm involves a symmetric wavelet transform (SWT) image decomposition followed by uniform scalar quantization of each subband. The algorithm is adaptive insofar as the bin widths for the scalar quantizers are image-specific and are included in the compressed image format. Since the decoder requires only the actual bin width values -- but not the method by which they were computed -- the standard allows for future refinements of the WSQ algorithm by improving the method used to select the scalar quantizer bin widths. This report proposes a bit allocation procedure for use with the first-generation WSQ encoder. In previous work a specific formula is provided for the relative sizes of the scalar quantizer bin widths in terms of the variances of the SWT subbands. An explicit specification for the constant of proportionality, q, that determines the absolute bin widths was not given. The actual compression ratio produced by the WSQ algorithm will generally vary from image to image depending on the amount of coding gain obtained by the run-length and Huffman coding stages of the algorithm, but testing performed by the FBI established that WSQ compression produces archival quality images at compression ratios of around 20 to 1. The bit allocation procedure described in this report possesses a control parameter, r, that can be set by the user to achieve a predetermined amount of lossy compression, effectively giving the user control over the amount of distortion introduced by quantization noise. The variability observed in final compression ratios is thus due only to differences in lossless coding gain from image to image, chiefly a result of the varying amounts of blank background surrounding the print area in the images. Experimental results are presented that demonstrate the proposed method's effectiveness.

  7. Study of different levels of nitrogen fertilizer and plant density on yield and yield components of the corn variety K.S.C 704 in the dry region of Sistan

    SciTech Connect (OSTI)

    Dahmardeh, M.; Forghani, F.; Khammari, E.

    2008-01-30

    Corn is one of the three most important grains of the world; it was domesticated about 7 to 10 thousand years ago in southern Mexico. In 1995, the area under corn cultivation in the world was 130 million ha, and total world corn production was 507 million tons. In that year, the average corn yield among producer countries was 7.78 and 7.60 t/ha in France and the United States, respectively, compared with 2.36 and 2.20 t/ha in Brazil and Mexico. With this in mind, a study was arranged to determine the suitable nitrogen fertilizer level and the suitable plant density per hectare for cultivation of the corn variety K.S.C 704. The project was carried out under the climatic conditions of the Sistan region using a complete block design with 3 replications. The experiment was arranged as a split plot: the main plots received 4 nitrogen fertilizer levels (200, 250, 300, and 350 kg/ha) and the subplots received 3 plant densities (111,000, 83,000, and 66,000 plants/ha). Data were recorded for each treatment from the growth stages up to harvest. After harvest, analysis of variance was performed and treatment means were compared by Duncan's method. The results showed that the measured yield components and seed yield were affected by the different fertilizer and density levels: thousand-seed weight and seed yield increased with increasing fertilizer, and the high-density treatments showed highly significant differences among each other. Under the climatic conditions of the Sistan region, if enough water is available, applying 350 kg/ha of nitrogen fertilizer at a density of 111,000 plants/ha can produce suitable seed and biological yields.

  8. Maximum Diameter Measurements of Aortic Aneurysms on Axial CT Images After Endovascular Aneurysm Repair: Sufficient for Follow-up?

    SciTech Connect (OSTI)

    Baumueller, Stephan; Nguyen, Thi Dan Linh; Goetti, Robert Paul; Lachat, Mario; Seifert, Burkhardt; Pfammatter, Thomas; Frauenfelder, Thomas

    2011-12-15

    Purpose: To assess the accuracy of maximum diameter measurements of aortic aneurysms after endovascular aneurysm repair (EVAR) on axial computed tomographic (CT) images in comparison to maximum diameter measurements perpendicular to the intravascular centerline for follow-up by using three-dimensional (3D) volume measurements as the reference standard. Materials and Methods: Forty-nine consecutive patients (73 {+-} 7.5 years, range 51-88 years), who underwent EVAR of an infrarenal aortic aneurysm were retrospectively included. Two blinded readers twice independently measured the maximum aneurysm diameter on axial CT images performed at discharge, and at 1 and 2 years after intervention. The maximum diameter perpendicular to the centerline was automatically measured. Volumes of the aortic aneurysms were calculated by dedicated semiautomated 3D segmentation software (3surgery, 3mensio, the Netherlands). Changes in diameter of 0.5 cm and in volume of 10% were considered clinically significant. Intra- and interobserver agreements were calculated by intraclass correlations (ICC) in a random effects analysis of variance. The two unidimensional measurement methods were correlated to the reference standard. Results: Intra- and interobserver agreements for maximum aneurysm diameter measurements were excellent (ICC = 0.98 and ICC = 0.96, respectively). There was an excellent correlation between maximum aneurysm diameters measured on axial CT images and 3D volume measurements (r = 0.93, P < 0.001) as well as between maximum diameter measurements perpendicular to the centerline and 3D volume measurements (r = 0.93, P < 0.001). Conclusion: Measurements of maximum aneurysm diameters on axial CT images are an accurate, reliable, and robust method for follow-up after EVAR and can be used in daily routine.

  9. Investment in different sized SMRs: Economic evaluation of stochastic scenarios by INCAS code

    SciTech Connect (OSTI)

    Barenghi, S.; Boarin, S.; Ricotti, M. E.

    2012-07-01

    Small Modular LWR concepts are being developed and proposed to investors worldwide. They capitalize on the operating track record of GEN II LWRs, while introducing innovative design enhancements allowed by the smaller size, and additional benefits from the higher degree of modularization and from deployment of multiple units on the same site (i.e., the 'Economy of Multiple' paradigm). Nevertheless, Small Modular Reactors suffer a dis-economy of scale that represents a relevant penalty on a capital-intensive investment. Investors in the nuclear power generation industry face a very high financial risk, due to high capital commitment and exceptionally long pay-back time. Investment risk arises from uncertainty that affects scenario conditions over such a long time horizon. Risk aversion is increased by current adverse conditions of financial markets and the general economic downturn, as is the case nowadays. This work investigates both the profitability and the risk of alternative investments in a single Large Reactor or in multiple SMRs of different sizes, drawing information from the stochastic distribution of the project's Internal Rate of Return; the SMR case considers deployment of multiple units on a single site with total installed power equivalent to that of the single LR. Uncertain scenario conditions and stochastic input assumptions are included in the analysis, representing investment uncertainty and risk. Results show that, despite the combination of a much larger number of stochastic variables in SMR fleets, the uncertainty of project profitability is not increased compared to the LR: SMRs have features able to smooth the IRR variance and control investment risk. Despite the dis-economy of scale, SMRs represent a limited capital commitment and a scalable investment option that meets investors' interest, even in developed and mature markets, which are the traditional marketplace for LRs. (authors)
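
    A toy Monte Carlo of a stochastic IRR distribution, illustrating the kind of output such a comparison is based on; the cost, price, and cash-flow assumptions below are invented for the sketch and are not the INCAS inputs.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Toy Monte Carlo of a project's Internal Rate of Return under uncertain
    # overnight cost and electricity price. Inputs are illustrative assumptions;
    # the point is the stochastic IRR distribution, not the values themselves.
    rng = np.random.default_rng(0)

    def irr(cash_flows):
        npv = lambda r: sum(cf / (1.0 + r) ** t for t, cf in enumerate(cash_flows))
        return brentq(npv, -0.5, 1.0)

    irr_samples = []
    for _ in range(2000):
        capex = rng.normal(4000.0, 600.0)        # $/kW overnight cost (assumed)
        price = rng.normal(70.0, 10.0)           # $/MWh electricity price (assumed)
        annual_net = price * 8.0 - 150.0         # crude net cash flow per kW-year (assumed)
        flows = [-capex / 5.0] * 5 + [annual_net] * 40   # 5-year build, 40-year operation
        irr_samples.append(irr(flows))

    irr_samples = np.array(irr_samples)
    print("mean IRR: %.1f%%  std: %.1f%%" % (100 * irr_samples.mean(), 100 * irr_samples.std()))
    ```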

  10. Submicron particle mass concentrations and sources in the Amazonian wet season (AMAZE-08)

    SciTech Connect (OSTI)

    Chen, Q.; Farmer, D. K.; Rizzo, L. V.; Pauliqueivis, T.; Kuwata, Mikinori; Karl, Thomas G.; Guenther, Alex B.; Allan, James D.; Coe, H.; Andreae, M. O.; Poeschl, U.; Jiminez, J. L.; Artaxo, Paulo; Martin, Scot T.

    2015-01-01

    Real-time mass spectra of non-refractory component of submicron aerosol particles were recorded in a tropical rainforest in the central Amazon basin during the wet season of 2008, as a part of the Amazonian Aerosol Characterization Experiment (AMAZE-08). Organic components accounted on average for more than 80% of the non-refractory submicron particle mass concentrations during the period of measurements. Ammonium was present in sufficient quantities to halfway neutralize sulfate. In this acidic, isoprene-dominated, low-NOx environment the high-resolution mass spectra as well as mass closures with ion chromatography measurements did not provide evidence for significant contributions of organosulfate species, at least at concentrations above uncertainty levels. Positive-matrix factorization of the time series of particle mass spectra identified four statistical factors to account for the variance of the signal intensities of the organic constituents: a factor HOA having a hydrocarbon-like signature and identified as regional emissions of primary organic material, a factor OOA-1 associated with fresh production of secondary organic material by a mechanism of BVOC oxidation followed by gas-to-particle conversion, a factor OOA-2 consistent with reactive uptake of isoprene oxidation products, especially epoxydiols by acidic particles, and a factor OOA-3 associated with long range transport and atmospheric aging. The OOA-1, -2, and -3 factors had progressively more oxidized signatures. Diameter-resolved mass spectral markers also suggested enhanced reactive uptake of isoprene oxidation products to the accumulation mode for the OOA-2 factor, and such size partitioning can be indicative of in-cloud process. The campaign-average factor loadings were in a ratio of 1.1:1.0 for the OOA-1 compared to the OOA-2 pathway, suggesting the comparable importance of gas-phase compared to particle-phase (including cloud waters) production pathways of secondary organic material during the study period.
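
    As a rough illustration of the factorization step, the sketch below applies scikit-learn's NMF to a synthetic time-by-m/z matrix; NMF is used here as a simple non-negative stand-in for positive matrix factorization, which additionally weights residuals by measurement error.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Sketch of factor analysis of an AMS-style data matrix (times x m/z channels).
    # Data and factor count are synthetic; four factors echo the HOA/OOA split above.
    rng = np.random.default_rng(0)
    n_times, n_mz = 500, 120

    profiles = rng.random((4, n_mz))                  # four synthetic factor mass spectra
    contributions = rng.random((n_times, 4)) ** 2     # time series of factor loadings
    X = contributions @ profiles + 0.01 * rng.random((n_times, n_mz))

    model = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
    W = model.fit_transform(X)    # factor time series
    H = model.components_         # factor mass spectra
    print("relative reconstruction error:", model.reconstruction_err_ / np.linalg.norm(X))
    ```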

  11. Unmanned airborne vehicle (UAV): Flight testing and evaluation of two-channel E-field very low frequency (VLF) instrument

    SciTech Connect (OSTI)

    1998-12-01

    Using VLF frequencies, transmitted by the Navy's network, for airborne remote sensing of the earth's electrical and magnetic characteristics was first considered by the United States Geological Survey (USGS) around the mid 1970s. The first VLF system was designed and developed by the USGS for installation and operation on a single engine, fixed wing aircraft used by the Branch of Geophysics for geophysical surveying. The system consisted of five channels. Two E-field channels had sensors consisting of a fixed vertical loaded dipole antenna with pre-amp mounted on top of the fuselage and a gyro stabilized horizontal loaded dipole antenna with pre-amp mounted on a tail boom. The three channel magnetic sensor consisted of three orthogonal coils mounted on the same gyro stabilized platform as the horizontal E-field antenna. The main features of the VLF receiver were: narrow band-width frequency selection using crystal filters, phase shifters for zeroing out system phase variances, phase-lock loops for generating real and quadrature gates, and synchronous detectors for generating real and quadrature outputs. In the mid 1990s the Branch of Geophysics designed and developed a two-channel E-field ground portable VLF system. The system was built using state-of-the-art circuit components and new concepts in circuit architecture. Small size, light weight, low power, durability, and reliability were key considerations in the design of the instrument. The primary purpose of the instrument was for collecting VLF data during ground surveys over small grid areas. Later the system was modified for installation on an Unmanned Airborne Vehicle (UAV). A series of three field trips were made to Easton, Maryland for testing and evaluating the system performance.

  12. The U. S. transportation sector in the year 2030: results of a two-part Delphi survey.

    SciTech Connect (OSTI)

    Morrison, G.; Stephens, T.S.

    2011-10-11

    A two-part Delphi Survey was given to transportation experts attending the Asilomar Conference on Transportation and Energy in August 2011. The survey asked respondents about trends in the US transportation sector in 2030. Topics included: alternative vehicles, high speed rail construction, rail freight transportation, average vehicle miles traveled, truck versus passenger car shares, vehicle fuel economy, and biofuels in different modes. The survey consisted of two rounds -- both asked the same set of seven questions. In the first round, respondents were given a short introductory paragraph about the topic and asked to use their own judgment in their responses. In the second round, the respondents were asked the same questions, but were also given results from the first round as guidance. The survey was sponsored by Argonne National Lab (ANL) and the National Renewable Energy Lab (NREL), and implemented by the University of California at Davis, Institute of Transportation Studies. The survey was part of the larger Transportation Energy Futures (TEF) project run by the Department of Energy, Office of Energy Efficiency and Renewable Energy. Of the 206 invitation letters sent, 94 recipients answered all questions in the first round (105 answered at least one question), and 23 of those answered all questions in the second round. 10 of the 23 second-round responses were collected at a discussion session at Asilomar, while the remainder were online. Means and standard deviations of responses from Round One and Two are given in Table 1 below. One main purpose of Delphi surveys is to reduce the variance in opinions through successive rounds of questioning. As shown in Table 1, the standard deviations of 25 of the 30 individual sub-questions decreased between Round One and Round Two, but the decrease was slight in most cases.

  13. A multifactor analysis of fungal and bacterial community structure of the root microbiome of mature Populus deltoides trees

    SciTech Connect (OSTI)

    Shakya, Migun; Gottel, Neil R; Castro Gonzalez, Hector F; Yang, Zamin; Gunter, Lee E; Labbe, Jessy L; Muchero, Wellington; Bonito, Gregory; Vilgalys, Rytas; Tuskan, Gerald A; Podar, Mircea; Schadt, Christopher Warren

    2013-01-01

    Bacterial and fungal communities associated with plant roots are central to host health, survival, and growth. However, a robust understanding of the root microbiome and the factors that drive host-associated microbial community structure has remained elusive, especially in mature perennial plants from natural settings. Here, we investigated relationships of bacterial and fungal communities in the rhizosphere and root endosphere of the riparian tree species Populus deltoides, and the influence of soil parameters, environmental properties (host phenotype and aboveground environmental settings), host plant genotype (Simple Sequence Repeat (SSR) markers), season (Spring vs. Fall) and geographic setting (at scales from regional watersheds to local riparian zones) on microbial community structure. Each of the trees sampled displayed unique aspects of its associated community structure, with high numbers of Operational Taxonomic Units (OTUs) specific to individual trees (bacteria >90%, fungi >60%). Over the diverse conditions surveyed only a small number of OTUs were common to all samples within rhizosphere (35 bacterial and 4 fungal) and endosphere (1 bacterial and 1 fungal) microbiomes. As expected, Proteobacteria and Ascomycota were dominant in root communities (>50%) while other higher-level phylogenetic groups (Chytridiomycota, Acidobacteria) displayed greatly reduced abundance in the endosphere compared to the rhizosphere. Variance partitioning partially explained differences in microbiome composition between all sampled roots on the basis of seasonal and soil properties (4% to 23%). While most variation remains unattributed, we observed significant differences in the microbiota between watersheds (Tennessee vs. North Carolina) and seasons (Spring vs. Fall). SSR markers clearly delineated two host populations associated with the samples taken in TN vs. NC, but overall genotypic distances did not have a significant effect on corresponding communities that could be separated from other measured effects.

  14. Effect of process variables on the density and durability of the pellets made from high moisture corn stover

    SciTech Connect (OSTI)

    Jaya Shankar Tumuluru

    2014-03-01

    A flat die pellet mill was used to understand the effect of high levels of feedstock moisture content in the range of 28–38% (w.b.), with die rotational speeds of 40–60 Hz, and preheating temperatures of 30–110 °C on the pelleting characteristics of 4.8 mm screen size ground corn stover using an 8 mm pellet die. The physical properties of the pelletised biomass studied are: (a) pellet moisture content, (b) unit, bulk and tapped density, and (c) durability. Pelletisation experiments were conducted based on a central composite design. Analysis of variance (ANOVA) indicated that feedstock moisture content influenced all of the physical properties at P < 0.001. Pellet moisture content decreased with increasing preheating temperature to about 110 °C and decreasing feedstock moisture content to about 28% (w.b.). Response surface models developed for the quality attributes with respect to the process variables adequately described the process, with coefficient of determination (R2) values of >0.88. The other pellet quality attributes, such as unit, bulk, and tapped density, were maximised at a feedstock moisture content of 30–33% (w.b.), die speeds of >50 Hz and preheating temperatures of >90 °C. In the case of durability, a medium moisture content of 33–34% (w.b.), preheating temperatures of >70 °C, and higher die speeds (>50 Hz) resulted in highly durable pellets. It can be concluded from the present study that feedstock moisture content, followed by preheating and die rotational speed, are the interacting process variables influencing pellet moisture content, unit, bulk and tapped density, and durability.

  15. Towards an Optimal Gradient-dependent Energy Functional of the PZ-SIC Form

    SciTech Connect (OSTI)

    Jónsson, Elvar Örn; Lehtola, Susi; Jónsson, Hannes

    2015-06-01

    Results of Perdew–Zunger self-interaction corrected (PZ-SIC) density functional theory calculations of the atomization energy of 35 molecules are compared to those of high-level quantum chemistry calculations. While the PBE functional, which is commonly used in calculations of condensed matter, is known to predict on average too high atomization energy (overbinding of the molecules), the application of PZ-SIC gives a large overcorrection and leads to significant underestimation of the atomization energy. The exchange enhancement factor that is optimal for the generalized gradient approximation within the Kohn-Sham (KS) approach may not be optimal for the self-interaction corrected functional. The PBEsol functional, where the exchange enhancement factor was optimized for solids, gives poor results for molecules in KS but turns out to work better than PBE in PZ-SIC calculations. The exchange enhancement is weaker in PBEsol and the functional is closer to the local density approximation. Furthermore, the drop in the exchange enhancement factor for increasing reduced gradient in the PW91 functional gives more accurate results than the plateaued enhancement in the PBE functional. A step towards an optimal exchange enhancement factor for a gradient dependent functional of the PZ-SIC form is taken by constructing an exchange enhancement factor that mimics PBEsol for small values of the reduced gradient, and PW91 for large values. The average atomization energy is then in closer agreement with the high-level quantum chemistry calculations, but the variance is still large, the F2 molecule being a notable outlier.

  16. Noise suppression in reconstruction of low-Z target megavoltage cone-beam CT images

    SciTech Connect (OSTI)

    Wang Jing; Robar, James; Guan Huaiqun

    2012-08-15

    Purpose: To improve the image contrast-to-noise ratio (CNR) for low-Z target megavoltage cone-beam CT (MV CBCT) using a statistical projection noise suppression algorithm based on the penalized weighted least-squares (PWLS) criterion. Methods: Projection images of a contrast phantom, a CatPhan® 600 phantom and a head phantom were acquired by a Varian 2100EX LINAC with a low-Z (Al) target and a low-energy x-ray beam (2.5 MeV) at a low-dose level and at a high-dose level. The projections were then processed by minimizing the PWLS objective function. The weighted least-squares (WLS) term models the noise of the measured projection, and the penalty term enforces the smoothing constraints on the projection image. The variance of the projection data was chosen as the weight for the PWLS objective function and determines the contribution of each measurement. An anisotropic quadratic form penalty that incorporates the gradient information of the projection image was used to preserve edges during noise reduction. Low-Z target MV CBCT images were reconstructed by the FDK algorithm after each projection was processed by the PWLS smoothing. Results: Noise in low-Z target MV CBCT images was greatly suppressed after the PWLS projection smoothing, without noticeable sacrifice of the spatial resolution. Depending on the choice of smoothing parameter, the CNR of selected regions of interest in the PWLS-processed low-dose low-Z target MV CBCT image can be higher than in the corresponding high-dose image. Conclusion: The CNR of low-Z target MV CBCT images was substantially improved by using PWLS projection smoothing. The PWLS projection smoothing algorithm allows the reconstruction of high-contrast low-Z target MV CBCT images with a total dose of as low as 2.3 cGy.
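
    A minimal sketch of PWLS-style projection smoothing on a synthetic image: a reliability-weighted data-fidelity term plus a quadratic neighbor-difference penalty with edge-dependent weights, minimized by plain gradient descent. The specific weighting and penalty forms below are simplified assumptions, not the paper's exact objective.

    ```python
    import numpy as np

    # PWLS-style smoothing sketch: weighted data fidelity + quadratic neighbor
    # penalty, with heuristic edge weights standing in for the anisotropic penalty.
    rng = np.random.default_rng(0)
    n = 64
    clean = np.outer(np.hanning(n), np.hanning(n))      # stand-in projection image
    variance = 0.002 * (clean + 0.1)                    # noise variance grows with signal
    noisy = clean + rng.normal(0, np.sqrt(variance))

    wts = variance.min() / variance                     # relative reliability (inverse-variance) weights
    beta, step = 0.3, 0.2
    p = noisy.copy()
    for _ in range(300):
        grad_fid = wts * (p - noisy)                    # weighted least-squares gradient
        grad_pen = np.zeros_like(p)
        for axis in (0, 1):                             # quadratic penalty on neighbor differences
            d = np.diff(p, axis=axis)
            w_edge = np.exp(-(d / 0.2) ** 2)            # weaken smoothing across strong edges
            pad_hi = [(0, 0), (0, 0)]; pad_hi[axis] = (1, 0)
            pad_lo = [(0, 0), (0, 0)]; pad_lo[axis] = (0, 1)
            grad_pen += np.pad(w_edge * d, pad_hi) - np.pad(w_edge * d, pad_lo)
        p -= step * (grad_fid + beta * grad_pen)

    print("residual noise std before/after:", np.std(noisy - clean), np.std(p - clean))
    ```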

  17. PRIMUS: Galaxy clustering as a function of luminosity and color at 0.2 < z < 1

    SciTech Connect (OSTI)

    Skibba, Ramin A.; Smith, M. Stephen M.; Coil, Alison L.; Mendez, Alexander J.; Moustakas, John; Aird, James; Blanton, Michael R.; Bray, Aaron D.; Eisenstein, Daniel J.; Cool, Richard J.; Wong, Kenneth C.; Zhu, Guangtun

    2014-04-01

    We present measurements of the luminosity and color dependence of galaxy clustering at 0.2 < z < 1.0 in the Prism Multi-object Survey. We quantify the clustering with the redshift-space and projected two-point correlation functions, ξ(r{sub p}, π) and w{sub p}(r{sub p}), using volume-limited samples constructed from a parent sample of over ~130,000 galaxies with robust redshifts in seven independent fields covering 9 deg{sup 2} of sky. We quantify how the scale-dependent clustering amplitude increases with increasing luminosity and redder color, with relatively small errors over large volumes. We find that red galaxies have stronger small-scale (0.1 h{sup -1} Mpc < r{sub p} < 1 h{sup -1} Mpc) clustering and steeper correlation functions compared to blue galaxies, as well as strong color-dependent clustering within the red sequence alone. We interpret our measured clustering trends in terms of galaxy bias and obtain values of b{sub gal} ~ 0.9-2.5, quantifying how galaxies are biased tracers of dark matter depending on their luminosity and color. We also interpret the color dependence with mock catalogs, and find that the clustering of blue galaxies is nearly constant with color, while redder galaxies have stronger clustering in the one-halo term due to a higher satellite galaxy fraction. In addition, we measure the evolution of the clustering strength and bias, and we do not detect statistically significant departures from passive evolution. We argue that the luminosity- and color-environment (or halo mass) relations of galaxies have not significantly evolved since z ~ 1. Finally, using jackknife subsampling methods, we find that sampling fluctuations are important and that the COSMOS field is generally an outlier, due to having more overdense structures than other fields; we find that 'cosmic variance' can be a significant source of uncertainty for high-redshift clustering measurements.
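
    An illustrative sketch, not the PRIMUS pipeline, of how the projected correlation function is obtained from the redshift-space correlation function: w{sub p}(r{sub p}) = 2 ∫ ξ(r{sub p}, π) dπ integrated along the line of sight out to a chosen π{sub max}. The power-law ξ used for the toy input is an assumption.

      # Sketch: integrate xi(r_p, pi) over line-of-sight bins to get w_p(r_p).
      import numpy as np

      def projected_wp(xi_grid, pi_edges):
          """xi_grid: shape (n_rp, n_pi), xi measured in (r_p, pi) bins.
             pi_edges: line-of-sight bin edges, length n_pi + 1."""
          dpi = np.diff(pi_edges)                 # width of each pi bin
          return 2.0 * np.sum(xi_grid * dpi, axis=1)

      # toy usage with a power-law xi(r) = (r/r0)^(-gamma), r0 = 5, gamma = 1.8
      rp = np.logspace(-1, 1.3, 20)               # 0.1 - 20 h^-1 Mpc
      pi_edges = np.linspace(0.0, 40.0, 41)
      pi_mid = 0.5 * (pi_edges[:-1] + pi_edges[1:])
      r = np.sqrt(rp[:, None]**2 + pi_mid[None, :]**2)
      xi = (r / 5.0) ** (-1.8)
      print(projected_wp(xi, pi_edges)[:5])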

  18. Comparative soil CO2 flux measurements and geostatistical estimation methods on Masaya volcano, Nicaragua

    SciTech Connect (OSTI)

    Lewicki, J.L.; Bergfeld, D.; Cardellini, C.; Chiodini, G.; Granieri, D.; Varley, N.; Werner, C.

    2004-04-27

    We present a comparative study of soil CO{sub 2} flux (F{sub CO2}) measured by five groups (Groups 1-5) at the IAVCEI-CCVG Eighth Workshop on Volcanic Gases on Masaya volcano, Nicaragua. Groups 1-5 measured F{sub CO2} using the accumulation chamber method at 5-m spacing within a 900 m{sup 2} grid during a morning (AM) period. These measurements were repeated by Groups 1-3 during an afternoon (PM) period. All measured F{sub CO2} ranged from 218 to 14,719 g m{sup -2}d{sup -1}. Arithmetic means and associated CO{sub 2} emission rate estimates for the AM data sets varied between groups by {+-}22%. The variability of the five measurements made at each grid point ranged from {+-}5 to 167% and increased with the arithmetic mean. Based on a comparison of measurements made by Groups 1-3 during AM and PM times, this variability is likely due in large part to natural temporal variability of gas flow, rather than to measurement error. We compared six geostatistical methods (arithmetic and minimum variance unbiased estimator means of uninterpolated data, and arithmetic means of data interpolated by the multiquadric radial basis function, ordinary kriging, multi-Gaussian kriging, and sequential Gaussian simulation methods) to estimate the mean and associated CO{sub 2} emission rate of one data set and to map the spatial F{sub CO2} distribution. While the CO{sub 2} emission rates estimated using the different techniques only varied by {+-}1.1%, the F{sub CO2} maps showed important differences. We suggest that the sequential Gaussian simulation method yields the most realistic representation of the spatial distribution of F{sub CO2} and is most appropriate for volcano monitoring applications.
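
    A toy sketch, not the workshop data set, of two of the six estimation approaches compared above: the arithmetic mean of the gridded fluxes and the mean of a multiquadric radial-basis-function interpolation, each converted to an emission rate over the 900 m{sup 2} grid. The synthetic flux field and grid spacing are assumptions.

      # Sketch: emission-rate estimates from gridded accumulation-chamber fluxes.
      import numpy as np
      from scipy.interpolate import Rbf   # classic multiquadric RBF interface

      rng = np.random.default_rng(1)
      x, y = np.meshgrid(np.arange(0, 30, 5.0), np.arange(0, 30, 5.0))  # 5 m grid
      flux = np.exp(rng.normal(6.5, 0.8, x.shape))                      # g m^-2 d^-1 (lognormal-ish)

      area_m2 = 900.0
      mean_arith = flux.mean()

      rbf = Rbf(x.ravel(), y.ravel(), flux.ravel(), function='multiquadric')
      xf, yf = np.meshgrid(np.linspace(0, 25, 101), np.linspace(0, 25, 101))
      mean_rbf = rbf(xf, yf).mean()

      for label, m in [("arithmetic", mean_arith), ("multiquadric RBF", mean_rbf)]:
          print(f"{label:18s} mean flux = {m:8.1f} g m^-2 d^-1, "
                f"emission = {m * area_m2 / 1000:6.1f} kg d^-1")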

  19. Impact of process conditions on the density and durability of wheat, oat, canola, and barley straw briquettes

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Tumuluru, J. S.; Tabil, L. G.; Song, Y.; Iroba, K. L.; Meda, V.

    2014-10-01

    The present study aims to understand the impact of process conditions on the quality attributes of wheat, oat, barley, and canola straw briquettes. Analysis of variance indicated that briquette moisture content and initial density immediately after compaction and final density after 2 weeks of storage are strong functions of feedstock moisture content and compression pressure, whereas durability rating is influenced by die temperature and feedstock moisture content. Briquettes produced at a low feedstock moisture content of 9 % (w.b.) yielded maximum densities >700 kg/m3 for wheat, oat, canola, and barley straws. Lower feedstock moisture content of <10 % (w.b.) and higher die temperatures >110 °C and compression pressure >10 MPa minimized the briquette moisture content and maximized densities and durability rating based on surface plot observations. Optimal process conditions indicated that a low feedstock moisture content of about 9 % (w.b.), high die temperature of 120–130 °C, medium-to-large hammer mill screen sizes of about 24 to 31.75 mm, and low to high compression pressures of 7.5 to 12.5 MPa minimized briquette moisture content to <8 % (w.b.) and maximized density to >700 kg/m3. Durability rating >90 % is achievable at higher die temperatures of >123 °C, lower to medium feedstock moisture contents of 9 to 12 % (w.b.), low to high compression pressures of 7.5 to 12.5 MPa, and large hammer mill screen size of 31.75 mm, except for canola where a lower compression pressure of 7.5 to 8.5 MPa and a smaller hammer mill screen size of 19 mm for oat maximized the durability rating values.
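
    A minimal sketch, on synthetic data with hypothetical column names, of the kind of analysis of variance used above: testing whether briquette density depends on feedstock moisture content and compression pressure. It is not the authors' statistical design.

      # Sketch: two-factor ANOVA of briquette density on moisture and pressure.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.formula.api import ols

      rng = np.random.default_rng(42)
      moisture = np.repeat([9, 12, 15], 12)                   # % w.b.
      pressure = np.tile(np.repeat([7.5, 10.0, 12.5], 4), 3)  # MPa
      density = 750 - 12 * (moisture - 9) + 8 * (pressure - 7.5) + rng.normal(0, 15, 36)

      df = pd.DataFrame({"moisture": moisture, "pressure": pressure, "density": density})
      model = ols("density ~ C(moisture) + C(pressure)", data=df).fit()
      print(sm.stats.anova_lm(model, typ=2))   # F-tests for each factor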

  20. The North Carolina Field Test

    SciTech Connect (OSTI)

    Sharp, T.R.; Ternes, M.P.

    1990-08-01

    The North Carolina Field Test will test the effectiveness of two weatherization approaches: the current North Carolina Low-Income Weatherization Assistance Program and the North Carolina Field Test Audit. The Field Test Audit will differ from North Carolina's current weatherization program in that it will incorporate new weatherization measures and techniques, a procedure for basing measure selection on the characteristics of the individual house and the cost-effectiveness of the measure, and an increased emphasis on cooling energy savings. The field test will determine how the two weatherization approaches differ in energy savings, cost-effectiveness, and ease of implementation. This Experimental Plan details the steps in performing the field test. The field test will be a group effort by several participating organizations. Pre- and post-weatherization data will be collected over a two-year period (November 1989 through August 1991). The 120 houses included in the test will be divided into a control group and two treatment groups (one for each weatherization procedure) of 40 houses each. Weekly energy use data will be collected for each house, representing whole-house electric, space heating and cooling, and water heating energy uses. Corresponding outdoor weather and house indoor temperature data will also be collected. The energy savings of each house will be determined using linear-regression-based models. To account for variations between the pre- and post-weatherization periods, house energy savings will be normalized for differences in outdoor weather conditions and indoor temperatures. Differences between the average energy savings of treatment groups will be identified using an analysis of variance approach. Differences between energy savings will be quantified using multiple comparison techniques. 9 refs., 8 figs., 5 tabs.
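
    A sketch of the analysis approach described above, using synthetic weekly data and hypothetical variable names: regress weekly heating energy on heating degree-days to normalize for weather, then compare group savings with a one-way analysis of variance. The actual field-test models and group sizes may differ.

      # Sketch: weather-normalized savings by regression, then ANOVA across groups.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)

      def weekly_use(hdd, base, slope, noise=5.0):
          return base + slope * hdd + rng.normal(0, noise, hdd.size)

      hdd = rng.uniform(0, 40, 40)                  # weekly heating degree-days
      pre = weekly_use(hdd, base=20, slope=3.0)     # pre-weatherization use (kWh)
      post = weekly_use(hdd, base=20, slope=2.4)    # post-weatherization use (kWh)

      # weather-normalized savings: difference of fitted models at a reference HDD
      pre_fit = np.polyfit(hdd, pre, 1)
      post_fit = np.polyfit(hdd, post, 1)
      ref_hdd = 25.0
      savings = np.polyval(pre_fit, ref_hdd) - np.polyval(post_fit, ref_hdd)
      print(f"normalized weekly savings at {ref_hdd} HDD: {savings:.1f} kWh")

      # compare savings across the control group and two treatment groups (toy numbers)
      control = rng.normal(0, 3, 40)
      audit = rng.normal(8, 3, 40)
      current = rng.normal(5, 3, 40)
      print(stats.f_oneway(control, audit, current))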

  1. Direct Numerical Simulation of Pore-Scale Flow in a Bead Pack: Comparison with Magnetic Resonance Imaging Observations

    SciTech Connect (OSTI)

    Yang, Xiaofan; Scheibe, Timothy D.; Richmond, Marshall C.; Perkins, William A.; Vogt, Sarah J.; Codd, Sarah L.; Seymour, Joseph D.; Mckinley, Matthew I.

    2013-04-01

    A significant body of current research is aimed at developing methods for numerical simulation of flow and transport in porous media that explicitly resolve complex pore and solid geometries, and at utilizing such models to study the relationships between fundamental pore-scale processes and macroscopic manifestations at larger (i.e., Darcy) scales. A number of different numerical methods for pore-scale simulation have been developed, and have been extensively tested and validated for simplified geometries. However, validation of pore-scale simulations of fluid velocity for complex, three-dimensional (3D) pore geometries that are representative of natural porous media is challenging due to our limited ability to measure pore-scale velocity in such systems. Recent advances in magnetic resonance imaging (MRI) offer the opportunity to measure not only the pore geometry, but also local fluid velocities under steady-state flow conditions in 3D and with high spatial resolution. In this paper, we present a 3D velocity field measured at sub-pore resolution (tens of micrometers) over a centimeter-scale 3D domain using MRI methods. We have utilized the measured pore geometry to perform 3D simulations of Navier-Stokes flow over the same domain using direct numerical simulation techniques. We present a comparison of the numerical simulation results with the measured velocity field. It is shown that the numerical results match the observed velocity patterns well overall, except for additional variance and a small systematic scaling that can be attributed to the known experimental error in the MRI measurements. The comparisons presented here provide strong validation of the pore-scale simulation methods and new insights for interpretation of uncertainty in MRI measurements of pore-scale velocity. This study also provides a potential benchmark for future comparison of other pore-scale simulation methods.
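
    An illustrative sketch, on synthetic arrays rather than the study's data, of a voxel-wise comparison in the spirit described above: fit a single systematic scaling factor between simulated and measured velocities and report the residual variance.

      # Sketch: least-squares scaling between simulated and "measured" velocities.
      import numpy as np

      rng = np.random.default_rng(3)
      v_sim = rng.gamma(2.0, 0.5, size=10000)                    # simulated voxel speeds (mm/s)
      v_mri = 1.05 * v_sim + rng.normal(0, 0.15, v_sim.size)     # measured: scaled + noisy

      # scaling factor a minimizing ||v_mri - a * v_sim||^2
      a = np.dot(v_sim, v_mri) / np.dot(v_sim, v_sim)
      residual = v_mri - a * v_sim
      print(f"systematic scaling: {a:.3f}")
      print(f"residual variance : {residual.var():.4f} (mm/s)^2")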

  2. Local structure and disorder in crystalline Pb{sub 9}Al{sub 8}O{sub 21}

    SciTech Connect (OSTI)

    Hannon, Alex C. Barney, Emma R.; Holland, Diane; Knight, Kevin S.

    2008-05-15

    Crystalline Pb{sub 9}Al{sub 8}O{sub 21} is a model compound for the structure of non-linear optical glasses containing lone-pair ions, and its structure has been investigated by neutron powder diffraction and total scattering, and {sup 27}Al magic angle spinning NMR. Rietveld analysis (space group Pa3-bar (No. 205), a = 13.25221(4) Å) shows that some of the Pb and O sites have partial occupancies, due to lead volatilisation during sample preparation, and the non-stoichiometric sample composition is Pb{sub 9-{delta}}Al{sub 8}O{sub 21-{delta}} with {delta}=0.54. The NMR measurements show evidence for a correlation between the chemical shift and the variance of the bond angles at the aluminium sites. The neutron total correlation function shows that the true average Al-O bond length is 0.8% longer than the apparent bond length determined by Rietveld refinement. The thermal variation in bond length is much smaller than the thermal variation in longer interatomic distances determined by Rietveld refinement. The total correlation function is consistent with an interpretation in which AlO{sub 3} groups with an Al-O bond length of 1.651 Å occur as a result of the oxygen vacancies in the structure. The width of the tetrahedral Al-O peak in the correlation function for the crystal is very similar to that for lead aluminate glass, indicating that the extent of static disorder is very similar in the two phases. - Graphical abstract: Combined neutron powder diffraction and total scattering, and {sup 27}Al NMR on crystalline Pb{sub 9}Al{sub 8}O{sub 21} shows it to be a non-stoichiometric compound with vacancies due to PbO volatilisation. A detailed consideration of the thermal and static disorder is given, showing that glass and crystal phases have very similar disorder at short range.

  3. A Multiphase Validation of Atlas-Based Automatic and Semiautomatic Segmentation Strategies for Prostate MRI

    SciTech Connect (OSTI)

    Martin, Spencer; Rodrigues, George; Department of Epidemiology Patil, Nikhilesh; Bauman, Glenn; Department of Radiation Oncology, London Regional Cancer Program, London ; D'Souza, David; Sexton, Tracy; Palma, David; Louie, Alexander V.; Khalvati, Farzad; Tizhoosh, Hamid R.; Segasist Technologies, Toronto, Ontario ; Gaede, Stewart

    2013-01-01

    Purpose: To perform a rigorous technological assessment and statistical validation of a software technology for anatomic delineations of the prostate on MRI datasets. Methods and Materials: A 3-phase validation strategy was used. Phase I consisted of anatomic atlas building using 100 prostate cancer MRI data sets to provide training data sets for the segmentation algorithms. In phase II, 2 experts contoured 15 new MRI prostate cancer cases using 3 approaches (manual, N points, and region of interest). In phase III, 5 new physicians with variable MRI prostate contouring experience segmented the same 15 phase II datasets using 3 approaches: manual, N points with no editing, and full autosegmentation with user editing allowed. Statistical analyses for time and accuracy (using the Dice similarity coefficient) endpoints used traditional descriptive statistics, analysis of variance, analysis of covariance, and pooled Student t test. Results: In phase I, average (SD) total and per-slice contouring time for the 2 physicians was 228 (75), 17 (3.5), 209 (65), and 15 seconds (3.9), respectively. In phase II, statistically significant differences in physician contouring time were observed based on physician, type of contouring, and case sequence. The N points strategy resulted in superior segmentation accuracy when initial autosegmented contours were compared with final contours. In phase III, statistically significant differences in contouring time were again observed based on physician, type of contouring, and case sequence. The average relative time savings for N points and autosegmentation were 49% and 27%, respectively, compared with manual contouring. The N points and autosegmentation strategies resulted in average Dice values of 0.89 and 0.88, respectively. Pre- and post-edited autosegmented contours demonstrated a higher average Dice similarity coefficient of 0.94. Conclusion: The software provided robust contours with minimal editing required. Observed time savings were seen for all physicians irrespective of experience level and baseline manual contouring speed.
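
    A minimal sketch of the Dice similarity coefficient used above to score segmentation accuracy, DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks; the circular toy contours are for illustration only.

      # Sketch: Dice similarity coefficient between two binary segmentation masks.
      import numpy as np

      def dice(mask_a, mask_b):
          a = mask_a.astype(bool)
          b = mask_b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      # toy usage: two overlapping circular contours on one slice
      yy, xx = np.mgrid[:128, :128]
      auto = (xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2
      edited = (xx - 66) ** 2 + (yy - 64) ** 2 < 29 ** 2
      print(f"Dice = {dice(auto, edited):.3f}")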

  4. LITERATURE REVIEW OF PUO2 CALCINATION TIME AND TEMPERATURE DATA FOR SPECIFIC SURFACE AREA

    SciTech Connect (OSTI)

    Daniel, G.

    2012-03-06

    The literature was reviewed in December 2011 for calcination data of plutonium oxide (PuO{sub 2}) from plutonium oxalate Pu(C{sub 2}O{sub 4}){sub 2} precipitation with respect to the PuO{sub 2} specific surface area (SSA). A summary of the literature is presented for what are believed to be the dominant factors influencing SSA: the calcination temperature and time. The PuO{sub 2} from Pu(C{sub 2}O{sub 4}){sub 2} calcination data from this review has been regressed to better understand the influence of calcination temperature and time on SSA. Based on this literature review data set, calcination temperature has a bigger impact on SSA than calcination time. However, there is still some variance in this data set that may reflect differences in the plutonium oxalate preparation or different calcination techniques. It is evident from this review that additional calcination temperature and time data for PuO{sub 2} from Pu(C{sub 2}O{sub 4}){sub 2} needs to be collected and evaluated to better define the relationship. The existing data set contains many calcination times of about 2 hours and therefore may underestimate the impact of heating time on SSA. SRNL recommends that more calcination temperature and time data for PuO{sub 2} from Pu(C{sub 2}O{sub 4}){sub 2} be collected and this literature review data set be augmented to better refine the relationship between PuO{sub 2} SSA and its calcination parameters.
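
    A minimal sketch, with synthetic numbers rather than the reviewed literature data, of the kind of regression described above: fitting specific surface area as a linear function of calcination temperature and time. The coefficients in the toy model are assumptions.

      # Sketch: least-squares regression of SSA on calcination temperature and time.
      import numpy as np

      rng = np.random.default_rng(11)
      temp_c = rng.uniform(450, 950, 30)        # calcination temperature (deg C)
      time_h = rng.uniform(1, 8, 30)            # calcination time (h)
      ssa = 60 - 0.05 * temp_c - 1.2 * time_h + rng.normal(0, 2, 30)   # m^2/g, toy model

      # fit: ssa ~ b0 + b1*temp + b2*time
      X = np.column_stack([np.ones_like(temp_c), temp_c, time_h])
      coef, *_ = np.linalg.lstsq(X, ssa, rcond=None)
      print(f"intercept={coef[0]:.2f}, dSSA/dT={coef[1]:.4f} per deg C, dSSA/dt={coef[2]:.3f} per h")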

  5. Toward a new parameterization of hydraulic conductivity in climate models: Simulation of rapid groundwater fluctuations in Northern California

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Vrettas, Michail D.; Fung, Inez Y.

    2015-12-31

    Preferential flow through weathered bedrock leads to rapid rise of the water table after the first rainstorms and significant water storage (also known as "rock moisture") in the fractures. We present a new parameterization of hydraulic conductivity that captures the preferential flow and is easy to implement in global climate models. To mimic the naturally varying heterogeneity with depth in the subsurface, the model represents the hydraulic conductivity as a product of the effective saturation and a background hydraulic conductivity Kbkg, drawn from a lognormal distribution. The mean of the background Kbkg decreases monotonically with depth, while its variance reduces with the effective saturation. Model parameters are derived by assimilating into Richards' equation 6 years of 30 min observations of precipitation (mm) and water table depths (m), from seven wells along a steep hillslope in the Eel River watershed in Northern California. The results show that the observed rapid penetration of precipitation and the fast rise of the water table from the well locations, after the first winter rains, are well captured with the new stochastic approach in contrast to the standard van Genuchten model of hydraulic conductivity, which requires significantly higher levels of saturated soils to produce the same results. "Rock moisture," the moisture between the soil mantle and the water table, comprises 30% of the moisture because of the great depth of the weathered bedrock layer and could be a potential source of moisture to sustain trees through extended dry periods. Moreover, storage of moisture in the soil mantle is smaller, implying less surface runoff and less evaporation, with the proposed new model.
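
    An illustrative sketch of the stochastic conductivity idea described above: the effective hydraulic conductivity is the effective saturation times a background conductivity drawn from a lognormal distribution whose mean decays with depth and whose spread shrinks as the soil approaches saturation. All parameter values below are assumptions for demonstration, not the values calibrated in the study.

      # Sketch: sample K = S_e * K_bkg with depth-dependent lognormal K_bkg.
      import numpy as np

      def k_effective(depth_m, sat_eff, rng, k_sat_surface=1e-4, decay_m=2.0, sigma_max=1.5):
          """Return one sample of effective conductivity [m/s]."""
          mean_log = np.log(k_sat_surface) - depth_m / decay_m   # mean decreases with depth
          sigma_log = sigma_max * (1.0 - sat_eff)                # spread shrinks as S_e -> 1
          k_bkg = np.exp(rng.normal(mean_log, sigma_log))
          return sat_eff * k_bkg

      rng = np.random.default_rng(5)
      for depth in (0.5, 2.0, 5.0):
          samples = [k_effective(depth, sat_eff=0.6, rng=rng) for _ in range(1000)]
          print(f"depth {depth:4.1f} m: median K = {np.median(samples):.2e} m/s")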

  6. THE GRAVITATIONAL POTENTIAL NEAR THE SUN FROM SEGUE K-DWARF KINEMATICS

    SciTech Connect (OSTI)

    Zhang Lan; Liu Chao; Zhao Gang; Rix, Hans-Walter; Van de Ven, Glenn; Bovy, Jo

    2013-08-01

    To constrain the Galactic gravitational potential near the Sun ({approx}1.5 kpc), we derive and model the spatial and velocity distributions for a sample of 9000 K-dwarfs with spectra from SDSS/SEGUE, which yield radial velocities and abundances ([Fe/H] and [{alpha}/Fe]). We first derive the spatial density distribution for three abundance-selected sub-populations of stars accounting for the survey's selection function. The vertical profiles of these sub-populations are simple exponentials and their vertical dispersion profile is nearly isothermal. To model these data, we apply the 'vertical' Jeans equation, which relates the observable tracer number density and vertical velocity dispersion to the gravitational potential or vertical force. We explore a number of functional forms for the vertical force law, fit the dispersion and density profiles of all abundance-selected sub-populations simultaneously in the same potential, and explore all parameter covariances using a Markov Chain Monte Carlo technique. Our fits constrain the disk mass scale height to {approx}< 300 pc and the total surface mass density to be 67 {+-} 6 M{sub Sun} pc{sup -2} at |z| = 1.0 kpc of which the contribution from all stars is 42 {+-} 5 M{sub Sun} pc{sup -2} (assuming a contribution from cold gas of 13 M{sub Sun} pc{sup -2}). We find significant constraints on the local dark matter density of 0.0065 {+-} 0.0023 M{sub Sun} pc{sup -3} (0.25 {+-} 0.09 GeV cm{sup -3}). Together with recent experiments this firms up the best estimate of 0.0075 {+-} 0.0021 M{sub Sun} pc{sup -3} (0.28 {+-} 0.08 GeV cm{sup -3}), consistent with global fits of approximately round dark matter halos to kinematic data in the outskirts of the Galaxy.

  7. SU-E-T-426: Dose Delivery Accuracy in Breast Field Junction for Free Breath and Deep Inspiration Breath Hold Techniques

    SciTech Connect (OSTI)

    Epstein, D; Shekel, E; Levin, D

    2014-06-01

    Purpose: The purpose of this work was to verify the accuracy of the dose distribution along the field junction in a half beam irradiation technique for breast cancer patients receiving radiation to the breast or chest wall (CW) and the supraclavicular LN region for both free breathing and deep inspiration breath hold (DIBH) technique. Methods: We performed in vivo measurements for nine breast cancer patients receiving radiation to the breast/CW and to the supraclavicular LN region. Six patients were treated to the left breast/CW using DIBH technique and three patients were treated to the right breast/CW in free breath. We used five microMOSFET dosimeters: three located along the field junction, one located 1 cm above the junction and the fifth microMOSFET located 1 cm below the junction. We performed consecutive measurements over several days for each patient and compared the measurements to the TPS calculation (Eclipse, Varian). Results: The calculated and measured doses along the junction were 0.97±0.08 Gy and 1.02±0.14 Gy, respectively. Above the junction, calculated and measured doses were 0.91±0.08 Gy and 0.98±0.09 Gy, respectively, and below the junction, calculated and measured doses were 1.70±0.15 Gy and 1.61±0.09 Gy, respectively. None of the differences were statistically significant. When comparing calculated and measured doses for DIBH patients only, there was still no statistically significant difference between values for all dosimeter locations. Analysis was done using the Mann-Whitney Rank-Sum Test. Conclusion: We found excellent correlation between calculated doses from the TPS and measured skin doses at the junction of several half beam fields. Even for the DIBH technique, where there is more potential for variance due to depth of breath, there is no over- or underdose along the field junction. This correlation validates the TPS, as well as an accurate, reproducible patient setup.

  8. No-migration determination. Annual report, September 1, 1993--August 31, 1994

    SciTech Connect (OSTI)

    Not Available

    1994-11-01

    This report fulfills the annual reporting requirement as specified in the Conditional No-Migration Determination (NMD) for the U.S. Department of Energy (DOE) Waste Isolation Pilot Plant (WIPP), published in the Federal Register on November 14, 1990 (EPA, 1990a). This report covers the project activities, programs, and data obtained during the period September 1, 1993, through August 31, 1994, to support compliance with the NMD. In the NMD, the U.S. Environmental Protection Agency (EPA) concluded that the DOE had demonstrated, to a reasonable degree of certainty, that hazardous constituents will not migrate from the WIPP disposal unit during the test phase of the project, and that the DOE had otherwise met the requirements of 40 CFR Part 268.6, Petitions to Allow Land Disposal of a Waste Prohibited Under Subpart C of Part 268 (EPA, 1986a), for the WIPP facility. By granting the NMD, the EPA has allowed the DOE to temporarily manage defense-generated transuranic (TRU) mixed wastes, some of which are prohibited from land disposal by Title 40 CFR Part 268, Land Disposal Restrictions (EPA, 1986a), at the WIPP facility for the purposes of testing and experimentation for a period not to exceed 10 years. In granting the NMD, the EPA imposed several conditions on the management of the experimental waste used during the WIPP test phase. One of these conditions is that the DOE submit annual reports to the EPA to demonstrate the WIPP's compliance with the requirements of the NMD. In the proposed No-Migration Variance (EPA, 1990b) and the final NMD, the EPA defined the content and parameters that must be reported on an annual basis. These reporting requirements are summarized and are cross-referenced with the sections of the report that satisfy the respective requirement.

  9. An adaptive multi-level simulation algorithm for stochastic biological systems

    SciTech Connect (OSTI)

    Lester, C. Giles, M. B.; Baker, R. E.; Yates, C. A.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics, SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of tau. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where tau is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
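
    A toy sketch of the tau-leap simulation underlying the multi-level method described above, for a single production-degradation reaction system with a simple step-size rule that keeps the expected relative change in the state small. The paper's adaptive rule and the coupling of paired paths across levels are more sophisticated than this sketch.

      # Sketch: adaptive tau-leap simulation of a production-degradation system.
      import numpy as np

      def tau_leap(k_prod=10.0, k_deg=0.1, x0=0, t_end=50.0, eps=0.05, rng=None):
          rng = rng or np.random.default_rng()
          t, x = 0.0, x0
          while t < t_end:
              a1, a2 = k_prod, k_deg * x            # reaction propensities
              a0 = a1 + a2
              # adaptive step: keep the expected relative change in X small
              tau = min(eps * max(x, 1) / a0, t_end - t)
              x += rng.poisson(a1 * tau) - rng.poisson(a2 * tau)
              x = max(x, 0)
              t += tau
          return x

      samples = [tau_leap(rng=np.random.default_rng(i)) for i in range(200)]
      print("mean X(t_end) ~", np.mean(samples), " (exact stationary mean = 100)")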

  10. The importance of retaining a phylogenetic perspective in traits-based community analyses

    SciTech Connect (OSTI)

    Poteat, Monica D; Buchwalter, David; Jacobus, Luke

    2015-01-01

    1) Many environmental stressors manifest their effects via physiological processes (traits) that can differ significantly among species and species groups. We compiled available data for three traits related to the bioconcentration of the toxic metal cadmium (Cd) from 42 aquatic insect species representing orders Ephemeroptera (mayfly), Plecoptera (stonefly), and Trichoptera (caddisfly). These traits included the propensity to take up Cd from water (uptake rate constant, ku), the ability to excrete Cd (efflux rate constant, ke), and the net result of these two processes (bioconcentration factor, BCF). 2) Ranges in these Cd bioaccumulation traits varied in magnitude across lineages (some lineages had a greater tendency to bioaccumulate Cd than others). Overlap in the ranges of trait values among different lineages was common and highlights situations where species from different lineages can share a similar trait state, but represent the high end of possible physiological values for one lineage and the low end for another. 3) Variance around the mean trait state differed widely across clades, suggesting that some groups (e.g., Ephemerellidae) are inherently more variable than others (e.g., Perlidae). Thus, trait variability/lability is at least partially a function of lineage. 4) Akaike information criterion (AIC) comparisons of statistical models were more often driven by clade than by other potential biological or ecological explanation tested. Clade-driven models generally improved with increasing taxonomic resolution. 5) Together, these findings suggest that lineage provides context for the analysis of species traits, and that failure to consider lineage in community-based analysis of traits may obscure important patterns of species responses to environmental change.

  11. The pulmonary response of white and black adults to six concentrations of ozone

    SciTech Connect (OSTI)

    Seal, E. Jr.; McDonnell, W.F.; House, D.E.; Salaam, S.A.; Dewitt, P.J.; Butler, S.O.; Green, J.; Raggio, L. )

    1993-04-01

    Many early studies of respiratory responsiveness to ozone (O3) were done on healthy, young, white males. The purpose of this study was to determine whether gender or race differences in O3 response exist among white and black males and females, and to develop concentration-response curves for each of the gender-race groups. Three hundred seventy-two subjects (n > 90 in each gender-race group), ages 18 to 35 yr, were exposed once for 2.33 h to 0.0 (purified air), 0.12, 0.18, 0.24, 0.30, or 0.40 ppm O3. Each exposure was preceded by baseline pulmonary function tests and a symptom questionnaire. The first 2 h of exposure included alternating 15-min periods of rest and exercise on a motorized treadmill producing a minute ventilation (VE) of 25 L/min/m2 body surface area (BSA). After exposure, subjects completed a set of pulmonary function tests and a symptom questionnaire. Lung function and symptom responses were expressed as percent change from baseline and analyzed using a nonparametric two-factor analysis of variance. Three primary variables were analyzed: FEV1, specific airway resistance (SRaw), and cough. Statistical analysis demonstrated no significant differences in response to O3 among the individual gender-race groups. For the group as a whole, changes in the variables FEV1, SRaw, and cough were first noted at 0.12, 0.18, and 0.18 ppm O3, respectively. Adjusted for exercise difference, concentration-response curves for FEV1 and cough among white males were consistent with previous reports (1).

  12. Comment on the Word 'Cooling' as it is Used in Beam Physics

    SciTech Connect (OSTI)

    Sessler, Andrew M.

    2005-09-10

  13. Meta-Analyses of the Associations of Respiratory Health Effects with Dampness and Mold in Homes

    SciTech Connect (OSTI)

    Fisk, William J.; Lei-Gomez, Quanhong; Mendell, Mark J.

    2006-01-01

    The Institute of Medicine (IOM) of the National Academy of Sciences recently completed a critical review of the scientific literature pertaining to the association of indoor dampness and mold contamination with adverse health effects. In this paper, we report the results of quantitative meta-analysis of the studies reviewed in the IOM report. We developed point estimates and confidence intervals (CIs) to summarize the association of several respiratory and asthma-related health outcomes with the presence of dampness and mold in homes. The odds ratios and confidence intervals from the original studies were transformed to the log scale and random effect models were applied to the log odds ratios and their variance. Models were constructed both accounting for the correlation between multiple results within the studies analyzed and ignoring such potential correlation. Central estimates of ORs for the health outcomes ranged from 1.32 to 2.10, with most central estimates between 1.3 and 1.8. Confidence intervals (95%) excluded unity except in two of 28 instances, and in most cases the lower bound of the CI exceeded 1.2. In general, the two meta-analysis methods produced similar estimates for ORs and CIs. Based on the results of the meta-analyses, building dampness and mold are associated with approximately 30% to 80% increases in a variety of respiratory and asthma-related health outcomes. The results of these meta-analyses reinforce the IOM's recommendation that actions be taken to prevent and reduce building dampness problems.
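
    A minimal sketch, on made-up study results, of random-effects pooling of log odds ratios in the spirit of the analysis described above, using the DerSimonian-Laird between-study variance estimator; the published analysis also handled correlation between multiple results within studies, which this sketch omits.

      # Sketch: DerSimonian-Laird random-effects pooling of odds ratios.
      import numpy as np

      def random_effects_pool(or_values, ci_low, ci_high):
          """Within-study variances are recovered from the 95% CIs on the log scale."""
          y = np.log(or_values)                                  # log odds ratios
          se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # log-scale standard errors
          v = se ** 2
          w = 1.0 / v                                            # fixed-effect weights
          y_fe = np.sum(w * y) / np.sum(w)
          q = np.sum(w * (y - y_fe) ** 2)                        # heterogeneity statistic
          k = len(y)
          tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
          w_re = 1.0 / (v + tau2)                                # random-effects weights
          y_re = np.sum(w_re * y) / np.sum(w_re)
          se_re = np.sqrt(1.0 / np.sum(w_re))
          return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)

      pooled, lo, hi = random_effects_pool(
          or_values=[1.5, 1.8, 1.3, 2.1], ci_low=[1.1, 1.2, 0.9, 1.4], ci_high=[2.0, 2.7, 1.9, 3.2])
      print(f"pooled OR = {pooled:.2f} (95% CI {lo:.2f}-{hi:.2f})")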

  14. Correlation function analysis of the COBE differential microwave radiometer sky maps

    SciTech Connect (OSTI)

    Lineweaver, C.H.

    1994-08-01

    The Differential Microwave Radiometer (DMR) aboard the COBE satellite has detected anisotropies in the cosmic microwave background (CMB) radiation. A two-point correlation function analysis which helped lead to this discovery is presented in detail. The results of a correlation function analysis of the two-year DMR data set are presented. The first and second year data sets are compared and found to be reasonably consistent. The positive correlation for separation angles less than {approximately}20{degree} is robust to Galactic latitude cuts and is very stable from year to year. The Galactic latitude cut independence of the correlation function is strong evidence that the signal is not Galactic in origin. The statistical significance of the structure seen in the correlation function of the first-year, second-year, and two-year maps is respectively > 9{sigma}, > 10{sigma} and > 18{sigma} above the noise. The noise in the DMR sky maps is correlated at a low level. The structure of the pixel temperature covariance matrix is given. The noise covariance matrix of a DMR sky map is diagonal to an accuracy of better than 1%. For a given sky pixel, the dominant noise covariance occurs with the ring of pixels at an angular separation of 60{degree} due to the 60{degree} separation of the DMR horns. The mean covariance at 60{degree} is 0.45%{sub {minus}0.14}{sup +0.18} of the mean variance. The noise properties of the DMR maps are thus well approximated by the noise properties of maps made by a single-beam experiment. Previously published DMR results are not significantly affected by correlated noise.

  15. Air-injection testing in vertical boreholes in welded and nonwelded Tuff, Yucca Mountain, Nevada

    SciTech Connect (OSTI)

    LeCain, G.D.

    1997-12-31

    Air-injection tests, by use of straddle packers, were done in four vertical boreholes (UE-25 UZ-No.16, USW SD-12, USW NRG-6, and USW NRG-7a) at Yucca Mountain, Nevada. The geologic units tested were the Tiva Canyon Tuff, nonwelded tuffs of the Paintbrush Group, Topopah Spring Tuff, and Calico Hills Formation. Air-injection permeability values of the Tiva Canyon Tuff ranged from 0.3 x 10{sup -12} to 54.0 x 10{sup -12} m{sup 2} (square meter). Air-injection permeability values of the Paintbrush nonwelded tuff ranged from 0.12 x 10{sup -12} to 3.0 x 10{sup -12} m{sup 2}. Air-injection permeability values of the Topopah Spring Tuff ranged from 0.02 x 10{sup -12} to 33.0 x 10{sup -12} m{sup 2}. The air-injection permeability value of the only Calico Hills Formation interval tested was 0.025 x 10{sup -12} m{sup 2}. The shallow test intervals of the Tiva Canyon Tuff had the highest air-injection permeability values. Variograms of the air-injection permeability values of the Topopah Spring Tuff show a hole effect: an initial increase in the variogram values is followed by a decrease. The hole effect is due to the decrease in permeability with depth identified in several geologic zones. The hole effect indicates some structural control of the permeability distribution, possibly associated with the deposition and cooling of the tuff. Analysis of variance indicates that the air-injection permeability values of borehole NRG-7a of the Topopah Spring Tuff are different from the other boreholes; this indicates areal variation in permeability.
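
    A minimal sketch, on a synthetic one-dimensional permeability profile with hypothetical lag bins, of the empirical semivariogram computation used above to examine depth structure; a periodic trend in the toy data produces the kind of hole effect the abstract describes.

      # Sketch: empirical semivariogram gamma(h) = 0.5 * mean[(v_i - v_j)^2] per lag bin.
      import numpy as np

      def semivariogram(z, values, lag_edges):
          dz = np.abs(z[:, None] - z[None, :])
          dv2 = (values[:, None] - values[None, :]) ** 2
          gammas = []
          for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
              mask = (dz > lo) & (dz <= hi)
              gammas.append(0.5 * dv2[mask].mean() if mask.any() else np.nan)
          return np.array(gammas)

      rng = np.random.default_rng(2)
      depth = np.arange(0, 100, 3.0)                 # test-interval depths (m)
      log_k = -12 - 0.01 * depth + 0.3 * np.sin(depth / 10) + rng.normal(0, 0.1, depth.size)
      print(semivariogram(depth, log_k, lag_edges=np.arange(0, 60, 10.0)))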

  16. Coyote Springs Cogeneration Project, Morrow County, Oregon: Draft Environmental Impact Statement.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    1994-01-01

    BPA is considering whether to transfer (wheel) electrical power from a proposed privately-owned, combustion-turbine electrical generation plant in Oregon. The plant would be fired by natural gas and would use combined-cycle technology to generate up to 440 average megawatts (aMW) of energy. The plant would be developed, owned, and operated by Portland General Electric Company (PGE). The project would be built in eastern Oregon, just east of the City of Boardman in Morrow County. The proposed plant would be built on a site within the Port of Morrow Industrial Park. The proposed use for the site is consistent with the County land use plan. Building the transmission line needed to interconnect the power plant to BPA's transmission system would require a variance from Morrow County. BPA would transfer power from the plant to its McNary-Slatt 500-kV transmission line. PGE would pay BPA for wheeling services. Key environmental concerns identified in the scoping process and evaluated in the draft Environmental Impact Statement (DEIS) include these potential impacts: (1) air quality impacts, such as emissions and their contributions to the {open_quotes}greenhouse{close_quotes} effect; (2) health and safety impacts, such as effects of electric and magnetic fields, (3) noise impacts, (4) farmland impacts, (5) water vapor impacts to transportation, (6) economic development and employment impacts, (7) visual impacts, (8) consistency with local comprehensive plans, and (9) water quality and supply impacts, such as the amount of wastewater discharged, and the source and amount of water required to operate the plant. These and other issues are discussed in the DEIS. The proposed project includes features designed to reduce environmental impacts. Based on studies completed for the DEIS, adverse environmental impacts associated with the proposed project were identified, and no evidence emerged to suggest that the proposed action is controversial.

  17. Uncertainty quantification of CO₂ saturation estimated from electrical resistance tomography data at the Cranfield site

    SciTech Connect (OSTI)

    Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; Ramirez, Abelardo L.

    2014-06-03

    A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6% with a corresponding maximum saturation of 30% for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances while the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may expend days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
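
    A schematic sketch of the parametric bootstrap workflow described above: perturb the data and the baseline model with their estimated errors, re-run a deterministic inversion many times, and summarize the resulting saturation samples with a mean and standard deviation. The inversion itself is represented by a placeholder function, invert_ert, which is hypothetical and stands in for the real nonlinear ERT inversion.

      # Sketch: parametric bootstrap UQ around a (placeholder) deterministic inversion.
      import numpy as np

      def invert_ert(data, baseline_model):
          """Placeholder for a deterministic nonlinear ERT inversion that returns a
             CO2 saturation field; a trivial stand-in used only for illustration."""
          return np.clip(0.3 * (data.mean() - baseline_model.mean()), 0.0, 1.0) * np.ones(100)

      rng = np.random.default_rng(8)
      observed = rng.normal(1.0, 0.05, 500)   # ERT data; noise level from reciprocal errors
      baseline = rng.normal(0.2, 0.01, 500)   # baseline resistivity model

      realizations = []
      for _ in range(500):                     # 500 bootstrap realizations, as in the study
          data_b = observed + rng.normal(0, 0.05, observed.size)
          base_b = baseline * (1 + rng.normal(0, 0.03, baseline.size))   # 3% baseline noise
          realizations.append(invert_ert(data_b, base_b))

      realizations = np.array(realizations)
      print("mean saturation:", realizations.mean(axis=0)[:3])
      print("saturation std :", realizations.std(axis=0)[:3])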

  18. A NEW ALGORITHM FOR RADIOISOTOPE IDENTIFICATION OF SHIELDED AND MASKED SNM/RDD MATERIALS

    SciTech Connect (OSTI)

    Jeffcoat, R.

    2012-06-05

    Detection and identification of shielded and masked nuclear materials is crucial to national security, but vast borders and high volumes of traffic impose stringent requirements for practical detection systems. Such tools must be mobile, and hence low power, provide a low false alarm rate, and be sufficiently robust to be operable by non-technical personnel. Currently fielded systems have not achieved all of these requirements simultaneously. Transport modeling such as that done in GADRAS is able to predict observed spectra to a high degree of fidelity; our research is focusing on a radionuclide identification algorithm that inverts this modeling within the constraints imposed by a handheld device. Key components of this work include incorporation of uncertainty as a function of both the background radiation estimate and the hypothesized sources, dimensionality reduction, and nonnegative matrix factorization. We have partially evaluated performance of our algorithm on a third-party data collection made with two different sodium iodide detection devices. Initial results indicate, with caveats, that our algorithm performs as well as or better than the on-board identification algorithms. The system developed was based on a probabilistic approach with an improved approach to variance modeling relative to past work. This system was chosen based on technical innovation and system performance over algorithms developed at two competing research institutions. One key outcome of this probabilistic approach was the development of an intuitive measure of confidence which was indeed useful enough that a classification algorithm was developed based around alarming on high confidence targets. This paper will present and discuss results of this novel approach to accurately identifying shielded or masked radioisotopes with radiation detection systems.
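
    A toy sketch of the nonnegative matrix factorization component mentioned above, applied to synthetic gamma-ray spectra built from a background continuum and two isotope-like signatures; the spectra, peak positions, and component count are assumptions and the sketch is not the authors' identification algorithm.

      # Sketch: NMF decomposition of mixed spectra into nonnegative components.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(4)
      channels = np.arange(512)

      def peak(center, width, amp):
          return amp * np.exp(-0.5 * ((channels - center) / width) ** 2)

      # toy components: a background continuum and two isotope-like signatures
      background = 50 * np.exp(-channels / 300)
      iso_a = peak(120, 6, 30) + peak(330, 8, 20)
      iso_b = peak(200, 7, 25)

      # simulated measurements: mixtures of the components plus Poisson counting noise
      mix = np.array([[1.0, 0.8, 0.0], [1.0, 0.0, 0.6], [1.0, 0.5, 0.5]])
      spectra = rng.poisson(mix @ np.vstack([background, iso_a, iso_b]))

      model = NMF(n_components=3, init="nndsvda", max_iter=1000)
      weights = model.fit_transform(spectra.astype(float))   # mixing weights per measurement
      components = model.components_                         # recovered nonnegative components
      print("recovered weights:\n", np.round(weights, 2))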

  19. SU-E-QI-21: Iodinated Contrast Agent Time Course In Human Brain Metastasis: A Study For Stereotactic Synchrotron Radiotherapy Clinical Trials

    SciTech Connect (OSTI)

    Obeid, L; Esteve, F; Adam, J; Tessier, A; Balosso, J

    2014-06-15

    Purpose: Synchrotron stereotactic radiotherapy (SSRT) is an innovative treatment combining the selective accumulation of heavy elements in tumors with stereotactic irradiations using monochromatic medium energy x-rays from a synchrotron source. Phase I/II clinical trials on brain metastasis are underway using venous infusion of iodinated contrast agents. The radiation dose enhancement depends on the amount of iodine in the tumor and its time course. In the present study, the reproducibility of iodine concentrations between the CT planning scan day (Day 0) and the treatment day (Day 10) was assessed in order to predict dose errors. Methods: For each of days 0 and 10, three patients received a biphasic intravenous injection of iodinated contrast agent (40 ml, 4 ml/s, followed by 160 ml, 0.5 ml/s) in order to ensure stable intra-tumoral amounts of iodine during the treatment. Two volumetric CT scans (before and after iodine injection) and a multi-slice dynamic CT of the brain were performed using conventional radiotherapy CT (Day 0) or quantitative synchrotron radiation CT (Day 10). A 3D rigid registration was processed between images. The absolute and relative differences of absolute iodine concentrations and their corresponding dose errors were evaluated in the GTV and PTV used for treatment planning. Results: The differences in iodine concentrations remained within the standard deviation limits. The 3D absolute differences followed a normal distribution centered at zero mg/ml with a variance (∼1 mg/ml) which is related to the image noise. Conclusion: The results suggest that dose errors depend only on the image noise. This study shows that stable amounts of iodine are achievable in brain metastasis for SSRT treatment in a 10 days interval.

  20. Preemptible I/O Scheduling of Garbage Collection for Solid State Drives

    SciTech Connect (OSTI)

    Lee, Junghee; Kim, Youngjae; Shipman, Galen M; Oral, H Sarp; Kim, Jongman

    2012-01-01

    Unlike hard disks, flash devices use out-of-place update operations, and they require a garbage collection (GC) process to reclaim invalid pages to create free blocks. This GC process is a major cause of performance degradation when running concurrently with other I/O operations as internal bandwidth is consumed to reclaim these invalid pages. The invocation of the GC process is generally governed by a low watermark on free blocks and other internal device metrics that different workloads meet at different intervals. This results in I/O performance that is highly dependent on workload characteristics. In this paper, we examine the GC process and propose a semi-preemptible GC scheme that allows GC processing to be preempted while pending I/O requests in the queue are serviced. Moreover, we further enhance flash performance by pipelining internal GC operations and merging them with pending I/O requests whenever possible. Our experimental evaluation of this semi-preemptible GC scheme with realistic workloads demonstrates both improved performance and reduced performance variability. Write-dominant workloads show up to a 66.56% improvement in average response time with a 83.30% reduced variance in response time compared to the non-preemptible GC scheme. In addition, we explore opportunities of a new NAND flash device that supports suspend/resume commands for read, write and erase operations for fully preemptible GC. Our experiments with a fully preemptible GC enabled flash device show that request response time can be improved by up to 14.57% compared to semi-preemptible GC.

  1. Modeling and comparative assessment of municipal solid waste gasification for energy production

    SciTech Connect (OSTI)

    Arafat, Hassan A. Jijakli, Kenan

    2013-08-15

    Highlights: Study developed a methodology for the evaluation of gasification for MSW treatment. Study was conducted comparatively for USA, UAE, and Thailand. Study applies a thermodynamic model (Gibbs free energy minimization) using the Gasify software. The energy efficiency of the process and the compatibility with different waste streams was studied. - Abstract: Gasification is the thermochemical conversion of organic feedstocks mainly into combustible syngas (CO and H{sub 2}) along with other constituents. It has been widely used to convert coal into gaseous energy carriers but has only recently been considered as a process for producing energy from biomass. This study explores the potential of gasification for energy production and treatment of municipal solid waste (MSW). It relies on adapting the theory governing the chemistry and kinetics of the gasification process to the use of MSW as a feedstock. It also relies on an equilibrium kinetics and thermodynamics solver tool (Gasify) in the process of modeling gasification of MSW. The effect of process temperature variation on gasifying MSW was explored and the results were compared to incineration as an alternative to gasification of MSW. Also, the assessment was performed comparatively for gasification of MSW in the United Arab Emirates, USA, and Thailand, presenting a spectrum of socioeconomic settings with varying MSW compositions in order to explore the effect of MSW composition variance on the products of gasification. All in all, this study provides insight into the potential of gasification for the treatment of MSW and as a waste-to-energy alternative to incineration.

  2. Impact of process conditions on the density and durability of wheat, oat, canola, and barley straw briquettes

    SciTech Connect (OSTI)

    Tumuluru, J. S.; Tabil, L. G.; Song, Y.; Iroba, K. L.; Meda, V.

    2014-10-01

    The present study aims to understand the impact of process conditions on the quality attributes of wheat, oat, barley, and canola straw briquettes. Analysis of variance indicated that briquette moisture content and initial density immediately after compaction and final density after 2 weeks of storage are strong functions of feedstock moisture content and compression pressure, whereas durability rating is influenced by die temperature and feedstock moisture content. Briquettes produced at a low feedstock moisture content of 9 % (w.b.) yielded maximum densities >700 kg/m3 for wheat, oat, canola, and barley straws. Lower feedstock moisture content of <10 % (w.b.) and higher die temperatures >110 °C and compression pressure >10 MPa minimized the briquette moisture content and maximized densities and durability rating based on surface plot observations. Optimal process conditions indicated that a low feedstock moisture content of about 9 % (w.b.), high die temperature of 120-130 °C, medium-to-large hammer mill screen sizes of about 24 to 31.75 mm, and low to high compression pressures of 7.5 to 12.5 MPa minimized briquette moisture content to <8 % (w.b.) and maximized density to >700 kg/m3. Durability rating >90 % is achievable at higher die temperatures of >123 °C, lower to medium feedstock moisture contents of 9 to 12 % (w.b.), low to high compression pressures of 7.5 to 12.5 MPa, and large hammer mill screen size of 31.75 mm, except for canola where a lower compression pressure of 7.5 to 8.5 MPa and a smaller hammer mill screen size of 19 mm for oat maximized the durability rating values.

  3. Uncertainty quantification of CO₂ saturation estimated from electrical resistance tomography data at the Cranfield site

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; Ramirez, Abelardo L.

    2014-06-03

    A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6% with a corresponding maximum saturation of 30% for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances while the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may expend days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.

  4. On-Site Pilot Study - Removal of Uranium, Radium-226 and Arsenic from Impacted Leachate by Reverse Osmosis - 13155

    SciTech Connect (OSTI)

    McMurray, Allan; Everest, Chris; Rilling, Ken; Vandergaast, Gary; LaMonica, David

    2013-07-01

    Conestoga-Rovers and Associates (CRA-LTD) performed an on-site pilot study at the Welcome Waste Management Facility in Port Hope, Ontario, Canada, to evaluate the effectiveness of a unique leachate treatment process for the removal of radioactive contaminants from leachate impacted by low-level radioactive waste. Results from the study also provided the parameters needed for the design of the CRA-LTD full-scale leachate treatment process. The final effluent discharged from the process was required to meet the local surface water discharge criteria. A statistical software package was used to perform an analysis of variance (ANOVA) on the design-of-experiment results in order to determine the effect of the evaluated factors on the measured responses. The factors considered in the study were: percent of reverse osmosis permeate water recovery, influent coagulant dosage, and influent total dissolved solids (TDS) dosage. The measured responses evaluated were: operating time, average specific flux, and rejection of radioactive contaminants along with other elements. The ANOVA of the design-of-experiment results revealed that the operating time is affected by the percent water recovery to be achieved and the flocculant dosage over the range studied. The average specific flux and rejection for the radioactive contaminants were not affected by the factors evaluated over the range studied. The 3-month-long on-site pilot testing on the impacted leachate revealed that the CRA-LTD leachate treatment process was robust and produced an effluent water quality that met the surface water discharge criteria mandated by the Canadian Nuclear Safety Commission and the local municipality. (authors)

  5. ASSIMILATION OF DOPPLER RADAR DATA INTO NUMERICAL WEATHER MODELS

    SciTech Connect (OSTI)

    Chiswell, S.; Buckley, R.

    2009-01-15

    During the year 2008, the United States National Weather Service (NWS) completed an eightfold increase in sampling capability for weather radars to 250 m resolution. This increase is expected to improve warning lead times by detecting small-scale features sooner with increased reliability; however, current NWS operational model domains utilize grid spacing an order of magnitude larger than the radar data resolution, and therefore the added resolution of radar data is not fully exploited. The assimilation of radar reflectivity and velocity data into high-resolution numerical weather model forecasts where grid spacing is comparable to the radar data resolution was investigated under a Laboratory Directed Research and Development (LDRD) 'quick hit' grant to determine the impact of improved data resolution on model predictions with specific initial proof-of-concept application to daily Savannah River Site operations and emergency response. Development of software to process NWS radar reflectivity and radial velocity data was undertaken for assimilation of observations into numerical models. Data values within the radar data volume undergo automated quality control (QC) analysis routines developed in support of this project to eliminate empty/missing data points, decrease anomalous propagation values, and determine error thresholds by utilizing the calculated variances among data values. The Weather Research and Forecasting model (WRF) three-dimensional variational data assimilation package (WRF-3DVAR) was used to incorporate the QC'ed radar data into input and boundary conditions. The lack of observational data in the vicinity of SRS available to NWS operational models signifies an important data void where radar observations can provide significant input. These observations greatly enhance the knowledge of storm structures and the environmental conditions which influence their development. As the increase in computational power and availability has made higher resolution real-time model simulations possible, the need to obtain observations to both initialize numerical models and verify their output has become increasingly important. The assimilation of high resolution radar observations therefore provides a vital component in the development and utility of numerical model forecasts for both weather forecasting and contaminant transport, including future opportunities to improve wet deposition computations explicitly.

  6. Cosmological implications of the CMB large-scale structure

    SciTech Connect (OSTI)

    Melia, Fulvio

    2015-01-01

    The Wilkinson Microwave Anisotropy Probe (WMAP) and Planck may have uncovered several anomalies in the full cosmic microwave background (CMB) sky that could indicate possible new physics driving the growth of density fluctuations in the early universe. These include an unusually low power at the largest scales and an apparent alignment of the quadrupole and octopole moments. In a ΛCDM model where the CMB is described by a Gaussian Random Field, the quadrupole and octopole moments should be statistically independent. The emergence of these low-probability features may simply be due to posterior selection from many such possible effects, whose occurrence would therefore not be as unlikely as one might naively infer. If this is not the case, however, and if these features are not due to effects such as foreground contamination, their combined statistical significance would be equal to the product of their individual significances. In the absence of such extraneous factors, and ignoring the biasing due to posterior selection, the missing large-angle correlations would have a probability as low as ~0.1% and the low-l multipole alignment would be unlikely at the ~4.9% level; under the least favorable conditions, their simultaneous observation in the context of the standard model could then be likely at only the ~0.005% level. In this paper, we explore the possibility that these features are indeed anomalous, and show that the corresponding probability of CMB multipole alignment in the R{sub h}=ct universe would then be ~7%-10%, depending on the number of large-scale Sachs-Wolfe induced fluctuations. Since the low power at the largest spatial scales is reproduced in this cosmology without the need to invoke cosmic variance, the overall likelihood of observing both of these features in the CMB is ~7%, much more likely than in ΛCDM, if the anomalies are real. The key physical ingredient responsible for this difference is the existence in the former of a maximum fluctuation size at the time of recombination, which is absent in the latter because of inflation.
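
    The combined-significance figure quoted above follows from multiplying the two individual probabilities, assuming the features are statistically independent:

```python
# The combined-significance arithmetic quoted above, assuming independence of the two features.
p_missing_correlations = 0.001   # ~0.1% for the missing large-angle correlations
p_multipole_alignment = 0.049    # ~4.9% for the low-l multipole alignment
p_both = p_missing_correlations * p_multipole_alignment
print(f"{p_both:.4%}")           # ~0.005%, the joint probability quoted in the text
```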

  7. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    SciTech Connect (OSTI)

    R. L. Hoskinson; R. C. Rope; L. G. Blackwood; R. D. Lee; R. K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil's characteristics. Most often, spatial variability in the soil's fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, the predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil's fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommendations whose application rates differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash, the difference in application rate was up to 895 pounds per acre, and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could result in a $34 per acre cost difference for the fertilization. Because of these differences, better analytical or sampling methods may be needed, or more samples collected, to ensure that the soil measurements are truly representative of the field's spatial variability.

  8. RECONSTRUCTING REDSHIFT DISTRIBUTIONS WITH CROSS-CORRELATIONS: TESTS AND AN OPTIMIZED RECIPE

    SciTech Connect (OSTI)

    Matthews, Daniel J.; Newman, Jeffrey A., E-mail: djm70@pitt.edu, E-mail: janewman@pitt.edu [Department of Physics and Astronomy, University of Pittsburgh, 3941 O'Hara Street, Pittsburgh, PA 15260 (United States)

    2010-09-20

    Many of the cosmological tests to be performed by planned dark energy experiments will require extremely well-characterized photometric redshift measurements. Current estimates for cosmic shear are that the true mean redshift of the objects in each photo-z bin must be known to better than 0.002(1 + z), and the width of the bin must be known to {approx}0.003(1 + z) if errors in cosmological measurements are not to be degraded significantly. A conventional approach is to calibrate these photometric redshifts with large sets of spectroscopic redshifts. However, at the depths probed by Stage III surveys (such as DES), let alone Stage IV (LSST, JDEM, and Euclid), existing large redshift samples have all been highly (25%-60%) incomplete, with a strong dependence of success rate on both redshift and galaxy properties. A powerful alternative approach is to exploit the clustering of galaxies to perform photometric redshift calibrations. Measuring the two-point angular cross-correlation between objects in some photometric redshift bin and objects with known spectroscopic redshift, as a function of the spectroscopic z, allows the true redshift distribution of a photometric sample to be reconstructed in detail, even if it includes objects too faint for spectroscopy or if spectroscopic samples are highly incomplete. We test this technique using mock DEEP2 Galaxy Redshift survey light cones constructed from the Millennium Simulation semi-analytic galaxy catalogs. From this realistic test, which incorporates the effects of galaxy bias evolution and cosmic variance, we find that the true redshift distribution of a photometric sample can, in fact, be determined accurately with cross-correlation techniques. We also compare the empirical error in the reconstruction of redshift distributions to previous analytic predictions, finding that additional components must be included in error budgets to match the simulation results. This extra error contribution is small for surveys that sample large areas of sky (>{approx}10°-100°), but dominant for {approx}1 deg{sup 2} fields. We conclude by presenting a step-by-step, optimized recipe for reconstructing redshift distributions from cross-correlation information using standard correlation measurements.
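
    A heavily simplified sketch of the cross-correlation idea is given below: it counts photometric neighbours of spectroscopic objects in each redshift bin and subtracts the random expectation, which traces the photometric redshift distribution up to bias and dark-matter clustering corrections that the paper's full recipe models explicitly. All coordinates, bin edges, and the aperture size are hypothetical.

```python
# Highly simplified sketch of the cross-correlation (clustering-redshift) idea, not the
# paper's optimized recipe. Inputs are hypothetical flat-sky coordinates in degrees.
import numpy as np

def raw_crosscorr_dndz(spec_xy, spec_z, phot_xy, z_edges, theta_max_deg=0.05, area_deg2=1.0):
    surface_density = len(phot_xy) / area_deg2            # mean photometric surface density
    expected = surface_density * np.pi * theta_max_deg**2  # counts expected for a random sample
    excess = []
    for zlo, zhi in zip(z_edges[:-1], z_edges[1:]):
        ref = spec_xy[(spec_z >= zlo) & (spec_z < zhi)]
        if len(ref) == 0:
            excess.append(0.0)
            continue
        # photometric neighbours within theta_max of each spectroscopic object
        d2 = ((phot_xy[None, :, :] - ref[:, None, :]) ** 2).sum(axis=2)
        counts = (d2 < theta_max_deg**2).sum(axis=1)
        excess.append(counts.mean() - expected)            # clustering excess, a w_sp-like signal
    excess = np.clip(np.array(excess), 0, None)
    # Normalized excess per bin is a crude stand-in for the photometric dN/dz.
    return excess / excess.sum() if excess.sum() > 0 else excess
```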

  9. SU-E-QI-17: Dependence of 3D/4D PET Quantitative Image Features On Noise

    SciTech Connect (OSTI)

    Oliver, J; Budzevich, M; Zhang, G; Latifi, K; Dilling, T; Balagurunathan, Y; Gu, Y; Grove, O; Feygelman, V; Gillies, R; Moros, E; Lee, H.

    2014-06-15

    Purpose: Quantitative imaging is a fast evolving discipline where a large number of features are extracted from images; i.e., radiomics. Some features have been shown to have diagnostic, prognostic and predictive value. However, they are sensitive to acquisition and processing factors; e.g., noise. In this study noise was added to positron emission tomography (PET) images to determine how features were affected by noise. Methods: Three levels of Gaussian noise were added to 8 lung cancer patients' PET images acquired in 3D mode (static) and using respiratory tracking (4D); for the latter, images from one of 10 phases were used. A total of 62 features: 14 shape, 19 intensity (1stO), 18 GLCM textures (2ndO; from grey level co-occurrence matrices) and 11 RLM textures (2ndO; from run-length matrices) were extracted from segmented tumors. Dimensions of the GLCM were 256×256, calculated using 3D images with a step size of 1 voxel in 13 directions. Grey levels were binned into 256 levels for RLM and features were calculated in all 13 directions. Results: Feature variation generally increased with noise. Shape features were the most stable while RLM were the most unstable. Intensity and GLCM features performed well, the latter being more robust. The most stable 1stO features were compactness, maximum and minimum length, standard deviation, root-mean-squared, I30, V10-V90, and entropy. The most stable 2ndO features were entropy, sum-average, sum-entropy, difference-average, difference-variance, difference-entropy, information-correlation-2, short-run-emphasis, long-run-emphasis, and run-percentage. In general, features computed from images from one of the phases of 4D scans were more stable than from 3D scans. Conclusion: This study shows the need to characterize image features carefully before they are used in research and medical applications. It also shows that the performance of features, and thereby feature selection, may be assessed in part by noise analysis.
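
    For readers unfamiliar with GLCM features, the sketch below computes a single grey-level co-occurrence matrix and two second-order measures (entropy and contrast) for one 2-D offset; the study itself used 256 grey levels and 13 directions in 3-D, so this is illustrative only.

```python
# Minimal 2-D, single-direction GLCM sketch; real radiomics pipelines accumulate
# co-occurrences over all 13 3-D directions and the full tumor volume.
import numpy as np

def glcm_features(image, levels=256, offset=(0, 1)):
    img = np.clip(image, 0, levels - 1).astype(int)
    dy, dx = offset
    glcm = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    for i in range(rows - dy):
        for j in range(cols - dx):
            glcm[img[i, j], img[i + dy, j + dx]] += 1     # count grey-level pairs at the offset
    p = glcm / glcm.sum()                                 # normalize to joint probabilities
    nonzero = p[p > 0]
    entropy = -(nonzero * np.log2(nonzero)).sum()         # one of the stable 2ndO features above
    ii, jj = np.indices(p.shape)
    contrast = ((ii - jj) ** 2 * p).sum()
    return {"entropy": entropy, "contrast": contrast}
```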

  10. SU-E-J-156: Preclinical Investigation of Dynamic Tumor Tracking Using Vero SBRT Linear Accelerator: Motion Phantom Dosimetry Study

    SciTech Connect (OSTI)

    Mamalui-Hunter, M; Wu, J; Li, Z; Su, Z

    2014-06-01

    Purpose: Following the end-to-end testing paradigm of the Dynamic Target Tracking option on our image-guided, dedicated-SBRT Vero™ linac, we verify the capability of the system to deliver the planned dose to moving targets in a heterogeneous thorax phantom (CIRS™). The system includes a gimbaled C-band linac head, a robotic 6-degree-of-freedom couch, and a tumor tracking method based on predictive modeling of target position using fluoroscopically tracked implanted markers and optically tracked infrared-reflecting external markers. Methods: A 4DCT scan of the motion phantom with a Visicoil™ implanted marker in the close vicinity of the target was acquired, and the exhale (most prevalent) phase was used for planning (iPlan by BrainLab™). Typical 3D conformal SBRT treatment plans aimed to deliver 6-8 Gy/fx to two types of targets: (a) a solid water-equivalent target 3 cm in diameter; (b) a single Visicoil™ marker inserted within lung-equivalent material. The planning GTV/CTV-to-PTV margins were 2 mm, and the block margins were 3 mm. The dose calculated by the Monte Carlo algorithm with 1% variance using the dose-to-water option was compared to ion chamber (CC01 by IBA Dosimetry) measurements in case (a) and Gafchromic™ EBT3 film measurements in case (b). During delivery, the 6 motion patterns available as standard on the CIRS™ motion phantom were investigated: in case (a), the target was moving along the designated sine or cosine{sup 4} 3D trajectory; in case (b), the inserted marker was moving sinusoidally in 1D. Results: The ion chamber measurements showed agreement with the planned dose within 1% under all the studied motion conditions. The film measurements show 98.1% agreement with the planar calculated dose (gamma criteria: 3%/3 mm). Conclusion: We successfully verified the capability of the SBRT Vero™ linac to perform real-time tumor tracking and accurate dose delivery to the target, based on predictive modeling of the correlation between implanted marker motion and an external surrogate of breathing motion.
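
    The "3%/3 mm gamma criteria" cited in the film comparison refers to the standard gamma index; a brute-force 2-D global-gamma sketch is shown below, with the grid spacing and dose normalization chosen for illustration rather than taken from the study.

```python
# Simplified, brute-force 2-D global gamma analysis of the kind summarized by the
# 3%/3 mm criterion above. Clinical tools add interpolation, low-dose thresholds,
# and local-dose options; arrays and spacing here are hypothetical.
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    dd = dd_pct / 100.0 * ref.max()                      # global dose-difference criterion
    ys, xs = np.indices(ref.shape)
    ys, xs = ys * spacing_mm, xs * spacing_mm            # physical coordinates in mm
    gamma = np.empty(ref.shape)
    for i in range(ref.shape[0]):
        for j in range(ref.shape[1]):
            dist2 = (ys - ys[i, j]) ** 2 + (xs - xs[i, j]) ** 2
            dose2 = (meas - ref[i, j]) ** 2
            gamma[i, j] = np.sqrt((dist2 / dta_mm**2 + dose2 / dd**2).min())
    return 100.0 * (gamma <= 1.0).mean()                 # percentage of points passing
```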

  11. Accelerated evolution of the Lyα luminosity function at z ≳ 7 revealed by the Subaru ultra-deep survey for Lyα emitters at z = 7.3

    SciTech Connect (OSTI)

    Konno, Akira; Ouchi, Masami; Ono, Yoshiaki; Shibuya, Takatoshi; Naito, Yoshiaki; Momose, Rieko; Yuma, Suraphong; Shimasaku, Kazuhiro; Nakajima, Kimihiko; Furusawa, Hisanori; Iye, Masanori

    2014-12-10

    We present the ultra-deep Subaru narrowband imaging survey for Lyα emitters (LAEs) at z = 7.3 in the Subaru/XMM-Newton Deep Survey (SXDS) and Cosmic Evolution Survey (COSMOS) fields (~0.5 deg{sup 2}) with a total integration time of 106 hr. Exploiting our new sharp-bandwidth filter, NB101, installed on the Suprime-Cam, we have reached L(Lyα) = 2.4 × 10{sup 42} erg s{sup -1} (5σ) for z = 7.3 LAEs, about four times deeper than previous Subaru z ~ 7 studies, which allows us to reliably investigate the evolution of the Lyα luminosity function (LF) for the first time down to the same luminosity limit as the Subaru z = 3.1-6.6 LAE samples. Surprisingly, we only find three and four LAEs in the SXDS and COSMOS fields, respectively, while one would expect a total of ~65 LAEs in our survey in the case of no Lyα LF evolution from z = 6.6 to 7.3. We identify a decrease of the Lyα LF from z = 6.6 to 7.3 at the >90% confidence level from our z = 7.3 Lyα LF, with best-fit Schechter parameters of L{sub Lyα}{sup *} = 2.7{sub -1.2}{sup +8.0} × 10{sup 42} erg s{sup -1} and φ{sup *} = 3.7{sub -3.3}{sup +17.6} × 10{sup -4} Mpc{sup -3} for a fixed α = -1.5. Moreover, the evolution of the Lyα LF is clearly accelerated at z > 6.6 beyond the measurement uncertainties, including cosmic variance. Because no such accelerated evolution of the UV-continuum LF or the cosmic star formation rate (SFR) is found at z ~ 7, but is suggested only at z > 8, this accelerated Lyα LF evolution is explained by physical mechanisms different from a pure SFR decrease but related to Lyα production and escape in the process of cosmic reionization. Because a simple accelerating increase of intergalactic medium neutral hydrogen absorbing Lyα cannot be reconciled with Thomson scattering optical depth measurements from WMAP and Planck, our findings may support new physical pictures suggested by recent theoretical studies, such as the existence of HI clumpy clouds within cosmic ionized bubbles that selectively absorb Lyα and a large ionizing-photon escape fraction of galaxies causing weak Lyα emission.
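
    To make the link between the quoted Schechter parameters and an expected number of detections concrete, the sketch below integrates a Schechter function above the quoted 5σ depth and multiplies by a placeholder survey volume; the volume and completeness are assumptions, not the survey's actual values.

```python
# Sketch: number density above a luminosity limit for a Schechter LF, using the
# best-fit z = 7.3 parameters quoted above; the survey volume is a placeholder.
import numpy as np
from scipy import integrate

L_star = 2.7e42          # erg/s, best-fit L*_Lya at z = 7.3 (from the abstract)
phi_star = 3.7e-4        # Mpc^-3, best-fit phi*
alpha = -1.5             # fixed faint-end slope
L_lim = 2.4e42           # erg/s, 5-sigma survey depth quoted above

def schechter(L):
    x = L / L_star
    return (phi_star / L_star) * x**alpha * np.exp(-x)   # dn/dL in Mpc^-3 per (erg/s)

n_above_limit, _ = integrate.quad(schechter, L_lim, 100 * L_star)
survey_volume_mpc3 = 3.0e5                               # placeholder comoving volume, not the survey's
print(n_above_limit * survey_volume_mpc3, "expected LAEs (illustrative)")
```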

  12. Faint submillimeter galaxies revealed by multifield deep ALMA observations: number counts, spatial clustering, and a dark submillimeter line emitter

    SciTech Connect (OSTI)

    Ono, Yoshiaki; Ouchi, Masami; Momose, Rieko; Kurono, Yasutaka

    2014-11-01

    We present the statistics of faint submillimeter/millimeter galaxies (SMGs) and the serendipitous detection of a submillimeter/millimeter line emitter (SLE) with no multi-wavelength continuum counterpart revealed by deep ALMA observations. We identify faint SMGs with flux densities of 0.1-1.0 mJy in deep Band-6 and Band-7 maps of 10 independent fields that reduce cosmic variance effects. The differential number counts at 1.2 mm are found to increase with decreasing flux density down to 0.1 mJy. Our number counts indicate that the faint (0.1-1.0 mJy, or SFR{sub IR} ~ 30-300 M{sub sun} yr{sup -1}) SMGs contribute nearly half of the extragalactic background light (EBL), while the remaining half of the EBL is mostly contributed by very faint sources with flux densities of <0.1 mJy (SFR{sub IR} ≲ 30 M{sub sun} yr{sup -1}). We conduct a counts-in-cells analysis with the multifield ALMA data for the faint SMGs, and obtain a coarse estimate of the galaxy bias, b{sub g} < 4. The galaxy bias suggests that the dark halo masses of the faint SMGs are ≲ 7 × 10{sup 12} M{sub sun}, which is smaller than those of bright (>1 mJy) SMGs, but consistent with abundant high-z star-forming populations, such as sBzKs, LBGs, and LAEs. Finally, we report the serendipitous detection of SLE-1, which has no continuum counterparts in our 1.2 mm-band or multi-wavelength images, including ultra-deep HST/WFC3 and Spitzer data. The SLE has a significant line at 249.9 GHz with a signal-to-noise ratio of 7.1. If the SLE is not a spurious source made by the unknown systematic noise of ALMA, the strong upper limits of our multi-wavelength data suggest that the SLE would be a faint galaxy at z ≳ 6.
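
    A back-of-the-envelope version of the counts-in-cells bias estimate mentioned above is sketched below; the cell counts and the assumed dark-matter fluctuation amplitude are hypothetical, and the real analysis accounts for survey geometry and the redshift distribution.

```python
# Counts-in-cells bias sketch: the galaxy overdensity variance is the count variance
# with the Poisson (shot-noise) term removed, and the bias is its rms ratio to the
# dark-matter rms expected for the same cell size. All numbers below are hypothetical.
import numpy as np

counts = np.array([3, 0, 1, 5, 2, 0, 4, 1, 2, 3])            # faint-SMG counts in 10 fields (hypothetical)
mean = counts.mean()
var_galaxy = max(counts.var(ddof=1) - mean, 0.0) / mean**2   # shot-noise-corrected overdensity variance
sigma_dm = 0.25                                              # assumed dark-matter rms fluctuation per cell
bias = np.sqrt(var_galaxy) / sigma_dm
print(f"b_g ~ {bias:.1f}")
```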

  13. Project Management Plan for the Idaho National Engineering Laboratory Waste Isolation Pilot Plant Experimental Test Program

    SciTech Connect (OSTI)

    Connolly, M.J.; Sayer, D.L.

    1993-11-01

    EG&G Idaho, Inc. and Argonne National Laboratory-West (ANL-W) are participating in the Idaho National Engineering Laboratory's (INEL's) Waste Isolation Pilot Plant (WIPP) Experimental Test Program (WETP). The purpose of the INEL WETP is to provide chemical, physical, and radiochemical data on transuranic (TRU) waste to be stored at WIPP. The waste characterization data collected will be used to support the WIPP Performance Assessment (PA), development of the disposal No-Migration Variance Petition (NMVP), and the WIPP disposal decision. The PA is an analysis required by the Code of Federal Regulations (CFR), Title 40, Part 191 (40 CFR 191), which identifies the processes and events that may affect the disposal system (WIPP) and examines the effects of those processes and events on the performance of WIPP. An NMVP is required for the WIPP by 40 CFR 268 in order to dispose of land disposal restriction (LDR) mixed TRU waste in WIPP. It is anticipated that detailed Resource Conservation and Recovery Act (RCRA) waste characterization data for all INEL retrievably stored TRU waste to be stored in WIPP will be required for the NMVP. Waste characterization requirements for PA and RCRA may not necessarily be identical. Waste characterization requirements for the PA will be defined by Sandia National Laboratories. The requirements for RCRA are defined in 40 CFR 268, the WIPP RCRA Part B Application Waste Analysis Plan (WAP), and the WIPP Waste Characterization Program Plan (WWCP). This Project Management Plan (PMP) addresses only the characterization of the contact-handled (CH) TRU waste at the INEL. This document addresses all work for which EG&G Idaho is responsible concerning the INEL WETP. Even though EG&G Idaho has no responsibility for the work that ANL-W is performing, EG&G Idaho will maintain a current status and coordinate with ANL-W to ensure that the INEL, as a whole, is effectively and efficiently completing the requirements of the WETP.

  14. ANALYSIS OF THE TANK 6F FINAL CHARACTERIZATION SAMPLES-2012

    SciTech Connect (OSTI)

    Oji, L.; Diprete, D.; Coleman, C.; Hay, M.; Shine, G.

    2012-06-28

    The Savannah River National Laboratory (SRNL) was requested by Savannah River Remediation (SRR) to provide sample preparation and analysis of the Tank 6F final characterization samples to determine the residual tank inventory prior to grouting. Fourteen residual Tank 6F solid samples from three areas on the floor of the tank were collected and delivered to SRNL between May and August 2011. These Tank 6F samples were homogenized and combined into three composite samples based on a proportional compositing scheme, and the resulting composite samples were analyzed for radiological, chemical and elemental components. Additional measurements performed on the Tank 6F composite samples include bulk density and water leaching of the solids to account for water-soluble components. The composite Tank 6F samples were analyzed and the data reported in triplicate. Sufficient quality assurance standards and blanks were utilized to demonstrate adequate characterization of the Tank 6F samples. The main evaluation criteria were the target detection limits specified in the technical task request document. While many of the target detection limits were met for the species characterized for Tank 6F, some were not. In a few cases, the relatively high levels of radioactive species of the same element, or of a chemically similar element, precluded the ability to measure some isotopes to low levels. The isotopes whose detection limits were not met in all cases included Sn-126, Sb-126, Sb-126m, Eu-152, Cm-243 and Cf-249. SRNL, in conjunction with the customer, reviewed all of these cases and determined that the impacts of not meeting the target detection limits were acceptable. Based on the analyses of variance (ANOVA) for the inorganic constituents of Tank 6F, all the inorganic constituents displayed heterogeneity. The inorganic results demonstrated consistent differences across the composite samples: lowest concentrations for Composite Sample 1, intermediate concentrations for Composite Sample 2, and highest concentrations for Composite Sample 3. The Hg and Mo results suggest possible measurement outliers. However, the magnitudes of the differences between the Hg 95% upper confidence limit (UCL95) results with and without the outlier, and between the Mo UCL95 results with and without the outlier, do not appear to have practical significance. It is recommended to remove the potential measurement outliers; doing so is conservative in the sense of producing a higher UCL95 for Hg and Mo than if the potential outliers were included in the calculations. In contrast to the inorganic results, most of the radionuclides did not demonstrate heterogeneity among the three Tank 6F composite sample characterization results.
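
    The UCL95 statistic referred to above is a one-sided 95% upper confidence limit on a mean concentration; a minimal sketch for triplicate composite results is shown below, with hypothetical values.

```python
# One-sided 95% upper confidence limit (UCL95) on a mean from n results,
# using the Student-t quantile; the triplicate values are hypothetical.
import numpy as np
from scipy import stats

hg_results = np.array([12.1, 14.8, 13.5])       # mg/kg, hypothetical composite-sample means
n = len(hg_results)
mean, sd = hg_results.mean(), hg_results.std(ddof=1)
t95 = stats.t.ppf(0.95, df=n - 1)
ucl95 = mean + t95 * sd / np.sqrt(n)
print(f"UCL95 = {ucl95:.1f} mg/kg")
```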

  15. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    SciTech Connect (OSTI)

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-02-12

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and in aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units are subdivided uniformly by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE-simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable and, therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water-transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivities: 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.
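
    Tikhonov regularization, the device noted above for keeping interval hydraulic conductivities well behaved, can be illustrated with a ridge-regularized least-squares toy problem; the synthetic design matrix and preferred-value weighting below are for illustration and are far simpler than PEST's implementation.

```python
# Conceptual Tikhonov (ridge) regularization sketch: an underdetermined parameter-estimation
# problem is stabilized by penalizing departures from a preferred value. Purely synthetic data.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 50))            # 30 observations, 50 interval parameters (underdetermined)
k_true = np.ones(50)
d = A @ k_true + 0.05 * rng.normal(size=30)

lam = 1.0                                # regularization weight (assumed)
k_pref = np.ones(50)                     # preferred parameter value
# Minimize ||A k - d||^2 + lam * ||k - k_pref||^2 via the augmented normal equations.
lhs = A.T @ A + lam * np.eye(50)
rhs = A.T @ d + lam * k_pref
k_est = np.linalg.solve(lhs, rhs)
print(np.std(k_est))                     # regularized estimates cluster near the preferred value
```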

  16. Atmospheric Radiation Measurement Program Climate Research Facility Operations Quarterly Report January 1 - March 31, 2005

    SciTech Connect (OSTI)

    DL Sisterson

    2005-03-31

    Description. Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year dating back to 1998. The United States Department of Energy requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for this second quarter for the Southern Great Plains (SGP) site is 2,052 hours (0.95 × 2,160 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) site is 1,944 hours (0.90 × 2,160), and that for the Tropical Western Pacific (TWP) site is 1,836 hours (0.85 × 2,160). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the ACRF Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percent of data in the Archive represents the average percent of the time (24 hours per day, 90 days for this quarter) the instruments were operating this quarter.
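
    The operating statistics defined above reduce to simple arithmetic; a minimal sketch, with a hypothetical ACTUAL value, is:

```python
# OPSMAX and VARIANCE as defined above; the ACTUAL hours are hypothetical.
hours_in_quarter = 2160                   # 90 days x 24 hours (this quarter)
opsmax_sgp = 0.95 * hours_in_quarter      # 2,052 hours, uptime goal for the SGP site
actual_sgp = 1980.0                       # hypothetical achieved hours of operation
variance = 1 - actual_sgp / opsmax_sgp    # unplanned-downtime fraction
print(f"OPSMAX = {opsmax_sgp:.0f} h, VARIANCE = {variance:.3f}")
```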

  17. Atmospheric Radiation Measurement Program Climate Research Facility Operations Cumulative Quarterly Report October 1, 2003 - September 30, 2004

    SciTech Connect (OSTI)

    DL Sisterson

    2004-09-30

    Description. Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The United States Department of Energy requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The annual OPSMAX time for the Southern Great Plains (SGP) site is 8,322 hours per year (0.95 × 8,760, the number of hours in a year, not including leap years). The annual OPSMAX for the North Slope Alaska (NSA) site is 7,884 hours per year (0.90 × 8,760), and that for the Tropical Western Pacific (TWP) site is 7,446 hours per year (0.85 × 8,760). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the ACRF Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percent of data in the Archive represents the average percent of the time (24 hours per day, 365 days per year) the instruments were operating.

  18. Atmospheric Radiation Measurement Program Climate Research Facility Operations Quarterly Report April 1 - June 30, 2005

    SciTech Connect (OSTI)

    DL Sisterson

    2005-06-30

    Description. Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year dating back to 1998. The United States Department of Energy requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the third quarter for the Southern Great Plains (SGP) site is 2,074.8 hours (0.95 × 2,184 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) site is 1,965.6 hours (0.90 × 2,184), and that for the Tropical Western Pacific (TWP) site is 1,856.4 hours (0.85 × 2,184). The OPSMAX time for the ARM Mobile Facility (AMF) is 2,074.8 hours (0.95 × 2,184). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the ACRF Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percent of data in the Archive represents the average percent of the time (24 hours per day, 91 days for this quarter) the instruments were operating this quarter.

  19. Atmospheric Radiation Measurement Program Climate Research Facility Operations Quarterly Report July 1 - September 30, 2005

    SciTech Connect (OSTI)

    DL Sisterson

    2005-09-30

    Description. Individual raw data streams from instrumentation at the ACRF fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at PNNL for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year dating back to 1998. The DOE requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the fourth quarter for the Southern Great Plains (SGP) site is 2,097.6 hours (0.95 × 2,208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) site is 1,987.2 hours (0.90 × 2,208), and that for the Tropical Western Pacific (TWP) site is 1,876.8 hours (0.85 × 2,208). The OPSMAX time for the ARM Mobile Facility (AMF) is 2,097.6 hours (0.95 × 2,208). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the ACRF Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percent of data in the Archive represents the average percent of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter.

  20. Atmospheric Radiation Measurement Program Climate Research Facility Operations Quarterly Report July 1 - September 30, 2008

    SciTech Connect (OSTI)

    DL Sisterson

    2008-09-30

    Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the fourth quarter of FY 2008 for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 × 2,208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 × 2,208), and for the Tropical Western Pacific (TWP) locale is 1,876.80 hours (0.85 × 2,208). The OPSMAX time for the ARM Mobile Facility (AMF) is not reported this quarter because the data have not yet been released from China to the DMF for processing. The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter.