National Library of Energy BETA

Sample records for methodology sampling error

  1. Development of methodology to correct sampling error associated with FRM PM10 samplers 

    E-Print Network [OSTI]

    Chen, Jing

    2009-05-15

    to correct the sampling error associated with the FRM PM10 sampler: (1) wind tunnel testing facilities and protocol for experimental evaluation of samplers; (2) the variation of the oversampling ratios of FRM PM10 samplers for computational evaluation...

  2. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    SciTech Connect (OSTI)

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. 
Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could result in a $34 per acre cost difference for the fertilization. Because of these differences, better analytical methods, better sampling methods, or more samples may be needed to ensure that the soil measurements are truly representative of the field’s spatial variability.

  3. Design Methodology to trade off Power, Output Quality and Error Resiliency: Application to Color Interpolation Filtering

    E-Print Network [OSTI]

    Kambhampati, Subbarao

Design Methodology to trade off Power, Output Quality and Error Resiliency: Application to Color Interpolation Filtering. Abstract: Power dissipation and tolerance to process variations pose conflicting design requirements; sizing for process tolerance can be detrimental for power dissipation. However, for certain signal processing systems...

  4. A new application methodology of the Fourier transform for rational approximation of the complex error function

    E-Print Network [OSTI]

    S. M. Abrarov; B. M. Quine

    2015-11-03

This paper presents a new approach to applying the Fourier transform to the complex error function, resulting in an efficient rational approximation. Specifically, the computational test shows that with only $17$ summation terms the obtained rational approximation of the complex error function provides accuracy ${10^{-15}}$ over most of the domain of practical importance $0 \le x \le 40,000$ and ${10^{-4}} \le y \le {10^2}$ required for HITRAN-based spectroscopic applications. Since the rational approximation does not contain trigonometric or exponential functions dependent upon the input parameters $x$ and $y$, it is rapid in computation. This example demonstrates that the considered methodology of the Fourier transform may be advantageous in practical applications.
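The function being approximated here is the Faddeeva function $w(z) = e^{-z^2}\,\mathrm{erfc}(-iz)$. The paper's rational approximation is not reproduced in this snippet, but a quick stdlib-only sanity check of the target function, using the pure-imaginary-axis identity $w(iy) = e^{y^2}\,\mathrm{erfc}(y)$, might look like the following (the helper name is illustrative, not from the paper):

```python
import math

def faddeeva_imag_axis(y):
    """w(iy) = exp(y^2) * erfc(y): the Faddeeva function on the imaginary axis."""
    return math.exp(y * y) * math.erfc(y)

# w(0) = erfc(0) = 1, and w(iy) decays monotonically toward 0 as y grows.
print(faddeeva_imag_axis(0.0))                                  # 1.0
print(faddeeva_imag_axis(1.0) < faddeeva_imag_axis(0.5) < 1.0)  # True
```

Any candidate rational approximation can be validated against this closed form along the imaginary axis before testing the full complex domain.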

  5. Sample size in factor analysis: The role of model error

    E-Print Network [OSTI]

    MacCallum, R. C.; Widaman, K. F.; Preacher, Kristopher J.; Hong, Sehee

    2001-01-01

$\Sigma_{yy} = \Lambda \Phi \Lambda' + \Theta^2$ (Equation 2), where $\Sigma_{yy}$ is the $p \times p$ population covariance matrix for the measured variables and $\Phi$ is the $r \times r$ population correlation matrix for the common factors (assuming factors are standardized... in the population). This is the standard version of the common factor model for a population covariance matrix. Following similar algebraic procedures, we could derive a structure for a sample covariance matrix, $C_{yy}$. However, in such a derivation we can...
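The common factor model $\Sigma_{yy} = \Lambda \Phi \Lambda' + \Theta^2$ can be checked numerically. A minimal sketch with made-up loadings (every number below is illustrative, not from the paper): three measured variables, two correlated factors, and uniquenesses chosen so the implied variances equal 1:

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

# Hypothetical p = 3, r = 2 model: Sigma = Lambda Phi Lambda' + Theta^2
Lam = [[0.8, 0.0],
       [0.7, 0.0],
       [0.0, 0.6]]
Phi = [[1.0, 0.3],
       [0.3, 1.0]]
# Communality of each variable, then uniquenesses that standardize the variances
communality = [matmul(matmul([row], Phi), transpose([row]))[0][0] for row in Lam]
Theta2 = [1.0 - h for h in communality]

Sigma = matmul(matmul(Lam, Phi), transpose(Lam))
for i in range(3):
    Sigma[i][i] += Theta2[i]

print([round(Sigma[i][i], 3) for i in range(3)])  # [1.0, 1.0, 1.0]
print(round(Sigma[0][2], 3))                      # 0.144 (= 0.8 * 0.3 * 0.6)
```

The off-diagonal entries come entirely from the common factors, which is why sampling error in the observed correlations propagates into the estimated loadings.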

  6. Methodology to quantify leaks in aerosol sampling system components 

    E-Print Network [OSTI]

    Vijayaraghavan, Vishnu Karthik

    2004-11-15

    and that approach was used to measure the sealing integrity of a CAM and two kinds of filter holders. The methodology involves use of sulfur hexafluoride as a tracer gas with the device being tested operated under dynamic flow conditions. The leak rates...

  7. DIESEL AEROSOL SAMPLING METHODOLOGY -CRC E-43 EXECUTIVE SUMMARY

    E-Print Network [OSTI]

    Minnesota, University of

...was to develop diesel aerosol sampling methods for the laboratory that would produce particle size distributions comparable to those formed during dilution with ambient air, and that could be used to evaluate and select basic options or to perform feasibility studies or preliminary assessments. A small fraction of these nuclei-mode particles contains solid ash from lube oil...

  8. Quantifying Errors Associated with Satellite Sampling of Offshore Wind S.C. Pryor1,2

    E-Print Network [OSTI]

Quantifying Errors Associated with Satellite Sampling of Offshore Wind Speeds. S.C. Pryor (Indiana University, Bloomington, IN 47405, USA). Satellites are an attractive proposition for measuring wind speeds over the oceans because in principle they also offer...

  9. Error Detection Techniques Applicable in an Architecture Framework and Design Methodology for

    E-Print Network [OSTI]

    Ould Ahmedou, Mohameden

...process/environmental variations and external radiation causing so-called soft errors. Overall, these trends result in a severe... in analogy to the IP library of the functional layer, shall eventually represent an autonomic IP library (AE...

  10. Riebe et al., p. 1 Appendix 1: Sampling Rationale and Cosmogenic Nuclide Methodology

    E-Print Network [OSTI]

    Kirchner, James W.

    Riebe et al., p. 1 Appendix 1: Sampling Rationale and Cosmogenic Nuclide Methodology (Supplemental methods for inferring whole-catchment denudation rates from cosmogenic nuclide concentrations in the quartz fraction of stream sediment samples, (9) the cosmogenic nuclide production rates that we used, (10

  11. Self-Test Methodology for At-Speed Test of Crosstalk in Chip Interconnects The effect of crosstalk errors is most significant in high-

    E-Print Network [OSTI]

    California at San Diego, University of

    Self-Test Methodology for At-Speed Test of Crosstalk in Chip Interconnects Abstract The effect of crosstalk errors is most significant in high- performance circuits, mandating at-speed testing for crosstalk defects. This paper describes a self-test methodology that we have developed to enable on-chip at

  12. The U-tube sampling methodology and real-time analysis of geofluids

    SciTech Connect (OSTI)

    Freifeld, Barry; Perkins, Ernie; Underschultz, James; Boreham, Chris

    2009-03-01

    The U-tube geochemical sampling methodology, an extension of the porous cup technique proposed by Wood [1973], provides minimally contaminated aliquots of multiphase fluids from deep reservoirs and allows for accurate determination of dissolved gas composition. The initial deployment of the U-tube during the Frio Brine Pilot CO{sub 2} storage experiment, Liberty County, Texas, obtained representative samples of brine and supercritical CO{sub 2} from a depth of 1.5 km. A quadrupole mass spectrometer provided real-time analysis of dissolved gas composition. Since the initial demonstration, the U-tube has been deployed for (1) sampling of fluids down gradient of the proposed Yucca Mountain High-Level Waste Repository, Armagosa Valley, Nevada (2) acquiring fluid samples beneath permafrost in Nunuvut Territory, Canada, and (3) at a CO{sub 2} storage demonstration project within a depleted gas reservoir, Otway Basin, Victoria, Australia. The addition of in-line high-pressure pH and EC sensors allows for continuous monitoring of fluid during sample collection. Difficulties have arisen during U-tube sampling, such as blockage of sample lines from naturally occurring waxes or from freezing conditions; however, workarounds such as solvent flushing or heating have been used to address these problems. The U-tube methodology has proven to be robust, and with careful consideration of the constraints and limitations, can provide high quality geochemical samples.

  13. Estimation of the error for small-sample optimal binary filter design using prior knowledge 

    E-Print Network [OSTI]

    Sabbagh, David L

    1999-01-01

    Optimal binary filters estimate an unobserved ideal quantity from observed quantities. Optimality is with respect to some error criterion, which is usually mean absolute error MAE (or equivalently mean square error) for the binary values. Both...
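As a toy illustration of the idea (not the paper's small-sample estimator): with binary windows as observations, the MAE-optimal filter outputs 1 exactly when the estimated conditional probability $P(Y{=}1 \mid \text{window})$ exceeds 1/2, which reduces to a per-window majority vote over training pairs. All data below are invented:

```python
from collections import defaultdict

# Hypothetical training pairs: (observed 3-pixel window, ideal pixel value)
pairs = [((1, 1, 1), 1), ((1, 1, 1), 1), ((1, 1, 1), 0),
         ((0, 0, 0), 0), ((0, 0, 0), 0), ((0, 1, 0), 1),
         ((0, 1, 0), 1), ((0, 1, 0), 0), ((1, 0, 1), 0)]

counts = defaultdict(lambda: [0, 0])   # window -> [count of Y=0, count of Y=1]
for window, ideal in pairs:
    counts[window][ideal] += 1

# MAE-optimal decision: output 1 iff P(Y=1 | window) > 1/2 (majority vote)
filt = {w: int(c[1] > c[0]) for w, c in counts.items()}
print(filt[(1, 1, 1)], filt[(0, 0, 0)], filt[(0, 1, 0)])  # 1 0 1
```

With few samples per window the vote is noisy, which is exactly the small-sample estimation-error problem the thesis addresses with prior knowledge.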

  14. Cost-Sensitive Learning vs. Sampling: Which is Best for Handling Unbalanced Classes with Unequal Error Costs?

    E-Print Network [OSTI]

    Weiss, Gary

    Cost-Sensitive Learning vs. Sampling: Which is Best for Handling Unbalanced Classes with Unequal Error Costs? Gary M. Weiss, Kate McCarthy, and Bibi Zabar Department of Computer and Information Science cost than the majority class). In this paper we compare three methods for dealing with data that has
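The equivalence underlying that comparison is easy to see in miniature: weighting each false negative by an integer cost c gives the same empirical cost as duplicating each minority (positive) example c times and counting unit-cost errors. A sketch with invented data and a fixed threshold rule:

```python
# Hypothetical 1-D data: (feature, label); classifier predicts 1 iff x > 0.5
data = [(0.2, 0), (0.4, 0), (0.6, 0), (0.3, 1), (0.7, 1), (0.9, 1)]
C_FN = 3   # cost of missing a positive (minority class)
C_FP = 1   # cost of a false alarm

predict = lambda x: int(x > 0.5)

# Cost-sensitive learning: weight each error by its class-dependent cost
weighted = sum((C_FN if y == 1 else C_FP)
               for x, y in data if predict(x) != y)

# Sampling: replicate each positive C_FN times, then count unit-cost errors
oversampled = [(x, y) for x, y in data for _ in range(C_FN if y == 1 else 1)]
unit = sum(1 for x, y in oversampled if predict(x) != y)

print(weighted, unit)  # 4 4: identical empirical cost
```

In practice the two approaches can still differ, since oversampling changes what the learner fits, not just how errors are scored, which is the question the paper studies empirically.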

  15. On the Relationship Between Generalization Error, Hypothesis Complexity, and Sample Complexity for Radial Basis Functions

    E-Print Network [OSTI]

    Niyogi, Partha

    1994-02-01

    In this paper, we bound the generalization error of a class of Radial Basis Function networks, for certain well defined function learning tasks, in terms of the number of parameters and number of examples. We show ...

  16. Sampling and Census 2000--Methodological Issues: Report to the Donner Foundation

    E-Print Network [OSTI]

...in hopes of resolving ongoing technical disputes. Our primary mission is to discuss key methodological...

  17. DEVELOPMENT OF METHODOLOGY AND FIELD DEPLOYABLE SAMPLING TOOLS FOR SPENT NUCLEAR FUEL INTERROGATION IN LIQUID STORAGE

    SciTech Connect (OSTI)

    Berry, T.; Milliken, C.; Martinez-Rodriguez, M.; Hathcock, D.; Heitkamp, M.

    2012-06-04

This project developed methodology and field deployable tools (test kits) to analyze the chemical and microbiological condition of the fuel storage medium and determine the oxide thickness on the spent fuel basin materials. The overall objective of this project was to determine the amount of time fuel has spent in a storage basin and thereby assess whether the operation of the reactor and storage basin is consistent with safeguard declarations or expectations. This project developed and validated forensic tools that can be used to predict the age and condition of spent nuclear fuels stored in liquid basins based on key physical, chemical and microbiological basin characteristics. Key parameters were identified based on a literature review, the parameters were used to design test cells for corrosion analyses, tools were purchased to analyze the key parameters, and these were used to characterize an active spent fuel basin, the Savannah River Site (SRS) L-Area basin. The key parameters identified in the literature review included chloride concentration, conductivity, and total organic carbon level. Focus was also placed on aluminum-based cladding because of its application to weapons production. The literature review was helpful in identifying important parameters, but relationships between these parameters and corrosion rates were not available. Bench scale test systems were designed, operated, harvested, and analyzed to determine relationships between corrosion rates and water chemistry and microbiological conditions. The data from the bench scale system indicated that corrosion rates were dependent on total organic carbon levels and chloride concentrations. The highest corrosion rates were observed in test cells amended with sediment, a large microbial inoculum and an organic carbon source. A complete characterization test kit was field tested to characterize the SRS L-Area spent fuel basin.
The sampling kit consisted of a TOC analyzer, a YSI multiprobe, and a thickness probe. The tools were field tested to determine their ease of use, reliability, and determine the quality of data that each tool could provide. Characterization was done over a two day period in June 2011, and confirmed that the L Area basin is a well operated facility with low corrosion potential.

  18. Dynamic Planning and control Methodology : understanding and managing iterative error and change cycles in large-scale concurrent design and construction projects

    E-Print Network [OSTI]

    Lee, Sang Hyun, 1973-

    2006-01-01

    Construction projects are uncertain and complex in nature. One of the major driving forces that may account for these characteristics is iterative cycles caused by errors and changes. Errors and changes worsen project ...

  19. Error detection method

    DOE Patents [OSTI]

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
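The detection principle in the patent, run a deterministic stress algorithm and compare its output against a reference run, can be mimicked in a few lines. The fault injection here is simulated in software purely to show the comparison step; everything below is an illustration, not the patented implementation:

```python
def stress_kernel(n, inject_fault=False):
    """Deterministic arithmetic loop; a real version would also heat the CPU."""
    acc = 0
    for i in range(n):
        acc = (acc * 31 + i) % 1_000_003
        if inject_fault and i == n // 2:
            acc ^= 1   # simulated single-bit hardware upset mid-run
    return acc

reference = stress_kernel(100_000)
healthy = stress_kernel(100_000)
faulty = stress_kernel(100_000, inject_fault=True)

print(healthy == reference)  # True: no error detected
print(faulty == reference)   # False: the end-of-run comparison catches it
```

Because each loop step is an invertible map modulo a prime, a single bit flip anywhere in the run propagates to the final output, which is why comparing only the final values suffices to detect an error that occurred mid-run.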

  20. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    SciTech Connect (OSTI)

    Shine, E. P.; Poirier, M. R.

    2013-10-29

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. 
Nonetheless, the engineers and statisticians used carefully thought out designs that systematically and economically provided plans for data collection from the DWPF process. Key shared features of the sampling designs used at DWPF and the Gy sampling methodology were the specification of a standard for sample representativeness, an investigation that produced data from the process to study the sampling function, and a decision framework used to assess whether the specification was met based on the data. Without going into detail with regard to the seven errors identified by Pierre Gy, as excellent summaries are readily available such as Pitard [1989] and Smith [2001], SRS engineers understood, for example, that samplers can be biased (Gy's extraction error), and developed plans to mitigate those biases. Experiments that compared installed samplers with more representative samples obtained directly from the tank may not have resulted in systematically partitioning sampling errors into the now well-known error categories of Gy, but did provide overall information on the suitability of sampling systems. Most of the designs in this report are related to the DWPF vessels, not the large SRS Tank Farm tanks. Samples from the DWPF Slurry Mix Evaporator (SME), which contains the feed to the DWPF melter, are characterized using standardized analytical methods with known uncertainty. The analytical error is combined with the established error from sampling and processing in DWPF to determine the melter feed composition. This composition is used with the known uncertainty of the models in the Product Composition Control System (PCCS) to ensure that the wasteform that is produced is comfortably within the acceptable processing and product performance region. 
Having the advantage of many years of processing that meets the waste glass product acceptance criteria, the DWPF process has provided a considerable amount of data about itself in addition to the data from many special studies. Demonstrating representative sampling directly from the large Tank Farm tanks is a difficult, if not unsolvable enterprise due to li

  1. Weak Lensing of the CMB: Sampling Errors on B-modes (arXiv:astro-ph/0402442v2, 27 Feb 2004)

    E-Print Network [OSTI]

    Hu, Wayne

...such as the neutrino mass and dark energy equation of state. The net sample variance on the small-scale B modes... the dark side of the universe, namely the dark energy and neutrino-dependent growth of structure... also provides the key to mapping the dark matter [6, 7] and hence the separation of the lensing...

  2. Photometric Redshifts and Photometry Errors

    E-Print Network [OSTI]

    D. Wittman; P. Riechers; V. E. Margoniner

    2007-09-21

We examine the impact of non-Gaussian photometry errors on photometric redshift performance. We find that they greatly increase the scatter, but this can be mitigated to some extent by incorporating the correct noise model into the photometric redshift estimation process. However, the remaining scatter is still equivalent to that of a much shallower survey with Gaussian photometry errors. We also estimate the impact of non-Gaussian errors on the spectroscopic sample size required to verify the photometric redshift rms scatter to a given precision. Even with Gaussian photometry errors, photometric redshift errors are sufficiently non-Gaussian to require an order of magnitude larger sample than simple Gaussian statistics would indicate. The requirements increase from this baseline if non-Gaussian photometry errors are included. Again the impact can be mitigated by incorporating the correct noise model, but only to the equivalent of a survey with much larger Gaussian photometry errors. However, these requirements may well be overestimates because they are based on a need to know the rms, which is particularly sensitive to tails. Other parametrizations of the distribution may require smaller samples.
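The scaling behind that sample-size estimate can be made concrete. For a distribution with excess kurtosis $\kappa$, the relative standard error of the sample rms is approximately $\sqrt{(2+\kappa)/(4n)}$, so the $n$ required for a target precision grows by a factor of $(2+\kappa)/2$ over the Gaussian case. The kurtosis value below is hypothetical, chosen only to show an order-of-magnitude increase:

```python
import math

def n_required(rel_precision, excess_kurtosis=0.0):
    """Sample size so that SE(rms)/rms ~ rel_precision, using
    Var(s^2)/sigma^4 ~ (2 + kappa)/n and the delta method."""
    return math.ceil((2.0 + excess_kurtosis) / (4.0 * rel_precision ** 2))

print(n_required(0.01))                        # 5000 for Gaussian errors
print(n_required(0.01, excess_kurtosis=18.0))  # 50000: 10x more for heavy tails
```

This is exactly why rms-based requirements are "particularly sensitive to tails": $\kappa$ multiplies the sample size directly.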

  3. Integrated fiducial sample mount and software for correlated microscopy

    SciTech Connect (OSTI)

    Timothy R McJunkin; Jill R. Scott; Tammy L. Trowbridge; Karen E. Wright

    2014-02-01

    A novel design sample mount with integrated fiducials and software for assisting operators in easily and efficiently locating points of interest established in previous analytical sessions is described. The sample holder and software were evaluated with experiments to demonstrate the utility and ease of finding the same points of interest in two different microscopy instruments. Also, numerical analysis of expected errors in determining the same position with errors unbiased by a human operator was performed. Based on the results, issues related to acquiring reproducibility and best practices for using the sample mount and software were identified. Overall, the sample mount methodology allows data to be efficiently and easily collected on different instruments for the same sample location.

  4. Representation of the Fourier transform as a weighted sum of the complex error functions

    E-Print Network [OSTI]

    S. M. Abrarov; B. M. Quine

    2015-08-05

In this paper we show that a methodology based on sampling with a Gaussian function of the kind $h\,e^{-(t/c)^2}/(c\sqrt{\pi})$, where $c$ and $h$ are some constants, leads to a Fourier transform that can be represented as a weighted sum of the complex error functions. Due to a remarkable property of the complex error function, the Fourier transform based on the weighted sum can be significantly simplified and expressed in terms of a damping harmonic series. In contrast to the conventional discrete Fourier transform, this methodology results in a non-periodic wavelet approximation. Consequently, the proposed approach may be useful and convenient in algorithmic implementation.

  5. METHODOLOGICAL REPORT MICHIGAN STATE UNIVERSITY

    E-Print Network [OSTI]

    Riley, Shawn J.

. The only exception to this was that the Committee wished to sample the city of Detroit as a stratum (... Wayne [excluding Detroit], 7. Detroit City).

        REGION            Number of Cases   Margin of Sampling Error
        Southwest         108               + 9.5%
        Southeast         271               + 6.0%
        Detroit           120               + 9.0%
        Statewide Total   1,001             + 3.1%

    As a result...
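For a simple random sample, the reported margins are consistent with the standard 95% worst-case formula $1.96\sqrt{0.25/n}$; stratified designs adjust this slightly, which likely explains any small rounding differences:

```python
import math

def margin_of_error(n, confidence_z=1.96):
    """95% margin of sampling error for a proportion, worst case p = 0.5."""
    return confidence_z * math.sqrt(0.25 / n)

print(round(100 * margin_of_error(1001), 1))  # 3.1 (statewide total, n = 1,001)
print(round(100 * margin_of_error(271), 1))   # 6.0 (n = 271)
```

The smaller regional strata necessarily carry the wider margins, since the margin shrinks only as $1/\sqrt{n}$.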

  6. Analyzing sampling methodologies in semiconductor manufacturing

    E-Print Network [OSTI]

    Anthony, Richard M. (Richard Morgan), 1971-

    2004-01-01

    This thesis describes work completed during an internship assignment at Intel Corporation's process development and wafer fabrication manufacturing facility in Santa Clara, California. At the highest level, this work relates ...

  7. Quantitative, Comparable Coherent Anti-Stokes Raman Scattering (CARS) Spectroscopy: Correcting Errors in Phase Retrieval

    E-Print Network [OSTI]

    Camp, Charles H; Cicerone, Marcus T

    2015-01-01

    Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending ...

  8. Human error contribution to nuclear materials-handling events

    E-Print Network [OSTI]

    Sutton, Bradley (Bradley Jordan)

    2007-01-01

    This thesis analyzes a sample of 15 fuel-handling events from the past ten years at commercial nuclear reactors with significant human error contributions in order to detail the contribution of human error to fuel-handling ...

  9. Monte Carlo errors with less errors

    E-Print Network [OSTI]

    Ulli Wolff

    2006-11-29

We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable to benchmark efficiencies of simulation algorithms with regard to specific observables of interest. A Matlab code is offered for download that implements the method. It can also combine independent runs (replica), allowing one to judge their consistency.
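The idea, estimate the autocorrelation function, sum it into an integrated autocorrelation time $\tau_{\rm int}$, and inflate the naive error bar by $\sqrt{2\tau_{\rm int}}$, can be sketched as follows. This is a simplified fixed-window version, not Wolff's adaptive-window Matlab code; the AR(1) test chain has a known $\tau_{\rm int} = (1+\rho)/(2(1-\rho)) = 1.5$ for $\rho = 0.5$:

```python
import random

# AR(1) test chain with rho = 0.5, whose exact tau_int = (1+rho)/(2(1-rho)) = 1.5
random.seed(7)
rho, n = 0.5, 50_000
x, chain = 0.0, []
for _ in range(n):
    x = rho * x + random.gauss(0.0, 1.0)
    chain.append(x)

mean = sum(chain) / n
d = [v - mean for v in chain]
var = sum(v * v for v in d) / n

# Integrated autocorrelation time, summed over a fixed window W
W = 50
tau_int = 0.5
for t in range(1, W + 1):
    gamma_t = sum(d[i] * d[i + t] for i in range(n - t)) / (n - t)
    tau_int += gamma_t / var

naive_err = (var / n) ** 0.5                      # wrong for correlated data
corrected_err = (2.0 * tau_int * var / n) ** 0.5  # honest error bar on the mean
print(round(tau_int, 2))  # should land near the exact value 1.5
```

Wolff's method chooses the window W adaptively to balance truncation bias against noise; the fixed window here is the simplest stand-in for that step.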

  10. QUANTUM ERROR Osbert Bastani

    E-Print Network [OSTI]

Reversible (unitary) gates, ancillary qubits, controlled gates (cX, cZ), measurement... Decoding: use ancillary bits to determine what error occurred; an ancillary bit is set to 0 if the first two bits are equal, and set to 1 if not...

  11. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect (OSTI)

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  12. Thermodynamics of error correction

    E-Print Network [OSTI]

    Pablo Sartori; Simone Pigolotti

    2015-04-24

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and dissipated work of the process. Its derivation is based on the second law of thermodynamics, hence its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  13. Quantum error control codes 

    E-Print Network [OSTI]

    Abdelhamid Awad Aly Ahmed, Sala

    2008-10-10

by SALAH ABDELHAMID AWAD ALY AHMED. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY, May 2008. Major Subject: Computer Science. ... Members: Mahmoud M. El-Halwagi, Anxiao (Andrew) Jiang, Rabi N. Mahapatra; Head of Department: Valerie Taylor. ABSTRACT: Quantum Error Control Codes. (May 2008) Salah Abdelhamid Awad Aly Ahmed, B.Sc., Mansoura...

  14. Modeling of Diesel Exhaust Systems: A methodology to better simulate soot reactivity

    Broader source: Energy.gov [DOE]

    Discussed development of a methodology for creating accurate soot models for soot samples from various origins with minimal characterization

  15. DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR

    E-Print Network [OSTI]

    Oldenburg, Douglas W.

DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR ANALYSIS... for the different solutions didn't even overlap. Introduction: A discrimination and classification strategy... (UXOs dug per UXO). The discrimination and classification methodology depends on the magnitude of the recov...

  16. Methodological Research Future Work

    E-Print Network [OSTI]

    Wolfe, Patrick J.

Outline: Background, Methodological Research, Results, Future Work. New Dataset 1878; PCA for 1000 rmfs; Quasar Analysis; Doubly-intractable Distribution; Other Calibration Uncertainty...

  17. Error Dynamics: The Dynamic Emergence of Error Avoidance and

    E-Print Network [OSTI]

    Bickhard, Mark H.

. Standard such notions are, however, arguably limited, being based on untenable models... Learning about error and handling error knowledge constitute a complex major theme in evolution... Avoiding Error: the central theme is a progressive elaboration of kinds of dynamics that manage...

  18. FNR 3410C Natural Resource Sampling FNR 3410C -NATURAL RESOURCE SAMPLING

    E-Print Network [OSTI]

    Hill, Jeffrey E.

of sampling. Design of cost-effective sample surveys. Sampling methodology applicable... The course begins with a review of elementary statistics and continues with specific

  19. DOE Systems Engineering Methodology

    Office of Environmental Management (EM)

    Systems Engineering Methodology (SEM) In-Stage Assessment Process Guide Version 3 September 2002 U.S. Department of Energy Office of the Chief Information Officer In-Stage...

  20. Register file soft error recovery

    DOE Patents [OSTI]

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
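    The mirror-and-recover scheme described above can be sketched in software. This is a hedged analogy, not the patented hardware: the class name, the per-entry parity check, and the read-time copy-back below are illustrative stand-ins for the circuit-level error detection and the inserted recovery instruction.

```python
# Software analogy of mirrored-register soft-error recovery (all names are
# illustrative; the patent describes a hardware pipeline, not this class).

class MirroredRegisterFile:
    def __init__(self, size):
        self.primary = [0] * size   # first register file
        self.mirror = [0] * size    # second register file mirroring the first
        self.parity = [0] * size    # per-entry parity used for error detection

    def write(self, idx, value):
        # Writes update both copies plus the stored parity bit.
        self.primary[idx] = value
        self.mirror[idx] = value
        self.parity[idx] = bin(value).count("1") % 2

    def corrupt(self, idx, bit):
        # Simulate a single-event upset flipping one bit of the primary copy.
        self.primary[idx] ^= (1 << bit)

    def read(self, idx):
        # On read, parity mismatch triggers "recovery": the corrupted primary
        # entry is replaced with the copy from the mirror before returning.
        value = self.primary[idx]
        if bin(value).count("1") % 2 != self.parity[idx]:
            value = self.mirror[idx]
            self.primary[idx] = value
        return value
```

    A single-bit flip injected with `corrupt` is detected on the next `read` and silently repaired from the mirror, which is the essence of the mechanism the abstract describes.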

  1. New Methodology for Natural Gas Production Estimates

    Reports and Publications (EIA)

    2010-01-01

    A new methodology is implemented with the monthly natural gas production estimates from the EIA-914 survey this month. The estimates, to be released April 29, 2010, include revisions for all of 2009. The fundamental changes in the new process include the timeliness of the historical data used for estimation and the frequency of sample updates, both of which are improved.

  2. Design methodology for tracking certain and uncertain non-minimum phase systems 

    E-Print Network [OSTI]

    DeVoucalla, George David

    1997-01-01

    A design methodology is developed for obtaining tracking controllers for non-minimum phase systems. The discussion centers around a solution for the error equation E(s) = [1 - H(s)] Y_d(s). Tracking error is eliminated by making [1 - H(s)] orthogonal...

  3. PRECIPITATION DOWNSCALING: METHODOLOGIES AND

    E-Print Network [OSTI]

    Foufoula-Georgiou, Efi

    PRECIPITATION DOWNSCALING: METHODOLOGIES AND HYDROLOGIC APPLICATIONS. Efi Foufoula-Georgiou, St. ... PREMISES OF STATISTICAL DOWNSCALING: Precipitation exhibits space-time variability ... There is substantial evidence to suggest that despite the very complex patterns of precipitation ...

  4. Use of Forward Sensitivity Analysis Method to Improve Code Scaling, Applicability, and Uncertainty (CSAU) Methodology

    SciTech Connect (OSTI)

    Haihua Zhao; Vincent A. Mousseau; Nam T. Dinh

    2010-10-01

    The Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Based on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and for power uprates of existing LWRs. Despite these successes, several aspects of CSAU have been criticized: (1) subjective judgment in the PIRT process; (2) high cost, due to heavy reliance on a large experimental database, many expert man-years of work, and very high computational overhead; (3) mixing of numerical errors with other uncertainties; (4) grid dependence, and use of the same numerical grids for both scaled experiments and real plant applications; and (5) user effects. Although a large amount of effort has gone into improving the CSAU methodology, these issues persist. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the derivative of the physical solution with respect to any constant parameter). When parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: (1) quantifying numerical errors: new codes that are fully implicit and of higher-order accuracy can run much faster, with numerical errors quantified by FSA; (2) quantitative PIRT (Q-PIRT) to reduce subjective judgment and improve efficiency: treat numerical errors as special sensitivities alongside other physical uncertainties, and retain only parameters whose uncertainties have large effects on design criteria;
    (3) greatly reducing the computational cost of uncertainty quantification by (a) choosing optimized time steps and spatial sizes and (b) using gradient information (sensitivity results) to reduce the number of samples; and (4) allowing grid independence for scaled integral effect test (IET) simulations and real plant applications: (a) eliminate numerical uncertainty in scaling, (b) reduce experimental cost by allowing smaller scaled IETs, and (c) eliminate user effects. This paper reviews the issues with the current CSAU, introduces FSA, discusses a potential Q-PIRT process, and shows simple examples of FSA. Finally, the general research directions and requirements for using FSA in a system analysis code are discussed.
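    The forward sensitivity idea can be illustrated on a scalar ODE. The model dy/dt = -k*y, the explicit Euler integrator, and every numerical value below are invented for illustration; the point is only that the sensitivity s = dy/dk is integrated alongside the state using ds/dt = (∂f/∂y)·s + ∂f/∂k, so the derivative comes out of the same solve that produces the solution.

```python
# Forward sensitivity analysis (FSA) sketch for dy/dt = f(y; k) = -k*y.
# The sensitivity s = dy/dk obeys ds/dt = (df/dy)*s + df/dk = -k*s - y.
import math

def integrate_with_sensitivity(k=0.5, y0=1.0, t_end=2.0, dt=1e-4):
    y, s = y0, 0.0          # sensitivity starts at zero: y(0) does not depend on k
    for _ in range(int(round(t_end / dt))):
        dy = -k * y         # state equation
        ds = -k * s - y     # forward sensitivity equation
        y += dt * dy
        s += dt * ds
    return y, s

y, s = integrate_with_sensitivity()
# Analytic reference for this toy model: y = y0*exp(-k*t), dy/dk = -t*y0*exp(-k*t)
```

    For this model the sensitivity has a closed form, so the numerical s can be checked against -t·y0·exp(-kt); in a real system code the same augmented integration supplies gradients that a sampling-based uncertainty study would otherwise have to estimate by finite differences.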

  5. Pressure Change Measurement Leak Testing Errors

    SciTech Connect (OSTI)

    Pryor, Jeff M; Walker, William C

    2014-01-01

    A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, easy-to-apply evaluation method. While this method may be fairly quick to conduct and requires only simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monoatomic or diatomic gases; however, the same principles can be applied to polyatomic gases, or to liquid flow-rate tests, with the formulas altered for those types of tests using the same methodology.
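    One of the most common errors the abstract alludes to is reading a temperature-driven pressure drop as a leak. A hedged sketch of the compensation, using only the ideal gas law (PV = nRT) with invented numbers:

```python
# Temperature-compensated leak rate from a pressure change test.
# All volumes, pressures, and times below are invented for illustration.
R = 8.314462618  # J/(mol K), universal gas constant

def moles(p_pa, v_m3, t_k):
    # Ideal gas law: n = PV/(RT)
    return p_pa * v_m3 / (R * t_k)

def leak_rate_mol_per_s(p1, t1, p2, t2, volume, elapsed_s):
    # Compare gas inventories (mol) at the two readings rather than raw
    # pressures, so a temperature change is not misread as leakage.
    return (moles(p1, volume, t1) - moles(p2, volume, t2)) / elapsed_s

# A pressure drop caused purely by cooling (P/T held constant) is not a leak:
rate = leak_rate_mol_per_s(p1=200000.0, t1=300.0,
                           p2=200000.0 * 295.0 / 300.0, t2=295.0,
                           volume=0.1, elapsed_s=3600.0)
```

    Here the 3.3 kPa pressure drop comes entirely from the 5 K cooling, and the computed leak rate is zero; a naive ΔP-only calculation would report a substantial false leak.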

  6. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
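    One common way PCR fidelity is normalized for comparison (not necessarily the exact analysis used in this study) is errors per base per template doubling, since observed mutations accumulate over the doublings of the amplification. The counts below are invented:

```python
# Errors per base per template doubling, a conventional PCR fidelity metric.
# mutations, bases_sequenced, and fold_amplification are illustrative numbers.
import math

def pcr_error_rate(mutations, bases_sequenced, fold_amplification):
    # Number of template doublings d satisfies 2**d = fold amplification.
    doublings = math.log2(fold_amplification)
    return mutations / (bases_sequenced * doublings)

# e.g. 120 mutations found in 500 kb of sequenced clones after ~2^20-fold
# amplification (20 doublings):
rate = pcr_error_rate(mutations=120, bases_sequenced=500000,
                      fold_amplification=2**20)
```

    Dividing by the number of doublings is what makes rates comparable across experiments with different cycle counts, which is exactly the kind of methodological difference the abstract says confounds study-to-study comparisons.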

  7. BASF's Energy Survey Methodology 

    E-Print Network [OSTI]

    Theising, T. R.

    2005-01-01

    BASF Corporation operates several dozen manufacturing Sites within NAFTA and periodically conducts Energy Surveys at each Site. Although these manufacturing sites represent a variety of industries, BASF has found success... The discovery phase is the 'Energy Survey' itself. These Surveys involve plant personnel from various functions within Site operations. BASF applies the Pareto Principle in a variety of fashions: 20% of the operation consumes 80% of the energy, let's focus...

  8. Software Function Allocation Methodology 

    E-Print Network [OSTI]

    O'Neal, Michael Ralph

    1988-01-01

    ABSTRACT Software Function Allocation Methodology. (May 1988) Michael Ralph O'Neal, B. S. , Texas A&M University Chairman of Advisory Committee: Dr. William Lively Modern distributed computer systems are very powerful and useful; they also offer new... software system designers with a thorough and flexible method to allocate software functions among the hardware components of a distributed computer system. Software designers select and rank relevant Design Parameters, analyze how well different...

  9. Electronic Survey Methodology Page 1 Electronic Survey Methodology

    E-Print Network [OSTI]

    Nonnecke, Blair

    Electronic Survey Methodology: A Case Study in Reaching Hard... ...Maryland. preece@umbc.edu. © 2002 Andrews, Nonnecke and Preece. Conducting Research on the Internet: Electronic Survey Design, Development and Implementation Guidelines.

  10. DATA COMPRESSION USING WAVELETS: ERROR ...

    E-Print Network [OSTI]

    1910-90-11

    ...algorithms that introduce differences between the original and compressed data in ... to choose an error metric that parallels the human visual system, so that image ... signal data along a communications channel, one sends integer codes that ...

  11. The Challenge of Quantum Error Correction.

    E-Print Network [OSTI]

    Fominov, Yakov

    ...in the design of physical bits. What we need — hardware requirements: (1) many (10^3-10^4 / R) individual bits ... (a) bit-flip (classical) error; (b) phase error, which accumulates a fluctuating factor exp(-i ∫ E(t) dt) ... (1) need hardware error ... Classical error correction by software and hardware ... Hardware error correction: Ising

  12. Unequal error protection of subband coded bits 

    E-Print Network [OSTI]

    Devalla, Badarinath

    1994-01-01

    Source coded data can be separated into different classes based on their susceptibility to channel errors. Errors in the important bits cause greater distortion in the reconstructed signal. This thesis presents an Unequal Error Protection scheme...

  13. Catastrophic photometric redshift errors: Weak-lensing survey requirements

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Bernstein, Gary; Huterer, Dragan

    2010-01-11

    We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number Nspec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below statistical errors of weak lensing tomography. While the straightforward estimate of Nspec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in Nspec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the zs - zp distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. In conclusion, the cross-correlation method is therefore unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.

  14. BayesianScore(BDeu) Ave. Bayesian Score Results -Child -Sample Size 500

    E-Print Network [OSTI]

    Brown, Laura E.

    [Figure residue] Average Bayesian score (BDeu) results, with error bars, for the GS, PC, TPDA, and GES algorithms on the Child, Child3, and Child5 networks at sample size 500.

  15. Communication error detection using facial expressions

    E-Print Network [OSTI]

    Wang, Sy Bor, 1976-

    2008-01-01

    Automatic detection of communication errors in conversational systems typically relies only on acoustic cues. However, perceptual studies have indicated that speakers do exhibit visual communication error cues passively ...

  16. WRAP Module 1 sampling and analysis plan

    SciTech Connect (OSTI)

    Mayancsik, B.A.

    1995-03-24

    This document provides the methodology to sample, screen, and analyze waste generated, processed, or otherwise the responsibility of the Waste Receiving and Processing Module 1 facility. This includes Low-Level Waste, Transuranic Waste, Mixed Waste, and Dangerous Waste.

  17. Health Dialog Systems Methodological Review 1 Methodological Review

    E-Print Network [OSTI]

    Bickmore, Timothy

    Methodological Review: Health Dialog Systems. ... special issue on Dialog Systems for Health Communication. Corresponding author: Timothy W. Bickmore, ... Massachusetts 02115, bickmore@ccs.neu.edu, phone: (617) 373-5477, fax: (617) 812-2589.

  18. ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.

    SciTech Connect (OSTI)

    LEE,T.MU,Y.ZHAO,M.GLIMM,J.LI,X.YE,K.

    2004-07-26

    We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

  19. Characterization of transport errors in a chemical transport model (JOURNAL OF GEOPHYSICAL RESEARCH, VOL. ???, DOI:10.1029/)

    E-Print Network [OSTI]

    Liu, Hongyu

    Abstract: We propose a new methodology to characterize errors in chemical forecasts from a global tropospheric chemical transport model ... I. Bey, Swiss Federal Institute ... in the representation of transport processes in chemical transport models. We constrain the evaluation of a global...

  20. Sampling box

    DOE Patents [OSTI]

    Phillips, Terrance D. (617 Chestnut Ct., Aiken, SC 29803); Johnson, Craig (100 Midland Rd., Oak Ridge, TN 37831-0895)

    2000-01-01

    An air sampling box that uses a slidable filter tray and a removable filter cartridge to allow for the easy replacement of a filter which catches radioactive particles is disclosed.

  1. Kernel Regression in the Presence of Correlated Errors Kernel Regression in the Presence of Correlated Errors

    E-Print Network [OSTI]

    Abstract: ... in nonparametric regression is difficult in the presence of correlated errors. There exists a wide variety ... vector machines for regression. Keywords: nonparametric regression, correlated errors, bandwidth choice

  2. Non-Gaussian numerical errors versus mass hierarchy

    E-Print Network [OSTI]

    Y. Meurice; M. B. Oktay

    2000-05-12

    We probe the numerical errors made in renormalization group calculations by varying slightly the rescaling factor of the fields and rescaling back, in order to get the same (if there were no round-off errors) zero-momentum 2-point function (magnetic susceptibility). The actual calculations were performed with Dyson's hierarchical model and a simplified version of it. We compare the distributions of numerical values obtained from a large sample of rescaling factors with the (Gaussian by design) distribution of a random number generator and find significant departures from Gaussian behavior. In addition, the average value differs (robustly) from the exact answer by a quantity of the same order as the standard deviation. We provide a simple model in which the errors made at shorter distance have a larger weight than those made at larger distance. This model explains in part the non-Gaussian features and why the central-limit theorem does not apply.
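    The rescale-and-rescale-back probe has a very simple toy analogue. In place of the hierarchical-model susceptibility, the sketch below uses an invented "susceptibility" (a sum of squares) and shows that multiplying by a factor c and dividing it back out leaves a distribution of tiny round-off discrepancies rather than exact zeros:

```python
# Toy version of the round-off probe: apply x -> (x*c)/c for slightly varied
# rescaling factors c and record the discrepancy in a stand-in observable.
# The observable and the factors are illustrative, not the paper's model.
import random

def susceptibility(xs):
    # Stand-in "zero momentum 2-point function": just a sum of squares.
    return sum(x * x for x in xs)

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(1000)]
exact = susceptibility(xs)

errors = []
for i in range(200):
    c = 1.0 + i * 1e-4                      # slightly varied rescaling factor
    rescaled = [(x * c) / c for x in xs]    # exact identity in real arithmetic
    errors.append(susceptibility(rescaled) - exact)
```

    For c = 1.0 the discrepancy is exactly zero, while nontrivial factors leave a scatter of nonzero round-off residuals; the paper's point is that a histogram of such residuals need not look Gaussian.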

  3. Sampling apparatus

    DOE Patents [OSTI]

    Gordon, N.R.; King, L.L.; Jackson, P.O.; Zulich, A.W.

    1989-07-18

    A sampling apparatus is provided for sampling substances from solid surfaces. The apparatus includes first and second elongated tubular bodies which telescopically and sealingly join relative to one another. An absorbent pad is mounted to the end of a rod which is slidably received through a passageway in the end of one of the joined bodies. The rod is preferably slidably and rotatably received through the passageway, yet provides a selective fluid tight seal relative thereto. A recess is formed in the rod. When the recess and passageway are positioned to be coincident, fluid is permitted to flow through the passageway and around the rod. The pad is preferably laterally orientable relative to the rod and foldably retractable to within one of the bodies. A solvent is provided for wetting of the pad and solubilizing or suspending the material being sampled from a particular surface. 15 figs.

  4. Methodology to Analyze the Sensitivity of Building Energy Consumption to HVAC System Sensor Error 

    E-Print Network [OSTI]

    Ma, Liang

    2012-02-14

    (MIL-HDBK-338B, 2008). The report entitled 'Failure Mode/Mechanism Distribution 1997' offers abundant failure mode data defined alternatively for equipment, actuators, etc. In this report, short, open and drift are all considered failure modes ... (2010). The least common failure mode is the shorted circuit, which comprises 15% of thermistor failures. The Electronic Reliability Design Handbook (MIL-HDBK-338B, 2008) uses a different classification system for thermistor failure modes, classifying...

  5. Measurement of laminar burning speeds and Markstein lengths using a novel methodology

    SciTech Connect (OSTI)

    Tahtouh, Toni; Halter, Fabien; Mounaim-Rousselle, Christine [Institut PRISME, Universite d'Orleans, 8 rue Leonard de Vinci-45072, Orleans Cedex 2 (France)

    2009-09-15

    Three different methodologies used for the extraction of laminar information are compared and discussed. Starting from an asymptotic analysis assuming a linear relation between the propagation speed and the stretch acting on the flame front, temporal radius evolutions of spherically expanding laminar flames are postprocessed to obtain laminar burning velocities and Markstein lengths. The first methodology fits the temporal radius evolution with a polynomial function, while the new methodology proposed uses the exact solution of the linear relation linking the flame speed and the stretch as a fit. The last methodology consists in an analytical resolution of the problem. To test the different methodologies, experiments were carried out in a stainless steel combustion chamber with methane/air mixtures at atmospheric pressure and ambient temperature. The equivalence ratio was varied from 0.55 to 1.3. The classical shadowgraph technique was used to detect the reaction zone. The new methodology has proven to be the most robust and provides the most accurate results, while the polynomial methodology induces some errors due to the differentiation process. As original radii are used in the analytical methodology, it is more affected by the experimental radius determination. Finally, laminar burning velocity and Markstein length values determined with the new methodology are compared with results reported in the literature. (author)
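    The "exact solution" fit the abstract describes can be sketched from the linear stretch relation itself. For a spherical flame with dR/dt = S_L - L_b·(2/R)·dR/dt, time is an explicit function of radius, t = (R + 2·L_b·ln R - C)/S_L, which is linear in the parameters a = 1/S_L, b = 2·L_b/S_L, c = -C/S_L and can be fit by ordinary least squares with no differentiation of the radius data. The parameter values and radius history below are synthetic, and this is a sketch of the idea, not the authors' implementation:

```python
# Fit of the exact solution of the linear flame speed / stretch relation
# t = a*R + b*ln(R) + c, then recover S_L = 1/a and L_b = b/(2a).
import math

def solve3(A, y):
    # Tiny Gaussian elimination with partial pivoting for 3x3 normal equations.
    n = 3
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_flame_speed(radii, times):
    # Least-squares fit of t = a*R + b*ln(R) + c via the normal equations.
    feats = [(R, math.log(R), 1.0) for R in radii]
    A = [[sum(f[i] * f[j] for f in feats) for j in range(3)] for i in range(3)]
    y = [sum(f[i] * t for f, t in zip(feats, times)) for i in range(3)]
    a, b, c = solve3(A, y)
    return 1.0 / a, b / (2.0 * a)   # S_L, L_b

# Synthetic, noise-free radius history with S_L = 2.0 m/s and L_b = 1 mm:
S_true, Lb_true, R0 = 2.0, 1e-3, 0.005
C = R0 + 2.0 * Lb_true * math.log(R0)
radii = [R0 + i * 0.0005 for i in range(31)]
times = [(R + 2.0 * Lb_true * math.log(R) - C) / S_true for R in radii]
S_fit, Lb_fit = fit_flame_speed(radii, times)
```

    Because the fit is linear in the transformed parameters, it avoids the numerical differentiation of R(t) that the abstract identifies as the error source in the polynomial approach.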

  6. Approximate error conjugation gradient minimization methods

    DOE Patents [OSTI]

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
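    The subset-of-rays idea can be illustrated with a toy least-squares problem. This is a hedged sketch, not the patented method: the "rays" are invented linear constraints, and the 1-D minimization along a search direction exploits the fact that the (approximate) error is quadratic in the step length:

```python
# Approximate-error line minimization: evaluate the data-mismatch error over a
# subset of rays only, then minimize along a given direction. Problem invented.

def ray_residual(x, ray):
    a, b = ray
    return sum(ai * xi for ai, xi in zip(a, x)) - b

def approx_error(x, rays, subset):
    # Scaled sum of squared residuals over the sampled rays only; in practice
    # the subset would be much smaller than the full ray set.
    scale = len(rays) / len(subset)
    return scale * sum(ray_residual(x, rays[i]) ** 2 for i in subset)

def min_along_direction(x, d, rays, subset):
    # The error is quadratic in the step t along d, so three evaluations
    # (t = -1, 0, +1) determine the parabola and its minimizer.
    e0 = approx_error(x, rays, subset)
    e1 = approx_error([xi + di for xi, di in zip(x, d)], rays, subset)
    e2 = approx_error([xi - di for xi, di in zip(x, d)], rays, subset)
    t = (e2 - e1) / (2.0 * (e1 + e2 - 2.0 * e0))
    return [xi + t * di for xi, di in zip(x, d)]

# Toy system a.x = b with exact solution x* = (1, 2):
rays = [((1.0, 0.0), 1.0), ((0.0, 1.0), 2.0), ((1.0, 1.0), 3.0)]
```

    One such line minimization per conjugate direction is the building block of a CG iteration; substituting the subset-based error for the full-ray error is what trades accuracy for cost.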

  7. CHEMICAL LABORATORY SAFETY AND METHODOLOGY

    E-Print Network [OSTI]

    Northern British Columbia, University of

    CHEMICAL LABORATORY SAFETY AND METHODOLOGY MANUAL, August 2013. Emergency Numbers, UNBC Prince George Campus: Chemstores 6472; Chemical Safety 6472; Radiation Safety 6472; Biological ... the safe use, storage, handling, waste and emergency management of chemicals on the University of Northern

  8. ARM PROCESSES AND MODELING METHODOLOGY

    E-Print Network [OSTI]

    ARM PROCESSES AND MODELING METHODOLOGY. Benjamin Melamed, Rutgers University, Faculty of Management, Department of MSIS, 94 Rockafeller Rd., Piscataway, NJ 08854, melamed@rbs.rutgers.edu. ABSTRACT: ARM (Auto... ...innovation sequences; ARM processes admit dependent innovation sequences as well, so long

  9. Methodology for completing Hanford 200 Area tank waste physical/chemical profile estimations

    SciTech Connect (OSTI)

    Kruger, A.A.

    1996-04-29

    The purpose of the Methodology for Completing Hanford 200 Area Tank Waste Physical/Chemical Profile Estimations is to capture the logic inherent to completing 200 Area waste tank physical and chemical profile estimates. Since there has been good correlation between the estimate profiles and actual conditions during sampling and sub-segment analysis, it is worthwhile to document the current estimate methodology.

  10. Verification of unfold error estimates in the unfold operator code

    SciTech Connect (OSTI)

    Fehl, D.L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
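    The same cross-check can be demonstrated on a toy linear unfold. This is a generic sketch, not the UFO algorithm: the response matrix, two-bin spectrum, and 5% data errors below are invented, and the "error matrix" estimate is plain linear error propagation through the least-squares unfold.

```python
# Compare a propagated ("error matrix") uncertainty against the Monte Carlo
# spread of unfolds of Gaussian-perturbed data, for a toy least-squares unfold.
import math, random

A = [[1.0, 0.2],            # response functions: 3 measurements, 2 bins
     [0.3, 1.0],
     [0.5, 0.5]]
s_true = [4.0, 2.0]         # invented "true" spectrum
d_true = [sum(A[i][j] * s_true[j] for j in range(2)) for i in range(3)]
sigma_d = [0.05 * d for d in d_true]        # 5% (1-sigma) data imprecision

def pseudo_inverse_rows():
    # M = (A^T A)^{-1} A^T, written out with an explicit 2x2 inverse.
    AtA = [[sum(A[i][r] * A[i][c] for i in range(3)) for c in range(2)]
           for r in range(2)]
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
    inv = [[AtA[1][1] / det, -AtA[0][1] / det],
           [-AtA[1][0] / det, AtA[0][0] / det]]
    return [[sum(inv[r][c] * A[i][c] for c in range(2)) for i in range(3)]
            for r in range(2)]

M = pseudo_inverse_rows()

def unfold(d):
    return [sum(M[r][i] * d[i] for i in range(3)) for r in range(2)]

# Built-in estimate: propagate independent data errors through the unfold.
builtin_std0 = math.sqrt(sum((M[0][i] * sigma_d[i]) ** 2 for i in range(3)))

# Monte Carlo estimate: spread of unfolds of Gaussian-perturbed data sets.
random.seed(1)
draws = [unfold([mu + random.gauss(0.0, sd)
                 for mu, sd in zip(d_true, sigma_d)])[0] for _ in range(2000)]
mean0 = sum(draws) / len(draws)
mc_std0 = math.sqrt(sum((x - mean0) ** 2 for x in draws) / (len(draws) - 1))
```

    With 2000 trials the two estimates agree to a percent or so; with only 100 trials, as in the paper, the statistical resolution of the comparison is correspondingly coarser.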

  11. Group representations, error bases and quantum codes

    SciTech Connect (OSTI)

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.

  12. On a fatal error in tachyonic physics

    E-Print Network [OSTI]

    Edward Kapuścik

    2013-08-10

    A fatal error in the famous paper on tachyons by Gerald Feinberg is pointed out. The correct expressions for energy and momentum of tachyons are derived.

  13. Adjoint Error Estimation for Elastohydrodynamic Lubrication

    E-Print Network [OSTI]

    Jimack, Peter

    Adjoint Error Estimation for Elastohydrodynamic Lubrication, by Daniel Edward Hart. Submitted ... elastohydrodynamic lubrication (EHL) problems. A functional is introduced, namely the friction

  14. Measure of Diffusion Model Error for Thermal Radiation Transport 

    E-Print Network [OSTI]

    Kumar, Akansha

    2013-04-19

    and computational time. However, this approximation often has significant error. Error due to the inherent nature of a physics model is called model error. Information about the model error associated with the diffusion approximation is clearly desirable...

  15. WIPP Weatherization: Common Errors and Innovative Solutions Presentati...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    WIPP Weatherization: Common Errors and Innovative Solutions Presentation. This presentation contains...

  16. Cogeneration Assessment Methodology for Utilities 

    E-Print Network [OSTI]

    Sedlik, B.

    1983-01-01

    ...will cause the errors to vary in an unpredictable fashion. (ESL-IE-83-04-48, Proceedings from the Fifth Industrial Energy Conservation Technology Conference, Volume 1, Houston, TX, April 17-20, 1983.) ... Survey Structure: Figure 5 presents the overall Dames & Moore survey design. There are several salient features associated with this multistage approach. The three principal stages are a preliminary mail questionnaire to all large demand customers...

  17. Energy Efficiency Indicators Methodology Booklet

    SciTech Connect (OSTI)

    Sathaye, Jayant; Price, Lynn; McNeil, Michael; de la rue du Can, Stephane

    2010-05-01

    This Methodology Booklet provides a comprehensive review and guiding principles for constructing energy efficiency indicators, with illustrative examples of application to individual countries. It reviews work done by international agencies and national governments in constructing meaningful energy efficiency indicators that help policy makers to assess changes in energy efficiency over time. Building on past OECD experience and best practices, and the knowledge of these countries' institutions, relevant sources of information to construct an energy indicator database are identified. A framework based on levels of hierarchy of indicators -- spanning from aggregate, macro-level to disaggregated end-use-level metrics -- is presented to help shape the understanding of assessing energy efficiency. For each sector of activity (industry, commercial, residential, agriculture, and transport), indicators are presented, and recommendations to distinguish the different factors affecting energy use are highlighted. The methodology booklet specifically addresses issues that are relevant to developing indicators where activity is a major factor driving energy demand. A companion spreadsheet tool is available upon request.

  18. Inference for Model Error Allan Seheult

    E-Print Network [OSTI]

    Oakley, Jeremy

    Keywords: Reservoirs, Model Error, Reification, Thermohaline Circulation. 1. Introduction. Mathematical models of complex ... that the uncertainties associated with both calibrating a mathematical model to observations on a physical system ... specification exercise of model error with the cosmologists, linked to an extensive analysis of model

  19. Nonparametric Regression with Correlated Errors Jean Opsomer

    E-Print Network [OSTI]

    Wang, Yuedong

    Nonparametric Regression with Correlated Errors. Jean Opsomer, Iowa State University; Yuedong Wang ... Nonparametric regression techniques are often sensitive to the presence of correlation in the errors ... splines and wavelet regression under correlation, both for short-range and long-range dependence

  20. Remarks on statistical errors in equivalent widths

    E-Print Network [OSTI]

    Klaus Vollmann; Thomas Eversberg

    2006-07-03

    Equivalent width measurements for rapid line variability in atomic spectral lines are degraded by increasing error bars with shorter exposure times. We derive an expression for the error of the line equivalent width $\sigma(W_\lambda)$ with respect to pure photon noise statistics and provide a correction value for previous calculations.
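    A pared-down version of the quantity in question: the equivalent width W = Σ (1 - F_i/F_c) Δλ of an absorption line, with a pure photon-noise (Poisson) error propagated per pixel. This simplified propagation neglects the continuum-determination term treated in the paper, and the counts below are invented:

```python
# Equivalent width and its photon-noise error from per-pixel counts.
# flux_counts, continuum_counts, and dlam are illustrative numbers.
import math

def equivalent_width(flux_counts, continuum_counts, dlam):
    # W = sum over pixels of (1 - F_i/F_c) * dlam
    W = sum((1.0 - f / continuum_counts) * dlam for f in flux_counts)
    # Poisson statistics: var(F_i) = F_i, so var(W) = sum (dlam/F_c)^2 * F_i
    var_W = sum((dlam / continuum_counts) ** 2 * f for f in flux_counts)
    return W, math.sqrt(var_W)

# A shallow absorption line sampled by five pixels of width 0.1 (wavelength units):
flux = [1000, 800, 500, 800, 1000]
W, sigma_W = equivalent_width(flux, continuum_counts=1000.0, dlam=0.1)
```

    Since var(F_i) scales with the counts, halving the exposure time halves all F_i and inflates σ(W) by √2, which is the exposure-time degradation the abstract describes.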

  1. Characterizing Application Memory Error Vulnerability to

    E-Print Network [OSTI]

    Mutlu, Onur

    Heterogeneous-reliability memory (HRM): store error-tolerant data in less-reliable, lower-cost memory; store error-vulnerable data in more reliable memory ... an application. Observation 2: data can be recovered by software. [Slide fragments: Heterogeneous-Reliability Memory (HRM); Evaluation; Server]

  2. Formalism for Simulation-based Optimization of Measurement Errors in High Energy Physics

    E-Print Network [OSTI]

    Yuehong Xie

    2009-04-29

    Minimizing the errors of the physical parameters of interest should be the ultimate goal of any event selection optimization in high energy physics data analysis involving parameter determination. Quick and reliable error estimation is a crucial ingredient for realizing this goal. In this paper we derive a formalism for direct evaluation of measurement errors using the signal probability density function and large, fully simulated signal and background samples, without the need for data fitting and background modelling. We illustrate the elegance of the formalism in the case of event selection optimization for CP violation measurement in B decays. The implication of this formalism for choosing event variables for data analysis is discussed.
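    A minimal illustration of the underlying idea: the expected statistical error on a parameter can be read off from the signal PDF evaluated on a large simulated sample, via the observed Fisher information, with no fit to data. The exponential-lifetime model below is a stand-in chosen for its known answer (σ_τ = τ/√N), not the paper's CP-violation application:

```python
# Predicted parameter error from simulated events via Fisher information:
# 1/sigma^2 ~ sum over events of (d ln f / d tau)^2, for f(t; tau) = exp(-t/tau)/tau,
# whose log-derivative is d ln f / d tau = (t - tau) / tau^2.
import math, random

def predicted_error(times, tau):
    info = sum(((t - tau) / tau**2) ** 2 for t in times)
    return 1.0 / math.sqrt(info)

random.seed(7)
tau = 2.0
sample = [random.expovariate(1.0 / tau) for _ in range(10000)]
sigma_tau = predicted_error(sample, tau)
# Known closed form for an exponential lifetime: sigma_tau = tau / sqrt(N) = 0.02
```

    Because the estimate depends only on the PDF and the simulated sample, it can be re-evaluated instantly for every candidate event selection, which is exactly what makes such a formalism useful for selection optimization.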

  3. Solutia: Massachusetts Chemical Manufacturer Uses SECURE Methodology...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Solutia: Massachusetts Chemical Manufacturer Uses SECURE Methodology to Identify Potential Reductions in Utility and Process Energy Consumption

  4. Methodology for Estimating Solar Potential on Multiple Building Rooftops for Photovoltaic Systems

    SciTech Connect (OSTI)

    Kodysh, Jeffrey B [ORNL; Omitaomu, Olufemi A [ORNL; Bhaduri, Budhendra L [ORNL; Neish, Bradley S [ORNL

    2013-01-01

    In this paper, a methodology for estimating solar potential on multiple building rooftops is presented. The objective of this methodology is to estimate the daily or monthly solar radiation potential on individual buildings in a city/region using Light Detection and Ranging (LiDAR) data and a geographic information system (GIS) approach. Conceptually, the methodology is based on the upward-looking hemispherical viewshed algorithm, but applied using an area-based modeling approach. The methodology considers input parameters, such as surface orientation, shadowing effect, elevation, and atmospheric conditions, that influence solar intensity on the earth's surface. The methodology has been implemented for some 212,000 buildings in Knox County, Tennessee, USA. Based on the results obtained, the methodology seems to be adequate for estimating solar radiation on multiple building rooftops. The use of LiDAR data improves the radiation potential estimates in terms of the model predictive error and the spatial pattern of the model outputs. This methodology could help cities/regions interested in sustainable projects to quickly identify buildings with higher potentials for roof-mounted photovoltaic systems.
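The shadowing-effect ingredient of such a viewshed-based method can be illustrated with a toy single-ray test over a LiDAR-derived surface model. The grid orientation convention and the march along a single azimuth are simplifying assumptions for illustration, not the paper's area-based GIS implementation.

```python
import math
import numpy as np

def is_shaded(dsm, cell_size, row, col, sun_alt_deg, sun_az_deg):
    """Toy single-ray shadow test: march from a rooftop cell toward the
    sun's azimuth across the digital surface model (DSM) and report shading
    if any cell subtends a higher elevation angle than the sun.  Assumed
    convention: rows increase southward, columns increase eastward, and
    azimuth is measured clockwise from north."""
    z0 = dsm[row, col]
    dr = -math.cos(math.radians(sun_az_deg))
    dc = math.sin(math.radians(sun_az_deg))
    tan_sun = math.tan(math.radians(sun_alt_deg))
    step = 1
    while True:
        r, c = int(round(row + dr * step)), int(round(col + dc * step))
        if not (0 <= r < dsm.shape[0] and 0 <= c < dsm.shape[1]):
            return False                      # ray left the grid: unshaded
        if (dsm[r, c] - z0) / (step * cell_size) > tan_sun:
            return True
        step += 1

# A 50 m obstacle due north shades a cell when the sun is low in the north,
# but not when the sun stands in the south.
dsm = np.zeros((5, 5))
dsm[0, 2] = 50.0
north_shaded = is_shaded(dsm, 1.0, 4, 2, sun_alt_deg=30.0, sun_az_deg=0.0)
south_shaded = is_shaded(dsm, 1.0, 4, 2, sun_alt_deg=30.0, sun_az_deg=180.0)
```

A hemispherical viewshed effectively repeats this test over many azimuths and sun positions and accumulates the unshaded irradiance.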

  5. Agility metric sensitivity using linear error theory 

    E-Print Network [OSTI]

    Smith, David Matthew

    2000-01-01

    Aircraft agility metrics have been proposed for use to measure the performance and capability of aircraft onboard while in-flight. The sensitivity of these metrics to various types of errors and uncertainties is not ...

  6. Quantum Error Correction for Quantum Memories

    E-Print Network [OSTI]

    Barbara M. Terhal

    2015-04-10

    Active quantum error correction using qubit stabilizer codes has emerged as a promising, but experimentally challenging, engineering program for building a universal quantum computer. In this review we consider the formalism of qubit stabilizer and subsystem stabilizer codes and their possible use in protecting quantum information in a quantum memory. We review the theory of fault-tolerance and quantum error-correction, discuss examples of various codes and code constructions, the general quantum error correction conditions, the noise threshold, the special role played by Clifford gates and the route towards fault-tolerant universal quantum computation. The second part of the review is focused on providing an overview of quantum error correction using two-dimensional (topological) codes, in particular the surface code architecture. We discuss the complexity of decoding and the notion of passive or self-correcting quantum memories. The review does not focus on a particular technology but discusses topics that will be relevant for various quantum technologies.

  7. Simulating Bosonic Baths with Error Bars

    E-Print Network [OSTI]

    Mischa P. Woods; M. Cramer; M. B. Plenio

    2015-04-07

    We derive rigorous truncation-error bounds for the spin-boson model and its generalizations to arbitrary quantum systems interacting with bosonic baths. For the numerical simulation of such baths the truncation of both, the number of modes and the local Hilbert-space dimensions is necessary. We derive super-exponential Lieb--Robinson-type bounds on the error when restricting the bath to finitely-many modes and show how the error introduced by truncating the local Hilbert spaces may be efficiently monitored numerically. In this way we give error bounds for approximating the infinite system by a finite-dimensional one. As a consequence, numerical simulations such as the time-evolving density with orthogonal polynomials algorithm (TEDOPA) now allow for the fully certified treatment of the system-environment interaction.

  8. Errors and paradoxes in quantum mechanics

    E-Print Network [OSTI]

    D. Rohrlich

    2007-08-28

    Errors and paradoxes in quantum mechanics, entry in the Compendium of Quantum Physics: Concepts, Experiments, History and Philosophy, ed. F. Weinert, K. Hentschel, D. Greenberger and B. Falkenburg (Springer), to appear

  9. Quantum error-correcting codes and devices

    DOE Patents [OSTI]

    Gottesman, Daniel (Los Alamos, NM)

    2000-10-03

    A method of forming quantum error-correcting codes by first forming a stabilizer for a Hilbert space. A quantum information processing device can be formed to implement such quantum codes.

  10. Organizational Errors: Directions for Future Research

    E-Print Network [OSTI]

    Carroll, John Stephen

    The goal of this chapter is to promote research about organizational errors—i.e., the actions of multiple organizational participants that deviate from organizationally specified rules and can potentially result in adverse ...

  11. Quantifying truncation errors in effective field theory

    E-Print Network [OSTI]

    R. J. Furnstahl; N. Klco; D. R. Phillips; S. Wesolowski

    2015-06-03

    Bayesian procedures designed to quantify truncation errors in perturbative calculations of quantum chromodynamics observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions ("priors") for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We first demonstrate the calculation of Bayesian probability distributions for the EFT truncation error in some representative examples, and then focus on the application of chiral EFT to neutron-proton scattering. Epelbaum, Krebs, and Mei{\\ss}ner recently articulated explicit rules for estimating truncation errors in such EFT calculations of few-nucleon-system properties. We find that their basic procedure emerges generically from one class of naturalness priors considered, and that all such priors result in consistent quantitative predictions for 68% DOB intervals. We then explore several methods by which the convergence properties of the EFT for a set of observables may be used to check the statistical consistency of the EFT expansion parameter.
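As a sketch of the basic idea, a degree-of-belief interval can be computed by Monte Carlo once a naturalness prior is fixed. The Gaussian prior with scale set by the rms of the known coefficients, and the restriction to the first omitted term, are simplifying assumptions here rather than the paper's full treatment.

```python
import numpy as np

def dob_interval(coeffs, Q, k, p=0.68, n_draw=200_000, seed=1):
    """Degree-of-belief half-width for the EFT truncation error, assuming
    (hypothetically) that the error is dominated by the first omitted term
    c_{k+1} Q^{k+1} and that c_{k+1} follows a Gaussian naturalness prior
    whose scale is the rms of the known coefficients."""
    rng = np.random.default_rng(seed)
    cbar = np.sqrt(np.mean(np.square(coeffs)))
    c_next = rng.normal(0.0, cbar, size=n_draw)
    delta = np.abs(c_next) * Q ** (k + 1)
    return np.quantile(delta, p)

# Natural-sized coefficients, expansion parameter Q = 0.3, truncated at order 2.
half_width = dob_interval([1.0, -0.8, 1.2], Q=0.3, k=2)
```

Varying the prior family (uniform, Gaussian, heavier-tailed) and re-running exposes how sensitive the quoted error bar is to the naturalness assumption, which is the sensitivity study the abstract describes.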

  12. Evaluating operating system vulnerability to memory errors.

    SciTech Connect (OSTI)

    Ferreira, Kurt Brian; Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke; Mueller, Frank; Fiala, David; Brightwell, Ronald Brian

    2012-05-01

    Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.

  13. Methodology for Augmenting Existing Paths with Additional Parallel Transects

    SciTech Connect (OSTI)

    Wilson, John E.

    2013-09-30

    Visual Sample Plan (VSP) is sample planning software that is used, among other purposes, to plan transect sampling paths to detect areas that were potentially used for munition training. This module was developed for application on a large site where existing roads and trails were to be used as primary sampling paths. Gap areas between these primary paths needed to found and covered with parallel transect paths. These gap areas represent areas on the site that are more than a specified distance from a primary path. These added parallel paths needed to optionally be connected together into a single path—the shortest path possible. The paths also needed to optionally be attached to existing primary paths, again with the shortest possible path. Finally, the process must be repeatable and predictable so that the same inputs (primary paths, specified distance, and path options) will result in the same set of new paths every time. This methodology was developed to meet those specifications.

  14. DOE Systems Engineering Methodology (SEM): Stage Exit V3 | Department...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    DOE Systems Engineering Methodology (SEM): Stage Exit V3. The DOE Systems Engineering Methodology (SEM) describes the standard...

  15. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  16. Simulation Enabled Safeguards Assessment Methodology

    SciTech Connect (OSTI)

    Robert Bean; Trond Bjornard; Thomas Larson

    2007-09-01

    It is expected that nuclear energy will be a significant component of future supplies. New facilities, operating under a strengthened international nonproliferation regime will be needed. There is good reason to believe virtual engineering applied to the facility design, as well as to the safeguards system design will reduce total project cost and improve efficiency in the design cycle. Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag and drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.

  17. Methodology for flammable gas evaluations

    SciTech Connect (OSTI)

    Hopkins, J.D., Westinghouse Hanford

    1996-06-12

    There are 177 radioactive waste storage tanks at the Hanford Site. The waste generates flammable gases. The waste releases gas continuously, but in some tanks the waste has shown a tendency to trap these flammable gases. When enough gas is trapped in a tank's waste matrix, it may be released in a way that renders part or all of the tank atmosphere flammable for a period of time. Tanks must be evaluated against previously defined criteria to determine whether they can present a flammable gas hazard. This document presents the methodology for evaluating tanks in two areas of concern in the tank headspace: steady-state flammable-gas concentration resulting from continuous release, and concentration resulting from an episodic gas release.
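For the steady-state concern, a minimal well-mixed headspace balance illustrates the kind of calculation involved. The model, the example rates, and the comparison against 25% of the lower flammability limit of hydrogen are illustrative assumptions, not the document's actual evaluation criteria.

```python
def steady_state_fraction(release_rate, vent_rate):
    """Well-mixed headspace at steady state: the flammable gas fraction
    equals the release rate divided by the total flow leaving the
    headspace (rates in any consistent volume-per-time units)."""
    return release_rate / (release_rate + vent_rate)

# 0.02 m^3/day of gas released into a headspace vented at 10 m^3/day,
# compared against 25% of the lower flammability limit of hydrogen
# (4 vol%), a commonly used safety margin assumed here for illustration.
fraction = steady_state_fraction(0.02, 10.0)
within_criterion = fraction < 0.25 * 0.04
```

The episodic-release case requires a time-dependent balance instead, since a trapped-gas release can transiently exceed any steady-state bound.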

  18. Investigating the SANS/CWE Top 25 Programming Errors List

    E-Print Network [OSTI]

    Hamlen, Kevin W.


  19. StructuralHammingDistance Average SHD Results -Child -Sample Size 500

    E-Print Network [OSTI]

    Brown, Laura E.

    [Figure: average Structural Hamming Distance (SHD) results for the Child, Child3, and Hailfinder10 networks at sample size 500, comparing MMHC, OR1 (k=5), TPDA, and GES; error bars show +/- one standard deviation.]

  20. Neutron multiplication error in TRU waste measurements

    SciTech Connect (OSTI)

    Veilleux, John [Los Alamos National Laboratory; Stanfield, Sean B [CCP; Wachter, Joe [CCP; Ceo, Bob [CCP

    2009-01-01

    Total Measurement Uncertainty (TMU) in neutron assays of transuranic waste (TRU) comprises several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons-grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors, measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers.
This report will attempt to better define the error term due to neutron multiplication and arrive at values that are more realistic and accurate. To do so, measurements of standards and waste drums were performed with High Efficiency Neutron Counters (HENC) located at Los Alamos National Laboratory (LANL). The data were analyzed for multiplication effects and new estimates of the multiplication error were computed. A concluding section will present alternatives for reducing the number of rejections of TRU waste containers due to neutron multiplication error.
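The acceptance criterion quoted in the abstract (measured FGE plus twice the TMU below the container limit) is simple to express directly, which makes the sensitivity of rejections to the multiplication-error term easy to see.

```python
def tru_waste_accepted(fge_grams, tmu_grams, container="drum"):
    """Acceptance check from the abstract: measured Fissile Gram Equivalent
    plus twice the TMU must be less than 200 FGE for 55-gal drums and
    less than 325 FGE for boxed waste."""
    limit = {"drum": 200.0, "box": 325.0}[container]
    return fge_grams + 2.0 * tmu_grams < limit

# A drum at 150 g FGE with a 30 g multiplication-dominated TMU is rejected
# (150 + 60 = 210 >= 200), while the same measurement passes as boxed waste.
drum_ok = tru_waste_accepted(150.0, 30.0)
box_ok = tru_waste_accepted(150.0, 30.0, container="box")
```

Halving the multiplication error term in this example would bring the drum back under the 200 FGE limit, which is why tightening that estimate reduces rejections.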

  1. Error Analysis in Nuclear Density Functional Theory (Journal...

    Office of Scientific and Technical Information (OSTI)

    Error Analysis in Nuclear Density Functional Theory Citation Details In-Document Search Title: Error Analysis in Nuclear Density Functional Theory Authors: Schunck, N ; McDonnell,...

  2. Error Analysis in Nuclear Density Functional Theory (Journal...

    Office of Scientific and Technical Information (OSTI)

    Error Analysis in Nuclear Density Functional Theory Citation Details In-Document Search Title: Error Analysis in Nuclear Density Functional Theory You are accessing a document...

  3. Electric Utility Demand-Side Evaluation Methodologies 

    E-Print Network [OSTI]

    Treadway, N.

    1986-01-01

    Electric Utility Demand-Side Evaluation Methodologies. Nat Treadway, Public Utility Commission of Texas, Austin, Texas. Abstract: The electric utility industry's demand-side management programs can be analyzed from various points of view using a standard benefit-cost methodology, as in cost and certification proceedings. A standard benefit-cost methodology analyzes demand-side management programs from various points of view. The benefit-cost methodology is now in use by several electric utilities...

  4. Methodology for Validating Building Energy Analysis Simulations

    SciTech Connect (OSTI)

    Judkoff, R.; Wortman, D.; O'Doherty, B.; Burch, J.

    2008-04-01

    The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

  5. Application of asymptotic expansions for maximum likelihood estimators errors to gravitational waves from binary mergers: The single interferometer case

    SciTech Connect (OSTI)

    Zanolin, M.; Vitale, S.; Makris, N.

    2010-06-15

    In this paper we apply to gravitational waves (GW) from the inspiral phase of binary systems a recently derived frequentist methodology to calculate analytically the error for a maximum likelihood estimate of physical parameters. We use expansions of the covariance and the bias of a maximum likelihood estimate in terms of inverse powers of the signal-to-noise ratio (SNR), where the square root of the first order in the covariance expansion is the Cramer-Rao lower bound (CRLB). We evaluate the expansions, for the first time, for GW signals in the noise of GW interferometers. The examples are limited to a single, optimally oriented interferometer. We also compare the error estimates using the first two orders of the expansions with existing numerical Monte Carlo simulations. The first two orders of the covariance allow us to get error predictions closer to what is observed in numerical simulations than the CRLB. The methodology also predicts the SNR necessary to approximate the error with the CRLB and provides new insight into the relationship between waveform properties, SNR, dimension of the parameter space, and estimation errors. For example, timing matched filtering can achieve the CRLB only if the SNR is larger than the kurtosis of the gravitational wave spectrum, and the necessary SNR is much larger if other physical parameters are also unknown.
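The first order of the covariance expansion is the inverse Fisher matrix, whose diagonal square roots give the CRLB. A minimal numerical sketch for a deterministic signal in white Gaussian noise follows; the sinusoidal amplitude model is an illustrative assumption, not the paper's gravitational waveform.

```python
import numpy as np

def crlb(template_grad, noise_sigma):
    """First-order (CRLB) error estimate: for a deterministic signal in
    white Gaussian noise, the Fisher matrix is
    I_ij = sum_t (ds/dtheta_i)(ds/dtheta_j) / sigma^2,
    and the bound on each parameter is sqrt of the diagonal of I^-1."""
    g = np.atleast_2d(template_grad)            # shape (n_params, n_samples)
    fisher = g @ g.T / noise_sigma**2
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# Toy model: unknown amplitude A of a known sinusoidal template s(t);
# the gradient of the signal with respect to A is the template itself.
t = np.linspace(0.0, 1.0, 1000)
s = np.sin(2 * np.pi * 10 * t)
err_A = crlb(s, noise_sigma=0.5)[0]             # analytically 0.5 / ||s||
```

The paper's point is that the CRLB computed this way can be badly optimistic at moderate SNR, where the higher-order covariance terms dominate.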

  6. Development of Nonlinear SSI Time Domain Methodology

    Broader source: Energy.gov [DOE]

    Development of Nonlinear SSI Time Domain Methodology Justin Coleman, P.E. Nuclear Science and Technology Idaho National Laboratory October 22, 2014

  7. Optimal error estimates for corrected trapezoidal rules

    E-Print Network [OSTI]

    Talvila, Erik

    2012-01-01

    Corrected trapezoidal rules are proved for $\\int_a^b f(x)\\,dx$ under the assumption that $f''\\in L^p([a,b])$ for some $1\\leq p\\leq\\infty$. Such quadrature rules involve the trapezoidal rule modified by the addition of a term $k[f'(a)-f'(b)]$. The coefficient $k$ in the quadrature formula is found that minimizes the error estimates. It is shown that when $f'$ is merely assumed to be continuous then the optimal rule is the trapezoidal rule itself. In this case error estimates are in terms of the Alexiewicz norm. This includes the case when $f''$ is integrable in the Henstock--Kurzweil sense or as a distribution. All error estimates are shown to be sharp for the given assumptions on $f''$. It is shown how to make these formulas exact for all cubic polynomials $f$. Composite formulas are computed for uniform partitions.
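A concrete instance is the classical Euler-Maclaurin choice k = h^2/12 for the composite rule on a uniform partition, which makes the formula exact for cubics as the abstract notes; the paper's error-minimizing k under weaker assumptions on f'' may differ.

```python
def corrected_trapezoid(f, fprime, a, b, n):
    """Composite trapezoidal rule plus the endpoint correction
    k [f'(a) - f'(b)], here with the classical Euler-Maclaurin value
    k = h^2/12 for a uniform partition of n subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * total + h * h / 12.0 * (fprime(a) - fprime(b))

# Exact for cubics: the integral of x^3 over [0, 1] is 1/4, recovered
# even with only four subintervals.
val = corrected_trapezoid(lambda x: x**3, lambda x: 3 * x**2, 0.0, 1.0, 4)
```

Exactness for cubics follows because the next Euler-Maclaurin term involves f'''(b) - f'''(a), which vanishes when f''' is constant.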

  8. Waste Package Design Methodology Report

    SciTech Connect (OSTI)

    D.A. Brownson

    2001-09-28

    The objective of this report is to describe the analytical methods and processes used by the Waste Package Design Section to establish the integrity of the various waste package designs, the emplacement pallet, and the drip shield. The scope of this report shall be the methodology used in criticality, risk-informed, shielding, source term, structural, and thermal analyses. The basic features and appropriateness of the methods are illustrated, and the processes are defined whereby input values and assumptions flow through the application of those methods to obtain designs that ensure defense-in-depth as well as satisfy requirements on system performance. Such requirements include those imposed by federal regulation, from both the U.S. Department of Energy (DOE) and U.S. Nuclear Regulatory Commission (NRC), and those imposed by the Yucca Mountain Project to meet repository performance goals. The report is to be used, in part, to describe the waste package design methods and techniques to be used for producing input to the License Application Report.

  9. American Journal of Botany 88(6): 10961102. 2001. HABITAT-RELATED ERROR IN ESTIMATING

    E-Print Network [OSTI]

    Wilf, Peter

    ...the potential for this habitat variation to introduce error into temperature reconstructions, based on field data from modern forests... a higher proportion of liana species with toothed leaves in lakeside and riverside samples appears to be responsible... the relationship between the proportion of woody dicotyledonous species with entire-margined leaves in a flora...

  10. Errors associated with particulate matter measurements on rural sources: appropriate basis for regulating cotton gins 

    E-Print Network [OSTI]

    Buser, Michael Dean

    2004-09-30

    indicated that current cotton gin emission factors could be over-estimated by about 40%. This over-estimation is a consequence of the relatively large PM associated with cotton gin exhausts. These PM sampling errors are contributing to the misappropriation...

  11. 2002 E.M. Aboulhamid 1 Methodology

    E-Print Network [OSTI]

    Aboulhamid, El Mostapha

    Slide excerpt: a library and a methodology to create cycle-accurate models of software algorithms and hardware architectures; data types include 4-valued logic, bits and bit vectors, arbitrary-precision integers, and fixed-point types; layered libraries include a verification library, a TLM library, and methodology-specific libraries such as a master/slave library.

  12. Lateral boundary errors in regional numerical weather

    E-Print Network [OSTI]

    ?umer, Slobodan

    Lateral boundary errors in regional numerical weather prediction models. Author: Ana Car. Advisor: assoc. prof. dr. Nedjeljka Zagar. January 5, 2015. Abstract: Regional models are used in many national... they describe the evolution of the atmosphere, i.e. the weather forecast. Every NWP model solves the same system of equations...

  13. MEASUREMENT AND CORRECTION OF ULTRASONIC ANEMOMETER ERRORS

    E-Print Network [OSTI]

    Heinemann, Detlev

    ...commonly show systematic errors depending on wind speed, due to inaccurate ultrasonic transducer mounting... three-dimensional wind speed time series. Results for the variance and power spectra are shown... wind speeds with ultrasonic anemometers: the measured flow is distorted by the probe head.

  14. Chinese Remaindering with Errors Oded Goldreich

    E-Print Network [OSTI]

    International Association for Cryptologic Research (IACR)

    Chinese Remaindering with Errors. Oded Goldreich, Department of Computer Science, Weizmann Institute; madhu@mit.edu, Cambridge, MA 02139, USA. Abstract: The Chinese Remainder Theorem states that a positive integer m is uniquely specified by its remainder modulo k...

  15. Distribution of Wind Power Forecasting Errors from Operational Systems (Presentation)

    SciTech Connect (OSTI)

    Hodge, B. M.; Ela, E.; Milligan, M.

    2011-10-01

    This presentation offers new data and statistical analysis of wind power forecasting errors in operational systems.

  16. Analysis of Solar Two Heliostat Tracking Error Sources

    SciTech Connect (OSTI)

    Jones, S.A.; Stone, K.W.

    1999-01-28

    This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.

  17. Discussion on common errors in analyzing sea level accelerations, solar trends and global warming

    E-Print Network [OSTI]

    Scafetta, Nicola

    2013-01-01

    Errors in applying regression models and wavelet filters used to analyze geophysical signals are discussed: (1) multidecadal natural oscillations (e.g. the quasi 60-year Atlantic Multidecadal Oscillation (AMO), North Atlantic Oscillation (NAO) and Pacific Decadal Oscillation (PDO)) need to be taken into account for properly quantifying anomalous accelerations in tide gauge records such as in New York City; (2) uncertainties and multicollinearity among climate forcing functions prevent a proper evaluation of the solar contribution to the 20th century global surface temperature warming using overloaded linear regression models during the 1900-2000 period alone; (3) when periodic wavelet filters, which require that a record is pre-processed with a reflection methodology, are improperly applied to decompose non-stationary solar and climatic time series, Gibbs boundary artifacts emerge yielding misleading physical interpretations. By correcting these errors and using optimized regression models that reduce multico...

  18. Plasma dynamics and a significant error of macroscopic averaging

    E-Print Network [OSTI]

    Marek A. Szalek

    2005-05-22

    The methods of macroscopic averaging used to derive the macroscopic Maxwell equations from electron theory are methodologically incorrect and lead in some cases to a substantial error. For instance, these methods do not take into account the existence of a macroscopic electromagnetic field EB, HB generated by carriers of electric charge moving in a thin layer adjacent to the boundary of the physical region containing these carriers. If this boundary is impenetrable for charged particles, then in its immediate vicinity all carriers are accelerated towards the inside of the region. The existence of the privileged direction of acceleration results in the generation of the macroscopic field EB, HB. The contributions to this field from individual accelerated particles are described with a sufficient accuracy by the Lienard-Wiechert formulas. In some cases the intensity of the field EB, HB is significant not only for deuteron plasma prepared for a controlled thermonuclear fusion reaction but also for electron plasma in conductors at room temperatures. The corrected procedures of macroscopic averaging will induce some changes in the present form of plasma dynamics equations. The modified equations will help to design improved systems of plasma confinement.

  19. 10.1177/1087057105276989Kevorkov and MakarenkovSystematic Errors in High-Throughput Screening Statistical Analysis of Systematic Errors

    E-Print Network [OSTI]

    Makarenkov, Vladimir

    ...mental data requires an efficient automatic routine for the selection of hits. Unfortunately, random and systematic errors can...

  20. Verification of unfold error estimates in the UFO code

    SciTech Connect (OSTI)

    Fehl, D.L.; Biggs, F.

    1996-07-01

    Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
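The Monte Carlo error estimate described here can be sketched generically: perturb the data with Gaussian deviates at the assumed imprecision and repeat the unfold. The least-squares pseudo-inverse below stands in for the actual UFO unfold algorithm, which is an assumption for illustration only.

```python
import numpy as np

def mc_unfold_error(response, data, rel_err=0.05, n_trials=100, seed=0):
    """Monte Carlo unfold-uncertainty estimate in the spirit of the UFO
    procedure: draw Gaussian deviates on the data at the prescribed
    relative imprecision, repeat the unfold for each trial, and take the
    spread of the unfolded spectra as the uncertainty."""
    rng = np.random.default_rng(seed)
    pinv = np.linalg.pinv(response)   # stand-in for the real unfold step
    trials = [pinv @ (data * (1.0 + rel_err * rng.standard_normal(len(data))))
              for _ in range(n_trials)]
    return np.mean(trials, axis=0), np.std(trials, axis=0)

# With an identity response the unfolded spread should track the 5% noise.
mean_spec, err_spec = mc_unfold_error(np.eye(3), np.array([10.0, 20.0, 30.0]))
```

As the abstract notes, 100 trials resolve the uncertainty only to within its own statistical precision, so a possible bias at the ~10% level between this estimate and a built-in one can remain unresolved.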

  1. k=10 GS PC TPDA GES Ave. Bayesian Score Results -Child -Sample Size 500

    E-Print Network [OSTI]

    Brown, Laura E.

    [Figure: average Bayesian score (BDeu) results for the Child and Child5 networks at sample size 500, comparing MMHC, OR1 and OR2 (k = 5, 10, 20), GS, PC, TPDA, and GES; error bars show +/- one standard deviation.]

  2. Practical reporting times for environmental samples

    SciTech Connect (OSTI)

    Bayne, C.K.; Schmoyer, D.D.; Jenkins, R.A.

    1993-02-01

    Preanalytical holding times for environmental samples are specified because chemical and physical characteristics may change between sampling and chemical analysis. For example, the Federal Register prescribes a preanalytical holding time of 14 days for volatile organic compounds in soil stored at 4 °C. The American Society for Testing and Materials (ASTM) uses a more technical definition: the preanalytical holding time is the day when the analyte concentration for an environmental sample falls below the lower 99% confidence interval on the analyte concentration at day zero. This study reviews various holding time definitions and suggests a new preanalytical holding time approach using acceptable error rates for measuring an environmental analyte. This practical reporting time (PRT) approach has been applied to nineteen volatile organic compounds and four explosives in three environmental soil samples. A PRT nomograph of error rates has been developed to estimate the consequences of missing a preanalytical holding time. This nomograph can be applied to a large class of analytes with concentrations that decay linearly or exponentially with time, regardless of sample matrices and storage conditions.
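Under the ASTM-style definition quoted above and an assumed exponential decay model C(t) = C0 exp(-lambda t), the holding time is the day the decay curve crosses the lower one-sided 99% confidence bound on the day-zero concentration. The decay model and the example numbers below are illustrative assumptions, not the study's fitted values.

```python
import math

def astm_holding_time(c0, sigma0, decay_rate, z=2.326):
    """Holding time under the ASTM-style definition with an assumed
    exponential decay C(t) = c0 * exp(-decay_rate * t): the day when the
    concentration crosses c0 - z * sigma0, the lower one-sided 99%
    confidence bound (z = 2.326) on the day-zero concentration."""
    lower_bound = c0 - z * sigma0
    if lower_bound <= 0.0:
        return math.inf          # bound is never crossed within this model
    return math.log(c0 / lower_bound) / decay_rate

# 100 ppb analyte, 4 ppb day-zero standard error, 1% per day decay:
# crossing occurs just under 10 days.
t_hold = astm_holding_time(100.0, 4.0, 0.01)
```

The PRT idea replaces this single crossing day with a curve of error rates versus reporting day, so a missed holding time maps to a quantified measurement error rather than a pass/fail verdict.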

  3. Development of a statistically based access delay timeline methodology.

    SciTech Connect (OSTI)

    Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane; Hendrickson, Stacey M. Langfitt

    2013-02-01

The purpose of adversarial delay is to hinder access to critical resources by using physical systems to increase an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating the times required to complete each task, with little regard for uncertainty, complexity, or the decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methods, taking into account small sample size, expert judgment, human factors, and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results to make informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness at lower cost.
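A toy version of the statistical timeline idea replaces single worst-case task times with sampled distributions and reports percentiles of the total; the lognormal shapes and parameter values below are illustrative assumptions, not the report's Bayesian model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-task adversary path; each task time is lognormal with a
# median and geometric standard deviation standing in for expert judgment.
tasks = [  # (median seconds, geometric std. dev.) -- illustrative values
    (60.0, 1.5),
    (120.0, 1.8),
    (45.0, 1.3),
]

n = 100_000
total = np.zeros(n)
for median, gsd in tasks:
    # lognormal parameterized by the mean/sigma of the underlying normal
    total += rng.lognormal(mean=np.log(median), sigma=np.log(gsd), size=n)

# A distribution rather than one worst-case number: e.g. the 10th
# percentile is a delay the analyst can claim with 90% confidence.
p10, p50 = np.percentile(total, [10, 50])
```

Summing distributions this way naturally captures the point in the abstract that a single discounted worst-case time understates realistic total delay.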

Analysis of Cloud Variability and Sampling Errors in Surface and Satellite Measurements

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  5. Detecting Soft Errors in Stencil based Computations

    SciTech Connect (OSTI)

    Sharma, V.; Gopalkrishnan, G.; Bronevetsky, G.

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
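A minimal sketch of regression-based detection in the spirit described, on a toy 1-D field: a linear model predicts each point from its stencil neighbors, and large residuals flag suspect values. The stencil width, noise level, threshold rule, and injected fault are all illustrative assumptions, not the SORREL library itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy smooth field with mild noise, standing in for one stencil sweep.
x = np.linspace(0.0, 1.0, 512)
u = np.sin(2 * np.pi * x) + 0.001 * rng.normal(size=x.size)

# Train a linear-regression model predicting each interior point from its
# left/right stencil neighbors -- a computationally inexpensive detector.
X = np.column_stack([u[:-2], u[2:]])
y = u[1:-1]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
thresh = 6.0 * resid.std()            # detection threshold from residuals

# Inject a bit-flip-like corruption and scan with the trained detector.
u_bad = u.copy()
u_bad[200] += 0.5
pred = np.column_stack([u_bad[:-2], u_bad[2:]]) @ coef
flags = np.abs(u_bad[1:-1] - pred) > thresh   # flags[j] refers to u[j+1]
```

The threshold trades false alarms against detection sensitivity, which is exactly the balance the abstract highlights.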

  6. A generalized optimization methodology for isotope management

    E-Print Network [OSTI]

    Massie, Mark (Mark Edward)

    2010-01-01

    This research, funded by the Department of Energy's Advanced Fuel Cycle Initiative Fellowship, was focused on developing a new approach to studying the nuclear fuel cycle: instead of using the trial and error approach ...

  7. Gross error detection in process data 

    E-Print Network [OSTI]

    Singh, Gurmeet

    1992-01-01

Thesis front-matter excerpt (Major Subject: Chemical Engineering; committee chaired by Ralph E. White): reviews the background and relevant properties of Hotelling's T² test, a generalization of Student's t test that, despite many optimum properties, seems to have been untapped by chemical engineers for gross error detection in process data.

  8. Improving Memory Error Handling Using Linux

    SciTech Connect (OSTI)

    Carlton, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Blanchard, Sean P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Debardeleben, Nathan A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-07-25

As supercomputers continue to get faster and more powerful, they will also have more nodes. If nothing is done, the amount of memory in supercomputer clusters will soon grow large enough that memory failures will become too numerous to manage by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process-oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. Offlining memory pages simplifies error handling and reduces both the hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing and the development of exascale computers: without memory error handling, it will not be feasible to manually replace the number of DIMMs that will fail daily on a machine with 32-128 petabytes of memory. Testing reveals that the process of offlining memory pages works and is relatively simple to use. As more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
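On kernels built with CONFIG_MEMORY_FAILURE, a soft-offline request can be made through sysfs by writing a physical address to the kernel's soft_offline_page entry. This sketch assumes that interface is present and only performs a dry run by default (an actual write requires root):

```python
import os

# Linux memory-failure sysfs entry (availability depends on kernel config).
SOFT_OFFLINE = "/sys/devices/system/memory/soft_offline_page"

def soft_offline(phys_addr: int, dry_run: bool = True) -> str:
    """Format a physical address for the soft-offline interface and,
    unless dry_run, write it so the kernel migrates the page's contents
    and retires the page. Needs root and CONFIG_MEMORY_FAILURE."""
    entry = f"{phys_addr:#x}"
    if not dry_run and os.path.exists(SOFT_OFFLINE):
        with open(SOFT_OFFLINE, "w") as f:
            f.write(entry)
    return entry

# Example: offline the 4 KiB page containing physical address 0x12345678.
page = 0x12345678 & ~0xFFF        # align down to the page boundary
cmd = soft_offline(page)          # dry run: just format the address
```

In practice the faulty physical address would come from hardware error reports (e.g. corrected-error counts surfaced by EDAC or mcelog) rather than being hard-coded.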

  9. Geologic selection methodology for transportation corridor routing 

    E-Print Network [OSTI]

    Shultz, Karin Wilson

    2002-01-01

The advancement of transportation systems into underground corridors has exposed a lack of planning techniques and processes for long, linear, cut-and-cover tunneling routes. The proposed methodology is tested...

  10. Limitations with Activity Recognition Methodology & Data Sets

    E-Print Network [OSTI]

    Weiss, Gary

Excerpt: ... not just raw AR sensor data, but the methodology used to build and evaluate AR systems ... not just in metadata associated with the data sets. A key concern of this paper is not just the underlying ...

  11. A Comparative Study into Architecture-Based Safety Evaluation Methodologies using AADL's Error Annex and Failure Propagation Models

    E-Print Network [OSTI]

    Han, Jun

Excerpt: Failure Modes and Effects Analysis (FMEA) [25] is used to create evidence that the system fulfils its safety requirements ...; models from the design phase are used to automatically produce Fault Trees and FMEA tables based on an architecture ...

  12. Motivation Methodology Conclusions A Fault Tolerance Bisimulation Proof for

    E-Print Network [OSTI]

    Francalanza, Adrian

Slide-deck residue: "A Fault Tolerance Bisimulation Proof for Consensus", Adrian Francalanza and Matthew Hennessy. Outline: 1. Motivation; 2. Methodology; 3. Conclusions.

  13. Examination of radioactive decay methodology in the HASCAL code

    SciTech Connect (OSTI)

    Steffler, R.S. [Texas A and M Univ., College Station, TX (United States). Dept. of Nuclear Engineering; Ryman, J.C.; Gehin, J.C.; Worley, B.A. [Oak Ridge National Lab., TN (United States)

    1998-01-01

The HASCAL 2.0 code provides dose estimates for nuclear, chemical, and biological facility accident and terrorist weapon strike scenarios. In the analysis of accidents involving radioactive material, an approximate method is used to account for decay during transport. Rather than perform the nuclide decay during the atmospheric transport calculation, the decay is performed a priori and a table look-up method is used during the transport of a depositing tracer particle and a non-depositing (gaseous) tracer particle. In order to investigate the accuracy of this decay methodology, two decay models were created using the ORIGEN2 computer program. The first is a HASCAL-like model that treats decay and growth of all nuclides explicitly over the time interval specified for atmospheric transport, but does not change the relative mix of depositing and non-depositing nuclides due to deposition to the ground, nor does it treat resuspension. The second model explicitly includes resuspension as well as separate decay of the nuclides in the atmosphere and on the ground at each deposition time step. For simplicity, both models use a one-dimensional layer model for the atmospheric transport. An additional investigation was performed to determine the accuracy of the HASCAL-like model in separately following Cs-137 and I-131. The results from this study show that the HASCAL decay model compares closely with the more rigorous model, with computed doses generally within one percent (maximum error of 7 percent) over the 48 hours following the release. The models showed no difference for Cs-137 and a maximum error of 2.5 percent for I-131 over the 96 hours following release.
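The a priori table look-up idea can be illustrated with a single nuclide: decay factors are tabulated once, then interpolated during transport instead of being recomputed. The 6-hour tabulation step and the comparison point below are illustrative assumptions, not HASCAL's actual tables:

```python
import math

# Half-lives in hours (I-131: 8.02 days; Cs-137: 30.17 years).
HALF_LIFE_H = {"I-131": 8.02 * 24, "Cs-137": 30.17 * 365.25 * 24}

def decay_factor(nuclide, t_hours):
    """Exact fraction of the nuclide remaining after t_hours."""
    return 0.5 ** (t_hours / HALF_LIFE_H[nuclide])

# A priori table every 6 hours out to 96 hours, as a look-up stand-in.
table = {t: decay_factor("I-131", t) for t in range(0, 97, 6)}

def lookup(t):
    """Linear interpolation between tabulated decay factors."""
    lo = 6 * (int(t) // 6)
    hi = lo + 6
    w = (t - lo) / 6.0
    return (1 - w) * table[lo] + w * table[hi]

exact = decay_factor("I-131", 45.0)
approx = lookup(45.0)
rel_err = abs(approx - exact) / exact   # interpolation error, not physics
```

For a long-lived nuclide like Cs-137 the decay factor is essentially flat over 96 hours, which is consistent with the study finding no difference for Cs-137 and only small errors for the short-lived I-131.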

  14. Decoherence and dephasing errors caused by the dc Stark effect...

    Office of Scientific and Technical Information (OSTI)

Citation details for: "Decoherence and dephasing errors caused by the dc Stark effect in rapid ion transport".

  15. Error Reduction for Weigh-In-Motion

    SciTech Connect (OSTI)

    Hively, Lee M; Abercrombie, Robert K; Scudiere, Matthew B; Sheldon, Frederick T

    2009-01-01

    Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).
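One simple way to cancel a periodic bounce is to average the weight signal over an integer number of oscillation periods; the sample rate, bounce frequency, and amplitude below are illustrative assumptions and this is not the ORNL filter itself:

```python
import numpy as np

# Synthetic weigh-in-motion signal: static axle weight plus a bounce.
fs = 1000.0                       # samples/s (assumed sensor rate)
t = np.arange(0.0, 1.0, 1.0 / fs)
true_weight = 10_000.0            # kg, static axle weight
bounce = 300.0 * np.sin(2 * np.pi * 3.0 * t)   # ~3 Hz bounce/rocking
signal = true_weight + bounce

# Averaging over (nearly) whole bounce periods cancels the oscillation.
period_samples = int(fs / 3.0)
est = signal[:period_samples * 3].mean()
rel_err = abs(est - true_weight) / true_weight   # well below the 0.1% target
```

Real WIM filtering must also estimate the oscillation frequency from the data and handle the short time a moving axle spends on the sensor, which is where the harder filter-design work lies.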

  16. Forward Error Correction and Functional Programming

    E-Print Network [OSTI]

    Bull, Tristan Michael

    2011-04-25

Thesis excerpt (table of contents): Annapolis Micro Wildstar 5 DDR2 DRAM Interface; Dual-Port DRAM Wrapper; Kansas Lava DRAM Interface; Conclusion; Future Work. ... codewords. We ran the simulation using input data with energy-per-bit to noise power spectral density ratios (Eb/N0) of 3 dB to 6 dB in 0.5 dB increments. For each Eb/N0 value, we ran the simulation until at least 25,000 bit errors were recorded. Results...

  17. Unitary-process discrimination with error margin

    E-Print Network [OSTI]

    T. Hashimoto; A. Hayashi; M. Hayashi; M. Horibe

    2010-06-10

    We investigate a discrimination scheme between unitary processes. By introducing a margin for the probability of erroneous guess, this scheme interpolates the two standard discrimination schemes: minimum-error and unambiguous discrimination. We present solutions for two cases. One is the case of two unitary processes with general prior probabilities. The other is the case with a group symmetry: the processes comprise a projective representation of a finite group. In the latter case, we found that unambiguous discrimination is a kind of "all or nothing": the maximum success probability is either 0 or 1. We also closely analyze how entanglement with an auxiliary system improves discrimination performance.

  18. On the Error in QR Integration

    E-Print Network [OSTI]

    Dieci, Luca; Van Vleck, Erik

    2008-03-07

Society for Industrial and Applied Mathematics, Vol. 46, No. 3, pp. 1166-1189: "On the Error in QR Integration", Luca Dieci and Erik S. Van Vleck. Abstract: An important change of variables for a linear time-varying system x′ = A(t)x, t ≥ 0, is that induced... diag(X) is the matrix comprising the diagonal part of X, the rest being all 0's; upp(X) is the matrix comprising the upper triangular part of X, the rest being all 0's; and low(X) is the matrix comprising the strictly lower triangular part of X, the rest being all 0's.
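The diag/upp/low operators quoted in the abstract can be sketched directly; treating upp(X) as the strictly upper triangular part (an assumption made here for illustration) makes the three parts sum back to X:

```python
import numpy as np

def diag_part(X):
    """Matrix comprising the diagonal part of X, the rest all 0's."""
    return np.diag(np.diag(X))

def upp(X):
    """Strictly upper triangular part of X (assumed convention)."""
    return np.triu(X, k=1)

def low(X):
    """Strictly lower triangular part of X, the rest all 0's."""
    return np.tril(X, k=-1)

X = np.arange(9.0).reshape(3, 3)
recomposed = low(X) + diag_part(X) + upp(X)   # equals X under this split
```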

  19. Modeling of Diesel Exhaust Systems: A methodology to better simulate...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Modeling of Diesel Exhaust Systems: A methodology to better simulate soot reactivity. Discussed...

  20. New Methodologies for Analysis of Premixed Charge Compression...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

New Methodologies for Analysis of Premixed Charge Compression Ignition Engines. Presentation given at...

  1. Biopower Report Presents Methodology for Assessing the Value...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Biopower Report Presents Methodology for Assessing the Value of Co-Firing Biomass in Pulverized Coal Plants.

  2. Methodology for Estimating Reductions of GHG Emissions from Mosaic...

    Open Energy Info (EERE)

Tool Summary: Methodology for Estimating Reductions of GHG Emissions from Mosaic Deforestation.

  3. Hydrogen Program Goal-Setting Methodologies Report to Congress...

    Broader source: Energy.gov (indexed) [DOE]

This report to Congress, published in August 2006, focuses on the methodologies used by the DOE Hydrogen Program for goal-setting.

  4. Application of Random Vibration Theory Methodology for Seismic...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Application of Random Vibration Theory Methodology for Seismic Soil-Structure Interaction Analysis.

  5. Barr Engineering Statement of Methodology Rosemount Wind Turbine...

    Energy Savers [EERE]

Barr Engineering Statement of Methodology: Rosemount Wind Turbine Simulations by Truescape Visual Reality, DOE/EA-1791 (May 2010).

  6. Methodology for Carbon Accounting of Grouped Mosaic and Landscape...

    Open Energy Info (EERE)

Methodology for Carbon Accounting of Grouped Mosaic and Landscape-scale REDD Projects. "This methodology sets...

  7. Validation of Hydrogen Exchange Methodology on Molecular Sieves...

    Office of Environmental Management (EM)

Validation of Hydrogen Exchange Methodology on Molecular Sieves for Tritium Removal from Contaminated Water.

  8. Rain sampling device

    DOE Patents [OSTI]

    Nelson, D.A.; Tomich, S.D.; Glover, D.W.; Allen, E.V.; Hales, J.M.; Dana, M.T.

    1991-05-14

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of the precipitation from the chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device. 11 figures.

  9. Rain sampling device

    DOE Patents [OSTI]

    Nelson, Danny A. (Richland, WA); Tomich, Stanley D. (Richland, WA); Glover, Donald W. (Prosser, WA); Allen, Errol V. (Benton City, WA); Hales, Jeremy M. (Kennewick, WA); Dana, Marshall T. (Richland, WA)

    1991-01-01

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of said precipitation from said chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device.

  10. Bolstered Error Estimation Ulisses Braga-Neto a,c

    E-Print Network [OSTI]

    Braga-Neto, Ulisses

Excerpt: ... the bolstered error estimators proposed in this paper, as part of a larger library for classification and error estimation ... Bolstering has a direct geometric interpretation and can be easily applied to any classification rule as smoothed error estimation. In some important cases, such as a linear classification rule with a Gaussian ...

  11. A Taxonomy of Number Entry Error Sarah Wiseman

    E-Print Network [OSTI]

    Subramanian, Sriram

Sarah Wiseman, UCLIC, MPEB, Malet Place, London, WC1E 7JE. Excerpt: ... and the subsequent process of creating a taxonomy of errors from the information gathered. A total of 345 errors were ... These codes are then organised into a taxonomy similar to that of Zhang et al. (2004). We show how ...

  12. Critical infrastructure systems of systems assessment methodology.

    SciTech Connect (OSTI)

    Sholander, Peter E.; Darby, John L.; Phelan, James M.; Smith, Bryan; Wyss, Gregory Dane; Walter, Andrew; Varnado, G. Bruce; Depoy, Jennifer Mae

    2006-10-01

    Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies that separately consider physical security and cyber security. This research has developed a risk assessment methodology that explicitly accounts for both physical and cyber security, while preserving the traditional security paradigm of detect, delay, and respond. This methodology also accounts for the condition that a facility may be able to recover from or mitigate the impact of a successful attack before serious consequences occur. The methodology uses evidence-based techniques (which are a generalization of probability theory) to evaluate the security posture of the cyber protection systems. Cyber threats are compared against cyber security posture using a category-based approach nested within a path-based analysis to determine the most vulnerable cyber attack path. The methodology summarizes the impact of a blended cyber/physical adversary attack in a conditional risk estimate where the consequence term is scaled by a ''willingness to pay'' avoidance approach.

  13. A method for the quantification of model form error associated with physical systems.

    SciTech Connect (OSTI)

    Wallen, Samuel P.; Brake, Matthew Robert

    2014-03-01

    In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.

  14. A New Project Execution Methodology; Integrating Project Management Principles with Quality Project Execution Methodologies

    E-Print Network [OSTI]

    Schriner, Jesse J.

    2008-07-25

Thesis excerpt (table of contents): The ... Approach; The ITIL Approach; Quality Project Methodologies Summary. Cited: Six Sigma for IT Management, Van Haren Publishing, 2006. The main purpose of this book is to introduce both Six Sigma and the Information Technology Infrastructure Library (ITIL) and then integrate the two methodologies for application...

  15. Systematic Comparison of Operating Reserve Methodologies: Preprint

    SciTech Connect (OSTI)

    Ibanez, E.; Krad, I.; Ela, E.

    2014-04-01

Operating reserve requirements are a key component of modern power systems, and they contribute to maintaining reliable operations with minimum economic impact. No universal method exists for determining reserve requirements, so there is a need for a thorough study and performance comparison of the different existing methodologies. Increasing penetrations of variable generation (VG) on electric power systems are poised to increase system uncertainty and variability, and thus the need for additional reserve also increases. This paper presents background information on operating reserve and its relationship to VG. A consistent comparison of three methodologies to calculate regulating and flexibility reserve in systems with VG is performed.

  16. Analytic Study of Performance of Error Estimators for Linear Discriminant Analysis with Applications in Genomics 

    E-Print Network [OSTI]

    Zollanvari, Amin

    2012-02-14

Thesis front matter (December 2010; Major Subject: Electrical Engineering; committee includes Aniruddha Datta and Guy L. Curry; Head of Department, Costas N. Georghiades). List of tables includes: I. Minimum sample size n (n0 = n1 = n) for desired (n, 0.5) in the univariate case; II. Genes selected using the validity-goodness model selection...

  17. Integrating human related errors with technical errors to determine causes behind offshore accidents

    E-Print Network [OSTI]

    Aamodt, Agnar

Excerpt: ... errors were embedded as an integral part of the oil-well drilling operation. ... The method is based on a knowledge model of the oil-well drilling process, and aims to reduce the amount of non-productive time (NPT) during oil-well drilling; NPT exhibits a much lower declining trend than ...

  18. In Search of a Taxonomy for Classifying Qualitative Spreadsheet Errors

    E-Print Network [OSTI]

    Przasnyski, Zbigniew; Seal, Kala Chand

    2011-01-01

    Most organizations use large and complex spreadsheets that are embedded in their mission-critical processes and are used for decision-making purposes. Identification of the various types of errors that can be present in these spreadsheets is, therefore, an important control that organizations can use to govern their spreadsheets. In this paper, we propose a taxonomy for categorizing qualitative errors in spreadsheet models that offers a framework for evaluating the readiness of a spreadsheet model before it is released for use by others in the organization. The classification was developed based on types of qualitative errors identified in the literature and errors committed by end-users in developing a spreadsheet model for Panko's (1996) "Wall problem". Closer inspection of the errors reveals four logical groupings of the errors creating four categories of qualitative errors. The usability and limitations of the proposed taxonomy and areas for future extension are discussed.

  19. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    SciTech Connect (OSTI)

    Beckerman, M.; Jones, J.P.

    1999-02-01

We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval: the predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.

  20. Methodology Strati ed sampling design of 400 households in four Phoenix neighborhoods

    E-Print Network [OSTI]

    Hall, Sharon J.

    Endorsement of the New Ecological Paradigm (NEP): A revised NEP Scale. Journal of Social Issues 56(3): 425

  1. DIESEL AEROSOL SAMPLING METHODOLOGY -CRC E-43 TECHNICAL SUMMARY AND CONCLUSIONS

    E-Print Network [OSTI]

    Minnesota, University of

Excerpt: ... the over-the-road trucking industry due to their power, durability and efficiency. However, like other sources of combustion pollution, Diesels emit exhaust gases and particulates that are subject to regulation by State and Federal ... Particulate emissions from internal combustion engines have traditionally been regulated solely on the basis of total ...

  2. k=10 GS PC TPDA GES Average SHD Results -Child -Sample Size 500

    E-Print Network [OSTI]

    Brown, Laura E.

Figure residue (bar charts): average Structural Hamming Distance (SHD) results for the Child and Child3 networks at sample size 500, error bars = +/- std. dev., comparing MMHC, OR1/OR2 (k = 5, 10, 20), SC, GS, PC, TPDA, and GES.

  3. Sample Proficiency Test exercise

    SciTech Connect (OSTI)

    Alcaraz, A; Gregg, H; Koester, C

    2006-02-05

The current format of the OPCW proficiency tests sends multiple sets of 2 samples to an analysis laboratory; in each set, one is identified as a sample and the other as a blank. This differs from how an OPCW designated laboratory would receive authentic samples (a set of three containers, none identified, consisting of the authentic sample, a control sample, and a blank sample). This exercise was designed to test how reporting would work if proficiency tests were conducted under that authentic-sample scheme. As such, this is not an official OPCW proficiency test, and the attached report is one method by which LLNL might report its analyses under a more realistic testing scheme. The title on the report, "Report of the Umpteenth Official OPCW Proficiency Test", is therefore meaningless and provides a bit of whimsy for the analysts and readers of the report.

  4. Quantum Error Correction with magnetic molecules

    E-Print Network [OSTI]

    José J. Baldoví; Salvador Cardona-Serra; Juan M. Clemente-Juan; Luis Escalera-Moreno; Alejandro Gaita-Ariño; Guillermo Mínguez Espallargas

    2014-08-22

    Quantum algorithms often assume independent spin qubits to produce trivial $|\\uparrow\\rangle=|0\\rangle$, $|\\downarrow\\rangle=|1\\rangle$ mappings. This can be unrealistic in many solid-state implementations with sizeable magnetic interactions. Here we show that the lower part of the spectrum of a molecule containing three exchange-coupled metal ions with $S=1/2$ and $I=1/2$ is equivalent to nine electron-nuclear qubits. We derive the relation between spin states and qubit states in reasonable parameter ranges for the rare earth $^{159}$Tb$^{3+}$ and for the transition metal Cu$^{2+}$, and study the possibility to implement Shor's Quantum Error Correction code on such a molecule. We also discuss recently developed molecular systems that could be adequate from an experimental point of view.

  5. Sample Environments at Sector 30

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Two standard sample holder designs are shown: a copper sample holder from ARS, and a custom-design aluminum sample holder.

  6. Turbulent mixing in ducts, theory and experiment application to aerosol single point sampling 

    E-Print Network [OSTI]

    Langari, Abdolreza

    1997-01-01

    The Environmental Protection Agency (EPA) has announced rules for continuous emissions monitoring (CEM) of stacks and ducts in nuclear facilities. EPA has recently approved use of Alternative Reference Methodologies (ARM) for air sampling in nuclear...

  7. Methodology Water Harvesting Measurements with Biomimetic

    E-Print Network [OSTI]

    Barthelat, Francois

Poster excerpt: "Water Harvesting Measurements with Biomimetic Surfaces", Zi Jun Wang and Prof. Anne ... Objectives: identify parameters that affect the water harvesting efficiencies of different surfaces; optimize the experimental ... Water is one of the most essential natural resources. The easy accessibility of water ...

  8. Response Surface Methodology Its application to automotive

    E-Print Network [OSTI]

    Awtar, Shorya

Slide outline: I. Introduction and basis of RSM: 1. History of RSM; 2. What RSM is; 3. Why RSM; 4. The least-squares method; 5. Design of Experiments (DOE). II. Its application to automotive suspension designs: 1. Size optimization for beam stiffeners. (Toyota Central R&D Labs., Inc.) History of RSM: 1951, Box ...

  9. Methodology for Prototyping Increased Levels of Automation

    E-Print Network [OSTI]

    Valasek, John

Excerpt: a methodology for prototyping increased levels of automation for spacecraft rendezvous functions ... higher levels of automation than previous NASA vehicles, due to program requirements for automation, including Automated Rendezvous ... the allocation of authority between humans and computers (i.e., automation) is a prime driver for cost, safety, and mission ...

  10. Microbiomic Signatures of Psoriasis: Feasibility and Methodology

    E-Print Network [OSTI]

    Statnikov, Alexander

Alexander Statnikov et al. Excerpt: Psoriasis is a common chronic inflammatory disease of the skin. We sought to use bacterial ... of psoriasis result in significant accuracy ranging from 0.75 to 0.89 AUC, depending on the classification task.

  11. GROUNDWATER ASSESSMENT METHODOLOGY C. P. Kumar

    E-Print Network [OSTI]

    Kumar, C.P.

C. P. Kumar, Scientist 'F', National Institute of Hydrology. Excerpt: ... is groundwater resources. Due to the uneven distribution of rainfall in both time and space, the surface water ... on development of groundwater resources. The simultaneous development of groundwater, especially through dug wells ...

  12. Methodological Review Data integration and genomic medicine

    E-Print Network [OSTI]

    Petropoulos, Michalis

Methodological Review: Data integration and genomic medicine. Brenton Louie, Peter Mork, et al. 2006. Abstract: Genomic medicine aims to revolutionize health care by applying our growing understanding ... the opportunities of genomic medicine as well as identify the informatics challenges in this domain. We also review ...

  13. Methodology to remediate a mixed waste site

    SciTech Connect (OSTI)

    Berry, J.B.

    1994-08-01

In response to the need for a comprehensive and consistent approach to the complex issue of mixed waste management, a generalized methodology for remediation of a mixed waste site has been developed. The methodology is based on requirements set forth in the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and the Resource Conservation and Recovery Act (RCRA) and incorporates "lessons learned" from process design, remediation methodologies, and remediation projects. The methodology is applied to the treatment of 32,000 drums of mixed waste sludge at the Oak Ridge K-25 Site. Process technology options are developed and evaluated, first with regard to meeting system requirements and then with regard to CERCLA performance criteria. The following process technology options are investigated: (1) no action, (2) separation of hazardous and radioactive species, (3) dewatering, (4) drying, and (5) solidification/stabilization. The first two options were eliminated from detailed consideration because they did not meet the system requirements. A quantitative evaluation clearly showed that, based on system constraints and project objectives, either dewatering or drying the mixed waste sludge was superior to the solidification/stabilization process option. The ultimate choice between the drying and the dewatering options will be made on the basis of a technical evaluation of the relative merits of proposals submitted by potential subcontractors.

  14. Theoretical analysis of reflected ray error from surface slope error and their application to the solar concentrated collector

    E-Print Network [OSTI]

    Huang, Weidong

    2011-01-01

Surface slope error of the concentrator is one of the main factors influencing the performance of solar concentrated collectors: it causes deviation of the reflected ray and reduces the intercepted radiation. This paper presents a general equation to calculate the standard deviation of the reflected ray error from the slope error through geometric optics, applies the equation to five kinds of solar concentrated reflector, and provides typical results. The results indicate that the slope error is transferred to the reflected ray more than 2-fold when the incidence angle is greater than 0. The equation for the reflected ray error fits all reflection surfaces in general and can also be applied to control the error when designing an abaxial optical system.
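The 2-fold transfer from slope error to reflected-ray error follows directly from the law of reflection: tilting the mirror normal by a small angle rotates the reflected ray by twice that angle at normal incidence. A minimal Monte Carlo sketch of this (ours, not the paper's; function and parameter names are illustrative) perturbs the surface normal with Gaussian slope errors and measures the resulting angular deviation of the reflected ray:

```python
import numpy as np

def reflect(d, n):
    """Reflect a unit direction d off a surface with unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

def transfer_factor(incidence_deg, slope_sigma, trials=20000, seed=0):
    """Monte Carlo ratio of RMS reflected-ray angular error to RMS slope
    error, for small Gaussian tilts of the mirror normal (small-angle
    approximation; slope_sigma is the per-axis slope error in radians)."""
    rng = np.random.default_rng(seed)
    theta = np.radians(incidence_deg)
    d = np.array([np.sin(theta), 0.0, -np.cos(theta)])   # incident ray
    r0 = reflect(d, np.array([0.0, 0.0, 1.0]))           # nominal reflected ray
    devs = np.empty(trials)
    for i in range(trials):
        ax, ay = rng.normal(0.0, slope_sigma, 2)         # random 2-axis tilt
        n = np.array([ax, ay, 1.0])
        n /= np.linalg.norm(n)
        devs[i] = np.arccos(np.clip(np.dot(reflect(d, n), r0), -1.0, 1.0))
    rms_slope = np.sqrt(2.0) * slope_sigma               # RMS total normal tilt
    return float(np.sqrt(np.mean(devs ** 2)) / rms_slope)
```

At normal incidence (`incidence_deg = 0`) the factor converges to 2; how the in-plane and out-of-plane slope components combine at oblique incidence is exactly what the paper's general equation quantifies.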

  15. Scheme for precise correction of orbit variation caused by dipole error field of insertion device

    SciTech Connect (OSTI)

    Nakatani, T.; Agui, A.; Aoyagi, H.; Matsushita, T.; Takao, M.; Takeuchi, M.; Yoshigoe, A.; Tanaka, H.

    2005-05-15

    We developed a scheme for precisely correcting the orbit variation caused by a dipole error field of an insertion device (ID) in a storage ring and investigated its performance. The key point for achieving the precise correction is to extract the variation of the beam orbit caused by the change of the ID error field from the observed variation. We periodically change parameters such as the gap and phase of the specified ID with a mirror-symmetric pattern over the measurement period to modulate the variation. The orbit variation is measured using conventional wide-frequency-band detectors and then the induced variation is extracted precisely through averaging and filtering procedures. Furthermore, the mirror-symmetric pattern enables us to independently extract the orbit variations caused by a static error field and by a dynamic one, e.g., an error field induced by the dynamical change of the ID gap or phase parameter. We built a time synchronization measurement system with a sampling rate of 100 Hz and applied the scheme to the correction of the orbit variation caused by the error field of an APPLE-2-type undulator installed in the SPring-8 storage ring. The result shows that the developed scheme markedly improves the correction performance and suppresses the orbit variation caused by the ID error field down to the order of submicron. This scheme is applicable not only to the correction of the orbit variation caused by a special ID, the gap or phase of which is periodically changed during an experiment, but also to the correction of the orbit variation caused by a conventional ID which is used with a fixed gap and phase.

  16. Error-eliminating rapid ultrasonic firing

    DOE Patents [OSTI]

    Borenstein, Johann (Ann Arbor, MI); Koren, Yoram (Ann Arbor, MI)

    1993-08-24

    A system for producing reliable navigation data for a mobile vehicle, such as a robot, combines multiple range samples to increase the "confidence" of the algorithm in the existence of an obstacle. At higher vehicle speed, it is crucial to sample each sensor quickly and repeatedly to gather multiple samples in time to avoid a collision. Erroneous data is rejected by delaying the issuance of an ultrasonic energy pulse by a predetermined wait-period, which may be different during alternate ultrasonic firing cycles. Consecutive readings are compared, and the corresponding data is rejected if the readings differ by more than a predetermined amount. The rejection rate for the data is monitored and the operating speed of the navigation system is reduced if the data rejection rate is increased. This is useful to distinguish and eliminate noise from the data which truly represents the existence of an article in the field of operation of the vehicle.
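The comparison-and-rejection logic described above can be sketched in a few lines. This is an illustrative reconstruction, not the patented firmware; the parameter names (`max_delta`, `max_reject_rate`) and thresholds are our assumptions:

```python
from collections import deque

class UltrasonicValidator:
    """Sketch of consecutive-reading comparison with rejection-rate
    monitoring, as described in the record above (hypothetical API)."""

    def __init__(self, max_delta=5.0, window=50, max_reject_rate=0.3):
        self.max_delta = max_delta          # max allowed jump between readings
        self.recent = deque(maxlen=window)  # rolling accept/reject history
        self.max_reject_rate = max_reject_rate
        self.last = None

    def accept(self, reading):
        """Return True if the reading is consistent with the previous one."""
        ok = self.last is None or abs(reading - self.last) <= self.max_delta
        self.recent.append(ok)
        self.last = reading
        return ok

    def reject_rate(self):
        if not self.recent:
            return 0.0
        return 1.0 - sum(self.recent) / len(self.recent)

    def should_slow_down(self):
        """Reduce vehicle speed when rejections (noise) dominate."""
        return self.reject_rate() > self.max_reject_rate
```

A noisy spike is rejected twice (once against the reading before it, once by the reading after it), while a genuine obstacle produces a run of mutually consistent readings, which is the distinction the patent exploits.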

  17. Error-eliminating rapid ultrasonic firing

    DOE Patents [OSTI]

    Borenstein, J.; Koren, Y.

    1993-08-24

A system for producing reliable navigation data for a mobile vehicle, such as a robot, combines multiple range samples to increase the "confidence" of the algorithm in the existence of an obstacle. At higher vehicle speed, it is crucial to sample each sensor quickly and repeatedly to gather multiple samples in time to avoid a collision. Erroneous data is rejected by delaying the issuance of an ultrasonic energy pulse by a predetermined wait-period, which may be different during alternate ultrasonic firing cycles. Consecutive readings are compared, and the corresponding data is rejected if the readings differ by more than a predetermined amount. The rejection rate for the data is monitored and the operating speed of the navigation system is reduced if the data rejection rate is increased. This is useful to distinguish and eliminate noise from the data which truly represents the existence of an article in the field of operation of the vehicle.

  18. A BASIS FOR MODIFYING THE TANK 12 COMPOSITE SAMPLING DESIGN

    SciTech Connect (OSTI)

    Shine, G.

    2014-11-25

The SRR sampling campaign to obtain residual solids material from the Savannah River Site (SRS) Tank Farm Tank 12 primary vessel resulted in obtaining appreciable material in all 6 planned source samples from the mound strata but only in 5 of the 6 planned source samples from the floor stratum. Consequently, the design of the compositing scheme presented in the Tank 12 Sampling and Analysis Plan, Pavletich (2014a), must be revised. Analytical Development of SRNL statistically evaluated the sampling uncertainty associated with using various compositing arrays and splitting one or more samples for compositing. The variance of the simple mean of composite sample concentrations is a reasonable standard to investigate the impact of the following sampling options. Composite Sample Design Option (a): Assign only 1 source sample from the floor stratum and 1 source sample from each of the mound strata to each of the composite samples. Each source sample contributes material to only 1 composite sample. Two source samples from the floor stratum would not be used. Composite Sample Design Option (b): Assign 2 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that one source sample from the floor must be used twice, with 2 composite samples sharing material from this particular source sample. All five source samples from the floor would be used. Composite Sample Design Option (c): Assign 3 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that several of the source samples from the floor stratum must be assigned to more than one composite sample. All 5 source samples from the floor would be used. Using fewer than 12 source samples will increase the sampling variability over that of the Basic Composite Sample Design, Pavletich (2013).
Considering the impact to the variance of the simple mean of the composite sample concentrations, the recommendation is to construct each sample composite using four or five source samples. Although the variance using 5 source samples per composite sample (Composite Sample Design Option (c)) was slightly less than the variance using 4 source samples per composite sample (Composite Sample Design Option (b)), there is no practical difference between those variances. This does not consider that the measurement error variance, which is the same for all composite sample design options considered in this report, will further dilute any differences. Composite Sample Design Option (a) had the largest variance for the mean concentration in the three composite samples and should be avoided. These results are consistent with Pavletich (2014b) which utilizes a low elevation and a high elevation mound source sample and two floor source samples for each composite sample. Utilizing the four source samples per composite design, Pavletich (2014b) utilizes aliquots of Floor Sample 4 for two composite samples.

  19. State discrimination with error margin and its locality

    E-Print Network [OSTI]

    A. Hayashi; T. Hashimoto; M. Horibe

    2008-07-10

    There are two common settings in a quantum-state discrimination problem. One is minimum-error discrimination where a wrong guess (error) is allowed and the discrimination success probability is maximized. The other is unambiguous discrimination where errors are not allowed but the inconclusive result "I don't know" is possible. We investigate discrimination problem with a finite margin imposed on the error probability. The two common settings correspond to the error margins 1 and 0. For arbitrary error margin, we determine the optimal discrimination probability for two pure states with equal occurrence probabilities. We also consider the case where the states to be discriminated are multipartite, and show that the optimal discrimination probability can be achieved by local operations and classical communication.
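The two limiting cases named above have well-known closed forms for two pure states with equal priors and overlap s = |⟨ψ₁|ψ₂⟩|: the Helstrom bound (1 + √(1 − s²))/2 for minimum-error discrimination (margin 1), and 1 − s for unambiguous discrimination (margin 0). A small sketch of these endpoints (the paper's contribution is the interpolation between them for arbitrary margins, which is not reproduced here):

```python
import math

def helstrom_success(overlap):
    """Minimum-error discrimination (error margin 1), equal priors:
    optimal success probability for two pure states with overlap s."""
    return 0.5 * (1.0 + math.sqrt(1.0 - overlap ** 2))

def unambiguous_success(overlap):
    """Unambiguous discrimination (error margin 0), equal priors:
    optimal probability of a conclusive (and then always correct) result."""
    return 1.0 - overlap
```

For any overlap the minimum-error success probability is at least the unambiguous one, so the margin-constrained optimum lies between these two curves.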

  20. Error models in quantum computation: an application of model selection

    E-Print Network [OSTI]

    Lucia Schwarz; Steven van Enk

    2013-09-04

    Threshold theorems for fault-tolerant quantum computing assume that errors are of certain types. But how would one detect whether errors of the "wrong" type occur in one's experiment, especially if one does not even know what type of error to look for? The problem is that for many qubits a full state description is impossible to analyze, and a full process description is even more impossible to analyze. As a result, one simply cannot detect all types of errors. Here we show through a quantum state estimation example (on up to 25 qubits) how to attack this problem using model selection. We use, in particular, the Akaike Information Criterion. The example indicates that the number of measurements that one has to perform before noticing errors of the wrong type scales polynomially both with the number of qubits and with the error size.
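The Akaike Information Criterion trades goodness of fit against model complexity: AIC = 2k − 2 ln L, with the smallest value winning. A minimal sketch of how an error-model comparison like the one described could be scored (the model names are hypothetical; the log-likelihoods would come from the state-estimation procedure):

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2 ln L (smaller is better)."""
    return 2.0 * n_params - 2.0 * log_likelihood

def select_model(candidates):
    """candidates: list of (name, log_likelihood, n_params) tuples.
    Returns the name of the model with the lowest AIC."""
    return min(candidates, key=lambda c: aic(c[1], c[2]))[0]
```

A more elaborate error model is preferred only when its log-likelihood gain exceeds one unit per extra parameter, which is what allows a finite number of measurements to flag errors of the "wrong" type.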

  1. Sampling system and method

    DOE Patents [OSTI]

    Decker, David L.; Lyles, Brad F.; Purcell, Richard G.; Hershey, Ronald Lee

    2013-04-16

    The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.

  2. IDENTIFICATION Your Sample Box

    E-Print Network [OSTI]

    Liskiewicz, Maciej

... to Virginia Tech Soil Testing Lab, 145 Smyth Hall (MC 0465), 185 Ag Quad Ln, Blacksburg VA 24061, in sturdy ... Routine test (P, K, Ca, Mg, Zn, Mn, Cu, Fe, B, and soluble salts): No Charge / $16.00. Organic Matter: $4.00 / $6.00. Fax ... Mail with soil sample and form; make check or money order payable to "Treasurer, Virginia Tech." COST PER SAMPLE

  3. Free Standing Soil Sample

    E-Print Network [OSTI]

    Stuart, Steven J.

Free Standing Soil Sample Kiosks. Clemson University Cooperative Extension Service Report. ... of Richland County, Jackie Kopack Jordan has partnered with local garden centers to provide free-standing soil sample collection sites. The free-standing kiosks are located at three local garden centers. Woodley ...

  4. A technique for human error analysis (ATHEANA)

    SciTech Connect (OSTI)

Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others]

    1996-05-01

Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.

  5. RESEARCH ARTICLE Minimization of divergence error in volumetric velocity

    E-Print Network [OSTI]

    Marusic, Ivan

RESEARCH ARTICLE: Minimization of divergence error in volumetric velocity measurements. Volumetric velocity measurements taken in incompressible fluids are typically hindered by a nonzero ...

  6. Investigating surety methodologies for cognitive systems.

    SciTech Connect (OSTI)

    Caudell, Thomas P. (University of New Mexico, Albuquerque, NM); Peercy, David Eugene; Mills, Kristy; Caldera, Eva

    2006-11-01

    Advances in cognitive science provide a foundation for new tools that promise to advance human capabilities with significant positive impacts. As with any new technology breakthrough, associated technical and non-technical risks are involved. Sandia has mitigated both technical and non-technical risks by applying advanced surety methodologies in such areas as nuclear weapons, nuclear reactor safety, nuclear materials transport, and energy systems. In order to apply surety to the development of cognitive systems, we must understand the concepts and principles that characterize the certainty of a system's operation as well as the risk areas of cognitive sciences. This SAND report documents a preliminary spectrum of risks involved with cognitive sciences, and identifies some surety methodologies that can be applied to potentially mitigate such risks. Some potential areas for further study are recommended. In particular, a recommendation is made to develop a cognitive systems epistemology framework for more detailed study of these risk areas and applications of surety methods and techniques.

  7. eGallon Methodology | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


  8. Transuranic waste characterization sampling and analysis plan

    SciTech Connect (OSTI)

    NONE

    1994-12-31

Los Alamos National Laboratory (the Laboratory) is located approximately 25 miles northwest of Santa Fe, New Mexico, situated on the Pajarito Plateau. Technical Area 54 (TA-54), one of the Laboratory's many technical areas, is a radioactive and hazardous waste management and disposal area located within the Laboratory's boundaries. The purpose of this transuranic waste characterization, sampling, and analysis plan (CSAP) is to provide a methodology for identifying, characterizing, and sampling approximately 25,000 containers of transuranic waste stored at Pads 1, 2, and 4, Dome 48, and the Fiberglass Reinforced Plywood Box Dome at TA-54, Area G, of the Laboratory. Transuranic waste currently stored at Area G was generated primarily from research and development activities, processing and recovery operations, and decontamination and decommissioning projects. This document was created to facilitate compliance with several regulatory requirements and program drivers that are relevant to waste management at the Laboratory, including concerns of the New Mexico Environment Department.

  9. Water Sample Concentrator

    ScienceCinema (OSTI)

    Idaho National Laboratory

    2010-01-08

Automated portable device that concentrates and packages a sample of suspected contaminated water for safe, efficient transport to a qualified analytical laboratory. This technology will help safeguard against pathogen contamination or chemical and biological ...

  10. Liquid scintillator sampling calorimetry 

    E-Print Network [OSTI]

    Dudgeon, R. Greg

    1994-01-01

    This research was supported by the Department of Energy to investigate a new sampling calorimeter technology for the high intensity regions of the Superconducting Supercollider. The technology involved using liquid scintillator filled glass tubes...

  11. Sample Changes and Issues

    Annual Energy Outlook [U.S. Energy Information Administration (EIA)]

    EIA-914 Survey and HPDI. Figure 2 shows how this could change apparent production. The blue line shows the reported sample production as it would normally be reported under the...

  12. Dissolution actuated sample container

    DOE Patents [OSTI]

    Nance, Thomas A.; McCoy, Frank T.

    2013-03-26

    A sample collection vial and process of using a vial is provided. The sample collection vial has an opening secured by a dissolvable plug. When dissolved, liquids may enter into the interior of the collection vial passing along one or more edges of a dissolvable blocking member. As the blocking member is dissolved, a spring actuated closure is directed towards the opening of the vial which, when engaged, secures the vial contents against loss or contamination.

  13. SAMPLING AND ANALYSIS PROTOCOLS

    SciTech Connect (OSTI)

    Jannik, T; P Fledderman, P

    2007-02-09

    Radiological sampling and analyses are performed to collect data for a variety of specific reasons covering a wide range of projects. These activities include: Effluent monitoring; Environmental surveillance; Emergency response; Routine ambient monitoring; Background assessments; Nuclear license termination; Remediation; Deactivation and decommissioning (D&D); and Waste management. In this chapter, effluent monitoring and environmental surveillance programs at nuclear operating facilities and radiological sampling and analysis plans for remediation and D&D activities will be discussed.

  14. Liquid sampling system

    DOE Patents [OSTI]

    Larson, L.L.

    1984-09-17

    A conduit extends from a reservoir through a sampling station and back to the reservoir in a closed loop. A jet ejector in the conduit establishes suction for withdrawing liquid from the reservoir. The conduit has a self-healing septum therein upstream of the jet ejector for receiving one end of a double-ended cannula, the other end of which is received in a serum bottle for sample collection. Gas is introduced into the conduit at a gas bleed between the sample collection bottle and the reservoir. The jet ejector evacuates gas from the conduit and the bottle and aspirates a column of liquid from the reservoir at a high rate. When the withdrawn liquid reaches the jet ejector the rate of flow therethrough reduces substantially and the gas bleed increases the pressure in the conduit for driving liquid into the sample bottle, the gas bleed forming a column of gas behind the withdrawn liquid column and interrupting the withdrawal of liquid from the reservoir. In the case of hazardous and toxic liquids, the sample bottle and the jet ejector may be isolated from the reservoir and may be further isolated from a control station containing remote manipulation means for the sample bottle and control valves for the jet ejector and gas bleed. 5 figs.

  15. Liquid sampling system

    DOE Patents [OSTI]

    Larson, Loren L. (Idaho Falls, ID)

    1987-01-01

    A conduit extends from a reservoir through a sampling station and back to the reservoir in a closed loop. A jet ejector in the conduit establishes suction for withdrawing liquid from the reservoir. The conduit has a self-healing septum therein upstream of the jet ejector for receiving one end of a double-ended cannula, the other end of which is received in a serum bottle for sample collection. Gas is introduced into the conduit at a gas bleed between the sample collection bottle and the reservoir. The jet ejector evacuates gas from the conduit and the bottle and aspirates a column of liquid from the reservoir at a high rate. When the withdrawn liquid reaches the jet ejector the rate of flow therethrough reduces substantially and the gas bleed increases the pressure in the conduit for driving liquid into the sample bottle, the gas bleed forming a column of gas behind the withdrawn liquid column and interrupting the withdrawal of liquid from the reservoir. In the case of hazardous and toxic liquids, the sample bottle and the jet ejector may be isolated from the reservoir and may be further isolated from a control station containing remote manipulation means for the sample bottle and control valves for the jet ejector and gas bleed.

  16. Mutual information, bit error rate and security in Wójcik's scheme

    E-Print Network [OSTI]

    Zhanjun Zhang

    2004-02-21

In this paper, the correct calculations of the mutual information of the whole transmission and the quantum bit error rate (QBER) are presented. Mistakes in the general conclusions concerning the mutual information, the quantum bit error rate (QBER), and the security in Wójcik's paper [Phys. Rev. Lett. 90, 157901 (2003)] are pointed out.

  17. Kernel Regression with Correlated Errors K. De Brabanter

    E-Print Network [OSTI]

Kernel Regression with Correlated Errors. K. De Brabanter, J. De Brabanter, J.A.K. Suykens. It is a well-known problem that obtaining a correct bandwidth in nonparametric regression is difficult ... support vector machines for regression. Keywords: nonparametric regression, correlated errors, short ...
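For context, the estimator in question can be as simple as a Nadaraya-Watson smoother; the difficulty the paper addresses is that with correlated errors, classical data-driven bandwidth selectors (cross-validation and its relatives) systematically undersmooth. A sketch with the bandwidth supplied explicitly (our simplified illustration, not the authors' method):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.
    The bandwidth is supplied by the caller: with correlated errors,
    automatic selectors (e.g. cross-validation) cannot be trusted."""
    x_train = np.asarray(x_train, float)
    y_train = np.asarray(y_train, float)
    out = []
    for x0 in np.atleast_1d(x_eval):
        w = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)  # kernel weights
        out.append(np.dot(w, y_train) / np.sum(w))            # weighted mean
    return np.array(out)
```

Because the estimate is a locally weighted average, correlated noise masquerades as fine-scale signal, which is why small cross-validated bandwidths chase the noise.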

  18. Ridge Regression Estimation Approach to Measurement Error Model

    E-Print Network [OSTI]

    Shalabh

Ridge Regression Estimation Approach to Measurement Error Model. A.K.Md. Ehsanes Saleh, Carleton ... of the regression parameters is ill-conditioned. We consider the Hoerl and Kennard type (1970) ridge regression (RR) modifications of the five quasi-empirical Bayes estimators of the regression parameters of a measurement error ...
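The Hoerl-Kennard ridge estimator referenced here replaces (X'X)⁻¹X'y with (X'X + kI)⁻¹X'y, trading a little bias for stability when X'X is ill-conditioned. A minimal sketch of plain ridge estimation (the quasi-empirical Bayes modifications in the paper are not reproduced):

```python
import numpy as np

def ridge_estimate(X, y, k):
    """Hoerl-Kennard ridge estimator: beta = (X'X + k I)^{-1} X'y.
    k = 0 recovers ordinary least squares; k > 0 shrinks the
    coefficients and regularises an ill-conditioned X'X."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
```

In a measurement error model the sampled X'X is noisy as well, which is one motivation for combining shrinkage with the empirical Bayes estimators the abstract mentions.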

  19. Solving LWE problem with bounded errors in polynomial time

    E-Print Network [OSTI]

    International Association for Cryptologic Research (IACR)

Solving LWE problem with bounded errors in polynomial time. Jintai Ding, Southern Chinese ... For what we call the learning with bounded errors (LWBE) problems, we can solve it with complexity O(n^D). Keywords ... this problem corresponds to the learning parity with noise (LPN) problem. There are several ways to solve ...

  20. ERROR-TOLERANT MULTI-MODAL SENSOR FUSION Farinaz Koushanfar*

    E-Print Network [OSTI]

    Potkonjak, Miodrag

ERROR-TOLERANT MULTI-MODAL SENSOR FUSION. Farinaz Koushanfar, Sasha Slijepcevic, Miodrag ... is multi-modal sensor fusion, where data from sensors of different modalities are combined in order ... applications, including multi-modal sensor fusion, is to ensure that all of the techniques and tools are error ...

  1. Fault-Tolerant Error Correction with the Gauge Color Code

    E-Print Network [OSTI]

    Benjamin J. Brown; Naomi H. Nickerson; Dan E. Browne

    2015-08-03

The gauge color code is a quantum error-correcting code with local syndrome measurements that, remarkably, admits a universal transversal gate set without the need for resource-intensive magic state distillation. A result of recent interest, proposed by Bombín, shows that the subsystem structure of the gauge color code admits an error-correction protocol that achieves tolerance to noisy measurements without the need for repeated measurements, so-called single-shot error correction. Here, we demonstrate the promise of single-shot error correction by designing a two-part decoder and investigate its performance. We simulate fault-tolerant error correction with the gauge color code by repeatedly applying our proposed error-correction protocol to deal with errors that occur continuously to the underlying physical qubits of the code over the duration that quantum information is stored. We estimate a sustainable error rate, i.e. the threshold for the long time limit, of ~0.31% for a phenomenological noise model using a simple decoding algorithm.

  2. Error detection through consistency checking Peng Gong* Lan Mu#

    E-Print Network [OSTI]

    Silver, Whendee

Error detection through consistency checking. Peng Gong, Lan Mu. Center for Assessment & Monitoring ... Hall, University of California, Berkeley, Berkeley, CA 94720-3110; gong@nature.berkeley.edu, mulan ... accessibility, and timeliness as recorded in the lineage data (Chen and Gong, 1998). Spatial error refers ...

  3. Analysis of Probabilistic Error Checking Procedures on Storage Systems

    E-Print Network [OSTI]

    Chen, Ing-Ray

    Analysis of Probabilistic Error Checking Procedures on Storage Systems ING-RAY CHEN AND I.-LING YEN Email: irchen@iie.ncku.edu.tw Conventionally, error checking on storage systems is performed on-the-fly (with probability 1) as the storage system is being accessed in order to improve the reliability

  4. ADJOINT AND DEFECT ERROR BOUNDING AND CORRECTION FOR FUNCTIONAL ESTIMATES

    E-Print Network [OSTI]

    Pierce, Niles A.

... decades. Integral functionals also arise in other aerospace areas, such as the calculation of radar cross ... a functional that results from residual errors in approximating the solution to the partial differential ... to handle flows with shocks; numerical experiments confirm 4th-order error estimates for a pressure integral ...

  5. Kinematic Error Correction for Minimally Invasive Surgical Robots

    E-Print Network [OSTI]

... in two likely sources of kinematic error: port displacement and instrument shaft flexion. For a quasi... To reach the surgical site near the chest wall, the instrument shaft applies significant torque to the port, ... and the instrument shaft to bend. These kinematic errors impair positioning of the robot and cause deviations from ...

  6. Methodology for Fine Art formulation applied to investment casting moulds 

    E-Print Network [OSTI]

    Ibrahim, Ahmad Rashdi Yan

    This research concerns the development of a methodology for formulation in Fine Art, Design and Craft practice. The methodology is applied to the choosing of formulations for bronze and glass investments casting moulds ...

  7. Proceedings on the Workshop on METHODOLOGIES FOR INTERNATIONAL

    E-Print Network [OSTI]

Proceedings of the Workshop on METHODOLOGIES FOR INTERNATIONAL COMPARISONS OF INDUSTRIAL ENERGY EFFICIENCY. Energy Use and Integrated Statistics Division, Energy Information Administration (EIA), US Department of Energy.

  8. Grid-scale Fluctuations and Forecast Error in Wind Power

    E-Print Network [OSTI]

    G. Bel; C. P. Connaughton; M. Toots; M. M. Bandi

    2015-03-29

The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_\tau$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_\zeta$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no a priori knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_\tau$) and the scaling error ($e_\zeta$).

  9. Grid-scale Fluctuations and Forecast Error in Wind Power

    E-Print Network [OSTI]

    Bel, G; Toots, M; Bandi, M M

    2015-01-01

The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_\tau$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_\zeta$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no a priori knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_\tau$) and the scaling error ($e_\zeta$).

  10. Using error correction to determine the noise model

    E-Print Network [OSTI]

    M. Laforest; D. Simon; J. -C. Boileau; J. Baugh; M. Ditty; R. Laflamme

    2007-01-25

Quantum error correcting codes have been shown to have the ability of making quantum information resilient against noise. Here we show that we can use quantum error correcting codes as diagnostics to characterise noise. The experiment is based on a three-bit quantum error correcting code carried out on a three-qubit nuclear magnetic resonance (NMR) quantum information processor. Utilizing both engineered and natural noise, the degree of correlations present in the noise affecting a two-qubit subsystem was determined. We measured a correlation factor of c = 0.5 ± 0.2 using the error correction protocol, and c = 0.3 ± 0.2 using a standard NMR technique based on coherence pathway selection. Although the error correction method demands precise control, the results demonstrate that the required precision is achievable in the liquid-state NMR setting.

  11. Error Control of Iterative Linear Solvers for Integrated Groundwater Models

    E-Print Network [OSTI]

    Dixon, Matthew; Brush, Charles; Chung, Francis; Dogrul, Emin; Kadir, Tariq

    2010-01-01

    An open problem that arises when using modern iterative linear solvers, such as the preconditioned conjugate gradient (PCG) method or Generalized Minimum RESidual method (GMRES) is how to choose the residual tolerance in the linear solver to be consistent with the tolerance on the solution error. This problem is especially acute for integrated groundwater models which are implicitly coupled to another model, such as surface water models, and resolve both multiple scales of flow and temporal interaction terms, giving rise to linear systems with variable scaling. This article uses the theory of 'forward error bound estimation' to show how rescaling the linear system affects the correspondence between the residual error in the preconditioned linear system and the solution error. Using examples of linear systems from models developed using the USGS GSFLOW package and the California State Department of Water Resources' Integrated Water Flow Model (IWFM), we observe that this error bound guides the choice of a prac...

  12. Waste Package Component Design Methodology Report

    SciTech Connect (OSTI)

    D.C. Mecham

    2004-07-12

This Executive Summary provides an overview of the methodology being used by the Yucca Mountain Project (YMP) to design waste packages and ancillary components. This summary information is intended for readers with general interest, but also provides technical readers a general framework surrounding a variety of technical details provided in the main body of the report. The purpose of this report is to document and ensure appropriate design methods are used in the design of waste packages and ancillary components (the drip shields and emplacement pallets). The methodology includes identification of necessary design inputs, justification of design assumptions, and use of appropriate analysis methods and computational tools. This design work is subject to ''Quality Assurance Requirements and Description''. The document is primarily intended for internal use and technical guidance for a variety of design activities. It is recognized that a wide audience, including project management, the U.S. Department of Energy (DOE), the U.S. Nuclear Regulatory Commission, and others, is interested in the design methods at various levels of detail; the report therefore covers a wide range of topics at varying levels of detail. Due to the preliminary nature of the design, readers can expect to encounter varied levels of detail in the body of the report. It is expected that technical information used as input to design documents will be verified and taken from the latest versions of reference sources given herein. This revision of the methodology report has evolved with changes in the waste package, drip shield, and emplacement pallet designs over many years and may be further revised as the design is finalized. Different components and analyses are at different stages of development. Some parts of the report are detailed, while other less detailed parts are likely to undergo further refinement.
The design methodology is intended to provide designs that satisfy the safety and operational requirements of the YMP. Four waste package configurations have been selected to illustrate the application of the methodology during the licensing process. These four configurations are the 21-pressurized water reactor absorber plate waste package (21-PWRAP), the 44-boiling water reactor waste package (44-BWR), the 5 defense high-level radioactive waste (HLW) DOE spent nuclear fuel (SNF) codisposal short waste package (5-DHLWDOE SNF Short), and the naval canistered SNF long waste package (Naval SNF Long). Design work for the other six waste packages will be completed at a later date using the same design methodology. These include the 24-boiling water reactor waste package (24-BWR), the 21-pressurized water reactor control rod waste package (21-PWRCR), the 12-pressurized water reactor waste package (12-PWR), the 5 defense HLW DOE SNF codisposal long waste package (5-DHLWDOE SNF Long), the 2 defense HLW DOE SNF codisposal waste package (2-MC012-DHLW), and the naval canistered SNF short waste package (Naval SNF Short). This report is only part of the complete design description. Other reports related to the design include the design reports, the waste package system description documents, manufacturing specifications, and numerous documents for the many detailed calculations. The relationships between this report and other design documents are shown in Figure 1.

  13. Quantum rejection sampling

    E-Print Network [OSTI]

    Maris Ozols; Martin Roetteler; Jérémie Roland

    2011-12-13

Rejection sampling is a well-known method to sample from a target distribution, given the ability to sample from a given distribution. The method was first formalized by von Neumann (1951) and has many applications in classical computing. We define a quantum analogue of rejection sampling: given a black box producing a coherent superposition of (possibly unknown) quantum states with some amplitudes, the problem is to prepare a coherent superposition of the same states, albeit with different target amplitudes. The main result of this paper is a tight characterization of the query complexity of this quantum state generation problem. We exhibit an algorithm, which we call quantum rejection sampling, and analyze its cost using semidefinite programming. Our proof of a matching lower bound is based on the automorphism principle, which allows one to symmetrize any algorithm over the automorphism group of the problem. Our main technical innovation is an extension of the automorphism principle to continuous groups that arise for quantum state generation problems where the oracle encodes unknown quantum states, instead of just classical data. Furthermore, we illustrate how quantum rejection sampling may be used as a primitive in designing quantum algorithms, by providing three different applications. We first show that it was implicitly used in the quantum algorithm for linear systems of equations by Harrow, Hassidim and Lloyd. Secondly, we show that it can be used to speed up the main step in the quantum Metropolis sampling algorithm by Temme et al. Finally, we derive a new quantum algorithm for the hidden shift problem of an arbitrary Boolean function and relate its query complexity to "water-filling" of the Fourier spectrum.
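The classical von Neumann scheme that this work quantizes can be sketched in a few lines: propose from an easy distribution, then accept with probability proportional to the target/envelope ratio (the triangular target density below is an illustrative choice, not from the paper).

```python
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M, rng):
    """Draw one sample from target_pdf via von Neumann rejection sampling.

    Requires the envelope condition target_pdf(x) <= M * proposal_pdf(x)
    for all x; on average M proposals are consumed per accepted sample.
    """
    while True:
        x = proposal_sample(rng)
        u = rng.random()
        if u * M * proposal_pdf(x) <= target_pdf(x):  # accept/reject test
            return x

# Example: sample the triangular density p(x) = 2x on [0, 1] using
# uniform proposals; the envelope constant M = 2 suffices.
rng = random.Random(42)
samples = [rejection_sample(lambda x: 2.0 * x,
                            lambda r: r.random(),
                            lambda x: 1.0,
                            M=2.0, rng=rng)
           for _ in range(20000)]
mean = sum(samples) / len(samples)   # true mean of p(x) = 2x is 2/3
```

The quantum analogue in the paper replaces the accept/reject coin with an amplitude rotation on a flag qubit followed by amplitude amplification.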

  14. ISHED1: Applying the LEM Methodology to Heat Exchanger Design

    E-Print Network [OSTI]

    Michalski, Ryszard S.

ISHED1: Applying the LEM Methodology to Heat Exchanger Design. Kenneth A. Kaufman, Ryszard S. Michalski. MLI 00-2, January 2000. Abstract: Evolutionary

  15. Methodology for Monthly Crude Oil Production Estimates

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  16. Update of Part 61 Impacts Analysis Methodology. Methodology report. Volume 1

    SciTech Connect (OSTI)

    Oztunali, O.I.; Roles, G.W.

    1986-01-01

    Under contract to the US Nuclear Regulatory Commission, the Envirosphere Company has expanded and updated the impacts analysis methodology used during the development of the 10 CFR Part 61 rule to allow improved consideration of the costs and impacts of treatment and disposal of low-level waste that is close to or exceeds Class C concentrations. The modifications described in this report principally include: (1) an update of the low-level radioactive waste source term, (2) consideration of additional alternative disposal technologies, (3) expansion of the methodology used to calculate disposal costs, (4) consideration of an additional exposure pathway involving direct human contact with disposed waste due to a hypothetical drilling scenario, and (5) use of updated health physics analysis procedures (ICRP-30). Volume 1 of this report describes the calculational algorithms of the updated analysis methodology.

  17. Fluid sampling system

    DOE Patents [OSTI]

    Houck, Edward D. (Idaho Falls, ID)

    1994-01-01

A fluid sampling system allows sampling of radioactive liquid without spillage. A feed tank is connected to a liquid transfer jet powered by a pumping chamber pressurized by compressed air. The liquid is pumped upwardly into a sampling jet of a venturi design having a lumen with an inlet, an outlet, a constricted middle portion, and a port located above the constricted middle portion. The liquid is passed under pressure through the constricted portion causing its velocity to increase and its pressure to be decreased, thereby preventing liquid from escaping. A septum sealing the port can be pierced by a two pointed hollow needle leading into a sample bottle also sealed by a pierceable septum affixed to one end. The bottle is evacuated by flow through the sample jet, cyclic variation in the sampler jet pressure periodically leaves the evacuated bottle with lower pressure than that of the port, thus causing solution to pass into the bottle. The remaining solution in the system is returned to the feed tank via a holding tank.
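The venturi effect that keeps liquid from escaping through the port follows from continuity and Bernoulli's equation; the quick check below uses illustrative dimensions and speeds, not values from the patent.

```python
# Continuity plus Bernoulli for the constricted middle portion of the lumen.
# All numbers are illustrative assumptions, not taken from the patent.
rho = 1000.0              # water-like liquid density, kg/m^3
v1 = 2.0                  # velocity entering the lumen, m/s
d1, d2 = 0.010, 0.005     # inlet and constricted-throat diameters, m

v2 = v1 * (d1 / d2) ** 2              # continuity: A1*v1 = A2*v2
dp = 0.5 * rho * (v2 ** 2 - v1 ** 2)  # Bernoulli: static pressure drop, Pa

# Halving the diameter quadruples the velocity, so the static pressure at
# the throat drops by roughly 0.3 bar -- the low-pressure region that
# draws solution toward the port rather than letting it escape.
```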

  18. Fluid sampling system

    DOE Patents [OSTI]

    Houck, E.D.

    1994-10-11

A fluid sampling system allows sampling of radioactive liquid without spillage. A feed tank is connected to a liquid transfer jet powered by a pumping chamber pressurized by compressed air. The liquid is pumped upwardly into a sampling jet of a venturi design having a lumen with an inlet, an outlet, a constricted middle portion, and a port located above the constricted middle portion. The liquid is passed under pressure through the constricted portion causing its velocity to increase and its pressure to be decreased, thereby preventing liquid from escaping. A septum sealing the port can be pierced by a two pointed hollow needle leading into a sample bottle also sealed by a pierceable septum affixed to one end. The bottle is evacuated by flow through the sample jet, cyclic variation in the sampler jet pressure periodically leaves the evacuated bottle with lower pressure than that of the port, thus causing solution to pass into the bottle. The remaining solution in the system is returned to the feed tank via a holding tank. 4 figs.

  19. Adaptive Non-Linear Sampling Method for Accurate Flow Size Measurement

    E-Print Network [OSTI]

    Chen, Yan

A smaller sampling rate is utilized when the counter value is large, while a larger sampling rate is employed for a smaller counter value, reducing errors in the estimation of small-size flows, which is important for many applications like worm estimation. Instead of statically pre-configuring the sampling rate, ANLS dynamically adjusts the sampling rate
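The counter-dependent sampling idea can be sketched with a Morris-style approximate counter, used here as a stand-in with the same flavor as ANLS rather than the paper's exact scheme: the sampling probability shrinks as the counter grows, so small flows are counted at (near) full rate while large flows consume few counter updates.

```python
import random

def adaptive_count(num_packets, rng, base=1.05):
    """Toy adaptive non-linear sampling of one flow.

    A packet is sampled with probability p(c) = base**(-c), where c is the
    current counter value, so the sampling rate falls as the counter grows.
    The unbiased flow-size estimate is sum_{i=0}^{c-1} 1/p(i)
    = (base**c - 1) / (base - 1)  (the classic Morris-counter estimator).
    """
    c = 0
    for _ in range(num_packets):
        if rng.random() < base ** (-c):   # sample with counter-dependent rate
            c += 1
    return c, (base ** c - 1.0) / (base - 1.0)

rng = random.Random(7)
results = [adaptive_count(1000, rng) for _ in range(500)]
counters = [c for c, _ in results]
mean_est = sum(est for _, est in results) / len(results)
```

A flow of 1000 packets ends up with a counter of only a few dozen, yet the estimates average out close to the true size.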

  20. Viscous sludge sample collector

    DOE Patents [OSTI]

    Beitel, George A [Richland, WA

    1983-01-01

    A vertical core sample collection system for viscous sludge. A sample tube's upper end has a flange and is attached to a piston. The tube and piston are located in the upper end of a bore in a housing. The bore's lower end leads outside the housing and has an inwardly extending rim. Compressed gas, from a storage cylinder, is quickly introduced into the bore's upper end to rapidly accelerate the piston and tube down the bore. The lower end of the tube has a high sludge entering velocity to obtain a full-length sludge sample without disturbing strata detail. The tube's downward motion is stopped when its upper end flange impacts against the bore's lower end inwardly extending rim.

  1. METHODOLOGY FOR PASSIVE ANALYSIS OF A UNIVERSITY INTERNET LINK / TALK 13 Methodology for Passive Analysis of a University

    E-Print Network [OSTI]

    California at San Diego, University of

METHODOLOGY FOR PASSIVE ANALYSIS OF A UNIVERSITY INTERNET LINK / TALK 13. Abstract--- Passive monitoring of Internet links can efficiently provide valuable data on a wide variety and research questions using our passive measurement methodology. Keywords--- Network Performance Monitoring

  2. DIGITAL TECHNOLOGY BUSINESS CASE METHODOLOGY GUIDE & WORKBOOK

    SciTech Connect (OSTI)

    Thomas, Ken; Lawrie, Sean; Hart, Adam; Vlahoplus, Chris

    2014-09-01

    Performance advantages of the new digital technologies are widely acknowledged, but it has proven difficult for utilities to derive business cases for justifying investment in these new capabilities. Lack of a business case is often cited by utilities as a barrier to pursuing wide-scale application of digital technologies to nuclear plant work activities. The decision to move forward with funding usually hinges on demonstrating actual cost reductions that can be credited to budgets and thereby truly reduce O&M or capital costs. Technology enhancements, while enhancing work methods and making work more efficient, often fail to eliminate workload such that it changes overall staffing and material cost requirements. It is critical to demonstrate cost reductions or impacts on non-cost performance objectives in order for the business case to justify investment by nuclear operators. This Business Case Methodology approaches building a business case for a particular technology or suite of technologies by detailing how they impact an operator in one or more of the three following areas: Labor Costs, Non-Labor Costs, and Key Performance Indicators (KPIs). Key to those impacts will be identifying where the savings are “harvestable,” meaning they result in an actual reduction in headcount and/or cost. The report consists of a Digital Technology Business Case Methodology Guide and an accompanying spreadsheet workbook that will enable the user to develop a business case.

  3. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    SciTech Connect (OSTI)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  4. Balancing aggregation and smoothing errors in inverse models

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Turner, A. J.; Jacob, D. J.

    2015-01-13

Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
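Method (1), grid coarsening, can be sketched directly: an aggregation matrix averages adjacent native-resolution elements, and mapping back to native resolution imposes a uniform-within-block prior, whose mismatch with the true state is the aggregation error (the toy state vector below is an illustrative assumption).

```python
import numpy as np

def coarsen_operator(n_native, block):
    """Aggregation matrix G mapping a native-resolution state vector of
    length n_native to n_native // block elements by averaging adjacent
    elements -- method (1), grid coarsening."""
    n_coarse = n_native // block
    G = np.zeros((n_coarse, n_native))
    for i in range(n_coarse):
        G[i, i * block:(i + 1) * block] = 1.0 / block
    return G

# A toy native state: a smooth background with one sharp local feature.
x_native = np.sin(np.linspace(0.0, np.pi, 64))
x_native[30:34] += 2.0                 # sharp local enhancement

G = coarsen_operator(64, block=8)
x_coarse = G @ x_native               # reduced state vector (length 8)
x_back = np.repeat(x_coarse, 8)       # prior: uniform within each block

# Aggregation error: the imposed prior cannot represent the sharp feature.
aggregation_error = np.abs(x_back - x_native)
```

The sharp feature is smeared across its whole block, which is exactly the local-feature loss the GMM reduction is designed to avoid.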

  5. Balancing aggregation and smoothing errors in inverse models

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Turner, A. J.; Jacob, D. J.

    2015-06-30

Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  6. Measuring worst-case errors in a robot workcell

    SciTech Connect (OSTI)

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K. [Sandia National Labs., Albuquerque, NM (United States). Intelligent Systems and Robotics Center

    1997-10-01

Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.

  7. Wind Power Forecasting Error Distributions: An International Comparison; Preprint

    SciTech Connect (OSTI)

    Hodge, B. M.; Lew, D.; Milligan, M.; Holttinen, H.; Sillanpaa, S.; Gomez-Lazaro, E.; Scharff, R.; Soder, L.; Larsen, X. G.; Giebel, G.; Flynn, D.; Dobschinski, J.

    2012-09-01

    Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.

  8. A complete Randomized Benchmarking Protocol accounting for Leakage Errors

    E-Print Network [OSTI]

    T. Chasseur; F. K. Wilhelm

    2015-07-09

Randomized Benchmarking allows one to efficiently and scalably characterize the average error of a unitary 2-design such as the Clifford group $\mathcal{C}$ on a physical candidate for quantum computation, as long as there are no non-computational leakage levels in the system. We investigate the effect of leakage errors on Randomized Benchmarking induced from an additional level per physical qubit and provide a modified protocol that allows one to derive reliable estimates for the error per gate in their presence. We assess the variance of the sequence fidelity, corresponding to the number of random sequences needed for valid fidelity estimation. Our protocol allows for gate-dependent error channels without being restricted to perturbations. We show that our protocol is compatible with Interleaved Randomized Benchmarking and extends to benchmarking of arbitrary gates. This setting is relevant for superconducting transmon qubits, among other systems.

  9. Honest Confidence Intervals for the Error Variance in Stepwise Regression

    E-Print Network [OSTI]

    Stine, Robert A.

    Honest Confidence Intervals for the Error Variance in Stepwise Regression Dean P. Foster and Robert alternatives are used. These simpler algorithms (e.g., forward or backward stepwise regression) obtain

  10. Servo control booster system for minimizing following error

    DOE Patents [OSTI]

    Wise, William L. (Mountain View, CA)

    1985-01-01

A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
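The by-exception structure can be sketched as a toy discrete-time update: a conventional proportional loop runs always, and a second correction loop engages only while the error is at or above the feedback resolution increment (the gains and the correction law below are illustrative assumptions, not the patent's circuit).

```python
def servo_step(position, command, delta_sr=0.01, kp=0.3, k2=1.0):
    """One control update.

    Conventional proportional control runs continuously; the second loop
    engages 'by exception' only when |command - position| >= delta_sr,
    and is transparent (contributes nothing) below that threshold.
    """
    error = command - position
    correction = kp * error                      # conventional loop
    if abs(error) >= delta_sr:                   # second loop engaged
        sign = 1.0 if error > 0 else -1.0
        correction += k2 * (error - sign * delta_sr)
    return position + correction

pos = 0.0
for _ in range(50):                              # step command of 1.0
    pos = servo_step(pos, command=1.0)
# After a handful of steps the error falls below delta_sr, the second
# loop disengages, and the conventional loop finishes the settling.
```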

  11. Removing Systematic Errors from Rotating Shadowband Pyranometer Data Frank Vignola

    E-Print Network [OSTI]

    Oregon, University of

of the pyranometer to briefly shade the pyranometer once a minute. Direct horizontal irradiance is calculated used in programs evaluating the performance of photovoltaic systems, and systematic errors in the data

  12. Error estimation and adaptive mesh refinement for aerodynamic flows

    E-Print Network [OSTI]

    Hartmann, Ralf

Error estimation and adaptive mesh refinement for aerodynamic flows Ralf Hartmann and Paul Houston, 38108 Braunschweig, Germany, Ralf.Hartmann@dlr.de; School of Mathematical Sciences, University

  13. MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS

    E-Print Network [OSTI]

    Hartmann, Ralf

MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS. RALF HARTMANN, of Scientific Computing, TU Braunschweig, Germany (Ralf.Hartmann@dlr.de).

  14. Error estimation and adaptive mesh refinement for aerodynamic flows

    E-Print Network [OSTI]

    Hartmann, Ralf

Error estimation and adaptive mesh refinement for aerodynamic flows Ralf Hartmann, Joachim Held, Lilienthalplatz 7, 38108 Braunschweig, Germany, e-mail: Ralf.Hartmann@dlr.de

  15. MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS

    E-Print Network [OSTI]

    Hartmann, Ralf

MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS. RALF HARTMANN, Germany (Ralf.Hartmann@dlr.de). ... quantity under consideration. However, in many

  16. Inflated applicants: Attribution errors in performance evaluation by professionals

    E-Print Network [OSTI]

    Swift, Samuel; Moore, Don; Sharek, Zachariah; Gino, Francesca

    2013-01-01

performance among applicants from each ''type'' of school and interview performance. Each school provided multi-year ... (PLOS ONE, July 2013, Volume 8, Issue 7, e69258: Attribution Errors in Performance)

  17. Wind Power Forecasting Error Distributions over Multiple Timescales: Preprint

    SciTech Connect (OSTI)

    Hodge, B. M.; Milligan, M.

    2011-03-01

    In this paper, we examine the shape of the persistence model error distribution for ten different wind plants in the ERCOT system over multiple timescales. Comparisons are made between the experimental distribution shape and that of the normal distribution.
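The persistence-model comparison can be sketched numerically: compute errors e_t = x_{t+1} - x_t and test how far their distribution departs from normal via excess kurtosis (the toy power series with occasional ramp events below is an illustrative assumption, not ERCOT data).

```python
import random

rng = random.Random(3)

# Toy wind power series: small Gaussian wiggles plus occasional ramp
# events -- the ramps are what fatten the forecast-error tails.
power, level = [], 0.5
for _ in range(20000):
    if rng.random() < 0.01:                       # occasional ramp event
        level = rng.random()
    level = min(1.0, max(0.0, level + 0.01 * rng.gauss(0, 1)))
    power.append(level)

# Persistence forecast: the next value is predicted equal to the current one,
# so the forecast error is simply the one-step difference.
errors = [power[t + 1] - power[t] for t in range(len(power) - 1)]

m = sum(errors) / len(errors)
var = sum((e - m) ** 2 for e in errors) / len(errors)
m4 = sum((e - m) ** 4 for e in errors) / len(errors)
excess_kurtosis = m4 / var ** 2 - 3.0   # 0 for a normal distribution
```

Mostly small errors punctuated by rare large ramps give a strongly leptokurtic distribution, mirroring the paper's finding that persistence errors are far from Gaussian.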

  18. On Student's 1908 Article "The Probable Error of a Mean"

    E-Print Network [OSTI]

    Kim, Jong-Min

    's "attention" resulted in a report, "The Application of the `Law of Error' to the work of the Brewery" dated No] and other records available in their Dublin brewery"; see Pearson 1939, p. 213.) Unable to find

  19. Performance optimizations for compiler-based error detection 

    E-Print Network [OSTI]

    Mitropoulou, Konstantina

    2015-06-29

    The trend towards smaller transistor technologies and lower operating voltages stresses the hardware and makes transistors more susceptible to transient errors. In future systems, performance and power gains will come ...

  20. Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications 

    E-Print Network [OSTI]

    Garcia, Tanya

    2012-10-19

    Many statistical models, like measurement error models, a general class of survival models, and a mixture data model with random censoring, are semiparametric where interest lies in estimating finite-dimensional parameters ...

  1. Error bars for linear and nonlinear neural network regression models

    E-Print Network [OSTI]

    Penny, Will

    Error bars for linear and nonlinear neural network regression models William D. Penny and Stephen J College of Science, Technology and Medicine, London SW7 2BT., U.K. w.penny@ic.ac.uk, s

  2. NOVELTY, CONFIDENCE & ERRORS IN CONNECTIONIST Stephen J. Roberts & William Penny

    E-Print Network [OSTI]

    Roberts, Stephen

NOVELTY, CONFIDENCE & ERRORS IN CONNECTIONIST SYSTEMS Stephen J. Roberts & William Penny Neural, Technology & Medicine London, UK s.j.roberts@ic.ac.uk, w.penny@ic.ac.uk April 21, 1997 Abstract Key words

  3. Predicting Intentional Tax Error Using Open Source Literature and Data

    E-Print Network [OSTI]

for each PUMS respondent (or agent), in certain line item/taxpayer categories, allowing us to construct dis- ... [Contents: Likelihood; Results of Meta-Analysis; Intentional Error in Line Items/Taxpayer Categories]

  4. Sampling Report for May-June, 2014 WIPP Samples

    Office of Environmental Management (EM)

LLNL-XXXX-XXXXX. Sampling Report for May-June, 2014 WIPP Samples. UNCLASSIFIED. Forensic Science Center, January 8, 2015.

  5. Suboptimal quantum-error-correcting procedure based on semidefinite programming

    E-Print Network [OSTI]

    Naoki Yamamoto; Shinji Hara; Koji Tsumura

    2006-06-13

In this paper, we consider a simplified error-correcting problem: for a fixed encoding process, to find a cascade connected quantum channel such that the worst fidelity between the input and the output is maximized. With the use of the one-to-one parametrization of quantum channels, a procedure for finding a suboptimal error-correcting channel based on semidefinite programming is proposed. The effectiveness of our method is verified by an example of the bit-flip channel decoding.

  6. TESLA-FEL 2009-07 Errors in Reconstruction of Difference Orbit

    E-Print Network [OSTI]

Contents: 1. Introduction; 2. Standard Least Squares Solution; 3. Error Emittance and Error Twiss Parameters. ... as the position of the reconstruction point changes, we will introduce error Twiss parameters, and invariant error in the point of interest has to be achieved by matching error Twiss parameters in this point to the desired

  7. A Taxonomy to Enable Error Recovery and Correction in Software Vilas Sridharan

    E-Print Network [OSTI]

    Kaeli, David R.

    A Taxonomy to Enable Error Recovery and Correction in Software Vilas Sridharan ECE Department years, reliability research has largely used the following taxonomy of errors: Undetected Errors Errors (CE). While this taxonomy is suitable to characterize hardware error detection and correction

  8. A simple real-word error detection and correction using local word bigram and trigram

    E-Print Network [OSTI]

A simple real-word error detection and correction using local word bigram and trigram Pratip bbcisical@gmail.com Abstract Spelling errors are broadly classified into two categories, namely non-word errors and real-word errors. In this paper a localized real-word error detection and correction method is proposed
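The local-bigram idea can be sketched as follows: for a token with a known confusion set, score each candidate by the product of its left and right bigram probabilities and keep the best (the tiny corpus, the hand-built confusion set, and the add-alpha smoothing are all illustrative assumptions, not the paper's resources).

```python
from collections import Counter

# Tiny toy corpus; a real system would use large n-gram counts.
corpus = ("i went to the bank to get some money "
          "she went to the store to buy some bread "
          "he went to the bank to cash a check").split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_prob(w1, w2, alpha=0.1):
    """Add-alpha smoothed P(w2 | w1)."""
    v = len(unigrams)
    return (bigrams[(w1, w2)] + alpha) / (unigrams[w1] + alpha * v)

# Hand-built confusion sets; in practice, edit-distance-1 real words.
CONFUSIONS = {"bang": {"bank", "bang"}, "bank": {"bank", "bang"}}

def correct_real_word(tokens):
    """Replace each token by the confusion-set member maximizing the
    product of its local left and right bigram probabilities
    (context taken from the original, uncorrected sentence)."""
    out = list(tokens)
    for i, w in enumerate(tokens):
        def score(x):
            left = bigram_prob(tokens[i - 1], x) if i > 0 else 1.0
            right = bigram_prob(x, tokens[i + 1]) if i + 1 < len(tokens) else 1.0
            return left * right
        for cand in CONFUSIONS.get(w, {w}):
            if score(cand) > score(out[i]):
                out[i] = cand
    return out

fixed = correct_real_word("he went to the bang to cash a check".split())
```

Here "bang" is a valid word, so a dictionary lookup would miss it; only the local bigram context "the _ to" reveals that "bank" fits better.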

  9. Compiler-Assisted Detection of Transient Memory Errors

    SciTech Connect (OSTI)

    Tavarageri, Sanket; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-06-09

The probability of bit flips in hardware memory systems is projected to increase significantly as memory systems continue to scale in size and complexity. Effective hardware-based error detection and correction requires that the complete data path, involving all parts of the memory system, be protected with sufficient redundancy. First, this may be costly to employ on commodity computing platforms and second, even on high-end systems, protection against multi-bit errors may be lacking. Therefore, augmenting hardware error detection schemes with software techniques is of considerable interest. In this paper, we consider software-level mechanisms to comprehensively detect transient memory faults. We develop novel compile-time algorithms to instrument application programs with checksum computation codes so as to detect memory errors. Unlike prior approaches that employ checksums on computational and architectural state, our scheme verifies every data access and works by tracking variables as they are produced and consumed. Experimental evaluation demonstrates that the proposed comprehensive error detection solution is viable as a completely software-only scheme. We also demonstrate that with limited hardware support, overheads of error detection can be further reduced.
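The produce/consume checksum tracking can be sketched at a high level (a Python stand-in for illustration; the paper's scheme is compile-time instrumentation of the application code itself, not a runtime wrapper class):

```python
import struct
import zlib

class GuardedValue:
    """Record a checksum when a value is produced; verify it on every
    consuming access -- mimicking the compiler-inserted checks described
    above for detecting transient memory errors."""

    def __init__(self, value):
        self._store(value)

    def _store(self, value):
        self._raw = bytearray(struct.pack("<d", value))  # value's memory image
        self._crc = zlib.crc32(self._raw)                # checksum at produce time

    def write(self, value):
        self._store(value)

    def read(self):
        if zlib.crc32(self._raw) != self._crc:           # check at consume time
            raise RuntimeError("transient memory error detected")
        return struct.unpack("<d", self._raw)[0]

x = GuardedValue(3.14)
ok = x.read()            # checksum matches: value consumed normally

x._raw[2] ^= 0x04        # simulate a single transient bit flip in memory
try:
    x.read()
    detected = False
except RuntimeError:
    detected = True      # the flipped bit no longer matches the checksum
```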

  10. Sample introducing apparatus and sample modules for mass spectrometer

    DOE Patents [OSTI]

    Thompson, Cyril V. (Knoxville, TN); Wise, Marcus B. (Kingston, TN)

    1993-01-01

    An apparatus for introducing gaseous samples from a wide range of environmental matrices into a mass spectrometer for analysis of the samples is described. Several sample preparing modules, including a real-time air monitoring module, a soil/liquid purge module, and a thermal desorption module, are individually and rapidly attachable to the sample introducing apparatus for supplying gaseous samples to the mass spectrometer. The sample introducing apparatus uses a capillary column for conveying the gaseous samples into the mass spectrometer and is provided with an open/split interface in communication with the capillary, and with a sample archiving port through which at least about 90 percent of the gaseous sample, carried in a mixture with an inert gas introduced into the apparatus, is separated from the minor portion of the mixture entering the capillary and is discharged from the apparatus.

  11. Sample introducing apparatus and sample modules for mass spectrometer

    DOE Patents [OSTI]

    Thompson, C.V.; Wise, M.B.

    1993-12-21

    An apparatus for introducing gaseous samples from a wide range of environmental matrices into a mass spectrometer for analysis of the samples is described. Several sample preparing modules, including a real-time air monitoring module, a soil/liquid purge module, and a thermal desorption module, are individually and rapidly attachable to the sample introducing apparatus for supplying gaseous samples to the mass spectrometer. The sample introducing apparatus uses a capillary column for conveying the gaseous samples into the mass spectrometer and is provided with an open/split interface in communication with the capillary, and with a sample archiving port through which at least about 90 percent of the gaseous sample, carried in a mixture with an inert gas introduced into the apparatus, is separated from the minor portion of the mixture entering the capillary and is discharged from the apparatus. 5 figures.

  12. Decoupled Sampling for Graphics Pipelines

    E-Print Network [OSTI]

    Ragan-Kelley, Jonathan Millar

    We propose a generalized approach to decoupling shading from visibility sampling in graphics pipelines, which we call decoupled sampling. Decoupled sampling enables stochastic supersampling of motion and defocus blur at ...

  13. Sample Environments at Sector 30

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sample holder designs are below. Aluminum sample holder, custom design (click drawing for PDF file) ...

  14. Fluid sampling apparatus and method

    DOE Patents [OSTI]

    Yeamans, D.R.

    1998-02-03

    Incorporation of a bellows in a sampling syringe eliminates ingress of contaminants, permits replication of amounts and compression of multiple sample injections, and enables remote sampling for off-site analysis. 3 figs.

  15. Soil sampling kit and a method of sampling therewith

    DOE Patents [OSTI]

    Thompson, Cyril V. (Knoxville, TN)

    1991-01-01

    A soil sampling device and a sample containment device for containing a soil sample are disclosed. In addition, a method is disclosed for taking a soil sample using the soil sampling device and soil sample containment device so as to minimize the loss of any volatile organic compounds contained in the soil sample prior to analysis. The soil sampling device comprises two close-fitting, longitudinal tubular members of suitable length, the inner tube having the outward end closed. With the inner closed tube withdrawn a selected distance, the outer tube can be inserted into the ground or other similar soft material to withdraw a sample of material for examination. The inner closed-end tube controls the volume of the sample taken and also serves to eject the sample. The soil sample containment device has a sealing member which is adapted to attach to an analytical apparatus which analyzes the volatile organic compounds contained in the sample. The soil sampling device in combination with the soil sample containment device allows an operator to obtain a soil sample containing volatile organic compounds while minimizing the loss of those compounds prior to analysis.

  16. Soil sampling kit and a method of sampling therewith

    DOE Patents [OSTI]

    Thompson, C.V.

    1991-02-05

    A soil sampling device and a sample containment device for containing a soil sample are disclosed. In addition, a method is disclosed for taking a soil sample using the soil sampling device and soil sample containment device so as to minimize the loss of any volatile organic compounds contained in the soil sample prior to analysis. The soil sampling device comprises two close-fitting, longitudinal tubular members of suitable length, the inner tube having the outward end closed. With the inner closed tube withdrawn a selected distance, the outer tube can be inserted into the ground or other similar soft material to withdraw a sample of material for examination. The inner closed-end tube controls the volume of the sample taken and also serves to eject the sample. The soil sample containment device has a sealing member which is adapted to attach to an analytical apparatus which analyzes the volatile organic compounds contained in the sample. The soil sampling device in combination with the soil sample containment device allows an operator to obtain a soil sample containing volatile organic compounds while minimizing the loss of those compounds prior to analysis. 11 figures.

  17. Monte Carlo sampling from the quantum state space. I

    E-Print Network [OSTI]

    Jiangwei Shang; Yi-Lin Seah; Hui Khoon Ng; David John Nott; Berthold-Georg Englert

    2015-04-27

    High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the standard strategies of rejection sampling, importance sampling, and Markov-chain sampling can be adapted to this context, where the samples must obey the constraints imposed by the positivity of the statistical operator. For a comparison of these sampling methods, we generate sample points in the probability space for two-qubit states probed with a tomographically incomplete measurement, and then use the sample for the calculation of the size and credibility of the recently-introduced optimal error regions [see New J. Phys. 15 (2013) 123026]. Another illustration is the computation of the fractional volume of separable two-qubit states.
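    The positivity-constrained rejection sampling described above can be illustrated in the simplest case, a single qubit, where rho >= 0 is equivalent to the Bloch vector lying inside the unit ball. This is a minimal sketch of the general strategy, not the paper's two-qubit probability-space sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices.
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def sample_qubit_states(n):
    """Rejection sampling: propose Bloch vectors in the cube, accept
    those inside the unit ball (exactly the rho >= 0 constraint)."""
    states = []
    while len(states) < n:
        r = rng.uniform(-1.0, 1.0, size=3)
        if r @ r <= 1.0:  # accept iff the state is positive semidefinite
            states.append(0.5 * (np.eye(2) + r[0] * SX + r[1] * SY + r[2] * SZ))
    return states

samples = sample_qubit_states(200)
# Every accepted sample is a valid state: positive and unit trace.
eigs_ok = all(np.linalg.eigvalsh(s).min() >= -1e-12 for s in samples)
trace_ok = all(abs(np.trace(s).real - 1.0) < 1e-12 for s in samples)
```

    In higher dimensions the acceptance rate of such naive proposals collapses, which is why the paper adapts importance and Markov-chain sampling as well.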

  18. Discrete Sampling Test Plan for the 200-BP-5 Operable Unit

    SciTech Connect (OSTI)

    Sweeney, Mark D.

    2010-02-04

    The Discrete Groundwater Sampling Project is conducted by the Pacific Northwest National Laboratory (PNNL) on behalf of CH2M HILL Plateau Remediation Company. The project is focused on delivering groundwater samples from prescribed horizons within select groundwater wells residing in the 200-BP-5 Operable Unit (200-BP-5 OU) on the Hanford Site. This document provides the scope, schedule, methodology, and other details of the PNNL discrete sampling effort.

  19. NID Copper Sample Analysis

    SciTech Connect (OSTI)

    Kouzes, Richard T.; Zhu, Zihua

    2011-09-12

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). This experiment requires the use of germanium isotopically enriched in 76Ge. The MAJORANA DEMONSTRATOR is a DOE and NSF funded project with a major science impact. The DEMONSTRATOR will utilize 76Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology, possibly one under development at Nonlinear Ion Dynamics (NID), will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. DOE is funding NID through an SBIR grant for development of their separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences Laboratory (EMSL), a DOE user facility at PNNL, has the required mass spectroscopy instruments for making isotopic measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR and for the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL in January 2011 for isotopic analysis as a test of the NID technology. The results of that analysis are reported here. A second sample of isotopically separated copper was provided by NID to PNNL in August 2011 for isotopic analysis as a test of the NID technology. The results of that analysis are also reported here.

  20. Germanium-76 Sample Analysis

    SciTech Connect (OSTI)

    Kouzes, Richard T.; Engelhard, Mark H.; Zhu, Zihua

    2011-04-01

    The MAJORANA DEMONSTRATOR is a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). The DEMONSTRATOR will utilize 76Ge from Russia, and the first one-gram sample was received from the supplier for analysis on April 24, 2011. The Environmental Molecular Sciences Laboratory, a DOE user facility at PNNL, was used to make the required isotopic and chemical purity measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR. The results of this first analysis are reported here.

  1. Small sample feature selection 

    E-Print Network [OSTI]

    Sima, Chao

    2007-09-17

    that the correction factor is a function of the dimensionality. The estimated standard deviations for the bolstering kernels are thus given by: σ_i = d̂(y_i)/α_{p,i}, for i = 1, ..., n. (2.8) Clearly, as the number of samples in the training data increases, the standard de... ... the DeArray software of the National Human Genome Research Institute calculates a multi-faceted quality metric for each spot [25]. This quality problem is a result of imperfections in RNA preparation, hybridization to the arrays, scanning, and also...

  2. September 2004 Water Sampling

    Office of Legacy Management (LM)


  3. September 2004 Water Sampling

    Office of Legacy Management (LM)


  4. 2003 CBECS Sample Design

    Gasoline and Diesel Fuel Update (EIA)


  5. September 2004 Water Sampling

    Office of Legacy Management (LM)


  6. September 2004 Water Sampling

    Office of Legacy Management (LM)


  7. September 2004 Water Sampling

    Office of Legacy Management (LM)


  8. September 2004 Water Sampling

    Office of Legacy Management (LM)


  9. September 2004 Water Sampling

    Office of Legacy Management (LM)


  10. September 2004 Water Sampling

    Office of Legacy Management (LM)


  11. September 2004 Water Sampling

    Office of Legacy Management (LM)


  12. September 2004 Water Sampling

    Office of Legacy Management (LM)


  13. Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors 

    E-Print Network [OSTI]

    Sarkar, Abhra

    2014-06-24

    Although the literature on measurement error problems is quite extensive, solutions to even the most fundamental measurement error problems like density deconvolution and regression with errors-in-covariates are available ...

  14. Fault tree analysis of commonly occurring medication errors and methods to reduce them 

    E-Print Network [OSTI]

    Cherian, Sandhya Mary

    1994-01-01

    -depth analysis of over two hundred actual medication error incidents. These errors were then classified according to type, in an attempt at deriving a generalized fault tree for the medication delivery system that contributed to errors. This generalized fault...

  15. EFFECT OF MANUFACTURING ERRORS ON FIELD QUALITY OF DIPOLE MAGNETS FOR THE SSC

    E-Print Network [OSTI]

    Meuser, R.B.

    2010-01-01

    in Fig. 2. Table 2: Manufacturing Error Mode Groups ... 13-16, 1985 ... EFFECT OF MANUFACTURING ERRORS ON FIELD QUALITY ... Mag Note-27: EFFECT OF MANUFACTURING ERRORS ON FIELD QUALITY

  16. LOW-POWER METHODOLOGY FOR FAULT TOLERANT NANOSCALE MEMORY DESIGN

    E-Print Network [OSTI]

    Kim, Seokjoong

    2012-01-01

    Outline ... Methodologies for Low-Power Memory ... Scaling for SEU-Tolerance in Low-Power Memories ... 7.1 Soft-

  17. Quality Guidline for Cost Estimation Methodology for NETL Assessments...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Power Plant Cost Estimation Methodology. Quality Guidelines for Energy System Studies, April 2011. Disclaimer: This report was prepared as an account of work...

  18. Energy Efficiency Standards for Refrigerators in Brazil: A Methodology...

    Open Energy Info (EERE)

    Energy Efficiency Standards for Refrigerators in Brazil: A Methodology for Impact Evaluation. Tool Summary. Name: Energy Efficiency Standards...

  19. National Academies Criticality Methodology and Assessment Video (Text Version)

    Broader source: Energy.gov [DOE]

    This is a text version of the "National Academies Criticality Methodology and Assessment" video presented at the Critical Materials Workshop, held on April 3, 2012 in Arlington, Virginia.

  20. Security Risk Assessment Methodologies (RAM) for Critical Infrastructu...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Security Risk Assessment Methodologies (RAM) for Critical Infrastructures. Sandia National Laboratories...

  1. Egs Exploration Methodology Project Using the Dixie Valley Geothermal...

    Open Energy Info (EERE)

    EGS Exploration Methodology Project Using the Dixie Valley Geothermal System, Nevada, Status Update. OpenEI Reference Library. Conference...

  2. Survey of Transmission Cost Allocation Methodologies for Regional Transmission Organizations

    SciTech Connect (OSTI)

    Fink, S.; Porter, K.; Mudd, C.; Rogers, J.

    2011-02-01

    The report presents transmission cost allocation methodologies for reliability transmission projects, generation interconnection, and economic transmission projects for all Regional Transmission Organizations.

  3. After a Disaster: Lessons in Survey Methodology from Hurricane Katrina

    E-Print Network [OSTI]

    Swanson, David A; Henderson, Tammy; Sirois, Maria; Chen, Angela; Airriess, Christopher; Banks, David

    2009-01-01

    of Labor. (2005). Effects of Hurricane Katrina on local area ... Survey Methodology from Hurricane Katrina. Tammy L. Henderson ... to study the impact of Hurricane Katrina. The current

  4. Northern Marshall Islands radiological survey: sampling and analysis summary

    SciTech Connect (OSTI)

    Robison, W.L.; Conrado, C.L.; Eagle, R.J.; Stuart, M.L.

    1981-07-23

    A radiological survey was conducted in the Northern Marshall Islands to document remaining external gamma exposures from nuclear tests conducted at Enewetak and Bikini Atolls. An additional program was later included to obtain terrestrial and marine samples for radiological dose assessment for current or potential atoll inhabitants. This report is the first of a series summarizing the results from the terrestrial and marine surveys. The sample collection and processing procedures and the general survey methodology are discussed; a summary of the collected samples and radionuclide analyses is presented. Over 5400 samples were collected from the 12 atolls and 2 islands and prepared for analysis, including 3093 soil, 961 vegetation, 153 animal, 965 fish composite samples (average of 30 fish per sample), 101 clam, 50 lagoon water, 15 cistern water, 17 groundwater, and 85 lagoon sediment samples. A complete breakdown by sample type, atoll, and island is given here. The total numbers of analyses by radionuclide are 8840 for 241Am, 6569 for 137Cs, 4535 for 239+240Pu, 4431 for 90Sr, 1146 for 238Pu, 269 for 241Pu, and 114 each for 239Pu and 240Pu. A complete breakdown by sample category, atoll or island, and radionuclide is also included.

  5. Assessment of outdoor radiofrequency electromagnetic field exposure through hotspot localization using kriging-based sequential sampling

    SciTech Connect (OSTI)

    Aerts, Sam, E-mail: sam.aerts@intec.ugent.be; Deschrijver, Dirk; Verloock, Leen; Dhaene, Tom; Martens, Luc; Joseph, Wout

    2013-10-15

    In this study, a novel methodology is proposed to create heat maps that accurately pinpoint the outdoor locations with elevated exposure to radiofrequency electromagnetic fields (RF-EMF) in an extensive urban region (or, hotspots), and that would allow local authorities and epidemiologists to efficiently assess the locations and spectral composition of these hotspots, while at the same time developing a global picture of the exposure in the area. Moreover, no prior knowledge about the presence of radiofrequency radiation sources (e.g., base station parameters) is required. After building a surrogate model from the available data using kriging, the proposed method makes use of an iterative sampling strategy that selects new measurement locations at spots which are deemed to contain the most valuable information—inside hotspots or in search of them—based on the prediction uncertainty of the model. The method was tested and validated in an urban subarea of Ghent, Belgium with a size of approximately 1 km{sup 2}. In total, 600 input and 50 validation measurements were performed using a broadband probe. Five hotspots were discovered and assessed, with maximum total electric-field strengths ranging from 1.3 to 3.1 V/m, satisfying the reference levels issued by the International Commission on Non-Ionizing Radiation Protection for exposure of the general public to RF-EMF. Spectrum analyzer measurements in these hotspots revealed five radiofrequency signals with a relevant contribution to the exposure. The radiofrequency radiation emitted by 900 MHz Global System for Mobile Communications (GSM) base stations was always dominant, with contributions ranging from 45% to 100%. Finally, validation of the subsequent surrogate models shows high prediction accuracy, with the final model featuring an average relative error of less than 2 dB (factor 1.26 in electric-field strength), a correlation coefficient of 0.7, and a specificity of 0.96. 
    Highlights:
    • We present an iterative measurement and modeling method for outdoor RF-EMF exposure.
    • Hotspots are rapidly identified and accurately characterized.
    • An accurate graphical representation, or heat map, is created using kriging.
    • Random validation shows good correlation (0.7) and low relative errors (2 dB).
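    The iterative, uncertainty-driven sampling loop described in this abstract can be sketched in one dimension with a simple Gaussian-process (kriging) surrogate. The synthetic field, kernel, length scale, and sampling budget below are all invented for illustration; the real study works in 2-D on measured RF-EMF data:

```python
import numpy as np

rng = np.random.default_rng(1)

def field(x):
    # Synthetic "exposure" field standing in for measured RF-EMF values.
    return np.sin(3 * x) + 0.5 * np.sin(7 * x)

def kernel(a, b, ell=0.3):
    # Squared-exponential covariance between location sets a and b.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

grid = np.linspace(0.0, 2.0, 200)          # candidate measurement spots
X = list(rng.uniform(0.0, 2.0, size=3))    # three initial measurements

for _ in range(12):                        # sequential sampling loop
    Xa = np.array(X)
    K = kernel(Xa, Xa) + 1e-8 * np.eye(len(Xa))
    Ks = kernel(grid, Xa)
    # Kriging predictive variance at every candidate location.
    var = 1.0 - np.einsum("ij,ij->i", Ks @ np.linalg.inv(K), Ks)
    X.append(grid[np.argmax(var)])         # measure where we know least

# Final surrogate: kriging mean over the grid from all measurements.
Xa = np.array(X)
K = kernel(Xa, Xa) + 1e-8 * np.eye(len(Xa))
mean = kernel(grid, Xa) @ np.linalg.solve(K, field(Xa))
max_err = float(np.max(np.abs(mean - field(grid))))
```

    The paper's hotspot search additionally biases selection toward regions predicted to have high exposure, not just high uncertainty.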

  6. Entanglement sampling and applications

    E-Print Network [OSTI]

    Frédéric Dupuis; Omar Fawzi; Stephanie Wehner

    2015-06-15

    A natural measure for the amount of quantum information that a physical system E holds about another system A = A_1,...,A_n is given by the min-entropy Hmin(A|E). Specifically, the min-entropy measures the amount of entanglement between E and A, and is the relevant measure when analyzing a wide variety of problems ranging from randomness extraction in quantum cryptography, decoupling used in channel coding, to physical processes such as thermalization or the thermodynamic work cost (or gain) of erasing a quantum system. As such, it is a central question to determine the behaviour of the min-entropy after some process M is applied to the system A. Here we introduce a new generic tool relating the resulting min-entropy to the original one, and apply it to several settings of interest, including sampling of subsystems and measuring in a randomly chosen basis. The sampling results lead to new upper bounds on quantum random access codes, and imply the existence of "local decouplers". The results on random measurements yield new high-order entropic uncertainty relations with which we prove the optimality of cryptographic schemes in the bounded quantum storage model.
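    As a small concrete anchor for the entropic quantity above: in the special case with no side information E, the min-entropy reduces to H_min(A) = -log2 of the largest eigenvalue of the state. The conditional H_min(A|E) of the abstract generally requires a semidefinite program; this sketch covers only the unconditional case:

```python
import numpy as np

def min_entropy(rho):
    """H_min of a state with no conditioning: -log2 of its largest eigenvalue."""
    return -np.log2(np.linalg.eigvalsh(rho).max())

maximally_mixed = np.eye(4) / 4.0        # two qubits, nothing known
pure = np.diag([1.0, 0.0, 0.0, 0.0])     # fully determined state

h_mixed = min_entropy(maximally_mixed)   # maximal: 2 bits for two qubits
h_pure = min_entropy(pure)               # zero: outcome already certain
```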

  7. NID Copper Sample Analysis

    SciTech Connect (OSTI)

    Kouzes, Richard T.; Zhu, Zihua

    2011-02-01

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0???). This experiment requires the use of germanium isotopically enriched in 76Ge. The DEMONSTRATOR will utilize 76Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology under development at Nonlinear Ion Dynamics (NID) will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. The MAJORANA DEMONSTRATOR is a DOE and NSF funded project with a major science impact. DOE is funding NID through an SBIR grant for development of their separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences facility (EMSL), a DOE user facility at PNNL, has the required mass spectroscopy instruments for making these isotopic measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR and for the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL for isotopic analysis as a test of the NID technology. The results of that analysis are reported here.

  8. Sample holder with optical features

    DOE Patents [OSTI]

    Milas, Mirko; Zhu, Yimei; Rameau, Jonathan David

    2013-07-30

    A sample holder for holding a sample to be observed for research purposes, particularly in a transmission electron microscope (TEM), generally includes an external alignment part for directing a light beam in a predetermined beam direction, a sample holder body in optical communication with the external alignment part and a sample support member disposed at a distal end of the sample holder body opposite the external alignment part for holding a sample to be analyzed. The sample holder body defines an internal conduit for the light beam and the sample support member includes a light beam positioner for directing the light beam between the sample holder body and the sample held by the sample support member.

  9. Structure of minimum-error quantum state discrimination

    E-Print Network [OSTI]

    Joonwoo Bae

    2013-07-19

    Distinguishing different quantum states is a fundamental task with practical applications in information processing. Despite the efforts devoted so far, however, strategies for optimal discrimination are known only for specific examples. We here consider the problem of minimum-error quantum state discrimination, in which the goal is to minimize the average error. We show the general structure of minimum-error state discrimination as well as useful properties for deriving analytic solutions. Based on the general structure, we present a geometric formulation of the problem, which can be applied to cases where the quantum state geometry is clear. We also introduce equivalence classes of sets of quantum states in terms of minimum-error discrimination: sets of quantum states in an equivalence class share the same guessing probability. In particular, for qubit states, where the state geometry is given by the Bloch sphere, we illustrate that for an arbitrary set of qubit states, minimum-error state discrimination with equal prior probabilities can be solved analytically; that is, the optimal measurement and the guessing probability are obtained explicitly.
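    For the two-state case the minimum-error problem does have a well-known closed form, the Helstrom bound, P_guess = (1 + ||p0·rho0 − p1·rho1||_1)/2, which is easy to evaluate numerically. This is a sketch of that classical special case; the abstract's contribution concerns the harder many-state structure:

```python
import numpy as np

def helstrom(p0, rho0, p1, rho1):
    """Optimal (minimum-error) guessing probability for two states."""
    gamma = p0 * rho0 - p1 * rho1
    # Trace norm of a Hermitian matrix = sum of absolute eigenvalues.
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1.0 + trace_norm)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2.0)

rho0 = np.outer(ket0, ket0)
rho1 = np.outer(ket1, ket1)
rho_plus = np.outer(ket_plus, ket_plus)

p_orth = helstrom(0.5, rho0, 0.5, rho1)       # orthogonal: always right
p_same = helstrom(0.5, rho0, 0.5, rho0)       # identical: coin flip
p_mix = helstrom(0.5, rho0, 0.5, rho_plus)    # overlapping states
```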

  10. Economic penalties of problems and errors in solar energy systems

    SciTech Connect (OSTI)

    Raman, K.; Sparkes, H.R.

    1983-01-01

    Experience with a large number of installed solar energy systems in the HUD Solar Program has shown that a variety of problems and design/installation errors have occurred in many solar systems, sometimes resulting in substantial additional costs for repair and/or replacement. In this paper, the effect of problems and errors on the economics of solar energy systems is examined. A method is outlined for doing this in terms of selected economic indicators. The method is illustrated by a simple example of a residential solar DHW system. An example of an installed, instrumented solar energy system in the HUD Solar Program is then discussed. Detailed results are given for the effects of the problems and errors on the cash flow, cost of delivered heat, discounted payback period, and life-cycle cost of the solar energy system. Conclusions are drawn regarding the most suitable economic indicators for showing the effects of problems and errors in solar energy systems. A method is outlined for deciding on the maximum justifiable expenditure for maintenance on a solar energy system with problems or errors.
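    One of the indicators named above, the discounted payback period, is simple to compute. The sketch below uses invented costs and savings purely to show how a mid-life repair (a "problem or error") pushes the payback out:

```python
def discounted_payback(installed_cost, annual_savings, discount_rate,
                       repair_costs=()):
    """Years until cumulative discounted savings, net of repair costs
    (given as (year, cost) pairs), recover the installed cost."""
    repairs = dict(repair_costs)
    cumulative = -installed_cost
    for year in range(1, 101):
        net = annual_savings - repairs.get(year, 0.0)
        cumulative += net / (1.0 + discount_rate) ** year
        if cumulative >= 0.0:
            return year
    return None  # never pays back within 100 years

# Hypothetical solar DHW system: $3000 installed, $400/yr savings, 5% rate.
base = discounted_payback(3000, 400, 0.05)
# Same system with an $800 repair in year 3 due to a design error.
with_repair = discounted_payback(3000, 400, 0.05, repair_costs=[(3, 800)])
```

    With these invented figures the repair delays payback by three years, which is exactly the kind of economic penalty the paper quantifies.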

  11. Goal-oriented local a posteriori error estimator for H(div)

    E-Print Network [OSTI]

    2011-12-15

    Dec 15, 2011 ... error estimator measures the pollution effect from the outside region of D ... error estimators which account for and quantify the pollution effect.

  12. V-228: RealPlayer Buffer Overflow and Memory Corruption Error...

    Broader source: Energy.gov (indexed) [DOE]

    a memory corruption error and execute arbitrary code on the target system. IMPACT: Access control error. SOLUTION: The vendor recommends upgrading to version 16.0.3.51.

  13. Fluid sampling tool

    DOE Patents [OSTI]

    Johnston, Roger G. (Los Alamos, NM); Garcia, Anthony R. E. (Espanola, NM); Martinez, Ronald K. (Santa Cruz, NM)

    2001-09-25

    The invention includes a rotatable tool for collecting fluid through the wall of a container. The tool includes a fluid collection section with a cylindrical shank having an end portion for drilling a hole in the container wall when the tool is rotated, and a threaded portion for tapping the hole in the container wall. A passageway in the shank in communication with at least one radial inlet hole in the drilling end and an opening at the end of the shank is adapted to receive fluid from the container. The tool also includes a cylindrical chamber affixed to the end of the shank opposite to the drilling portion thereof for receiving and storing fluid passing through the passageway. The tool also includes a flexible, deformable gasket that provides a fluid-tight chamber to confine kerf generated during the drilling and tapping of the hole. The invention also includes a fluid extractor section for extracting fluid samples from the fluid collecting section.

  14. Single point aerosol sampling: Evaluation of mixing and probe performance in a nuclear stack

    SciTech Connect (OSTI)

    Rodgers, J.C.; Fairchild, C.I.; Wood, G.O.; Ortiz, C.A.; Muyshondt, A.

    1996-01-01

    Alternative reference methodologies have been developed for sampling radionuclides from stacks and ducts that differ from the methods previously required by the United States Environmental Protection Agency (EPA). These alternative reference methodologies have recently been approved by the U.S. EPA for use in lieu of the current standard techniques. The standard EPA methods are prescriptive in the selection of sampling locations and in the design of sampling probes, whereas the alternative reference methodologies are performance driven. Tests were conducted in a stack at Los Alamos National Laboratory to demonstrate the efficacy of some aspects of the alternative reference methodologies. Coefficients of variation of velocity, tracer gas, and aerosol particle profiles were determined at three sampling locations. Results showed that the numerical criteria placed upon the coefficients of variation by the alternative reference methodologies were met at sampling stations located 9 and 14 stack diameters from the flow entrance, but not at a location 1.5 diameters downstream from the inlet. Experiments were conducted to characterize the transmission of 10 µm aerodynamic diameter liquid aerosol particles through three types of sampling probes. The transmission ratio (ratio of aerosol concentration at the probe exit plane to the concentration in the free stream) was 107% for a 113 L min⁻¹ (4-cfm) anisokinetic shrouded probe, but only 20% for an isokinetic probe that follows the existing EPA standard requirements. A specially designed isokinetic probe showed a transmission ratio of 63%. The shrouded probe performance would conform to the alternative reference methodologies criteria; however, the isokinetic probes would not. 13 refs., 9 figs., 1 tab.
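The numerical criteria mentioned in this abstract are limits on the coefficient of variation (COV) of velocity, tracer-gas, and particle-concentration profiles across the stack cross-section. A sketch of the COV check (the profile values and the 20% limit are hypothetical; the actual acceptance limits come from the alternative reference methodologies themselves):

```python
import numpy as np

def profile_cov(values):
    """Coefficient of variation (%) of a measured profile
    across traverse points in the stack cross-section."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical velocity profiles (m/s) at two sampling stations.
near_inlet = [4.1, 6.8, 9.5, 10.2, 5.0]   # 1.5 diameters downstream
well_mixed = [7.9, 8.1, 8.3, 8.0, 8.2]    # 9 diameters downstream

LIMIT = 20.0  # acceptance limit (%), assumed for illustration
ok_near = profile_cov(near_inlet) <= LIMIT
ok_far = profile_cov(well_mixed) <= LIMIT
```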

  15. Reducing collective quantum state rotation errors with reversible dephasing

    SciTech Connect (OSTI)

    Cox, Kevin C.; Norcia, Matthew A.; Weiner, Joshua M.; Bohnet, Justin G.; Thompson, James K.

    2014-12-29

    We demonstrate that reversible dephasing via inhomogeneous broadening can greatly reduce collective quantum state rotation errors, and observe the suppression of rotation errors by more than 21 dB in the context of collective population measurements of the spin states of an ensemble of 2.1×10⁵ laser-cooled and trapped ⁸⁷Rb atoms. The large reduction in rotation noise enables direct resolution of spin state populations 13(1) dB below the fundamental quantum projection noise limit. Further, the spin state measurement projects the system into an entangled state with 9.5(5) dB of directly observed spectroscopic enhancement (squeezing) relative to the standard quantum limit, whereas no enhancement would have been obtained without the suppression of rotation errors.

  16. Characterization of quantum dynamics using quantum error correction

    E-Print Network [OSTI]

    S. Omkar; R. Srikanth; S. Banerjee

    2015-01-27

    Characterizing noisy quantum processes is important to quantum computation and communication (QCC), since quantum systems are generally open. To date, all methods of characterization of quantum dynamics (CQD), typically implemented by quantum process tomography, are off-line, i.e., QCC and CQD are not concurrent, as they require distinct state preparations. Here we introduce a method, "quantum error correction based characterization of dynamics", in which the initial state is any element from the code space of a quantum error correcting code that can protect the state from arbitrary errors acting on the subsystem subjected to the unknown dynamics. The statistics of stabilizer measurements, with possible unitary pre-processing operations, are used to characterize the noise, while the observed syndrome can be used to correct the noisy state. Our method requires at most $2(4^n-1)$ configurations to characterize arbitrary noise acting on $n$ qubits.

  17. The knowledge-based content management application design methodology

    E-Print Network [OSTI]

    Chen-Burger, Yun-Heh (Jessica)

    … of the development life cycle of a software system based on the ICONS framework. … The result of the work presented is a life cycle methodology. It allows the organisation that implements a knowledge management software … Two chapters are the presentation of the general context for life cycle and management methodologies.

  18. Design Methodology for Unmanned Aerial Vehicle (UAV) Team Coordination

    E-Print Network [OSTI]

    Cummings, Mary "Missy"

    Design Methodology for Unmanned Aerial Vehicle (UAV) Team Coordination, by F.B. da Silva, S.D. Scott, and M.L. Cummings (e-mail: halab@mit.edu). Executive Summary: Unmanned Aerial Vehicle (UAV) systems, despite …

  19. Methodological advances in computer simulation of biomolecular systems

    E-Print Network [OSTI]

    Fischer, Wolfgang

    Methodological advances in computer simulation of biomolecular systems, by Wilfred F. van Gunsteren. Computer simulation of the dynamics of biomolecular systems by the molecular dynamics technique yields … computing power. Recent advances in simulation methodology, e.g. to rapidly compute many free energies from …

  20. Methodology Guidelines on Life Cycle Assessment of Photovoltaic Electricity

    E-Print Network [OSTI]

    Report IEA-PVPS T12-03:2011, Methodology Guidelines on Life Cycle Assessment of Photovoltaic Electricity, IEA PVPS Task 12, Subtask 20, LCA.

  1. APPENDIX B: RADIOLOGICAL DATA METHODOLOGIES 1998 SITE ENVIRONMENTAL REPORTB-1

    E-Print Network [OSTI]

    APPENDIX B: Radiological Data Methodologies, 1998 Site Environmental Report. 1. DOSE CALCULATION - ATMOSPHERIC RELEASE PATHWAY: Dispersion of airborne … and distance. Facility-specific radionuclide release rates (in Ci per year) were also used. All annual site …

  2. Design Methodology for a Very High Frequency Resonant Boost Converter

    E-Print Network [OSTI]

    Perreault, Dave

    Design Methodology for a Very High Frequency Resonant Boost Converter, by Justin M. Burkhart, Roman … Abstract: This document introduces a design methodology for a resonant boost converter topology … demonstrated for boost conversion at frequencies up to 110 MHz using a resonant boost converter topology in [5] …

  3. ORNL/TM-2008/105 Cost Methodology for Biomass

    E-Print Network [OSTI]

    Pennycook, Steve

    ORNL/TM-2008/105, Cost Methodology for Biomass Feedstocks: Herbaceous Crops and Agricultural … Resource and Engineering Systems, Environmental Sciences Division … 2.1.1 Integrated Biomass Supply Analysis and Logistics Model (IBSAL).

  4. Factorization of correspondence and camera error for unconstrained dense correspondence applications

    SciTech Connect (OSTI)

    Knoblauch, D; Hess-Flores, M; Duchaineau, M; Kuester, F

    2009-09-29

    A correspondence and camera error analysis for dense correspondence applications such as structure from motion is introduced. This provides error introspection, opening up the possibility of adaptively and progressively applying more expensive correspondence and camera parameter estimation methods to reduce these errors. The presented algorithm evaluates the given correspondences and camera parameters based on an error generated through simple triangulation. This triangulation is based on the given dense correspondences, which are not constrained to epipolar geometry, and the estimated camera parameters. This provides an error map without requiring any information about the perfect solution or making assumptions about the scene. The resulting error is a combination of correspondence and camera parameter errors. A simple, fast low/high-pass filter error factorization is introduced, allowing the separation of correspondence error from camera error. Further analysis of the resulting error maps allows efficient iterative improvement of correspondences and cameras.
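The "error generated through simple triangulation" can be illustrated with linear (DLT) triangulation of one correspondence followed by reprojection: with consistent correspondences and cameras the error is near zero, while correspondence or camera errors inflate it. The camera matrices and 3D point below are hypothetical, not from the paper:

```python
import numpy as np

def triangulate_dlt(P0, P1, x0, x1):
    """Linear (DLT) triangulation of one correspondence.
    P0, P1: 3x4 camera projection matrices; x0, x1: (u, v) pixels."""
    A = np.vstack([
        x0[0] * P0[2] - P0[0],
        x0[1] * P0[2] - P0[1],
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]          # homogeneous 3D point

def reprojection_error(P, X, x):
    """Pixel distance between projected X and measured x."""
    proj = P @ X
    proj = proj[:2] / proj[2]
    return float(np.linalg.norm(proj - x))

# Two hypothetical cameras separated along the x axis.
P0 = np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])
x0 = (P0 @ X_true)[:2] / (P0 @ X_true)[2]
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
X_hat = triangulate_dlt(P0, P1, x0, x1)
err = reprojection_error(P0, X_hat, x0) + reprojection_error(P1, X_hat, x1)
```

Evaluating `err` per pixel over a dense correspondence field yields the kind of error map the abstract describes.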

  5. Methodology for Preliminary Design of Electrical Microgrids

    SciTech Connect (OSTI)

    Jensen, Richard P.; Stamp, Jason E.; Eddy, John P.; Henry, Jordan M; Munoz-Ramos, Karina; Abdallah, Tarek

    2015-09-30

    Many critical loads rely on simple backup generation to provide electricity in the event of a power outage. An Energy Surety Microgrid™ (ESM) can protect against outages caused by single generator failures to improve reliability. An ESM will also provide a host of other benefits, including integration of renewable energy, fuel optimization, and maximizing the value of energy storage. The ESM concept includes a categorization for microgrid value propositions, and quantifies how the investment can be justified during either grid-connected or utility outage conditions. In contrast with many approaches, the ESM approach explicitly sets requirements based on unlikely extreme conditions, including the need to protect against determined cyber adversaries. During the United States (US) Department of Defense (DOD)/Department of Energy (DOE) Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) effort, the ESM methodology was successfully used to develop the preliminary designs, which directly supported the contracting, construction, and testing for three military bases. Acknowledgements: Sandia National Laboratories and the SPIDERS technical team would like to acknowledge the following for help in the project: * Mike Hightower, who has been the key driving force for Energy Surety Microgrids * Juan Torres and Abbas Akhil, who developed the concept of microgrids for military installations * Merrill Smith, U.S. Department of Energy SPIDERS Program Manager * Ross Roley and Rich Trundy from U.S. Pacific Command * Bill Waugaman and Bill Beary from U.S. Northern Command * Melanie Johnson and Harold Sanborn of the U.S. Army Corps of Engineers Construction Engineering Research Laboratory * Experts from the National Renewable Energy Laboratory, Idaho National Laboratory, Oak Ridge National Laboratory, and Pacific Northwest National Laboratory

  6. Probabilistic fatigue methodology and wind turbine reliability

    SciTech Connect (OSTI)

    Lange, C.H. [Stanford Univ., CA (United States)]

    1996-05-01

    Wind turbines subjected to highly irregular loadings due to wind, gravity, and gyroscopic effects are especially vulnerable to fatigue damage. The objective of this study is to develop and illustrate methods for the probabilistic analysis and design of fatigue-sensitive wind turbine components. A computer program (CYCLES) that estimates fatigue reliability of structural and mechanical components has been developed. A FORM/SORM analysis is used to compute failure probabilities and importance factors of the random variables. The limit state equation includes uncertainty in environmental loading, gross structural response, and local fatigue properties. Several techniques are shown to better study fatigue loads data. Common one-parameter models, such as the Rayleigh and exponential models, are shown to produce dramatically different estimates of load distributions and fatigue damage. Improved fits may be achieved with the two-parameter Weibull model. High b values require better modeling of relatively large stress ranges; this is effectively done by matching at least two moments (Weibull) and better by matching still higher moments. For this purpose, a new, four-moment "generalized Weibull" model is introduced. Load and resistance factor design (LRFD) methodology for design against fatigue is proposed and demonstrated using data from two horizontal-axis wind turbines. To estimate fatigue damage, wind turbine blade loads have been represented by their first three statistical moments across a range of wind conditions. Based on the moments µ₁…µ₃, new "quadratic Weibull" load distribution models are introduced. The fatigue reliability is found to be notably affected by the choice of load distribution model.
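Matching the first two moments with a two-parameter Weibull model, as the abstract describes, amounts to solving the coefficient-of-variation identity for the shape parameter. A sketch (the load moments are hypothetical, and bisection stands in for whatever root-finder the authors used):

```python
import math

def weibull_from_moments(mean, std, lo=0.1, hi=20.0, tol=1e-10):
    """Two-parameter Weibull (shape k, scale lam) matching a target mean
    and standard deviation, via the coefficient-of-variation identity
        CV^2 = Gamma(1 + 2/k) / Gamma(1 + 1/k)^2 - 1,
    solved for k by bisection (CV^2 is decreasing in k)."""
    target = (std / mean) ** 2

    def cv2(k):
        g1 = math.gamma(1.0 + 1.0 / k)
        g2 = math.gamma(1.0 + 2.0 / k)
        return g2 / g1 ** 2 - 1.0

    a, b = lo, hi
    while b - a > tol:
        mid = 0.5 * (a + b)
        if cv2(mid) > target:
            a = mid
        else:
            b = mid
    k = 0.5 * (a + b)
    lam = mean / math.gamma(1.0 + 1.0 / k)  # scale from the mean identity
    return k, lam

# Hypothetical blade-load moments: mean 10 MPa, std 5.2 MPa.
k, lam = weibull_from_moments(10.0, 5.2)
```

Matching a third or fourth moment, as in the "generalized" and "quadratic" Weibull models, extends the same idea with additional parameters.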

  7. Methodology for Scaling Fusion Power Plant Availability

    SciTech Connect (OSTI)

    Lester M. Waganer

    2011-01-04

    Normally in U.S. fusion power plant conceptual design studies, the development of the plant availability and the plant capital and operating costs makes the implicit assumption that the plant is a 10th-of-a-kind fusion power plant. This is in keeping with the DOE guidelines published in the 1970s in the PNL report [1], "Fusion Reactor Design Studies - Standard Accounts for Cost Estimates." This assumption specifically defines the level of industry and technology maturity and eliminates the need to define the necessary research and development efforts and costs to construct a one-of-a-kind or first-of-a-kind power plant. It also assumes all the "teething" problems have been solved and the plant can operate in the manner intended. The plant availability analysis assumes all maintenance actions have been refined and optimized by the operation of the prior nine or so plants. The actions are defined to be as quick and efficient as possible. This study presents a methodology for estimating the availability of the one-of-a-kind (one OAK) plant or first-of-a-kind (1st OAK) plant. To clarify, one of the OAK facilities might be the pilot plant or the demo plant that is prototypical of the next-generation power plant, but it is not a full-scale fusion power plant with all fully validated "mature" subsystems. The first OAK facility is truly the first commercial plant of a common design that represents the next-generation plant design. However, its subsystems, maintenance equipment, and procedures will continue to be refined to achieve the goals for the 10th OAK power plant.

  8. Algebraic Quarks from the Tangent Bundle: Methodology

    E-Print Network [OSTI]

    Jose G. Vargas

    2015-09-29

    In a previous paper, we developed a table of components of algebraic solutions of a system of equations generated by an inhomogeneous proper-value equation involving Kähler's total angular momentum. This table looks as if it were a representation of real-life quarks. We did not consider all options for solutions of the system of equations that gave rise to it. We shall not, therefore, claim that the present distribution of those components as a well-ordered table has strict physical relevance. It is, however, of great interest for the purpose of developing methodology, which may then be used for other solutions. We insert into our present table concepts that parallel those of the phenomenology of HEP: generations, color, flavor, isospin, etc. Breaking then loose from that distribution, we consider simpler alternatives for algebraic "quarks" of primary color (the mathematics speaks of each generation having its own primary color). We use them to show how a stoichiometric argument allows one to reach what appear to be esthetically appealing idempotent representations of particles other than electrons and positrons (Kähler already provided these half a century ago, with idempotents similar to our hypothetical quarks). We then use neutron decay to obtain formulas for also-hypothetical algebraic neutrinos and the $Z_{0}$, and use pair annihilation to obtain formulas for gamma particles. We go back to the system of equations and develop an alternative option. We solve the system but stop short of studying it along the present line. This will thus be an easy entry point to this theory for HEP physicists, who will be able to go faster and further.

  9. Full protection of superconducting qubit systems from coupling errors

    E-Print Network [OSTI]

    M. J. Storcz; J. Vala; K. R. Brown; J. Kempe; F. K. Wilhelm; K. B. Whaley

    2005-08-09

    Solid state qubits realized in superconducting circuits are potentially extremely scalable. However, strong decoherence may be transferred to the qubits by various elements of the circuits that couple individual qubits, particularly when coupling is implemented over long distances. We propose here an encoding that provides full protection against errors originating from these coupling elements, for a chain of superconducting qubits with a nearest neighbor anisotropic XY-interaction. The encoding is also seen to provide partial protection against errors deriving from general electronic noise.

  10. When soft controls get slippery: User interfaces and human error

    SciTech Connect (OSTI)

    Stubler, W.F.; O'Hara, J.M.

    1998-12-01

    Many types of products and systems that have traditionally featured physical control devices are now being designed with soft controls--input formats appearing on computer-based display devices and operated by a variety of input devices. A review of complex human-machine systems found that soft controls are particularly prone to some types of errors and may affect overall system performance and safety. This paper discusses the application of design approaches for reducing the likelihood of these errors and for enhancing usability, user satisfaction, and system performance and safety.

  11. Comment on "Optimum Quantum Error Recovery using Semidefinite Programming"

    E-Print Network [OSTI]

    M. Reimpell; R. F. Werner; K. Audenaert

    2006-06-07

    In a recent paper ([1]=quant-ph/0606035) it is shown how the optimal recovery operation in an error correction scheme can be considered as a semidefinite program. As a possible future improvement it is noted that still better error correction might be obtained by optimizing the encoding as well. In this note we present the result of such an improvement, specifically for the four-bit correction of an amplitude damping channel considered in [1]. We get a strict improvement for almost all values of the damping parameter. The method (and the computer code) is taken from our earlier study of such correction schemes (quant-ph/0307138).

  12. Error-prevention scheme with two pairs of qubits

    E-Print Network [OSTI]

    Chu, Shih-I; Yang, Chui-Ping; Han, Siyuan

    2002-09-04

    The system Hamiltonian is H_S = ε₀(σ_z^I + σ_z^II). The scheme uses two pairs of qubits and achieves error prevention through a decoherence-free subspace for collective phase errors in pairs; leakage out of the encoding space due to amplitude damping is also addressed. In addition, how to construct decoherence-free states for n qubits is discussed. DOI: 10.1103/Phys...

  13. Nevada National Security Site Integrated Groundwater Sampling Plan, Revision 0

    SciTech Connect (OSTI)

    Marutzky, Sam; Farnham, Irene

    2014-10-01

    The purpose of the Nevada National Security Site (NNSS) Integrated Sampling Plan (referred to herein as the Plan) is to provide a comprehensive, integrated approach for collecting and analyzing groundwater samples to meet the needs and objectives of the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Field Office (NNSA/NFO) Underground Test Area (UGTA) Activity. Implementation of this Plan will provide high-quality data required by the UGTA Activity for ensuring public protection in an efficient and cost-effective manner. The Plan is designed to ensure compliance with the UGTA Quality Assurance Plan (QAP). The Plan’s scope comprises sample collection and analysis requirements relevant to assessing the extent of groundwater contamination from underground nuclear testing. This Plan identifies locations to be sampled by corrective action unit (CAU) and location type, sampling frequencies, sample collection methodologies, and the constituents to be analyzed. In addition, the Plan defines data collection criteria such as well-purging requirements, detection levels, and accuracy requirements; identifies reporting and data management requirements; and provides a process to ensure coordination between NNSS groundwater sampling programs for sampling of interest to UGTA. This Plan does not address compliance with requirements for wells that supply the NNSS public water system or wells involved in a permitted activity.

  14. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    SciTech Connect (OSTI)

    Alonso, Juan J. [Stanford University; Iaccarino, Gianluca [Stanford University

    2013-08-25

    The following is the final report covering the entire period of this grant, June 1, 2011 - May 31, 2013, for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project, and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension, and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space, and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A solution to the long-time integration problem of spectral chaos approaches; 4. A rigorous methodology to account for aleatory and epistemic uncertainties, to emphasize the most important variables via dimension reduction and dimension-adaptive refinement, and to support fusion with experimental data using Bayesian inference; 5. The application of novel methodologies to time-dependent reliability studies in wind turbine applications, including a number of efforts relating to uncertainty quantification in vertical-axis wind turbine applications. In this report, we summarize all accomplishments in the project (during the time period specified), focusing on advances in UQ algorithms and deployment efforts to the wind turbine application area. Detailed publications in each of these areas have also been completed and are available from the respective conference proceedings and journals, as detailed in a later section.

  15. SU-E-T-170: Evaluation of Rotational Errors in Proton Therapy Planning of Lung Cancer

    SciTech Connect (OSTI)

    Rana, S; Zhao, L; Ramirez, E; Singh, H; Zheng, Y

    2014-06-01

    Purpose: To investigate the impact of rotational (roll, yaw, and pitch) errors in proton therapy planning of lung cancer. Methods: A lung cancer case treated at our center was used in this retrospective study. The original plan was generated using two proton fields (posterior-anterior and left-lateral) with the XiO treatment planning system (TPS) and delivered using a uniform scanning proton therapy system. First, the computed tomography (CT) set of the original lung treatment plan was re-sampled for rotational (roll, yaw, and pitch) angles ranging from −5° to +5°, with an increment of 2.5°. Second, 12 new proton plans were generated in XiO using the 12 re-sampled CT datasets. The same beam conditions, isocenter, and devices were used in the new treatment plans as in the original plan. All 12 new proton plans were compared with the original plan for planning target volume (PTV) coverage and maximum dose to spinal cord (cord Dmax). Results: PTV coverage was reduced in all 12 new proton plans when compared to that of the original plan. Specifically, PTV coverage was reduced by 0.03% to 1.22% for roll, by 0.05% to 1.14% for yaw, and by 0.10% to 3.22% for pitch errors. In comparison to the original plan, the cord Dmax in the new proton plans was reduced by 8.21% to 25.81% for +2.5° to +5° pitch, by 5.28% to 20.71% for +2.5° to +5° yaw, and by 5.28% to 14.47% for −2.5° to −5° roll. In contrast, cord Dmax was increased by 3.80% to 3.86% for −2.5° to −5° pitch, by 0.63% to 3.25% for −2.5° to −5° yaw, and by 3.75% to 4.54% for +2.5° to +5° roll. Conclusion: PTV coverage was reduced by up to 3.22% for a rotational error of 5°. The cord Dmax could increase or decrease depending on the direction of the rotational error, the beam angles, and the location of the lung tumor.
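The geometric effect of a rotational setup error scales with distance from the isocenter: a point at radius r is displaced by the chord 2·r·sin(θ/2). A sketch with standard axis-rotation matrices (the axis conventions and the 80 mm radius are assumptions for illustration, not from the abstract):

```python
import numpy as np

def rot_x(deg):  # roll about x (assumed convention)
    t = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def rot_y(deg):  # pitch about y (assumed convention)
    t = np.radians(deg)
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [0, 1, 0],
                     [-np.sin(t), 0, np.cos(t)]])

def rot_z(deg):  # yaw about z (assumed convention)
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0, 0, 1]])

# A point 80 mm from the isocenter, displaced by a 5 degree pitch error;
# the displacement equals the chord length 2 * r * sin(theta / 2).
p = np.array([80.0, 0.0, 0.0])          # mm, relative to isocenter
shift = np.linalg.norm(rot_y(5.0) @ p - p)
```

A roughly 7 mm displacement at 80 mm off-axis makes plausible why PTV coverage degrades fastest for the largest rotation angles.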

  16. Status of Activities to Implement a Sustainable System of MC&A Equipment and Methodological Support at Rosatom Facilities

    SciTech Connect (OSTI)

    J.D. Sanders

    2010-07-01

    Under the U.S.-Russian Material Protection, Control and Accounting (MPC&A) Program, the Material Control and Accounting Measurements (MCAM) Project has supported a joint U.S.-Russian effort to coordinate improvements of the Russian MC&A measurement system. These efforts have resulted in the development of a MC&A Equipment and Methodological Support (MEMS) Strategic Plan (SP), developed by the Russian MEM Working Group. The MEMS SP covers implementation of MC&A measurement equipment, as well as the development, attestation and implementation of measurement methodologies and reference materials at the facility and industry levels. This paper provides an overview of the activities conducted under the MEMS SP, as well as a status on current efforts to develop reference materials, implement destructive and nondestructive assay measurement methodologies, and implement sample exchange, scrap and holdup measurement programs across Russian nuclear facilities.

  17. Mapping Transmission Risk of Lassa Fever in West Africa: The Importance of Quality Control, Sampling Bias, and Error Weighting

    E-Print Network [OSTI]

    Peterson, A. Townsend; Moses, Lina M.; Bausch, Daniel G.

    2014-08-08

    Lassa fever is a disease that has been reported from sites across West Africa; it is caused by an arenavirus that is hosted by the rodent M. natalensis. Although it is confined to West Africa, and has been documented in detail in some well...

  18. Determining the Bayesian optimal sampling strategy in a hierarchical system.

    SciTech Connect (OSTI)

    Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre

    2010-09-01

    Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
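For a single component with pass/fail tests and a Beta prior on its reliability, the "Bayes risk" of a sampling strategy can be computed exactly by averaging the posterior variance over the beta-binomial predictive distribution of the test outcome. A toy sketch comparing two candidate components under a fixed test budget (the priors and budget are hypothetical, and this scalar setup only illustrates the idea, not the paper's full system-of-systems algorithm):

```python
import math

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def prior_variance(a, b):
    """Variance of a Beta(a, b) reliability prior."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

def expected_posterior_variance(a, b, n):
    """Bayes risk of spending n pass/fail tests on a component with a
    Beta(a, b) reliability prior: posterior variance averaged over the
    beta-binomial predictive distribution of the number of successes s."""
    risk = 0.0
    for s in range(n + 1):
        # predictive probability of s successes in n trials
        logp = (math.log(math.comb(n, s))
                + log_beta(a + s, b + n - s) - log_beta(a, b))
        ap, bp = a + s, b + n - s
        post_var = ap * bp / ((ap + bp) ** 2 * (ap + bp + 1))
        risk += math.exp(logp) * post_var
    return risk

# Two sub-systems with hypothetical priors; a budget of n = 5 tests goes
# to whichever component's uncertainty testing shrinks the most.
gain_a = prior_variance(8, 2) - expected_posterior_variance(8, 2, 5)
gain_b = prior_variance(2, 2) - expected_posterior_variance(2, 2, 5)
best = "A" if gain_a > gain_b else "B"
```

By the law of total variance the expected posterior variance equals the prior variance times (a+b)/(a+b+n), so the enumeration above can be checked in closed form.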

  19. Specified assurance level sampling procedure

    SciTech Connect (OSTI)

    Willner, O.

    1980-11-01

    In the nuclear industry design specifications for certain quality characteristics require that the final product be inspected by a sampling plan which can demonstrate product conformance to stated assurance levels. The Specified Assurance Level (SAL) Sampling Procedure has been developed to permit the direct selection of attribute sampling plans which can meet commonly used assurance levels. The SAL procedure contains sampling plans which yield the minimum sample size at stated assurance levels. The SAL procedure also provides sampling plans with acceptance numbers ranging from 0 to 10, thus, making available to the user a wide choice of plans all designed to comply with a stated assurance level.
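The minimum sample size for a stated assurance level follows from the binomial acceptance probability: find the smallest n for which a lot at the limiting defect fraction is accepted with probability at most 1 − assurance. A sketch (the 10% defect fraction and 90% assurance level are illustrative values, not taken from the SAL tables):

```python
from math import comb

def accept_prob(n, c, p):
    """Probability a lot with defect fraction p passes a plan that
    accepts on at most c defectives in a sample of n (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def min_sample_size(c, p, assurance):
    """Smallest n such that a lot as bad as p is accepted with
    probability at most (1 - assurance)."""
    n = c + 1
    while accept_prob(n, c, p) > 1.0 - assurance:
        n += 1
    return n

# 90% assurance that the defect fraction is below 10%, with acceptance
# number c = 0: the classic zero-failure result n = 22.
n0 = min_sample_size(0, 0.10, 0.90)
```

Raising the acceptance number c increases the required n but makes the plan more forgiving of isolated defectives, which is the trade-off the SAL procedure tabulates.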

  20. Contributions to Human Errors and Breaches in National Security Applications.

    SciTech Connect (OSTI)

    Pond, D. J.; Houghton, F. K.; Gilmore, W. E.

    2002-01-01

    Los Alamos National Laboratory has recognized that security infractions are often the consequence of various types of human errors (e.g., mistakes, lapses, slips) and/or breaches (i.e., deliberate deviations from policies or required procedures with no intention to bring about an adverse security consequence) and therefore has established an error reduction program based in part on the techniques used to mitigate hazard and accident potentials. One cornerstone of this program, definition of the situational and personal factors that increase the likelihood of employee errors and breaches, is detailed here. This information can be used retrospectively (as in accident investigations) to support and guide inquiries into security incidents or prospectively (as in hazard assessments) to guide efforts to reduce the likelihood of error/incident occurrence. Both approaches provide the foundation for targeted interventions to reduce the influence of these factors and for the formation of subsequent 'lessons learned.' Overall security is enhanced not only by reducing the inadvertent releases of classified information but also by reducing the security and safeguards resources devoted to them, thereby allowing these resources to be concentrated on acts of malevolence.

  1. Backward Error and Condition of Polynomial Eigenvalue Problems \\Lambda

    E-Print Network [OSTI]

    Higham, Nicholas J.

    1999. Abstract: We develop normwise backward errors and condition numbers for the polynomial eigenvalue problem, where A_l ∈ C^(n×n), l = 0:m, and we refer to P as a λ-matrix. Few direct numerical methods are available for solving the polynomial eigenvalue problem (PEP). When m … [Supported by Research Council grant GR/L76532.]

  2. DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR

    E-Print Network [OSTI]

    Sambridge, Malcolm

    DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR ANALYSIS USING … for the different solutions did not even overlap. Introduction: A discrimination and classification strategy … ambiguity and possible remanent magnetization; the recovered dipole moment is compared to a library …

  3. Rate Regions for Coherent and Noncoherent Multisource Network Error Correction

    E-Print Network [OSTI]

    Ho, Tracey

    {…,tho,effros}@caltech.edu; Joerg Kliewer, New Mexico State University, jkliewer@nmsu.edu; Elona Erez, Yale University. … a single error on a network link may lead to a corruption of many received packets at the destination nodes.

  4. Optimal Estimation from Relative Measurements: Error Scaling (Extended Abstract)

    E-Print Network [OSTI]

    Hespanha, João Pedro

    Optimal Estimation from Relative Measurements: Error Scaling (Extended Abstract). Prabir Barooah and João P. Hespanha. I. ESTIMATION FROM RELATIVE MEASUREMENTS: We consider the problem of estimating a number of variables from noisy relative measurements, where a "relative" measurement between x_u and x_v is available: ζ_uv = x_u − x_v + ε_{u,v} ∈ R^k, (u, v) ∈ E ⊆ V × V. (1)

  5. Low Degree Test with Polynomially Small Error Dana Moshkovitz

    E-Print Network [OSTI]

    Moshkovitz, Dana

    Low Degree Test with Polynomially Small Error Dana Moshkovitz October 19, 2014 Abstract A long line of work in Theoretical Computer Science shows that a function is close to a low degree polynomial iff it is close to a low degree polynomial locally. This is known as low degree testing

  6. Time reversal in thermoacoustic tomography - an error estimate

    E-Print Network [OSTI]

    Hristova, Yulia

    2008-01-01

    The time reversal method in thermoacoustic tomography is used for approximating the initial pressure inside a biological object using measurements of the pressure wave made outside the object. This article presents error estimates for the time reversal method in the cases of variable, non-trapping sound speeds.

  7. Error Control Based Model Reduction for Parameter Optimization of Elliptic

    E-Print Network [OSTI]

    Error Control Based Model Reduction for Parameter Optimization of Elliptic Homogenization Problems. … optimization of elliptic multiscale problems with macroscopic optimization functionals and microscopic material … of technical devices that rely on multiscale processes, such as fuel cells or batteries. As the solution …

  8. Improving STT-MRAM Density Through Multibit Error Correction

    E-Print Network [OSTI]

    Sapatnekar, Sachin

    Traditional methods enhance robustness at the cost of area/energy by using larger cell sizes to improve the thermal stability of the MTJ cells. This paper employs multibit error correction … with DRAM … to the read operation through TX. A key attribute of an MTJ is the notion of thermal stability (Fig. 2).

  9. Designing Automation to Reduce Operator Errors Nancy G. Leveson

    E-Print Network [OSTI]

    Leveson, Nancy

    Designing Automation to Reduce Operator Errors. Nancy G. Leveson, Computer Science and Engineering, University of Washington; Everett Palmer, NASA Ames Research Center. Introduction: Advanced automation has been … of mode-related problems [SW95]. After studying accidents and incidents in the new, highly automated …

  10. ARTIFICIAL INTELLIGENCE 223 A Geometric Approach to Error

    E-Print Network [OSTI]

    Richardson, David

    ARTIFICIAL INTELLIGENCE 223. A Geometric Approach to Error Detection and Recovery for Robot Motion …, and uncertainty in the geometric … *This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's Artificial Intelligence research …

  11. Control del Error para la Multirresolución Quincunx a la

    E-Print Network [OSTI]

    Amat, Sergio

    … Harten's nonlinear discrete multiresolution. In multiresolution algorithms one transforms … obtaining f̂_L, which should be close to f̄_L. Therefore, the algorithms must not be unstable. In this study we introduce error-control and stability algorithms. …

  12. Error Bounds from Extra Precise Iterative Refinement James Demmel

    E-Print Network [OSTI]

    Li, Xiaoye Sherry

    … now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way … a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] … [Supported by Cooperative Agreement No. ACI-9619020; NSF Grant Nos. ACI-9813362 and CCF-0444486; DOE Grant Nos. DE-FG03-…]

  13. Error rate and power dissipation in nano-logic devices 

    E-Print Network [OSTI]

    Kim, Jong Un

    2005-08-29

    of an error-free condition on temperature in single electron logic processors is derived. The size of the quantum dot of a single electron transistor is predicted when a single electron logic processor with a billion single electron transistors works without...

  14. Error rate and power dissipation in nano-logic devices 

    E-Print Network [OSTI]

    Kim, Jong Un

    2004-01-01

    -free condition on temperature in single electron logic processors is derived. The size of the quantum dot of a single electron transistor is predicted when a single electron logic processor with 10⁹ single electron transistors works without error at room...

  15. Urban Water Demand with Periodic Error Correction David R. Bell

    E-Print Network [OSTI]

    Griffin, Ronald

    Urban Water Demand with Periodic Error Correction, by David R. Bell and Ronald C. Griffin. February, Department of Agricultural Economics, Texas A&M University. Abstract: Monthly demand for publicly supplied … Econometric estimates of residential demand for water abound (Dalhuisen et al. 2003) …

  16. Errors-in-variables problems in transient electromagnetic mineral exploration

    E-Print Network [OSTI]

    Braslavsky, Julio H.

    Errors-in-variables problems in transient electromagnetic mineral exploration. K. Lau, J. H. … in transient electromagnetic mineral exploration. A specific sub-problem of interest in this area … geological surveys, diamond drilling, and airborne mineral exploration. Our interest here is with ground …

  17. Energy efficiency of error correction for wireless communication

    E-Print Network [OSTI]

    Havinga, Paul J.M.

    Error control is an important issue for mobile computing systems. This includes energy spent in the physical radio transmission … and Networking Conference 1999 [7]. … on the energy of transmission and the energy of redundancy computation. We will show that the computational cost …

  18. Error Control of Iterative Linear Solvers for Integrated Groundwater Models

    E-Print Network [OSTI]

    California at Davis, University of

    Error Control of Iterative Linear Solvers for Integrated Groundwater Models, by Matthew F. Dixon … for integrated groundwater models, which are implicitly coupled to another model, such as surface water models … in legacy groundwater modeling packages, resulting in overall simulation speedups as large as 7 …

  19. Estimating the error distribution function in nonparametric regression

    E-Print Network [OSTI]

    Mueller, Uschi

    Schick, Wolfgang Wefelmeyer. Summary: We construct an efficient estimator for the error distribution … estimator, influence function … Müller, Schick and Wefelmeyer (2004a). We refer also to the introduction of Müller, Schick and Wefelmeyer (2004b). Our proof is complicat…

  20. Automatic Error Elimination by Horizontal Code Transfer across Multiple Applications

    E-Print Network [OSTI]

    Polz, Martin

    Automatic Error Elimination by Horizontal Code Transfer across Multiple Applications. Stelios …, CSAIL, Cambridge, MA, USA. Abstract: We present Code Phage (CP), a system for automatically transferring … To the best of our knowledge, CP is the first system to automatically transfer code across multiple …

  1. Error field and magnetic diagnostic modeling for W7-X

    SciTech Connect (OSTI)

    Lazerson, Sam A.; Gates, David A.; NEILSON, GEORGE H.; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.

    2014-07-01

    The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady state (30 minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign as bootstrap current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.

  2. Development of an Expert System for Classification of Medical Errors

    E-Print Network [OSTI]

    Kopec, Danny

    A report published by the Institute of Medicine (IOM) indicated that between 44,000 and 98,000 unnecessary deaths per year occur in hospitals in the United States. There has been considerable speculation that these figures are either overestimated … what is of importance is that the number of deaths caused by such errors …

  3. The contour method cutting assumption: error minimization and correction

    SciTech Connect (OSTI)

    Prime, Michael B; Kastengren, Alan L

    2010-01-01

    The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented. The important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to impart a known profile of residual stresses.

  4. Selected CRC Polynomials Can Correct Errors and Thus Reduce Retransmission

    E-Print Network [OSTI]

    Mache, Jens

    In wireless sensor networks, minimizing communication is crucial to improve energy consumption and thus lifetime. Keywords: Error Correction, Reliability, Network Protocol, Low Power Consumption. I. INTRODUCTION: Error detection using Cyclic Redundancy Checks (CRCs) … instead of retransmitting the whole packet improves energy consumption and thus lifetime of wireless sensor networks.
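    The correction idea behind the abstract above can be sketched as a syndrome lookup: with a zero-initialised CRC the checksum is linear over GF(2), so the syndrome of a received packet equals the CRC of its error pattern, and for suitable polynomials every single-bit error has a distinct syndrome. This is a toy CRC-8 illustration with the ATM HEC polynomial (x^8 + x^2 + x + 1), not the paper's selected polynomials; function names are ours.

```python
def crc8(data: bytes, poly=0x07, init=0x00) -> int:
    """Bitwise MSB-first CRC-8. With init=0 and no final XOR, the CRC is
    linear: crc8(a XOR b) == crc8(a) XOR crc8(b)."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def syndrome_table(msg_len: int) -> dict:
    """Map syndrome -> flipped-bit index for every single-bit error in a
    msg_len-byte message (distinct for short messages with this polynomial)."""
    table = {}
    for i in range(msg_len * 8):
        e = bytearray(msg_len)
        e[i // 8] ^= 0x80 >> (i % 8)      # error pattern with only bit i set
        table[crc8(bytes(e))] = i
    return table

def correct_single_bit(received: bytes, rx_crc: int, table: dict):
    """Return the corrected message, or None if the error is uncorrectable."""
    syn = crc8(received) ^ rx_crc         # linearity: syn == crc8(error pattern)
    if syn == 0:
        return received                   # no error detected
    i = table.get(syn)
    if i is None:
        return None                       # multi-bit error, or CRC bits were hit
    fixed = bytearray(received)
    fixed[i // 8] ^= 0x80 >> (i % 8)      # flip the offending bit back
    return bytes(fixed)
```

    Correcting in place avoids a retransmission, which is the energy argument made in the entry above.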

  5. A Spline Algorithm for Modeling Cutting Errors Turning Centers

    E-Print Network [OSTI]

    Gilsinn, David E.

    … Bandy, Automated Production Technology Division, National Institute of Standards and Technology, 100 Bureau … are made up of features with profiles defined by arcs and lines. An error model for turned parts must take … In the case where there is a requirement of tangency between two features, such as a line tangent to an arc …

  6. 3 - DJ : sampling as design

    E-Print Network [OSTI]

    Patel, Sayjel Vijay

    2015-01-01

    3D Sampling is introduced as a new spatial craft that can be applied to architectural design, akin to how sampling is applied in the field of electronic music. Through the development of 3-DJ, a prototype design software, ...

  7. Radiochemical Analysis Methodology for uranium Depletion Measurements

    SciTech Connect (OSTI)

    Scatena-Wachel DE

    2007-01-09

    This report provides sufficient material for a test sponsor with little or no radiochemistry background to understand and follow physics irradiation test program execution. Most irradiation test programs employ similar techniques and the general details provided here can be applied to the analysis of other irradiated sample types. Aspects of program management directly affecting analysis quality are also provided. This report is not an in-depth treatise on the vast field of radiochemical analysis techniques and related topics such as quality control. Instrumental technology is a very fast growing field and dramatic improvements are made each year, thus the instrumentation described in this report is no longer cutting edge technology. Much of the background material is still applicable and useful for the analysis of older experiments and also for subcontractors who still retain the older instrumentation.

  8. Tracking granules at the Sun's surface and reconstructing velocity fields. II. Error analysis

    E-Print Network [OSTI]

    R. Tkaczuk; M. Rieutord; N. Meunier; T. Roudier

    2007-07-13

    The determination of horizontal velocity fields at the solar surface is crucial to understanding the dynamics and magnetism of the convection zone of the sun. These measurements can be done by tracking granules. Tracking granules from ground-based observations, however, suffers from the Earth's atmospheric turbulence, which induces image distortion. The focus of this paper is to evaluate the influence of this noise on the maps of velocity fields. We use the coherent structure tracking algorithm developed recently and apply it to two independent series of images that contain the same solar signal. We first show that a k-ω filtering of the time series of images is highly recommended as a pre-processing to decrease the noise, while, in contrast, using destretching should be avoided. We also demonstrate that the lifetime of granules has a strong influence on the error bars of velocities and that a threshold on the lifetime should be imposed to minimize errors. Finally, although solar flow patterns are easily recognizable and image quality is very good, it turns out that a time sampling of two images every 21 s is not frequent enough, since image distortion still pollutes velocity fields at a 30% level on the 2500 km scale, i.e. the scale on which granules start to behave like passive scalars. The coherent structure tracking algorithm is a useful tool for noise control on the measurement of surface horizontal solar velocity fields when at least two independent series are available.

  9. EMERGING MODALITIES FOR SOIL CARBON ANALYSIS: SAMPLING STATISTICS AND ECONOMICS WORKSHOP.

    SciTech Connect (OSTI)

    WIELOPOLSKI, L.

    2006-04-01

    The workshop's main objectives are (1) to present the emerging modalities for analyzing carbon in soil, (2) to assess their error propagation, (3) to recommend new protocols and sampling strategies for the new instrumentation, and (4) to compare the costs of the new methods with traditional chemical ones.

  10. Good Experimental Methodologies and Simulation in Autonomous Mobile Robotics

    E-Print Network [OSTI]

    Amigoni, Francesco

    Good Experimental Methodologies and Simulation in Autonomous Mobile Robotics. Francesco Amigoni and Viola Schiaffonati, Artificial Intelligence and Robotics Laboratory, Dipartimento di Elettronica e … to characterize analytically, as is often the case in autonomous mobile robotics. Although their importance …

  11. A Layered Architecture for Describing Information System Development Methodologies

    E-Print Network [OSTI]

    Han, Jun

    A Layered Architecture for Describing Information System Development Methodologies. P. M. Steele, J. … modelling techniques, life cycle models, notations and tools. Some ISDMs target particular parts of the life cycle. Some include project management and estimation techniques while others focus only …

  12. A Methodology for Estimating Interdomain Web Traffic Demand

    E-Print Network [OSTI]

    Maggs, Bruce M.

    A Methodology for Estimating Interdomain Web Traffic Demand. Anja Feldmann, Nils Kammenhuber … a (time-varying) interdomain HTTP traffic demand matrix pairing several hundred thousand blocks of client IP addresses … Keywords: Traffic demand, Interdomain, Estimation. 1. INTRODUCTION: The reliable estimation and prediction …

  13. Hydrogen Goal-Setting Methodologies Report to Congress

    Fuel Cell Technologies Publication and Product Library (EERE)

    DOE's Hydrogen Goal-Setting Methodologies Report to Congress summarizes the processes used to set Hydrogen Program goals and milestones. Published in August 2006, it fulfills the requirement under se

  14. Spent fuel management fee methodology and computer code user's manual.

    SciTech Connect (OSTI)

    Engel, R.L.; White, M.K.

    1982-01-01

    The methodology and computer model described here were developed to analyze the cash flows for the federal government taking title to and managing spent nuclear fuel. The methodology has been used by the US Department of Energy (DOE) to estimate the spent fuel disposal fee that will provide full cost recovery. Although the methodology was designed to analyze interim storage followed by spent fuel disposal, it could be used to calculate a fee for reprocessing spent fuel and disposing of the waste. The methodology consists of two phases. The first phase estimates government expenditures for spent fuel management. The second phase determines the fees that will result in revenues such that the government attains full cost recovery, assuming various revenue collection philosophies. These two phases are discussed in detail in subsequent sections of this report. Each of the two phases constitutes a computer module, called SPADE (SPent fuel Analysis and Disposal Economics) and FEAN (FEe ANalysis), respectively.

  15. Robotic Airship Trajectory Tracking Control Using a Backstepping Methodology

    E-Print Network [OSTI]

    Papadopoulos, Evangelos

    Robotic Airship Trajectory Tracking Control Using a Backstepping Methodology. Filoktimon Repoulias … a closed-loop trajectory tracking controller for an underactuated robotic airship having 6 degrees of freedom … and the controller corrects the vehicle's trajectory successfully too. I. INTRODUCTION: Robotic (autonomous) airships …

  16. Protein MAS NMR methodology and structural analysis of protein assemblies

    E-Print Network [OSTI]

    Bayro, Marvin J

    2010-01-01

    Methodological developments and applications of solid-state magic-angle spinning nuclear magnetic resonance (MAS NMR) spectroscopy, with particular emphasis on the analysis of protein structure, are described in this thesis. ...

  17. The object as a vessel for vitality : a design methodology

    E-Print Network [OSTI]

    Schwarz, Allan David

    1988-01-01

    "To Build, form blocks, like a ladder into the sky, into the Earth, to bind the elements, Water and Fire." Like Wittgenstein's, this is an attempt to define a personal methodology, which when documented and left behind might ...

  18. Design Methodology for a Very High Frequency Resonant Boost Converter

    E-Print Network [OSTI]

    Burkhart, Justin M.

    This paper introduces a design methodology for a resonant boost converter topology that is suitable for operation at very high frequencies. The topology we examine features a low parts count and fast transient response, ...

  19. Design methodology for a very high frequency resonant boost converter

    E-Print Network [OSTI]

    Burkhart, Justin M.

    This document introduces a design methodology for a resonant boost converter topology that is suitable for operation at very high frequencies. The topology we examine features a low parts count and fast transient response ...

  20. A Methodology for Evaluating Liquefaction Susceptibility in Shallow Sandy Slopes

    E-Print Network [OSTI]

    Buscarnera, Giuseppe

    This paper illustrates a modeling approach for evaluating the liquefaction susceptibility of shallow sandy slopes. The methodology is based on a theoretical framework for capturing undrained bifurcation in saturated granular ...

  1. A Quasi-Dynamic HVAC and Building Simulation Methodology 

    E-Print Network [OSTI]

    Davis, Clinton Paul

    2012-07-16

    This thesis introduces a quasi-dynamic building simulation methodology which complements existing building simulators by allowing transient models of HVAC (heating, ventilating and air-conditioning) systems to be created in an analogous way...

  2. A methodology to assess cost implications of automotive customization

    E-Print Network [OSTI]

    Fournier, Laëtitia

    2005-01-01

    This thesis focuses on determining the cost of customization for different components or groups of components of a car. It offers a methodology to estimate the manufacturing cost of a complex system such as a car. This ...

  3. COMPRO: A Methodological Approach for Business Process Contextualisation

    E-Print Network [OSTI]

    … perspective for business process modelling. Business processes are strongly influenced by context … COMPRO is a methodological approach for business process contextualisation. Starting from an initial business process model … Keywords: business process modelling, context-awareness, business process contextualisation, correctness.

  4. DOE 2009 Geothermal Risk Analysis: Methodology and Results (Presentation)

    SciTech Connect (OSTI)

    Young, K. R.; Augustine, C.; Anderson, A.

    2010-02-01

    This presentation summarizes the methodology and results for a probabilistic risk analysis of research, development, and demonstration work-primarily for enhanced geothermal systems (EGS)-sponsored by the U.S. Department of Energy Geothermal Technologies Program.

  5. Average System Cost Methodology : Administrator's Record of Decision.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    1984-06-01

    Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, where retail rate orders of regulatory agencies provide primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of separation procedures for subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and a requirement that each exchanging utility file under the new methodology within 20 days of implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)

  6. Transmission Cost Allocation Methodologies for Regional Transmission Organizations

    SciTech Connect (OSTI)

    Fink, S.; Rogers, J.; Porter, K.

    2010-07-01

    This report describes transmission cost allocation methodologies for transmission projects developed to maintain or enhance reliability, to interconnect new generators, or to access new resources and enhance competitive bulk power markets, otherwise known as economic transmission projects.

  7. Numerical study of the effect of normalised window size, sampling frequency, and noise level on short time Fourier transform analysis

    SciTech Connect (OSTI)

    Ota, T. A.

    2013-10-15

    Photonic Doppler velocimetry, also known as heterodyne velocimetry, is a widely used optical technique that requires the analysis of frequency modulated signals. This paper describes an investigation into the errors of short time Fourier transform analysis. The number of variables requiring investigation was reduced by means of an equivalence principle. Error predictions, as the number of cycles, samples per cycle, noise level, and window type were varied, are presented. The results were found to be in good agreement with analytical models.
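    The kind of analysis under investigation can be sketched as follows: a noisy beat signal is split into overlapping windows, each window is Fourier transformed, and the spectral peak gives the frequency estimate, whose error is bounded by the bin width fs/win_len. This is a minimal illustration with assumed parameter values, not the paper's test procedure or equivalence principle.

```python
import numpy as np

fs = 1.0e6            # sample rate (Hz), assumed for illustration
f0 = 50.0e3           # true beat frequency (Hz)
t = np.arange(4096) / fs
noise = 0.1 * np.random.default_rng(0).standard_normal(t.size)
signal = np.cos(2 * np.pi * f0 * t) + noise

win_len = 256                         # samples per analysis window
window = np.hanning(win_len)          # window type is one of the studied variables
hop = win_len // 2
freqs = np.fft.rfftfreq(win_len, d=1 / fs)

# peak-picking estimate of the beat frequency in each window
estimates = []
for start in range(0, t.size - win_len, hop):
    seg = signal[start:start + win_len] * window
    estimates.append(freqs[np.argmax(np.abs(np.fft.rfft(seg)))])

err = np.mean(estimates) - f0         # stays within one frequency bin, fs/win_len
```

    Varying `win_len` (cycles per window), `fs` (samples per cycle), and the noise amplitude reproduces the trade-offs the paper quantifies.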

  8. Technical report on LWR design decision methodology. Phase I

    SciTech Connect (OSTI)

    None

    1980-03-01

    Energy Incorporated (EI) was selected by Sandia Laboratories to develop and test an LWR design decision methodology. Contract Number 42-4229 provided funding for Phase I of this work. This technical report on LWR design decision methodology documents the activities performed under that contract. Phase I was a short-term effort to thoroughly review the current LWR design decision process to assure complete understanding of current practices and to establish a well defined interface for development of initial quantitative design guidelines.

  9. SU-E-T-374: Sensitivity of ArcCHECK to Tomotherapy Delivery Errors: Dependence On Analysis Technique

    SciTech Connect (OSTI)

    Templeton, A; Chu, J; Turian, J

    2014-06-01

    Purpose: ArcCHECK (Sun Nuclear) is a cylindrical diode array detector allowing three-dimensional sampling of dose, particularly useful in treatment delivery QA of helical tomotherapy. Gamma passing rate is a common method of analyzing results from diode arrays, but is less intuitive in 3D with complex measured dose distributions. This study explores the sensitivity of gamma passing rate to choice of analysis technique in the context of its ability to detect errors introduced into the treatment delivery. Methods: Nine treatment plans were altered to introduce errors in: couch speed, gantry/sinogram synchronization, and leaf open time. Each plan was then delivered to ArcCHECK in each of the following arrangements: “offset,” when the high dose area of the plan is delivered to the side of the phantom so that some diode measurements will be on the order of the prescription dose, and “centered,” when the high dose is in the center of the phantom where an ion chamber measurement may be acquired, but the diode measurements are in the mid to low-dose region at the periphery of the plan. Gamma analysis was performed at 3%/3mm tolerance and both global and local gamma criteria. The threshold of detectability for each error type was calculated as the magnitude at which the gamma passing rate drops below 90%. Results: Global gamma criteria reduced the sensitivity in the offset arrangement (from 2.3% to 4.5%, 8° to 21°, and 3 ms to 8 ms for couch-speed decrease, gantry-error, and leaf-opening increase, respectively). The centered arrangement detected changes at 3.3%, 5°, and 4 ms with smaller variation. Conclusion: Each arrangement has advantages; offsetting allows more sampling of the higher dose region, while centering allows an ion chamber measurement and potentially better use of tools such as 3DVH, at the cost of positioning more of the diodes in the sometimes noisy mid-dose region.
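    The gamma metric used throughout this abstract combines a dose-difference tolerance with a distance-to-agreement tolerance, and the global/local distinction is just a choice of normalisation. The following 1-D sketch is our own simplification for illustration; the actual ArcCHECK analysis works on the 3-D diode geometry with vendor software.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing, dose_tol=0.03, dist_tol=3.0, local=False):
    """Simplified 1-D gamma analysis at (dose_tol, dist_tol) = (3%, 3 mm).

    ref/meas: dose profiles on a common grid, `spacing` mm between points.
    Global criterion normalises dose differences to max(ref); the local
    criterion normalises to the reference dose at each point (avoid zeros).
    """
    x = np.arange(len(ref)) * spacing
    norm = ref if local else np.full_like(ref, ref.max())
    passed = []
    for i, d_m in enumerate(meas):
        dd = (d_m - ref) / (norm * dose_tol)       # dose-difference term
        dist = (x[i] - x) / dist_tol               # distance-to-agreement term
        gamma = np.sqrt(dd**2 + dist**2).min()     # best agreement over positions
        passed.append(gamma <= 1.0)
    return np.mean(passed)
```

    A uniform 2% dose error passes everywhere at 3%/3mm, while a 20%-of-maximum offset fails; local normalisation tightens the criterion in low-dose regions, which is why it restored sensitivity in the offset arrangement.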

  10. On the Fourier Transform Approach to Quantum Error Control

    E-Print Network [OSTI]

    Hari Dilip Kumar

    2012-08-24

    Quantum codes are subspaces of the state space of a quantum system that are used to protect quantum information. Some common classes of quantum codes are stabilizer (or additive) codes, non-stabilizer (or non-additive) codes obtained from stabilizer codes, and Clifford codes. These are analyzed in a framework using the Fourier transform on finite groups, the finite group in question being a subgroup of the quantum error group considered. All the classes of codes that can be obtained in this framework are explored, including codes more general than Clifford codes. The error detection properties of one of these more general classes ("direct sums of translates of Clifford codes") are characterized. Example codes are constructed, and computer code search results are presented and analysed.

  11. MPI Runtime Error Detection with MUST: Advances in Deadlock Detection

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; de Supinski, Bronis R.; Müller, Matthias S.

    2013-01-01

    The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
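    The graph-based approach described above ultimately reduces to finding cycles in a wait-for graph of blocked processes. The sketch below is a much-simplified version with plain dependencies only, unlike MUST's richer model that handles MPI wildcard receives and collective semantics; the function name is ours.

```python
def has_deadlock(wait_for):
    """Depth-first search for a cycle in a wait-for graph, given as
    {process: set of processes it is blocked on}. A cycle means the
    involved processes can never progress: deadlock."""
    nodes = set(wait_for) | {q for deps in wait_for.values() for q in deps}
    WHITE, GRAY, BLACK = 0, 1, 2            # unvisited / on DFS stack / finished
    color = dict.fromkeys(nodes, WHITE)

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color[q] == GRAY or (color[q] == WHITE and visit(q)):
                return True                 # back edge found: cycle
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in nodes)

# ranks 0 and 1 each blocked in a receive on the other: classic deadlock
print(has_deadlock({0: {1}, 1: {0}}))       # -> True
print(has_deadlock({0: {1}, 1: set()}))     # -> False
```

    Each DFS runs in time linear in the graph size, which hints at why per-operation analysis need not cost O(p) once the graph is maintained incrementally.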

  12. Comparison of Wind Power and Load Forecasting Error Distributions: Preprint

    SciTech Connect (OSTI)

    Hodge, B. M.; Florita, A.; Orwig, K.; Lew, D.; Milligan, M.

    2012-07-01

    The introduction of large amounts of variable and uncertain power sources, such as wind power, into the electricity grid presents a number of challenges for system operations. One issue involves the uncertainty associated with scheduling power that wind will supply in future timeframes. However, this is not an entirely new challenge; load is also variable and uncertain, and is strongly influenced by weather patterns. In this work we make a comparison between the day-ahead forecasting errors encountered in wind power forecasting and load forecasting. The study examines the distribution of errors from operational forecasting systems in two different Independent System Operator (ISO) regions for both wind power and load forecasts at the day-ahead timeframe. The day-ahead timescale is critical in power system operations because it serves the unit commitment function for slow-starting conventional generators.
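    A distribution comparison of the sort described above can be sketched with summary statistics. Here synthetic stand-ins are used (Gaussian for load-like errors, heavier-tailed Laplace for wind-like errors); these are illustrative distributions of our choosing, not the operational ISO data analyzed in the paper.

```python
import numpy as np

def error_stats(errors):
    """Mean, standard deviation, and excess kurtosis of a forecast-error
    sample (excess kurtosis is 0 for a normal distribution; positive
    values indicate heavier tails)."""
    e = np.asarray(errors, dtype=float)
    mu, sigma = e.mean(), e.std()
    kurt = ((e - mu) ** 4).mean() / sigma ** 4 - 3.0
    return mu, sigma, kurt

rng = np.random.default_rng(1)
load_like = rng.normal(0.0, 0.02, 50_000)    # near-Gaussian day-ahead errors
wind_like = rng.laplace(0.0, 0.02, 50_000)   # heavier-tailed day-ahead errors

_, _, k_load = error_stats(load_like)
_, _, k_wind = error_stats(wind_like)
```

    Heavier tails in the wind-error distribution mean extreme forecast misses are more frequent than a Gaussian assumption would predict, which matters when sizing reserves for day-ahead unit commitment.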

  13. Method and system for reducing errors in vehicle weighing systems

    DOE Patents [OSTI]

    Hively, Lee M. (Philadelphia, TN); Abercrombie, Robert K. (Knoxville, TN)

    2010-08-24

    A method and system (10, 23) for determining vehicle weight to a precision of <0.1%, uses a plurality of weight sensing elements (23), a computer (10) for reading in weighing data for a vehicle (25) and produces a dataset representing the total weight of a vehicle via programming (40-53) that is executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing, (b) by determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation from the weighing data; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% in the vehicle weight.
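    The patent's steps (a)-(c), characterising the dominant oscillatory mode in the weighing data and removing it, can be sketched as a least-squares fit of a constant plus a sinusoid over a grid of candidate mode frequencies. This is a toy illustration with made-up numbers, not the patented algorithm.

```python
import numpy as np

fs = 200.0                       # sampling rate (Hz), assumed
t = np.arange(180) / fs          # 0.9 s record: a non-integer number of cycles
true_weight = 3250.0             # kg, assumed
signal = true_weight + 40.0 * np.sin(2 * np.pi * 2.5 * t + 0.7)  # 2.5 Hz mode

naive = signal.mean()            # biased: the partial cycle does not average out

# fit weight + sinusoid for each candidate mode frequency, keep the best fit
best_resid, weight = np.inf, naive
for f in np.arange(0.5, 10.0, 0.01):
    A = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    resid = np.sum((A @ coef - signal) ** 2)
    if resid < best_resid:
        best_resid, weight = resid, coef[0]   # constant term = weight estimate
```

    With the oscillation fitted and removed the constant term recovers the true weight to well under 0.1%, while the naive average of the same record is off by several kilograms.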

  14. Real-time quadrupole mass spectrometer analysis of gas in borehole fluid samples acquired using the U-Tube sampling methodology

    E-Print Network [OSTI]

    Freifeld, Barry M.; Trautz, Robert C.

    2006-01-01

    … Vacuum Pump Removed for Maintenance 06-Oct-04 6:00 … the vacuum pump was taken off-line for maintenance, with …

  15. Real-time quadrupole mass spectrometer analysis of gas in borehole fluid samples acquired using the U-Tube sampling methodology

    E-Print Network [OSTI]

    Freifeld, Barry M.; Trautz, Robert C.

    2006-01-01

    … industries have long used tagged drilling fluids as an indicator of drilling fluid contamination [Withjack and …] … Non-native fluids introduced by drilling and completion …

  16. Real-time quadrupole mass spectrometer analysis of gas in borehole fluid samples acquired using the U-Tube sampling methodology

    E-Print Network [OSTI]

    Freifeld, Barry M.; Trautz, Robert C.

    2006-01-01

    drilling mud tracer in Beaufort exploration wells: Petroleum Society of Canadian Institute of Mining and Metallurgy Annual Meeting, 25th, Calgary, 1974, Paper

  17. Runtime Detection of C-Style Errors in UPC Code

    SciTech Connect (OSTI)

    Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D

    2011-09-29

    Unified Parallel C (UPC) extends the C programming language (ISO C99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions for each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.

  18. On the efficiency of nondegenerate quantum error correction codes for Pauli channels

    E-Print Network [OSTI]

    Gunnar Bjork; Jonas Almlof; Isabel Sainz

    2009-05-19

    We examine the efficiency of pure, nondegenerate quantum error-correction codes for Pauli channels. Specifically, we investigate whether correction of multiple errors in a block is more efficient than using a code that corrects only one error per block. Block coding with multiple-error correction cannot increase the efficiency when the qubit error probability is below a certain value and the code size is fixed. More surprisingly, existing multiple-error correction codes with a code length equal to or less than 256 qubits have lower efficiency than the optimal single-error correcting codes for any value of the qubit error probability. We also investigate how efficient various proposed nondegenerate single-error correcting codes are compared to the limit set by the code redundancy and by the necessary conditions for hypothetically existing nondegenerate codes. We find that existing codes are close to optimal.
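The comparison rests on block success probabilities. As a hedged illustration (standard binomial arithmetic only; the paper's efficiency measure also accounts for code redundancy): for a t-error-correcting code on m qubits with independent qubit error probability p, the block decodes correctly whenever at most t errors occur:

```python
from math import comb

def block_success(m, t, p):
    """Probability of at most t qubit errors in an m-qubit block, i.e. the
    chance a t-error-correcting code decodes the block correctly (assuming
    independent errors and perfect decoding up to t errors)."""
    return sum(comb(m, k) * p ** k * (1 - p) ** (m - k) for k in range(t + 1))

# Illustrative comparison at a qubit error probability of 1e-3:
p = 1e-3
unencoded = 1 - p                    # a bare qubit survives with prob 1 - p
five_qubit = block_success(5, 1, p)  # single-error-correcting [[5,1,3]] block
```

At small p the single-error-correcting block suppresses the failure probability from order p to order p², which is the kind of trade-off the paper weighs against redundancy cost.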

  19. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    SciTech Connect (OSTI)

    Katrina M. Groth; Curtis L. Smith; Laura P. Swiler

    2014-08-01

    In the past several years, several international organizations have begun to collect data on human performance in nuclear power plant simulators. The data collected provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this paper, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
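A minimal sketch of such a Bayesian refinement, assuming a conjugate Beta prior centered on the HRA-assigned HEP (the paper's actual methodology is more elaborate; `update_hep` and its parameters are our own illustrative names):

```python
def update_hep(prior_hep, prior_strength, failures, trials):
    """Conjugate Beta-Binomial update of a human error probability (HEP).
    prior_hep: HEP assigned by an existing HRA method (e.g. SPAR-H);
    prior_strength: pseudo-observations expressing confidence in that prior;
    failures, trials: observed simulator performance data.
    Returns the posterior mean HEP. Illustrative sketch, not the paper's model."""
    alpha = prior_hep * prior_strength + failures
    beta = (1 - prior_hep) * prior_strength + (trials - failures)
    return alpha / (alpha + beta)
```

For example, a prior HEP of 0.01 held with strength 50, updated with 1 failure in 20 simulator trials, shifts the posterior mean HEP upward while remaining anchored by the prior; with no data the prior mean is returned unchanged.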

  20. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    SciTech Connect (OSTI)

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the
    • number of samples required to achieve a specified confidence in characterization and clearance decisions
    • confidence in making characterization and clearance decisions for a specified number of samples
    for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations:
    1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0
    2. qualitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0
    3. quantitative data (e.g., contaminant concentrations expressed as CFU/cm2) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0
    4. quantitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0.
    For Situation 2, the hotspot sampling approach provides for stating with Z% confidence that a hotspot of specified shape and size with detectable contamination will be found. Also for Situation 2, the CJR approach provides for stating with X% confidence that at least Y% of the decision area does not contain detectable contamination. Forms of these statements for the other three situations are discussed in Section 2.2. Statistical methods that account for FNR > 0 currently exist only for the hotspot sampling approach with qualitative data (or quantitative data converted to qualitative data). This report documents the current status of methods and formulas for the hotspot and CJR sampling approaches. Limitations of these methods are identified. Extensions of the methods that are applicable when FNR = 0 to account for FNR > 0, or to address other limitations, will be documented in future revisions of this report if future funding supports the development of such extensions. For quantitative data, this report also presents statistical methods and formulas for
    1. quantifying the uncertainty in measured sample results
    2. estimating the true surface concentration corresponding to a surface sample
    3. quantifying the uncertainty of the estimate of the true surface concentration.
    All of the methods and formulas discussed in the report were applied to example situations to illustrate application of the methods and interpretation of the results.
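The Z% hotspot statement and the X%/Y% clearance statement have standard closed forms when FNR = 0 and sampling is purely random. A sketch using those textbook formulas (not necessarily the report's exact equations, which also handle judgment samples):

```python
import math

def hotspot_n(conf, hotspot_fraction):
    """Random samples needed for `conf` confidence that at least one sample
    lands in a hotspot covering `hotspot_fraction` of the area (FNR = 0)."""
    return math.ceil(math.log(1 - conf) / math.log(1 - hotspot_fraction))

def clearance_n(x_conf, y_fraction):
    """Random samples, all with negative results, needed to state with
    `x_conf` confidence that at least `y_fraction` of the decision area is
    free of detectable contamination (FNR = 0)."""
    return math.ceil(math.log(1 - x_conf) / math.log(y_fraction))
```

For example, 299 random samples give 95% confidence of hitting a hotspot covering 1% of the area, and the same count supports a 95%-confidence statement that at least 99% of the area is clean.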

  1. Homeowner Soil Sample Information Form 

    E-Print Network [OSTI]

    Provin, Tony

    2007-04-11

    THE TEXAS A&M UNIVERSITY SYSTEM Soil, Water and Forage Testing Laboratory Urban and Homeowner Soil Sample Information Form See sampling procedures and mailing instructions on the back of this form. (PLEASE DO NOT SEND CASH) SU07 E-444... (7-07) Results will be mailed to this address ONLY Address City Phone County where sampled Name Laboratory # (For Lab Use Only) State Zip Payment (DO NOT SEND CASH). Amount Paid $ SUBMITTED BY: Check Money Order Make Checks Payable to: Soil...

  2. Acceptance sampling using judgmental and randomly selected samples

    SciTech Connect (OSTI)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with a different level of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
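The simpler Bayesian formulation that the model generalizes can be written down directly: with a uniform Beta(1,1) prior on the defect rate and n randomly selected samples, all acceptable, the posterior is Beta(1, n+1). A sketch (our own illustration of that special case, not the paper's two-group model):

```python
def prob_mostly_acceptable(n_clean, d):
    """Posterior P(defect rate <= d) after n_clean randomly selected samples,
    all found acceptable, under a uniform Beta(1,1) prior on the defect rate.
    The posterior is Beta(1, n_clean + 1), whose CDF is 1 - (1-d)**(n_clean+1)."""
    return 1 - (1 - d) ** (n_clean + 1)
```

For instance, 59 all-acceptable random samples already give better than 95% posterior confidence that fewer than 5% of items are unacceptable, and the confidence grows monotonically with additional clean samples.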

  3. Sample Residential Program Term Sheet

    Broader source: Energy.gov [DOE]

    A sample for defining and elaborating on the specifics of a clean energy loan program. Author: U.S. Department of Energy

  4. Adaptive Sampling for Environmental Robotics

    E-Print Network [OSTI]

    Mohammad Rahimi; Richard Pon; Deborah Estrin; William J. Kaiser; Mani Srivastava; Gaurav S. Sukhatme

    2003-01-01

    … 186, 2003. S. Thrun, “Robotics Mapping: A survey” … technique to environmental robotics applications including … Sampling for Environmental Robotics, Mohammad Rahimi …

  5. Type I error and power of the mean and covariance structure confirmatory factor analysis for differential item functioning detection: Methodological issues and resolutions

    E-Print Network [OSTI]

    Lee, Jaehoon

    2009-01-01

    or not a given level of measurement equivalence holds, different scaling methods can lead to different conclusions when a researcher locates DIF in a scale. This dissertation evaluates the MACS analysis for DIF detection by means of a Monte Carlo simulation...

  6. NSTP 2002-2 Methodology for Final Hazard Categorization for Nuclear...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Methodology for Final Hazard Categorization for Nuclear Facilities from Category 3 to Radiological (111302).

  7. Modeling the Capacity and Emissions Impacts of Reduced Electricity Demand. Part 1. Methodology and Preliminary Results.

    E-Print Network [OSTI]

    Coughlin, Katie

    2013-01-01

    … Impacts of Reduced Electricity Demand. Part 1. Methodology … Figure 3: Commercial electricity demand with and without the …

  8. Environmental surveillance master sampling schedule

    SciTech Connect (OSTI)

    Bisping, L.E.

    1995-02-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the U.S. Department of Energy (DOE). This document contains the planned 1994 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP), Drinking Water Project, and Ground-Water Surveillance Project. Samples are routinely collected for the SESP and analyzed to determine the quality of air, surface water, soil, sediment, wildlife, vegetation, foodstuffs, and farm products at the Hanford Site and surrounding communities. The responsibility for monitoring onsite drinking water falls outside the scope of the SESP. PNL conducts the drinking water monitoring project concurrent with the SESP to promote efficiency and consistency, utilize expertise developed over the years, and reduce costs associated with management, procedure development, data management, quality control, and reporting. The ground-water sampling schedule identifies ground-water sampling events used by PNL for environmental surveillance of the Hanford Site. Sampling is indicated as annual, semi-annual, quarterly, or monthly in the sampling schedule. Some samples are collected and analyzed as part of ground-water monitoring and characterization programs at Hanford (e.g., Resource Conservation and Recovery Act (RCRA), Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), or Operational). The number of samples planned by other programs is identified in the sampling schedule by a number in the analysis column and a project designation in the Cosample column. Well sampling events may be merged to avoid redundancy in cases where sampling is planned by both environmental surveillance and another program.

  9. Hazard Sampling Dialog General Layout

    E-Print Network [OSTI]

    Zhang, Tao

    1 Hazard Sampling Dialog General Layout The dialog's purpose is to display information about the hazardous material being sampled by the UGV so either the system or the UV specialist can identify the risk level of the hazard. The dialog is associated with the hazmat reading icons (Table 1). Components

  10. Database Sampling with Functional Dependencies

    E-Print Network [OSTI]

    Riera, Jesús Bisbal

    Database Sampling with Functional Dependencies, Jesús Bisbal, Jane Grimson, Department of Computer … there is a need to prototype the database which the applications will use when in operation. A prototype database can be built by sampling data from an existing database. Including relevant semantic information when …

  11. 200 area TEDF sample schedule

    SciTech Connect (OSTI)

    Brown, M.J.

    1995-03-22

    This document summarizes the sampling criteria associated with the 200 Area Treatment Effluent Facility (TEDF) that are needed to comply with the requirements of Washington State Discharge Permit No. WA ST 4502 and good engineering practices at the generator streams that feed into TEDF. In addition, this document identifies the responsible parties for both sampling and data transference.

  12. Sample push-out fixture

    DOE Patents [OSTI]

    Biernat, John L. (Scotia, NY)

    2002-11-05

    This invention generally relates to the remote removal of pelletized samples from cylindrical containment capsules. V-blocks are used to receive the samples and provide guidance to push-out rods. Stainless steel liners fit into the v-channels on the v-blocks, permitting them to be remotely removed and replaced or cleaned to prevent cross-contamination between capsules and samples. A capsule holder securely holds the capsule while allowing manual up/down and in/out movement to align each sample hole with the v-blocks. Both end sections contain identical v-blocks: one that guides the drive-out screw and rods or manual push-out rods, and the other to receive the samples as they are driven out of the capsule.

  13. Analysis of circuit imperfections in BosonSampling

    E-Print Network [OSTI]

    Anthony Leverrier; Raúl García-Patrón

    2014-11-05

    BosonSampling is a problem where a quantum computer offers a provable speedup over classical computers. Its main feature is that it can be solved with current linear optics technology, without the need for a full quantum computer. In this work, we investigate whether an experimentally realistic BosonSampler can really solve BosonSampling without any fault-tolerance mechanism. More precisely, we study how the unavoidable errors linked to an imperfect calibration of the optical elements affect the final result of the computation. We show that the fidelity of each optical element must be at least $1 - O(1/n^2)$, where $n$ refers to the number of single photons in the scheme. Such a requirement seems to be achievable with state-of-the-art equipment.

  14. A CHARACTERISTIC GALERKIN METHOD WITH ADAPTIVE ERROR CONTROL FOR THE CONTINUOUS CASTING PROBLEM

    E-Print Network [OSTI]

    Nochetto, Ricardo H.

    A CHARACTERISTIC GALERKIN METHOD WITH ADAPTIVE ERROR CONTROL FOR THE CONTINUOUS CASTING PROBLEM … The continuous casting problem is a convection-dominated nonlinearly degenerate diffusion problem. It is discretized … adaptive method. Keywords: a posteriori error estimates, continuous casting, method of characteristics

  15. Simulations of error in quantum adiabatic computations of random 2-SAT instances

    E-Print Network [OSTI]

    Gill, Jay S. (Jay Singh)

    2006-01-01

    This thesis presents a series of simulations of quantum computations using the adiabatic algorithm. The goal is to explore the effect of error, using a perturbative approach that models 1-local errors to the Hamiltonian ...

  16. Design techniques for graph-based error-correcting codes and their applications 

    E-Print Network [OSTI]

    Lan, Ching Fu

    2006-04-12

    … error-correcting (channel) coding. The main idea of error-correcting codes is to add redundancy to the information to be transmitted so that the receiver can explore the correlation between transmitted information and redundancy and correct or detect errors caused...

  17. Shared dosimetry error in epidemiological dose-response analyses

    SciTech Connect (OSTI)

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo

    2015-03-23

    Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e., the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
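The "mean dose treated as if it were true dose" estimator described above is easy to state in code; a minimal sketch (illustrative only; the function names are ours, and the full score/information-matrix adjustment is not shown):

```python
import statistics

def mean_doses(realizations):
    """Average each subject's dose over the Monte Carlo dose realizations:
    realizations[i][j] is subject j's dose under realization i. Returns the
    per-subject mean dose, used in place of the unknown true dose."""
    return [statistics.mean(subject) for subject in zip(*realizations)]

def err_rate(background, beta, dose):
    """Linear excess relative risk model: rate = background * (1 + beta * dose),
    with beta the ERR slope per unit dose."""
    return background * (1 + beta * dose)
```

With two realizations over two subjects, each subject's mean dose is just the average of that subject's column, and the ERR model then scales the background rate linearly in that mean dose.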

  18. Shared dosimetry error in epidemiological dose-response analyses

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo

    2015-03-23

    Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e., the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.

  19. Sampling Report for August 15, 2014 WIPP Samples

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    and pulley system was constructed to move a camera for documentation and close-up pictures. The sampling device is located at the end of the boom. (Note, this picture is from...

  20. Error-field penetration in reversed magnetic shear configurations

    SciTech Connect (OSTI)

    Wang, H. H.; Wang, Z. X.; Wang, X. Q. [MOE Key Laboratory of Materials Modification by Beams of the Ministry of Education, School of Physics and Optoelectronic Engineering, Dalian University of Technology, Dalian 116024 (China)]; Wang, X. G. [School of Physics, Peking University, Beijing 100871 (China)]

    2013-06-15

    Error-field penetration in reversed magnetic shear (RMS) configurations is numerically investigated by using a two-dimensional resistive magnetohydrodynamic model in slab geometry. To explore different dynamic processes in locked modes, three equilibrium states are adopted. Stable, marginal, and unstable current profiles for double tearing modes are designed by varying the current intensity between two resonant surfaces separated by a certain distance. Further, the dynamic characteristics of locked modes in the three RMS states are identified, and the relevant physics mechanisms are elucidated. The scaling behavior of critical perturbation value with initial plasma velocity is numerically obtained, which obeys previously established relevant analytical theory in the viscoresistive regime.

  1. Error 401 on upload? | OpenEI Community

    Open Energy Info (EERE)


  2. Analysis Of The Tank 5F Final Characterization Samples-2011

    SciTech Connect (OSTI)

    Oji, L. N.; Diprete, D.; Coleman, C. J.; Hay, M. S.

    2012-09-27

    The Savannah River National Laboratory (SRNL) was requested by SRR to provide sample preparation and analysis of the Tank 5F final characterization samples to determine the residual tank inventory prior to grouting. Two types of samples were collected and delivered to SRNL: floor samples across the tank and subsurface samples from mounds near risers 1 and 5 of Tank 5F. These samples were taken from Tank 5F between January and March 2011. These samples from individual locations in the tank (nine floor samples and six mound Tank 5F samples) were each homogenized and combined in a given proportion into 3 distinct composite samples to mimic the average composition in the entire tank. These Tank 5F composite samples were analyzed for radiological, chemical and elemental components. Additional measurements performed on the Tank 5F composite samples include bulk density and water leaching of the solids to account for water soluble species. With analyses for certain challenging radionuclides as the exception, all composite Tank 5F samples were analyzed and reported in triplicate. The target detection limits for isotopes analyzed were based on customer desired detection limits as specified in the technical task request documents. SRNL developed new methodologies to meet these target detection limits and provide data for the extensive suite of components. While many of the target detection limits were met for the species characterized for Tank 5F, as specified in the technical task request, some were not met. In a few cases, the relatively high levels of radioactive species of the same element or a chemically similar element precluded the ability to measure some isotopes to low levels. The Technical Task Request allows that while the analyses of these isotopes is needed, meeting the detection limits for these isotopes is a lower priority than meeting detection limits for the other specified isotopes. 
The isotopes whose detection limits were not met in all cases included the following: Al-26, Sn-126, Sb-126, Sb-126m, Eu-152 and Cf-249. SRNL, in conjunction with the plant customer, reviewed all these cases and determined that the impacts were negligible.

  3. ANALYSIS OF THE TANK 5F FINAL CHARACTERIZATION SAMPLES-2011

    SciTech Connect (OSTI)

    Oji, L.; Diprete, D.; Coleman, C.; Hay, M.

    2012-01-20

    The Savannah River National Laboratory (SRNL) was requested by SRR to provide sample preparation and analysis of the Tank 5F final characterization samples to determine the residual tank inventory prior to grouting. Two types of samples were collected and delivered to SRNL: floor samples across the tank and subsurface samples from mounds near risers 1 and 5 of Tank 5F. These samples were taken from Tank 5F between January and March 2011. These samples from individual locations in the tank (nine floor samples and six mound Tank 5F samples) were each homogenized and combined in a given proportion into 3 distinct composite samples to mimic the average composition in the entire tank. These Tank 5F composite samples were analyzed for radiological, chemical and elemental components. Additional measurements performed on the Tank 5F composite samples include bulk density and water leaching of the solids to account for water soluble species. With analyses for certain challenging radionuclides as the exception, all composite Tank 5F samples were analyzed and reported in triplicate. The target detection limits for isotopes analyzed were based on customer desired detection limits as specified in the technical task request documents. SRNL developed new methodologies to meet these target detection limits and provide data for the extensive suite of components. While many of the target detection limits were met for the species characterized for Tank 5F, as specified in the technical task request, some were not met. In a few cases, the relatively high levels of radioactive species of the same element or a chemically similar element precluded the ability to measure some isotopes to low levels. The Technical Task Request allows that while the analyses of these isotopes is needed, meeting the detection limits for these isotopes is a lower priority than meeting detection limits for the other specified isotopes. 
The isotopes whose detection limits were not met in all cases included the following: Al-26, Sn-126, Sb-126, Sb-126m, Eu-152 and Cf-249. SRNL, in conjunction with the plant customer, reviewed all these cases and determined that the impacts were negligible.

  4. ANALYSIS OF THE TANK 5F FINAL CHARACTERIZATION SAMPLES-2011

    SciTech Connect (OSTI)

    Oji, L.; Diprete, D.; Coleman, C.; Hay, M.

    2012-08-03

    The Savannah River National Laboratory (SRNL) was requested by SRR to provide sample preparation and analysis of the Tank 5F final characterization samples to determine the residual tank inventory prior to grouting. Two types of samples were collected and delivered to SRNL: floor samples across the tank and subsurface samples from mounds near risers 1 and 5 of Tank 5F. These samples were taken from Tank 5F between January and March 2011. These samples from individual locations in the tank (nine floor samples and six mound Tank 5F samples) were each homogenized and combined in a given proportion into 3 distinct composite samples to mimic the average composition in the entire tank. These Tank 5F composite samples were analyzed for radiological, chemical and elemental components. Additional measurements performed on the Tank 5F composite samples include bulk density and water leaching of the solids to account for water soluble species. With analyses for certain challenging radionuclides as the exception, all composite Tank 5F samples were analyzed and reported in triplicate. The target detection limits for isotopes analyzed were based on customer desired detection limits as specified in the technical task request documents. SRNL developed new methodologies to meet these target detection limits and provide data for the extensive suite of components. While many of the target detection limits were met for the species characterized for Tank 5F, as specified in the technical task request, some were not met. In a few cases, the relatively high levels of radioactive species of the same element or a chemically similar element precluded the ability to measure some isotopes to low levels. The Technical Task Request allows that while the analyses of these isotopes is needed, meeting the detection limits for these isotopes is a lower priority than meeting detection limits for the other specified isotopes. 
The isotopes whose detection limits were not met in all cases included the following: Al-26, Sn-126, Sb-126, Sb-126m, Eu-152 and Cf-249. SRNL, in conjunction with the plant customer, reviewed all these cases and determined that the impacts were negligible.

  5. An Integrated Safety Assessment Methodology for Generation IV Nuclear Systems

    SciTech Connect (OSTI)

    Timothy J. Leahy

    2010-06-01

    The Generation IV International Forum (GIF) Risk and Safety Working Group (RSWG) was created to develop an effective approach for the safety of Generation IV advanced nuclear energy systems. Early work of the RSWG focused on defining a safety philosophy founded on lessons learned from current and prior generations of nuclear technologies, and on identifying technology characteristics that may help achieve Generation IV safety goals. More recent RSWG work has focused on the definition of an integrated safety assessment methodology for evaluating the safety of Generation IV systems. The methodology, tentatively called ISAM, is an integrated “toolkit” consisting of analytical techniques that are available and matched to appropriate stages of Generation IV system concept development. The integrated methodology is intended to yield safety-related insights that help actively drive the evolving design throughout the technology development cycle, potentially resulting in enhanced safety, reduced costs, and shortened development time.

  6. Computing the partition function, ensemble averages, and density of states for lattice spin systems by sampling the mean

    SciTech Connect (OSTI)

    Gillespie, Dirk

    2013-10-01

    An algorithm to approximately calculate the partition function (and subsequently ensemble averages) and density of states of lattice spin systems through non-Monte-Carlo random sampling is developed. This algorithm (called the sampling-the-mean algorithm) can be applied to models where the up or down spins at lattice nodes interact to change the spin states of other lattice nodes, especially non-Ising-like models with long-range interactions such as the biological model considered here. Because it is based on the Central Limit Theorem of probability, the sampling-the-mean algorithm also gives estimates of the error in the partition function, ensemble averages, and density of states. Easily implemented parallelization strategies and error minimizing sampling strategies are discussed. The sampling-the-mean method works especially well for relatively small systems, systems with a density of energy states that contains sharp spikes or oscillations, or systems with little a priori knowledge of the density of states.

  7. Improved Characterization of Transmitted Wavefront Error on CADB Epoxy-Free Bonded Solid State Laser Materials

    SciTech Connect (OSTI)

    Bayramian, A

    2010-12-09

Current state-of-the-art and next generation laser systems - such as those used in the NIF and LIFE experiments at LLNL - depend on ever larger optical elements. The need for wide aperture optics that are tolerant of high power has placed many demands on material growers for such diverse materials as crystalline sapphire, quartz, and laser host materials. For such materials, it is either prohibitively expensive or even physically impossible to fabricate monolithic pieces of the required size. In these cases, it is preferable to optically bond two or more elements together with a technique such as Chemically Activated Direct Bonding (CADB©). CADB is an epoxy-free bonding method that produces bulk-strength bonded samples with negligible optical loss and excellent environmental robustness. The authors have demonstrated CADB for a variety of different laser glasses and crystals. For this project, they will bond quartz samples together to determine the suitability of the resulting assemblies for large-aperture, high-power laser optics. The assemblies will be evaluated in terms of their transmitted wavefront error and other optical properties.

  8. Plasma parameter scaling of the error-field penetration threshold in tokamaks Richard Fitzpatrick

    E-Print Network [OSTI]

    Fitzpatrick, Richard

Plasma parameter scaling of the error-field penetration threshold in tokamaks, Richard Fitzpatrick. Related: response of a rotating tokamak plasma to a resonant error-field, Phys. Plasmas 21, 092513 (2014); 10.1063/1.4896244. A nonideal error-field response model for strongly shaped tokamak plasmas, Phys. Plasmas 17, 112502 (2010); 10

  9. Matt Duckham Page 1 Implementing an object-oriented error sensitive GIS

    E-Print Network [OSTI]

    Duckham, Matt

Matt Duckham Page 1 Implementing an object-oriented error sensitive GIS Matt Duckham Department in the handling of uncertainty within GIS, the production of what has been described as an error sensitive GIS of opportunities, but also impediments to the implementation of such an error sensitive GIS. An important barrier

  10. Repeated quantum error correction on a continuously encoded qubit by real-time feedback

    E-Print Network [OSTI]

    Julia Cramer; Norbert Kalb; M. Adriaan Rol; Bas Hensen; Machiel S. Blok; Matthew Markham; Daniel J. Twitchen; Ronald Hanson; Tim H. Taminiau

    2015-08-06

    Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits, so that errors can be detected without affecting the encoded state. To be compatible with universal fault-tolerant computations, it is essential that the states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected qubit using a diamond quantum processor. We encode a logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements using an ancilla electron spin, and apply corrections on the encoded state by real-time feedback. The actively error-corrected qubit is robust against errors and multiple rounds of error correction prevent errors from accumulating. Moreover, by correcting phase errors naturally induced by the environment, we demonstrate that encoded quantum superposition states are preserved beyond the dephasing time of the best physical qubit used in the encoding. These results establish a powerful platform for the fundamental investigation of error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing.
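The repetition-code logic described in this abstract (repeated ancilla-based parity checks followed by correction, so that single errors never corrupt the encoded value) can be illustrated with a classical toy simulation. This is our sketch, not the authors' protocol: it treats phase flips as classical bit flips in the appropriate basis, with two parity checks standing in for the ancilla measurements.

```python
import random

def logical_error_rate(p, rounds, trials, seed=1):
    """Toy classical analogue of the three-qubit repetition code: each
    round every data bit flips with probability p, two parity checks
    (the classical stand-in for ancilla measurements) locate a single
    flip, and it is corrected.  Returns the fraction of trials in which
    the majority vote at the end is wrong (a logical error)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        bits = [0, 0, 0]
        for _ in range(rounds):
            bits = [b ^ (rng.random() < p) for b in bits]
            s1 = bits[0] ^ bits[1]  # parity of bits 0 and 1
            s2 = bits[1] ^ bits[2]  # parity of bits 1 and 2
            if s1 and not s2:
                bits[0] ^= 1
            elif s1 and s2:
                bits[1] ^= 1
            elif s2:
                bits[2] ^= 1
        failures += sum(bits) >= 2  # logical (majority) value corrupted
    return failures / trials
```

A logical error now requires two flips within a single round, so for small p the logical error rate scales as ~3p² per round rather than ~p, which is the sense in which repeated correction prevents errors from accumulating.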

  11. Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning

    E-Print Network [OSTI]

    Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning Robert Granat, Kiri-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors for detecting and recovering from such errors. A common hardware technique for achieving radiation protection

  12. Edit: Study -APP Save | Exit | Hide/Show Errors | Print... | Jump To

    E-Print Network [OSTI]

    Biederman, Irving

    Edit: Study - APP Save | Exit | Hide/Show Errors | Print... | Jump To: 01. Project Guidance Save | Exit | Hide/Show Errors | Print... | Jump To: 01. Project IdentificationStarDev/ResourceAdministration/Project/ProjectEditor?Project=com... 1 #12;Edit: Study - APP- Save | Exit | Hide/Show Errors | Print... | Jump To: 02. Study

  13. Error Correction on a Tree: An Instanton Approach V. Chernyak,1

    E-Print Network [OSTI]

    Stepanov, Misha

    or semianalytical estimating of the post-error correction bit error rate (BER) when a forward-error correction 630090, Russia 5 Department of Electrical Engineering, University of Arizona, Tucson, Arizona 85721, USA is utilized for transmitting information through a noisy channel. The generic method that applies to a variety

  14. Exposure Measurement Error in Time-Series Studies of Air Pollution: Concepts and Consequences

    E-Print Network [OSTI]

    Dominici, Francesca

    1 Exposure Measurement Error in Time-Series Studies of Air Pollution: Concepts and Consequences S in time-series studies 1 11/11/99 Keywords: measurement error, air pollution, time series, exposure of air pollution and health. Because measurement error may have substantial implications for interpreting

  15. Validation of error estimators and superconvergence by a computer-based approach 

    E-Print Network [OSTI]

    Upadhyay, Chandra Shekhar

    1993-01-01

    In this work a computer-based methodology for studying the asymptotic properties of the finite element solution in the interior of grids of triangles is presented. This methodology is applied to numerically analyze the ...

  16. New insights on numerical error in symplectic integration

    E-Print Network [OSTI]

    Hugo Jiménez-Pérez; Jean-Pierre Vilotte; Barbara Romanowicz

    2015-08-13

We implement and investigate the numerical properties of a new family of integrators connecting both variants of the symplectic Euler schemes, and including an alternative to the classical symplectic mid-point scheme, with some additional terms. This family is derived from a new method, introduced in a previous study, for generating symplectic integrators based on the concept of a special symplectic manifold. The use of symplectic rotations and a particular type of projection keeps the whole procedure within the symplectic framework. We show that it is possible to define a set of parameters that control the additional terms, providing a way of "tuning" these new symplectic schemes. We test the "tuned" symplectic integrators on the perturbed pendulum and compare their behavior with an explicit scheme for perturbed systems. Remarkably, for the given examples, the error in the energy integral can be reduced considerably. There is a natural geometrical explanation, sketched at the end of this paper; a finer analysis is performed in a parallel article. The numerical results obtained here open a new perspective on symplectic integrators and Hamiltonian error.
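The basic property that motivates this line of work (bounded rather than drifting energy error for symplectic schemes) is easy to demonstrate on the pendulum used as a test case. The following is a minimal sketch of the standard symplectic Euler variant compared with explicit Euler, not the paper's "tuned" family.

```python
import math

def pendulum_energy(q, p):
    """Hamiltonian of the simple pendulum, H = p^2/2 - cos(q)."""
    return 0.5 * p * p - math.cos(q)

def symplectic_euler(q, p, dt, steps):
    """Symplectic Euler: kick p with the force at the current q, then
    drift q with the updated p.  The energy error stays bounded because
    the map exactly conserves a nearby 'shadow' Hamiltonian."""
    for _ in range(steps):
        p -= dt * math.sin(q)
        q += dt * p
    return q, p

def explicit_euler(q, p, dt, steps):
    """Non-symplectic explicit Euler for comparison: its energy drifts."""
    for _ in range(steps):
        q, p = q + dt * p, p - dt * math.sin(q)
    return q, p
```

Integrating over many periods, the symplectic scheme's energy error oscillates at O(dt) while the explicit scheme's error grows steadily.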

  17. Aperiodic dynamical decoupling sequences in presence of pulse errors

    E-Print Network [OSTI]

    Zhi-Hui Wang; V. V. Dobrovitski

    2011-01-12

Dynamical decoupling (DD) is a promising tool for preserving the quantum states of qubits. However, small imperfections in the control pulses can seriously affect the fidelity of decoupling, and qualitatively change the evolution of the controlled system at long times. Using both analytical and numerical tools, we theoretically investigate the effect of pulse-error accumulation for two aperiodic DD sequences: the Uhrig DD (UDD) protocol [G. S. Uhrig, Phys. Rev. Lett. 98, 100504 (2007)] and the Quadratic DD (QDD) protocol [J. R. West, B. H. Fong and D. A. Lidar, Phys. Rev. Lett. 104, 130501 (2010)]. We consider the implementation of these sequences using the electron spins of phosphorus donors in silicon, where DD sequences are applied to suppress dephasing of the donor spins. The dependence of the decoupling fidelity on different initial states of the spins is the focus of our study. We investigate in detail the initial drop in the DD fidelity and its long-term saturation. We also demonstrate that by applying the control pulses along different directions, the performance of QDD protocols can be noticeably improved, and we explain the reason for this improvement. Our results can be useful for future implementations of aperiodic decoupling protocols, and for better understanding of the impact of errors on quantum control of spins.

  18. Depth-discrete sampling port

    DOE Patents [OSTI]

    Pemberton, Bradley E. (Aiken, SC); May, Christopher P. (Columbia, MD); Rossabi, Joseph (Aiken, SC); Riha, Brian D. (Augusta, GA); Nichols, Ralph L. (North Augusta, SC)

    1998-07-07

A sampling port is provided which has threaded ends for incorporating the port into a length of subsurface pipe. The port defines an internal receptacle which is in communication with subsurface fluids through a series of fine filtering slits. The receptacle is in further communication, through a bore, with a fitting carrying a length of tubing through which samples are transported to the surface. Each port further defines an additional bore through which tubing, cables, or similar components of adjacent ports may pass.

  19. A Practical Methodology for the Formal Verification of RISC Processors

    E-Print Network [OSTI]

    Tahar, Sofiène

    and nuclear reactor control etc., where design errors could lead to loss of life and expensive property [79, pipeline data and pipeline control correctness. In the former, we show that the cumulative effect tasks. In addition, the pipeline control proof is constructive, in the sense that the conditions under

  20. Recent developments in atomic/nuclear methodologies used for the study of cultural heritage objects

    SciTech Connect (OSTI)

    Appoloni, Carlos Roberto

    2013-05-06

Archaeometry has been an established area in the international community since the 1960s, with extensive use of atomic-nuclear methods in the characterization of art, archaeological, and cultural heritage objects in general. In Brazil, however, until the early 1990s, only archaeological dating employed methods from physics. It was only after this period that Brazilian groups became involved in the characterization of archaeological and art objects with these methodologies. The Laboratory of Applied Nuclear Physics, State University of Londrina (LFNA/UEL) pioneered Archaeometry and related issues among its priority lines of research in 1994, after a member of LFNA had become involved in 1992 with the possibilities of tomography in archaeometry, as well as the analysis of ancient bronzes by EDXRF. Since then, LFNA has been working with PXRF and portable Raman in several museums in Brazil, in field studies of cave paintings, and in the laboratory with material sent by archaeologists, as well as carrying out collaborative work with new groups that followed in this area. From 2003/2004, LAMFI/DFN/IFUSP and LIN/COPPE/UFRJ began to engage in the area, respectively with ion-beam methodologies and PXRF, over time incorporating other techniques, followed later by other groups. Due to the growing number of laboratories and institutions/archaeologists/conservators interested in these applications, a network of available laboratories was created in May 2012, based at http://www.dfn.if.usp.br/lapac. A panel of recent developments and applications of these methodologies by national groups will be presented, as well as a sampling of what has been done by leading groups abroad.

  1. METHODOLOGY Open Access Modular assembly of designer PUF proteins for

    E-Print Network [OSTI]

    Zhao, Huimin

    METHODOLOGY Open Access Modular assembly of designer PUF proteins for specific post of the PUF domain assembly method for RBP engineering, we fused the PUF domain to a post and applied biology and medicine. Keywords: Protein engineering, RNA-binding protein, Post

  2. Architecture Rationalization: A Methodology for Architecture Verifiability, Traceability and Completeness

    E-Print Network [OSTI]

    Han, Jun

    Architecture Rationalization: A Methodology for Architecture Verifiability, Traceability and Completeness Antony Tang Jun Han Faculty of ICT, Swinburne University of Technology, Melbourne, Australia E-mail: {atang, jhan}@it.swin.edu.au Abstract Architecture modeling is practiced extensively in the software

  3. A Design Methodology and Environment for Interactive Behavioral Synthesis

    E-Print Network [OSTI]

    California at Irvine, University of

tant area of research and company interest. However, there has been market resistance to the automatic designer fine-grain control over synthesis tasks, and continually supplies feedback in the form of quality of the proposed design methodology and to demonstrate its power and flexibility, we also present the Interactive

  4. Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric

    E-Print Network [OSTI]

    Riabov, Vladimir V.

The purpose of this document is to describe the structured testing methodology for software testing, which also uses the control flow structure of software to establish path coverage criteria. The resultant test Keywords: McCabe, object oriented, software development, software diagnostic, software metrics, software testing

  5. A three step methodology Experimental results and discussion

    E-Print Network [OSTI]

    Lefèvre, Laurent

initiatives Multi-configuration (processor, NIC), more energy efficient (GPU), and low power hardware (DDR3 Energy Reduction in Cloud and HPC: Design and Implementation of a Blind Methodology Ghislain Landry Conclusions and Perspectives HPC systems and energy consumption

  6. Sensitivity Analysis Methodology for a Complex System Computational Model

    E-Print Network [OSTI]

    1 Sensitivity Analysis Methodology for a Complex System Computational Model James J. Filliben of computational models to serve as predictive surrogates for the system. The use of such models increasingly) of a computational model for a complex system is always an essential component in accepting/rejecting such a model

  7. C-1 2003 SITE ENVIRONMENTAL REPORT Radiological Data Methodologies

    E-Print Network [OSTI]

    Homes, Christopher C.

C-1 2003 SITE ENVIRONMENTAL REPORT APPENDIX C Radiological Data Methodologies DOSE CALCULATION to calculate annual dispersions for the midpoint of a given sector and distance. Facility Protection Agency Exposure Factors Handbook (EPA 1996). RADIOLOGICAL DATA PROCESSING Radiation events occur

  8. User-Friendly Methodology for Automatic Exploration of Compiler Options

    E-Print Network [OSTI]

    Gao, Guang R.

    User-Friendly Methodology for Automatic Exploration of Compiler Options: A Case Study on the Intel, lochen, jcuvillo, ggao}@capsl.udel.edu Abstract Finding an optimized combination of compiler options with a deep understanding of the application at hand and a fairly good knowledge of the compiler features

  9. A METHODOLOGY FOR IDENTIFICATION OF NARMAX MODELS APPLIED TO DIESEL

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    A METHODOLOGY FOR IDENTIFICATION OF NARMAX MODELS APPLIED TO DIESEL ENGINES 1 Gianluca Zito ,2 Ioan is illustrated by means of an automotive case study, namely a variable geometry turbocharged diesel engine identification procedure is illustrated. In section 3 a diesel engine system, used to test the procedure

  10. Does help help? Introducing the Bayesian Evaluation and Assessment methodology

    E-Print Network [OSTI]

    Mostow, Jack

    Does help help? Introducing the Bayesian Evaluation and Assessment methodology Joseph E. Beck1 the effectiveness of tutor help is an important evaluation of an ITS. Does help help? Does one type of help work of the difficulties is that the question "does help help?" is ill-defined; what does it mean to help students? Does

  11. Operational Risk Management: Added Value of Advanced Methodologies

    E-Print Network [OSTI]

    Maume-Deschamps, Véronique

Operational Risk Management: Added Value of Advanced Methodologies Paris, September 2013 Bertrand HASSANI Head of Major Risks Management & Scenario Analysis Santander UK Disclaimer: The opinions, ideas Measurement Key statements 1. Risk management motto: Si Vis Pacem, Para Bellum 1. Awareness 2. Prevention 3

  12. Customizing AOSE Methodologies by Reusing AOSE Thomas Juan Leon Sterling

    E-Print Network [OSTI]

    Mascardi, Viviana

    Customizing AOSE Methodologies by Reusing AOSE Features Thomas Juan Leon Sterling Department of Computer Science and Software Engineering, The University of Melbourne 221 Bouverie Street Carlton engineering support for a diverse range of software quality attributes, such as privacy and openness

  13. Algorithms and Experiments: The New (and Old) Methodology

    E-Print Network [OSTI]

    Moret, Bernard

    Algorithms and Experiments: The New (and Old) Methodology Bernard M.E. Moret Department of Computer twenty years have seen enormous progress in the design of algorithms, but little of it has been put into practice. Because many recently developed algorithms are hard to characterize theoretically and have large

  14. Methodology Modelling: Combining Software Processes with Software Products \\Lambda

    E-Print Network [OSTI]

    Han, Jun

of software processes in improving the quality of software products has been widely recognised for some time processes and software products is a major factor in improving software quality. 2. Fine-grained, non Methodology Modelling: Combining Software Processes with Software Products Jun Han and Jim

  15. A Methodology for Decisionmaking in Project Evaluation in Land

    E-Print Network [OSTI]

    A Methodology for Decisionmaking in Project Evaluation in Land Management Planning A. Weintraub Abstract: In order to evaluate alternative plans, wildland management planners must consider many the evaluation of plans under alternative value systems. Alternatives are compared through a small set of values

  16. ARM Processes and Their Modeling and Forecasting Methodology Benjamin Melamed

    E-Print Network [OSTI]

    Chapter 73 ARM Processes and Their Modeling and Forecasting Methodology Benjamin Melamed Abstract The class of ARM (Autoregressive Modular) processes is a class of stochastic processes, defined by a non- linear autoregressive scheme with modulo-1 reduction and additional transformations. ARM processes

  17. Methodology for testing metal detectors using variables test data

    SciTech Connect (OSTI)

    Spencer, D.D.; Murray, D.W.

    1993-08-01

By extracting and analyzing measurement (variables) data from portal metal detectors whenever possible, instead of the more typical "alarm"/"no-alarm" (attributes, or binomial) data, we can be better informed about metal detector health with fewer tests. The testing methodology discussed in this report is an alternative to typical binomial testing and in many ways is far superior.

  18. LASS License Aware Service Selection: Methodology and Framework

    E-Print Network [OSTI]

    Dustdar, Schahram

    LASS ­ License Aware Service Selection: Methodology and Framework G.R. Gangadharan1 , Marco Comerio services with corre- sponding service licenses which consumers should follow. Often, service consumers are interested in selecting a service based on certain licensing clauses. For a set of requested licensing

  19. CHAPTER 8 RESEARCH METHODOLOGY AND DESIGN 8.1 Introduction

    E-Print Network [OSTI]

    O'Connor, Rory

    of the most common distinctions is between quantitative and qualitative research methods. . Quantitative. The motivation for pursuing qualitative research, as opposed to quantitative research, comes from the observation in this thesis. 8.2 Research Methodologies Research methods can be classified in various ways, however, one

  1. Methodology for extracting local constants from petroleum cracking flows

    DOE Patents [OSTI]

    Chang, Shen-Lin (Woodridge, IL); Lottes, Steven A. (Naperville, IL); Zhou, Chenn Q. (Munster, IN)

    2000-01-01

A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetic computer code is used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set, to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
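Step (4), adjusting a kinetic constant until calculated yields match measured ones, can be sketched with a toy first-order yield model. The model, the ternary-search fit, and all names below are illustrative assumptions, not the patented procedure, which couples a full CFD solver to the kinetics.

```python
import math

def predicted_yield(k, t):
    """Toy first-order conversion model: yield = 1 - exp(-k * t)."""
    return 1.0 - math.exp(-k * t)

def fit_rate_constant(times, measured, k_lo=1e-3, k_hi=10.0, iters=60):
    """Adjust the rate constant k by ternary search to minimize the sum
    of squared errors between predicted and measured yields (the SSE is
    unimodal in k for this monotone model)."""
    def sse(k):
        return sum((predicted_yield(k, t) - y) ** 2
                   for t, y in zip(times, measured))
    for _ in range(iters):
        m1 = k_lo + (k_hi - k_lo) / 3.0
        m2 = k_hi - (k_hi - k_lo) / 3.0
        if sse(m1) < sse(m2):
            k_hi = m2
        else:
            k_lo = m1
    return 0.5 * (k_lo + k_hi)
```

In the actual methodology the "predicted yield" would come from the coupled CFD/kinetics calculation rather than a closed-form model, but the adjust-to-match loop has the same shape.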

  2. SARC Power Estimation Methodology Daniele Ludovici and Georgi N. Gaydadjiev

    E-Print Network [OSTI]

    . Accurate estimation of power dis- sipation is very important during micro-architectural de- sign of every power consumption of the SARC architecture. SARC project is targeting next generation scalable com is one of the main challenges of this project. Therefore, adequate methodology to estimate it is needed

  3. DEVELOPMENT OF A METHODOLOGY TO PREDICT AND PREVENT LEAKS CAUSED

    E-Print Network [OSTI]

    Beckermann, Christoph

    to produce steel castings that are free from macroporosity (i.e., shrinkage porosity large enoughDEVELOPMENT OF A METHODOLOGY TO PREDICT AND PREVENT LEAKS CAUSED BY MICROPOROSITY IN STEEL CASTINGS to be detectable by radiographic testing). No risering rules currently exist to produce castings free from

  4. Public Administration Research and Practice: A Methodological Manifesto1

    E-Print Network [OSTI]

    Gill, Jeff

    Public Administration Research and Practice: A Methodological Manifesto1 Jeff Gill California Service, Texas A&M University, December 3-4, 1999. #12;1 1 Introduction Public Administration, in Dwight that public administration has ignored its technical side and that given the types of problems dealt

  5. ORIGINAL PAPER Review of Methodologies for Offshore Wind Resource

    E-Print Network [OSTI]

    Pryor, Sara C.

    ORIGINAL PAPER Review of Methodologies for Offshore Wind Resource Assessment in European Seas A. M installation, operation and maintenance costs associated with offshore wind parks. Successful offshore wind. Keywords Wind energy Á Offshore Á Resources assessment Á European seas Á Wind mapping Á Wind climatology Á

  6. Regional issue identification and assessment: study methodology. First annual report

    SciTech Connect (OSTI)

    Not Available

    1980-01-01

    The overall assessment methodologies and models utilized for the first project under the Regional Issue Identification and Assessment (RIIA) program are described. Detailed descriptions are given of the methodologies used by lead laboratories for the quantification of the impacts of an energy scenario on one or more media (e.g., air, water, land, human and ecology), and by all laboratories to assess the regional impacts on all media. The research and assessments reflected in this document were performed by the following national laboratories: Argonne National Laboratory; Brookhaven National Laboratory; Lawrence Berkeley Laboratory; Los Alamos Scientific Laboratory; Oak Ridge National Laboratory; and Pacific Northwest Laboratory. This report contains five chapters. Chapter 1 briefly describes the overall study methodology and introduces the technical participants. Chapter 2 is a summary of the energy policy scenario selected for the RIIA I study and Chapter 3 describes how this scenario was translated into a county-level siting pattern of energy development. The fourth chapter is a detailed description of the individual methodologies used to quantify the environmental and socioeconomic impacts of the scenario while Chapter 5 describes how these impacts were translated into comprehensive regional assessments for each Federal Region.

  7. Shielding Methodologies in the Presence of Power/Ground Noise

    E-Print Network [OSTI]

    Friedman, Eby G.

    Shielding Methodologies in the Presence of Power/Ground Noise Selc¸uk K¨ose, Emre Salman, and Eby G, 14627 {kose,salman,friedman}@ece.rochester.edu Abstract-- Design guidelines for shielding of shield lengths and widths, a shield line can degrade signal integrity by increasing the crosstalk noise

  8. CRREL Report 98-4: Frost-Shielding Methodology and

    E-Print Network [OSTI]

    Horvath, John S.

CRREL Report 98-4: Frost-Shielding Methodology and Demonstration for Shallow Burial of Water and Sewer freezing by adding an insulation shield would allow a shallow burial option. This can reduce excavation shields for a water line in northern New Hampshire through a 4-year Construction Productivity Advancement

  9. IEEE 802.11AND BLUETOOTH COEXISTENCE ANALYSIS METHODOLOGY'

    E-Print Network [OSTI]

    Howitt, Ivan

wireless personal area networks and IEEE 802.11 wireless local area networks share the same 2.4 GHz UL band, an important issue. Both BT wireless personal area networks (WPANs) [1, 2] and IEEE 802.11 wireless local area standards committee [4, 5]. In this paper, a more general analytical approach is presented. A methodology

  10. Enhancing Tropos with Commitments A Business Metamodel and Methodology

    E-Print Network [OSTI]

    for business modeling are either high-level and semiformal or formal but low-level. Thus they fail to support flexible but rigorous modeling and enactment of business processes. This paper begins from the well and a methodology for specifying a business model. This paper includes an insurance industry case study that several

  11. Using Tropos Methodology to Model an Integrated Health Assessment System

    E-Print Network [OSTI]

    health assessment of health and social care needs of older people is used as the case study throughoutUsing Tropos Methodology to Model an Integrated Health Assessment System Haralambos Mouratidis 1 of Trento, Italy pgiorgini@dit.unit.it Abstract. This paper presents a case study to illustrate the features

  12. An Alternative Baseline Methodology for the Power Sector

    E-Print Network [OSTI]

    An Alternative Baseline Methodology for the Power Sector - Taking a Systemic Approach Jakob Asger in August 2005 to discuss the international future strategy of climate policies. Both events put our work process from idea to final thesis. Further we would like to express our warm thanks to Senior Energy

  13. METHODOLOGY ARTICLE Open Access A simple and reproducible breast cancer

    E-Print Network [OSTI]

    Geman, Donald

    METHODOLOGY ARTICLE Open Access A simple and reproducible breast cancer prognostic test Luigi test for breast cancer based on a 70-gene expression signature. We provide all the software, Personalized medicine, Breast cancer, MammaPrint Background Currently, a number of molecular-based prognostic

  14. POLE PLACEMENT VIA OUTPUT FEEDBACK: A METHODOLOGY BASED ON

    E-Print Network [OSTI]

    Orsi, Robert

    POLE PLACEMENT VIA OUTPUT FEEDBACK: A METHODOLOGY BASED ON PROJECTIONS Kaiyang Yang and Robert Orsi feedback pole placement problems of the following rather general form: given n subsets of the complex plane, find a static output feedback that places in each of these subsets a pole of the closed loop system

  15. Fair and Comprehensive Methodology for Comparing Hardware Performance of Fourteen

    E-Print Network [OSTI]

    Gaj, Krzysztof

    Fair and Comprehensive Methodology for Comparing Hardware Performance of Fourteen Round Two SHA-3 ... to make it fair, transparent, practical, and acceptable for the majority of the cryptographic community ... it to the comparison of hardware performance of 14 Round 2 SHA-3 candidates. The most important aspects of our ...

  16. METHODOLOGY Open Access TRIzol treatment of secretory phase endometrium

    E-Print Network [OSTI]

    METHODOLOGY, Open Access: TRIzol treatment of secretory phase endometrium allows combined proteomic ... Conclusion: TRIzol treatment of secretory phase EM allows combined proteomic and mRNA microarray analysis ... abnormal bleeding, infertility, and chronic fatigue. Well-established biological differences between eutopic ...

  17. Underestimating Costs in Public Works Projects: Error or Lie?

    E-Print Network [OSTI]

    Flyvbjerg, Bent; Holm, Mette Skamris; Buhl, Søren

    2006-01-01

    highways, freeways, high-speed rail, urban rail, and ... bridges, tunnels, high-speed rail, urban rail, and conventional ... sample indicate that high-speed rail tops the list of cost

  18. ENVIRONMENTAL SAMPLING AND ANALYSIS - GETTING IT RIGHT

    SciTech Connect (OSTI)

    CONNELL CW

    2008-01-22

    The Department of Energy's Hanford Site in southeastern Washington State was established in the 1940s as part of the Manhattan Project. Hanford's role was to produce weapons-grade nuclear material for defense, and by 1989, when the Site's mission changed from operations to cleanup, Hanford had produced more than 60 percent of the nation's plutonium. The legacy of Hanford's production years is enormous in terms of nuclear and hazardous waste, especially the 270 billion gallons of contaminated groundwater and the 5 million cubic yards of contaminated soil. Managing the contaminated soil and groundwater is particularly important because the Columbia River, the lifeblood of the northwest and the nation's eighth largest river, bounds the Site. Fluor Hanford's Soil & Groundwater Remediation Project (S&GRP) integrates all of the activities that deal with remediating and monitoring the groundwater across the Site. The S&GRP uses a detailed series of steps to record, track, and verify information. The Sample and Data Management (SDM) Process consists of 10 integrated steps that start with the data quality objectives process, which establishes the mechanism for collecting the right information with the right people. The process ends with data quality assessment, which is used to ensure that all quantitative data (e.g., field screening, fixed laboratory) are the right type and of adequate quality to support the decision-making process. Steps 3 through 10 of the process are production steps and are integrated electronically. The detailed plans, procedures, and systems used day-to-day by the SDM process require a high degree of accuracy and reliability. Tools that minimize errors must be incorporated into the processes. This paper discusses all of the elements of the SDM process in detail.

  19. Asymptotic Efficiency and Finite Sample Performance of Frequentist Quantum State Estimation

    E-Print Network [OSTI]

    Raj Chakrabarti; Anisha Ghosh

    2011-11-15

    We undertake a detailed study of the performance of maximum likelihood (ML) estimators of the density matrix of finite-dimensional quantum systems, in order to interrogate generic properties of frequentist quantum state estimation. Existing literature on frequentist quantum estimation has not rigorously examined the finite sample performance of the estimators and associated methods of hypothesis testing. While ML is usually preferred on the basis of its asymptotic properties - it achieves the Cramer-Rao (CR) lower bound - the finite sample properties are often less than optimal. We compare the asymptotic and finite-sample properties of the ML estimators and test statistics for two different choices of measurement bases: the average case optimal or mutually unbiased bases (MUB) and a representative set of suboptimal bases, for spin-1/2 and spin-1 systems. We show that, in both cases, the standard errors of the ML estimators sometimes do not contain the true value of the parameter, which can render inference based on the asymptotic properties of the ML unreliable for experimentally realistic sample sizes. The results indicate that in order to fully exploit the information geometry of quantum states and achieve smaller reconstruction errors, the use of Bayesian state reconstruction methods - which, unlike frequentist methods, do not rely on asymptotic properties - is desirable, since the estimation error is typically lower due to the incorporation of prior knowledge.

  20. Pollution error in the h-version of the finite-element method and the local quality of a-posteriori error estimators 

    E-Print Network [OSTI]

    Mathur, Anuj

    1994-01-01

    In this work we study the pollution-error in the h-version of the finite element method and its effect on the local quality of a-posteriori error estimators. We show that the pollution-effect in an interior subdomain depends on the relationship...

  1. Errors, 3rd printing Page 3, Fig 1.2 has an error in the stratigraphic key: "Tertiary" should be "Triassic".

    E-Print Network [OSTI]

    Fossen, Haakon

    Errors, 3rd printing:
    · Page 3, Fig 1.2 has an error in the stratigraphic key: "Tertiary" should be "Triassic".
    · ... change "-amplitude" to "-wavelength".
    · Page 231, 6th and 3rd last lines of the page: Add "Figure" in front of "19.5a ...", and 3rd line: "three principal axes" (not two).

  2. A flexible importance sampling method for integrating subgrid processes

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Raut, E. K.; Larson, V. E.

    2015-10-22

    Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is integration. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). The resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
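    The category-based importance sampling idea in this abstract can be illustrated with a toy model. The sketch below is not SILHS: the categories, rate function, and prescribed sampling densities are invented for illustration, but it shows why oversampling a rare, active category (here, the precipitating-but-cloud-free portion) reduces sampling error.

    ```python
    import random
    from statistics import mean, pstdev

    # Natural frequencies p_c of four hypothetical subgrid categories, and a
    # prescribed sampling density q_c that oversamples the rare, active one.
    p = {"cloud+precip": 0.10, "cloud": 0.30, "precip": 0.05, "clear": 0.55}
    q = {"cloud+precip": 0.20, "cloud": 0.20, "precip": 0.40, "clear": 0.20}

    def rate(cat, x):
        """Hypothetical process rate: large only where rain evaporates."""
        return 4.0 * x if cat == "precip" else 0.1 * x

    def estimate(qdist, n, rng):
        """Importance-sampling estimate of the grid-box-averaged rate."""
        cats, weights = zip(*qdist.items())
        total = 0.0
        for _ in range(n):
            c = rng.choices(cats, weights=weights)[0]  # draw a category from q
            x = rng.random()                           # draw a point within it
            total += (p[c] / qdist[c]) * rate(c, x)    # importance weight p/q
        return total / n

    rng = random.Random(0)
    plain = [estimate(p, 200, rng) for _ in range(300)]     # q = p: plain MC
    weighted = [estimate(q, 200, rng) for _ in range(300)]  # oversample "precip"

    # Both estimators are unbiased for the analytic mean (0.1475 here); the
    # weighted one has a much smaller spread across replicates.
    print(round(mean(weighted), 3), pstdev(weighted) < pstdev(plain))
    ```

    The key design point: prescribing q only changes where the sample points land; the p/q weight keeps the estimate unbiased, so error reduction is free of bias.
    
    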

  3. To the low-temperature technologies methodology: the clean superconductor free energy fluctuations calculation in the micro- and macrostructures descriptions of superconductor

    E-Print Network [OSTI]

    Iogann Tolbatov

    2009-10-24

    The Ginzburg-Landau theory is used to study free-energy fluctuations in superconducting structures. On its basis, we have determined the value of the heat-capacity jump in a macroscopic zero-dimensional sample and in an ensemble of zero-dimensional microstructures whose total volume equals the volume of the macroscopic sample. We conclude that, within the Ginzburg-Landau framework, the effective dimensionality of a clean superconducting sample need be taken into account only at the final stage of calculating its thermodynamic characteristics.

  4. Coordinated joint motion control system with position error correction

    DOE Patents [OSTI]

    Danko, George (Reno, NV)

    2011-11-22

    Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  5. Error field penetration and locking to the backward propagating wave

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  6. Sample Results from Routine Salt Batch 7 Samples

    SciTech Connect (OSTI)

    Peters, T.

    2015-05-13

    Strip Effluent Hold Tank (SEHT) and Decontaminated Salt Solution Hold Tank (DSSHT) samples from several of the “microbatches” of Integrated Salt Disposition Project (ISDP) Salt Batch (“Macrobatch”) 7B have been analyzed for 238Pu, 90Sr, 137Cs, Inductively Coupled Plasma Emission Spectroscopy (ICPES), and Ion Chromatography Anions (IC-A). The results from the current microbatch samples are similar to those from earlier samples from this and previous macrobatches. The Actinide Removal Process (ARP) and the Modular Caustic-Side Solvent Extraction Unit (MCU) continue to show more than adequate Pu and Sr removal, and there is a distinct positive trend in Cs removal, due to the use of the Next Generation Solvent (NGS). The Savannah River National Laboratory (SRNL) notes that historically, most measured Concentration Factor (CF) values during salt processing have been in the 12-14 range. However, recent processing gives CF values closer to 11. This observation does not indicate that the solvent performance is suffering, as the Decontamination Factor (DF) has still maintained consistently high values. Nevertheless, SRNL will continue to monitor for indications of process upsets. The bulk chemistry of the DSSHT and SEHT samples do not show any signs of unusual behavior.

  7. Enhancing the Benefit of the Chemical Mixture Methodology: A Report on Methodology Testing and Potential Approaches for Improving Performance

    SciTech Connect (OSTI)

    Yu, Xiao-Ying; Yao, Juan; He, Hua; Glantz, Clifford S.; Booth, Alexander E.

    2012-01-01

    Extensive testing shows that the current version of the Chemical Mixture Methodology (CMM) is meeting its intended mission to provide conservative estimates of the health effects from exposure to airborne chemical mixtures. However, the current version of the CMM could benefit from several enhancements that are designed to improve its application of Health Code Numbers (HCNs) and employ weighting factors to reduce over-conservatism.

  8. Inertial impaction air sampling device

    DOE Patents [OSTI]

    Dewhurst, K.H.

    1987-12-10

    An inertial impactor to be used in an air sampling device for collection of respirable size particles in ambient air which may include a graphite furnace as the impaction substrate in a small-size, portable, direct analysis structure that gives immediate results and is totally self-contained allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by means of the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry. 3 figs.

  9. Inertial impaction air sampling device

    DOE Patents [OSTI]

    Dewhurst, K.H.

    1990-05-22

    An inertial impactor is designed which is to be used in an air sampling device for collection of respirable size particles in ambient air. The device may include a graphite furnace as the impaction substrate in a small-size, portable, direct analysis structure that gives immediate results and is totally self-contained allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by means of the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry. 3 figs.

  10. Communication Engineering Systems Sampling Theorem &

    E-Print Network [OSTI]

    Kovintavewat, Piya

    [Extraction residue from lecture slides on the sampling theorem: equations for sampling (x[n] = x(nT)), midrise quantization, DPCM with a 1-bit quantizer and unit delay, and delta modulation step updates; the mathematics is not recoverable as text.]

  11. The Ocean Sampling Day Consortium

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Kopf, Anna; Bicak, Mesude; Kottmann, Renzo; Schnetzer, Julia; Kostadinov, Ivaylo; Lehmann, Katja; Fernandez-Guerra, Antonio; Jeanthon, Christian; Rahav, Eyal; Ullrich, Matthias; et al

    2015-06-19

    In this study, Ocean Sampling Day was initiated by the EU-funded Micro B3 (Marine Microbial Biodiversity, Bioinformatics, Biotechnology) project to obtain a snapshot of the marine microbial biodiversity and function of the world’s oceans. It is a simultaneous global mega-sequencing campaign aiming to generate the largest standardized microbial data set in a single day. This will be achievable only through the coordinated efforts of an Ocean Sampling Day Consortium, supportive partnerships and networks between sites. This commentary outlines the establishment, function and aims of the Consortium and describes our vision for a sustainable study of marine microbial communities and their embedded functional traits.

  12. AFRICAN AMERICAN PSYCHOLOGY Sample Syllabus

    E-Print Network [OSTI]

    Hopfinger, Joseph B.

    AFRICAN AMERICAN PSYCHOLOGY PSYC 503 Sample Syllabus Course Description and Overview: This course examines the psychology of the African American experience. We begin the course with an overview of Black/African American psychology as an evolving field of study and consider the Black/African American Psychology

  13. Design of bioaerosol sampling inlets 

    E-Print Network [OSTI]

    Nene, Rohit Ravindra

    2007-09-17

    An experimental investigation involving the design, fabrication, and testing of an ambient sampling inlet and two additional Stokes-scaled inlets is presented here. Testing of each inlet was conducted at wind speeds of 2, 8, and 24 km/h (0.55, 2...

  14. Considerations for realistic ECCS evaluation methodology for LWRs

    SciTech Connect (OSTI)

    Rohatgi, U.S.; Saha, P.; Chexal, V.K.

    1985-01-01

    This paper identifies the various phenomena which govern the course of large and small break LOCAs in LWRs, and affect the key parameters such as Peak Clad Temperature (PCT) and timing of the end of blowdown, beginning of reflood, PCT, and complete quench. A review of the best-estimate models and correlations for these phenomena in the current literature has been presented. Finally, a set of models have been recommended which may be incorporated in a present best-estimate code such as TRAC or RELAP5 in order to develop a realistic ECCS evaluation methodology for future LWRs and have also been compared with the requirements of current ECCS evaluation methodology as outlined in Appendix K of 10CFR50. 58 refs.

  15. Cost Methodology for Biomass Feedstocks: Herbaceous Crops and Agricultural Residues

    SciTech Connect (OSTI)

    Turhollow Jr, Anthony F; Webb, Erin; Sokhansanj, Shahabaddine

    2009-12-01

    This report describes a set of procedures and assumptions used to estimate production and logistics costs of bioenergy feedstocks from herbaceous crops and agricultural residues. The engineering-economic analysis discussed here is based on methodologies developed by the American Society of Agricultural and Biological Engineers (ASABE) and the American Agricultural Economics Association (AAEA). An engineering-economic analysis approach was chosen due to lack of historical cost data for bioenergy feedstocks. Instead, costs are calculated using assumptions for equipment performance, input prices, and yield data derived from equipment manufacturers, research literature, and/or standards. Cost estimates account for fixed and variable costs. Several examples of this costing methodology used to estimate feedstock logistics costs are included at the end of this report.
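    An engineering-economic cost estimate of this kind combines annualized ownership (fixed) costs with hourly operating (variable) costs. The sketch below uses the standard capital-recovery-factor formula with hypothetical machine numbers; the prices, life, interest rate, and throughput are illustrative, not taken from the report.

    ```python
    def capital_recovery_factor(rate, years):
        """Annualize a capital cost over `years` at interest rate `rate`."""
        return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

    def unit_cost(purchase_price, salvage, life_yr, rate,
                  hours_per_yr, operating_per_hr, tons_per_hr):
        """Illustrative $/ton for one machine: annualized fixed (ownership)
        cost plus variable (fuel, labor, repair) cost over annual throughput."""
        fixed_annual = ((purchase_price - salvage) * capital_recovery_factor(rate, life_yr)
                        + salvage * rate)          # interest on salvage value
        variable_annual = operating_per_hr * hours_per_yr
        return (fixed_annual + variable_annual) / (tons_per_hr * hours_per_yr)

    # Hypothetical baler: $90k purchase, $20k salvage, 10-yr life, 7% interest,
    # 400 h/yr of use, $55/h operating cost, 12 tons/h throughput.
    cost = unit_cost(90_000, 20_000, 10, 0.07, 400, 55.0, 12.0)
    print(round(cost, 2))  # ≈ $6.95/ton
    ```

    Because there is little historical price data for bioenergy feedstocks, every input here is an engineering assumption; the value of the calculation is in making those assumptions explicit and auditable.
    
    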

  16. A methodology for simultaneous modeling and control of chemical processes 

    E-Print Network [OSTI]

    Zeng, Tong

    1995-01-01

    controller has been developed. Relay mapping S has been applied for the first time in a feedback system. Simulations of this new methodology have been made in several cases, such as using different relay step sizes, and adding disturbance and parameter slow drift. The simulation results show that the closed loop identification using relay mapping S represents process dynamics in an accurate way. Simulation results also show that the feedback system with relay mapping S has certain advantages over...

  17. Estimation and Reduction Methodologies for Fugitive Emissions from Equipment 

    E-Print Network [OSTI]

    Scataglia, A.

    1992-01-01

    Estimation and Reduction Methodologies for Fugitive Emissions from Equipment. Anthony Scataglia, Branch Manager, Team, Incorporated, Webster, Texas. ABSTRACT: Environmental regulations have resulted in the need for industrial facilities to reduce fugitive emissions from equipment leaks to their lowest possible level. This paper presents and compares approved methods outlined by the U.S. Environmental Protection Agency (EPA) for estimating fugitive emissions from equipment leaks, as well as strategies ...

  18. Methodologies for Estimating Building Energy Savings Uncertainty: Review and Comparison 

    E-Print Network [OSTI]

    Baltazar, J.C.; Sun, Y.; Claridge, D.

    2014-01-01

    Methodologies for Estimating Building Energy Savings Uncertainty: Review and Comparison. Juan-Carlos Baltazar PhD, PE; Yifu Sun EIT; and David Claridge PhD, PE. International Conference for Enhanced Building Operations, Tsinghua University, Beijing, China, September 14-17, 2014. ESL-IC-14-09-11a, Proceedings of the 14th International Conference for Enhanced Building Operations, Beijing, China.

  19. Methodology for predicting asphalt concrete overlay life against reflection cracking 

    E-Print Network [OSTI]

    Jayawickrama, Priyantha Warnasuriya

    1985-01-01

    METHODOLOGY FOR PREDICTING ASPHALT CONCRETE OVERLAY LIFE AGAINST REFLECTION CRACKING. A Thesis by PRIYANTHA WARNASURIYA JAYAWICKRAMA, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements ... Experimental investigations carried out at Ohio State University (1, 2, 3) and Texas A&M University (4, 5, 6) have verified the applicability of fracture mechanics principles in predicting fatigue life of asphalt ... [Figure residue removed: crack-tip and overlay schematic.]

  20. Economic and Financial Methodology for South Texas Irrigation Projects – RGIDECON© 

    E-Print Network [OSTI]

    Rister, M. Edward; Rogers, Callie S.; Lacewell, Ronald; Robinson, John; Ellis, John; Sturdivant, Allen

    2009-01-01

    COLLEGE OF AGRICULTURE AND LIFE SCIENCES, TR-203 (Revised), 2009. Economic and Financial Methodology for South Texas Irrigation Projects - RGIDECON©. By: M. Edward Rister, Callie S. Rogers, Ronald D. Lacewell, John R. C. Robinson, John R. Ellis, Allen W. Sturdivant. Texas AgriLife Research; Texas AgriLife Extension Service; Texas Water Resources Institute. Texas Water Resources Institute Technical Report, August 2009 (originally published October 2002 ...)

  1. A Methodology for Estimating Construction Unit Bid Prices 

    E-Print Network [OSTI]

    Erbatur, Osman 1978-

    2012-11-28

    all bids) for the past three years.
    · Develop three (sanitary sewer, water, pavement and storm drainage) construction item unit price databases using the data collected. The cost items for the three types of construction improvement projects... estimating provide more reasonable information, such as theoretical distribution functions. Methodology: The City of Fort Worth construction projects can be separated into three categories: paving and drainage, water, and sanitary sewer...

  2. Enzyme and methodology for the treatment of a biomass

    DOE Patents [OSTI]

    Thompson, Vicki S.; Thompson, David N.; Schaller, Kastli D.; Apel, William A.

    2010-06-01

    An enzyme isolated from an extremophilic microbe, and a method for utilizing same, are described, wherein the enzyme displays optimum enzymatic activity at a temperature of greater than about 80 °C and a pH of less than about 2, and further may be useful in methodology including pretreatment of a biomass so as to facilitate the production of an end product.

  3. SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors

    SciTech Connect (OSTI)

    Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I

    2014-06-01

    Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6MV using a Millenium120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per 56 image pairs. Also, 560 positive tests (with error) were performed, with randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) according to whether the percentage of pixels failing the gamma criterion exceeded a classification threshold; this was repeated for errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some combination of the gamma and classification thresholds. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though may be detected by visual inspection. This work was not funded by Varian Oncology Systems. Some authors have other work partly funded by Varian Oncology Systems.
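    The per-image detection test described above can be sketched in simplified form. The code below substitutes a dose-difference-only comparison for the full 3%/3mm gamma analysis (no distance-to-agreement search), and the image size, noise level, and thresholds are invented for illustration.

    ```python
    import random

    def classify(predicted, measured, tol=0.03, max_fail_frac=0.01):
        """Flag an image pair as errored if too many pixels disagree by more
        than tol (a dose-difference-only stand-in for a 3%/3mm gamma test)."""
        fails = sum(abs(m - p) > tol * p
                    for prow, mrow in zip(predicted, measured)
                    for p, m in zip(prow, mrow))
        n_pixels = len(predicted) * len(predicted[0])
        return fails / n_pixels > max_fail_frac

    def insert_error(image, x0, y0, size, scale):
        """Simulate a delivery error: scale intensity in a size x size patch."""
        out = [row[:] for row in image]
        for y in range(y0, y0 + size):
            for x in range(x0, x0 + size):
                out[y][x] *= scale
        return out

    rng = random.Random(1)
    predicted = [[1.0] * 64 for _ in range(64)]
    measured = [[1.0 + rng.gauss(0, 0.005) for _ in range(64)] for _ in range(64)]

    print(classify(predicted, measured))                                 # False
    print(classify(predicted, insert_error(measured, 10, 10, 8, 1.2)))   # True
    ```

    Repeating such tests over many image pairs and error locations, and sweeping the thresholds, is what generates the ROC-style detection statistics the abstract reports.
    
    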

  4. A margin based approach to determining sample sizes via tolerance bounds.

    SciTech Connect (OSTI)

    Newcomer, Justin T.; Freeland, Katherine Elizabeth

    2013-09-01

    This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
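    The sample-size logic described here can be sketched numerically. The code below uses a textbook closed-form approximation to the one-sided normal tolerance factor (not the report's actual method) and searches for the smallest n whose tolerance-bound uncertainty k·s fits inside an assumed margin estimate; all numbers are illustrative.

    ```python
    from math import sqrt
    from statistics import NormalDist

    def k_one_sided(n, coverage=0.95, confidence=0.95):
        """Approximate one-sided tolerance factor k, so that xbar + k*s bounds
        the `coverage` quantile with the stated confidence (a standard
        closed-form approximation to the exact noncentral-t value)."""
        zp = NormalDist().inv_cdf(coverage)
        zc = NormalDist().inv_cdf(confidence)
        a = 1 - zc ** 2 / (2 * (n - 1))
        b = zp ** 2 - zc ** 2 / n
        return (zp + sqrt(zp ** 2 - a * b)) / a

    def smallest_n(margin_estimate, s, n_max=200, **kw):
        """Smallest sample size whose tolerance-bound uncertainty k*s stays
        inside the estimated margin, so the tolerance ratio exceeds one."""
        for n in range(3, n_max + 1):
            if k_one_sided(n, **kw) * s < margin_estimate:
                return n
        return None  # no feasible n up to n_max

    print(round(k_one_sided(10), 2))   # ~2.87 (exact 95/95 table value: 2.911)
    print(smallest_n(margin_estimate=2.4, s=1.0))
    ```

    As the paper notes, k grows quickly as n shrinks, which is exactly why small-sample programs can have positive true margin yet fail to demonstrate it with high confidence.
    
    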

  5. Design methodology for rock excavations at the Yucca Mountain project

    SciTech Connect (OSTI)

    Alber, M.; Bieniawski, Z.T.

    1993-12-31

    The problems involved in the design of the proposed underground repository for high-level nuclear waste call for novel design approaches. Guidelines for the design are given by the Mission Plan Amendment in which licensing and regulatory aspects have to be satisfied. Moreover, systems engineering was proposed, advocating a top-down approach leading to the identification of discrete, implementable system elements. These objectives for the design process can be integrated in an engineering design methodology. While design methodologies for some engineering disciplines are available, they were of limited use for rock engineering because of the inherent uncertainties about the geologic media. Based on the axiomatic design approach of Suh, Bieniawski developed a methodology for design in rock. Design principles and design stages are clearly stated to assist in effective decision making. For overall performance goals, the domain of objectives is defined through functional requirements (FRs); design components (DCs), representing a design solution, satisfy the FRs, resulting in discrete, independent functional relations. Implementation is satisfied by evaluation and optimization of the design with respect to the constructibility of the design components.
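    Suh's independence axiom, on which the methodology cited here builds, can be made concrete with a small design-matrix check. The sketch below classifies an FR-to-DC dependency matrix as uncoupled, decoupled, or coupled; it checks only the given ordering of FRs and DCs, whereas a fuller check would also try row/column permutations.

    ```python
    def classify_design(matrix):
        """Suh's independence axiom: classify an FR-DC design matrix,
        where matrix[i][j] is True if FR_i depends on DC_j."""
        n = len(matrix)
        diagonal = all(matrix[i][j] == (i == j) for i in range(n) for j in range(n))
        if diagonal:
            return "uncoupled"      # each FR satisfied by exactly one DC
        upper_empty = all(not matrix[i][j] for i in range(n) for j in range(i + 1, n))
        # triangular: FRs can be satisfied if DCs are adjusted in order
        return "decoupled" if upper_empty else "coupled"

    print(classify_design([[True, False], [False, True]]))   # uncoupled
    print(classify_design([[True, False], [True, True]]))    # decoupled
    print(classify_design([[True, True], [True, True]]))     # coupled
    ```

    In the repository-design context, "discrete, independent functional relations" corresponds to driving this matrix toward the uncoupled or decoupled cases.
    
    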

  6. Flammability Assessment Methodology Program Phase I: Final Report

    SciTech Connect (OSTI)

    C. A. Loehr; S. M. Djordjevic; K. J. Liekhus; M. J. Connolly

    1997-09-01

    The Flammability Assessment Methodology Program (FAMP) was established to investigate the flammability of gas mixtures found in transuranic (TRU) waste containers. The FAMP results provide a basis for increasing the permissible concentrations of flammable volatile organic compounds (VOCs) in TRU waste containers. The FAMP results will be used to modify the ''Safety Analysis Report for the TRUPACT-II Shipping Package'' (TRUPACT-II SARP) upon acceptance of the methodology by the Nuclear Regulatory Commission. Implementation of the methodology would substantially increase the number of drums that can be shipped to the Waste Isolation Pilot Plant (WIPP) without repackaging or treatment. Central to the program was experimental testing and modeling to predict the gas mixture lower explosive limit (MLEL) of gases observed in TRU waste containers. The experimental data supported selection of an MLEL model that was used in constructing screening limits for flammable VOC and flammable gas concentrations. The MLEL values predicted by the model for individual drums will be utilized to assess flammability for drums that do not meet the screening criteria. Finally, the predicted MLEL values will be used to derive acceptable gas generation rates, decay heat limits, and aspiration time requirements for drums that do not pass the screening limits. The results of the program demonstrate that an increased number of waste containers can be shipped to WIPP within the flammability safety envelope established in the TRUPACT-II SARP.
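    A common closed-form model for a gas-mixture lower explosive limit, and a plausible illustration of the kind of MLEL calculation described here, is Le Chatelier's mixing rule; the model FAMP actually selected may differ. The gas composition below is hypothetical, while the LEL values for hydrogen and methane are standard handbook figures.

    ```python
    def le_chatelier_mlel(fractions_and_lels):
        """Mixture lower explosive limit (vol %) by Le Chatelier's rule:
        MLEL = 100 / sum(y_i / LEL_i), with y_i the flammable-gas fractions
        normalized to 100 % of the flammable portion of the mixture."""
        total = sum(y for y, _ in fractions_and_lels)
        return 100.0 / sum((100.0 * y / total) / lel for y, lel in fractions_and_lels)

    # Flammable portion: 60 % hydrogen (LEL 4.0 vol %), 40 % methane (LEL 5.0 vol %)
    mlel = le_chatelier_mlel([(60.0, 4.0), (40.0, 5.0)])
    print(round(mlel, 2))  # 4.35 vol %
    ```

    Screening a drum then reduces to comparing the measured flammable-gas concentration against this mixture limit with an appropriate safety margin.
    
    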

  7. Risk Assessment of Cascading Outages: Methodologies and Challenges

    SciTech Connect (OSTI)

    Vaiman, Marianna; Bell, Keith; Chen, Yousu; Chowdhury, Badrul; Dobson, Ian; Hines, Paul; Papic, Milorad; Miller, Stephen; Zhang, Pei

    2012-05-31

    Abstract- This paper is a result of ongoing activity carried out by Understanding, Prediction, Mitigation and Restoration of Cascading Failures Task Force under IEEE Computer Analytical Methods Subcommittee (CAMS). The task force's previous papers are focused on general aspects of cascading outages such as understanding, prediction, prevention and restoration from cascading failures. This is the first of two new papers, which extend this previous work to summarize the state of the art in cascading failure risk analysis methodologies and modeling tools. This paper is intended to be a reference document to summarize the state of the art in the methodologies for performing risk assessment of cascading outages caused by some initiating event(s). A risk assessment should cover the entire potential chain of cascades starting with the initiating event(s) and ending with some final condition(s). However, this is a difficult task and heuristic approaches and approximations have been suggested. This paper discusses different approaches to this and suggests directions for future development of methodologies. The second paper summarizes the state of the art in modeling tools for risk assessment of cascading outages.

  8. Risk Assessment of Cascading Outages: Part I - Overview of Methodologies

    SciTech Connect (OSTI)

    Vaiman, Marianna; Bell, Keith; Chen, Yousu; Chowdhury, Badrul; Dobson, Ian; Hines, Paul; Papic, Milorad; Miller, Stephen; Zhang, Pei

    2011-07-31

    This paper is a result of ongoing activity carried out by Understanding, Prediction, Mitigation and Restoration of Cascading Failures Task Force under IEEE Computer Analytical Methods Subcommittee (CAMS). The task force's previous papers are focused on general aspects of cascading outages such as understanding, prediction, prevention and restoration from cascading failures. This is the first of two new papers, which will extend this previous work to summarize the state of the art in cascading failure risk analysis methodologies and modeling tools. This paper is intended to be a reference document to summarize the state of the art in the methodologies for performing risk assessment of cascading outages caused by some initiating event(s). A risk assessment should cover the entire potential chain of cascades starting with the initiating event(s) and ending with some final condition(s). However, this is a difficult task and heuristic approaches and approximations have been suggested. This paper discusses different approaches to this and suggests directions for future development of methodologies.

  9. Probabilistic Based Design Methodology for Solid Oxide Fuel Cell Stacks

    SciTech Connect (OSTI)

    Sun, Xin; Tartakovsky, Alexandre M.; Khaleel, Mohammad A.

    2009-05-01

    A probabilistic-based component design methodology is developed for solid oxide fuel cell (SOFC) stack. This method takes into account the randomness in SOFC material properties as well as the stresses arising from different manufacturing and operating conditions. The purpose of this work is to provide the SOFC designers a design methodology such that desired level of component reliability can be achieved with deterministic design functions using an equivalent safety factor to account for the uncertainties in material properties and structural stresses. Multi-physics-based finite element analyses were used to predict the electrochemical and thermal mechanical responses of SOFC stacks with different geometric variations and under different operating conditions. Failures in the anode and the seal were used as design examples. The predicted maximum principal stresses in the anode and the seal were compared with the experimentally determined strength characteristics for the anode and the seal respectively. Component failure probabilities for the current design were then calculated under different operating conditions. It was found that anode failure probability is very low under all conditions examined. The seal failure probability is relatively high, particularly for high fuel utilization rate under low average cell temperature. Next, the procedures for calculating the equivalent safety factors for anode and seal were demonstrated such that uniform failure probability of the anode and seal can be achieved. Analysis procedures were also included for non-normal distributed random variables such that more realistic distributions of strength and stress can be analyzed using the proposed design methodology.
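    The normal-normal "stress-strength interference" calculation is the textbook core of this kind of probabilistic design, and it also shows how an equivalent safety factor can be backed out of a target failure probability. The distributions and target below are invented for illustration, not taken from the SOFC analyses.

    ```python
    from math import sqrt
    from statistics import NormalDist

    def failure_probability(mu_strength, sd_strength, mu_stress, sd_stress):
        """Normal-normal stress-strength interference: P(strength < stress)."""
        beta = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
        return NormalDist().cdf(-beta)      # beta is the reliability index

    def equivalent_safety_factor(mu_strength, sd_strength, sd_stress, target_pf):
        """Mean-strength / allowable-mean-stress ratio meeting the target
        failure probability (stress scatter held fixed; a simplification)."""
        z = NormalDist().inv_cdf(target_pf)  # negative for small targets
        allowable = mu_strength + z * sqrt(sd_strength**2 + sd_stress**2)
        return mu_strength / allowable

    pf = failure_probability(120.0, 12.0, 80.0, 9.0)   # MPa, illustrative
    sf = equivalent_safety_factor(120.0, 12.0, 9.0, target_pf=1e-4)
    print(f"{pf:.1e}", round(sf, 2))
    ```

    This is the sense in which a deterministic design function plus an equivalent safety factor can stand in for a full probabilistic analysis: the factor absorbs the uncertainty in both material strength and computed stress.
    
    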

  10. Simplified Plant Analysis Risk (SPAR) Human Reliability Analysis (HRA) Methodology: Comparisons with other HRA Methods

    SciTech Connect (OSTI)

    Byers, James Clifford; Gertman, David Ira; Hill, Susan Gardiner; Blackman, Harold Stabler; Gentillon, Cynthia Ann; Hallbert, Bruce Perry; Haney, Lon Nolan

    2000-08-01

    The 1994 Accident Sequence Precursor (ASP) human reliability analysis (HRA) methodology was developed for the U.S. Nuclear Regulatory Commission (USNRC) in 1994 by the Idaho National Engineering and Environmental Laboratory (INEEL). It was decided to revise that methodology for use by the Simplified Plant Analysis Risk (SPAR) program. The 1994 ASP HRA methodology was compared, by a team of analysts, on a point-by-point basis to a variety of other HRA methods and sources. This paper briefly discusses how the comparisons were made and how the 1994 ASP HRA methodology was revised to incorporate desirable aspects of other methods. The revised methodology was renamed the SPAR HRA methodology.

  11. Simplified plant analysis risk (SPAR) human reliability analysis (HRA) methodology: Comparisons with other HRA methods

    SciTech Connect (OSTI)

    J. C. Byers; D. I. Gertman; S. G. Hill; H. S. Blackman; C. D. Gentillon; B. P. Hallbert; L. N. Haney

    2000-07-31

    The 1994 Accident Sequence Precursor (ASP) human reliability analysis (HRA) methodology was developed for the U.S. Nuclear Regulatory Commission (USNRC) in 1994 by the Idaho National Engineering and Environmental Laboratory (INEEL). It was decided to revise that methodology for use by the Simplified Plant Analysis Risk (SPAR) program. The 1994 ASP HRA methodology was compared, by a team of analysts, on a point-by-point basis to a variety of other HRA methods and sources. This paper briefly discusses how the comparisons were made and how the 1994 ASP HRA methodology was revised to incorporate desirable aspects of other methods. The revised methodology was renamed the SPAR HRA methodology.

  12. Contagious error sources would need time travel to prevent quantum computation

    E-Print Network [OSTI]

    Gil Kalai; Greg Kuperberg

    2015-05-07

    We consider an error model for quantum computing that consists of "contagious quantum germs" that can infect every output qubit when at least one input qubit is infected. Once a germ actively causes error, it continues to cause error indefinitely for every qubit it infects, with arbitrary quantum entanglement and correlation. Although this error model looks much worse than quasi-independent error, we show that it reduces to quasi-independent error with the technique of quantum teleportation. The construction, which was previously described by Knill, is that every quantum circuit can be converted to a mixed circuit with bounded quantum depth. We also consider the restriction of bounded quantum depth from the point of view of quantum complexity classes.

  13. Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra

    E-Print Network [OSTI]

    J. B. Whitmore; M. T. Murphy

    2014-11-18

    We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high-resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium--argon calibration can be tracked with $\sim$10 m s$^{-1}$ precision over the entire optical wavelength range, on the scales of both echelle orders ($\sim$50--100 \AA) and entire spectrograph arms ($\sim$1000--3000 \AA). Using archival spectra from the past 20 years, we have probed the supercalibration history of the VLT--UVES and Keck--HIRES spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions typically varying by $\pm$200 m s$^{-1}$ per 1000 \AA. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, $\alpha$. The spurious deviations in $\alpha$ produced by the model closely match important aspects of the VLT--UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in $\alpha$ from quasar absorption lines.
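    At its simplest, a "long-range distortion" of the kind quoted above is a slowly varying velocity offset across wavelength. A minimal linear sketch consistent with the quoted magnitude of roughly $\pm$200 m s$^{-1}$ per 1000 \AA; the function name, centre wavelength and slope are illustrative assumptions, not the paper's actual distortion fit:

```python
def distortion_velocity(wavelength_A, centre_A=5500.0, slope_ms_per_A=0.2):
    """Toy long-range wavelength-scale distortion: a linear velocity
    offset across a spectrograph arm. A slope of 0.2 m/s per Angstrom
    gives a 200 m/s swing over 1000 Angstroms about the centre."""
    return slope_ms_per_A * (wavelength_A - centre_A)

# +/-100 m/s at +/-500 Angstroms from the centre wavelength.
shift_blue = distortion_velocity(5000.0)
shift_red = distortion_velocity(6000.0)
```

    In the paper's analysis, a model of this general shape is applied to simulated quasar spectra to see how much of the apparent $\alpha$ variation it can reproduce.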

  14. Method and apparatus for detecting timing errors in a system oscillator

    DOE Patents [OSTI]

    Gliebe, Ronald J. (Library, PA); Kramer, William R. (Bethel Park, PA)

    1993-01-01

    A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
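    The comparison step of the patented method (match the oscillator signal against a copy delayed by one period and flag any mismatch) can be mimicked in software. A hedged sketch: the sampled-signal representation and function name are illustrative; the patent describes a hardware circuit:

```python
def detect_timing_errors(samples, period):
    """Software analogue of comparing an oscillator signal with a
    delayed copy of itself: each sample is checked against the sample
    one full period earlier; any mismatch flags a timing error."""
    errors = []
    for i in range(period, len(samples)):
        if samples[i] != samples[i - period]:
            errors.append(i)  # in hardware, this would light the LED
    return errors

# A clean square wave of period 4 produces no mismatches...
clean = [0, 0, 1, 1] * 4
# ...while a single glitch is flagged at the glitch and one period later.
glitched = list(clean)
glitched[10] ^= 1
```

    Note that a single corrupted sample produces two mismatches (at the glitch and one period downstream), which is why hardware implementations typically latch the first error.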

  15. Rock Sampling | Open Energy Information

    Open Energy Info (EERE)


  16. Aperiodic dynamical decoupling sequences in presence of pulse errors

    E-Print Network [OSTI]

    Wang, Zhi-Hui

    2011-01-01

    Dynamical decoupling (DD) is a promising tool for preserving the quantum states of qubits. However, small imperfections in the control pulses can seriously affect the fidelity of decoupling and qualitatively change the evolution of the controlled system at long times. Using both analytical and numerical tools, we theoretically investigate the effect of pulse-error accumulation for two aperiodic DD sequences: the Uhrig DD (UDD) protocol [G. S. Uhrig, Phys. Rev. Lett. 98, 100504 (2007)] and the Quadratic DD (QDD) protocol [J. R. West, B. H. Fong and D. A. Lidar, Phys. Rev. Lett. 104, 130501 (2010)]. We consider the implementation of these sequences using the electron spins of phosphorus donors in silicon, where DD sequences are applied to suppress dephasing of the donor spins. The dependence of the decoupling fidelity on different initial states of the spins is the focus of our study. We investigate in detail the initial drop in the DD fidelity and its long-term saturation. We also demonstrate...
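    For context, the ideal UDD sequence studied in this record places its $n$ $\pi$-pulses at the well-known instants $t_j = T\sin^2\!\big(\pi j/(2n+2)\big)$; the record's subject is what happens when those pulses carry errors. A minimal sketch of the ideal pulse timings (the function name is an assumption):

```python
import math

def udd_pulse_times(n, total_time=1.0):
    """Pulse instants of the ideal Uhrig DD (UDD) sequence with n pi-pulses
    over [0, total_time]: t_j = T * sin^2(pi * j / (2n + 2))."""
    return [total_time * math.sin(math.pi * j / (2 * n + 2)) ** 2
            for j in range(1, n + 1)]

# Three pulses: times cluster toward the ends and are symmetric about T/2.
times = udd_pulse_times(3)
```

    The non-uniform spacing is what distinguishes UDD from periodic (CPMG-style) decoupling, and it is exactly this structure whose robustness to pulse errors the record examines.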

  17. The Importance of Run-time Error Detection Glenn R. Luecke 1

    E-Print Network [OSTI]

    Luecke, Glenn R.

    Iowa State University's High Performance Computing Group, Ames, Iowa 50011, USA: evaluating run-time error detection capabilities.

  18. A Key Recovery Attack on an Error-Correcting-Code-Based Lightweight Security Protocol

    E-Print Network [OSTI]

    International Association for Cryptologic Research (IACR)

    RFID technology has become prevalent in various fields; manufacturing, supply chain management and inventory control are some examples. Index terms: authentication, error correcting coding, lightweight, privacy, RFID, security.

  19. Eccentricity Error Correction for Automated Estimation of Polyethylene Wear after Total Hip Arthroplasty

    E-Print Network [OSTI]

    Ulidowski, Irek

    Wire markers are typically attached to the polyethylene acetabular component of the prosthesis so...

  20. Choose and choose again: appearance-reality errors, pragmatics and logical ability

    E-Print Network [OSTI]

    Deák, Gedeon O; Enright, Brian

    2006-01-01

    Development, 62, 753–766. Speer, J.R. (1984). Older children still make errors (e.g. Speer, 1984); some preschool...