Sample records for methodology sampling error

  1. APPROPRIATE ZOOPLANKTON SAMPLING METHODOLOGY FOR OTEC SITES

    E-Print Network [OSTI]

    Commins, M.L.

    2014-01-01T23:59:59.000Z

    Zooplankton Sampling for OTEC Sites: Methodology. Edited proceedings of a workshop on appropriate zooplankton sampling at Ocean Thermal Energy Conversion (OTEC) sites, opening with an overview of the OTEC program and its policies ...

  2. Methodology to Analyze the Sensitivity of Building Energy Consumption to HVAC System Sensor Error 

    E-Print Network [OSTI]

    Ma, Liang

    2012-02-14T23:59:59.000Z

    This thesis proposes a methodology for determining the sensitivity of building energy consumption to HVAC system sensor error. It is based on a series of simulations of a generic building, the model for which is based on several typical input...

  3. Design Methodology to trade off Power, Output Quality and Error Resiliency: Application to Color Interpolation Filtering

    E-Print Network [OSTI]

    Kambhampati, Subbarao

    Power dissipation and tolerance to process variations pose conflicting design requirements: sizing for process tolerance can be detrimental for power dissipation. However, for certain signal processing systems ...

  4. Sample covariance based estimation of Capon algorithm error probabilities

    E-Print Network [OSTI]

    Richmond, Christ D.

    The method of interval estimation (MIE) provides a strategy for mean squared error (MSE) prediction of algorithm performance at low signal-to-noise ratios (SNR) below estimation threshold where asymptotic predictions fail. ...

  5. Methodology to quantify leaks in aerosol sampling system components

    E-Print Network [OSTI]

    Vijayaraghavan, Vishnu Karthik

    2004-11-15T23:59:59.000Z

    and that approach was used to measure the sealing integrity of a CAM and two kinds of filter holders. The methodology involves use of sulfur hexafluoride as a tracer gas with the device being tested operated under dynamic flow conditions. The leak rates...

  6. Methodology to quantify leaks in aerosol sampling system components 

    E-Print Network [OSTI]

    Vijayaraghavan, Vishnu Karthik

    2004-11-15T23:59:59.000Z

    Filter holders and continuous air monitors (CAMs) are used extensively in the nuclear industry. It is important to minimize leakage in these devices and in recognition of this consideration, a limit on leakage for sampling systems is specified...

  7. Quantifying Errors Associated with Satellite Sampling of Offshore Wind Speeds

    E-Print Network [OSTI]

    S.C. Pryor (Indiana University, Bloomington, IN 47405, USA) and co-authors. Satellites are an attractive proposition for measuring wind speeds over the oceans because in principle they also offer ...

  8. DIESEL AEROSOL SAMPLING METHODOLOGY - CRC E-43 TECHNICAL SUMMARY AND CONCLUSIONS

    E-Print Network [OSTI]

    Minnesota, University of

    DIESEL AEROSOL SAMPLING METHODOLOGY - CRC E-43 TECHNICAL SUMMARY AND CONCLUSIONS. University of Minnesota ... National Institute for Occupational Safety and Health (NIOSH), 8/19/2002. Introduction: Diesel engines are used extensively in transportation. In the U.S., diesel engines are most commonly used in the over ...

  9. The U-tube sampling methodology and real-time analysis of geofluids

    SciTech Connect (OSTI)

    Freifeld, Barry; Perkins, Ernie; Underschultz, James; Boreham, Chris

    2009-03-01T23:59:59.000Z

    The U-tube geochemical sampling methodology, an extension of the porous cup technique proposed by Wood [1973], provides minimally contaminated aliquots of multiphase fluids from deep reservoirs and allows for accurate determination of dissolved gas composition. The initial deployment of the U-tube during the Frio Brine Pilot CO2 storage experiment, Liberty County, Texas, obtained representative samples of brine and supercritical CO2 from a depth of 1.5 km. A quadrupole mass spectrometer provided real-time analysis of dissolved gas composition. Since the initial demonstration, the U-tube has been deployed for (1) sampling of fluids down gradient of the proposed Yucca Mountain High-Level Waste Repository, Amargosa Valley, Nevada, (2) acquiring fluid samples beneath permafrost in Nunavut Territory, Canada, and (3) at a CO2 storage demonstration project within a depleted gas reservoir, Otway Basin, Victoria, Australia. The addition of in-line high-pressure pH and EC sensors allows for continuous monitoring of fluid during sample collection. Difficulties have arisen during U-tube sampling, such as blockage of sample lines from naturally occurring waxes or from freezing conditions; however, workarounds such as solvent flushing or heating have been used to address these problems. The U-tube methodology has proven to be robust and, with careful consideration of the constraints and limitations, can provide high quality geochemical samples.

  10. Development of methodology to correct sampling error associated with FRM PM10 samplers

    E-Print Network [OSTI]

    Chen, Jing

    2009-05-15T23:59:59.000Z

    The health effect of the dust was related to the PM's size and properties. In the past, agricultural dust was believed to be coarse particles and nontoxic to humans, except...

  11. DEVELOPMENT OF METHODOLOGY AND FIELD DEPLOYABLE SAMPLING TOOLS FOR SPENT NUCLEAR FUEL INTERROGATION IN LIQUID STORAGE

    SciTech Connect (OSTI)

    Berry, T.; Milliken, C.; Martinez-Rodriguez, M.; Hathcock, D.; Heitkamp, M.

    2012-06-04T23:59:59.000Z

    This project developed methodology and field deployable tools (test kits) to analyze the chemical and microbiological condition of the fuel storage medium and determine the oxide thickness on the spent fuel basin materials. The overall objective of this project was to determine the amount of time fuel has spent in a storage basin to determine if the operation of the reactor and storage basin is consistent with safeguard declarations or expectations. This project developed and validated forensic tools that can be used to predict the age and condition of spent nuclear fuels stored in liquid basins based on key physical, chemical and microbiological basin characteristics. Key parameters were identified based on a literature review, the parameters were used to design test cells for corrosion analyses, tools were purchased to analyze the key parameters, and these were used to characterize an active spent fuel basin, the Savannah River Site (SRS) L-Area basin. The key parameters identified in the literature review included chloride concentration, conductivity, and total organic carbon level. Focus was also placed on aluminum-based cladding because of its application to weapons production. The literature review was helpful in identifying important parameters, but relationships between these parameters and corrosion rates were not available. Bench scale test systems were designed, operated, harvested, and analyzed to determine corrosion relationships between water parameters and water conditions, chemistry and microbiological conditions. The data from the bench scale system indicated that corrosion rates were dependent on total organic carbon levels and chloride concentrations. The highest corrosion rates were observed in test cells amended with sediment, a large microbial inoculum and an organic carbon source. A complete characterization test kit was field tested to characterize the SRS L-Area spent fuel basin. The sampling kit consisted of a TOC analyzer, a YSI multiprobe, and a thickness probe. The tools were field tested to determine their ease of use and reliability, and to determine the quality of data that each tool could provide. Characterization was done over a two day period in June 2011, and confirmed that the L-Area basin is a well-operated facility with low corrosion potential.

  12. Analysis of Statistical Sampling in Microarchitecture Simulation: Metric, Methodology and Program Characterization

    E-Print Network [OSTI]

    Minnesota, University of

    Statistical sampling is a promising technique for estimating the performance of a benchmark program without executing the complete benchmark. We analyze the effects of three sampling parameters and their interactions on the accuracy of the performance estimate and the simulation cost, with the number of samples measured as a cost parameter. Finally, we characterize 21 SPEC CPU2000 benchmarks based on our ...

  13. A comparison of sample preparation methodology in the evaluation of geosynthetic clay liner (GCL) hydraulic conductivity

    SciTech Connect (OSTI)

    Siebken, J.R. [National Seal Co., Galesburg, IL (United States); Lucas, S. [Albarrie Naue Ltd., Barrie, Ontario (Canada)

    1997-11-01T23:59:59.000Z

    The method of preparing a single needle-punched GCL product for evaluation of hydraulic conductivity in a flexible wall permeameter was examined. The test protocol utilized for this evaluation was GRI Test Method GCL-2, Permeability of GCLs. The GCL product consisted of bentonite clay material supported by a woven and a non-woven geotextile on either side. The specimen preparation focused on the procedure for separating the test specimen from the larger sample and whether these methods produced difficulty in generating reliable test data. The methods examined included cutting with a razor knife, scissors, and a circular die around the perimeter of the test area, under wet and dry conditions. In order to generate as much data as possible, tests were kept brief. Flow was monitored only long enough to determine whether or not preferential flow paths appeared to be present. The results appear to indicate that any of the methods involved will work. Difficulties arose not from the development of preferential flow paths around the edges of the specimens, but from the loss of bentonite from the edges during handling.

  14. Dynamic Planning and Control Methodology: understanding and managing iterative error and change cycles in large-scale concurrent design and construction projects

    E-Print Network [OSTI]

    Lee, Sang Hyun, 1973-

    2006-01-01T23:59:59.000Z

    Construction projects are uncertain and complex in nature. One of the major driving forces that may account for these characteristics is the iterative cycle caused by errors and changes. Errors and changes worsen project ...

  15. Mapping Transmission Risk of Lassa Fever in West Africa: The Importance of Quality Control, Sampling Bias, and Error Weighting

    E-Print Network [OSTI]

    Peterson, A. Townsend; Moses, Lina M.; Bausch, Daniel G.

    2014-08-08T23:59:59.000Z

  16. Error detection method

    SciTech Connect (OSTI)

    Olson, Eric J.

    2013-06-11T23:59:59.000Z

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
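
    A minimal Python sketch of the run-and-compare idea described above; the workload, iteration count, and use of SHA-256 are illustrative assumptions, not the patented method. Any bit flip during the run propagates into the final digest, so comparing against a known-good reference exposes it.

    ```python
    import hashlib

    def stress_workload(seed: int, iterations: int = 1_000_000) -> str:
        """Deterministic, compute-heavy workload meant to keep the processor
        busy (and hot); a single bit flip anywhere changes the final digest."""
        h = hashlib.sha256(str(seed).encode())
        for _ in range(iterations):
            h = hashlib.sha256(h.digest())
        return h.hexdigest()

    # Compare against a reference digest computed on known-good hardware.
    reference = stress_workload(42)
    candidate = stress_workload(42)
    print("hardware error detected" if candidate != reference else "outputs match")
    ```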

  17. Determining the Uncertainty Associated with Retrospective Air Sampling for Optimization Purposes

    SciTech Connect (OSTI)

    Hadlock, D.J.

    2003-10-03T23:59:59.000Z

    NUREG 1400 contains an acceptable methodology for determining the uncertainty associated with retrospective air sampling. The method is a fairly simple one in which both the systemic and random uncertainties, usually expressed as a percent error, are propagated using the square root of the sum of the squares. Historically, many people involved in air sampling have focused on the statistical counting error as the deciding factor of overall uncertainty in retrospective air sampling. This paper looks at not only the counting error but also other errors associated with the performance of retrospective air sampling.
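
    The square-root-of-the-sum-of-the-squares propagation mentioned in the abstract is easy to state concretely. A minimal Python sketch follows; the component error values are hypothetical, chosen only to show the arithmetic.

    ```python
    import math

    def combined_uncertainty(percent_errors):
        """Propagate independent percent errors by the square root of the
        sum of the squares, the combination style described in NUREG 1400."""
        return math.sqrt(sum(e ** 2 for e in percent_errors))

    # Hypothetical component errors (percent) for a retrospective air sample:
    # counting statistics, flow-rate calibration, collection efficiency, timing.
    errors = [10.0, 5.0, 7.0, 2.0]
    print(f"combined uncertainty: {combined_uncertainty(errors):.1f}%")  # ~13.3%
    ```

    Note how the 10% counting error dominates but does not fully determine the total, which is the abstract's point about looking beyond counting statistics alone.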

  18. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    SciTech Connect (OSTI)

    Shine, E. P.; Poirier, M. R.

    2013-10-29T23:59:59.000Z

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and statisticians used carefully thought out designs that systematically and economically provided plans for data collection from the DWPF process. Key shared features of the sampling designs used at DWPF and the Gy sampling methodology were the specification of a standard for sample representativeness, an investigation that produced data from the process to study the sampling function, and a decision framework used to assess whether the specification was met based on the data. Without going into detail with regard to the seven errors identified by Pierre Gy, as excellent summaries are readily available such as Pitard [1989] and Smith [2001], SRS engineers understood, for example, that samplers can be biased (Gy's extraction error), and developed plans to mitigate those biases. Experiments that compared installed samplers with more representative samples obtained directly from the tank may not have resulted in systematically partitioning sampling errors into the now well-known error categories of Gy, but did provide overall information on the suitability of sampling systems. Most of the designs in this report are related to the DWPF vessels, not the large SRS Tank Farm tanks. Samples from the DWPF Slurry Mix Evaporator (SME), which contains the feed to the DWPF melter, are characterized using standardized analytical methods with known uncertainty. The analytical error is combined with the established error from sampling and processing in DWPF to determine the melter feed composition.
    This composition is used with the known uncertainty of the models in the Product Composition Control System (PCCS) to ensure that the waste form that is produced is comfortably within the acceptable processing and product performance region. Having the advantage of many years of processing that meets the waste glass product acceptance criteria, the DWPF process has provided a considerable amount of data about itself in addition to the data from many special studies. Demonstrating representative sampling directly from the large Tank Farm tanks is a difficult, if not unsolvable, enterprise due to ...

  19. Primordial 4He abundance: a determination based on the largest sample of HII regions with a methodology tested on model HII regions

    E-Print Network [OSTI]

    Izotov, Y I; Guseva, N G

    2013-01-01T23:59:59.000Z

    We verified the validity of the empirical method to derive the 4He abundance used in our previous papers by applying it to CLOUDY (v13.01) models. Using newly published HeI emissivities, for which we present convenient fits as well as the output CLOUDY case B hydrogen and HeI line intensities, we found that the empirical method is able to reproduce the input CLOUDY 4He abundance with an accuracy of better than 1%. The CLOUDY output data also allowed us to derive the non-recombination contribution to the intensities of the strongest Balmer hydrogen Halpha, Hbeta, Hgamma, and Hdelta emission lines and the ionisation correction factors for He. With these improvements we used our updated empirical method to derive the 4He abundances and to test corrections for several systematic effects in a sample of 1610 spectra of low-metallicity extragalactic HII regions, the largest sample used so far. From this sample we extracted a subsample of 111 HII regions with Hbeta equivalent width EW(Hbeta) > 150A, with excitation p...

  20. Integrated fiducial sample mount and software for correlated microscopy

    SciTech Connect (OSTI)

    Timothy R McJunkin; Jill R. Scott; Tammy L. Trowbridge; Karen E. Wright

    2014-02-01T23:59:59.000Z

    A novel sample mount design with integrated fiducials, and software for assisting operators in easily and efficiently locating points of interest established in previous analytical sessions, is described. The sample holder and software were evaluated with experiments to demonstrate the utility and ease of finding the same points of interest in two different microscopy instruments. Also, numerical analysis of expected errors in determining the same position was performed, with errors unbiased by a human operator. Based on the results, issues related to achieving reproducibility and best practices for using the sample mount and software were identified. Overall, the sample mount methodology allows data to be efficiently and easily collected on different instruments for the same sample location.

  1. An Improved Technique for Reducing False Alarms Due to Soft Errors

    E-Print Network [OSTI]

    Polian, Ilia

    A significant fraction of soft errors in modern microprocessors has been reported to never lead to a system failure ... many state-of-the-art systems provide soft error detection and correction capabilities [Hass 89] ... The techniques are enhanced by a methodology to handle soft errors on address bits. Furthermore, we demonstrate ...

  2. Analyzing sampling methodologies in semiconductor manufacturing

    E-Print Network [OSTI]

    Anthony, Richard M. (Richard Morgan), 1971-

    2004-01-01T23:59:59.000Z

    This thesis describes work completed during an internship assignment at Intel Corporation's process development and wafer fabrication manufacturing facility in Santa Clara, California. At the highest level, this work relates ...

  3. Low-Cost Hardening of Image Processing Applications Against Soft Errors Ilia Polian1,2

    E-Print Network [OSTI]

    Polian, Ilia

    ... and their hardening against soft errors becomes an issue. We propose a methodology to identify soft errors as uncritical based on their impact on the system's functionality. We call a soft error uncritical if its impact is imperceptible to the human user of the system. We focus on soft errors in the motion estimation subsystem ...

  4. Human error contribution to nuclear materials-handling events

    E-Print Network [OSTI]

    Sutton, Bradley (Bradley Jordan)

    2007-01-01T23:59:59.000Z

    This thesis analyzes a sample of 15 fuel-handling events from the past ten years at commercial nuclear reactors with significant human error contributions in order to detail the contribution of human error to fuel-handling ...

  5. Field error lottery

    SciTech Connect (OSTI)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01T23:59:59.000Z

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
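
    The seed-to-seed scatter the abstract describes can be illustrated with a toy Monte Carlo; this stand-in "gain" model is invented for the sketch and has nothing to do with the FELEX physics.

    ```python
    import random
    import statistics

    def relative_gain(error_level: float, seed: int) -> float:
        """Toy stand-in for a performance figure that degrades with the RMS
        field error, with seed-dependent scatter from the random draw."""
        rng = random.Random(seed)
        kicks = [rng.gauss(0.0, error_level) for _ in range(100)]
        return max(0.0, 1.0 - sum(k * k for k in kicks))

    # Displaying performance versus error level for many seeds shows the
    # spread attributable to stochasticity, as in the abstract.
    for error_level in (0.01, 0.03, 0.05):
        gains = [relative_gain(error_level, s) for s in range(20)]
        print(f"error {error_level:.2f}: gain {statistics.mean(gains):.3f}"
              f" +/- {statistics.stdev(gains):.3f}")
    ```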

  6. Observability-aware Directed Test Generation for Soft Errors and Crosstalk Faults

    E-Print Network [OSTI]

    Mishra, Prabhat

    In modern System-on-Chip (SoC) design methodology, directed test generation has emerged as an important component of any chip design flow to detect both functional and electrical faults. It is found that regions where errors are detected ...

  7. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect (OSTI)

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01T23:59:59.000Z

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  8. Thermodynamics of error correction

    E-Print Network [OSTI]

    Sartori, Pablo

    2015-01-01T23:59:59.000Z

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and dissipated work of the process. Its derivation is based on the second law of thermodynamics, hence its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Max...

  9. Quantum error control codes

    E-Print Network [OSTI]

    Abdelhamid Awad Aly Ahmed, Sala

    2008-10-10T23:59:59.000Z

    Quantum Error Control Codes. A dissertation by Salah Abdelhamid Awad Aly Ahmed, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Doctor of Philosophy, May 2008. Major Subject: Computer Science.

  10. Thermodynamics of error correction

    E-Print Network [OSTI]

    Pablo Sartori; Simone Pigolotti

    2015-04-24T23:59:59.000Z

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and dissipated work of the process. Its derivation is based on the second law of thermodynamics, hence its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  11. Fault Detection Methodology for Caches in Reliable Modern VLSI Microprocessors based on Instruction Set

    E-Print Network [OSTI]

    Kouroupetroglou, Georgios

    ... hardware defects that may occur during system operation ... the increased soft error rate ... an Instruction-Based Self-Test (SBST) fault detection methodology for small embedded cache memories. The methodology ...

  12. Quantum Error Correction Workshop on

    E-Print Network [OSTI]

    Grassl, Markus

    Avoiding errors (mathematical model): decomposition of the interaction algebra. Designed Hamiltonians, main idea: "perturb the system to make it more stable"; fast (local) control operations yield an average Hamiltonian with more symmetry (cf. techniques from NMR) ...

  13. Dynamic Prediction of Concurrency Errors

    E-Print Network [OSTI]

    Sadowski, Caitlin

    2012-01-01T23:59:59.000Z

    A University of California, Santa Cruz dissertation; contents include the must-before relation, race prediction, and implementation.

  14. Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates

    SciTech Connect (OSTI)

    Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON; Vassilevski, Yuri [Los Alamos National Laboratory

    2009-01-01T23:59:59.000Z

    We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^(-1) and the gradient of error is proportional to N_h^(-1/2), which are the optimal asymptotics. The methodology is verified with numerical experiments.

  15. An Efficient Approach towards Mitigating Soft Errors Risks

    E-Print Network [OSTI]

    Sadi, Muhammad Sheikh; Uddin, Md Nazim; Jürjens, Jan

    2011-01-01T23:59:59.000Z

    Smaller feature size, higher clock frequency and lower power consumption are core concerns of today's nano-technology, brought about by continuous downscaling of CMOS technologies. The resultant 'device shrinking' reduces the soft error tolerance of VLSI circuits, as very little energy is needed to change their states. Safety critical systems are very sensitive to soft errors. A bit flip due to a soft error can change the value of a critical variable, and consequently the system control flow can completely change, leading to system failure. To minimize soft error risks, a novel methodology is proposed to detect and recover from soft errors considering only 'critical code blocks' and 'critical variables' rather than considering all variables and/or blocks in the whole program. The proposed method reduces space and time overhead in comparison to existing dominant approaches.

  16. Uncertainty and error in computational simulations

    SciTech Connect (OSTI)

    Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.

    1997-10-01T23:59:59.000Z

    The present paper addresses the question: "What are the general classes of uncertainty and error sources in complex, computational simulations?" This is the first step of a two-step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built upon combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general, but examples are given for the fields of computational fluid dynamics and heat transfer. They argue that a clear distinction should be made between uncertainty and error that can arise in each of these phases. The present definitions for uncertainty and error are inadequate and, therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, they discuss a coupled-physics example simulation.

  17. Fast Error Estimates For Indirect Measurements: Applications To Pavement Engineering

    E-Print Network [OSTI]

    Kreinovich, Vladik

    ... a quantity y that is difficult to measure directly (e.g., lifetime of a pavement, efficiency of an engine, etc.). To estimate y ... computation time. As an example of this methodology, we give pavement lifetime estimates ...

  18. Quasi-sparse eigenvector diagonalization and stochastic error correction

    E-Print Network [OSTI]

    Dean Lee

    2000-08-30T23:59:59.000Z

    We briefly review the diagonalization of quantum Hamiltonians using the quasi-sparse eigenvector (QSE) method. We also introduce the technique of stochastic error correction, which systematically removes the truncation error of the QSE result by stochastically sampling the contribution of the remaining basis states.

  19. Modular error embedding

    DOE Patents [OSTI]

    Sandford, II, Maxwell T. (Los Alamos, NM); Handel, Theodore G. (Los Alamos, NM); Ettinger, J. Mark (Los Alamos, NM)

    1999-01-01T23:59:59.000Z

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
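
    The sketch below shows plain low-order-bit embedding with a key-driven processing order, the easiest piece of the patent to demonstrate; the modular-arithmetic coding that reduces embedding error and doubles capacity is not reproduced here, and all names are hypothetical.

    ```python
    import random

    def embed(host, bits, key):
        """Hide one auxiliary bit in the low-order bit of each selected host
        value; a key-driven permutation sets the processing order."""
        order = list(range(len(host)))
        random.Random(key).shuffle(order)
        out = list(host)
        for pos, bit in zip(order, bits):
            out[pos] = (out[pos] & ~1) | bit
        return out

    def extract(stego, nbits, key):
        """Recompute the same permutation from the key and read the bits back."""
        order = list(range(len(stego)))
        random.Random(key).shuffle(order)
        return [stego[pos] & 1 for pos in order[:nbits]]

    host = [118, 42, 200, 37, 91, 164, 73, 255]   # e.g., 8-bit pixel values
    bits = [1, 0, 1, 1]
    assert extract(embed(host, bits, key=7), 4, key=7) == bits
    ```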

  20. Approaches to Quantum Error Correction

    E-Print Network [OSTI]

    Julia Kempe

    2006-12-21T23:59:59.000Z

    The purpose of this little survey is to give a simple description of the main approaches to quantum error correction and quantum fault-tolerance. Our goal is to convey the necessary intuitions both for the problems and their solutions in this area. After characterising quantum errors we present several error-correction schemes and outline the elements of a full fledged fault-tolerant computation, which works error-free even though all of its components can be faulty. We also mention alternative approaches to error-correction, so called error-avoiding or decoherence-free schemes. Technical details and generalisations are kept to a minimum.

  1. STATISTICAL MODEL OF SYSTEMATIC ERRORS: LINEAR ERROR MODEL

    E-Print Network [OSTI]

    Rudnyi, Evgenii B.

    E.B. Rudnyi, Department of Chemistry. Contents include: the algorithm to maximize a likelihood function in the case of a non-linear physico-chemical model; the case of equal error variances (one-way classification, linear regression); and a real case (vaporization ...).

  2. Unequal Error Protection Turbo Codes

    E-Print Network [OSTI]

    Henkel, Werner

    Unequal Error Protection Turbo Codes. Diploma thesis by Neele von Deetzen, Communications Engineering Group (Arbeitsbereich Nachrichtentechnik), School of Engineering and Science, Bremen, February 28th, 2005. Contents include the structure of convolutional codes and turbo codes.

  3. Software Function Allocation Methodology

    E-Print Network [OSTI]

    O'Neal, Michael Ralph

    1988-01-01T23:59:59.000Z

    SFAM Step 1, Preparation: If the Computer System Map (CSM) has not been created for the current overall hardware system, it is completed before the software function allocation process begins. The methodology begins when the preconditions above have been met and the CSM is completed. The first step is to consult the list of software function allocation parameters provided as part of the methodology. The list of parameters will at first be sufficient ...

  4. EIA - Sorry! Unexpected Error

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  5. Uncertainty quantification and error analysis

    SciTech Connect (OSTI)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01T23:59:59.000Z

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  6. Methodologies for Continuous Cellular Tower Data Analysis

    E-Print Network [OSTI]

    Clauset, Aaron

    Nathan Eagle, John A. Quinn. We analyze continuous cellular tower data from 215 randomly sampled subjects in a major urban city. We demonstrate the potential ... by tower transitions. The tower groupings from these unsupervised clustering techniques are subsequently ...

  7. Register file soft error recovery

    DOE Patents [OSTI]

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15T23:59:59.000Z

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  8. Errors Associated with Sampling and Measurement of Solids

    E-Print Network [OSTI]

    Clark, Shirley E.

    Penn State Harrisburg, Middletown, PA, USA; University of Alabama, Tuscaloosa, AL, USA. With assistance from many past ...

  9. Franklin Trouble Shooting and Error Messages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Error message table (Message or Symptom / Fault / Recommendation), e.g.: job hit wallclock time limit / user or system / Submit ...

  10. Towards a Framework and a Design Methodology for Autonomic SoC

    E-Print Network [OSTI]

    Ould Ahmedou, Mohameden

    ... systems in spite of unsafe and faulty functions due to fabrication faults, soft errors and design errors. Gabriel Lipsa, FZI Microelectronic System Design, Karlsruhe, Germany; Andreas Herkersdorf, Technical University of Munich, Institute ...

  11. A Scalable Soft Spot Analysis Methodology for Compound Noise Effects in Nano-meter Circuits

    E-Print Network [OSTI]

    California at San Diego, University of

    We propose a scalable methodology to study the vulnerability of digital ICs exposed to nano-meter noise and transient soft errors. First, we define "softness" as an important characteristic to gauge system vulnerability. Then several ...

  12. Use of Forward Sensitivity Analysis Method to Improve Code Scaling, Applicability, and Uncertainty (CSAU) Methodology

    SciTech Connect (OSTI)

    Haihua Zhao; Vincent A. Mousseau; Nam T. Dinh

    2010-10-01T23:59:59.000Z

    Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Based on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and existing LWR power uprates. In spite of these successes, several aspects of CSAU have been criticized: (1) subjective judgment in the PIRT process; (2) high cost, due to heavy reliance on a large experimental database, many expert man-years of work, and very high computational overhead; (3) mixing of numerical errors with other uncertainties; (4) grid dependence, and use of the same numerical grids for both scaled experiments and real plant applications; and (5) user effects. Although a large amount of effort has gone into improving the CSAU methodology, the above issues still exist. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the differential of the physical solution with respect to any constant parameter). When parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: (1) quantifying numerical errors: new codes which are fully implicit and of higher-order accuracy can run much faster, with numerical errors quantified by FSA; (2) quantitative PIRT (Q-PIRT) to reduce subjective judgment and improve efficiency: treat numerical errors as special sensitivities alongside other physical uncertainties, and consider only parameters whose uncertainties have large effects on design criteria; (3) greatly reducing the computational cost of uncertainty quantification by (a) choosing optimized time steps and spatial sizes and (b) using gradient information (sensitivity results) to reduce the number of samples; (4) allowing grid independence for scaled integral effect test (IET) simulations and real plant applications: (a) eliminate numerical uncertainty in scaling; (b) reduce experimental cost by allowing smaller scaled IETs; (c) eliminate user effects. This paper will review the issues related to the current CSAU, introduce FSA, discuss a potential Q-PIRT process, and show simple examples of performing FSA. Finally, the general research direction and requirements for using FSA in a system analysis code will be discussed.
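
    To make "directly solving for parameter sensitivities" concrete, here is a minimal forward sensitivity sketch for a toy ODE rather than a reactor PDE system; the equation and numbers are invented for illustration.

    ```python
    def solve_with_sensitivity(p, y0=1.0, t_end=1.0, n=10_000):
        """Forward sensitivity analysis for dy/dt = -p*y: integrate the
        sensitivity s = dy/dp alongside the state, using the sensitivity
        equation ds/dt = -y - p*s (the ODE differentiated in p)."""
        dt = t_end / n
        y, s = y0, 0.0
        for _ in range(n):
            dy, ds = -p * y, -y - p * s   # evaluate both from the old state
            y, s = y + dt * dy, s + dt * ds
        return y, s

    y, s = solve_with_sensitivity(p=2.0)
    # Analytic check: y(1) = exp(-2) and dy/dp = -t*exp(-p*t) = -exp(-2) at t=1.
    print(f"y(1) = {y:.5f}, dy/dp = {s:.5f}")
    ```

    Gradient information obtained this way is what the paper proposes to reuse for ranking parameters (Q-PIRT) and for reducing the number of samples in uncertainty quantification.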

  13. Nested Quantum Error Correction Codes

    E-Print Network [OSTI]

    Zhuo Wang; Kai Sun; Hen Fan; Vlatko Vedral

    2009-09-28T23:59:59.000Z

    The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available for constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short-length quantum codes with certain properties. Our method works for codes of all lengths and all distances, and is quite efficient for constructing optimal or near-optimal codes. Two main known methods of constructing new codes from old codes in quantum error-correction theory, concatenating and pasting, can be understood in the framework of nested quantum error correction codes.

  14. Finding beam focus errors automatically

    SciTech Connect (OSTI)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01T23:59:59.000Z

    An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS. The procedure for finding the correction factors with COMFORT-PLUS has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors. (LEW)

  15. Data and Error Analysis

    E-Print Network [OSTI]

    Mukasyan, Alexander

    Performing the experiment and collecting data ... you might get a better grade. Data analysis should NOT be delayed until all of the data have been collected. This will help one avoid the problem of spending an entire class collecting bad data because of a mistake ...

  16. Pressure Change Measurement Leak Testing Errors

    SciTech Connect (OSTI)

    Pryor, Jeff M [ORNL] [ORNL; Walker, William C [ORNL] [ORNL

    2014-01-01T23:59:59.000Z

    A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper intends to discuss some of the more common errors made during the application of a pressure change test and give the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monoatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rate with altered formulas specific to those types of tests using the same methodology.
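
    A worked example of the kind of compensation the paper discusses: for an ideal gas, PV = nRT, so a temperature change during the hold period must be corrected before the pressure drop is attributed to leakage. The vessel size and readings below are hypothetical.

    ```python
    def leak_rate(volume_l, p1_kpa, t1_k, p2_kpa, t2_k, hours):
        """Pressure-change leak rate with ideal-gas temperature compensation:
        refer the final pressure back to the initial temperature (PV = nRT)
        before differencing."""
        p2_corrected = p2_kpa * (t1_k / t2_k)
        return volume_l * (p1_kpa - p2_corrected) / hours  # kPa*L/h

    # Hypothetical test: 500 L vessel, 24 h hold, vessel warms by 1.5 K.
    q = leak_rate(500.0, 710.0, 293.15, 708.5, 294.65, 24.0)
    print(f"leak rate: {q:.1f} kPa*L/h")
    # Without the temperature term, the apparent 1.5 kPa drop would
    # substantially understate the true leak.
    ```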

  17. BASF's Energy Survey Methodology

    E-Print Network [OSTI]

    Theising, T. R.

    2005-01-01T23:59:59.000Z

    BASF Corporation operates several dozen manufacturing Sites within NAFTA and periodically conducts Energy Surveys at each Site. Although these manufacturing sites represent a variety ... Consumption and cost breakdowns by utility types are identified to further analyze trends. Consideration is given to the review of the various energy supply contracts for alternative options that may exist. The consumption history is used to create a distribution ...

  18. DIESEL AEROSOL SAMPLING METHODOLOGY - CRC E-43 EXECUTIVE SUMMARY

    E-Print Network [OSTI]

    Minnesota, University of

    ... used to evaluate and select basic options, or to perform feasibility studies or preliminary assessments. Sponsors included the Department of Energy / National Renewable Energy Laboratory (DOE/NREL) and the Engine Manufacturers Association (EMA) ... for the research team in the development of QA protocols for the research and final evaluation of project data.

  19. Emergency exercise methodology

    SciTech Connect (OSTI)

    Klimczak, C.A.

    1993-01-01T23:59:59.000Z

    Competence for proper response to hazardous materials emergencies is enhanced and effectively measured by exercises which test plans and procedures and validate training. Emergency exercises are most effective when realistic criteria are used and a sequence of events is followed. The scenario is developed from pre-determined exercise objectives based on hazard analyses and actual plans and procedures. The scenario should address findings from previous exercises and actual emergencies. Exercise rules establish the extent of play and address contingencies during the exercise. All exercise personnel are assigned roles as players, controllers or evaluators. These participants should receive specialized training in advance. A methodology for writing an emergency exercise plan will be detailed.

  20. Static Detection of Disassembly Errors

    SciTech Connect (OSTI)

    Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K.

    2009-10-13T23:59:59.000Z

    Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.

  1. Dynamic Prediction of Concurrency Errors

    E-Print Network [OSTI]

    Sadowski, Caitlin

    2012-01-01T23:59:59.000Z

    ... finding errors in systems code using SMT solvers (Computer Aided Verification) ... data race witnesses by an SMT-based analysis (NASA Formal Methods) ... scalability relies on a modern SMT solver and an efficient ...

  2. Unequal error protection of subband coded bits

    E-Print Network [OSTI]

    Devalla, Badarinath

    1994-01-01T23:59:59.000Z

    Source coded data can be separated into different classes based on their susceptibility to channel errors. Errors in the important bits cause greater distortion in the reconstructed signal. This thesis presents an Unequal Error Protection scheme ...

  3. WRAP Module 1 sampling and analysis plan

    SciTech Connect (OSTI)

    Mayancsik, B.A.

    1995-03-24T23:59:59.000Z

    This document provides the methodology to sample, screen, and analyze waste that is generated, processed, or otherwise the responsibility of the Waste Receiving and Processing Module 1 facility. This includes Low-Level Waste, Transuranic Waste, Mixed Waste, and Dangerous Waste.

  4. Two-Layer Error Control Codes Combining Rectangular and Hamming Product Codes for Cache Error

    E-Print Network [OSTI]

    Zhang, Meilin

    We propose a novel two-layer error control code, combining error detection capability of rectangular codes and error correction capability of Hamming product codes in an efficient way, in order to increase cache error ...
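
    Of the two layers, the rectangular (row/column parity) code is the simpler to show; a minimal sketch of its single-bit correction follows. The Hamming-product layer and the paper's actual combination scheme are not reproduced here.

    ```python
    def parities(bits):
        """Row and column parities of a 2D bit array (list of rows)."""
        rows = [sum(r) % 2 for r in bits]
        cols = [sum(c) % 2 for c in zip(*bits)]
        return rows, cols

    def correct_single_error(bits, stored_rows, stored_cols):
        """A single flipped bit shows up as exactly one row parity mismatch
        and one column parity mismatch; their intersection locates it."""
        rows, cols = parities(bits)
        bad_r = [i for i, (a, b) in enumerate(zip(rows, stored_rows)) if a != b]
        bad_c = [j for j, (a, b) in enumerate(zip(cols, stored_cols)) if a != b]
        if len(bad_r) == 1 and len(bad_c) == 1:
            bits[bad_r[0]][bad_c[0]] ^= 1   # flip the located bit back
        return bits

    data = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
    r, c = parities(data)
    data[1][2] ^= 1                          # inject a single-bit upset
    assert correct_single_error(data, r, c) == [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
    ```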

  5. Harmonic Analysis Errors in Calculating Dipole,

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    to reduce the harmonic field calculation errors. A conformal transformation of a multipole magnet into a dipole reduces these errors. Dipole Magnet Calculations: A triangular ...

  6. Jitter compensation in sampling via polynomial least squares estimation

    E-Print Network [OSTI]

    Weller, Daniel Stuart

    Sampling error due to jitter, or noise in the sample times, affects the precision of analog-to-digital converters in a significant, nonlinear fashion. In this paper, a polynomial least squares (PLS) estimator is derived ...
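
    A toy version of the idea, not the estimator derived in the paper: samples are taken at jittered times, and an overdetermined polynomial least-squares fit at the nominal times averages out much of the jitter-induced error. All signal parameters here are invented.

    ```python
    import numpy as np

    def true_signal(t):
        return np.sin(2 * np.pi * t)                   # one tone on [0, 1]

    rng = np.random.default_rng(0)
    t_nominal = np.linspace(0.0, 1.0, 200)
    jitter = rng.normal(0.0, 0.005, t_nominal.size)    # noise in sample times
    samples = true_signal(t_nominal + jitter)          # what the ADC records

    # Degree-9 polynomial least-squares fit at the *nominal* times.
    coeffs = np.polynomial.polynomial.polyfit(t_nominal, samples, 9)
    recon = np.polynomial.polynomial.polyval(t_nominal, coeffs)

    raw = np.sqrt(np.mean((samples - true_signal(t_nominal)) ** 2))
    fit = np.sqrt(np.mean((recon - true_signal(t_nominal)) ** 2))
    print(f"RMS error: raw {raw:.4f} -> fitted {fit:.4f}")
    ```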

  7. Distributed Error Confinement Extended Abstract

    E-Print Network [OSTI]

    Patt-Shamir, Boaz

    These algorithms can serve as building blocks in more general reactive systems. Previous results in exploring locality in reactive systems were not error confined, and relied on the assumption (not used in the current work) ... that seems inherent for voting in reactive networks; its analysis leads to an interesting combinatorial ...

  8. A Method for Treating Discretization Error in Nondeterministic Analysis

    SciTech Connect (OSTI)

    Alvin, K.F.

    1999-01-27T23:59:59.000Z

    A response surface methodology-based technique is presented for treating discretization error in non-deterministic analysis. The response surface, or metamodel, is estimated from computer experiments which vary both uncertain physical parameters and the fidelity of the computational mesh. The resultant metamodel is then used to propagate the variabilities in the continuous input parameters, while the mesh size is taken to zero, its asymptotic limit. With respect to mesh size, the metamodel is equivalent to Richardson extrapolation, in which solutions on coarser and finer meshes are used to estimate discretization error. The method is demonstrated on a one dimensional prismatic bar, in which uncertainty in the third vibration frequency is estimated by propagating variations in material modulus, density, and bar length. The results demonstrate the efficiency of the method for combining non-deterministic analysis with error estimation to obtain estimates of total simulation uncertainty. The results also show the relative sensitivity of failure estimates to solution bias errors in a reliability analysis, particularly when the physical variability of the system is low.
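
    Since the metamodel is described as equivalent to Richardson extrapolation in the mesh-size variable, a two-mesh worked example shows the underlying arithmetic; the frequencies below are hypothetical.

    ```python
    def richardson(f_h, f_2h, order=2):
        """Richardson extrapolation: combine solutions on meshes of spacing
        h and 2h to cancel the leading O(h**order) error term."""
        return f_h + (f_h - f_2h) / (2 ** order - 1)

    # Hypothetical mesh-convergence data for a vibration frequency (Hz):
    f_2h, f_h = 103.1, 100.8          # coarse-mesh and fine-mesh results
    f_star = richardson(f_h, f_2h)    # estimate of the zero-mesh-size limit
    print(f"extrapolated: {f_star:.2f} Hz; "
          f"fine-mesh discretization error: {f_star - f_h:+.2f} Hz")
    ```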

  9. Statistical evaluation of design-error related accidents

    SciTech Connect (OSTI)

    Ott, K.O.; Marchaterre, J.F.

    1980-01-01T23:59:59.000Z

    In a recently published paper (Campbell and Ott, 1979), a general methodology was proposed for the statistical evaluation of design-error related accidents. The evaluation aims at an estimate of the combined residual frequency of yet unknown types of accidents lurking in a certain technological system. Here, the original methodology is extended so as to apply to a variety of systems that evolve during the development of large-scale technologies. A special categorization of incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of nuclear power reactor technology, considering serious accidents that involve a particular design inadequacy in the accident progression.

  10. Sampling box

    DOE Patents [OSTI]

    Phillips, Terrance D. (617 Chestnut Ct., Aiken, SC 29803); Johnson, Craig (100 Midland Rd., Oak Ridge, TN 37831-0895)

    2000-01-01T23:59:59.000Z

    An air sampling box that uses a slidable filter tray and a removable filter cartridge to allow for the easy replacement of a filter which catches radioactive particles is disclosed.

  11. Non-Gaussian numerical errors versus mass hierarchy

    E-Print Network [OSTI]

    Y. Meurice; M. B. Oktay

    2000-05-12T23:59:59.000Z

    We probe the numerical errors made in renormalization group calculations by varying slightly the rescaling factor of the fields and rescaling back in order to get the same (if there were no round-off errors) zero momentum 2-point function (magnetic susceptibility). The actual calculations were performed with Dyson's hierarchical model and a simplified version of it. We compare the distributions of numerical values obtained from a large sample of rescaling factors with the (Gaussian by design) distribution of a random number generator and find significant departures from the Gaussian behavior. In addition, the average value differs (robustly) from the exact answer by a quantity which is of the same order as the standard deviation. We provide a simple model in which the errors made at shorter distances have a larger weight than those made at larger distances. This model explains in part the non-Gaussian features and why the central-limit theorem does not apply.

  12. Adaptive Sampling in Hierarchical Simulation

    SciTech Connect (OSTI)

    Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R

    2007-07-09T23:59:59.000Z

    We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.
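
    A minimal sketch of the adaptive-sampling idea follows, with substitutions stated plainly: the paper uses moving kriging and a dynamic metric-tree database, while this fragment substitutes inverse-distance interpolation over a flat cache; the class and parameter names are hypothetical.

      import numpy as np

      # Illustrative stand-in for moving kriging + metric-tree lookup: reuse
      # cached fine-scale evaluations when a query falls inside a trust radius.
      class AdaptiveSampler:
          def __init__(self, fine_model, radius):
              self.fine_model = fine_model      # expensive finer-scale function
              self.radius = radius              # trust radius for interpolation
              self.X, self.y = [], []           # cached inputs/responses

          def evaluate(self, x):
              x = np.asarray(x, dtype=float)
              if self.X:
                  X = np.array(self.X)
                  d = np.linalg.norm(X - x, axis=1)
                  near = d < self.radius
                  if near.any():                # interpolate instead of re-evaluating
                      w = 1.0 / (d[near] + 1e-12)
                      return float(np.dot(w, np.array(self.y)[near]) / w.sum())
              y = float(self.fine_model(x))     # fall back to the fine-scale model
              self.X.append(x)
              self.y.append(y)
              return y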

  13. Methodology to Analyze the Sensitivity of Building Energy Consumption to HVAC System Sensor Error

    E-Print Network [OSTI]

    Ma, Liang

    2012-02-14T23:59:59.000Z

    There are a total of eight scenarios considered in this simulation. The simulation tool was developed based on Excel. The control parameters examined include room temperature, cold deck temperature, hot deck temperature, pump pressure, and fan...

  14. Demonstration Integrated Knowledge-Based System for Estimating Human Error Probabilities

    SciTech Connect (OSTI)

    Auflick, Jack L.

    1999-04-21T23:59:59.000Z

    Human Reliability Analysis (HRA) currently comprises at least 40 different methods that are used to analyze, predict, and evaluate human performance in probabilistic terms. Systematic HRAs allow analysts to examine human-machine relationships, identify error-likely situations, and provide estimates of relative frequencies for human errors on critical tasks, highlighting the most beneficial areas for system improvements. Unfortunately, each of HRA's methods has a different philosophical approach, thereby producing estimates of human error probabilities (HEPs) that are a better or worse match to the error-likely situation of interest. Poor selection of methodology, or the improper application of techniques, can produce invalid HEP estimates, and such erroneous estimation of potential human failure could have severe consequences in terms of the estimated occurrence of injury, death, and/or property damage.

  15. Spent nuclear fuel sampling strategy

    SciTech Connect (OSTI)

    Bergmann, D.W.

    1995-02-08T23:59:59.000Z

    This report proposes a strategy for sampling the spent nuclear fuel (SNF) stored in the 105-K Basins (105-K East and 105-K West). This strategy will support decisions concerning the path forward for SNF disposition efforts in the following areas: (1) SNF isolation activities such as repackaging/overpacking to a newly constructed staging facility; (2) conditioning processes for fuel stabilization; and (3) interim storage options. This strategy was developed without following the Data Quality Objective (DQO) methodology. It is, however, intended to augment the SNF project DQOs. The SNF sampling is derived by evaluating the current storage condition of the SNF and the factors that affected SNF corrosion/degradation.

  16. Approximate error conjugation gradient minimization methods

    DOE Patents [OSTI]

    Kallman, Jeffrey S

    2013-05-21T23:59:59.000Z

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
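
    The claim language can be illustrated with a toy numerical sketch. This is not the patented algorithm: a steepest-descent direction stands in for the conjugate gradient direction, a nonnegativity projection stands in for the constraint, and all names are hypothetical. It shows the core idea of computing the error, and the minimum along the search direction, from a random subset of rays, i.e. rows of the system matrix.

      import numpy as np

      # Toy sketch: approximate-error step for min 0.5*||Ax - b||^2 using a
      # random subset of rays (rows of A); not the patented implementation.
      def approx_error_step(A, b, x, frac=0.1, rng=np.random.default_rng(0)):
          rows = rng.choice(A.shape[0], size=max(1, int(frac * A.shape[0])),
                            replace=False)
          As, bs = A[rows], b[rows]
          g = As.T @ (As @ x - bs)        # approximate error gradient (subset only)
          d = -g                          # stand-in for the conjugate direction
          Ad = As @ d
          alpha = (g @ g) / max(Ad @ Ad, 1e-30)  # minimum along d w.r.t. subset error
          return np.maximum(x + alpha * d, 0.0)  # toy nonnegativity constraint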

  17. Accounting for model error due to unresolved scales within ensemble Kalman filtering

    E-Print Network [OSTI]

    Lewis Mitchell; Alberto Carrassi

    2014-09-02T23:59:59.000Z

    We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and the extended Kalman filter. The model error statistic required in the analysis update is estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are described: a time-constant treatment, where the same model error statistical description is used at every analysis, and a time-varying treatment, where the assumed model error statistics are randomly sampled at each analysis step. We compare both methods with the standard approach of dealing with model error through inflation and localization, and illustrate our results with numerical simulations on a low-order nonlinear system exhibiting chaotic dynamics. The results show that the filter skill is significantly improved through the proposed model error treatments, and that both methods require far less parameter tuning than the standard approach. Furthermore, the proposed approach is simple to implement within a pre-existing ensemble-based scheme. The general implications for the use of the proposed approach in the framework of square-root filters such as the ETKF are also discussed.
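
    The time-varying treatment lends itself to a very short sketch. Assumptions: `increments` is an archive of historical reanalysis increments (one per row) used here as draws of an additive model-error term; all names are illustrative.

      import numpy as np

      # Sketch: add a randomly sampled historical increment to each analysis
      # member, as a stand-in for the time-varying model error treatment.
      def add_sampled_model_error(ensemble, increments,
                                  rng=np.random.default_rng(1)):
          """ensemble: (n_members, n_state); increments: (n_archive, n_state)."""
          idx = rng.integers(0, increments.shape[0], size=ensemble.shape[0])
          return ensemble + increments[idx]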

  18. Verification of unfold error estimates in the unfold operator code

    SciTech Connect (OSTI)

    Fehl, D.L.; Biggs, F. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)]

    1997-01-01T23:59:59.000Z

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
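
    The comparison described above is easy to reproduce on a toy linear unfold. A sketch in which everything is assumed (random response matrix, pseudo-inverse unfold, 5% Gaussian data noise): the error-matrix estimate and the Monte Carlo spread over 100 perturbed data sets should agree within sampling noise.

      import numpy as np

      # Assumed toy setup: linear least-squares "unfold" s = pinv(R) d.
      rng = np.random.default_rng(0)
      R = rng.random((12, 5))            # response matrix (12 channels, 5 bins)
      s_true = rng.random(5)             # arbitrary "true" spectrum
      d0 = R @ s_true
      sigma_d = 0.05 * d0                # 5% data imprecision (standard deviation)

      Rinv = np.linalg.pinv(R)
      cov = Rinv @ np.diag(sigma_d**2) @ Rinv.T        # built-in error matrix
      builtin_err = np.sqrt(np.diag(cov))

      samples = [Rinv @ (d0 + sigma_d * rng.standard_normal(len(d0)))
                 for _ in range(100)]                  # 100 random data sets
      mc_err = np.std(samples, axis=0)
      print(builtin_err, mc_err)         # should agree within sampling noise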

  19. Error handling strategies in multiphase inverse modeling

    SciTech Connect (OSTI)

    Finsterle, S.; Zhang, Y.

    2010-12-01T23:59:59.000Z

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.

  20. JITTER COMPENSATION IN SAMPLING VIA POLYNOMIAL LEAST SQUARES ESTIMATION

    E-Print Network [OSTI]

    Goyal, Vivek K

    Daniel S. Weller and Vivek K. Goyal. Sampling error due to jitter, or noise in the sample times, is compensated with a polynomial least squares estimator for signals corrupted by independent jitter and additive noise, as an alternative to the linear least squares (LLS) estimator. After ...

  1. Estimating IMU heading error from SAR images.

    SciTech Connect (OSTI)

    Doerry, Armin Walter

    2009-03-01T23:59:59.000Z

    Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.

  2. Flux recovery and a posteriori error estimators

    E-Print Network [OSTI]

    2010-05-20T23:59:59.000Z

    Reliability and local efficiency bounds for this estimator are established provided that the ... For simple model problems, the energy norm of the true error is equal ...

  3. Error Bounds and Metric Subregularity

    E-Print Network [OSTI]

    2014-06-18T23:59:59.000Z

    ... theory of error bounds of extended real-valued functions. Another objective is to ... Another observation is that the neighbourhood V in the original definition of metric subregularity ...

  4. Bayesian Post-Processing Methods for Jitter Mitigation in Sampling

    E-Print Network [OSTI]

    Weller, Daniel Stuart

    Minimum mean-square error (MMSE) estimators of signals from samples corrupted by jitter (timing noise) and additive noise are nonlinear, even when the signal parameters and additive noise have normal distributions. This ...

  5. Wind Power Forecasting Error Distributions over Multiple Timescales (Presentation)

    SciTech Connect (OSTI)

    Hodge, B. M.; Milligan, M.

    2011-07-01T23:59:59.000Z

    This presentation provides a statistical analysis of wind power forecast errors and error distributions, with examples using ERCOT data.

  6. Topology Error Identification for the ...

    E-Print Network [OSTI]

    Frandsen, Jannette B.

    ... of the system dictate hardware and software applications that are not found in terrestrial power systems. Using previously developed state estimation algorithms as a starting point, a methodology for system ...

  7. ISE System Development Methodology Manual

    SciTech Connect (OSTI)

    Hayhoe, G.F.

    1992-02-17T23:59:59.000Z

    The Information Systems Engineering (ISE) System Development Methodology Manual (SDM) is a framework of life cycle management guidelines that provide ISE personnel with direction, organization, consistency, and improved communication when developing and maintaining systems. These guidelines were designed to allow ISE to build and deliver Total Quality products, and to meet the goals and requirements of the US Department of Energy (DOE), Westinghouse Savannah River Company, and Westinghouse Electric Corporation.

  8. Implementation impacts of PRL methodology

    SciTech Connect (OSTI)

    Caudill, J.A.; Krupa, J.F.; Meadors, R.E.; Odum, J.V.; Rodrigues, G.C.

    1993-02-01T23:59:59.000Z

    This report responds to a DOE-SR request to evaluate the impacts from implementation of the proposed Plutonium Recovery Limit (PRL) methodology. The PRL methodology is based on cost minimization for decisions to discard or recover plutonium contained in scrap, residues, and other plutonium bearing materials. Implementation of the PRL methodology may result in decisions to declare as waste certain plutonium bearing materials originally considered to be a recoverable plutonium product. Such decisions may have regulatory impacts, because any material declared to be waste would immediately be subject to provisions of the Resource Conservation and Recovery Act (RCRA). The decision to discard these materials will have impacts on waste storage, treatment, and disposal facilities. Current plans for the de-inventory of plutonium processing facilities have identified certain materials as candidates for discard based upon the economic considerations associated with extending the operating schedules for recovery of the contained plutonium versus potential waste disposal costs. This report evaluates the impacts of discarding those materials as proposed by the F Area De-Inventory Plan and compares the De-Inventory Plan assessments with conclusions from application of the PRL. The impact analysis was performed for those materials proposed as potential candidates for discard by the De-Inventory Plan. The De-Inventory Plan identified 433 items, containing approximately 1% of the current SRS Pu-239 inventory, as not appropriate for recovery as the site moves to complete the mission of F-Canyon and FB-Line. The materials were entered into storage awaiting recovery as product under the Department's previous Economic Discard Limit (EDL) methodology, which valued plutonium at its incremental cost of production in reactors. An application of Departmental PRLs to the subject 433 items revealed that approximately 40% of them would continue to be potentially recoverable as product plutonium.

  9. Linear Signal Reconstruction from Jittered Sampling

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Alessandro Nordio, Carla ... jitter, which is based on the analysis of the mean square error (MSE) between the reconstructed signal ... of digital signal reconstruction as a function of the clock jitter, number of quantization bits, and signal ...

  10. Energy Efficiency Indicators Methodology Booklet

    SciTech Connect (OSTI)

    Sathaye, Jayant; Price, Lynn; McNeil, Michael; de la rue du Can, Stephane

    2010-05-01T23:59:59.000Z

    This Methodology Booklet provides a comprehensive review of, and guiding principles for, constructing energy efficiency indicators, with illustrative examples of application to individual countries. It reviews work done by international agencies and national governments in constructing meaningful energy efficiency indicators that help policy makers to assess changes in energy efficiency over time. Building on past OECD experience and best practices, and the knowledge of these countries' institutions, relevant sources of information to construct an energy indicator database are identified. A framework based on a hierarchy of indicators, spanning from aggregate macro-level to disaggregated end-use-level metrics, is presented to help shape the understanding of assessing energy efficiency. In each sector of activity (industry, commercial, residential, agriculture, and transport), indicators are presented and recommendations to distinguish the different factors affecting energy use are highlighted. The methodology booklet specifically addresses issues that are relevant to developing indicators where activity is a major factor driving energy demand. A companion spreadsheet tool is available upon request.

  11. Error Mining on Dependency Trees Claire Gardent

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Claire Gardent (CNRS, LORIA) and Shashi Narayan (LORIA). In recent years, error mining approaches were ... We propose an algorithm for mining trees and apply it to detect the most likely sources of generation ...

  12. SEU induced errors observed in microprocessor systems

    SciTech Connect (OSTI)

    Asenek, V.; Underwood, C.; Oldfield, M. [Univ. of Surrey, Guildford (United Kingdom). Surrey Space Centre]; Velazco, R.; Rezgui, S.; Cheynet, P. [TIMA Lab., Grenoble (France)]; Ecoffet, R. [Centre National d'Etudes Spatiales, Toulouse (France)]

    1998-12-01T23:59:59.000Z

    In this paper, the authors present software tools for predicting the rate and nature of observable SEU induced errors in microprocessor systems. These tools are built around a commercial microprocessor simulator and are used to analyze real satellite application systems. Results obtained from simulating the nature of SEU induced errors are shown to correlate with ground-based radiation test data.

  13. Remarks on statistical errors in equivalent widths

    E-Print Network [OSTI]

    Klaus Vollmann; Thomas Eversberg

    2006-07-03T23:59:59.000Z

    Equivalent width measurements for rapid line variability in atomic spectral lines are degraded by increasing error bars with shorter exposure times. We derive an expression for the error of the line equivalent width $\sigma(W_\lambda)$ with respect to pure photon noise statistics and provide a correction value for previous calculations.
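
    The noise-induced error on an equivalent width can also be brute-forced, which is a useful cross-check on any closed-form expression like the paper's. A sketch under stated assumptions (toy Gaussian absorption line, uniform per-pixel signal-to-noise; all values are made up):

      import numpy as np

      # Monte Carlo sketch of the photon-noise error on an equivalent width.
      rng = np.random.default_rng(2)
      wl = np.linspace(-5.0, 5.0, 200)                  # wavelength grid (arb. units)
      flux = 1.0 - 0.5 * np.exp(-0.5 * (wl / 0.8)**2)   # normalized line profile
      snr = 50.0                                        # per-pixel signal-to-noise

      def equivalent_width(f, wl):
          return np.trapz(1.0 - f, wl)

      ews = [equivalent_width(flux + rng.standard_normal(flux.size) / snr, wl)
             for _ in range(2000)]
      print(np.mean(ews), np.std(ews))    # W_lambda and its noise-induced error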

  14. Activation Energies from Transition Path Sampling Simulations

    E-Print Network [OSTI]

    Dellago, Christoph

    ... unavailable for processes occurring in complex systems. Since our approach to determine activation energies is based on the transition path sampling methodology, it does not require full ... It is illustrated for a diatomic immersed in a bath of repulsive soft particles. Keywords: Activation energy; Computer simulation.

  15. Formalism for Simulation-based Optimization of Measurement Errors in High Energy Physics

    E-Print Network [OSTI]

    Yuehong Xie

    2009-04-29T23:59:59.000Z

    Minimizing the errors of the physical parameters of interest should be the ultimate goal of any event selection optimization in high energy physics data analysis involving parameter determination. Quick and reliable error estimation is a crucial ingredient for realizing this goal. In this paper we derive a formalism for direct evaluation of measurement errors using the signal probability density function and large fully simulated signal and background samples, without the need for data fitting and background modelling. We illustrate the elegance of the formalism in the case of event selection optimization for CP violation measurement in B decays. The implications of this formalism for choosing event variables for data analysis are discussed.
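
    The spirit of the formalism, error evaluation directly from the signal PDF and a large simulated sample without fitting, can be sketched for a toy lifetime measurement. The exponential model, the yield, and all names below are illustrative assumptions, not the paper's example.

      import numpy as np

      # Toy model (assumption): lifetime PDF f(t; tau) = exp(-t/tau)/tau.
      rng = np.random.default_rng(3)
      tau = 1.5
      t = rng.exponential(tau, size=100_000)           # fully simulated signal sample

      def dlnf_dtau(t, tau):
          return t / tau**2 - 1.0 / tau                # d ln f / d tau

      info_per_event = np.mean(dlnf_dtau(t, tau)**2)   # Fisher information per event
      n_expected = 1000                                # assumed expected signal yield
      sigma_tau = 1.0 / np.sqrt(n_expected * info_per_event)
      print(sigma_tau)                                 # expected error on tau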

  16. Stabilizer Formalism for Operator Quantum Error Correction

    E-Print Network [OSTI]

    Poulin, D

    2005-01-01T23:59:59.000Z

    Operator quantum error correction is a recently developed theory that provides a generalized framework for active error correction and passive error avoiding schemes. In this paper, we describe these codes in the language of the stabilizer formalism of standard quantum error correction theory. This is achieved by adding a "gauge" group to the standard stabilizer definition of a code. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 3 of its 8 stabilizer generators, leading to a simpler decoding procedure without affecting its essential properties. This opens the path to possible improvement of the error threshold of fault tolerant quantum computing. We also derive a modified Hamming bound that applies to all stabilizer codes, including degenerate ones.

  17. Stabilizer Formalism for Operator Quantum Error Correction

    E-Print Network [OSTI]

    David Poulin

    2006-06-14T23:59:59.000Z

    Operator quantum error correction is a recently developed theory that provides a generalized framework for active error correction and passive error avoiding schemes. In this paper, we describe these codes in the stabilizer formalism of standard quantum error correction theory. This is achieved by adding a "gauge" group to the standard stabilizer definition of a code that defines an equivalence class between encoded states. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 4 of its 8 stabilizer generators, leading to a simpler decoding procedure and a wider class of logical operations without affecting its essential properties. This opens the path to possible improvements of the error threshold of fault-tolerant quantum computing.
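
    The stabilizer bookkeeping above is concrete enough to check mechanically. A sketch (the generator list is the standard presentation of Shor's 9-qubit code; the commutation rule is the usual Pauli-string criterion that two strings commute iff they anticommute on an even number of qubits):

      from itertools import combinations

      # The 8 stabilizer generators of Shor's 9-qubit code.
      shor_generators = [
          "ZZIIIIIII", "IZZIIIIII",      # Z-type checks, first block
          "IIIZZIIII", "IIIIZZIII",      # second block
          "IIIIIIZZI", "IIIIIIIZZ",      # third block
          "XXXXXXIII", "IIIXXXXXX",      # X-type block checks
      ]

      def commute(p, q):
          anti = sum(1 for a, b in zip(p, q)
                     if a != "I" and b != "I" and a != b)
          return anti % 2 == 0

      assert all(commute(p, q) for p, q in combinations(shor_generators, 2))
      print("all generators commute")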

  18. Prediction Error and Event Boundaries

    E-Print Network [OSTI]

    Zacks, Jeffrey M.

    A computational model of event segmentation from perceptual prediction. Jeremy R. Reynolds, Jeffrey M. Zacks, and Todd S. Braver, Washington University. People tend ...

  19. Field evaluation of methodology for measurement of cadmium in stationary-source stack gases. Final report

    SciTech Connect (OSTI)

    Moseman, R.F.; Bath, D.B.; McReynolds, J.R.; Holder, D.J.; Sykes, A.L.

    1986-12-01T23:59:59.000Z

    A laboratory and field-evaluation study was done to develop methodology for the measurement of cadmium in stationary-source stack emissions. Field evaluations were performed at a municipal solid-waste incinerator and a sewage-sludge incinerator. The methodology was tested through the laboratory and field-sampling validation phases to evaluate precision and accuracy of the proposed method. Colocated, quadruplicate flue-gas samples of nominally 30 and 60 dscf, over 1- and 2-hour sampling times, were collected to assure an adequate cadmium content, a representative sample, and the production of data to validate the method in terms of between-train precision. The overall accuracy and precision of the analysis procedure were 89.2% and 1.7%, respectively. The detection limit of the atomic absorption instrument was 0.03 ug/mL. The methodology proved to be a reliable sampling approach to determine cadmium emissions from the stationary sources tested.

  20. Methodology for Estimating Solar Potential on Multiple Building Rooftops for Photovoltaic Systems

    SciTech Connect (OSTI)

    Kodysh, Jeffrey B [ORNL; Omitaomu, Olufemi A [ORNL; Bhaduri, Budhendra L [ORNL; Neish, Bradley S [ORNL

    2013-01-01T23:59:59.000Z

    In this paper, a methodology for estimating solar potential on multiple building rooftops is presented. The objective of this methodology is to estimate the daily or monthly solar radiation potential on individual buildings in a city/region using Light Detection and Ranging (LiDAR) data and a geographic information system (GIS) approach. Conceptually, the methodology is based on the upward-looking hemispherical viewshed algorithm, but applied using an area-based modeling approach. The methodology considers input parameters, such as surface orientation, shadowing effect, elevation, and atmospheric conditions, that influence solar intensity on the earth's surface. The methodology has been implemented for some 212,000 buildings in Knox County, Tennessee, USA. Based on the results obtained, the methodology seems to be adequate for estimating solar radiation on multiple building rooftops. The use of LiDAR data improves the radiation potential estimates in terms of the model predictive error and the spatial pattern of the model outputs. This methodology could help cities/regions interested in sustainable projects to quickly identify buildings with higher potentials for roof-mounted photovoltaic systems.

  1. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    SciTech Connect (OSTI)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27T23:59:59.000Z

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently, and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  2. Methodology for Augmenting Existing Paths with Additional Parallel Transects

    SciTech Connect (OSTI)

    Wilson, John E.

    2013-09-30T23:59:59.000Z

    Visual Sample Plan (VSP) is sample planning software that is used, among other purposes, to plan transect sampling paths to detect areas that were potentially used for munition training. This module was developed for application on a large site where existing roads and trails were to be used as primary sampling paths. Gap areas between these primary paths needed to be found and covered with parallel transect paths; these gap areas represent areas on the site that are more than a specified distance from a primary path. The added parallel paths needed to optionally be connected together into a single path, the shortest path possible. The paths also needed to optionally be attached to existing primary paths, again with the shortest possible path. Finally, the process must be repeatable and predictable so that the same inputs (primary paths, specified distance, and path options) will result in the same set of new paths every time. This methodology was developed to meet those specifications.
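
    The gap-finding step described above reduces to a nearest-distance query. A sketch under assumptions (primary paths given as dense point sets, the site rasterized to a grid of candidate cells; the function and parameter names are hypothetical):

      import numpy as np
      from scipy.spatial import cKDTree

      def find_gap_cells(primary_pts, xmin, xmax, ymin, ymax, cell, max_dist):
          """Return grid-cell centers farther than max_dist from any primary path."""
          xs = np.arange(xmin + cell / 2, xmax, cell)
          ys = np.arange(ymin + cell / 2, ymax, cell)
          gx, gy = np.meshgrid(xs, ys)
          centers = np.column_stack([gx.ravel(), gy.ravel()])
          d, _ = cKDTree(primary_pts).query(centers)   # distance to nearest path point
          return centers[d > max_dist]                 # candidate area for new transects

      # Parallel transects through each gap would then be laid at a fixed
      # spacing and optionally chained into the shortest connecting path.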

  3. Estimating the error in simulation prediction over the design space

    SciTech Connect (OSTI)

    Shinn, R. (Rachel); Hemez, F. M. (François M.); Doebling, S. W. (Scott W.)

    2003-01-01T23:59:59.000Z

    This study addresses the assessment of accuracy of simulation predictions. A procedure is developed to validate a simple non-linear model defined to capture the hardening behavior of a foam material subjected to a short-duration transient impact. Validation means that the predictive accuracy of the model must be established, not just in the vicinity of a single testing condition, but for all settings or configurations of the system. The notion of validation domain is introduced to designate the design region where the model's predictive accuracy is appropriate for the application of interest. Techniques brought to bear to assess the model's predictive accuracy include test-analysis correlation, calibration, bootstrapping and sampling for uncertainty propagation, and metamodeling. The model's predictive accuracy is established by training a metamodel of prediction error. The prediction error is not assumed to be systematic. Instead, it depends on which configuration of the system is analyzed. Finally, the prediction error's confidence bounds are estimated by propagating the uncertainty associated with specific modeling assumptions.

  4. CONTAMINATED SOIL VOLUME ESTIMATE TRACKING METHODOLOGY

    SciTech Connect (OSTI)

    Durham, L.A.; Johnson, R.L.; Rieman, C.; Kenna, T.; Pilon, R.

    2003-02-27T23:59:59.000Z

    The U.S. Army Corps of Engineers (USACE) is conducting a cleanup of radiologically contaminated properties under the Formerly Utilized Sites Remedial Action Program (FUSRAP). The largest cost element for most of the FUSRAP sites is the transportation and disposal of contaminated soil. Project managers and engineers need an estimate of the volume of contaminated soil to determine project costs and schedule. Once excavation activities begin and additional remedial action data are collected, the actual quantity of contaminated soil often deviates from the original estimate, resulting in cost and schedule impacts to the project. The project costs and schedule need to be frequently updated by tracking the actual quantities of excavated soil and contaminated soil remaining during the life of a remedial action project. A soil volume estimate tracking methodology was developed to provide a mechanism for project managers and engineers to create better project controls of costs and schedule. For the FUSRAP Linde site, an estimate of the initial volume of in situ soil above the specified cleanup guidelines was calculated on the basis of discrete soil sample data and other relevant data using indicator geostatistical techniques combined with Bayesian analysis. During the remedial action, updated volume estimates of remaining in situ soils requiring excavation were calculated on a periodic basis. In addition to taking into account the volume of soil that had been excavated, the updated volume estimates incorporated both new gamma walkover surveys and discrete sample data collected as part of the remedial action. A civil survey company provided periodic estimates of actual in situ excavated soil volumes. By using the results from the civil survey of actual in situ volumes excavated and the updated estimate of the remaining volume of contaminated soil requiring excavation, the USACE Buffalo District was able to forecast and update project costs and schedule. The soil volume tracking methodology helped the USACE Buffalo District track soil quantity changes from projected excavation work over time and across space, providing the basis for an explanation of some of the project cost and schedule variances.

  5. Quantum error-correcting codes and devices

    DOE Patents [OSTI]

    Gottesman, Daniel (Los Alamos, NM)

    2000-10-03T23:59:59.000Z

    A method of forming quantum error-correcting codes by first forming a stabilizer for a Hilbert space. A quantum information processing device can be formed to implement such quantum codes.

  6. Organizational Errors: Directions for Future Research

    E-Print Network [OSTI]

    Carroll, John Stephen

    The goal of this chapter is to promote research about organizational errors—i.e., the actions of multiple organizational participants that deviate from organizationally specified rules and can potentially result in adverse ...

  7. Quantum Error Correction for Quantum Memories

    E-Print Network [OSTI]

    Barbara M. Terhal

    2015-01-20T23:59:59.000Z

    Active quantum error correction using qubit stabilizer codes has emerged as a promising, but experimentally challenging, engineering program for building a universal quantum computer. In this review we consider the formalism of qubit stabilizer and subsystem stabilizer codes and their possible use in protecting quantum information in a quantum memory. We review the theory of fault-tolerance and quantum error-correction, discuss examples of various codes and code constructions, the general quantum error correction conditions, the noise threshold, the special role played by Clifford gates and the route towards fault-tolerant universal quantum computation. The second part of the review is focused on providing an overview of quantum error correction using two-dimensional (topological) codes, in particular the surface code architecture. We discuss the complexity of decoding and the notion of passive or self-correcting quantum memories. The review does not focus on a particular technology but discusses topics that will be relevant for various quantum technologies.

  8. Parameters and error of a theoretical model

    SciTech Connect (OSTI)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01T23:59:59.000Z

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs.
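
    The maximum-likelihood recipe can be sketched for the simplest case: a model linear in its parameters and a uniform residual spread, in which the ML estimate of the model error is the root-mean-square residual. All data below are simulated, mirroring the paper's use of random number generators; the specific model is an assumption for illustration.

      import numpy as np

      rng = np.random.default_rng(4)
      x = np.linspace(0, 1, 50)
      y = 2.0 * x + 0.5 + 0.1 * rng.standard_normal(50)   # simulated "experiment"

      A = np.column_stack([x, np.ones_like(x)])
      params, *_ = np.linalg.lstsq(A, y, rcond=None)       # adjusted model parameters
      residuals = y - A @ params
      sigma_model = np.sqrt(np.mean(residuals**2))         # ML error of the model
      print(params, sigma_model)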

  9. Sampling diffusive transition paths

    E-Print Network [OSTI]

    F. Miller III, Thomas

    2009-01-01T23:59:59.000Z

    Sampling diffusive transition paths. Thomas F. Miller III. ... the algorithm to sample the transition path ensemble for the ... dynamics. I. INTRODUCTION: Transition path sampling (TPS) is a ...

  10. Evaluating operating system vulnerability to memory errors.

    SciTech Connect (OSTI)

    Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Mueller, Frank (North Carolina State University); Fiala, David (North Carolina State University); Brightwell, Ronald Brian

    2012-05-01T23:59:59.000Z

    Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.

  11. The Error-Pattern-Correcting Turbo Equalizer

    E-Print Network [OSTI]

    Alhussien, Hakim

    2010-01-01T23:59:59.000Z

    The error-pattern correcting code (EPCC) is incorporated in the design of a turbo equalizer (TE) with the aim of correcting dominant error events of the inter-symbol interference (ISI) channel at the output of its matching Viterbi detector. By targeting the low Hamming-weight interleaved errors of the outer convolutional code, which are responsible for low Euclidean-weight errors in the Viterbi trellis, the turbo equalizer with an error-pattern correcting code (TE-EPCC) exhibits a much lower bit-error rate (BER) floor compared to the conventional non-precoded TE, especially for high-rate applications. A maximum-likelihood upper bound is developed on the BER floor of the TE-EPCC for a generalized two-tap ISI channel, in order to study the TE-EPCC's signal-to-noise ratio (SNR) gain for various channel conditions and design parameters. In addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is compared to demonstrate the present TE's superiority for short interleaver lengths and high coding rates.

  12. Simulation Enabled Safeguards Assessment Methodology

    SciTech Connect (OSTI)

    Robert Bean; Trond Bjornard; Thomas Larson

    2007-09-01T23:59:59.000Z

    It is expected that nuclear energy will be a significant component of future supplies. New facilities, operating under a strengthened international nonproliferation regime, will be needed. There is good reason to believe that virtual engineering applied to the facility design, as well as to the safeguards system design, will reduce total project cost and improve efficiency in the design cycle. The Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag-and-drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.

  13. Methodology for flammable gas evaluations

    SciTech Connect (OSTI)

    Hopkins, J.D., Westinghouse Hanford

    1996-06-12T23:59:59.000Z

    There are 177 radioactive waste storage tanks at the Hanford Site. The waste generates flammable gases. The waste releases gas continuously, but in some tanks the waste has shown a tendency to trap these flammable gases. When enough gas is trapped in a tank's waste matrix, it may be released in a way that renders part or all of the tank atmosphere flammable for a period of time. Tanks must be evaluated against previously defined criteria to determine whether they can present a flammable gas hazard. This document presents the methodology for evaluating tanks in two areas of concern in the tank headspace: steady-state flammable-gas concentration resulting from continuous release, and concentration resulting from an episodic gas release.

  14. A systems approach to reducing utility billing errors

    E-Print Network [OSTI]

    Ogura, Nori

    2013-01-01T23:59:59.000Z

    Many methods for analyzing the possibility of errors are practiced by organizations who are concerned about safety and error prevention. However, in situations where the error occurrence is random and difficult to track, ...

  15. Error Detection and Recovery for Robot Motion Planning with Uncertainty

    E-Print Network [OSTI]

    Donald, Bruce Randall

    1987-07-01T23:59:59.000Z

    Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and uncertainty in the geometry of the environment. The last, which is called model error, has ...

  16. Methodology for Validating Building Energy Analysis Simulations

    SciTech Connect (OSTI)

    Judkoff, R.; Wortman, D.; O'Doherty, B.; Burch, J.

    2008-04-01T23:59:59.000Z

    The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

  17. Solutia: Massachusetts Chemical Manufacturer Uses SECURE Methodology...

    Broader source: Energy.gov (indexed) [DOE]

    SECURE Methodology to Identify Potential Reductions in Utility and Process Energy Consumption (July 2005)

  18. Geothermal: Sponsored by OSTI -- Methodologies for Reservoir...

    Office of Scientific and Technical Information (OSTI)

    Methodologies for Reservoir Characterization Using Fluid Inclusion Gas Chemistry (Geothermal Technologies Legacy Collection)

  19. Running jobs error: "inet_arp_address_lookup"

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Resolved: Running jobs error "inet_arp_address_lookup". September 22, 2013, by Helen He. Symptom: After the Hopper August 14 ...

  20. Global Error bounds for systems of convex polynomials over ...

    E-Print Network [OSTI]

    2011-11-11T23:59:59.000Z

    This paper is devoted to studying Lipschitzian/Hölderian-type global error bounds ... set is not necessarily compact, we obtain the Hölder global error bound result.

  1. EMCAS, an evaluation methodology for safeguards and security systems

    SciTech Connect (OSTI)

    Eggers, R.F.; Giese, E.W.; Bichl, F.J.

    1987-07-01T23:59:59.000Z

    EMCAS is an evaluation methodology for safeguards and security systems. It provides a score card of projected or actual system performance for several areas of system operation. In one area, the performance of material control and accounting and security systems, which jointly defend against the insider threat to divert or steal special nuclear material (SNM) using stealth and deceit, is evaluated. Time-dependent and time-independent risk equations are used for both diversion and theft risk calculations. In the case of loss detection by material accounting, a detailed timeliness model is provided to determine the combined effects of loss detection sensitivity and timeliness on the overall effectiveness of the material accounting detection procedure. Calculated risks take into account the capabilities of process area containment/surveillance, material accounting mass balance tests, and physical protection barriers and procedures. In addition, EMCAS evaluates the Material Control and Accounting (MC&A) System in the following areas: (1) system capability to detect errors in the official book inventory of SNM, using mass balance accounting methods, (2) system capability to prevent errors from entering the nuclear material data base during periods of operation between mass balance tests, (3) time to conduct inventories and resolve alarms, and (4) time lost from production to carry out material control and accounting loss detection activities.

  2. EMCAS: An evaluation methodology for safeguards and security systems

    SciTech Connect (OSTI)

    Eggers, R.F.; Giese, E.W.; Bichl, F.J.

    1987-01-01T23:59:59.000Z

    EMCAS is an evaluation methodology for safeguards and security systems. It provides a score card of projected or actual system performance for several areas of system operation. In one area, the performance of material control and accounting and security systems, which jointly defend against the insider threat to divert or steal special nuclear material (SNM) using stealth and deceit, is evaluated. Time-dependent and time-independent risk equations are used for both diversion and theft risk calculations. In the case of loss detection by material accounting, a detailed timeliness model is provided to determine the combined effects of loss detection sensitivity and timeliness on the overall effectiveness of the material accounting detection procedure. Calculated risks take into account the capabilities of process area containment/surveillance, material accounting mass balance tests, and physical protection barriers and procedures. In addition, EMCAS evaluates the Material Control and Accounting (MC and A) System in the following areas: (1) system capability to detect errors in the official book inventory of SNM, using mass balance accounting methods, (2) system capability to prevent errors from entering the nuclear material data base during periods of operation between mass balance tests, (3) time to conduct inventories and resolve alarms, and (4) time lost from production to carry out material control and accounting loss detection activities. 3 figs., 5 tabs.

  3. Optimal error estimates for corrected trapezoidal rules

    E-Print Network [OSTI]

    Talvila, Erik

    2012-01-01T23:59:59.000Z

    Corrected trapezoidal rules are proved for $\int_a^b f(x)\,dx$ under the assumption that $f''\in L^p([a,b])$ for some $1\leq p\leq\infty$. Such quadrature rules involve the trapezoidal rule modified by the addition of a term $k[f'(a)-f'(b)]$. The coefficient $k$ in the quadrature formula is found that minimizes the error estimates. It is shown that when $f'$ is merely assumed to be continuous then the optimal rule is the trapezoidal rule itself. In this case error estimates are in terms of the Alexiewicz norm. This includes the case when $f''$ is integrable in the Henstock--Kurzweil sense or as a distribution. All error estimates are shown to be sharp for the given assumptions on $f''$. It is shown how to make these formulas exact for all cubic polynomials $f$. Composite formulas are computed for uniform partitions.
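
    For a uniform partition with spacing h, the classical end correction k = h^2/12 cancels the leading Euler-Maclaurin term and makes the composite rule exact for all cubic polynomials. A sketch (this particular k is the textbook choice; the paper optimizes k under weaker assumptions on f''):

      import numpy as np

      # Composite corrected trapezoidal rule with end correction k[f'(a) - f'(b)].
      def corrected_trapezoid(f, fprime, a, b, n):
          x = np.linspace(a, b, n + 1)
          h = (b - a) / n
          trap = h * (0.5 * f(x[0]) + f(x[1:-1]).sum() + 0.5 * f(x[-1]))
          return trap + h**2 / 12.0 * (fprime(a) - fprime(b))

      # Exactness check on a cubic: integral of x^3 - 2x over [0, 2] is 0.
      f = lambda x: x**3 - 2 * x
      fp = lambda x: 3 * x**2 - 2
      print(corrected_trapezoid(f, fp, 0.0, 2.0, 4))   # prints 0.0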

  4. Integrating human related errors with technical errors to determine causes behind offshore accidents

    E-Print Network [OSTI]

    Aamodt, Agnar

    ... of offshore accidents there is a continuous focus on safety improvements. An improved evaluation method ... Concepts in the model are structured in hierarchical categories, based on well-established knowledge ...

  5. Stereotype threat can reduce older adults' memory errors

    E-Print Network [OSTI]

    Mather, Mara

    Stereotype threat often incurs the cost of reducing the amount of information ...

  6. Error or Uncertainty Analysis of Measurement (M. Kostic, www.kostic.niu.edu)

    E-Print Network [OSTI]

    Kostic, Milivoje M.

    [Slide residue: exhaust-gas sampling schematic with gas analysis (SO2, NO, NO2, CO, CO2, THC, O2), sample tanks, and particle/gas probes; DMA: Differential Mobility Analyzer; CNC: Condensation Nuclei Counter; HPLPC: High Pressure Large Particle Counter.]

  7. Laser Phase Errors in Seeded FELs

    SciTech Connect (OSTI)

    Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC

    2012-03-28T23:59:59.000Z

    Harmonic seeding of free electron lasers has attracted significant attention from the promise of transform-limited pulses in the soft X-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.

  8. On the Error in QR Integration

    E-Print Network [OSTI]

    Dieci, Luca; Van Vleck, Erik

    2008-03-07T23:59:59.000Z

    ... $[R(t_k, t_{k-1}) + E_k] \cdots [R(t_2, t_1) + E_2][R(t_1, t_0) + E_1]\,R(t_0)$, $k = 1, 2, \ldots$, where $Q(t_k)$ is the exact Q-factor at $t_k$ and the triangular transitions $R(t_j, t_{j-1})$ are also the exact ones. Moreover, the factors $E_j$, $j = 1, \ldots, k$, are bounded in norm by the local error ... committed during integration of the relevant differential equations; see Theorems 3.1 and 3.16.” We will henceforth simply write (2.7) $\|E_j\| \leq \epsilon$, $j = 1, 2, \ldots$, and stress that $\epsilon$ is computable, in fact controllable, in terms of local error tolerances...

  9. 2002 E.M. Aboulhamid 1 Methodology

    E-Print Network [OSTI]

    Aboulhamid, El Mostapha

    [Slide residue: a library and a methodology; create a cycle-accurate model of software algorithms and hardware architecture; data types: 4-valued logic, bits and bit vectors, arbitrary-precision integers, fixed-point types; structural libraries: verification library, TLM library, etc.; methodology-specific libraries: master/slave library, etc.]

  10. High Performance Dense Linear System Solver with Soft Error Resilience

    E-Print Network [OSTI]

    Dongarra, Jack

    Peng Du, Piotr Luszczek ... systems, and in some scientific applications C/R is not applicable for soft errors at all due to error ... a high performance dense linear system solver with soft error resilience. By adopting a mathematical ...

  11. Distribution of Wind Power Forecasting Errors from Operational Systems (Presentation)

    SciTech Connect (OSTI)

    Hodge, B. M.; Ela, E.; Milligan, M.

    2011-10-01T23:59:59.000Z

    This presentation offers new data and statistical analysis of wind power forecasting errors in operational systems.

  12. Verifying Volume Rendering Using Discretization Error Analysis

    E-Print Network [OSTI]

    Kirby, Mike

    Tiago Etiene, Daniel Jönsson, Timo ... We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization ...

  13. MEASUREMENT AND CORRECTION OF ULTRASONIC ANEMOMETER ERRORS

    E-Print Network [OSTI]

    Heinemann, Detlev

    ... commonly show systematic errors depending on wind speed due to inaccurate ultrasonic transducer mounting ... three-dimensional wind speed time series. Results for the variance and power spectra are shown. ... wind speeds with ultrasonic anemometers: the measured flow is distorted by the probe head ...

  14. Hierarchical Classification of Documents with Error Control

    E-Print Network [OSTI]

    King, Kuo Chin Irwin

    Chun-hung Cheng, Jian Tang, Ada Wai-chee Fu. Classification is a function that matches a new object with one of the predefined classes. Document classification is characterized by the large number of attributes involved in the objects (documents). The traditional method ...

  15. Hierarchical Classification of Documents with Error Control

    E-Print Network [OSTI]

    Fu, Ada Waichee

    Chun-hung Cheng, Jian Tang, Ada Wai-chee Fu. Classification is a function that matches a new object with one of the predefined classes. Document classification is characterized by the large number of attributes involved in the objects (documents ...

  16. Discussion on common errors in analyzing sea level accelerations, solar trends and global warming

    E-Print Network [OSTI]

    Scafetta, Nicola

    2013-01-01T23:59:59.000Z

    Errors in applying regression models and wavelet filters used to analyze geophysical signals are discussed: (1) multidecadal natural oscillations (e.g. the quasi 60-year Atlantic Multidecadal Oscillation (AMO), North Atlantic Oscillation (NAO) and Pacific Decadal Oscillation (PDO)) need to be taken into account for properly quantifying anomalous accelerations in tide gauge records such as in New York City; (2) uncertainties and multicollinearity among climate forcing functions prevent a proper evaluation of the solar contribution to the 20th century global surface temperature warming using overloaded linear regression models during the 1900-2000 period alone; (3) when periodic wavelet filters, which require that a record is pre-processed with a reflection methodology, are improperly applied to decompose non-stationary solar and climatic time series, Gibbs boundary artifacts emerge, yielding misleading physical interpretations. By correcting these errors and using optimized regression models that reduce multicollinearity ...

  17. Plasma dynamics and a significant error of macroscopic averaging

    E-Print Network [OSTI]

    Marek A. Szalek

    2005-05-22T23:59:59.000Z

    The methods of macroscopic averaging used to derive the macroscopic Maxwell equations from electron theory are methodologically incorrect and lead in some cases to a substantial error. For instance, these methods do not take into account the existence of a macroscopic electromagnetic field EB, HB generated by carriers of electric charge moving in a thin layer adjacent to the boundary of the physical region containing these carriers. If this boundary is impenetrable for charged particles, then in its immediate vicinity all carriers are accelerated towards the inside of the region. The existence of the privileged direction of acceleration results in the generation of the macroscopic field EB, HB. The contributions to this field from individual accelerated particles are described with a sufficient accuracy by the Lienard-Wiechert formulas. In some cases the intensity of the field EB, HB is significant not only for deuteron plasma prepared for a controlled thermonuclear fusion reaction but also for electron plasma in conductors at room temperatures. The corrected procedures of macroscopic averaging will induce some changes in the present form of plasma dynamics equations. The modified equations will help to design improved systems of plasma confinement.

  18. Verification of unfold error estimates in the UFO code

    SciTech Connect (OSTI)

    Fehl, D.L.; Biggs, F.

    1996-07-01T23:59:59.000Z

    Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error); this built-in estimate was verified by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.

  19. Development of a statistically based access delay timeline methodology.

    SciTech Connect (OSTI)

    Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane; Hendrickson, Stacey M. Langfitt

    2013-02-01T23:59:59.000Z

    The charter for adversarial delay is to hinder access to critical resources through the use of physical systems that increase an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating the times required to complete each task, with little regard for uncertainty, complexity, or the decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methodologies, taking into account small sample size, expert judgment, human factors, and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results in making informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness with lower cost.
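
    The move from single worst-case times to a timeline distribution can be illustrated with a small Monte Carlo. A sketch with illustrative assumptions (the lognormal per-task times and the specific numbers are stand-ins, not the report's data or model):

      import numpy as np

      rng = np.random.default_rng(5)
      task_medians = np.array([30.0, 90.0, 45.0])     # seconds per barrier/task (made up)
      task_spread = 0.4                               # lognormal sigma (assumed uncertainty)

      draws = rng.lognormal(np.log(task_medians), task_spread, size=(10_000, 3))
      path_delay = draws.sum(axis=1)                  # total delay along the path
      print(np.percentile(path_delay, [5, 50, 95]))   # timeline distribution summary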

  20. Covariance Evaluation Methodology for Neutron Cross Sections

    SciTech Connect (OSTI)

    Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.

    2008-09-01T23:59:59.000Z

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in the thermal, resolved resonance, unresolved resonance, and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and the Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization, and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including a relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding the evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.

  1. Analysis of Cloud Variability and Sampling Errors in Surface and Satellite Measurements

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  2. Statistical Error analysis of Nucleon-Nucleon phenomenological potentials

    E-Print Network [OSTI]

    R. Navarro Perez; J. E. Amaro; E. Ruiz Arriola

    2014-06-10T23:59:59.000Z

    Nucleon-Nucleon potentials are commonplace in nuclear physics and are determined from a finite number of experimental data with limited precision sampling the scattering process. We study the statistical assumptions implicit in the standard least squares fitting procedure and apply, along with more conventional tests, a tail-sensitive quantile-quantile test as a simple and confident tool to verify the normality of residuals. We show that the fulfilment of normality tests is linked to a judicious and consistent selection of a nucleon-nucleon database. These considerations prove crucial to a proper statistical error analysis and uncertainty propagation. We illustrate these issues by analyzing about 8000 published proton-proton and neutron-proton scattering data. This enables the construction of potentials meeting all statistical requirements necessary for statistical uncertainty estimates in nuclear structure calculations.
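
    As a rough illustration of the residual-normality check, here is a minimal Python sketch on synthetic residuals; SciPy's standard tests are only stand-ins for the authors' tail-sensitive quantile-quantile test:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      residuals = rng.standard_normal(500)   # synthetic (data - model) / error residuals

      # Standard normality checks; a small p-value flags non-normal residuals,
      # i.e., database entries inconsistent with the statistical assumptions.
      shapiro_stat, shapiro_p = stats.shapiro(residuals)
      ks_stat, ks_p = stats.kstest(residuals, "norm")
      print(f"Shapiro-Wilk p = {shapiro_p:.3f}, Kolmogorov-Smirnov p = {ks_p:.3f}")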

  3. A generalized optimization methodology for isotope management

    E-Print Network [OSTI]

    Massie, Mark (Mark Edward)

    2010-01-01T23:59:59.000Z

    This research, funded by the Department of Energy's Advanced Fuel Cycle Initiative Fellowship, was focused on developing a new approach to studying the nuclear fuel cycle: instead of using the trial and error approach ...

  4. ESPC IDIQ Contract Sample

    Broader source: Energy.gov [DOE]

    Document displays a sample indefinite delivery, indefinite quantity (IDIQ) energy savings performance contract (ESPC).

  5. Initial quantification of human error associated with specific instrumentation and control system components in licensed nuclear power plants

    SciTech Connect (OSTI)

    Luckas, W.J. Jr.; Lettieri, V.; Hall, R.E.

    1982-02-01T23:59:59.000Z

    This report provides a methodology for the initial quantification of specific categories of human errors made in conjunction with several instrumentation and control (I and C) system components operated, maintained, and tested in licensed nuclear power plants. The resultant human error rates (HER) provide the first real systems bases of comparison for the existing derived and/or best judgement equivalent set of such rates or probabilities. These calculated error rates also provide the first real indication of human performance as it relates directly to specific tasks in nuclear plants. This work of developing specific HERs is both an extension of and an outgrowth of the generic HERs developed for safety system pumps and valves as reported in NUREG/CR-1880.

  6. NASA Surface meteorology and Solar Energy: Methodology

    E-Print Network [OSTI]

    Firestone, Jeremy

    Climatological profiles from the NASA Surface meteorology and Solar Energy data set are used for designing Renewable Energy Technology (RET) projects, and an estimate of the renewable energy resource potential can be determined for any location on the globe. That estimate may

  7. Handling uncertainty in DEX methodology

    E-Print Network [OSTI]

    Bohanec, Marko

    URPDM2010. Martin Znidarsic, Jozef Stefan Institute, Jamova cesta 39, martin.znidarsic@ijs.si; Marko Bohanec, Jozef Stefan Institute, Jamova cesta 39, marko

  8. Geologic selection methodology for transportation corridor routing

    E-Print Network [OSTI]

    Shultz, Karin Wilson

    2002-01-01T23:59:59.000Z

    The advancement of transportation systems into underground corridors has exposed a lack of planning techniques and processes for long, linear, cut-and-cover tunneling route transportation systems. The proposed methodology is tested...

  9. Flexible Electronics: Materials, Circuits, and Design Methodology

    E-Print Network [OSTI]

    Kim, Chris H.

    Presentation by Chris H. Kim, Dept. of Electrical... Slide topics: today's flexible electronics (display, solar cell, battery); next-generation flexible electronics, including a proposed EEG system built on a flexible electrode sheet.

  10. Quantum Latin squares and unitary error bases

    E-Print Network [OSTI]

    Benjamin Musto; Jamie Vicary

    2015-04-10T23:59:59.000Z

    In this paper we introduce quantum Latin squares, combinatorial quantum objects which generalize classical Latin squares, and investigate their applications in quantum computer science. Our main results are on applications to unitary error bases (UEBs), basic structures in quantum information which lie at the heart of procedures such as teleportation, dense coding and error correction. We present a new method for constructing a UEB from a quantum Latin square equipped with extra data. Developing construction techniques for UEBs has been a major activity in quantum computation, with three primary methods proposed: shift-and-multiply, Hadamard, and algebraic. We show that our new approach simultaneously generalizes the shift-and-multiply and Hadamard methods. Furthermore, we explicitly construct a UEB using our technique which we prove cannot be obtained from any of these existing methods.

  11. Improving Memory Error Handling Using Linux

    SciTech Connect (OSTI)

    Carlton, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Blanchard, Sean P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Debardeleben, Nathan A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-07-25T23:59:59.000Z

    As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, then the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable to deal with by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and results in reducing both hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. It will not be feasible without memory error handling to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
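
    As a rough sketch of the mechanism (an assumption-laden illustration, not LANL's production tooling): Linux kernels built with memory-failure support expose sysfs nodes for offlining individual pages, and a monitoring process could drive them like this; the physical address is a placeholder, and the sysfs path should be verified on your kernel:

      # Ask the kernel to soft-offline one physical page (requires root and
      # CONFIG_MEMORY_FAILURE; the node is documented under Documentation/ABI).
      SOFT_OFFLINE = "/sys/devices/system/memory/soft_offline_page"

      def soft_offline(phys_addr: int) -> None:
          # The kernel expects the hex physical address of the faulty page.
          with open(SOFT_OFFLINE, "w") as f:
              f.write(f"0x{phys_addr:x}")

      if __name__ == "__main__":
          soft_offline(0x12345000)   # placeholder address of a page reported as faulty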

  12. Systematic Errors in measurement of b1

    SciTech Connect (OSTI)

    Wood, S A

    2014-10-27T23:59:59.000Z

    A class of spin observables can be obtained from the relative difference of or asymmetry between cross sections of different spin states of beam or target particles. Such observables have the advantage that the normalization factors needed to calculate absolute cross sections from yields often divide out or cancel to a large degree in constructing asymmetries. However, normalization factors can change with time, giving different normalization factors for different target or beam spin states, leading to systematic errors in asymmetries in addition to those determined from statistics. Rapidly flipping spin orientation, such as what is routinely done with polarized beams, can significantly reduce the impact of these normalization fluctuations and drifts. Target spin orientations typically require minutes to hours to change, versus fractions of a second for beams, making systematic errors for observables based on target spin flips more difficult to control. Such systematic errors from normalization drifts are discussed in the context of the proposed measurement of the deuteron b(1) structure function at Jefferson Lab.

  13. Message passing in fault tolerant quantum error correction

    E-Print Network [OSTI]

    Z. W. E. Evans; A. M. Stephens

    2008-06-13T23:59:59.000Z

    Inspired by Knill's scheme for message passing error detection, here we develop a scheme for message passing error correction for the nine-qubit Bacon-Shor code. We show that for two levels of concatenated error correction, where classical information obtained at the first level is used to help interpret the syndrome at the second level, our scheme will correct all cases with four physical errors. This results in a reduction of the logical failure rate relative to conventional error correction by a factor proportional to the reciprocal of the physical error rate.

  14. A methodology integrating formal and informal software development

    E-Print Network [OSTI]

    A methodology integrating formal and informal software development. By distinguishing several dimensions, our methodology covers requirements engineering and logical system design. It is particularly suited... Barbara Paech, Institut fur

  15. Methodology Guidelines on Life Cycle Assessment of Photovoltaic Electricity

    E-Print Network [OSTI]

    Methodology Guidelines on Life Cycle Assessment of Photovoltaic Electricity. IEA-PVPS Task 12, International Energy Agency Photovoltaic Power Systems Programme.

  16. Rain sampling device

    DOE Patents [OSTI]

    Nelson, D.A.; Tomich, S.D.; Glover, D.W.; Allen, E.V.; Hales, J.M.; Dana, M.T.

    1991-05-14T23:59:59.000Z

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of the precipitation from the chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device. 11 figures.

  17. Rain sampling device

    DOE Patents [OSTI]

    Nelson, Danny A. (Richland, WA); Tomich, Stanley D. (Richland, WA); Glover, Donald W. (Prosser, WA); Allen, Errol V. (Benton City, WA); Hales, Jeremy M. (Kennewick, WA); Dana, Marshall T. (Richland, WA)

    1991-01-01T23:59:59.000Z

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of said precipitation from said chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device.

  18. Examination of radioactive decay methodology in the HASCAL code

    SciTech Connect (OSTI)

    Steffler, R.S. [Texas A and M Univ., College Station, TX (United States). Dept. of Nuclear Engineering; Ryman, J.C.; Gehin, J.C.; Worley, B.A. [Oak Ridge National Lab., TN (United States)

    1998-01-01T23:59:59.000Z

    The HASCAL 2.0 code provides dose estimates for nuclear, chemical, and biological facility accident and terrorist weapon strike scenarios. In the analysis of accidents involving radioactive material, an approximate method is used to account for decay during transport. Rather than perform the nuclide decay during the atmospheric transport calculation, the decay is performed a priori and a table look-up method is used during the transport of a depositing tracer particle and a non-depositing (gaseous) tracer particle. In order to investigate the accuracy of this decay methodology, two decay models were created using the ORIGEN2 computer program. The first is a HASCAL-like model that treats decay and growth of all nuclides explicitly over the time interval specified for atmospheric transport, but does not change the relative mix of depositing and non-depositing nuclides due to deposition to the ground, nor does it treat resuspension. The second model explicitly includes resuspension as well as separate decay of the nuclides in the atmosphere and on the ground at each deposition time step. For simplicity, both of these models use a one-dimensional layer model for the atmospheric transport. An additional investigation was performed to determine the accuracy of the HASCAL-like model in separately following Cs-137 and I-131. The results from this study show that the HASCAL decay model compares closely with the more rigorous model, with the computed doses generally within one percent (maximum error of 7 percent) over 48 hours following the release. The models showed no difference for Cs-137 and a maximum error of 2.5 percent for I-131 over the 96 hours following release.

  1. Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization

    SciTech Connect (OSTI)

    LaMar, E; Hamann, B; Joy, K I

    2001-10-16T23:59:59.000Z

    Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. The authors extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed once for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
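
    A minimal Python sketch of that bookkeeping for byte data follows; the per-pair error term is a placeholder, and the point is that the table of unique (original, approximation) pairs is built once and cheaply re-weighted after each transfer-function edit:

      import numpy as np

      def build_pair_table(original, approx):
          # One 256x256 count table covers every possible byte-valued voxel pair.
          table = np.zeros((256, 256), dtype=np.int64)
          np.add.at(table, (original.ravel(), approx.ravel()), 1)
          return table

      def total_error(table, transfer):
          # Evaluate the error function only on the (at most 65536) unique pairs,
          # then weight by frequency.
          o, a = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
          per_pair = np.abs(transfer(o) - transfer(a))   # placeholder per-pair error term
          return float((per_pair * table).sum())

      rng = np.random.default_rng(2)
      original = rng.integers(0, 256, size=(32, 32, 32), dtype=np.uint8)
      approx = (original // 8) * 8            # crude stand-in for a coarser resolution level
      table = build_pair_table(original, approx)
      print(total_error(table, transfer=lambda v: v / 255.0))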

  2. A Comparative Study into Architecture-Based Safety Evaluation Methodologies using AADL's Error Annex and Failure Propagation Models

    E-Print Network [OSTI]

    Han, Jun

    Techniques such as Failure Mode and Effects Analysis (FMEA) [25] are used to create evidence that the system fulfils its safety requirements; models from the design phase are used to automatically produce Fault Trees and FMEA tables based on an architecture

  3. New Methodologies for Analysis of Premixed Charge Compression...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    New Methodologies for Analysis of Premixed Charge Compression Ignition Engines. Presentation given at...

  4. Modeling of Diesel Exhaust Systems: A methodology to better simulate...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Modeling of Diesel Exhaust Systems: A methodology to better simulate soot reactivity. Discussed...

  5. aij projects methodology: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A New Project Execution Methodology; Integrating Project Management Principles with Quality Project Execution Methodologies. University of Kansas - KU...

  6. STEPS: A Grid Search Methodology for Optimized Peptide Identification...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    STEPS: A Grid Search Methodology for Optimized Peptide Identification Filtering of MS/MS Database Search Results.

  7. Methodology for Carbon Accounting of Grouped Mosaic and Landscape...

    Open Energy Info (EERE)

    Methodology for Carbon Accounting of Grouped Mosaic and Landscape-scale REDD Projects.

  8. Particle Measurement Methodology: Comparison of On-road and Lab...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Particle Measurement Methodology: Comparison of On-road and Lab Diesel Particle Size Distributions.

  9. Application of Random Vibration Theory Methodology for Seismic...

    Office of Environmental Management (EM)

    Application of Random Vibration Theory Methodology for Seismic Soil-Structure Interaction Analysis.

  10. Barr Engineering Statement of Methodology Rosemount Wind Turbine...

    Office of Environmental Management (EM)

    Barr Engineering Statement of Methodology: Rosemount Wind Turbine Simulations by Truescape Visual Reality, DOE/EA-1791 (May 2010).

  11. COMPUTER SCIENCE SAMPLE PROGRAM

    E-Print Network [OSTI]

    Gering, Jon C.

    COMPUTER SCIENCE SAMPLE PROGRAM (First Math Course: MATH 198). This sample program suggests one way to sequence courses such as CS 180: Foundations of Computer Science I, CS 181: Foundations of Computer Science II, CS 191...

  12. Quantum Error Correcting Subsystem Codes From Two Classical Linear Codes

    E-Print Network [OSTI]

    Dave Bacon; Andrea Casaccino

    2006-10-17T23:59:59.000Z

    The essential insight of quantum error correction was that quantum information can be protected by suitably encoding this quantum information across multiple independently erred quantum systems. Recently it was realized that, since the most general method for encoding quantum information is to encode it into a subsystem, there exists a novel form of quantum error correction beyond the traditional quantum error correcting subspace codes. These new quantum error correcting subsystem codes differ from subspace codes in that their quantum correcting routines can be considerably simpler than related subspace codes. Here we present a class of quantum error correcting subsystem codes constructed from two classical linear codes. These codes are the subsystem versions of the quantum error correcting subspace codes which are generalizations of Shor's original quantum error correcting subspace codes. For every Shor-type code, the codes we present give a considerable savings in the number of stabilizer measurements needed in their error recovery routines.

  13. Reply To "Comment on 'Quantum Convolutional Error-Correcting Codes' "

    E-Print Network [OSTI]

    H. F. Chau

    2005-06-02T23:59:59.000Z

    In their comment, de Almeida and Palazzo discovered an error in my earlier paper concerning the construction of quantum convolutional codes (quant-ph/9712029). This error can be repaired by modifying the method of code construction.

  14. Evolved Error Management Biases in the Attribution of Anger

    E-Print Network [OSTI]

    Galperin, Andrew

    2012-01-01T23:59:59.000Z

  15. Critical infrastructure systems of systems assessment methodology.

    SciTech Connect (OSTI)

    Sholander, Peter E.; Darby, John L.; Phelan, James M.; Smith, Bryan; Wyss, Gregory Dane; Walter, Andrew; Varnado, G. Bruce; Depoy, Jennifer Mae

    2006-10-01T23:59:59.000Z

    Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies that separately consider physical security and cyber security. This research has developed a risk assessment methodology that explicitly accounts for both physical and cyber security, while preserving the traditional security paradigm of detect, delay, and respond. This methodology also accounts for the condition that a facility may be able to recover from or mitigate the impact of a successful attack before serious consequences occur. The methodology uses evidence-based techniques (which are a generalization of probability theory) to evaluate the security posture of the cyber protection systems. Cyber threats are compared against cyber security posture using a category-based approach nested within a path-based analysis to determine the most vulnerable cyber attack path. The methodology summarizes the impact of a blended cyber/physical adversary attack in a conditional risk estimate where the consequence term is scaled by a "willingness to pay" avoidance approach.

  16. Clustered Error Correction of Codeword-Stabilized Quantum Codes

    E-Print Network [OSTI]

    Yunfan Li; Ilya Dumer; Leonid P. Pryadko

    2010-03-08T23:59:59.000Z

    Codeword stabilized (CWS) codes are a general class of quantum codes that includes stabilizer codes and many families of non-additive codes with good parameters. For such a non-additive code correcting all t-qubit errors, we propose an algorithm that employs a single measurement to test all errors located on a given set of t qubits. Compared with exhaustive error screening, this reduces the total number of measurements required for error recovery by a factor of about 3^t.

  17. Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications

    E-Print Network [OSTI]

    Garcia, Tanya

    2012-10-19T23:59:59.000Z

    Compared to the models considered in Tsiatis and Ma (2004), our model is less stringent because it allows an unspecified model error distribution and an unspecified covariate distribution, not just the latter. With an unspecified model error distribution, the RMM... with measurement error is a very different problem compared to the model considered in Tsiatis and Ma (2004), where the model error distribution has a known parametric form. Consequently, the semiparametric treatment here is also drastically different. Our...

  18. Error Analysis in Nuclear Density Functional Theory

    E-Print Network [OSTI]

    Nicolas Schunck; Jordan D. McDonnell; Jason Sarich; Stefan M. Wild; Dave Higdon

    2014-07-11T23:59:59.000Z

    Nuclear density functional theory (DFT) is the only microscopic, global approach to the structure of atomic nuclei. It is used in numerous applications, from determining the limits of stability to gaining a deep understanding of the formation of elements in the universe or the mechanisms that power stars and reactors. The predictive power of the theory depends on the amount of physics embedded in the energy density functional as well as on efficient ways to determine a small number of free parameters and solve the DFT equations. In this article, we discuss the various sources of uncertainties and errors encountered in DFT and possible methods to quantify these uncertainties in a rigorous manner.

  19. Franklin Trouble Shooting and Error Messages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  20. Edison Trouble Shooting and Error Messages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  1. Susceptibility of Commodity Systems and Software to Memory Soft Errors

    E-Print Network [OSTI]

    Riska, Alma

    Alan Messer, Member, IEEE. Abstract: It is widely understood that most system downtime is accounted for by programming errors; however, a growing cause of downtime may stem from transient errors in computer system hardware due to external factors, such as cosmic rays. This work

  2. A Taxonomy of Number Entry Error

    E-Print Network [OSTI]

    Cairns, Paul

    Sarah Wiseman, UCLIC, MPEB, Malet Place, London, WC1E 7JE. ...and the subsequent process of creating a taxonomy of errors from the information gathered. A total of 350 errors were collected. These codes are then organised into a taxonomy similar to that of Zhang et al (2004). We show how

  3. A Taxonomy of Number Entry Error

    E-Print Network [OSTI]

    Subramanian, Sriram

    Sarah Wiseman, UCLIC, MPEB, Malet Place, London, WC1E 7JE. ...and the subsequent process of creating a taxonomy of errors from the information gathered. A total of 345 errors were collected. These codes are then organised into a taxonomy similar to that of Zhang et al (2004). We show how

  4. Predictors of Threat and Error Management: Identification of Core Nontechnical Skills

    E-Print Network [OSTI]

    In normal flight operations, crews are faced with a variety of external threats and commit a range of errors. The management of these threats and errors therefore forms an essential element of enhancing performance and minimizing risk

  5. Error rate and power dissipation in nano-logic devices

    E-Print Network [OSTI]

    Kim, Jong Un

    2004-01-01T23:59:59.000Z

    Current-controlled logic and single electron logic processors have been investigated with respect to thermally induced bit errors. A maximal error rate for both logic processors is regarded as one bit-error/year/chip. A maximal clock frequency...
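
    The one bit-error/year/chip criterion translates directly into a per-operation error bound; as a back-of-envelope sketch (the clock rate and device count are illustrative, not the thesis's values):

      P_{\mathrm{err}} \le \frac{1}{f_{\mathrm{clk}}\, N \, T_{\mathrm{year}}}
        \approx \frac{1}{(10^{9}\,\mathrm{Hz})(10^{9}\,\mathrm{devices})(3.2\times 10^{7}\,\mathrm{s})}
        \approx 3\times 10^{-26} \text{ per device-operation.}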

  6. Bolstered Error Estimation

    E-Print Network [OSTI]

    Braga-Neto, Ulisses

    The bolstered error estimators proposed in this paper are implemented as part of a larger library for classification and error estimation; bolstering is based on a smoothed empirical distribution of the data. It has a direct geometric interpretation and can be easily applied to any classification rule, and can be viewed as smoothed error estimation. In some important cases, such as a linear classification rule with a Gaussian

  7. A New Project Execution Methodology; Integrating Project Management Principles with Quality Project Execution Methodologies

    E-Print Network [OSTI]

    Schriner, Jesse J.

    2008-07-25T23:59:59.000Z

    Contents include: ...Approach; The ITIL Approach; Quality Project Methodologies Summary. Reference: Six Sigma for IT Management, Van Haren Publishing, 2006. The main purpose of this book is to both introduce Six Sigma and the Information Technology Infrastructure Library (ITIL) and then integrate the two methodologies for application...

  8. Software development methodology for high consequence systems

    SciTech Connect (OSTI)

    Baca, L.S.; Bouchard, J.F.; Collins, E.W.; Eisenhour, M.; Neidigk, D.D.; Shortencarier, M.J.; Trellue, P.A.

    1997-10-01T23:59:59.000Z

    This document describes a Software Development Methodology for High Consequence Systems. A High Consequence System is a system whose failure could lead to serious injury, loss of life, destruction of valuable resources, unauthorized use, damaged reputation or loss of credibility or compromise of protected information. This methodology can be scaled for use in projects of any size and complexity and does not prescribe any specific software engineering technology. Tasks are described that ensure software is developed in a controlled environment. The effort needed to complete the tasks will vary according to the size, complexity, and risks of the project. The emphasis of this methodology is on obtaining the desired attributes for each individual High Consequence System.

  9. Turbulent mixing in ducts, theory and experiment application to aerosol single point sampling

    E-Print Network [OSTI]

    Langari, Abdolreza

    1997-01-01T23:59:59.000Z

    The Environmental Protection Agency (EPA) has announced rules for continuous emissions monitoring (CEM) of stacks and ducts in nuclear facilities. EPA has recently approved use of Alternative Reference Methodologies (ARM) for air sampling in nuclear...

  10. Systematic Comparison of Operating Reserve Methodologies: Preprint

    SciTech Connect (OSTI)

    Ibanez, E.; Krad, I.; Ela, E.

    2014-04-01T23:59:59.000Z

    Operating reserve requirements are a key component of modern power systems, and they contribute to maintaining reliable operations with minimum economic impact. No universal method exists for determining reserve requirements, so there is a need for a thorough study and performance comparison of the different existing methodologies. Increasing penetrations of variable generation (VG) on electric power systems are poised to increase system uncertainty and variability, so the need for additional reserve also increases. This paper presents background information on operating reserve and its relationship to VG. A consistent comparison of three methodologies to calculate regulating and flexibility reserve in systems with VG is performed.

  11. Analytic Study of Performance of Error Estimators for Linear Discriminant Analysis with Applications in Genomics 

    E-Print Network [OSTI]

    Zollanvari, Amin

    2012-02-14T23:59:59.000Z

    Committee: Aniruddha Datta, Guy L. Curry; Head of Department: Costas N. Georghiades. December 2010. Major Subject: Electrical Engineering. List of tables: I. Minimum sample size n (n0 = n1 = n) for desired (n, 0.5) in the univariate case; II. Genes selected using the validity-goodness model selection...

  12. Bounds on Mutual Information Rates of Noisy Channels with Timing Errors

    E-Print Network [OSTI]

    Kavcic, Aleksandar

    of this information rate (and its supremum) is still an open problem. In this paper, we study a more general case than previously considered. The $i$-th sample at the receiver is $Y_i = Y(iT + E_i) = \sum_{k=-\infty}^{+\infty} X_k \, h(iT - kT + E_i) + N_i = \sum_{k=i-q+\lfloor E_i/T \rfloor}^{i+q+\lfloor E_i/T \rfloor} X_k \, h(iT - kT + E_i) + N_i$, where $E_i$ is the timing error. For simplicity, we shall assume that $N_i$

  13. Technological Advancements and Error Rates in Radiation Therapy Delivery

    SciTech Connect (OSTI)

    Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)

    2011-11-15T23:59:59.000Z

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
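
    The rate comparison uses Fisher's exact test on fraction counts; a minimal Python sketch with illustrative placeholder counts (chosen to give rates near 0.03% and 0.07%, not the study's exact table):

      from scipy.stats import fisher_exact

      # 2x2 table: [errors, error-free fractions] for each technique (placeholders).
      imrt = [30, 99970]      # ~0.03% error rate
      conv = [70, 99930]      # ~0.07% error rate
      odds_ratio, p_value = fisher_exact([imrt, conv])
      print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")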

  14. Locked modes and magnetic field errors in MST

    SciTech Connect (OSTI)

    Almagri, A.F.; Assadi, S.; Prager, S.C.; Sarff, J.S.; Kerst, D.W.

    1992-06-01T23:59:59.000Z

    In the MST reversed field pinch magnetic oscillations become stationary (locked) in the lab frame as a result of a process involving interactions between the modes, sawteeth, and field errors. Several helical modes become phase locked to each other to form a rotating localized disturbance, the disturbance locks to an impulsive field error generated at a sawtooth crash, the error fields grow monotonically after locking (perhaps due to an unstable interaction between the modes and field error), and over the tens of milliseconds of growth confinement degrades and the discharge eventually terminates. Field error control has been partially successful in eliminating locking.

  15. Determining mutant spectra of three RNA viral samples using ultra-deep sequencing

    SciTech Connect (OSTI)

    Chen, H

    2012-06-06T23:59:59.000Z

    RNA viruses have extremely high mutation rates that enable the virus to adapt to new host environments and even jump from one species to another. As part of a viral transmission study, three viral samples collected from naturally infected animals were sequenced using Illumina paired-end technology at ultra-deep coverage. In order to determine the mutant spectra within the viral quasispecies, it is critical to understand the sequencing error rates and control for false positive calls of viral variants (point mutations). I will estimate the sequencing error rate from two control sequences and characterize the mutant spectra in the natural samples with this error rate.
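
    The variant-calling step this implies, testing each candidate site against the control-derived error rate, can be sketched in Python as follows; the error rate, depths, and threshold are hypothetical:

      from scipy.stats import binomtest

      ERROR_RATE = 0.002   # hypothetical per-base error rate estimated from the controls

      def is_true_variant(alt_reads, depth, alpha=1e-6):
          # Keep only sites whose minor-allele count exceeds what sequencing
          # error alone would plausibly produce at this depth.
          result = binomtest(alt_reads, depth, p=ERROR_RATE, alternative="greater")
          return result.pvalue < alpha

      print(is_true_variant(60, 10000))   # 0.6% minor variant at ultra-deep coverage -> True
      print(is_true_variant(25, 10000))   # consistent with the 0.2% error rate -> False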

  16. Supervised classification of microbiota mitigates mislabeling errors

    E-Print Network [OSTI]

    Kelley, Scott

    of DNA sequencing technologies and concomitant advances in bioinformatics methods are revolutionizing... to be useless, but what if only a few of the labels are wrong? After intentionally mislabeling samples

  17. A BASIS FOR MODIFYING THE TANK 12 COMPOSITE SAMPLING DESIGN

    SciTech Connect (OSTI)

    Shine, G.

    2014-11-25T23:59:59.000Z

    The SRR sampling campaign to obtain residual solids material from the Savannah River Site (SRS) Tank Farm Tank 12 primary vessel resulted in obtaining appreciable material in all 6 planned source samples from the mound strata but only in 5 of the 6 planned source samples from the floor stratum. Consequently, the design of the compositing scheme presented in the Tank 12 Sampling and Analysis Plan, Pavletich (2014a), must be revised. Analytical Development of SRNL statistically evaluated the sampling uncertainty associated with using various compositing arrays and splitting one or more samples for compositing. The variance of the simple mean of composite sample concentrations is a reasonable standard to investigate the impact of the following sampling options. Composite Sample Design Option (a). Assign only 1 source sample from the floor stratum and 1 source sample from each of the mound strata to each of the composite samples. Each source sample contributes material to only 1 composite sample. Two source samples from the floor stratum would not be used. Composite Sample Design Option (b). Assign 2 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This infers that one source sample from the floor must be used twice, with 2 composite samples sharing material from this particular source sample. All five source samples from the floor would be used. Composite Sample Design Option (c). Assign 3 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This infers that several of the source samples from the floor stratum must be assigned to more than one composite sample. All 5 source samples from the floor would be used. Using fewer than 12 source samples will increase the sampling variability over that of the Basic Composite Sample Design, Pavletich (2013). Considering the impact to the variance of the simple mean of the composite sample concentrations, the recommendation is to construct each sample composite using four or five source samples. Although the variance using 5 source samples per composite sample (Composite Sample Design Option (c)) was slightly less than the variance using 4 source samples per composite sample (Composite Sample Design Option (b)), there is no practical difference between those variances. This does not consider that the measurement error variance, which is the same for all composite sample design options considered in this report, will further dilute any differences. Composite Sample Design Option (a) had the largest variance for the mean concentration in the three composite samples and should be avoided. These results are consistent with Pavletich (2014b) which utilizes a low elevation and a high elevation mound source sample and two floor source samples for each composite sample. Utilizing the four source samples per composite design, Pavletich (2014b) utilizes aliquots of Floor Sample 4 for two composite samples.
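
    The variance comparison driving the recommendation can be imitated with a small Monte Carlo; everything below (unit-variance sources, equal-weight composites, the particular sharing patterns) is a hypothetical reading of options (b) and (c), not the SRNL calculation:

      import numpy as np

      rng = np.random.default_rng(3)

      def var_of_mean(assignments, n_sources=11, trials=200_000):
          # Composite concentration = average of its assigned source samples; sharing
          # a source between composites correlates them and raises the variance of
          # the simple mean across the three composites.
          draws = rng.standard_normal((trials, n_sources))
          comps = np.stack([draws[:, idx].mean(axis=1) for idx in assignments], axis=1)
          return comps.mean(axis=1).var()

      # Sources 0-4: floor samples; 5-10: mound samples (indices are illustrative).
      opt_b = [[0, 1, 5, 6], [2, 3, 7, 8], [4, 0, 9, 10]]           # 4 sources each, one floor reused
      opt_c = [[0, 1, 2, 5, 6], [2, 3, 4, 7, 8], [4, 0, 1, 9, 10]]  # 5 sources each, floors reused
      print(var_of_mean(opt_b), var_of_mean(opt_c))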

  18. Evaluating and Minimizing Distributed Cavity Phase Errors in Atomic Clocks

    E-Print Network [OSTI]

    Ruoxin Li; Kurt Gibble

    2010-08-09T23:59:59.000Z

    We perform 3D finite element calculations of the fields in microwave cavities and analyze the distributed cavity phase errors of atomic clocks that they produce. The fields of cylindrical cavities are treated as an azimuthal Fourier series. Each of the lowest components produces clock errors with unique characteristics that must be assessed to establish a clock's accuracy. We describe the errors and how to evaluate them. We prove that sharp structures in the cavity do not produce large frequency errors, even at moderately high powers, provided the atomic density varies slowly. We model the amplitude and phase imbalances of the feeds. For larger couplings, these can lead to increased phase errors. We show that phase imbalances produce a novel distributed cavity phase error that depends on the cavity detuning. We also design improved cavities by optimizing the geometry and tuning the mode spectrum so that there are negligible phase variations, allowing this source of systematic error to be dramatically reduced.
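
    The azimuthal Fourier treatment referred to above has a compact form; as a sketch (notation mine, not the paper's), each transverse field component is expanded as

      g(\rho,\phi,z) \;=\; \sum_{m=0}^{\infty} g_m(\rho,z)\,\cos\!\big(m(\phi-\phi_m)\big),

    with the lowest few m-components dominating; each component then produces a distinct class of distributed cavity phase error that must be evaluated separately, as the abstract notes.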

  19. In Search of a Taxonomy for Classifying Qualitative Spreadsheet Errors

    E-Print Network [OSTI]

    Przasnyski, Zbigniew; Seal, Kala Chand

    2011-01-01T23:59:59.000Z

    Most organizations use large and complex spreadsheets that are embedded in their mission-critical processes and are used for decision-making purposes. Identification of the various types of errors that can be present in these spreadsheets is, therefore, an important control that organizations can use to govern their spreadsheets. In this paper, we propose a taxonomy for categorizing qualitative errors in spreadsheet models that offers a framework for evaluating the readiness of a spreadsheet model before it is released for use by others in the organization. The classification was developed based on types of qualitative errors identified in the literature and errors committed by end-users in developing a spreadsheet model for Panko's (1996) "Wall problem". Closer inspection of the errors reveals four logical groupings of the errors creating four categories of qualitative errors. The usability and limitations of the proposed taxonomy and areas for future extension are discussed.

  1. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    SciTech Connect (OSTI)

    Beckerman, M.; Jones, J.P.

    1999-02-01T23:59:59.000Z

    We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval decline the along-track prediction errors, and amplitudes of the radial and cross-track errors, increase.

  2. E791 DATA ACQUISITION SYSTEM: Error reports received; no new errors reported

    E-Print Network [OSTI]

    Fermi National Accelerator Laboratory

    of events written to tape. [Figure: block diagram of the entire E791 DA system, showing error and status displays, a mailbox for histogram requests, the VAX-online event display, VAX 11/780 event reconstruction, event display, and detector monitoring, 3 VAX workstations, and 42 EXABYTE drives.] The VAX 11/780 was the user interface to the VME part of the system, via the DA

  3. METHODOLOGY. TIME DISTRIBUTION OF MOSSBAUER SCATTERED RADIATION

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    H. Drost, K. Palow and G. Weyer. We measured the time distribution of the radiation re-emitted by a Mössbauer absorber. Interference effects due to... the response of the absorber. The measurements were performed with the 14.4 keV Mössbauer radiation of

  4. ORNL/CON-496 METHODOLOGY FOR RECALCULATING AND VERIFYING SAVINGS ACHIEVED BY THE SUPER ESPC PROGRAM

    E-Print Network [OSTI]

    Oak Ridge National Laboratory

    Martin Schweitzer, John A. Shonder, Patrick... Contents: 2. Description of Super ESPC Program.

  5. A Methodology for Database System Performance Evaluation

    E-Print Network [OSTI]

    Liblit, Ben

    Haran Boral, Computer Science Department, Technion - Israel Institute of Technology; David J. DeWitt, Computer Sciences Department, University... 82-01870 and the Department of Energy under contract #DE-AC02-81ER10920. Abstract: This paper

  6. Water Harvesting Measurements with Biomimetic Surfaces

    E-Print Network [OSTI]

    Barthelat, Francois

    Zi Jun Wang and Prof. Anne... Objectives: parameters that affect the water harvesting efficiencies of different surfaces; optimize the experimental... Water is one of the most essential natural resources. The easy accessibility of water

  7. Case Study/Ground Water Sustainability: Methodology and Application to the North China Plain

    E-Print Network [OSTI]

    Zheng, Chunmiao

    Sustainability, or the lack thereof, of ground water flow systems driven by similar hydrogeologic and economic conditions... of a ground water flow system in the North China Plain (NCP) subject to severe overexploitation and rapid

  8. Methodology in Biological Game Theory

    E-Print Network [OSTI]

    Zollman, Kevin

    Huttegger and Zollman. The ESS method: describe a game; find all the stable states (ESS); if there is only one, conclude this one is evolutionarily significant. An Evolutionarily Stable Strategy (ESS): the pooling equilibrium is not an ESS; the hybrid equilibrium is not an ESS.

  9. Optimization Material Distribution methodology: Some electromagnetic examples

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    P. Boissoles, H. Ben Ahmed, M. Pierre, B. Multon. Abstract: In this paper, a new approach towards Optimization Material Distribution... to be highly adaptive to various kinds of electromagnetic actuator optimization approaches. Several optimal

  10. Methodology and Process for Condition Assessment at Existing Hydropower Plants

    SciTech Connect (OSTI)

    Zhang, Qin Fen [ORNL]; Smith, Brennan T [ORNL]; Cones, Marvin [Mesa Associates, Inc.]; March, Patrick [Hydro Performance Processes, Inc.]; Dham, Rajesh [U.S. Department of Energy]; Spray, Michael [New West Technologies, LLC.]

    2012-01-01T23:59:59.000Z

    The Hydropower Advancement Project was initiated by the U.S. Department of Energy Office of Energy Efficiency and Renewable Energy to develop and implement a systematic process with a standard methodology to identify the opportunities for performance improvement at existing hydropower facilities and to predict and trend the overall condition and improvement opportunity within the U.S. hydropower fleet. The concept of performance for the HAP focuses on water use efficiency: how well a plant or individual unit converts potential energy to electrical energy over a long-term averaging period of a year or more. The performance improvement involves not only optimization of plant dispatch and scheduling but also enhancement of efficiency and availability through advanced technology and asset upgrades, and thus requires inspection and condition assessment for equipment, control systems, and other generating assets. This paper discusses the standard methodology and process for condition assessment of approximately 50 nationwide facilities, including sampling techniques to ensure valid expansion of the 50 assessment results to the entire hydropower fleet. The application and refining process and the results from three demonstration assessments are also presented in this paper.

  11. Graphical Quantum Error-Correcting Codes

    E-Print Network [OSTI]

    Sixia Yu; Qing Chen; C. H. Oh

    2007-09-12T23:59:59.000Z

    We introduce a purely graph-theoretical object, namely the coding clique, to construct quantum error-correcting codes. Almost all quantum codes constructed so far are stabilizer (additive) codes, and the construction of nonadditive codes, which are potentially more efficient, is not as well understood as that of stabilizer codes. Our graphical approach provides a unified and classical way to construct both stabilizer and nonadditive codes. In particular, we have explicitly constructed the optimal ((10,24,3)) code and a family of 1-error-detecting nonadditive codes with the highest encoding rate so far. In the case of stabilizer codes, a thorough search becomes tangible, and we have classified all the extremal stabilizer codes up to 8 qubits.

  12. Output error identification of hydrogenerator conduit dynamics

    SciTech Connect (OSTI)

    Vogt, M.A.; Wozniak, L. (Illinois Univ., Urbana, IL (USA)); Whittemore, T.R. (Bureau of Reclamation, Denver, CO (USA))

    1989-09-01T23:59:59.000Z

    Two output-error model reference adaptive identifiers are considered for estimating the parameters in a reduced-order gate-position-to-pressure model for the hydrogenerator. This information may later be useful in an adaptive controller. Gradient and sensitivity-functions identifiers are discussed for the hydroelectric application, and connections are made between their structural differences and relative performance. Simulations are presented to support the conclusion that the latter algorithm is more robust, having better disturbance rejection and less plant-model mismatch sensitivity. For identification from recorded plant data from step gate inputs, the gradient algorithm even fails to converge. A method for checking the estimated parameters is developed by relating the coefficients in the reduced-order model to head, an externally measurable parameter.

  13. Quantum Error Correction with magnetic molecules

    E-Print Network [OSTI]

    José J. Baldoví; Salvador Cardona-Serra; Juan M. Clemente-Juan; Luis Escalera-Moreno; Alejandro Gaita-Ariño; Guillermo Mínguez Espallargas

    2014-08-22T23:59:59.000Z

    Quantum algorithms often assume independent spin qubits to produce trivial $|\\uparrow\\rangle=|0\\rangle$, $|\\downarrow\\rangle=|1\\rangle$ mappings. This can be unrealistic in many solid-state implementations with sizeable magnetic interactions. Here we show that the lower part of the spectrum of a molecule containing three exchange-coupled metal ions with $S=1/2$ and $I=1/2$ is equivalent to nine electron-nuclear qubits. We derive the relation between spin states and qubit states in reasonable parameter ranges for the rare earth $^{159}$Tb$^{3+}$ and for the transition metal Cu$^{2+}$, and study the possibility to implement Shor's Quantum Error Correction code on such a molecule. We also discuss recently developed molecular systems that could be adequate from an experimental point of view.

  14. Establishing a standard calibration methodology for MOSFET detectors in computed tomography dosimetry

    SciTech Connect (OSTI)

    Brady, S. L.; Kaufman, R. A. [Department of Radiological Sciences, St. Jude Children's Research Hospital, Memphis, Tennessee 38105 (United States)

    2012-06-15T23:59:59.000Z

    Purpose: The use of metal-oxide-semiconductor field-effect transistor (MOSFET) detectors for patient dosimetry has increased by ~25% since 2005. Despite this increase, no standard calibration methodology has been identified nor calibration uncertainty quantified for the use of MOSFET dosimetry in CT. This work compares three MOSFET calibration methodologies proposed in the literature, and additionally investigates questions relating to optimal time for signal equilibration and exposure levels for maximum calibration precision. Methods: The calibration methodologies tested were (1) free in-air (FIA) with radiographic x-ray tube, (2) FIA with stationary CT x-ray tube, and (3) within scatter phantom with rotational CT x-ray tube. Each calibration was performed at absorbed dose levels of 10, 23, and 35 mGy. Times of 0 min or 5 min were investigated for signal equilibration before or after signal read out. Results: Calibration precision was measured to be better than 5%-7%, 3%-5%, and 2%-4% for the 10, 23, and 35 mGy respective dose levels, and independent of calibration methodology. No correlation was demonstrated for precision and signal equilibration time when allowing 5 min before or after signal read out. Differences in average calibration coefficients were demonstrated between the FIA with CT calibration methodology 26.7 ± 1.1 mV cGy^-1 versus the CT scatter phantom 29.2 ± 1.0 mV cGy^-1 and FIA with x-ray 29.9 ± 1.1 mV cGy^-1 methodologies. A decrease in MOSFET sensitivity was seen at an average change in read out voltage of ~3000 mV. Conclusions: The best measured calibration precision was obtained by exposing the MOSFET detectors to 23 mGy. No signal equilibration time is necessary to improve calibration precision. A significant difference between calibration outcomes was demonstrated for FIA with CT compared to the other two methodologies. If the FIA with a CT calibration methodology was used to create calibration coefficients for the eventual use for phantom dosimetry, a measurement error ~12% will be reflected in the dosimetry results. The calibration process must emulate the eventual CT dosimetry process by matching or excluding scatter when calibrating the MOSFETs. Finally, the authors recommend that the MOSFETs are energy calibrated approximately every 2500-3000 mV.
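
    The calibration coefficient and its precision are simple ratios of readout shift to delivered dose; a minimal Python sketch with hypothetical readings:

      import numpy as np

      readings_mv = np.array([60.2, 62.1, 59.4, 61.0])   # hypothetical MOSFET voltage shifts at 23 mGy
      dose_cgy = 2.3                                     # 23 mGy delivered calibration dose
      coeffs = readings_mv / dose_cgy                    # mV per cGy for each exposure
      cv = 100 * coeffs.std(ddof=1) / coeffs.mean()      # precision as coefficient of variation
      print(f"calibration: {coeffs.mean():.1f} mV/cGy, precision (CV): {cv:.1f}%")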

  15. Theoretical analysis of reflected ray error from surface slope error and their application to the solar concentrated collector

    E-Print Network [OSTI]

    Huang, Weidong

    2011-01-01T23:59:59.000Z

    Surface slope error of the concentrator is one of the main factors influencing the performance of solar concentrating collectors: it deviates the reflected rays and reduces the intercepted radiation. This paper presents, via geometric optics, the general equation for the standard deviation of the reflected-ray error produced by slope error, applies the equation to five kinds of solar concentrating reflectors, and provides typical results. The results indicate that the slope error is transferred to the reflected ray amplified by a factor of more than 2 when the incidence angle is greater than 0. The equation for the reflected-ray error holds generally for all reflecting surfaces, and can also be applied to control the error when designing an off-axis optical system.
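
    A one-dimensional sketch of the factor-of-2 transfer quoted above (the notation is assumed here, not taken from the paper):

    ```latex
    % A slope error \delta tilts the local surface normal by \delta; by the
    % law of reflection the reflected ray then rotates by twice that amount:
    \[
      \Delta\theta_{\mathrm{ray}} = 2\,\delta
      \qquad\Longrightarrow\qquad
      \sigma_{\mathrm{ray}} = 2\,\sigma_{\mathrm{slope}}
      \quad\text{(normal incidence)} .
    \]
    % The paper's result that the transfer exceeds 2 at nonzero incidence
    % angles involves the out-of-plane error components, which this
    % one-dimensional sketch does not capture.
    ```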

  16. IDENTIFICATION Your Sample Box

    E-Print Network [OSTI]

    Liskiewicz, Maciej

    Mail to Virginia Tech Soil Testing Lab, 145 Smyth Hall (MC 0465), 185 Ag Quad Ln, Blacksburg VA 24061, in a sturdy container. Cost per sample: routine test (... K, Ca, Mg, Zn, Mn, Cu, Fe, B, and soluble salts): no charge / $16.00; organic matter: $4.00 / $6.00. Fax with soil sample and form; make check or money order payable to "Treasurer, Virginia Tech."

  17. Sampling system and method

    DOE Patents [OSTI]

    Decker, David L.; Lyles, Brad F.; Purcell, Richard G.; Hershey, Ronald Lee

    2013-04-16T23:59:59.000Z

    The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.

  18. Rehabilitation Services Sample Occupations

    E-Print Network [OSTI]

    Ronquist, Fredrik

    Sample Work Settings: Child & Day Care Centers; Clinics; Correction Agencies; Drug Treatment Centers; ... Industries. Sample Occupations: Addiction Counselor; Advocacy Occupations; Art Therapist; Behavioral ... References: Careers in Counseling and Human Services (IIB 21-1010 C7); Careers in Health Care (IIB 29-1000 E4).

  19. Error-eliminating rapid ultrasonic firing

    DOE Patents [OSTI]

    Borenstein, J.; Koren, Y.

    1993-08-24T23:59:59.000Z

    A system for producing reliable navigation data for a mobile vehicle, such as a robot, combines multiple range samples to increase the "confidence" of the algorithm in the existence of an obstacle. At higher vehicle speed, it is crucial to sample each sensor quickly and repeatedly to gather multiple samples in time to avoid a collision. Erroneous data is rejected by delaying the issuance of an ultrasonic energy pulse by a predetermined wait-period, which may be different during alternate ultrasonic firing cycles. Consecutive readings are compared, and the corresponding data is rejected if the readings differ by more than a predetermined amount. The rejection rate for the data is monitored and the operating speed of the navigation system is reduced if the data rejection rate is increased. This is useful to distinguish and eliminate noise from the data which truly represents the existence of an article in the field of operation of the vehicle.
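
    A minimal sketch of the consecutive-reading filter this patent describes; the threshold, rejection-rate limit, and data below are illustrative assumptions, not values from the patent:

    ```python
    # Compare consecutive range readings, discard inconsistent pairs, and
    # monitor the rejection rate as a cue to reduce operating speed.
    MAX_DIFF_CM = 10.0        # reject a pair if readings differ by more than this
    REJECT_RATE_LIMIT = 0.25  # slow the vehicle if rejection rate exceeds this

    def filter_ranges(readings):
        """Return accepted range samples and the fraction of rejected pairs."""
        accepted, rejected = [], 0
        for prev, curr in zip(readings, readings[1:]):
            if abs(curr - prev) <= MAX_DIFF_CM:
                accepted.append(curr)   # consistent pair -> trust the reading
            else:
                rejected += 1           # noise / crosstalk -> discard
        rate = rejected / max(1, len(readings) - 1)
        return accepted, rate

    readings = [152.0, 151.4, 87.2, 150.9, 150.1, 42.5, 149.8]
    good, rate = filter_ranges(readings)
    slow_down = rate > REJECT_RATE_LIMIT   # analogous to reducing operating speed
    print(good, f"rejection rate = {rate:.0%}", f"reduce speed: {slow_down}")
    ```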

  20. Error-eliminating rapid ultrasonic firing

    DOE Patents [OSTI]

    Borenstein, Johann (Ann Arbor, MI); Koren, Yoram (Ann Arbor, MI)

    1993-08-24T23:59:59.000Z

    A system for producing reliable navigation data for a mobile vehicle, such as a robot, combines multiple range samples to increase the "confidence" of the algorithm in the existence of an obstacle. At higher vehicle speed, it is crucial to sample each sensor quickly and repeatedly to gather multiple samples in time to avoid a collision. Erroneous data is rejected by delaying the issuance of an ultrasonic energy pulse by a predetermined wait-period, which may be different during alternate ultrasonic firing cycles. Consecutive readings are compared, and the corresponding data is rejected if the readings differ by more than a predetermined amount. The rejection rate for the data is monitored and the operating speed of the navigation system is reduced if the data rejection rate is increased. This is useful to distinguish and eliminate noise from the data which truly represents the existence of an article in the field of operation of the vehicle.

  1. Gross error detection in process data

    E-Print Network [OSTI]

    Singh, Gurmeet

    1992-01-01T23:59:59.000Z

    sufficient condition that the mixture is unimodal for all $p$ is that $27\sigma_1^2\sigma_2^2 \geq 4(\mu_1-\mu_2)^2(\sigma_1^2+\sigma_2^2)$ (III.3). Fig. 1 and Fig. 2 illustrate some examples of a two-component normal mixture distribution and clearly indicate the variety of shapes possible... standard deviation, $s$. If the sample is distributed $N(\mu, \sigma)$, then $t = (\bar{x} - \mu)/(s/\sqrt{n})$ has the well-known t-distribution with $n-1$ degrees of freedom, where $n$ is the number of observations in the sample. On the basis of this fact, one can set up a test of the hypothesis $\mu$...
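
    A runnable sketch of the t-test this excerpt describes, applied to flagging a gross (systematic) error in process data; the measurements, reconciled value, and significance level are assumptions, and scipy is assumed available for the t quantile:

    ```python
    from math import sqrt
    from statistics import mean, stdev
    from scipy.stats import t as t_dist   # assumed dependency for the quantile

    measurements = [10.4, 10.6, 10.5, 10.7, 10.4, 10.5]  # illustrative process data
    mu0 = 10.0                                           # reconciled/expected value

    n = len(measurements)
    t_stat = (mean(measurements) - mu0) / (stdev(measurements) / sqrt(n))
    t_crit = t_dist.ppf(0.975, df=n - 1)                 # two-sided, alpha = 0.05

    print(f"t = {t_stat:.2f}, critical = +/-{t_crit:.2f}")
    print("gross error suspected" if abs(t_stat) > t_crit else "no gross error detected")
    ```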

  2. Waste classification sampling plan

    SciTech Connect (OSTI)

    Landsman, S.D.

    1998-05-27T23:59:59.000Z

    The purpose of this sampling plan is to explain the method used to collect and analyze data necessary to verify and/or determine the radionuclide content of the B-Cell decontamination and decommissioning waste stream so that the correct waste classification for the waste stream can be made, and to collect samples for studies of decontamination methods that could be used to remove fixed contamination present on the waste. The scope of this plan is to establish the technical basis for collecting samples and compiling quantitative data on the radioactive constituents present in waste generated during deactivation activities in B-Cell. Sampling and radioisotopic analysis will be performed on the fixed layers of contamination present on structural material and internal surfaces of process piping and tanks. In addition, dose rate measurements on existing waste material will be performed to determine the fraction of dose rate attributable to both removable and fixed contamination. Samples will also be collected to support studies of decontamination methods that are effective in removing the fixed contamination present on the waste. Sampling performed under this plan will meet criteria established in BNF-2596, Data Quality Objectives for the B-Cell Waste Stream Classification Sampling, J. M. Barnett, May 1998.

  3. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    SciTech Connect (OSTI)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16T23:59:59.000Z

    This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination” (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling, sample extraction, and analytical methods to be used in the INL-2 study. For each of the five test events, the specified floor of the INL building will be contaminated with BG using a point-release device located in the room specified in the experimental design. Then quality control (QC), reference material coupon (RMC), judgmental, and probabilistic samples will be collected according to the sampling plan for each test event. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples were selected with a random aspect and in sufficient numbers to provide desired confidence for detecting contamination or clearing uncontaminated (or decontaminated) areas. Following sample collection for a given test event, the INL building will be decontaminated. For possibly contaminated areas, the numbers of probabilistic samples were chosen to provide 95% confidence of detecting contaminated areas of specified sizes. For rooms that may be uncontaminated following a contamination event, or for whole floors after decontamination, the numbers of judgmental and probabilistic samples were chosen using the CJR approach. The numbers of samples were chosen to support making X%/Y% clearance statements with X = 95% or 99% and Y = 96% or 97%. The experimental and sampling design also provides for making X%/Y% clearance statements using only probabilistic samples. 
For each test event, the numbers of characterization and clearance samples were selected within limits based on operational considerations while still maintaining high confidence for detection and clearance aspects. The sampling design for all five test events contains 2085 samples, with 1142 after contamination and 943 after decontamination. These numbers include QC, RMC, judgmental, and probabilistic samples. The experimental and sampling design specified in this report provides a good statistical foundation for achieving the objectives of the INL-2 study.
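
    For the probabilistic-only case, the number of clean samples needed to support an X%/Y% clearance statement can be sized with the standard compliance-sampling bound; the sketch below illustrates that arithmetic (it is not the CJR Bayesian computation used for the combined judgmental-plus-probabilistic approach):

    ```python
    # If more than (100-Y)% of the area were contaminated, the chance that all
    # n randomly placed samples come back clean would be below 1-X.
    # Solving Y**n <= 1-X for n gives the sample size.
    import math

    def clearance_samples(x_conf: float, y_frac: float) -> int:
        """Samples needed for 'x_conf confidence that >= y_frac is clean'."""
        return math.ceil(math.log(1.0 - x_conf) / math.log(y_frac))

    for x, y in [(0.95, 0.96), (0.95, 0.97), (0.99, 0.96), (0.99, 0.97)]:
        print(f"X={x:.0%}, Y={y:.0%} -> n = {clearance_samples(x, y)}")
    # prints n = 74, 99, 113, 152 for the four X%/Y% combinations above
    ```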

  4. Quantum root-mean-square error and measurement uncertainty relations

    E-Print Network [OSTI]

    Paul Busch; Pekka Lahti; Reinhard F Werner

    2014-10-10T23:59:59.000Z

    Recent years have witnessed a controversy over Heisenberg's famous error-disturbance relation. Here we resolve the conflict by way of an analysis of the possible conceptualizations of measurement error and disturbance in quantum mechanics. We discuss two approaches to adapting the classic notion of root-mean-square error to quantum measurements. One is based on the concept of noise operator; its natural operational content is that of a mean deviation of the values of two observables measured jointly, and thus its applicability is limited to cases where such joint measurements are available. The second error measure quantifies the differences between two probability distributions obtained in separate runs of measurements and is of unrestricted applicability. We show that there are no nontrivial unconditional joint-measurement bounds for {\\em state-dependent} errors in the conceptual framework discussed here, while Heisenberg-type measurement uncertainty relations for {\\em state-independent} errors have been proven.

  5. Deterministic treatment of model error in geophysical data assimilation

    E-Print Network [OSTI]

    Carrassi, Alberto

    2015-01-01T23:59:59.000Z

    This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational c...

  6. A two reservoir model of quantum error correction

    E-Print Network [OSTI]

    James P. Clemens; Julio Gea-Banacloche

    2005-08-22T23:59:59.000Z

    We consider a two reservoir model of quantum error correction with a hot bath causing errors in the qubits and a cold bath cooling the ancilla qubits to a fiducial state. We consider error correction protocols both with and without measurement of the ancilla state. The error correction acts as a kind of refrigeration process to maintain the data qubits in a low entropy state by periodically moving the entropy to the ancilla qubits and then to the cold reservoir. We quantify the performance of the error correction as a function of the reservoir temperatures and cooling rate by means of the fidelity and the residual entropy of the data qubits. We also make a comparison with the continuous quantum error correction model of Sarovar and Milburn [Phys. Rev. A 72 012306].

  7. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    SciTech Connect (OSTI)

    Pevey, Ronald E.

    2005-09-15T23:59:59.000Z

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
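
    An illustrative numpy sketch of the first claim here, that the standard error of a Monte Carlo tally falls as 1/sqrt(N) with history count; the per-history scores are synthetic stand-ins, not a transport calculation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    for n_hist in [10_000, 40_000, 160_000]:
        # stand-in tally: mean of a synthetic per-history score
        scores = rng.normal(loc=0.95, scale=0.08, size=n_hist)
        k_est = scores.mean()
        sigma = scores.std(ddof=1) / np.sqrt(n_hist)  # standard error of the mean
        print(f"N={n_hist:>7}: k ~ {k_est:.5f} +/- {sigma:.5f}")
    # each 4x increase in histories roughly halves the standard deviation
    ```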

  8. Investigating surety methodologies for cognitive systems.

    SciTech Connect (OSTI)

    Caudell, Thomas P. (University of New Mexico, Albuquerque, NM); Peercy, David Eugene; Mills, Kristy (University of New Mexico, Albuquerque, NM); Caldera, Eva (University of New Mexico, Albuquerque, NM)

    2006-11-01T23:59:59.000Z

    Advances in cognitive science provide a foundation for new tools that promise to advance human capabilities with significant positive impacts. As with any new technology breakthrough, associated technical and non-technical risks are involved. Sandia has mitigated both technical and non-technical risks by applying advanced surety methodologies in such areas as nuclear weapons, nuclear reactor safety, nuclear materials transport, and energy systems. In order to apply surety to the development of cognitive systems, we must understand the concepts and principles that characterize the certainty of a system's operation as well as the risk areas of cognitive sciences. This SAND report documents a preliminary spectrum of risks involved with cognitive sciences, and identifies some surety methodologies that can be applied to potentially mitigate such risks. Some potential areas for further study are recommended. In particular, a recommendation is made to develop a cognitive systems epistemology framework for more detailed study of these risk areas and applications of surety methods and techniques.

  9. Analysis Methodology for Industrial Load Profiles

    E-Print Network [OSTI]

    Reddoch, T. W.

    A variety of electrotechnologies exist with a range of potential for electricity conversion or improved efficiency in electricity usage. EPRI's Electrotechnology Reference Guide [1] is an excellent place to begin an evaluation. Perhaps the single largest potential to improve electric energy utilization... (Exhibit I: Methodology Flow Diagram)

  10. A planning methodology for arterial streets

    E-Print Network [OSTI]

    Williams, Marc Daryl

    1991-01-01T23:59:59.000Z

    ...-of-Service Guidelines; Sensitivity of Characteristic Input Variables; Suggested Default Values for use with the Florida Planning Methodology; Summary of Characteristic Variables and Operational Conditions; Comparison of Measured and Predicted Results...; Incremental v/c Ratios; Transportation and Development Land Use Cycle; General Analytical Format of the Florida Planning Procedure; Tabular LOS Output of the ART TAB Arterial Planning Program; Frequency of HCM Classifications Among Arterial...

  11. Trial application of a technique for human error analysis (ATHEANA)

    SciTech Connect (OSTI)

    Bley, D.C. [Buttonwood Consulting, Inc., Oakton, VA (United States); Cooper, S.E. [Science Applications International Corp., Reston, VA (United States); Parry, G.W. [NUS, Gaithersburg, MD (United States)] [and others

    1996-10-01T23:59:59.000Z

    The new method for HRA, ATHEANA, has been developed based on a study of the operating history of serious accidents and an understanding of the reasons why people make errors. Previous publications associated with the project have dealt with the theoretical framework under which errors occur and the retrospective analysis of operational events. This is the first attempt to use ATHEANA in a prospective way, to select and evaluate human errors within the PSA context.

  12. Temperature-dependent errors in nuclear lattice simulations

    E-Print Network [OSTI]

    Dean Lee; Richard Thomson

    2007-01-17T23:59:59.000Z

    We study the temperature dependence of discretization errors in nuclear lattice simulations. We find that for systems with strong attractive interactions the predominant error arises from the breaking of Galilean invariance. We propose a local "well-tempered" lattice action which eliminates much of this error. The well-tempered action can be readily implemented in lattice simulations for nuclear systems as well as cold atomic Fermi systems.

  13. Sample Changes and Issues

    Annual Energy Outlook 2013 [U.S. Energy Information Administration (EIA)]

    EIA-914 Survey and HPDI. Figure 2 shows how this could change apparent production. The blue line shows the reported sample production as it would normally be reported under the...

  14. Water Sample Concentrator

    ScienceCinema (OSTI)

    Idaho National Laboratory

    2010-01-08T23:59:59.000Z

    Automated portable device that concentrates and packages a sample of suspected contaminated water for safe, efficient transport to a qualified analytical laboratory. This technology will help safeguard against pathogen contamination or chemical and biolog

  15. Dissolution actuated sample container

    DOE Patents [OSTI]

    Nance, Thomas A.; McCoy, Frank T.

    2013-03-26T23:59:59.000Z

    A sample collection vial and process of using a vial is provided. The sample collection vial has an opening secured by a dissolvable plug. When dissolved, liquids may enter into the interior of the collection vial passing along one or more edges of a dissolvable blocking member. As the blocking member is dissolved, a spring actuated closure is directed towards the opening of the vial which, when engaged, secures the vial contents against loss or contamination.

  16. SAMPLING AND ANALYSIS PROTOCOLS

    SciTech Connect (OSTI)

    Jannik, T; P Fledderman, P

    2007-02-09T23:59:59.000Z

    Radiological sampling and analyses are performed to collect data for a variety of specific reasons covering a wide range of projects. These activities include: Effluent monitoring; Environmental surveillance; Emergency response; Routine ambient monitoring; Background assessments; Nuclear license termination; Remediation; Deactivation and decommissioning (D&D); and Waste management. In this chapter, effluent monitoring and environmental surveillance programs at nuclear operating facilities and radiological sampling and analysis plans for remediation and D&D activities will be discussed.

  17. TANK 5 SAMPLING

    SciTech Connect (OSTI)

    Vrettos, N; William Cheng, W; Thomas Nance, T

    2007-11-26T23:59:59.000Z

    Tank 5 at the Savannah River Site has been used to store high level waste and is currently undergoing waste removal processes in preparation for tank closure. Samples were taken from two locations to determine the contents in support of Documented Safety Analysis (DSA) development for chemical cleaning. These samples were obtained through the use of the Drop Core Sampler and the Snowbank Sampler developed by the Engineered Equipment & Systems (EES) group of the Savannah River National Laboratory (SRNL).

  18. Error estimates for the Euler discretization of an optimal control ...

    E-Print Network [OSTI]

    Joseph Frédéric Bonnans

    2014-12-10T23:59:59.000Z

    Dec 10, 2014 ... Abstract: We study the error introduced in the solution of an optimal control problem with first order state constraints, for which the trajectories ...

  19. Cosmic Ray Spectral Deformation Caused by Energy Determination Errors

    E-Print Network [OSTI]

    Per Carlson; Conny Wannemark

    2005-05-10T23:59:59.000Z

    Using simulation methods, distortion effects on energy spectra caused by errors in the energy determination have been investigated. For cosmic ray proton spectra, falling steeply with kinetic energy $E$ as $E^{-2.7}$, significant effects appear. When magnetic spectrometers are used to determine the energy, the relative error increases linearly with the energy and distortions with a sinusoidal form appear starting at an energy that depends significantly on the error distribution but at an energy lower than that corresponding to the Maximum Detectable Rigidity of the spectrometer. The effect should be taken into consideration when comparing data from different experiments, often having different error distributions.
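
    A toy numpy reconstruction of the smearing effect described: draw a steep power-law spectrum, smear each event with a Gaussian whose relative width grows linearly with energy (magnetic-spectrometer-like), and compare binned counts. The spectral index matches the abstract; every other number is an illustrative assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    gamma, e_min = 2.7, 1.0                      # spectral index, threshold energy
    u = 1.0 - rng.uniform(size=1_000_000)        # u in (0, 1]
    e_true = e_min * u ** (-1.0 / (gamma - 1.0)) # inverse-CDF power-law sampling

    rel_sigma = 0.02 * e_true                    # relative error grows linearly with E
    e_meas = e_true * (1.0 + rng.normal(size=e_true.size) * rel_sigma)

    bins = np.logspace(0, 2, 21)
    n_true, _ = np.histogram(e_true, bins)
    n_meas, _ = np.histogram(e_meas, bins)
    ratio = n_meas / np.maximum(n_true, 1)       # >1 at high E: spill-in from below
    print(np.round(ratio, 2))
    ```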

  20. Optimized Learning with Bounded Error for Feedforward Neural Networks

    E-Print Network [OSTI]

    Maggiore, Manfredi

    Optimized Learning with Bounded Error for Feedforward Neural Networks. A. Alessandri, M. Sanguineti. ... A. Alessandri is with the Naval Automatio

  1. New Fractional Error Bounds for Polynomial Systems with ...

    E-Print Network [OSTI]

    2014-07-27T23:59:59.000Z

    Our major result extends the existing error bounds from the system involving only a ... linear complementarity systems with polynomial data as well as high-order ...

  2. Identification of toroidal field errors in a modified betatron accelerator

    SciTech Connect (OSTI)

    Loschialpo, P. (Beam Physics Branch, Plasma Physics Division, Naval Research Laboratory, Washington, DC 20375 (United States)); Marsh, S.J. (SFA Inc., Landover, Maryland 20785 (United States)); Len, L.K.; Smith, T. (FM Technologies Inc., 10529-B Braddock Road, Fairfax, Virginia 22032 (United States)); Kapetanakos, C.A. (Beam Physics Branch, Plasma Physics Division, Naval Research Laboratory, Washington, DC 20375 (United States))

    1993-06-01T23:59:59.000Z

    A newly developed probe, having a 0.05% resolution, has been used to detect errors in the toroidal magnetic field of the NRL modified betatron accelerator. Measurements indicate that the radial field components (errors) are 0.1%-1% of the applied toroidal field. Such errors, in the typically 5 kG toroidal field, can excite resonances which drive the beam to the wall. Two sources of detected field errors are discussed. The first is due to the discrete nature of the 12 single turn coils which generate the toroidal field. Both measurements and computer calculations indicate that its amplitude varies from 0% to 0.2% as a function of radius. Displacement of the outer leg of one of the toroidal field coils by a few millimeters has a significant effect on the amplitude of this field error. Because of the uniform toroidal periodicity of these coils, this error is a good suspect for causing the excitation of the damaging l=12 resonance seen in our experiments. The other source of field error is due to the current feed gaps in the vertical magnetic field coils. A magnetic field is induced inside the vertical field coils' conductor in the opposite direction of the applied toroidal field. Fringe fields at the gaps lead to additional field errors which have been measured as large as 1.0%. This source of field error, which exists at five toroidal locations around the modified betatron, can excite several integer resonances, including the l=12 mode.

  3. Homological Error Correction: Classical and Quantum Codes

    E-Print Network [OSTI]

    H. Bombin; M. A. Martin-Delgado

    2006-05-10T23:59:59.000Z

    We prove several theorems characterizing the existence of homological error correction codes both classically and quantumly. Not every classical code is homological, but we find a family of classical homological codes saturating the Hamming bound. In the quantum case, we show that for non-orientable surfaces it is impossible to construct homological codes based on qudits of dimension $D>2$, while for orientable surfaces with boundaries it is possible to construct them for arbitrary dimension $D$. We give a method to obtain planar homological codes based on the construction of quantum codes on compact surfaces without boundaries. We show how the original Shor's 9-qubit code can be visualized as a homological quantum code. We study the problem of constructing quantum codes with optimal encoding rate. In the particular case of toric codes we construct an optimal family and give an explicit proof of its optimality. For homological quantum codes on surfaces of arbitrary genus we also construct a family of codes asymptotically attaining the maximum possible encoding rate. We provide the tools of homology group theory for graphs embedded on surfaces in a self-contained manner.

  4. A technique for human error analysis (ATHEANA)

    SciTech Connect (OSTI)

    Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others

    1996-05-01T23:59:59.000Z

    Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.

  5. Liquid sampling system

    DOE Patents [OSTI]

    Larson, L.L.

    1984-09-17T23:59:59.000Z

    A conduit extends from a reservoir through a sampling station and back to the reservoir in a closed loop. A jet ejector in the conduit establishes suction for withdrawing liquid from the reservoir. The conduit has a self-healing septum therein upstream of the jet ejector for receiving one end of a double-ended cannula, the other end of which is received in a serum bottle for sample collection. Gas is introduced into the conduit at a gas bleed between the sample collection bottle and the reservoir. The jet ejector evacuates gas from the conduit and the bottle and aspirates a column of liquid from the reservoir at a high rate. When the withdrawn liquid reaches the jet ejector the rate of flow therethrough reduces substantially and the gas bleed increases the pressure in the conduit for driving liquid into the sample bottle, the gas bleed forming a column of gas behind the withdrawn liquid column and interrupting the withdrawal of liquid from the reservoir. In the case of hazardous and toxic liquids, the sample bottle and the jet ejector may be isolated from the reservoir and may be further isolated from a control station containing remote manipulation means for the sample bottle and control valves for the jet ejector and gas bleed. 5 figs.

  6. Liquid sampling system

    DOE Patents [OSTI]

    Larson, Loren L. (Idaho Falls, ID)

    1987-01-01T23:59:59.000Z

    A conduit extends from a reservoir through a sampling station and back to the reservoir in a closed loop. A jet ejector in the conduit establishes suction for withdrawing liquid from the reservoir. The conduit has a self-healing septum therein upstream of the jet ejector for receiving one end of a double-ended cannula, the other end of which is received in a serum bottle for sample collection. Gas is introduced into the conduit at a gas bleed between the sample collection bottle and the reservoir. The jet ejector evacuates gas from the conduit and the bottle and aspirates a column of liquid from the reservoir at a high rate. When the withdrawn liquid reaches the jet ejector the rate of flow therethrough reduces substantially and the gas bleed increases the pressure in the conduit for driving liquid into the sample bottle, the gas bleed forming a column of gas behind the withdrawn liquid column and interrupting the withdrawal of liquid from the reservoir. In the case of hazardous and toxic liquids, the sample bottle and the jet ejector may be isolated from the reservoir and may be further isolated from a control station containing remote manipulation means for the sample bottle and control valves for the jet ejector and gas bleed.

  7. ISX Sample Preparation Guide: Quick Start

    E-Print Network [OSTI]

    straining the sample through a 70 micron nylon mesh strainer. If sample aggregation is a problem, we suggest

  8. Novel Optimization Methodology for Welding Process/Consumable Integration

    SciTech Connect (OSTI)

    Quintana, Marie A; DebRoy, Tarasankar; Vitek, John; Babu, Suresh

    2006-01-15T23:59:59.000Z

    Advanced materials are being developed to improve the energy efficiency of many industries of the future, including steel, mining, and chemical, as well as US infrastructure, including bridges, pipelines and buildings. Effective deployment of these materials is highly dependent upon the development of arc welding technology. Traditional welding technology development is slow and often involves expensive and time-consuming trial-and-error experimentation. The reason for this is the lack of useful predictive tools that enable welding technology development to keep pace with the deployment of new materials in various industrial sectors. Literature reviews showed two kinds of modeling activities. Academic and national laboratory efforts focus on developing integrated weld process models by employing detailed scientific methodologies. However, these models are cumbersome and not easy to use. Therefore, these scientific models have limited application in real-world industrial conditions. On the other hand, industrial users have relied on simple predictive models based on analytical and empirical equations to drive their product development. The scope of these simple models is limited. In this research, attempts were made to bridge this gap and provide the industry with a computational tool that combines the advantages of both approaches. This research resulted in the development of predictive tools which facilitate the development of optimized welding processes and consumables. The work demonstrated that it is possible to develop hybrid integrated models for relating the weld metal composition and process parameters to the performance of welds. In addition, these tools can be deployed to industrial users through a user-friendly graphical interface. In principle, welding industry users can use these modular tools to guide their welding process parameter and consumable composition selection. It is hypothesized that by expanding these tools throughout the welding industry, substantial energy savings can be made. Savings are expected to be even greater in the case of new steels, which will require extensive mapping over large experimental ranges of parameters such as voltage, current, speed, heat input and pre-heat.

  9. A methodology for forecasting carbon dioxide flooding performance

    E-Print Network [OSTI]

    Marroquin Cabrera, Juan Carlos

    1998-01-01T23:59:59.000Z

    A methodology was developed for forecasting carbon dioxide (CO2) flooding performance quickly and reliably. The feasibility of carbon dioxide flooding in the Dollarhide Clearfork "AB" Unit was evaluated using the methodology. This technique is very...

  10. Integrated Scenario-based Design Methodology for Collaborative Technology Innovation

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Integrated Scenario-based Design Methodology for Collaborative Technology Innovation. Fabrice Forest. ... information technology innovation with end-to-end Human and Social Sciences assistance. This methodology ... Technological innovation often requires large-scale collaborative partnership between many heterogeneous

  11. advanced diagnostic methodology: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Ecology Websites. Summary: Methodology Guidelines on Life Cycle Assessment of Photovoltaic Electricity, Report IEA-PVPS T12-03:2011; IEA-PVPS Task 12 Methodology...

  12. Fluid sampling system

    DOE Patents [OSTI]

    Houck, E.D.

    1994-10-11T23:59:59.000Z

    A fluid sampling system allows sampling of radioactive liquid without spillage. A feed tank is connected to a liquid transfer jet powered by a pumping chamber pressurized by compressed air. The liquid is pumped upwardly into a sampling jet of a venturi design having a lumen with an inlet, an outlet, a constricted middle portion, and a port located above the constricted middle portion. The liquid is passed under pressure through the constricted portion causing its velocity to increase and its pressure to be decreased, thereby preventing liquid from escaping. A septum sealing the port can be pierced by a two-pointed hollow needle leading into a sample bottle also sealed by a pierceable septum affixed to one end. The bottle is evacuated by flow through the sample jet; cyclic variation in the sampler jet pressure periodically leaves the evacuated bottle with lower pressure than that of the port, thus causing solution to pass into the bottle. The remaining solution in the system is returned to the feed tank via a holding tank. 4 figs.

  13. Fluid sampling system

    DOE Patents [OSTI]

    Houck, Edward D. (Idaho Falls, ID)

    1994-01-01T23:59:59.000Z

    A fluid sampling system allows sampling of radioactive liquid without spillage. A feed tank is connected to a liquid transfer jet powered by a pumping chamber pressurized by compressed air. The liquid is pumped upwardly into a sampling jet of a venturi design having a lumen with an inlet, an outlet, a constricted middle portion, and a port located above the constricted middle portion. The liquid is passed under pressure through the constricted portion causing its velocity to increase and its pressure to decrease, thereby preventing liquid from escaping. A septum sealing the port can be pierced by a two-pointed hollow needle leading into a sample bottle also sealed by a pierceable septum affixed to one end. The bottle is evacuated by flow through the sample jet; cyclic variation in the sampler jet pressure periodically leaves the evacuated bottle with lower pressure than that of the port, thus causing solution to pass into the bottle. The remaining solution in the system is returned to the feed tank via a holding tank.

  14. Laboratory and field-scale test methodology for reliable characterization of solidified/stabilized hazardous wastes

    SciTech Connect (OSTI)

    Gray, K.E.; Holder, J. [Univ. of Texas, Austin, TX (United States). Center for Earth Sciences and Engineering; Mollah, M.Y.A.; Hess, T.R.; Vempati, R.K.; Cocke, D.L. [Lamar Univ., Beaumont, TX (United States)

    1995-12-31T23:59:59.000Z

    A methodology for flow through leach testing is proposed and discussed and preliminary testing using strontium doped cement based S/S samples is presented. The complementary and necessary characterization of the S/S matrix before and after testing is discussed and placed in perspective to the total evaluation of the laboratory-field scale leach testing for predicting long term performance and S/S technology design and improvement.

  15. Viscous sludge sample collector

    DOE Patents [OSTI]

    Beitel, George A [Richland, WA

    1983-01-01T23:59:59.000Z

    A vertical core sample collection system for viscous sludge. A sample tube's upper end has a flange and is attached to a piston. The tube and piston are located in the upper end of a bore in a housing. The bore's lower end leads outside the housing and has an inwardly extending rim. Compressed gas, from a storage cylinder, is quickly introduced into the bore's upper end to rapidly accelerate the piston and tube down the bore. The lower end of the tube has a high sludge entering velocity to obtain a full-length sludge sample without disturbing strata detail. The tube's downward motion is stopped when its upper end flange impacts against the bore's lower end inwardly extending rim.

  16. Experimental Scattershot Boson Sampling

    E-Print Network [OSTI]

    Marco Bentivegna; Nicolò Spagnolo; Chiara Vitelli; Fulvio Flamini; Niko Viggianiello; Ludovico Latmiral; Paolo Mataloni; Daniel J. Brod; Ernesto F. Galvão; Andrea Crespi; Roberta Ramponi; Roberto Osellame; Fabio Sciarrino

    2015-05-14T23:59:59.000Z

    Boson Sampling is a computational task strongly believed to be hard for classical computers, but efficiently solvable by orchestrated bosonic interference in a specialised quantum computer. Current experimental schemes, however, are still insufficient for a convincing demonstration of the advantage of quantum over classical computation. A new variation of this task, Scattershot Boson Sampling, leads to an exponential increase in speed of the quantum device, using a larger number of photon sources based on parametric downconversion. This is achieved by having multiple heralded single photons being sent, shot by shot, into different random input ports of the interferometer. Here we report the first Scattershot Boson Sampling experiments, where six different photon-pair sources are coupled to integrated photonic circuits. We employ recently proposed statistical tools to analyse our experimental data, providing strong evidence that our photonic quantum simulator works as expected. This approach represents an important leap toward a convincing experimental demonstration of the quantum computational supremacy.

  17. ERROR VISUALIZATION FOR TANDEM ACOUSTIC MODELING ON THE AURORA TASK

    E-Print Network [OSTI]

    Ellis, Dan

    ERROR VISUALIZATION FOR TANDEM ACOUSTIC MODELING ON THE AURORA TASK. Manuel J. Reyes. ... This structure reduces the error rate on the Aurora 2 noisy English digits task by more than 50% compared ... development of tandem systems showed an improvement in the performance of these systems on the Aurora task [2]

  18. Numerical Construction of Likelihood Distributions and the Propagation of Errors

    E-Print Network [OSTI]

    J. Swain; L. Taylor

    1997-12-12T23:59:59.000Z

    The standard method for the propagation of errors, based on a Taylor series expansion, is approximate and frequently inadequate for realistic problems. A simple and generic technique is described in which the likelihood is constructed numerically, thereby greatly facilitating the propagation of errors.
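
    A short sketch contrasting the first-order Taylor propagation the authors call inadequate with a numerically constructed distribution, for the nonlinear map f(x) = 1/x; all values are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x0, sx = 1.0, 0.3                      # measured value and its error

    # first-order Taylor: sigma_f = |f'(x0)| * sx = sx / x0**2
    taylor_sigma = sx / x0**2

    # numerical propagation: push samples of x through f, look at the spread
    x = rng.normal(x0, sx, size=1_000_000)
    f = 1.0 / x[np.abs(x) > 1e-3]          # guard against division blow-ups
    spread = (np.percentile(f, 84) - np.percentile(f, 16)) / 2
    print(f"Taylor:    {taylor_sigma:.3f}")
    print(f"Numerical: std {f.std():.3f}, percentile-based spread {spread:.3f}")
    # the tail-inflated std shows where the Taylor approximation breaks down
    ```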

  19. Calibration and Error in Placental Molecular Clocks: A Conservative

    E-Print Network [OSTI]

    Hadly, Elizabeth

    Calibration and Error in Placental Molecular Clocks: A Conservative Approach Using ... for calibrating both mitogenomic and nucleogenomic placental timescales. We applied these reestimates to the most ... calibration error may inflate the power of the molecular clock when testing the time of ordinal

  20. Error Control of Iterative Linear Solvers for Integrated Groundwater Models

    E-Print Network [OSTI]

    Bai, Zhaojun

    gradient method or Generalized Minimum RESidual (GMRES) method, is how to choose the residual tolerance for integrated groundwater models, which are implicitly coupled to another model, such as surface water models. ... the correspondence between the residual error in the preconditioned linear system and the solution error. Using
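
    The residual/solution-error correspondence mentioned here is governed by the condition number: for Ax = b, the relative solution error is bounded by cond(A) times the relative residual, so a residual tolerance alone only controls the solution error through cond(A). A numpy illustration (the matrix and perturbation are arbitrary stand-ins, not a groundwater model):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((50, 50))
    A = A @ A.T + 1e-3 * np.eye(50)        # symmetric positive definite, ill-conditioned
    x = rng.standard_normal(50)
    b = A @ x

    xhat = x + 1e-6 * rng.standard_normal(50)   # a perturbed "iterate"
    rel_res = np.linalg.norm(b - A @ xhat) / np.linalg.norm(b)
    rel_err = np.linalg.norm(x - xhat) / np.linalg.norm(x)
    bound = np.linalg.cond(A) * rel_res         # guaranteed: rel_err <= bound
    print(f"rel. residual {rel_res:.2e}, rel. error {rel_err:.2e}, bound {bound:.2e}")
    ```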

  1. PROPAGATION OF ERRORS IN SPATIAL ANALYSIS Peter P. Siska

    E-Print Network [OSTI]

    Hung, I-Kuai

    , the conversion of data from analog to digital form used to be an extremely time-consuming process. At present ... the resulting error is inflated by up to 20 percent for each grid cell of the final map. The magnitude of errors naturally increases with the addition of every new layer entering the overlay process

  2. Error detection through consistency checking Peng Gong* Lan Mu#

    E-Print Network [OSTI]

    Silver, Whendee

    Error detection through consistency checking Peng Gong* Lan Mu# *Center for Assessment & Monitoring Hall, University of California, Berkeley, Berkeley, CA 94720-3110 gong@nature.berkeley.edu mulan, accessibility, and timeliness as recorded in the lineage data (Chen and Gong, 1998). Spatial error refers

  3. Mutual information, bit error rate and security in Wójcik's scheme

    E-Print Network [OSTI]

    Zhanjun Zhang

    2004-02-21T23:59:59.000Z

    In this paper the correct calculations of the mutual information of the whole transmission and the quantum bit error rate (QBER) are presented. Mistakes in the general conclusions relative to the mutual information, the quantum bit error rate (QBER) and the security in Wójcik's paper [Phys. Rev. Lett. 90, 157901 (2003)] have been pointed out.

  4. Uniform and optimal error estimates of an exponential wave ...

    E-Print Network [OSTI]

    2014-05-01T23:59:59.000Z

    of the error propagation, cut-off of the nonlinearity, and the energy method. ... gives Lemma 3.4 for the local truncation error, which is of spectral order in ... estimates, we adopt a strategy similar to the finite difference method [4] (cf. diagram.

  5. Mining API Error-Handling Specifications from Source Code

    E-Print Network [OSTI]

    Xie, Tao

    Mining API Error-Handling Specifications from Source Code. Mithun Acharya and Tao Xie, Department ... it difficult to mine error-handling specifications through manual inspection of source code. In this paper, we ... without any user input. In our framework, we adapt a trace generation technique to distinguish

  6. Entanglement and Quantum Error Correction with Superconducting Qubits

    E-Print Network [OSTI]

    Entanglement and Quantum Error Correction with Superconducting Qubits. A Dissertation Presented ... David Reed. All rights reserved. ... is to use superconducting quantum bits in the circuit quantum electrodynamics (cQED) architecture. There

  7. ARTIFICIAL INTELLIGENCE 223 A Geometric Approach to Error

    E-Print Network [OSTI]

    Richardson, David

    may not even exist. For this reason we investigate error detection and recovery (EDR) strategies. We ... and implementational questions remain. The second contribution is a formal, geometric approach to EDR. While EDR

  8. Stewart and Khosla: The Chimera Methodology

    E-Print Network [OSTI]

    THE CHIMERA METHODOLOGY: DESIGNING ... 15213. pkk@ri.cmu.edu. Abstract: The Chimera Methodology is a software engineering paradigm that enables ... the objects have been developed and incorporated into the Chimera Real-Time Operating System. Techniques

  9. Net Environmental Benefit Analysis: A New Assessment Methodology

    E-Print Network [OSTI]

    Net Environmental Benefit Analysis: A New Assessment Methodology. R. A. Efroymson, U.S. Department of Energy, Dec-05. ... environmental assessment methodologies such as risk assessment, by explicitly considering benefits (not just

  10. Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination

    SciTech Connect (OSTI)

    Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk [Department of Mathematics, Royal Holloway University of London, Egham TW20 0EX (United Kingdom); Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent (Belgium); Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com [Física Teòrica: Informació i Fenomens Quàntics, Universitat Autònoma de Barcelona, ES-08193 Bellaterra, Barcelona (Spain); Mathematical Institute, Budapest University of Technology and Economics, Egry József u 1., Budapest 1111 (Hungary)

    2014-10-15T23:59:59.000Z

    We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states ρ{sub 1}, …, ρ{sub r}. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(ρ{sub 1}, …, ρ{sub r}), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences min{sub j
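
    For reference, the standard definitions behind the quantities named in this abstract (the pairwise quantum Chernoff divergence and its multi-hypothesis minimum):

    ```latex
    \[
      C(\rho,\sigma) = -\log \min_{0 \le s \le 1} \operatorname{Tr}\,\rho^{s}\sigma^{1-s},
      \qquad
      C(\rho_1,\dots,\rho_r) = \min_{j<k} C(\rho_j,\rho_k).
    \]
    ```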

  11. Grid-scale Fluctuations and Forecast Error in Wind Power

    E-Print Network [OSTI]

    G. Bel; C. P. Connaughton; M. Toots; M. M. Bandi

    2015-03-29T23:59:59.000Z

    The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_{\\tau}$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_{\\zeta}$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no $a$ $priori$ knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_{\\tau}$) and the scaling error ($e_{\\zeta}$).

  12. Grid-scale Fluctuations and Forecast Error in Wind Power

    E-Print Network [OSTI]

    Bel, G; Toots, M; Bandi, M M

    2015-01-01T23:59:59.000Z

    The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_{\\tau}$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_{\\zeta}$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no $a$ $priori$ knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_{\\tau}$) and the scaling error ($e_{\\zeta}$).

  13. Update of Part 61 Impacts Analysis Methodology. Methodology report. Volume 1

    SciTech Connect (OSTI)

    Oztunali, O.I.; Roles, G.W.

    1986-01-01T23:59:59.000Z

    Under contract to the US Nuclear Regulatory Commission, the Envirosphere Company has expanded and updated the impacts analysis methodology used during the development of the 10 CFR Part 61 rule to allow improved consideration of the costs and impacts of treatment and disposal of low-level waste that is close to or exceeds Class C concentrations. The modifications described in this report principally include: (1) an update of the low-level radioactive waste source term, (2) consideration of additional alternative disposal technologies, (3) expansion of the methodology used to calculate disposal costs, (4) consideration of an additional exposure pathway involving direct human contact with disposed waste due to a hypothetical drilling scenario, and (5) use of updated health physics analysis procedures (ICRP-30). Volume 1 of this report describes the calculational algorithms of the updated analysis methodology.

  14. Environmental Science: Sample Pathway

    E-Print Network [OSTI]

    Goldberg, Bennett

    Environmental Science: Sample Pathway (Semester I / Semester II). Freshman Year: CGS Core / CGS Core; GE 100 & 124; MA 115 Statistics. Summer: Environmental Internship. Junior Year: CH 171 Chem for Health Sciences; CH ... in Environmental Sciences is 17 courses. Courses taken to satisfy CAS major requirements (required, principal, core

  15. Methodology for Defining Gap Areas between Course-over-ground Locations

    SciTech Connect (OSTI)

    Wilson, John E.

    2013-09-30T23:59:59.000Z

    Finding all areas that lie outside some distance d from a polyline is a problem with many potential applications. This application of the Visual Sample Plan (VSP) software required finding all areas that were more than distance d from a set of existing paths (roads and trails) represented by polylines. An outer container polygon (known in VSP as a “sample area”) defines the extents of the area of interest. The term “gap area” was adopted for this project, but another useful term might be “negative coverage area.” The project required a polygon solution rather than a raster solution. The search for a general solution provided no results, so this methodology was developed
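
    A sketch of the geometric operation the report describes: the gap (negative coverage) area is the part of the sample-area polygon farther than d from every path, i.e., the sample area minus the union of buffered polylines. Shapely is an assumed stand-in here; VSP itself required its own polygon (non-raster) solution:

    ```python
    from shapely.geometry import LineString, Polygon
    from shapely.ops import unary_union

    sample_area = Polygon([(0, 0), (100, 0), (100, 100), (0, 100)])
    paths = [LineString([(10, 10), (90, 15)]), LineString([(20, 80), (85, 70)])]
    d = 12.0

    covered = unary_union([p.buffer(d) for p in paths])  # within d of some path
    gap_area = sample_area.difference(covered)           # negative coverage

    print(f"gap area: {gap_area.area:.1f} of {sample_area.area:.1f} square units")
    ```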

  16. DIGITAL TECHNOLOGY BUSINESS CASE METHODOLOGY GUIDE & WORKBOOK

    SciTech Connect (OSTI)

    Thomas, Ken; Lawrie, Sean; Hart, Adam; Vlahoplus, Chris

    2014-09-01T23:59:59.000Z

    Performance advantages of the new digital technologies are widely acknowledged, but it has proven difficult for utilities to derive business cases for justifying investment in these new capabilities. Lack of a business case is often cited by utilities as a barrier to pursuing wide-scale application of digital technologies to nuclear plant work activities. The decision to move forward with funding usually hinges on demonstrating actual cost reductions that can be credited to budgets and thereby truly reduce O&M or capital costs. Technology enhancements, while enhancing work methods and making work more efficient, often fail to eliminate workload such that it changes overall staffing and material cost requirements. It is critical to demonstrate cost reductions or impacts on non-cost performance objectives in order for the business case to justify investment by nuclear operators. This Business Case Methodology approaches building a business case for a particular technology or suite of technologies by detailing how they impact an operator in one or more of the three following areas: Labor Costs, Non-Labor Costs, and Key Performance Indicators (KPIs). Key to those impacts will be identifying where the savings are “harvestable,” meaning they result in an actual reduction in headcount and/or cost. The report consists of a Digital Technology Business Case Methodology Guide and an accompanying spreadsheet workbook that will enable the user to develop a business case.

  17. Visual Sample Plan (VSP) Models and Code Verification

    SciTech Connect (OSTI)

    Gilbert, Richard O.; Davidson, James R.; Wilson, John E.; Pulsipher, Brent A.

    2001-03-06T23:59:59.000Z

    VSP is an easy to use, visual and graphic software tool being developed to select the right number and location of environmental samples so that the results of statistical tests performed to provide input to environmental decisions have the required confidence and performance. It is a significant help for implementing the 6th and 7th steps of the Data Quality Objectives (DQO) planning process ("Specify Tolerable Limits on Decision Errors" and "Optimize the Design for Obtaining Data," respectively).

  18. Logical Error Rate Scaling of the Toric Code

    E-Print Network [OSTI]

    Fern H. E. Watson; Sean D. Barrett

    2014-09-26T23:59:59.000Z

    To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behavior in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead -- the total number of physical qubits required to perform error correction.
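
    The small-p regime described here is often summarized by a failure-counting ansatz: the decoder fails when roughly d/2 physical errors line up along a logical operator. A sketch evaluating that scaling form (the prefactor and threshold below are illustrative assumptions, not fitted values from the paper):

    ```python
    # Evaluate P_L ~ A * (p / p_th) ** ceil(d / 2) for a few code distances.
    import math

    A, p_th = 0.1, 0.10            # illustrative prefactor and threshold

    def logical_error_rate(p: float, d: int) -> float:
        return A * (p / p_th) ** math.ceil(d / 2)

    for d in (3, 5, 7):
        print(d, [f"{logical_error_rate(p, d):.2e}" for p in (0.001, 0.005, 0.01)])
    # increasing d steepens the suppression of the logical error rate at small p
    ```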

  19. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    SciTech Connect (OSTI)

    Stynes, J. K.; Ihas, B.

    2012-04-01T23:59:59.000Z

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  20. Characterization of sampling cyclones

    E-Print Network [OSTI]

    Moore, Murray Edward

    1986-01-01T23:59:59.000Z

    McFarland, who provided an excellent opportunity for the enhancement of my engineering career. To Dr. Best for his patient and competent assistance in this project. To Dr. Parish, who gave his service to my graduate committee. To Bob DeOtte and Carlos Ortiz... in air sampling standards, several different samplers have been developed which utilize either inertial impaction or cyclonic flow fractionation techniques. For example, a 10 μm cutpoint size-selective inlet was developed by McFarland, Ortiz...

  1. Wind Power Forecasting Error Distributions: An International Comparison; Preprint

    SciTech Connect (OSTI)

    Hodge, B. M.; Lew, D.; Milligan, M.; Holttinen, H.; Sillanpaa, S.; Gomez-Lazaro, E.; Scharff, R.; Soder, L.; Larsen, X. G.; Giebel, G.; Flynn, D.; Dobschinski, J.

    2012-09-01T23:59:59.000Z

    Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.
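    A hedged sketch of the kind of distributional comparison such studies report (the Laplace draw below is synthetic stand-in data, not any operational system's errors):

    ```python
    # Summary statistics commonly used to compare forecast-error distributions.
    import numpy as np
    from scipy import stats

    def error_summary(errors):
        e = np.asarray(errors, dtype=float)
        return {
            "mean": e.mean(),
            "std": e.std(ddof=1),
            "skewness": stats.skew(e),
            "excess_kurtosis": stats.kurtosis(e),  # > 0: fatter tails than Gaussian
            "p5/p95": (np.percentile(e, 5), np.percentile(e, 95)),
        }

    rng = np.random.default_rng(0)
    print(error_summary(rng.laplace(0.0, 0.05, size=10_000)))
    ```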

  2. Universal Framework for Quantum Error-Correcting Codes

    E-Print Network [OSTI]

    Zhuo Li; Li-Juan Xing

    2009-01-04T23:59:59.000Z

    We present a universal framework for quantum error-correcting codes, i.e., the one that applies for the most general quantum error-correcting codes. This framework is established on the group algebra, an algebraic notation for the nice error bases of quantum systems. The nicest thing about this framework is that we can characterize the properties of quantum codes by the properties of the group algebra. We show how it characterizes the properties of quantum codes as well as generates some new results about quantum codes.

  3. Post-Award Deliverables Sample (Second Part of Sample Deliverables...

    Broader source: Energy.gov (indexed) [DOE]

    samplereptgrqmts.doc. Related documents: ESPC Sample Deliverables for Task Orders (IDIQ Attachment J-4); Sample Statement of Work - Standard Service Offerings for...

  4. SAMPLE: Parity Violating Electron Scattering from Hydrogen and Deuterium

    E-Print Network [OSTI]

    E. J. Beise; J. Arrington; D. H. Beck; E. Candell; R. Carr; G. Dodson; K. Dow; F. Duncan; M. Farkhondeh; B. W. Filippone; T. Forest; H. Gao; W. Korsch; S. Kowalski; A. Lung; R. D. McKeown; R. Mohring; B. A. Mueller; J. Napolitano; M. Pitt; N. Simicevic; E. Tsentalovich; S. Wells

    1996-02-06T23:59:59.000Z

    Recently, there has been considerable theoretical interest in determining strange quark contributions to hadronic matrix elements. Such matrix elements can be accessed through the nucleon's neutral weak form factors as determined in parity violating electron scattering. The SAMPLE experiment will measure the strange magnetic form factor $G_M^s$ at low momentum transfer. By combining measurements from hydrogen and deuterium the theoretical uncertainties in the measurement can be greatly reduced and the result will be limited by experimental errors only. A summary of recent progress on the SAMPLE experiment is presented.

  5. Assessment of rural energy resources; Methodological guidelines

    SciTech Connect (OSTI)

    Rijal, K.; Bansal, N.K.; Grover, P.D. (Centre for Energy Studies, Indian Inst. of Technology, Hauz Khas, New Delhi 11016 (IN))

    1990-01-01T23:59:59.000Z

    This article presents the methodological guidelines used to assess rural energy resources with an example of its application in three villages each from different physiographic zones of Nepal. Existing energy demand patterns of villages are compared with estimated resource availability, and rural energy planning issues are discussed. Economics and financial supply price of primary energy resources are compared, which provides insight into defective energy planning and policy formulation and implication in the context of rural areas of Nepal. Though aware of the formidable consequences, the rural populace continues to exhaust the forest as they are unable to find financially cheaper alternatives. Appropriate policy measures need to be devised by the government to promote the use of economically cost-effective renewable energy resources so as to change the present energy usage pattern to diminish the environmental impact caused by over exploitation of forest resources beyond their regenerative capacity.

  6. An optimally designed stack effluent sampling system with transpiration for active transmission enhancement

    E-Print Network [OSTI]

    Schroeder, Troy J.

    1995-01-01T23:59:59.000Z

    ) standard number N13.1 for sampling methodology that is to be used at locations selected by the methodologies of EPA Method 1. ANSI N13.1 requires the use of sharp-edged isokinetic probes if particles larger than 5 μm are anticipated to be present..., there is minimal effect on transmission. Prototype Equipment Certification: Various tests were performed on the prototype CEM-SETS to ensure its field worthiness. One critical test was the leak test. The current methodology used in the EPA Methods 5 and 17...

  7. Decoupled Sampling for Graphics Pipelines

    E-Print Network [OSTI]

    Ragan-Kelley, Jonathan Millar

    We propose a generalized approach to decoupling shading from visibility sampling in graphics pipelines, which we call decoupled sampling. Decoupled sampling enables stochastic supersampling of motion and defocus blur at ...

  8. Fluid sampling apparatus and method

    DOE Patents [OSTI]

    Yeamans, David R. (Los Alamos, NM)

    1998-01-01T23:59:59.000Z

    Incorporation of a bellows in a sampling syringe eliminates ingress of contaminants, permits replication of amounts and compression of multiple sample injections, and enables remote sampling for off-site analysis.

  9. Fluid sampling apparatus and method

    DOE Patents [OSTI]

    Yeamans, D.R.

    1998-02-03T23:59:59.000Z

    Incorporation of a bellows in a sampling syringe eliminates ingress of contaminants, permits replication of amounts and compression of multiple sample injections, and enables remote sampling for off-site analysis. 3 figs.

  10. Soil sampling kit and a method of sampling therewith

    DOE Patents [OSTI]

    Thompson, Cyril V. (Knoxville, TN)

    1991-01-01T23:59:59.000Z

    A soil sampling device and a sample containment device for containing a soil sample is disclosed. In addition, a method for taking a soil sample using the soil sampling device and soil sample containment device to minimize the loss of any volatile organic compounds contained in the soil sample prior to analysis is disclosed. The soil sampling device comprises two close fitting, longitudinal tubular members of suitable length, the inner tube having the outward end closed. With the inner closed tube withdrawn a selected distance, the outer tube can be inserted into the ground or other similar soft material to withdraw a sample of material for examination. The inner closed end tube controls the volume of the sample taken and also serves to eject the sample. The soil sample containment device has a sealing member which is adapted to attach to an analytical apparatus which analyzes the volatile organic compounds contained in the sample. The soil sampling device in combination with the soil sample containment device allows an operator to obtain a soil sample containing volatile organic compounds and minimizing the loss of the volatile organic compounds prior to analysis of the soil sample for the volatile organic compounds.

  11. Soil sampling kit and a method of sampling therewith

    DOE Patents [OSTI]

    Thompson, C.V.

    1991-02-05T23:59:59.000Z

    A soil sampling device and a sample containment device for containing a soil sample is disclosed. In addition, a method for taking a soil sample using the soil sampling device and soil sample containment device to minimize the loss of any volatile organic compounds contained in the soil sample prior to analysis is disclosed. The soil sampling device comprises two close fitting, longitudinal tubular members of suitable length, the inner tube having the outward end closed. With the inner closed tube withdrawn a selected distance, the outer tube can be inserted into the ground or other similar soft material to withdraw a sample of material for examination. The inner closed end tube controls the volume of the sample taken and also serves to eject the sample. The soil sample containment device has a sealing member which is adapted to attach to an analytical apparatus which analyzes the volatile organic compounds contained in the sample. The soil sampling device in combination with the soil sample containment device allows an operator to obtain a soil sample containing volatile organic compounds and minimizing the loss of the volatile organic compounds prior to analysis of the soil sample for the volatile organic compounds. 11 figures.

  12. Discrete Sampling Test Plan for the 200-BP-5 Operable Unit

    SciTech Connect (OSTI)

    Sweeney, Mark D.

    2010-02-04T23:59:59.000Z

    The Discrete Groundwater Sampling Project is conducted by the Pacific Northwest National Laboratory (PNNL) on behalf of CH2M HILL Plateau Remediation Company. The project is focused on delivering groundwater samples from prescribed horizons within select groundwater wells residing in the 200-BP-5 Operable Unit (200-BP-5 OU) on the Hanford Site. This document provides the scope, schedule, methodology, and other details of the PNNL discrete sampling effort.

  13. Servo control booster system for minimizing following error

    DOE Patents [OSTI]

    Wise, William L. (Mountain View, CA)

    1985-01-01T23:59:59.000Z

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
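    As a rough illustration of the by-exception scheme described above (a minimal sketch under assumed names; the patent specifies circuitry, not code), the booster loop engages only when the error reaches the feedback resolution increment:

    ```python
    # Illustrative by-exception control step; ds_r and the gain names are
    # this sketch's assumptions, not the patent's terminology.
    def servo_step(command, response, ds_r, conventional_gain, booster_gain):
        error = command - response
        drive = conventional_gain * error        # conventional loop, always active
        if abs(error) >= ds_r:                   # booster loop engages by exception
            drive += booster_gain * error        # precise position correction
        return drive
    ```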

  14. A Posteriori Error Estimation for - Department of Mathematics ...

    E-Print Network [OSTI]

    Shuhao Cao supervised under Professor Zhiqiang Cai

    2013-10-31T23:59:59.000Z

    Oct 19, 2013 ... the “correct” Hilbert space the true flux $\mu^{-1}\nabla\times u$ lies in, to recover a ... The error heat map shows that the ZZ patch-recovery estimator leads.

  15. Quantum error correcting codes based on privacy amplification

    E-Print Network [OSTI]

    Zhicheng Luo

    2008-08-10T23:59:59.000Z

    Calderbank-Shor-Steane (CSS) quantum error-correcting codes are based on pairs of classical codes which are mutually dual containing. Explicit constructions of such codes for large blocklengths and with good error correcting properties are not easy to find. In this paper we propose a construction of CSS codes which combines a classical code with a two-universal hash function. We show, using the results of Renner and Koenig, that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. While the bit-flip errors can be decoded as efficiently as the classical code used, the problem of efficiently decoding the phase-flip errors remains open.
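    For flavor, a minimal sketch of a two-universal family of the kind such constructions use (random binary matrices over GF(2); the names are illustrative):

    ```python
    # h_M(x) = M x (mod 2): for x != y, Pr_M[h(x) = h(y)] = 2^-m, i.e. two-universal.
    import numpy as np

    def random_gf2_hash(m, n, rng):
        """Draw a hash from n-bit strings to m-bit strings."""
        M = rng.integers(0, 2, size=(m, n), dtype=np.uint8)
        return lambda x: (M @ x) % 2

    rng = np.random.default_rng(1)
    h = random_gf2_hash(3, 8, rng)
    x = rng.integers(0, 2, size=8, dtype=np.uint8)
    print(h(x))  # 3-bit hash of an 8-bit string
    ```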

  16. avoid vocal errors: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Error Avoiding Quantum Codes (Quantum Physics, arXiv). Summary: The existence is proved of a class of open quantum...

  17. Rateless and rateless unequal error protection codes for Gaussian channels

    E-Print Network [OSTI]

    Boyle, Kevin P. (Kevin Patrick)

    2007-01-01T23:59:59.000Z

    In this thesis we examine two different rateless codes and create a rateless unequal error protection code, all for the additive white Gaussian noise (AWGN) channel. The two rateless codes are examined through both analysis ...

  18. An Approximation Algorithm for Constructing Error Detecting Prefix ...

    E-Print Network [OSTI]

    2006-09-02T23:59:59.000Z

    Sep 2, 2006 ... 2-bit Hamming prefix code problem. Our algorithm spends O(n log^3 n) time to calculate a 2-bit Hamming prefix code with an additive error of at ...

  19. Secured Pace Web Server with Collaboration and Error Logging Capabilities

    E-Print Network [OSTI]

    Tao, Lixin

    : Secure Sockets Layer (SSL) using the Java Secure Socket Extension (JSSE) API, error logging ... Contents include: Chapter 3, Secure Pace Web Server with SSL; 3.1, Introduction to SSL.

  20. Transition state theory: Variational formulation, dynamical corrections, and error estimates

    E-Print Network [OSTI]

    Van Den Eijnden, Eric

    Transition state theory: Variational formulation, dynamical corrections, and error estimates. Received 18 February 2005; accepted 9 September 2005; published online 7 November 2005. ... methods which aim at computing dynamical corrections to the TST transition rate constant. The theory

  1. YELLOW SEA ACOUSTIC UNCERTAINTY CAUSED BY HYDROGRAPHIC DATA ERROR

    E-Print Network [OSTI]

    Chu, Peter C.

    the littoral and blue waters. After a weapon platform has detected its targets, the sensors on torpedoes ... bathymetry, bottom type, and sound speed profiles. Here, the effect of sound speed errors (i.e., hydrographic ...

  2. Strontium-90 Error Discovered in Subcontract Laboratory Spreadsheet

    SciTech Connect (OSTI)

    D. D. Brown A. S. Nagel

    1999-07-31T23:59:59.000Z

    West Valley Demonstration Project health physicists and environment scientists discovered a series of errors in a subcontractor's spreadsheet being used to reduce data as part of their strontium-90 analytical process.

  3. Sensitivity of OFDM Systems to Synchronization Errors and Spatial Diversity

    E-Print Network [OSTI]

    Zhou, Yi

    2012-02-14T23:59:59.000Z

    jitter cause inter-carrier interference. The overall system performance in terms of symbol error rate is limited by the inter-carrier interference. For a reliable information reception, compensatory measures must be taken. The second part...

  4. Diagnosing multiplicative error by lensing magnification of type Ia supernovae

    E-Print Network [OSTI]

    Zhang, Pengjie

    2015-01-01T23:59:59.000Z

    Weak lensing causes spatially coherent fluctuations in flux of type Ia supernovae (SNe Ia). This lensing magnification allows for weak lensing measurement independent of cosmic shear. It is free of shape measurement errors associated with cosmic shear and can therefore be used to diagnose and calibrate multiplicative error. Although this lensing magnification is difficult to measure accurately in auto correlation, its cross correlation with cosmic shear and galaxy distribution in overlapping area can be measured to significantly higher accuracy. Therefore these cross correlations can put useful constraint on multiplicative error, and the obtained constraint is free of cosmic variance in weak lensing field. We present two methods implementing this idea and estimate their performances. We find that, with $\sim 1$ million SNe Ia that can be achieved by the proposed D2k survey with the LSST telescope (Zhan et al. 2008), multiplicative error of $\sim 0.5\%$ for source galaxies at $z_s\sim 1$ can be detected and la...

  5. Model Error Correction for Linear Methods in PET Neuroreceptor Measurements

    E-Print Network [OSTI]

    Renaut, Rosemary

    Model Error Correction for Linear Methods in PET Neuroreceptor Measurements. Hongbin Guo (hguo1@asu.edu). Preprint submitted to NeuroImage, December 11, 2008.

  6. Universally Valid Error-Disturbance Relations in Continuous Measurements

    E-Print Network [OSTI]

    Atsushi Nishizawa; Yanbei Chen

    2015-05-31T23:59:59.000Z

    In quantum physics, measurement error and disturbance were first naively thought to be simply constrained by the Heisenberg uncertainty relation. Later, more rigorous analysis showed that the error and disturbance satisfy more subtle inequalities. Several versions of universally valid error-disturbance relations (EDR) have already been obtained and experimentally verified in the regimes where naive applications of the Heisenberg uncertainty relation failed. However, these EDRs were formulated for discrete measurements. In this paper, we consider continuous measurement processes and obtain new EDR inequalities in the Fourier space: in terms of the power spectra of the system and probe variables. By applying our EDRs to a linear optomechanical system, we confirm that a tradeoff relation between error and disturbance leads to the existence of an optimal strength of the disturbance in a joint measurement. Interestingly, even in this optimal case, the inequality of the new EDR is not saturated because two standard quantum limits appear in the inequality.
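    For context, one widely used universally valid EDR from the discrete-measurement literature that this abstract contrasts with (quoted as background, not from this paper) is Ozawa's relation:

    ```latex
    % Ozawa's universally valid error-disturbance relation (discrete setting).
    % \epsilon(A): root-mean-square error of the A measurement;
    % \eta(B): disturbance imparted to B; \sigma(\cdot): pre-measurement spread.
    \epsilon(A)\,\eta(B) + \epsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B)
      \;\ge\; \frac{1}{2}\bigl|\langle [A,B] \rangle\bigr|
    ```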

  7. Robust mixtures in the presence of measurement errors

    E-Print Network [OSTI]

    Jianyong Sun; Ata Kaban; Somak Raychaudhury

    2007-09-06T23:59:59.000Z

    We develop a mixture-based approach to robust density modeling and outlier detection for experimental multivariate data that includes measurement error information. Our model is designed to infer atypical measurements that are not due to errors, aiming to retrieve potentially interesting peculiar objects. Since exact inference is not possible in this model, we develop a tree-structured variational EM solution. This compares favorably against a fully factorial approximation scheme, approaching the accuracy of a Markov-Chain-EM, while maintaining computational simplicity. We demonstrate the benefits of including measurement errors in the model, in terms of improved outlier detection rates in varying measurement uncertainty conditions. We then use this approach in detecting peculiar quasars from an astrophysical survey, given photometric measurements with errors.
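    A minimal sketch of the modeling idea (E-step responsibilities only, assuming Gaussian components and known per-point error covariances; the paper's actual model adds an outlier component and tree-structured variational inference):

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def responsibilities(X, S, weights, means, covs):
        """X: (N, D) measurements; S: (N, D, D) known error covariances."""
        N, K = X.shape[0], len(weights)
        R = np.zeros((N, K))
        for n in range(N):
            for k in range(K):
                # each datum sees component k broadened by its own error covariance
                R[n, k] = weights[k] * multivariate_normal.pdf(
                    X[n], mean=means[k], cov=covs[k] + S[n])
        return R / R.sum(axis=1, keepdims=True)
    ```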

  8. NID Copper Sample Analysis

    SciTech Connect (OSTI)

    Kouzes, Richard T.; Zhu, Zihua

    2011-09-12T23:59:59.000Z

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). This experiment requires the use of germanium isotopically enriched in 76Ge. The MAJORANA DEMONSTRATOR is a DOE and NSF funded project with a major science impact. The DEMONSTRATOR will utilize 76Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology, possibly one under development at Nonlinear Ion Dynamics (NID), will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. DOE is funding NID through an SBIR grant for development of their separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences facility (EMSL), a DOE user facility at PNNL, has the required mass spectroscopy instruments for making isotopic measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR and for the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL in January 2011 for isotopic analysis as a test of the NID technology. The results of that analysis are reported here. A second sample of isotopically separated copper was provided by NID to PNNL in August 2011 for isotopic analysis as a test of the NID technology. The results of that analysis are also reported here.

  9. Germanium-76 Sample Analysis

    SciTech Connect (OSTI)

    Kouzes, Richard T.; Engelhard, Mark H.; Zhu, Zihua

    2011-04-01T23:59:59.000Z

    The MAJORANA DEMONSTRATOR is a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). The DEMONSTRATOR will utilize 76Ge from Russia, and the first one gram sample was received from the supplier for analysis on April 24, 2011. The Environmental Molecular Sciences facility, a DOE user facility at PNNL, was used to make the required isotopic and chemical purity measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR. The results of this first analysis are reported here.

  10. Stack sampling apparatus

    DOE Patents [OSTI]

    Lind, Randall F; Lloyd, Peter D; Love, Lonnie J; Noakes, Mark W; Pin, Francois G; Richardson, Bradley S; Rowe, John C

    2014-09-16T23:59:59.000Z

    An apparatus for obtaining samples from a structure includes a support member, at least one stabilizing member, and at least one moveable member. The stabilizing member has a first portion coupled to the support member and a second portion configured to engage with the structure to restrict relative movement between the support member and the structure. The stabilizing member is radially expandable from a first configuration where the second portion does not engage with a surface of the structure to a second configuration where the second portion engages with the surface of the structure.

  11. Draft Sample Collection Instrument

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

  12. September 2004 Water Sampling

    Office of Legacy Management (LM)

  13. TESLA-FEL 2009-07 Errors in Reconstruction of Difference Orbit

    E-Print Network [OSTI]

    Contents: 1. Introduction; 2. Standard Least Squares Solution; 3. Error Emittance and Error Twiss Parameters. As the position of the reconstruction point changes, we will introduce error Twiss parameters and invariant error ... in the point of interest has to be achieved by matching error Twiss parameters in this point to the desired ...

  14. A Taxonomy to Enable Error Recovery and Correction in Software

    E-Print Network [OSTI]

    Kaeli, David R.

    A Taxonomy to Enable Error Recovery and Correction in Software. Vilas Sridharan, ECE Department. ... years, reliability research has largely used the following taxonomy of errors: Undetected Errors ... Corrected Errors (CE). While this taxonomy is suitable to characterize hardware error detection and correction

  1. Using doppler radar images to estimate aircraft navigational heading error

    DOE Patents [OSTI]

    Doerry, Armin W. (Albuquerque, NM); Jordan, Jay D. (Albuquerque, NM); Kim, Theodore J. (Albuquerque, NM)

    2012-07-03T23:59:59.000Z

    A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.
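    Loosely, the geometry works as follows (a toy sketch under assumed geometry, not the patented algorithm): a yaw error makes a known target appear offset between the two Doppler images, and the offset over the aim-point separation gives the angle.

    ```python
    # Hypothetical conversion of an apparent cross-range offset into a
    # yaw-angle estimate; offset_m and separation_m are this sketch's inputs.
    import numpy as np

    def yaw_error_estimate(offset_m, separation_m):
        return np.degrees(np.arctan2(offset_m, separation_m))

    print(yaw_error_estimate(offset_m=3.0, separation_m=500.0))  # ~0.34 degrees
    ```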

  2. Coding Techniques for Error Correction and Rewriting in Flash Memories

    E-Print Network [OSTI]

    Mohammed, Shoeb Ahmed

    2010-10-12T23:59:59.000Z

    Coding Techniques for Error Correction and Rewriting in Flash Memories. A thesis by Shoeb Ahmed Mohammed, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 2010. Major Subject: Electrical Engineering.

  3. Systematic errors in current quantum state tomography tools

    E-Print Network [OSTI]

    Christian Schwemmer; Lukas Knips; Daniel Richart; Tobias Moroder; Matthias Kleinmann; Otfried Gühne; Harald Weinfurter

    2014-07-22T23:59:59.000Z

    Common tools for obtaining physical density matrices in experimental quantum state tomography are shown here to cause systematic errors. For example, using maximum likelihood or least squares optimization for state reconstruction, we observe a systematic underestimation of the fidelity and an overestimation of entanglement. A solution for this problem can be achieved by a linear evaluation of the data yielding reliable and computationally simple bounds, including error bars.
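    For contrast with constrained estimators, a single-qubit linear inversion (a generic textbook sketch, not the authors' evaluation procedure) is unbiased but can return a non-physical matrix:

    ```python
    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1.0, -1.0])

    def linear_inversion(rx, ry, rz):
        """rho from measured Bloch components; eigenvalues may dip below zero."""
        return 0.5 * (I2 + rx * X + ry * Y + rz * Z)

    rho = linear_inversion(0.7, 0.0, 0.8)  # noisy data can give |r| > 1
    print(np.linalg.eigvalsh(rho))         # a negative eigenvalue flags non-physicality
    ```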

  4. Fault-Tolerant Thresholds for Encoded Ancillae with Homogeneous Errors

    E-Print Network [OSTI]

    Bryan Eastin

    2006-11-14T23:59:59.000Z

    I describe a procedure for calculating thresholds for quantum computation as a function of error model given the availability of ancillae prepared in logical states with independent, identically distributed errors. The thresholds are determined via a simple counting argument performed on a single qubit of an infinitely large CSS code. I give concrete examples of thresholds thus achievable for both Steane and Knill style fault-tolerant implementations and investigate their relation to threshold estimates in the literature.

  5. Assessment of outdoor radiofrequency electromagnetic field exposure through hotspot localization using kriging-based sequential sampling

    SciTech Connect (OSTI)

    Aerts, Sam, E-mail: sam.aerts@intec.ugent.be; Deschrijver, Dirk; Verloock, Leen; Dhaene, Tom; Martens, Luc; Joseph, Wout

    2013-10-15T23:59:59.000Z

    In this study, a novel methodology is proposed to create heat maps that accurately pinpoint the outdoor locations with elevated exposure to radiofrequency electromagnetic fields (RF-EMF) in an extensive urban region (or, hotspots), and that would allow local authorities and epidemiologists to efficiently assess the locations and spectral composition of these hotspots, while at the same time developing a global picture of the exposure in the area. Moreover, no prior knowledge about the presence of radiofrequency radiation sources (e.g., base station parameters) is required. After building a surrogate model from the available data using kriging, the proposed method makes use of an iterative sampling strategy that selects new measurement locations at spots which are deemed to contain the most valuable information—inside hotspots or in search of them—based on the prediction uncertainty of the model. The method was tested and validated in an urban subarea of Ghent, Belgium with a size of approximately 1 km{sup 2}. In total, 600 input and 50 validation measurements were performed using a broadband probe. Five hotspots were discovered and assessed, with maximum total electric-field strengths ranging from 1.3 to 3.1 V/m, satisfying the reference levels issued by the International Commission on Non-Ionizing Radiation Protection for exposure of the general public to RF-EMF. Spectrum analyzer measurements in these hotspots revealed five radiofrequency signals with a relevant contribution to the exposure. The radiofrequency radiation emitted by 900 MHz Global System for Mobile Communications (GSM) base stations was always dominant, with contributions ranging from 45% to 100%. Finally, validation of the subsequent surrogate models shows high prediction accuracy, with the final model featuring an average relative error of less than 2 dB (factor 1.26 in electric-field strength), a correlation coefficient of 0.7, and a specificity of 0.96. -- Highlights: • We present an iterative measurement and modeling method for outdoor RF-EMF exposure. • Hotspots are rapidly identified, and accurately characterized. • An accurate graphical representation, or heat map, is created, using kriging. • Random validation shows good correlation (0.7) and low relative errors (2 dB)
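    The sequential step can be sketched as follows (assumed function names and kernel; the study's actual implementation details differ):

    ```python
    # Fit a kriging (Gaussian-process) surrogate to measured field strengths and
    # score candidate locations so that both predicted hotspots and uncertain
    # areas attract the next measurement.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def next_location(X_meas, y_meas, X_cand, kappa=1.5):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=100.0),
                                      normalize_y=True).fit(X_meas, y_meas)
        mu, sd = gp.predict(X_cand, return_std=True)
        return X_cand[np.argmax(mu + kappa * sd)]  # upper-confidence-bound style pick
    ```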

  6. NID Copper Sample Analysis

    SciTech Connect (OSTI)

    Kouzes, Richard T.; Zhu, Zihua

    2011-02-01T23:59:59.000Z

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). This experiment requires the use of germanium isotopically enriched in 76Ge. The DEMONSTRATOR will utilize 76Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology under development at Nonlinear Ion Dynamics (NID) will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. The MAJORANA DEMONSTRATOR is a DOE and NSF funded project with a major science impact. DOE is funding NID through an SBIR grant for development of their separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences facility (EMSL), a DOE user facility at PNNL, has the required mass spectroscopy instruments for making these isotopic measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR and for the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL for isotopic analysis as a test of the NID technology. The results of that analysis are reported here.

  7. Sample holder with optical features

    DOE Patents [OSTI]

    Milas, Mirko; Zhu, Yimei; Rameau, Jonathan David

    2013-07-30T23:59:59.000Z

    A sample holder for holding a sample to be observed for research purposes, particularly in a transmission electron microscope (TEM), generally includes an external alignment part for directing a light beam in a predetermined beam direction, a sample holder body in optical communication with the external alignment part and a sample support member disposed at a distal end of the sample holder body opposite the external alignment part for holding a sample to be analyzed. The sample holder body defines an internal conduit for the light beam and the sample support member includes a light beam positioner for directing the light beam between the sample holder body and the sample held by the sample support member.

  8. Microsphere estimates of blood flow: Methodological considerations

    SciTech Connect (OSTI)

    von Ritter, C.; Hinder, R.A.; Womack, W.; Bauerfeind, P.; Fimmel, C.J.; Kvietys, P.R.; Granger, D.N.; Blum, A.L. (Univ. of the Witwatersrand, Johannesburg (South Africa) Louisianna State Univ. Medical Center, Shreveport (USA) Universitaire Vaudois (Switzerland))

    1988-02-01T23:59:59.000Z

    The microsphere technique is a standard method for measuring blood flow in experimental animals. Sporadic reports have appeared outlining the limitations of this method. In this study the authors have systematically assessed the effect of blood withdrawals for reference sampling, microsphere numbers, and anesthesia on blood flow estimates using radioactive microspheres in dogs. Experiments were performed on 18 conscious and 12 anesthetized dogs. Four blood flow estimates were performed over 120 min using 1 × 10^6 microspheres each time. The effects of excessive numbers of microspheres, pentobarbital sodium anesthesia, and replacement of volume loss for reference samples with dextran 70 were assessed. In both conscious and anesthetized dogs a progressive decrease in gastric mucosal blood flow and cardiac output was observed over 120 min. This was also observed in the pancreas in conscious dogs. The major factor responsible for these changes was the volume loss due to the reference sample withdrawals. Replacement of the withdrawn blood with dextran 70 led to stable blood flows to all organs. The injection of excessive numbers of microspheres did not modify hemodynamics to a greater extent than did the injection of 4 million microspheres. Anesthesia exerted no influence on blood flow other than raising coronary flow. The authors conclude that although blood flow to the gastric mucosa and the pancreas is sensitive to the minor hemodynamic changes associated with the microsphere technique, replacement of volume loss for reference samples ensures stable blood flow to all organs over a 120-min period.
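    The underlying reference-sample arithmetic (a worked illustration of the standard technique, not the authors' protocol) is simply:

    ```python
    # Organ flow = reference withdrawal rate x (organ counts / reference counts).
    def organ_blood_flow(ref_withdrawal_ml_per_min, counts_organ, counts_ref):
        return ref_withdrawal_ml_per_min * counts_organ / counts_ref

    print(organ_blood_flow(2.0, counts_organ=1500, counts_ref=6000))  # 0.5 mL/min
    ```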

  9. After a Disaster: Lessons in Survey Methodology from Hurricane Katrina

    E-Print Network [OSTI]

    Swanson, David A; Henderson, Tammy; Sirois, Maria; Chen, Angela; Airriess, Christopher; Banks, David

    2009-01-01T23:59:59.000Z

    of Labor. (2005). Effects of Hurricane Katrina on local area ... Survey Methodology from Hurricane Katrina (Tammy L. Henderson) ... to study the impact of Hurricane Katrina. The current

  10. assessment committee methodology: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Future of Natural Gas, Supplementary Paper SP2.1: Natural Gas Resource Assessment Methodologies (CiteSeer). Summary: Techniques for estimation of...

  11. Quality Guideline for Cost Estimation Methodology for NETL Assessments...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    National Energy Technology Laboratory, Office of Program Performance and Benefits. Power Plant Cost Estimation Methodology: Quality Guidelines for Energy System Studies...

  12. PROLIFERATION RESISTANCE AND PHYSICAL PROTECTION WORKING GROUP: METHODOLOGY AND APPLICATIONS

    SciTech Connect (OSTI)

    Bari R. A.; Whitlock, J.; Therios, I.U.; Peterson, P.F.

    2012-11-14T23:59:59.000Z

    We summarize the technical progress and accomplishments on the evaluation methodology for proliferation resistance and physical protection (PR and PP) of Generation IV nuclear energy systems. We intend the results of the evaluations performed with the methodology for three types of users: system designers, program policy makers, and external stakeholders. The PR and PP Working Group developed the methodology through a series of demonstration and case studies. Over the past few years various national and international groups have applied the methodology to nuclear energy system designs as well as to developing approaches to advanced safeguards.

  13. UNFCCC-Consolidated baseline and monitoring methodology for landfill...

    Open Energy Info (EERE)

    Tool Summary. Name: UNFCCC - Consolidated baseline and monitoring methodology for landfill gas project activities.

  14. Methodology for Estimating Reductions of GHG Emissions from Mosaic...

    Open Energy Info (EERE)

    Methodology for Estimating Reductions of GHG Emissions from Mosaic Deforestation. Agency/Company/Organization: World Bank. Sector: Land. Focus Area: Forestry. Topics: Co-benefits...

  15. Egs Exploration Methodology Project Using the Dixie Valley Geothermal...

    Open Energy Info (EERE)

    OpenEI Reference Library. Conference Paper: EGS Exploration Methodology Project Using the Dixie Valley Geothermal System, Nevada: Status Update.

  16. Towards Developing a Calibrated EGS Exploration Methodology Using...

    Open Energy Info (EERE)

    OpenEI Reference Library. Conference Paper: Towards Developing a Calibrated EGS Exploration Methodology Using the Dixie Valley Geothermal System, Nevada.

  17. Methodology for Assessment of Urban Water Planning Objectives

    E-Print Network [OSTI]

    Meier, W. L.; Thornton, B. M.

    TR-51 (1973). Methodology for Assessment of Urban Water Planning Objectives. W.L. Meier and B.M. Thornton. Texas Water Resources Institute, Texas A&M University.

  18. Energy Efficiency Standards for Refrigerators in Brazil: A Methodology...

    Open Energy Info (EERE)

    Tool Summary. Name: Energy Efficiency Standards for Refrigerators in Brazil: A Methodology for Impact Evaluation. Focus...

  19. Survey of Transmission Cost Allocation Methodologies for Regional Transmission Organizations

    SciTech Connect (OSTI)

    Fink, S.; Porter, K.; Mudd, C.; Rogers, J.

    2011-02-01T23:59:59.000Z

    The report presents transmission cost allocation methodologies for reliability transmission projects, generation interconnection, and economic transmission projects for all Regional Transmission Organizations.

  20. analysis methodology based: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... 1.3.2 Full 3D Tomography ... A Unified Methodology for Seismic Waveform Analysis and Inversion, by Po Chen (dissertation). ... program for tomographic...

  1. National Academies Criticality Methodology and Assessment Video (Text Version)

    Broader source: Energy.gov [DOE]

    This is a text version of the "National Academies Criticality Methodology and Assessment" video presented at the Critical Materials Workshop, held on April 3, 2012 in Arlington, Virginia.

  2. ari methodology modeling: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... you win or lose: The importance of the overall ... (Yamamoto, Hitoshi). A Typical Model Audit Approach: Spreadsheet Audit Methodologies in the City of London (CERN Preprints)...

  3. Sample Environment Plans and Progress

    E-Print Network [OSTI]

    Pennycook, Steve

    Sample Environment Plans and Progress at the SNS & HFIR. SNS-HFIR User Group Meeting, American Conference on Neutron Scattering, Ottawa, Canada, June 26-30, 2010. Lou Santodonato, Sample Environment Group. Topics: our sample environment capabilities; feedback (SHUG meetings, user surveys); Sample Environment Steering ...

  4. Fluid sampling tool

    DOE Patents [OSTI]

    Johnston, Roger G. (Los Alamos, NM); Garcia, Anthony R. E. (Espanola, NM); Martinez, Ronald K. (Santa Cruz, NM)

    2001-09-25T23:59:59.000Z

    The invention includes a rotatable tool for collecting fluid through the wall of a container. The tool includes a fluid collection section with a cylindrical shank having an end portion for drilling a hole in the container wall when the tool is rotated, and a threaded portion for tapping the hole in the container wall. A passageway in the shank in communication with at least one radial inlet hole in the drilling end and an opening at the end of the shank is adapted to receive fluid from the container. The tool also includes a cylindrical chamber affixed to the end of the shank opposite to the drilling portion thereof for receiving and storing fluid passing through the passageway. The tool also includes a flexible, deformable gasket that provides a fluid-tight chamber to confine kerf generated during the drilling and tapping of the hole. The invention also includes a fluid extractor section for extracting fluid samples from the fluid collecting section.

  5. A new and efficient error resilient entropy code for image and video compression

    E-Print Network [OSTI]

    Min, Jungki

    1999-01-01T23:59:59.000Z

    Image and video compression standards such as JPEG, MPEG, H.263 are severely sensitive to errors. Among typical error propagation mechanisms in video compression schemes, loss of block synchronization causes the worst result. Even one bit error...

  6. Error Monitoring: A Learning Strategy for Improving Academic Performance of LD Adolescents

    E-Print Network [OSTI]

    Schumaker, Jean B.; Deshler, Donald D.; Nolan, Susan; Clark, Frances L.; Alley, Gordon R.; Warner, Michael M.

    1981-04-01T23:59:59.000Z

    Error monitoring, a learning strategy for detecting and correcting errors in written products, was taught to nine learning disabled adolescents. Students could detect and correct more errors after they received training ...

  7. Assessing the Impact of Differential Genotyping Errors on Rare Variant Tests of Association

    E-Print Network [OSTI]

    Fast, Shannon Marie

    Genotyping errors are well-known to impact the power and type I error rate in single marker tests of association. Genotyping errors that happen according to the same process in cases and controls are known as non-differential ...

  8. RAMS (Risk Analysis - Modular System) methodology

    SciTech Connect (OSTI)

    Stenner, R.D.; Strenge, D.L.; Buck, J.W. [and others

    1996-10-01T23:59:59.000Z

    The Risk Analysis - Modular System (RAMS) was developed to serve as a broad scope risk analysis tool for the Risk Assessment of the Hanford Mission (RAHM) studies. The RAHM element provides risk analysis support for Hanford Strategic Analysis and Mission Planning activities. The RAHM also provides risk analysis support for the Hanford 10-Year Plan development activities. The RAMS tool draws from a collection of specifically designed databases and modular risk analysis methodologies and models. RAMS is a flexible modular system that can be focused on targeted risk analysis needs. It is specifically designed to address risks associated with overall strategy, technical alternative, and `what if` questions regarding the Hanford cleanup mission. RAMS is set up to address both near-term and long-term risk issues. Consistency is very important for any comparative risk analysis, and RAMS is designed to efficiently and consistently compare risks and produce risk reduction estimates. There is a wide range of output information that can be generated by RAMS. These outputs can be detailed by individual contaminants, waste forms, transport pathways, exposure scenarios, individuals, populations, etc. However, they can also be in rolled-up form to support high-level strategy decisions.

  9. Nevada National Security Site Integrated Groundwater Sampling Plan, Revision 0

    SciTech Connect (OSTI)

    Marutzky, Sam; Farnham, Irene

    2014-10-01T23:59:59.000Z

    The purpose of the Nevada National Security Site (NNSS) Integrated Sampling Plan (referred to herein as the Plan) is to provide a comprehensive, integrated approach for collecting and analyzing groundwater samples to meet the needs and objectives of the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Field Office (NNSA/NFO) Underground Test Area (UGTA) Activity. Implementation of this Plan will provide high-quality data required by the UGTA Activity for ensuring public protection in an efficient and cost-effective manner. The Plan is designed to ensure compliance with the UGTA Quality Assurance Plan (QAP). The Plan’s scope comprises sample collection and analysis requirements relevant to assessing the extent of groundwater contamination from underground nuclear testing. This Plan identifies locations to be sampled by corrective action unit (CAU) and location type, sampling frequencies, sample collection methodologies, and the constituents to be analyzed. In addition, the Plan defines data collection criteria such as well-purging requirements, detection levels, and accuracy requirements; identifies reporting and data management requirements; and provides a process to ensure coordination between NNSS groundwater sampling programs for sampling of interest to UGTA. This Plan does not address compliance with requirements for wells that supply the NNSS public water system or wells involved in a permitted activity.

  10. Development of Analytical Methodology for Neurochemical Investigations

    E-Print Network [OSTI]

    Fischer, David John

    2010-01-25T23:59:59.000Z

    ... for simultaneous immunological and enzymatic assays. Figure 2.19: Microchip device for point-of-care analysis of lithium with integrated sampling from a glass capillary. Figure 2.20: Picture of a three-electrode paper-based microfluidic device with EC... Modes of EC detection, fabrication strategies for electrodes and microchips, and integration of electrodes into microfluidic devices are detailed. In addition, the use of microchip electrophoresis with EC detection for a variety of applications...

  11. SHEAN (Simplified Human Error Analysis code) and automated THERP

    SciTech Connect (OSTI)

    Wilson, J.R.

    1993-06-01T23:59:59.000Z

    One of the most widely used human error analysis tools is THERP (Technique for Human Error Rate Prediction). Unfortunately, this tool has disadvantages. The Nuclear Regulatory Commission, realizing these drawbacks, commissioned Dr. Swain, the author of THERP, to create a simpler, more consistent tool for deriving human error rates. That effort produced the Accident Sequence Evaluation Program Human Reliability Analysis Procedure (ASEP), which is more conservative than THERP, but a valuable screening tool. ASEP involves answering simple questions about the scenario in question, and then looking up the appropriate human error rate in the indicated table (THERP also uses look-up tables, but four times as many). The advantages of ASEP are that human factors expertise is not required, and the training to use the method is minimal. Although not originally envisioned by Dr. Swain, the ASEP approach actually begs to be computerized. That WINCO did, calling the code SHEAN, for Simplified Human Error ANalysis. The code was done in TURBO Basic for IBM or IBM-compatible MS-DOS, for fast execution. WINCO is now in the process of comparing this code against THERP for various scenarios. This report provides a discussion of SHEAN.
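    In spirit, the ASEP/SHEAN approach reduces to answering scenario questions and indexing a small table; a toy sketch follows (the probabilities below are placeholders, not ASEP's published values):

    ```python
    # Hypothetical screening table: (stress level, written procedures available)
    # -> human error probability. Values are illustrative placeholders only.
    ASEP_TABLE = {
        ("moderate", True): 0.02,
        ("moderate", False): 0.05,
        ("extreme", True): 0.25,
        ("extreme", False): 0.50,
    }

    def screening_hep(stress_level, procedures_available):
        return ASEP_TABLE[(stress_level, procedures_available)]

    print(screening_hep("moderate", True))  # -> 0.02
    ```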

  12. Determining the Bayesian optimal sampling strategy in a hierarchical system.

    SciTech Connect (OSTI)

    Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre

    2010-09-01T23:59:59.000Z

    Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
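    A toy version of the selection rule (a Beta-Binomial stand-in for the hierarchical model; the names and priors are assumptions of this sketch): score each candidate test allocation by the expected posterior variance of reliability and pick the smallest.

    ```python
    import numpy as np

    def expected_posterior_var(a, b, n, rng, draws=20_000):
        """Preposterior variance of reliability after n additional pass/fail tests."""
        p = rng.beta(a, b, size=draws)      # draw reliabilities from the prior
        k = rng.binomial(n, p)              # simulate test outcomes
        a2, b2 = a + k, b + (n - k)         # conjugate Beta update
        return np.mean(a2 * b2 / ((a2 + b2) ** 2 * (a2 + b2 + 1)))

    rng = np.random.default_rng(2)
    for name, (a, b) in {"subsystem A": (9, 1), "subsystem B": (3, 2)}.items():
        print(name, expected_posterior_var(a, b, n=5, rng=rng))
    ```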

  13. Benchmarking Methodology for Embedded Scalable Platforms Paolo Mantovani

    E-Print Network [OSTI]

    a diversity of embedded application workloads. A companion methodology combines full-system simulation, pre... performance [2], [4]. Accelerators can offer 2 to 3 orders of magnitude higher efficiency than software

  14. A Methodology for the Derivation of Parallel Programs

    E-Print Network [OSTI]

    Goodman, Joy

    A Methodology for the Derivation of Parallel Programs. Joy Goodman, Department of Computer Science, University of Glasgow. Abstract: I am currently developing a methodology for deriving parallel programs ... from equational reasoning, a more efficient parallel program in a variety of languages and styles can be derived

  15. APPENDIX B: Radiological Data Methodologies (1998 Site Environmental Report)

    E-Print Network [OSTI]

    APPENDIX B: RADIOLOGICAL DATA METHODOLOGIES 1998 SITE ENVIRONMENTAL REPORTB-1 APPENDIX B Radiological Data Methodologies 1. DOSE CALCULATION - ATMOSPHERIC RELEASE PATHWAY Dispersion of airborne and distance. Facility-specific radionuclide release rates (in Ci per year) were also used. All annual site

  16. ORNL/TM-2008/105 Cost Methodology for Biomass

    E-Print Network [OSTI]

    Pennycook, Steve

    ORNL/TM-2008/105. Cost Methodology for Biomass Feedstocks: Herbaceous Crops and Agricultural Residues ... Resource and Engineering Systems, Environmental Sciences Division. Contents include: Cost Methodology for Biomass Feedstocks (p. 3); 2.1.1 Integrated Biomass Supply Analysis and Logistics Model (IBSAL) (p. 6).

  17. A Genetic Programming Methodology for Strategy Optimization Under Uncertainty

    E-Print Network [OSTI]

    Fernandez, Thomas

    the Missile Countermeasures Optimization (MCO) problem as an instance of a strategy optimization problem; describes various types and degrees of uncertainty that may be introduced into the MCO problem; and develops a new methodology for solving the MCO problem under conditions of uncertainty. The new methodology

  18. A Partial Memory Incremental Learning Methodology And Its Application To

    E-Print Network [OSTI]

    Maloof, Mark

    , learning and recognition times, the types of concepts induced by the method, and the types of data from ... A Partial Memory Incremental Learning Methodology and its Application to Computer Intrusion Detection. Marcus A. Maloof and Ryszard S. ...

  19. Web Based Simulations for Virtual Scientific Experiment: Methodology and Tools

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Keywords: web based simulation, virtual scientific experiment, e-learning.

  20. A New Methodology for Aircraft HVDC Power Systems design

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    A new methodology for aircraft HVDC power systems design is presented (D. Hernández, M. Sautreuil, N. Retière; contact: olivier.sename@gipsa-lab.inpg.fr).

  1. Methodology for Service Development in a Distributed Smart Home Environment

    E-Print Network [OSTI]

    Master thesis, "Methodology for Service Development in a Distributed Smart Home Environment," completed at the University of Technology in Munich. The author thanks the whole Smart Home team for their valuable support.

  2. Specified assurance level sampling procedure

    SciTech Connect (OSTI)

    Willner, O.

    1980-11-01T23:59:59.000Z

    In the nuclear industry design specifications for certain quality characteristics require that the final product be inspected by a sampling plan which can demonstrate product conformance to stated assurance levels. The Specified Assurance Level (SAL) Sampling Procedure has been developed to permit the direct selection of attribute sampling plans which can meet commonly used assurance levels. The SAL procedure contains sampling plans which yield the minimum sample size at stated assurance levels. The SAL procedure also provides sampling plans with acceptance numbers ranging from 0 to 10, thus, making available to the user a wide choice of plans all designed to comply with a stated assurance level.
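
    The core computation behind such plans can be sketched in a few lines (this is a generic attribute-sampling calculation, not the SAL tables themselves): for a given acceptance number c, find the smallest sample size n such that a lot whose defect fraction sits at the stated quality limit would pass with probability at most 1 - assurance. The limit and assurance values below are illustrative.

        # Minimum sample size for an attribute plan meeting a stated assurance level.
        from math import comb

        def accept_prob(n, c, p):
            """P(at most c defects) when each of n items is defective w.p. p."""
            return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

        def min_sample_size(c, p_limit, assurance, n_max=100000):
            """Smallest n whose plan (n, c) rejects a p_limit-quality lot
            with probability at least `assurance`."""
            for n in range(c + 1, n_max):
                if accept_prob(n, c, p_limit) <= 1.0 - assurance:
                    return n
            raise ValueError("no plan found within n_max")

        # For c = 0, 1, 2: sample sizes demonstrating, at 95% assurance,
        # that the defect fraction is below 10%.
        for c in range(3):
            print(c, min_sample_size(c, p_limit=0.10, assurance=0.95))

    Larger acceptance numbers buy robustness to isolated defects at the price of a larger minimum sample size, which is exactly the trade-off such plans tabulate.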

  3. Sampling for Bacteria in Wells 

    E-Print Network [OSTI]

    Lesikar, Bruce J.

    2001-11-15T23:59:59.000Z

    Sampling for Bacteria in Wells E-126 11/01. Water samples for bacteria tests must always be collected in a sterile container. The procedure for collecting a water sample is as follows: 1. Obtain a sterile container from a Health Department... immediately after collecting the water sample. Refrigerate the sample and transport it to the laboratory (in an ice chest) as soon after collection as possible (six hours is best, but up to 30 hours). Many labs will not accept bacteria samples on Friday, so check...

  4. Status of Activities to Implement a Sustainable System of MC&A Equipment and Methodological Support at Rosatom Facilities

    SciTech Connect (OSTI)

    J.D. Sanders

    2010-07-01T23:59:59.000Z

    Under the U.S.-Russian Material Protection, Control and Accounting (MPC&A) Program, the Material Control and Accounting Measurements (MCAM) Project has supported a joint U.S.-Russian effort to coordinate improvements of the Russian MC&A measurement system. These efforts have resulted in the development of a MC&A Equipment and Methodological Support (MEMS) Strategic Plan (SP), developed by the Russian MEM Working Group. The MEMS SP covers implementation of MC&A measurement equipment, as well as the development, attestation and implementation of measurement methodologies and reference materials at the facility and industry levels. This paper provides an overview of the activities conducted under the MEMS SP, as well as a status on current efforts to develop reference materials, implement destructive and nondestructive assay measurement methodologies, and implement sample exchange, scrap and holdup measurement programs across Russian nuclear facilities.

  5. Methodology for Scaling Fusion Power Plant Availability

    SciTech Connect (OSTI)

    Lester M. Waganer

    2011-01-04T23:59:59.000Z

    Normally in the U.S. fusion power plant conceptual design studies, the development of the plant availability and the plant capital and operating costs makes the implicit assumption that the plant is a 10th of a kind fusion power plant. This is in keeping with the DOE guidelines published in the 1970s in the PNL report [1], "Fusion Reactor Design Studies - Standard Accounts for Cost Estimates." This assumption specifically defines the level of the industry and technology maturity and eliminates the need to define the necessary research and development efforts and costs to construct a one of a kind or the first of a kind power plant. It also assumes all the "teething" problems have been solved and the plant can operate in the manner intended. The plant availability analysis assumes all maintenance actions have been refined and optimized by the operation of the prior nine or so plants. The actions are defined to be as quick and efficient as possible. This study will present a methodology to enable estimation of the availability of the one of a kind (one OAK) plant or first of a kind (1st OAK) plant. To clarify, one of the OAK facilities might be the pilot plant or the demo plant that is prototypical of the next generation power plant, but it is not a full-scale fusion power plant with all fully validated "mature" subsystems. The first OAK facility is truly the first commercial plant of a common design that represents the next generation plant design. However, its subsystems, maintenance equipment and procedures will continue to be refined to achieve the goals for the 10th OAK power plant.

  6. Development of an integrated system for estimating human error probabilities

    SciTech Connect (OSTI)

    Auflick, J.L.; Hahn, H.A.; Morzinski, J.A.

    1998-12-01T23:59:59.000Z

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project had as its main objective the development of a Human Reliability Analysis (HRA), knowledge-based expert system that would provide probabilistic estimates for potential human errors within various risk assessments, safety analysis reports, and hazard assessments. HRA identifies where human errors are most likely, estimates the error rate for individual tasks, and highlights the most beneficial areas for system improvements. This project accomplished three major tasks. First, several prominent HRA techniques and associated databases were collected and translated into an electronic format. Next, the project started a knowledge engineering phase where the expertise, i.e., the procedural rules and data, were extracted from those techniques and compiled into various modules. Finally, these modules, rules, and data were combined into a nearly complete HRA expert system.

  7. Representing cognitive activities and errors in HRA trees

    SciTech Connect (OSTI)

    Gertman, D.I.

    1992-01-01T23:59:59.000Z

    A graphic representation method is presented herein for adapting an existing technology--human reliability analysis (HRA) event trees, used to support event sequence logic structures and calculations--to include a representation of the underlying cognitive activity and corresponding errors associated with human performance. The analyst is presented with three potential means of representing human activity: the NUREG/CR-1278 HRA event tree approach; the skill-, rule- and knowledge-based paradigm; and the slips, lapses, and mistakes paradigm. The above approaches for representing human activity are integrated in order to produce an enriched HRA event tree -- the cognitive event tree system (COGENT)-- which, in turn, can be used to increase the analyst's understanding of the basic behavioral mechanisms underlying human error and the representation of that error in probabilistic risk assessment. Issues pertaining to the implementation of COGENT are also discussed.

  8. Representing cognitive activities and errors in HRA trees

    SciTech Connect (OSTI)

    Gertman, D.I.

    1992-05-01T23:59:59.000Z

    A graphic representation method is presented herein for adapting an existing technology--human reliability analysis (HRA) event trees, used to support event sequence logic structures and calculations--to include a representation of the underlying cognitive activity and corresponding errors associated with human performance. The analyst is presented with three potential means of representing human activity: the NUREG/CR-1278 HRA event tree approach; the skill-, rule- and knowledge-based paradigm; and the slips, lapses, and mistakes paradigm. The above approaches for representing human activity are integrated in order to produce an enriched HRA event tree -- the cognitive event tree system (COGENT) -- which, in turn, can be used to increase the analyst's understanding of the basic behavioral mechanisms underlying human error and the representation of that error in probabilistic risk assessment. Issues pertaining to the implementation of COGENT are also discussed.

  9. Reducing Collective Quantum State Rotation Errors with Reversible Dephasing

    E-Print Network [OSTI]

    Kevin C. Cox; Matthew A. Norcia; Joshua M. Weiner; Justin G. Bohnet; James K. Thompson

    2014-07-16T23:59:59.000Z

    We demonstrate that reversible dephasing via inhomogeneous broadening can greatly reduce collective quantum state rotation errors, and observe the suppression of rotation errors by more than 21 dB in the context of collective population measurements of the spin states of an ensemble of $2.1 \times 10^5$ laser cooled and trapped $^{87}$Rb atoms. The large reduction in rotation noise enables direct resolution of spin state populations 13(1) dB below the fundamental quantum projection noise limit. Further, the spin state measurement projects the system into an entangled state with 9.5(5) dB of directly observed spectroscopic enhancement (squeezing) relative to the standard quantum limit, whereas no enhancement would have been obtained without the suppression of rotation errors.

  10. Meta learning of bounds on the Bayes classifier error

    E-Print Network [OSTI]

    Moon, Kevin R; Hero, Alfred O

    2015-01-01T23:59:59.000Z

    Meta learning uses information from base learners (e.g. classifiers or estimators) as well as information about the learning problem to improve upon the performance of a single base learner. For example, the Bayes error rate of a given feature space, if known, can be used to aid in choosing a classifier, as well as in feature selection and model selection for the base classifiers and the meta classifier. Recent work in the field of f-divergence functional estimation has led to the development of simple and rapidly converging estimators that can be used to estimate various bounds on the Bayes error. We estimate multiple bounds on the Bayes error using an estimator that applies meta learning to slowly converging plug-in estimators to obtain the parametric convergence rate. We compare the estimated bounds empirically on simulated data and then estimate the tighter bounds on features extracted from an image patch analysis of sunspot continuum and magnetogram images.
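
    For orientation, a far simpler estimator than those discussed above already illustrates how density estimates translate into Bayes error bounds: a histogram plug-in estimate of the Bhattacharyya coefficient BC, which for equal class priors brackets the Bayes error between 0.5*(1 - sqrt(1 - BC^2)) and 0.5*BC. The Gaussian classes, sample sizes, and bin count below are invented.

        # Histogram plug-in bounds on the Bayes error via the Bhattacharyya coefficient.
        import numpy as np

        rng = np.random.default_rng(0)
        x0 = rng.normal(0.0, 1.0, 5000)   # class 0 samples
        x1 = rng.normal(1.5, 1.0, 5000)   # class 1 samples

        edges = np.histogram_bin_edges(np.concatenate([x0, x1]), bins=50)
        p, _ = np.histogram(x0, bins=edges)
        q, _ = np.histogram(x1, bins=edges)
        p = p / p.sum()
        q = q / q.sum()

        bc = np.sum(np.sqrt(p * q))                  # Bhattacharyya coefficient
        lower = 0.5 * (1.0 - np.sqrt(1.0 - bc * bc))
        upper = 0.5 * bc
        print(f"BC={bc:.3f}  {lower:.3f} <= Bayes error <= {upper:.3f}")

    For these two unit-variance Gaussians the true Bayes error is about 0.227, which falls between the printed bounds; the meta-learning estimators in the paper aim at the same kind of bracket with much better convergence behavior.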

  11. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    SciTech Connect (OSTI)

    Alonso, Juan J. [Stanford University; Iaccarino, Gianluca [Stanford University

    2013-08-25T23:59:59.000Z

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A solution to the long-time integration problem of spectral chaos approaches; 4. A rigorous methodology to account for aleatory and epistemic uncertainties, to emphasize the most important variables via dimension reduction and dimension-adaptive refinement, and to support fusion with experimental data using Bayesian inference; 5. The application of novel methodologies to time-dependent reliability studies in wind turbine applications including a number of efforts relating to the uncertainty quantification in vertical-axis wind turbine applications. In this report, we summarize all accomplishments in the project (during the time period specified) focusing on advances in UQ algorithms and deployment efforts to the wind turbine application area. Detailed publications in each of these areas have also been completed and are available from the respective conference proceedings and journals as detailed in a later section.

  12. SU-E-T-170: Evaluation of Rotational Errors in Proton Therapy Planning of Lung Cancer

    SciTech Connect (OSTI)

    Rana, S; Zhao, L; Ramirez, E; Singh, H; Zheng, Y [Procure Proton Therapy Center, Oklahoma City, OK (United States)

    2014-06-01T23:59:59.000Z

    Purpose: To investigate the impact of rotational (roll, yaw, and pitch) errors in proton therapy planning of lung cancer. Methods: A lung cancer case treated at our center was used in this retrospective study. The original plan was generated using two proton fields (posterior-anterior and left-lateral) with XiO treatment planning system (TPS) and delivered using uniform scanning proton therapy system. First, the computed tomography (CT) set of the original lung treatment plan was re-sampled for rotational (roll, yaw, and pitch) angles ranging from −5° to +5°, with an increment of 2.5°. Second, 12 new proton plans were generated in XiO using the 12 re-sampled CT datasets. The same beam conditions, isocenter, and devices were used in new treatment plans as in the original plan. All 12 new proton plans were compared with the original plan for planning target volume (PTV) coverage and maximum dose to spinal cord (cord Dmax). Results: PTV coverage was reduced in all 12 new proton plans when compared to that of the original plan. Specifically, PTV coverage was reduced by 0.03% to 1.22% for roll, by 0.05% to 1.14% for yaw, and by 0.10% to 3.22% for pitch errors. In comparison to the original plan, the cord Dmax in new proton plans was reduced by 8.21% to 25.81% for +2.5° to +5° pitch, by 5.28% to 20.71% for +2.5° to +5° yaw, and by 5.28% to 14.47% for −2.5° to −5° roll. In contrast, cord Dmax was increased by 3.80% to 3.86% for −2.5° to −5° pitch, by 0.63% to 3.25% for −2.5° to −5° yaw, and by 3.75% to 4.54% for +2.5° to +5° roll. Conclusion: PTV coverage was reduced by up to 3.22% for a rotational error of 5°. The cord Dmax could increase or decrease depending on the direction of rotational error, beam angles, and the location of the lung tumor.

  13. Trade-off between the tolerance of located and unlocated errors in nondegenerate quantum error-correcting codes

    E-Print Network [OSTI]

    Henry L. Haselgrove; Peter P. Rohde

    2007-07-03T23:59:59.000Z

    In a recent study [Rohde et al., quant-ph/0603130 (2006)] of several quantum error correcting protocols designed for tolerance against qubit loss, it was shown that these protocols have the undesirable effect of magnifying the effects of depolarization noise. This raises the question of which general properties of quantum error-correcting codes might explain such an apparent trade-off between tolerance to located and unlocated error types. We extend the counting argument behind the well-known quantum Hamming bound to derive a bound on the weights of combinations of located and unlocated errors which are correctable by nondegenerate quantum codes. Numerical results show that the bound gives an excellent prediction to which combinations of unlocated and located errors can be corrected with high probability by certain large degenerate codes. The numerical results are explained partly by showing that the generalized bound, like the original, is closely connected to the information-theoretic quantity the quantum coherent information. However, we also show that as a measure of the exact performance of quantum codes, our generalized Hamming bound is provably far from tight.
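
    The counting argument being generalized can be checked numerically in its standard, unlocated-error form: a nondegenerate [[n, k]] code correcting t errors must satisfy 2^(n-k) >= sum_{j=0..t} C(n, j) * 3^j. The sketch below evaluates only this classical bound; the paper's extension to mixtures of located and unlocated errors is not reproduced here.

        # Standard (nondegenerate) quantum Hamming bound check.
        from math import comb

        def hamming_ok(n, k, t):
            """True if an [[n, k]] code correcting t unlocated errors
            is permitted by the counting argument."""
            return 2 ** (n - k) >= sum(comb(n, j) * 3 ** j for j in range(t + 1))

        print(hamming_ok(5, 1, 1))   # True: the perfect [[5,1,3]] code saturates it
        print(hamming_ok(4, 1, 1))   # False: no nondegenerate [[4,1,3]] code exists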

  14. Hard Data on Soft Errors: A Large-Scale Assessment of Real-World Error Rates in GPGPU

    E-Print Network [OSTI]

    Haque, Imran S

    2009-01-01T23:59:59.000Z

    Graphics processing units (GPUs) are gaining widespread use in computational chemistry and other scientific simulation contexts because of their huge performance advantages relative to conventional CPUs. However, the reliability of GPUs in error-intolerant applications is largely unproven. In particular, a lack of error checking and correcting (ECC) capability in the memory subsystems of graphics cards has been cited as a hindrance to the acceptance of GPUs as high-performance coprocessors, but the impact of this design has not been previously quantified. In this article we present MemtestG80, our software for assessing memory error rates on NVIDIA G80 and GT200-architecture-based graphics cards. Furthermore, we present the results of a large-scale assessment of GPU error rate, conducted by running MemtestG80 on over 20,000 hosts on the Folding@home distributed computing network. Our control experiments on consumer-grade and dedicated-GPGPU hardware in a controlled environment found no errors. However, our su...

  15. Are you getting an error message in UniFi Plus? (suggestion...check the auto-hint line!) In most cases, Unifi Plus does not prominently display error messages; instead, the error message will be

    E-Print Network [OSTI]

    Peak, Derek

    Are you getting an error message in UniFi Plus? (Suggestion: check the auto-hint line!) In most cases, UniFi Plus does not prominently display error messages; instead, the error message appears on the auto-hint line, together with processing messages, keyboard shortcuts, and instructions for accessing other blocks, windows or forms.

  16. Error estimates and specification parameters for functional renormalization

    SciTech Connect (OSTI)

    Schnoerr, David [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]; Boettcher, Igor, E-mail: I.Boettcher@thphys.uni-heidelberg.de [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]; Pawlowski, Jan M. [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung mbH, D-64291 Darmstadt (Germany)]; Wetterich, Christof [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]

    2013-07-15T23:59:59.000Z

    We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximated solutions by means of truncations do not only depend on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency independent cutoff function.

  17. JLab SRF Cavity Fabrication Errors, Consequences and Lessons Learned

    SciTech Connect (OSTI)

    Frank Marhauser

    2011-09-01T23:59:59.000Z

    Today, elliptical superconducting RF (SRF) cavities are preferably made from deep-drawn niobium sheets as pursued at Jefferson Laboratory (JLab). The fabrication of a cavity incorporates various cavity cell machining, trimming and electron beam welding (EBW) steps as well as surface chemistry that add to forming errors creating geometrical deviations of the cavity shape from its design. An analysis of in-house built cavities over the last years revealed significant errors in cavity production. Past fabrication flaws are described and lessons learned applied successfully to the most recent in-house series production of multi-cell cavities.

  18. Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds

    SciTech Connect (OSTI)

    Guth, Larry, E-mail: lguth@math.mit.edu [Department of Mathematics, MIT, Cambridge, Massachusetts 02139 (United States); Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il [Institute of Mathematics, Hebrew University, Jerusalem 91904 (Israel)

    2014-08-15T23:59:59.000Z

    Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n{sup ε}. Their rate is evaluated via Euler characteristic arguments and their distance using Z{sub 2}-systolic geometry. This construction answers a question of Zémor [“On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction,” in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259–273], who asked whether homological codes with such parameters could exist at all.

  19. Full protection of superconducting qubit systems from coupling errors

    E-Print Network [OSTI]

    M. J. Storcz; J. Vala; K. R. Brown; J. Kempe; F. K. Wilhelm; K. B. Whaley

    2005-08-09T23:59:59.000Z

    Solid state qubits realized in superconducting circuits are potentially extremely scalable. However, strong decoherence may be transferred to the qubits by various elements of the circuits that couple individual qubits, particularly when coupling is implemented over long distances. We propose here an encoding that provides full protection against errors originating from these coupling elements, for a chain of superconducting qubits with a nearest neighbor anisotropic XY-interaction. The encoding is also seen to provide partial protection against errors deriving from general electronic noise.

  20. Laser Phase Errors in Seeded Free Electron Lasers

    SciTech Connect (OSTI)

    Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC

    2012-04-17T23:59:59.000Z

    Harmonic seeding of free electron lasers has attracted significant attention as a method for producing transform-limited pulses in the soft x-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality and impede production of transform-limited pulses. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.

  1. Correctable noise of Quantum Error Correcting Codes under adaptive concatenation

    E-Print Network [OSTI]

    Jesse Fern

    2008-02-27T23:59:59.000Z

    We examine the transformation of noise under a quantum error correcting code (QECC) concatenated repeatedly with itself, by analyzing the effects of a quantum channel after each level of concatenation using recovery operators that are optimally adapted to use error syndrome information from the previous levels of the code. We use the Shannon entropy of these channels to estimate the thresholds of correctable noise for QECCs and find considerable improvements under this adaptive concatenation. Similar methods could be used to increase quantum fault tolerant thresholds.

  2. 3 - DJ : sampling as design

    E-Print Network [OSTI]

    Patel, Sayjel Vijay

    2015-01-01T23:59:59.000Z

    3D Sampling is introduced as a new spatial craft that can be applied to architectural design, akin to how sampling is applied in the field of electronic music. Through the development of 3-DJ, a prototype design software, ...

  3. Sampling for Bacteria in Wells

    E-Print Network [OSTI]

    Lesikar, Bruce J.

    2001-11-15T23:59:59.000Z

    Sampling for Bacteria in Wells E-126 11/01. Water samples for bacteria tests must always be collected in a sterile container. The procedure for collecting a water sample is as follows: 1. Obtain a sterile container from a Health Department...

  4. ON ADAPTIVE SAMPLING Philippe Flajolet

    E-Print Network [OSTI]

    Flajolet, Philippe

    We analyze the storage/accuracy trade-off of an adaptive sampling algorithm due to Wegman that makes it possible to estimate, with bounded memory, the number of distinct elements in a large collection of data. Wegman [11] proposed this adaptive sampling scheme as an interesting alternative solution to that problem.
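
    A minimal sketch of the kind of adaptive sampling analyzed here, assuming a uniform hash of each element: keep only elements whose hash survives a filter that is tightened (depth incremented) whenever the sample overflows its capacity, then scale the surviving sample size back up. The hash function and capacity below are illustrative choices.

        # Adaptive sampling estimate of the number of distinct elements.
        import hashlib

        def adaptive_distinct_count(stream, capacity=64):
            depth, sample = 0, set()
            for item in stream:
                h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")
                if h % (1 << depth) == 0:          # survives the current filter
                    sample.add(h)
                    while len(sample) > capacity:  # overflow: halve the kept fraction
                        depth += 1
                        sample = {x for x in sample if x % (1 << depth) == 0}
            return len(sample) * (1 << depth)

        data = [i % 10000 for i in range(100000)]  # 10,000 distinct values
        print(adaptive_distinct_count(data))       # roughly 10,000

    The storage/accuracy trade-off is visible directly in `capacity`: the relative error shrinks roughly like the inverse square root of the retained sample size.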

  5. Spectral Thompson Sampling Tomas Kocak

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Spectral Thompson Sampling (Tomáš Kocák, Michal Valko; SequeL team, INRIA Lille - Nord Europe, France). Thompson Sampling (TS) has attracted a lot of interest due to its good empirical performance. We show that our algorithm is competitive on both synthetic and real-world data.
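
    For context, the basic Bernoulli Thompson Sampling loop that the spectral variant builds on fits in a few lines (the spectral algorithm instead samples a weight vector in a graph-Laplacian eigenbasis, which is not reproduced here); the arm payoffs and horizon are invented.

        # Plain Bernoulli Thompson Sampling with Beta(1, 1) priors.
        import numpy as np

        rng = np.random.default_rng(1)
        true_means = [0.3, 0.5, 0.7]           # hypothetical arm payoffs
        wins = np.ones(3)
        losses = np.ones(3)

        for t in range(5000):
            theta = rng.beta(wins, losses)     # one posterior draw per arm
            arm = int(np.argmax(theta))        # play the most promising draw
            reward = rng.random() < true_means[arm]
            wins[arm] += reward
            losses[arm] += 1 - reward

        print(wins / (wins + losses))          # posterior means concentrate on arm 2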

  6. Technical bases and guidance for the use of composite soil sampling for demonstrating compliance with radiological release criteria

    SciTech Connect (OSTI)

    Vitkus, Timothy J. [Oak Ridge Institute for Science and Education, Oak Ridge, TN (United States). Independent Environmental Assessment and Verification Program

    2012-04-24T23:59:59.000Z

    This guidance provides information on methodologies and the technical bases that licensees should consider for incorporating composite sampling strategies into final status survey (FSS) plans. In addition, this guidance also includes appropriate uses of composite sampling for generating the data for other decommissioning site investigations such as characterization or other preliminary site investigations.

  7. Soft Error Modeling and Protection for Sequential Elements Hossein Asadi and Mehdi B. Tahoori

    E-Print Network [OSTI]

    Examines the contribution of sequential elements to the system-level soft error rate. The number of clock cycles required for an error in a bistable to be propagated to system outputs is used to measure the vulnerability of bistables to soft errors. Soft errors have become a main reliability concern during the lifetime operation of digital systems.

  8. Distinguishing congestion and error losses: an ECN/ELN based scheme

    E-Print Network [OSTI]

    Kamakshisundaram, Raguram

    2001-01-01T23:59:59.000Z

    On links with high error rates, like wireless links, packets are lost more due to error than due to congestion. But TCP does not differentiate between error and congestion losses and hence reduces the sending rate for losses due to error also, which unnecessarily reduces throughput.

  9. Designing Automation to Reduce Operator Errors Nancy G. Leveson

    E-Print Network [OSTI]

    Leveson, Nancy

    Nancy G. Leveson (Computer Science and Engineering, University of Washington) and Everett Palmer (NASA Ames Research Center). Advanced automation has been accompanied by mode-related problems [SW95]. After studying accidents and incidents in the new, highly automated aircraft...

  10. Measurement Errors in Visual Servoing V. Kyrki ,1

    E-Print Network [OSTI]

    Kragic, Danica

    Visual feedback for closed-loop control of robot motion, termed visual servoing, has received a significant amount of attention. The procedures of camera calibration have improved enormously, yet the modeling of the error function still has a major effect on the robot's trajectory and its uncertainty.

  11. Energy efficiency of error correction for wireless communication

    E-Print Network [OSTI]

    Havinga, Paul J.M.

    Error control is an important issue for mobile computing systems. This includes energy spent in the physical radio transmission as well as the energy of redundancy computation. We will show that the computational cost...

  12. Effects of errors in the solar radius on helioseismic inferences

    E-Print Network [OSTI]

    Sarbani Basu

    1997-12-09T23:59:59.000Z

    Frequencies of intermediate-degree f-modes of the Sun seem to indicate that the solar radius is smaller than what is normally used in constructing solar models. We investigate the possible consequences of an error in radius on results for solar structure obtained using helioseismic inversions. It is shown that solar sound speed will be overestimated if oscillation frequencies are inverted using reference models with a larger radius. Using solar models with radius of 695.78 Mm and new data sets, the base of the solar convection zone is estimated to be at radial distance of $0.7135\\pm 0.0005$ of the solar radius. The helium abundance in the convection zone as determined using models with OPAL equation of state is $0.248\\pm 0.001$, where the errors reflect the estimated systematic errors in the calculation, the statistical errors being much smaller. Assuming that the OPAL opacities used in the construction of the solar models are correct, the surface $Z/X$ is estimated to be $0.0245\\pm 0.0006$.

  13. Error field and magnetic diagnostic modeling for W7-X

    SciTech Connect (OSTI)

    Lazerson, Sam A. [PPPL]; Gates, David A. [PPPL]; Neilson, George H. [PPPL]; Otte, M.; Bozhenkov, S.; Pedersen, T. S.; Geiger, J.; Lore, J.

    2014-07-01T23:59:59.000Z

    The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady state (30 minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign as bootstrap current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.

  14. Two infinite families of nonadditive quantum error-correcting codes

    E-Print Network [OSTI]

    Sixia Yu; Qing Chen; C. H. Oh

    2009-01-14T23:59:59.000Z

    We construct explicitly two infinite families of genuine nonadditive 1-error correcting quantum codes and prove that their coding subspaces are 50% larger than those of the optimal stabilizer codes of the same parameters via the linear programming bound. All these nonadditive codes can be characterized by a stabilizer-like structure and thus their encoding circuits can be designed in a straightforward manner.

  15. Threshold error rates for the toric and surface codes

    E-Print Network [OSTI]

    D. S. Wang; A. G. Fowler; A. M. Stephens; L. C. L. Hollenberg

    2009-05-05T23:59:59.000Z

    The surface code scheme for quantum computation features a 2d array of nearest-neighbor coupled qubits yet claims a threshold error rate approaching 1% (New J. Phys. 9:199, 2007). This result was obtained for the toric code, from which the surface code is derived, and surpasses all other known codes restricted to 2d nearest-neighbor architectures by several orders of magnitude. We describe in detail an error correction procedure for the toric and surface codes, which is based on polynomial-time graph matching techniques and is efficiently implementable as the classical feed-forward processing step in a real quantum computer. By direct simulation of this error correction scheme, we determine the threshold error rates for the two codes (differing only in their boundary conditions) for both ideal and non-ideal syndrome extraction scenarios. We verify that the toric code has an asymptotic threshold of p = 15.5% under ideal syndrome extraction, and p = 7.8 × 10^-3 for the non-ideal case, in agreement with prior work. Simulations of the surface code indicate that the threshold is close to that of the toric code.

  16. RESIDUAL TYPE A POSTERIORI ERROR ESTIMATES FOR ELLIPTIC OBSTACLE PROBLEMS

    E-Print Network [OSTI]

    Nochetto, Ricardo H.

    Extensions to double obstacle problems are briefly discussed. Key words: a posteriori error estimates, residual. The obstacle $\psi$ satisfies $\psi \le 0$ on $\partial\Omega$, and $K$ is the convex set of admissible displacements $K := \{ v \in H^1_0(\Omega) : v \ge \psi \text{ a.e. in } \Omega \}$.

  17. Multilayer Perceptron Error Surfaces: Visualization, Structure and Modelling

    E-Print Network [OSTI]

    Gallagher, Marcus

    This is commonly formulated as a multivariate non-linear optimization problem over a very high-dimensional space. Standard techniques of analysis are not well-suited to this problem, and visualizing and describing the error surface are addressed through three related methods. Firstly, Principal Component Analysis (PCA) is proposed as a method...

  18. Multi-layer Perceptron Error Surfaces: Visualization, Structure and Modelling

    E-Print Network [OSTI]

    Gallagher, Marcus

    This is commonly formulated as a multivariate non-linear optimization problem over a very high-dimensional space. Standard techniques of analysis are not well-suited to this problem, and visualizing and describing the error surface are addressed through three related methods. Firstly, Principal Component Analysis (PCA) is proposed as a method...

  19. Analysis of possible systematic errors in the Oslo method

    E-Print Network [OSTI]

    A. C. Larsen; M. Guttormsen; M. Krticka; E. Betak; A. Bürger; A. Görgen; H. T. Nyhus; J. Rekstad; A. Schiller; S. Siem; H. K. Toft; G. M. Tveten; A. V. Voinov; K. Wikan

    2012-11-27T23:59:59.000Z

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of level density and gamma-ray transmission coefficient from a set of particle-gamma coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  20. Flexible Error Protection for Energy Efficient Reliable Architectures Timothy Miller

    E-Print Network [OSTI]

    Xuan, Dong

    Timothy Miller and colleagues, Department of Computer Science and Engineering, The Ohio State University. To deal with these competing trends, energy-efficient solutions are needed to deal with reliability...

  1. Considering Workload Input Variations in Error Coverage Estimation

    E-Print Network [OSTI]

    Karlsson, Johan

    Variations in the workload input cause different parts of the workload code to be executed a different number of times. The effects of such variations when estimating error detection coverage using fault injection are investigated, and a method for predicting the coverage for one input sequence based on results from fault injection experiments with another input sequence is presented.

  2. Data aware, Low cost Error correction for Wireless Sensor Networks

    E-Print Network [OSTI]

    California at San Diego, University of

    One of the key challenges in the adoption and deployment of wireless networked sensing applications is ensuring reliable sensor data collection. A wireless sensor network is inherently vulnerable to different sources of unreliability.

  3. Error Minimization Methods in Biproportional Apportionment Federica Ricca Andrea Scozzari

    E-Print Network [OSTI]

    Serafini, Paolo

    We provide a class of methods for biproportional apportionment characterized by an "error minimization" approach, as an alternative to the classical axiomatic approach introduced by Balinski and Demange in 1989. The problem has been studied both in the apportionment and in the statistical literature; a milestone theoretical setting was given by Balinski and Demange in 1989 [5, 6].

  4. DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR

    E-Print Network [OSTI]

    Sambridge, Malcolm

    Discrimination and classification of UXO using magnetometry: inversion and error analysis. In one test case the error bounds for the different solutions did not even overlap. To address ambiguity and possible remanent magnetization, the recovered dipole moment is compared to a library...

  5. Error Exponent for Discrete Memoryless Multiple-Access Channels

    E-Print Network [OSTI]

    Anastasopoulos, Achilleas

    A dissertation on error exponents for discrete memoryless multiple-access channels, by Ali Nazari (2011).

  6. Time reversal in thermoacoustic tomography - an error estimate

    E-Print Network [OSTI]

    Hristova, Yulia

    2008-01-01T23:59:59.000Z

    The time reversal method in thermoacoustic tomography is used for approximating the initial pressure inside a biological object using measurements of the pressure wave made outside the object. This article presents error estimates for the time reversal method in the cases of variable, non-trapping sound speeds.

  7. IPASS: Error Tolerant NMR Backbone Resonance Assignment by Linear Programming

    E-Print Network [OSTI]

    Waterloo, University of

    IPASS is proposed as a novel integer linear programming (ILP) based assignment method that works from automatically picked peaks. Although a variety of assignment approaches have been developed, none works well on noisy data...

  8. Research Article Preschool Speech Error Patterns Predict Articulation

    E-Print Network [OSTI]

    Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations and may predict school-age clinical outcomes. The study tested whether preschool speech error patterns in children with histories of speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later.

  9. Edinburgh Research Explorer Prevalence and Causes of Prescribing Errors

    E-Print Network [OSTI]

    Hall, Christopher

    Prevalence and Causes of Prescribing Errors: The PRescribing Outcomes for Trainee Doctors Engaged in Clinical Training (PROTECT) Study.

  10. Achievable Error Exponents for the Private Fingerprinting Game

    E-Print Network [OSTI]

    Merhav, Neri

    An attacker creates a forgery of the data while aiming at erasing the fingerprints in order not to be detected. We present and analyze a game-theoretic model of private fingerprinting systems in the presence of such attacks (Anelia Somekh-Baruch and Neri Merhav).

  11. RESOLVE Upgrades for on Line Lattice Error Analysis

    SciTech Connect (OSTI)

    Lee, M.; Corbett, J.; White, G.; /SLAC; Zambre, Y.; /Unlisted

    2011-08-25T23:59:59.000Z

    We have increased the speed and versatility of the orbit analysis process by adding a command file, or 'script' language, to RESOLVE. This command file feature enables us to automate data analysis procedures to detect lattice errors. We describe the RESOLVE command file and present examples of practical applications.

  12. Stereoscopic Light Stripe Scanning: Interference Rejection, Error Minimization and Calibration

    E-Print Network [OSTI]

    This paper addresses the problem of rejecting interference due to secondary specular reflections and cross-talk; earlier systems also suffer from acquisition delay, lack of error recovery, and incorrect modelling of measurement noise. Secondary reflections, edges and textures may have a stripe-like appearance, and cross-talk can...

  13. Error Control Based Model Reduction for Parameter Optimization of Elliptic

    E-Print Network [OSTI]

    Motivated by technical devices that rely on multiscale processes, such as fuel cells or batteries, we consider error-control based model reduction for parameter optimization of elliptic homogenization problems with macroscopic optimization functionals and microscopic material parameters.

  14. Development of an Expert System for Classification of Medical Errors

    E-Print Network [OSTI]

    Kopec, Danny

    A report published by the Institute of Medicine (IOM) indicated that between 44,000 and 98,000 unnecessary deaths per year occur in hospitals in the United States. There has been considerable speculation that these figures are either overestimated or underestimated; whatever the exact count in the IOM report, what is of importance is that the number of deaths caused by such errors is large.

  15. Odometry Error Covariance Estimation for Two Wheel Robot Vehicles

    E-Print Network [OSTI]

    Robotics Research Centre, Department of Electrical and Computer Systems Engineering, Monash University, Technical Report MECSE-95-1, 1995. ABSTRACT: This technical report develops a simple statistical error model of the odometry of a two wheel robot vehicle. Other paths can be composed of short segments of constant-curvature arcs without great loss of accuracy.

  16. EMERGING MODALITIES FOR SOIL CARBON ANALYSIS: SAMPLING STATISTICS AND ECONOMICS WORKSHOP.

    SciTech Connect (OSTI)

    Wielopolski, L.

    2006-04-01T23:59:59.000Z

    The workshop's main objectives are (1) to present the emerging modalities for analyzing carbon in soil, (2) to assess their error propagation, (3) to recommend new protocols and sampling strategies for the new instrumentation, and, (4) to compare the costs of the new methods with traditional chemical ones.

  17. Surface photometry of a sample of elliptical and S0 galaxies

    SciTech Connect (OSTI)

    De Carvalho, R.R.; Da Costa, L.N.; Djorgovski, S. (Observatorio Nacional do Brasil, Sao Cristovao (Brazil); California Institute of Technology, Pasadena (United States))

    1991-08-01T23:59:59.000Z

    The results are reported of surface photometry of 38 early-type galaxies, located mainly in the Fornax Cluster. Detailed comparisons with previously published work are given along with internal and external error estimates for all quantities, and some serious systematic discrepancies in the older aperture photometry of some of the galaxies in the present sample are pointed out. 15 refs.

  18. Stochastic inversion in calculating the spectrum of signals with uneven sampling

    E-Print Network [OSTI]

    Ulich, Thomas

    Here sampling is considered as a measurement of the signal value, and the corresponding term is the true measurement error. The unknowns, measurements and measurement errors are collected into column vectors, e.g. $X = (b_1, b_2, \ldots, b_m, a_2, a_3, \ldots)^T$ for the unknown coefficients of the frequency components. When the coefficients are known, the power of frequency component $k$ is given by $P_k = a_k^2 + b_k^2$.
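
    The least-squares reading of this setup fits sine and cosine amplitudes of trial frequencies to the unevenly sampled data and reports the power of each component. A small self-contained illustration, with invented sample times and a single true sinusoid:

        # Least-squares spectral estimate on unevenly sampled data.
        import numpy as np

        rng = np.random.default_rng(2)
        t = np.sort(rng.uniform(0.0, 100.0, 300))        # uneven sample times
        y = 2.0 * np.sin(2 * np.pi * 0.1 * t) + rng.normal(0.0, 0.5, t.size)

        freqs = np.linspace(0.01, 0.5, 50)               # trial frequencies
        cols = [np.ones_like(t)]
        for f in freqs:
            cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
        A = np.column_stack(cols)

        coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # solve for all a_k, b_k
        a, b = coef[1::2], coef[2::2]                    # cosine / sine amplitudes
        P = a**2 + b**2                                  # P_k = a_k^2 + b_k^2
        print(freqs[np.argmax(P)])                       # close to the true 0.1

    Unlike an FFT, nothing here requires equally spaced samples; the price is a dense least-squares problem, which is where the stochastic-inversion machinery of the paper enters.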

  19. Radiochemical Analysis Methodology for uranium Depletion Measurements

    SciTech Connect (OSTI)

    Scatena-Wachel DE

    2007-01-09T23:59:59.000Z

    This report provides sufficient material for a test sponsor with little or no radiochemistry background to understand and follow physics irradiation test program execution. Most irradiation test programs employ similar techniques and the general details provided here can be applied to the analysis of other irradiated sample types. Aspects of program management directly affecting analysis quality are also provided. This report is not an in-depth treatise on the vast field of radiochemical analysis techniques and related topics such as quality control. Instrumental technology is a very fast growing field and dramatic improvements are made each year, thus the instrumentation described in this report is no longer cutting edge technology. Much of the background material is still applicable and useful for the analysis of older experiments and also for subcontractors who still retain the older instrumentation.

  20. Methodology for performing measurements to release material from radiological control

    SciTech Connect (OSTI)

    Durham, J.S. [Pacific Northwest Lab., Richland, WA (United States); Gardner, D.L. [Westinghouse Hanford Co., Richland, WA (United States)

    1993-09-01T23:59:59.000Z

    This report describes the existing and proposed methodologies for performing measurements of contamination prior to releasing material for uncontrolled use at the Hanford Site. The technical basis for the proposed methodology, a modification to the existing contamination survey protocol, is also described. The modified methodology, which includes a large-area swipe followed by a statistical survey, can be used to survey material that is unlikely to be contaminated for release to controlled and uncontrolled areas. The material evaluation procedure that is used to determine the likelihood of contamination is also described.

  1. Quantum computing with nearest neighbor interactions and error rates over 1%

    E-Print Network [OSTI]

    David S. Wang; Austin G. Fowler; Lloyd C. L. Hollenberg

    2010-09-20T23:59:59.000Z

    Large-scale quantum computation will only be achieved if experimentally implementable quantum error correction procedures are devised that can tolerate experimentally achievable error rates. We describe a quantum error correction procedure that requires only a 2-D square lattice of qubits that can interact with their nearest neighbors, yet can tolerate quantum gate error rates over 1%. The precise maximum tolerable error rate depends on the error model, and we calculate values in the range 1.1-1.4% for various physically reasonable models. Even the lowest value represents the highest threshold error rate calculated to date in a geometrically constrained setting, and a 50% improvement over the previous record.

  2. The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications

    SciTech Connect (OSTI)

    Foo, Jasmine; Wan Xiaoliang [Division of Applied Mathematics, Brown University, 182 George Street, Box F, Providence, RI 02912 (United States); Karniadakis, George Em [Division of Applied Mathematics, Brown University, 182 George Street, Box F, Providence, RI 02912 (United States)], E-mail: gk@dam.brown.edu

    2008-11-20T23:59:59.000Z

    Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L{sup 2} error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
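
    A toy one-dimensional comparison (not the ME-PCM algorithm itself) shows why collocation grids based on quadrature rules pay off against Monte Carlo when the solution depends smoothly on the random input; the integrand and sample counts are invented.

        # Mean of u(xi) = 1/(1 + xi^2), xi ~ Uniform(-1, 1):
        # 5-point Gauss-Legendre collocation versus 100 Monte Carlo samples.
        import numpy as np

        u = lambda xi: 1.0 / (1.0 + xi**2)
        exact = np.pi / 4.0                          # (1/2) * integral of u over [-1, 1]

        nodes, weights = np.polynomial.legendre.leggauss(5)
        colloc = 0.5 * np.sum(weights * u(nodes))    # 0.5 is the uniform density

        rng = np.random.default_rng(3)
        mc = u(rng.uniform(-1.0, 1.0, 100)).mean()

        print(abs(colloc - exact), abs(mc - exact))  # collocation error is far smaller

    In many dimensions the same comparison drives the choice between sparse tensor grids and sampling, which is exactly the regime the ME-PCM results above quantify.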

  3. Multidisciplinary framework for human reliability analysis with an application to errors of commission and dependencies

    SciTech Connect (OSTI)

    Barriere, M.T.; Luckas, W.J. [Brookhaven National Lab., Upton, NY (United States); Wreathall, J. [Wreathall (John) and Co., Dublin, OH (United States); Cooper, S.E. [Science Applications International Corp., Reston, VA (United States); Bley, D.C. [PLG, Inc., Newport Beach, CA (United States); Ramey-Smith, A. [Nuclear Regulatory Commission, Washington, DC (United States). Div. of Systems Technology

    1995-08-01T23:59:59.000Z

    Since the early 1970s, human reliability analysis (HRA) has been considered to be an integral part of probabilistic risk assessments (PRAs). Nuclear power plant (NPP) events, from Three Mile Island through the mid-1980s, showed the importance of human performance to NPP risk. Recent events demonstrate that human performance continues to be a dominant source of risk. In light of these observations, the current limitations of existing HRA approaches become apparent when the role of humans is examined explicitly in the context of real NPP events. The development of new or improved HRA methodologies to more realistically represent human performance is recognized by the Nuclear Regulatory Commission (NRC) as a necessary means to increase the utility of PRAs. To accomplish this objective, an Improved HRA Project, sponsored by the NRC's Office of Nuclear Regulatory Research (RES), was initiated in late February, 1992, at Brookhaven National Laboratory (BNL) to develop an improved method for HRA that more realistically assesses the human contribution to plant risk and can be fully integrated with PRA. This report describes the research efforts including the development of a multidisciplinary HRA framework, the characterization and representation of errors of commission, and an approach for addressing human dependencies. The implications of the research and necessary requirements for further development also are discussed.

  4. Numerical study of the effect of normalised window size, sampling frequency, and noise level on short time Fourier transform analysis

    SciTech Connect (OSTI)

    Ota, T. A. [AWE, Aldermaston, Reading, Berkshire RG7 4PR (United Kingdom)] [AWE, Aldermaston, Reading, Berkshire RG7 4PR (United Kingdom)

    2013-10-15T23:59:59.000Z

    Photonic Doppler velocimetry, also known as heterodyne velocimetry, is a widely used optical technique that requires the analysis of frequency modulated signals. This paper describes an investigation into the errors of short time Fourier transform analysis. The number of variables requiring investigation was reduced by means of an equivalence principle. Error predictions, as the number of cycles, samples per cycle, noise level, and window type were varied, are presented. The results were found to be in good agreement with analytical models.
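
    A small numerical probe in the spirit of this study, with invented signal parameters: compute the STFT of a noisy chirp, take the ridge (peak) frequency in each window, and watch the error change with the window size.

        # STFT ridge-frequency error versus window length for a noisy chirp.
        import numpy as np
        from scipy.signal import stft

        fs = 10000.0                                   # assumed sampling frequency, Hz
        t = np.arange(0.0, 1.0, 1.0 / fs)
        f_true = 1000.0 + 500.0 * t                    # linear frequency sweep
        phase = 2 * np.pi * np.cumsum(f_true) / fs
        sig = np.cos(phase) + 0.1 * np.random.default_rng(4).normal(size=t.size)

        for nperseg in (64, 256, 1024):
            f, tau, Z = stft(sig, fs=fs, nperseg=nperseg)
            ridge = f[np.abs(Z).argmax(axis=0)]        # peak frequency per window
            truth = np.interp(tau, t, f_true)
            rms = np.sqrt(np.mean((ridge - truth) ** 2))
            print(nperseg, rms)

    Short windows suffer coarse frequency quantisation while long windows smear the sweep, reproducing in miniature the window-size trade-off the paper maps out.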

  5. Tracking granules at the Sun's surface and reconstructing velocity fields. II. Error analysis

    E-Print Network [OSTI]

    R. Tkaczuk; M. Rieutord; N. Meunier; T. Roudier

    2007-07-13T23:59:59.000Z

    The determination of horizontal velocity fields at the solar surface is crucial to understanding the dynamics and magnetism of the convection zone of the sun. These measurements can be done by tracking granules. Tracking granules from ground-based observations, however, suffers from the Earth's atmospheric turbulence, which induces image distortion. The focus of this paper is to evaluate the influence of this noise on the maps of velocity fields. We use the coherent structure tracking algorithm developed recently and apply it to two independent series of images that contain the same solar signal. We first show that a $k$-$\omega$ filtering of the time series of images is highly recommended as a pre-processing step to decrease the noise, while, in contrast, using destretching should be avoided. We also demonstrate that the lifetime of granules has a strong influence on the error bars of velocities and that a threshold on the lifetime should be imposed to minimize errors. Finally, although solar flow patterns are easily recognizable and image quality is very good, it turns out that a time sampling of two images every 21 s is not frequent enough, since image distortion still pollutes velocity fields at a 30% level on the 2500 km scale, i.e. the scale on which granules start to behave like passive scalars. The coherent structure tracking algorithm is a useful tool for noise control on the measurement of surface horizontal solar velocity fields when at least two independent series are available.

  6. Two-Sample Testing in High Dimension and a Smooth Block Bootstrap for Time Series

    E-Print Network [OSTI]

    Gregory, Karl Bruce

    2014-06-12T23:59:59.000Z

    This document contains three sections. The first two present new methods for two-sample testing where there are many variables of interest and the third presents a new methodology for time series bootstrapping. In the first section we develop a...
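
    As a sketch of the starting point for the time-series part, here is the ordinary (non-smooth) moving block bootstrap for the standard error of a dependent series' mean; the smooth variant developed in the thesis additionally perturbs the resampled values and is not shown. The block length and AR(1) test series are invented.

        # Moving block bootstrap standard error for the mean of a dependent series.
        import numpy as np

        def block_bootstrap_se(x, block_len=20, n_boot=2000, seed=None):
            rng = np.random.default_rng(seed)
            n = len(x)
            n_blocks = int(np.ceil(n / block_len))
            means = np.empty(n_boot)
            for b in range(n_boot):
                starts = rng.integers(0, n - block_len + 1, n_blocks)
                resample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
                means[b] = resample.mean()
            return means.std(ddof=1)

        # AR(1) series: an iid bootstrap would understate this uncertainty.
        rng = np.random.default_rng(5)
        x = np.zeros(1000)
        for i in range(1, 1000):
            x[i] = 0.7 * x[i - 1] + rng.normal()
        print(block_bootstrap_se(x))

    Resampling whole blocks preserves the short-range dependence that an observation-by-observation bootstrap would destroy, which is why block (and smooth block) schemes are the natural tool here.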

  7. attach packaging methodologies: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Thermal Design Methodology for Low Flow Rate Single-Phase and Two-Phase Micro-Channel Heat Sinks, by Scott W..., developed for both single-phase and two-phase micro-channel heat sinks under a...

  8. A methodology to assess cost implications of automotive customization

    E-Print Network [OSTI]

    Fournier, Laëtitia

    2005-01-01T23:59:59.000Z

    This thesis focuses on determining the cost of customization for different components or groups of components of a car. It offers a methodology to estimate the manufacturing cost of a complex system such as a car. This ...

  9. A Methodology to Measure Retrofit Energy Savings in Commercial Buildings 

    E-Print Network [OSTI]

    Kissock, John Kelly

    2008-01-16T23:59:59.000Z

    . This dissertation develops a methodology to measure retrofit energy savings and the uncertainty of the savings in commercial buildings. The functional forms of empirical models of cooling and heating energy use in commercial buildings are derived from an engineering...

  10. Robotic Airship Trajectory Tracking Control Using a Backstepping Methodology

    E-Print Network [OSTI]

    Papadopoulos, Evangelos

    Robotic Airship Trajectory Tracking Control Using a Backstepping Methodology (Filoktimon Repoulias): a closed-loop trajectory tracking controller for an underactuated robotic airship having 6 degrees of freedom ... and the controller corrects the vehicle's trajectory successfully too. I. INTRODUCTION: Robotic (autonomous) airships...

  11. Protein MAS NMR methodology and structural analysis of protein assemblies

    E-Print Network [OSTI]

    Bayro, Marvin J

    2010-01-01T23:59:59.000Z

    Methodological developments and applications of solid-state magic-angle spinning nuclear magnetic resonance (MAS NMR) spectroscopy, with particular emphasis on the analysis of protein structure, are described in this thesis. ...

  12. Hydrogen Goal-Setting Methodologies Report to Congress

    Fuel Cell Technologies Publication and Product Library (EERE)

    DOE's Hydrogen Goal-Setting Methodologies Report to Congress summarizes the processes used to set Hydrogen Program goals and milestones. Published in August 2006, it fulfills the requirement under se

  13. Spent fuel management fee methodology and computer code user's manual.

    SciTech Connect (OSTI)

    Engel, R.L.; White, M.K.

    1982-01-01T23:59:59.000Z

    The methodology and computer model described here were developed to analyze the cash flows for the federal government taking title to and managing spent nuclear fuel. The methodology has been used by the US Department of Energy (DOE) to estimate the spent fuel disposal fee that will provide full cost recovery. Although the methodology was designed to analyze interim storage followed by spent fuel disposal, it could be used to calculate a fee for reprocessing spent fuel and disposing of the waste. The methodology consists of two phases. The first phase estimates government expenditures for spent fuel management. The second phase determines the fees that will result in revenues such that the government attains full cost recovery assuming various revenue collection philosophies. These two phases are discussed in detail in subsequent sections of this report. Each of the two phases constitute a computer module, called SPADE (SPent fuel Analysis and Disposal Economics) and FEAN (FEe ANalysis), respectively.
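    As a toy illustration of the second (fee analysis) phase, the sketch below computes a fee that achieves full cost recovery by equating the present value of revenues with the present value of expenditures. The cash flows, fuel receipts, and discount rate are made-up assumptions, not SPADE/FEAN data.

    ```python
    # A minimal sketch, assuming annual cash flows and a single flat fee per kg.
    def full_cost_recovery_fee(expenditures, fuel_receipts, discount_rate):
        """Fee per kg of spent fuel that gives zero net present value.

        expenditures  -- government outlays per year ($), assumed
        fuel_receipts -- spent fuel received per year (kg), assumed
        """
        pv_cost = sum(c / (1 + discount_rate) ** y
                      for y, c in enumerate(expenditures))
        pv_fuel = sum(q / (1 + discount_rate) ** y
                      for y, q in enumerate(fuel_receipts))
        return pv_cost / pv_fuel

    fee = full_cost_recovery_fee(
        expenditures=[50e6, 120e6, 300e6, 300e6],    # $/yr, assumed
        fuel_receipts=[1.2e6, 1.5e6, 1.8e6, 2.0e6],  # kg/yr, assumed
        discount_rate=0.05)
    print("fee: $%.2f per kg" % fee)
    ```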

  14. PROJECT SELF-EVALUATION METHODOLOGY: THE HEALTHREATS PROJECT CASE STUDY

    E-Print Network [OSTI]

    Bohanec, Marko

    PROJECT SELF-EVALUATION METHODOLOGY: THE HEALTHREATS PROJECT CASE STUDY. Martin Znidarsic, Marko Bohanec (Slovenia); e-mail: martin.znidarsic@ijs.si; tel: +386 1 477 3366; fax: +386 1 477 3315. ABSTRACT: The paper...

  15. Transmission Cost Allocation Methodologies for Regional Transmission Organizations

    SciTech Connect (OSTI)

    Fink, S.; Rogers, J.; Porter, K.

    2010-07-01T23:59:59.000Z

    This report describes transmission cost allocation methodologies for transmission projects developed to maintain or enhance reliability, to interconnect new generators, or to access new resources and enhance competitive bulk power markets, otherwise known as economic transmission projects.

  16. AIAA 2001-1535 A SYMBOLIC METHODOLOGY FOR THE

    E-Print Network [OSTI]

    Patil, Mayuresh

    ... wind turbines, etc. Over the last decade, the advent of composites and the pursuit to build lighter ... is applied to a Horizontal-Axis Wind Turbine. The paper presents a new methodology for modeling...

  17. DOE 2009 Geothermal Risk Analysis: Methodology and Results (Presentation)

    SciTech Connect (OSTI)

    Young, K. R.; Augustine, C.; Anderson, A.

    2010-02-01T23:59:59.000Z

    This presentation summarizes the methodology and results for a probabilistic risk analysis of research, development, and demonstration work, primarily for enhanced geothermal systems (EGS), sponsored by the U.S. Department of Energy Geothermal Technologies Program.

  18. Average System Cost Methodology : Administrator's Record of Decision.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    1984-06-01T23:59:59.000Z

    Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, under which retail rate orders of regulatory agencies provide the primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of procedures for separating subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and the requirement that each exchanging utility file under the new methodology within 20 days of implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)

  19. Phenanthropiperidine Alkaloids: Methodology Development, Synthesis and Biological Evaluation

    E-Print Network [OSTI]

    Niphakis, Micah James

    2010-04-08T23:59:59.000Z

    This work is directed towards the development of safe phenanthropiperidines for the treatment of cancer. It focuses on synthetic methodologies that facilitate their preparation and biological studies to better understand ...

  20. Software Interoperability Tools: Standardized Capability-Profiling Methodology ISO16100

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Software Interoperability Tools: Standardized Capability-Profiling Methodology ISO16100. Michiko ..., qwang@seu.ac.jp. Abstract: The ISO 16100 series has been developed for manufacturing software ... for developing general software applications, including enterprise applications. In this paper, ISO 16100...

  1. accident risks methodology: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Design Basis Accident Radiological Assessment Calculational Methodology. CiteSeer Summary: submitted revised...

  2. Economic and Financial Methodology for South Texas Irrigation Projects – RGIDECON©

    E-Print Network [OSTI]

    Rister, M. Edward; Rogers, Callie S.; Lacewell, Ronald; Robinson, John; Ellis, John; Sturdivant, Allen

    ... agencies; Debbie Helstrom, Jeff Walker, and Nick Palacios. These engineers with the Texas Water Development Board (TWDB) have provided valuable feedback on the methodology and data, as well as insights on accommodating the requirements...

  3. A methodological approach to the complexity measurement of software designs

    E-Print Network [OSTI]

    Williams, Clay Edwin

    1990-01-01T23:59:59.000Z

    A Methodological Approach to the Complexity Measurement of Software Designs. A thesis by Clay Edwin Williams, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, December 1990. Major subject: Computer Science.

  4. A methodology of mathematical models with an application

    E-Print Network [OSTI]

    Wood, Richard Brian

    1972-01-01T23:59:59.000Z

    A Methodology of Mathematical Models with an Application. A thesis by Richard Brian Wood, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of Master of Science, December 1972. Major subject: Mathematics. ABSTRACT: A...

  5. Economic Methodology for South Texas Irrigation Projects - RGIDECON

    E-Print Network [OSTI]

    Ellis, John R.; Robinson, John R.C.; Sturdivant, Allen W.; Lacewell, Ronald D.; Rister, M. Edward

    ... a risk-free component for time preference, a risk premium, and an inflation premium (Rister et al. 1999). The relationship between these three components is considered multiplicative (Leatham; Hamilton), i.e., the overall...
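    A minimal numeric sketch of the multiplicative relationship described above, with illustrative (assumed) component values:

    ```python
    # Multiplicative composition of the discount rate components:
    # (1 + r) = (1 + i_time_pref) * (1 + i_risk) * (1 + i_inflation)
    time_pref, risk_prem, inflation = 0.02, 0.015, 0.03   # assumed values

    overall = (1 + time_pref) * (1 + risk_prem) * (1 + inflation) - 1
    additive = time_pref + risk_prem + inflation          # common approximation

    print("multiplicative: %.4f   additive approximation: %.4f"
          % (overall, additive))
    ```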

  6. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    SciTech Connect (OSTI)

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27T23:59:59.000Z

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the
    • number of samples required to achieve a specified confidence in characterization and clearance decisions
    • confidence in making characterization and clearance decisions for a specified number of samples
    for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations:
    1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0
    2. qualitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0
    3. quantitative data (e.g., contaminant concentrations expressed as CFU/cm²) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0
    4. quantitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0.
    For Situation 2, the hotspot sampling approach provides for stating with Z% confidence that a hotspot of specified shape and size with detectable contamination will be found. Also for Situation 2, the CJR approach provides for stating with X% confidence that at least Y% of the decision area does not contain detectable contamination. Forms of these statements for the other three situations are discussed in Section 2.2. Statistical methods that account for FNR > 0 currently only exist for the hotspot sampling approach with qualitative data (or quantitative data converted to qualitative data). This report documents the current status of methods and formulas for the hotspot and CJR sampling approaches. Limitations of these methods are identified. Extensions of the methods that are applicable when FNR = 0 to account for FNR > 0, or to address other limitations, will be documented in future revisions of this report if future funding supports the development of such extensions. For quantitative data, this report also presents statistical methods and formulas for
    1. quantifying the uncertainty in measured sample results
    2. estimating the true surface concentration corresponding to a surface sample
    3. quantifying the uncertainty of the estimate of the true surface concentration.
    All of the methods and formulas discussed in the report were applied to example situations to illustrate application of the methods and interpretation of the results.
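    As a simplified illustration of the clearance statement quoted above, the sketch below computes the textbook zero-failure sample size for purely random sampling with FNR = 0; the report's CJR method, which also credits judgment samples, is more involved. With n random samples all negative, one can state with X% confidence that at least Y% of the area is free of detectable contamination once Y^n <= 1 - X.

    ```python
    import math

    def n_random_samples(confidence_x, fraction_y):
        """Smallest n giving confidence_x that >= fraction_y of the area is
        clean, assuming all n randomly placed samples are negative and FNR = 0."""
        return math.ceil(math.log(1 - confidence_x) / math.log(fraction_y))

    # 95% confidence that at least 99% of the decision area is uncontaminated
    print(n_random_samples(0.95, 0.99))   # -> 299 samples
    ```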

  7. Acceptance sampling using judgmental and randomly selected samples

    SciTech Connect (OSTI)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01T23:59:59.000Z

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
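    The sketch below shows the simpler one-group Bayesian formulation that the paper treats as a special case: with a Beta(a, b) prior on the fraction p of acceptable items and n sampled items all found acceptable, the posterior is Beta(a + n, b). The prior parameters, sample count, and threshold are assumptions, and a large-population (binomial) approximation is used.

    ```python
    from scipy.stats import beta

    a, b, n = 1, 1, 120                  # uniform prior; 120 acceptable samples
    posterior = beta(a + n, b)           # Beta(121, 1) after n successes

    # Posterior probability that at least 97% of the population is acceptable
    print("Pr(p >= 0.97 | data) = %.3f" % posterior.sf(0.97))
    ```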

  8. Evaluation of Near Field Atmospheric Dispersion Around Nuclear Facilities Using a Lorentzian Distribution Methodology

    SciTech Connect (OSTI)

    Gavin Hawkley

    2014-12-01T23:59:59.000Z

    Abstract: Atmospheric dispersion modeling within the near field of a nuclear facility typically applies a building wake correction to the Gaussian plume model, whereby a point source is modeled as a plane source. The plane source results in greater near field dilution and reduces the far field effluent concentration. However, the correction does not account for the concentration profile within the near field. Receptors of interest, such as the maximally exposed individual, may exist within the near field and thus the realm of building wake effects. Furthermore, release parameters and displacement characteristics may be unknown, particularly during upset conditions. Therefore, emphasis is placed upon the need to analyze and estimate an enveloping concentration profile within the near field of a release. This investigation included the analysis of 64 air samples collected over 128 wk. Variables of importance were then derived from the measurement data, and a methodology was developed that allowed for the estimation of Lorentzian-based dispersion coefficients along the lateral axis of the near field recirculation cavity; the development of recirculation cavity boundaries; and conservative evaluation of the associated concentration profile. The results evaluated the effectiveness of the Lorentzian distribution methodology for estimating near field releases and emphasized the need to place air-monitoring stations appropriately for complete concentration characterization. Additionally, the importance of the sampling period and operational conditions were discussed to balance operational feedback and the reporting of public dose.
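    For reference, a Lorentzian (Cauchy-shaped) lateral profile of the general form used in such analyses is easy to evaluate; the sketch below is an illustration only, with an assumed peak concentration c0, half-width gamma, and receptor offsets rather than the paper's fitted parameters. Its heavier-than-Gaussian tails are what make it a conservative envelope.

    ```python
    import numpy as np

    def lorentzian_profile(y, c0, gamma, y0=0.0):
        """Concentration at lateral offset y from the plume centerline y0."""
        return c0 * gamma**2 / ((y - y0)**2 + gamma**2)

    y = np.linspace(-50, 50, 11)                       # receptor offsets, m (assumed)
    c = lorentzian_profile(y, c0=1.0e-9, gamma=12.0)   # assumed scale and units
    for yi, ci in zip(y, c):
        print("y = %+6.1f m   C/C0 = %.3f" % (yi, ci / 1.0e-9))
    ```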

  9. Sample Residential Program Term Sheet

    Broader source: Energy.gov [DOE]

    A sample for defining and elaborating on the specifics of a clean energy loan program. Author: U.S. Department of Energy

  10. IWTU Process Sample Analysis Report

    SciTech Connect (OSTI)

    Nick Soelberg

    2013-04-01T23:59:59.000Z

    CH2M-WG Idaho (CWI) requested that Battelle Energy Alliance (BEA) analyze various samples collected during June – August 2012 at the Integrated Waste Treatment Unit (IWTU). Samples of IWTU process materials were collected from various locations in the process. None of these samples were radioactive. These samples were collected and analyzed to provide more understanding of the compositions of various materials in the process during the time of the process shutdown that occurred on June 16, 2012, while the IWTU was in the process of nonradioactive startup.

  11. Probabilistic growth of large entangled states with low error accumulation

    E-Print Network [OSTI]

    Yuichiro Matsuzaki; Simon C Benjamin; Joseph Fitzsimons

    2009-08-03T23:59:59.000Z

    The creation of complex entangled states, resources that enable quantum computation, can be achieved via simple 'probabilistic' operations which are individually likely to fail. However, typical proposals exploiting this idea carry a severe overhead in terms of the accumulation of errors. Here we describe a method that can rapidly generate large entangled states with an error accumulation that depends only logarithmically on the failure probability. We find that the approach may be practical for success rates in the sub-10% range, while ultimately becoming infeasible at lower rates. The assumptions that we make, including parallelism and high connectivity, are appropriate for real systems including measurement-induced entanglement. This result therefore shows the feasibility of real devices based on such an approach.

  12. Method and system for reducing errors in vehicle weighing systems

    DOE Patents [OSTI]

    Hively, Lee M. (Philadelphia, TN); Abercrombie, Robert K. (Knoxville, TN)

    2010-08-24T23:59:59.000Z

    A method and system (10, 23) for determining vehicle weight to a precision of <0.1% uses a plurality of weight sensing elements (23) and a computer (10) for reading in weighing data for a vehicle (25), and produces a dataset representing the total weight of a vehicle via programming (40-53) that is executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing; (b) determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation from the weighing data; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% of the vehicle weight.
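    A minimal sketch (not the patented algorithm) of the idea behind steps (a)-(d): locate the dominant oscillatory mode in the weighing record, fit and subtract it, and repeat until the remaining oscillation is insignificant. The sampling rate, tolerance, and synthetic record are assumptions.

    ```python
    import numpy as np

    def strip_oscillations(w, fs, max_modes=5, tol=1.0):
        """Estimate static weight by iteratively removing oscillatory modes."""
        w0 = w.mean()
        r = w - w0
        t = np.arange(len(r)) / fs
        for _ in range(max_modes):
            spec = np.fft.rfft(r)
            k = np.abs(spec[1:]).argmax() + 1           # dominant nonzero mode
            f = k * fs / len(r)
            # least-squares fit of a sinusoid at frequency f
            A = np.column_stack([np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t)])
            coef, *_ = np.linalg.lstsq(A, r, rcond=None)
            fitted = A @ coef
            if fitted.std() < tol:                      # mode no longer significant
                break
            r = r - fitted
        return w0 + r.mean()

    fs = 200.0                                          # Hz (assumed)
    t = np.arange(0, 2, 1/fs)
    truth = 36000.0                                     # kg (assumed)
    w = truth + 150*np.sin(2*np.pi*3.1*t) + 40*np.sin(2*np.pi*11*t) \
              + np.random.normal(0, 5, t.size)
    print("estimate: %.1f kg (truth %.1f)" % (strip_oscillations(w, fs), truth))
    ```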

  13. On the Fourier Transform Approach to Quantum Error Control

    E-Print Network [OSTI]

    Hari Dilip Kumar

    2012-08-24T23:59:59.000Z

    Quantum codes are subspaces of the state space of a quantum system that are used to protect quantum information. Some common classes of quantum codes are stabilizer (or additive) codes, non-stabilizer (or non-additive) codes obtained from stabilizer codes, and Clifford codes. These are analyzed in a framework using the Fourier transform on finite groups, the finite group in question being a subgroup of the quantum error group considered. All the classes of codes that can be obtained in this framework are explored, including codes more general than Clifford codes. The error detection properties of one of these more general classes ("direct sums of translates of Clifford codes") are characterized. Example codes are constructed, and computer code-search results are presented and analysed.

  14. Comparison of Wind Power and Load Forecasting Error Distributions: Preprint

    SciTech Connect (OSTI)

    Hodge, B. M.; Florita, A.; Orwig, K.; Lew, D.; Milligan, M.

    2012-07-01T23:59:59.000Z

    The introduction of large amounts of variable and uncertain power sources, such as wind power, into the electricity grid presents a number of challenges for system operations. One issue involves the uncertainty associated with scheduling power that wind will supply in future timeframes. However, this is not an entirely new challenge; load is also variable and uncertain, and is strongly influenced by weather patterns. In this work we make a comparison between the day-ahead forecasting errors encountered in wind power forecasting and load forecasting. The study examines the distribution of errors from operational forecasting systems in two different Independent System Operator (ISO) regions for both wind power and load forecasts at the day-ahead timeframe. The day-ahead timescale is critical in power system operations because it serves the unit commitment function for slow-starting conventional generators.
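    A minimal sketch of the kind of distribution comparison described: summary statistics (including the excess kurtosis that distinguishes heavy-tailed error distributions) for two day-ahead error samples. The synthetic arrays below merely stand in for operational forecast-minus-actual records.

    ```python
    import numpy as np
    from scipy import stats

    def describe_errors(name, e):
        e = np.asarray(e, dtype=float)
        print("%-5s mean=%+.3f  std=%.3f  skew=%+.3f  excess kurtosis=%+.3f"
              % (name, e.mean(), e.std(ddof=1), stats.skew(e), stats.kurtosis(e)))

    rng = np.random.default_rng(0)
    wind_err = rng.laplace(0.0, 0.08, 8760)   # placeholder, heavier-tailed
    load_err = rng.normal(0.0, 0.03, 8760)    # placeholder, near-Gaussian

    describe_errors("wind", wind_err)
    describe_errors("load", load_err)
    ```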

  15. On the efficiency of nondegenerate quantum error correction codes for Pauli channels

    E-Print Network [OSTI]

    Gunnar Bjork; Jonas Almlof; Isabel Sainz

    2009-05-19T23:59:59.000Z

    We examine the efficiency of pure, nondegenerate quantum error-correction codes for Pauli channels. Specifically, we investigate whether correction of multiple errors in a block is more efficient than using a code that only corrects one error per block. Block coding with multiple-error correction cannot increase the efficiency when the qubit error probability is below a certain value and the code size is fixed. More surprisingly, existing multiple-error correction codes with a code length equal to or less than 256 qubits have lower efficiency than the optimal single-error correcting codes for any value of the qubit error probability. We also investigate how efficient various proposed nondegenerate single-error correcting codes are compared to the limit set by the code redundancy and by the necessary conditions for hypothetically existing nondegenerate codes. We find that existing codes are close to optimal.
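    Comparisons like these rest on block success probabilities under independent qubit errors. As a minimal sketch (with an illustrative code length n and error probabilities p), a perfect single-error-correcting block of n qubits succeeds whenever at most one qubit is hit:

    ```python
    def p_success_single_error_code(n, p):
        """Block survives if zero or one of n qubits suffers an error."""
        return (1 - p)**n + n * p * (1 - p)**(n - 1)

    for p in (1e-3, 1e-2, 1e-1):
        print("p=%.0e  bare qubit: %.6f  n=5 single-error code: %.6f"
              % (p, 1 - p, p_success_single_error_code(5, p)))
    ```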

  16. Scaling behavior of discretization errors in renormalization and improvement constants

    E-Print Network [OSTI]

    Bhattacharya, T; Lee, W; Sharpe, S R; Bhattacharya, Tanmoy; Gupta, Rajan; Lee, Weonjong; Sharpe, Stephen R.

    2006-01-01T23:59:59.000Z

    Non-perturbative results for improvement and renormalization constants needed for on-shell and off-shell O(a) improvement of bilinear operators composed of Wilson fermions are presented. The calculations have been done in the quenched approximation at beta=6.0, 6.2 and 6.4. To quantify residual discretization errors we compare our data with results from other non-perturbative calculations and with one-loop perturbation theory.

  17. Error message recording and reporting in the SLC control system

    SciTech Connect (OSTI)

    Spencer, N.; Bogart, J.; Phinney, N.; Thompson, K.

    1985-04-01T23:59:59.000Z

    Error or information messages that are signaled by control software either in the VAX host computer or the local microprocessor clusters are handled by a dedicated VAX process (PARANOIA). Messages are recorded on disk for further analysis and displayed at the appropriate console. Another VAX process (ERRLOG) can be used to sort, list and histogram various categories of messages. The functions performed by these processes and the algorithms used are discussed.

  19. Topics in measurement error and missing data problems

    E-Print Network [OSTI]

    Liu, Lian

    2009-05-15T23:59:59.000Z

    reasons. In this research, the impact of missing genotypes is investigated for high resolution combined linkage and association mapping of quantitative trait loci (QTL). We assume that the genotype data are missing completely at random (MCAR). Two... and asymptotic properties. In the genetics study, a new method is proposed to account for the missing genotype in a combined linkage and association study. We have concluded that this method does not improve power but it will provide better type I error rates...

  20. Hazard Sampling Dialog General Layout

    E-Print Network [OSTI]

    Zhang, Tao

    1 Hazard Sampling Dialog General Layout The dialog's purpose is to display information about the hazardous material being sampled by the UGV so either the system or the UV specialist can identify the risk level of the hazard. The dialog is associated with the hazmat reading icons (Table 1). Components

  1. Database Sampling with Functional Dependencies

    E-Print Network [OSTI]

    Riera, Jesús Bisbal

    Database Sampling with Functional Dependencies. Jesús Bisbal, Jane Grimson, Department of Computer ... there is a need to prototype the database which the applications will use when in operation. A prototype database can be built by sampling data from an existing database. Including relevant semantic information when...

  2. BLOOD SAMPLING SYSTEM TROUBLESHOOTING TIPS

    E-Print Network [OSTI]

    Kay, Mark A.

    SAFESET(TM) BLOOD SAMPLING SYSTEM: TROUBLESHOOTING TIPS TO PREVENT BLOOD BACKING UP IN LINE
    o Ensure that all air bubbles have been eliminated when priming
    o Invert and tap blood sampling ports to remove air volume
    o Reinfuse the patient's blood slowly, no faster than 1 mL per second, by pressing the plunger back

  3. Runtime Detection of C-Style Errors in UPC Code

    SciTech Connect (OSTI)

    Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D

    2011-09-29T23:59:59.000Z

    Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.

  4. Sample push-out fixture

    DOE Patents [OSTI]

    Biernat, John L. (Scotia, NY)

    2002-11-05T23:59:59.000Z

    This invention generally relates to the remote removal of pelletized samples from cylindrical containment capsules. V-blocks are used to receive the samples and provide guidance to push out rods. Stainless steel liners fit into the v-channels on the v-blocks which permits them to be remotely removed and replaced or cleaned to prevent cross contamination between capsules and samples. A capsule holder securely holds the capsule while allowing manual up/down and in/out movement to align each sample hole with the v-blocks. Both end sections contain identical v-blocks; one that guides the drive out screw and rods or manual push out rods and the other to receive the samples as they are driven out of the capsule.

  5. NSTP 2002-2 Methodology for Final Hazard Categorization for Nuclear...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    NSTP 2002-2: Methodology for Final Hazard Categorization for Nuclear Facilities from Category 3 to Radiological (11/13/02).

  6. Analysis of circuit imperfections in BosonSampling

    E-Print Network [OSTI]

    Anthony Leverrier; Raúl García-Patrón

    2014-11-05T23:59:59.000Z

    BosonSampling is a problem where a quantum computer offers a provable speedup over classical computers. Its main feature is that it can be solved with current linear optics technology, without the need for a full quantum computer. In this work, we investigate whether an experimentally realistic BosonSampler can really solve BosonSampling without any fault-tolerance mechanism. More precisely, we study how the unavoidable errors linked to an imperfect calibration of the optical elements affect the final result of the computation. We show that the fidelity of each optical element must be at least $1 - O(1/n^2)$, where $n$ refers to the number of single photons in the scheme. Such a requirement seems to be achievable with state-of-the-art equipment.

  7. ANALYSIS OF THE TANK 5F FINAL CHARACTERIZATION SAMPLES-2011

    SciTech Connect (OSTI)

    Oji, L.; Diprete, D.; Coleman, C.; Hay, M.

    2012-08-03T23:59:59.000Z

    The Savannah River National Laboratory (SRNL) was requested by SRR to provide sample preparation and analysis of the Tank 5F final characterization samples to determine the residual tank inventory prior to grouting. Two types of samples were collected and delivered to SRNL: floor samples across the tank and subsurface samples from mounds near risers 1 and 5 of Tank 5F. These samples were taken from Tank 5F between January and March 2011. These samples from individual locations in the tank (nine floor samples and six mound Tank 5F samples) were each homogenized and combined in a given proportion into 3 distinct composite samples to mimic the average composition in the entire tank. These Tank 5F composite samples were analyzed for radiological, chemical and elemental components. Additional measurements performed on the Tank 5F composite samples include bulk density and water leaching of the solids to account for water soluble species. With analyses for certain challenging radionuclides as the exception, all composite Tank 5F samples were analyzed and reported in triplicate. The target detection limits for isotopes analyzed were based on customer desired detection limits as specified in the technical task request documents. SRNL developed new methodologies to meet these target detection limits and provide data for the extensive suite of components. While many of the target detection limits were met for the species characterized for Tank 5F, as specified in the technical task request, some were not met. In a few cases, the relatively high levels of radioactive species of the same element or a chemically similar element precluded the ability to measure some isotopes to low levels. The Technical Task Request allows that while the analyses of these isotopes is needed, meeting the detection limits for these isotopes is a lower priority than meeting detection limits for the other specified isotopes. The isotopes whose detection limits were not met in all cases included the following: Al-26, Sn-126, Sb-126, Sb-126m, Eu-152 and Cf-249. SRNL, in conjunction with the plant customer, reviewed all these cases and determined that the impacts were negligible.

  9. SUBSURFACE MOBILE PLUTONIUM SPECIATION: SAMPLING ARTIFACTS FOR GROUNDWATER COLLOIDS

    SciTech Connect (OSTI)

    Kaplan, D.; Buesseler, K.

    2010-06-29T23:59:59.000Z

    A recent review found several conflicting conclusions regarding colloid-facilitated transport of radionuclides in groundwater and noted that colloids can both facilitate and retard transport. Given these contrasting conclusions and the profound implications even trace concentrations of plutonium (Pu) have on the calculated risk posed to human health, it is important that the methodology used to sample groundwater colloids be free of artifacts. The objectives of this study were: (1) to conduct a field study and measure Pu speciation ({sup 239}Pu and {sup 240}Pu for reduced-Pu{sub aq}, oxidized-Pu{sub aq}, reduced-Pu{sub colloid}, and oxidized-Pu{sub colloid}) in a Savannah River Site (SRS) aquifer along a pH gradient in F-Area, (2) to determine the impact of pumping rate on Pu concentration, Pu speciation, and Pu isotopic ratios, and (3) to determine the impact of delayed sample processing (as opposed to processing directly from the well).

  10. A methodology to identify material properties in layered visoelastic halfspaces

    E-Print Network [OSTI]

    Torpunuri, Vikram Simha

    1990-01-01T23:59:59.000Z

    [Figure and table residue from the original thesis: schematics of (a) a forward model with output error, (b) an inverse model with input error, and (c) a generalized model for system identification; a fragment noting the assumption that displacements vary linearly within each sublayer; and Figure 4, a schematic with sensors 0-6 and a layer-property table giving complex moduli of the form E* = E'(1 + iδ) for layers 1-3 and the underlying halfspace.]

  11. Sample Business Plan Framework 3

    Broader source: Energy.gov [DOE]

    U.S. Department of Energy Better Buildings Neighborhood Program: Sample Business Plan Framework 3, one of four sample frameworks for programs planning continued operations in the post-grant period.

  12. Sample Business Plan Framework 2

    Broader source: Energy.gov [DOE]

    U.S. Department of Energy Better Buildings Neighborhood Program: Sample Business Plan Framework 2, one of four sample frameworks for programs planning continued operations in the post-grant period.

  13. Sample Business Plan Framework 4

    Broader source: Energy.gov [DOE]

    U.S. Department of Energy Better Buildings Neighborhood Program: Sample Business Plan Framework 4, one of four sample frameworks for programs planning continued operations in the post-grant period.

  14. Sample Business Plan Framework 1

    Broader source: Energy.gov [DOE]

    U.S. Department of Energy Better Buildings Neighborhood Program: Sample Business Plan Framework 1: A program seeking to continue operations in the post-grant period as a not-for-profit (NGO) entity.

  15. General Methodology for developing UML models from UI

    E-Print Network [OSTI]

    Reddy, Ch Ram Mohan; Srinivasa, K G; Kumar, T V Suresh; Kanth, K Rajani

    2012-01-01T23:59:59.000Z

    In the recent past, every discipline and every industry had its own methods of developing products, whether in software development, mechanics, construction, psychology, and so on. These demarcations work fine as long as the requirements stay within one discipline. However, if a project extends over several disciplines, interfaces have to be created and coordinated between the methods of those disciplines. Performance is an important quality aspect of Web Services because of their distributed nature, and predicting the performance of web services during early stages of software development is significant. In industry, a prototype of these applications is developed during the analysis phase of the Software Development Life Cycle (SDLC), whereas performance models are generated from UML models, and methodologies for predicting performance from UML models are available. Hence, in this paper, a methodology for developing a use case model and an activity model from the user interface is presented. The methodology is illustrated with a case...

  16. Recompile if your codes run into MPICH error after the maintenance...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Recompile if your codes run into MPICH errors after the maintenance on 6/25/2014. June 27, 2014.

  17. Design techniques for graph-based error-correcting codes and their applications

    E-Print Network [OSTI]

    Lan, Ching Fu

    2006-04-12T23:59:59.000Z

    ... error-correcting (channel) coding. The main idea of error-correcting codes is to add redundancy to the information to be transmitted so that the receiver can exploit the correlation between transmitted information and redundancy and correct or detect errors caused...
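    The redundancy idea in the passage above can be seen in the simplest possible error-correcting code, sketched below: 3-fold repetition with majority-vote decoding. This is an illustration only, not one of the graph-based codes the thesis designs.

    ```python
    def encode(bits):
        return [b for b in bits for _ in range(3)]      # repeat each bit 3 times

    def decode(received):
        out = []
        for i in range(0, len(received), 3):
            out.append(1 if sum(received[i:i + 3]) >= 2 else 0)  # majority vote
        return out

    msg = [1, 0, 1, 1]
    code = encode(msg)
    code[4] ^= 1                # the channel flips one transmitted bit
    assert decode(code) == msg  # the single error is corrected
    print("corrected:", decode(code))
    ```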

  18. V-109: Google Chrome WebKit Type Confusion Error Lets Remote...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-109: Google Chrome WebKit Type Confusion Error Lets Remote Users Execute Arbitrary Code...

  19. T-545: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets...

    Energy Savers [EERE]

    T-545: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets Remote Users Execute Arbitrary Code...

  20. Cognitive analysis of students' errors and misconceptions in variables, equations, and functions

    E-Print Network [OSTI]

    Li, Xiaobao

    2009-05-15T23:59:59.000Z

    ... such issues, three basic algebra concepts (variable, equation, and function) are used to analyze students' errors, possible buggy algorithms, and the conceptual basis of these errors: misconceptions. Through the research on these three basic concepts...

  1. Short-term energy outlook. Volume 2. Methodology

    SciTech Connect (OSTI)

    Not Available

    1982-05-01T23:59:59.000Z

    This volume updates models and forecasting methodologies used and presents information on new developments since November 1981. Chapter 2 discusses the changes in forecasting methodology for motor gasoline demand, electricity sales, coking coal, and other petroleum products. Coefficient estimates, summary statistics, and data sources for many of the short-term energy models are provided. Chapter 3 evaluates previous short-term forecasts for the macroeconomic variables, total energy, petroleum supply and demand, coal consumption, natural gas, and electricity fuel shares. Chapter 4 reviews the relationship of total US energy consumption to economic activity between 1960 and 1981.

  2. Shared Dosimetry Error in Epidemiological Dose-Response Analyses

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo

    2015-03-23T23:59:59.000Z

    Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it was true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e. the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.

  3. Improved Characterization of Transmitted Wavefront Error on CADB Epoxy-Free Bonded Solid State Laser Materials

    SciTech Connect (OSTI)

    Bayramian, A

    2010-12-09T23:59:59.000Z

    Current state-of-the-art and next generation laser systems - such as those used in the NIF and LIFE experiments at LLNL - depend on ever larger optical elements. The need for wide aperture optics that are tolerant of high power has placed many demands on material growers for such diverse materials as crystalline sapphire, quartz, and laser host materials. For such materials, it is either prohibitively expensive or even physically impossible to fabricate monolithic pieces with the required size. In these cases, it is preferable to optically bond two or more elements together with a technique such as Chemically Activated Direct Bonding (CADB{copyright}). CADB is an epoxy-free bonding method that produces bulk-strength bonded samples with negligible optical loss and excellent environmental robustness. The authors have demonstrated CADB for a variety of different laser glasses and crystals. For this project, they will bond quartz samples together to determine the suitability of the resulting assemblies for large aperture high power laser optics. The assemblies will be evaluated in terms of their transmitted wavefront error, and other optical properties.

  4. Using Graphs for Fast Error Term Approximation of Time-varying Datasets

    SciTech Connect (OSTI)

    Nuber, C; LaMar, E C; Pascucci, V; Hamann, B; Joy, K I

    2003-02-27T23:59:59.000Z

    We present a method for the efficient computation and storage of approximations of error tables used for error estimation of a region between different time steps in time-varying datasets. The error between two time steps is defined as the distance between the data of these time steps. Error tables are used to look up the error between different time steps of a time-varying dataset, especially when run-time error computation is expensive. However, even the generation of error tables itself can be expensive. For n time steps, the exact error look-up table (which stores the error values for all pairs of time steps in a matrix) has a memory complexity and pre-processing time complexity of O(n^2), and O(1) for error retrieval. Our approximate error look-up table approach uses trees, where the leaf nodes represent original time steps, and interior nodes contain an average (or best representative) of the children nodes. The error computed on an edge of a tree describes the distance between the two nodes on that edge. Evaluating the error between two different time steps requires traversing a path between the two leaf nodes, and accumulating the errors on the traversed edges. For n time steps, this scheme has a memory complexity and pre-processing time complexity of O(n log(n)), a significant improvement over the exact scheme; the error retrieval complexity is O(log(n)). As we do not need to calculate all possible n^2 error terms, our approach is a fast way to generate the approximation.
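    A minimal sketch of the tree scheme described above, under assumed choices (L2 distance between time steps, pairwise averaging to form interior nodes, synthetic data): each edge stores the distance between a node and its parent's representative, and the error between two time steps is approximated by summing edge errors along the connecting path in O(log n).

    ```python
    import numpy as np

    def dist(a, b):
        return float(np.linalg.norm(a - b))

    def build_tree(steps):
        leaves = [{"data": s, "parent": None, "edge": 0.0} for s in steps]
        level = leaves
        while len(level) > 1:
            nxt = []
            for i in range(0, len(level), 2):
                pair = level[i:i + 2]
                parent = {"data": np.mean([n["data"] for n in pair], axis=0),
                          "parent": None, "edge": 0.0}
                for n in pair:
                    n["parent"] = parent
                    n["edge"] = dist(n["data"], parent["data"])
                nxt.append(parent)
            level = nxt
        return leaves                    # interior nodes reachable via links

    def approx_error(leaves, i, j):
        cost_i = {}                      # cumulative edge error from leaf i
        n, acc = leaves[i], 0.0
        while n is not None:
            cost_i[id(n)] = acc
            acc, n = acc + n["edge"], n["parent"]
        n, acc = leaves[j], 0.0
        while id(n) not in cost_i:       # climb to the common ancestor
            acc, n = acc + n["edge"], n["parent"]
        return acc + cost_i[id(n)]

    steps = [np.sin(np.linspace(0, 6.28, 64) + 0.2 * k) for k in range(8)]
    leaves = build_tree(steps)
    print("approx: %.3f  exact: %.3f"
          % (approx_error(leaves, 0, 5), dist(steps[0], steps[5])))
    ```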

  5. T-719:Apache mod_proxy_ajp HTTP Processing Error Lets Remote Users Deny Service

    Broader source: Energy.gov [DOE]

    A remote user can cause the backend server to remain in an error state until the retry timeout expires.

  6. Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors

    E-Print Network [OSTI]

    Sarkar, Abhra

    2014-06-24T23:59:59.000Z

    Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors. A dissertation by Abhra Sarkar, submitted to the Office of Graduate and Professional Studies of Texas A&M University in partial fulfillment ... Copyright 2014 Abhra Sarkar. ABSTRACT: Although the literature on measurement error problems is quite extensive, solutions to even the most fundamental measurement error problems like density deconvolution and regression with errors...

  7. Modified automatic time error control and inadvertent interchange reduction for the WSCC interconnected power systems

    SciTech Connect (OSTI)

    McReynolds, W.L. (Bonneville Power Administration, Vancouver, WA (US)); Badley, D.E. (N.W. Power Pool, Coordinating Office, Portland, OR (US))

    1991-08-01T23:59:59.000Z

    This paper describes an automatic generation control (AGC) system that simultaneously reduces time error and accumulated inadvertent interchange energy in interconnected power systems. This method is automatic time error and accumulated inadvertent interchange reduction (AIIR). With this method, control areas help correct the system time error when doing so also tends to correct accumulated inadvertent interchange. Thus, in one step, accumulated inadvertent interchange and system time error are corrected.
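    A minimal sketch of the decision logic described: a control area biases its generation to correct system time error only when doing so also tends to reduce its own accumulated inadvertent interchange. The sign conventions, gain, and cap are illustrative assumptions, not the WSCC implementation.

    ```python
    def aiir_bias(time_error_s, inadvertent_mwh, gain_mw_per_s=20.0, cap_mw=50.0):
        """Extra generation (MW) this control area volunteers under AIIR."""
        # Slow system time (negative time error) calls for extra generation;
        # positive accumulated inadvertent interchange calls for less.
        correction = -gain_mw_per_s * time_error_s
        helps_inadvertent = correction * inadvertent_mwh < 0
        if not helps_inadvertent:
            return 0.0                   # defer; let other areas correct time
        return max(-cap_mw, min(cap_mw, correction))

    print(aiir_bias(time_error_s=-2.0, inadvertent_mwh=-120.0))  # 40.0 MW
    print(aiir_bias(time_error_s=-2.0, inadvertent_mwh=+80.0))   # 0.0 (defer)
    ```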

  8. Design error diagnosis and correction in digital circuits

    E-Print Network [OSTI]

    Nayak, Debashis

    1998-01-01T23:59:59.000Z

    ... each primary output would impose a constraint on the on-set and off-set. These constraints should be combined together to derive the final on-set and off-set of the new function. Proposition 2: [9, 18, 17] Let i be the index of the primary outputs ... to this equation are deleted. The work in [17] is also based on Boolean comparisons and applies to multiple errors. Overall, their method does not guarantee a solution. Test-vector simulation methods proposed for the DEDC problem include [20, 22, 26]. In [20...

  9. An error correcting procedure for imperfect supervised, nonparametric classification

    E-Print Network [OSTI]

    Ferrell, Dennis Ray

    1973-01-01T23:59:59.000Z

    ... is active). For simplicity in writing, Pr(B = B_j) will be abbreviated by Pr(B_j), and f(x/B = B_j) will be abbreviated by f(x/B_j). The basic problem is, upon observing x, to determine which class is active. If complete ... The conditional risk of deciding the class to be B_j, r_j(x), is r_j(x) = Σ_{i=1, i≠j}^{L} Pr(B_i/x). The conditional probability of error can be minimized over j by assigning to a measurement x the label value B_j that minimizes r_j(x). The rule which will do this is Bayes rule, b*. The resulting...
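    A minimal sketch of the decision rule reconstructed above: given posterior class probabilities Pr(B_j/x), the conditional risk of choosing class j is r_j(x) = Σ_{i≠j} Pr(B_i/x) = 1 - Pr(B_j/x), so the Bayes rule picks the class with the largest posterior. The posterior values below are made up.

    ```python
    import numpy as np

    def bayes_decision(posteriors):
        p = np.asarray(posteriors, dtype=float)
        risks = 1.0 - p                  # r_j(x) = sum over i != j of Pr(B_i / x)
        j = int(risks.argmin())          # equivalently p.argmax()
        return j, risks[j]               # chosen class, conditional error prob.

    label, err = bayes_decision([0.1, 0.7, 0.2])
    print("choose class %d, Pr(error | x) = %.2f" % (label, err))
    ```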

  10. Optimum decoding of TCM in the presence of phase errors

    E-Print Network [OSTI]

    Han, Jae Choong

    1990-01-01T23:59:59.000Z

    ... discussed. Our approach is to assume that intersymbol interference has been effectively removed by the equalizer while the phase tracking scheme has partially removed the phase jitter, in which case the output of the equalizer will have a slowly varying ... The DAL [1] used the decision at the output of the Viterbi decoder to demodulate the local carrier. The performance degradation of coded 8-PSK when disturbed by recovered carrier phase error and jitter is investigated in [6], in which simulation...

  11. Effects of color coding on keying time and errors

    E-Print Network [OSTI]

    Wooldridge, Brenda Gail

    1983-01-01T23:59:59.000Z

    ... were to determine the effects, if any, of color coding upon the error rate and location time of special function keys on a computer keyboard. An ACT-YA CRT keyboard interfaced with a Cromemco microcomputer was used. There were 84 high school ... to communicate with more and more computer-like devices. The most common computer/human interface is the terminal, consisting of a display screen and keyboard. The format and layout on the display screen of computer-generated information is generally...

  12. Common Errors and Innovative Solutions Transcript | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


  13. Adaptive Sampling approach to environmental site characterization: Phase 1 demonstration

    SciTech Connect (OSTI)

    Floran, R.J.; Bujewski, G.E. [Sandia National Labs., Albuquerque, NM (United States); Johnson, R.L. [Argonne National Lab., IL (United States)

    1995-07-01T23:59:59.000Z

    A technology demonstration that optimizes sampling strategies and real-time data collection was carried out at the Kirtland Air Force Base (KAFB) RB-11 Radioactive Burial Site, Albuquerque, New Mexico in August 1994. The project, which was funded by the Strategic Environmental Research and Development Program (SERDP), involved the application of a geostatistical-based Adaptive Sampling methodology and software with on-site field screening of soils for radiation, organic compounds and metals. The software, known as Plume{trademark}, was developed at Argonne National Laboratory as part of the DOE/OTD-funded Mixed Waste Landfill Integrated Demonstration (MWLID). The objective of the investigation was to compare an innovative Adaptive Sampling approach that stressed real-time decision-making with a conventional RCRA-driven site characterization carried out by the Air Force. The latter investigation used a standard drilling and sampling plan as mandated by the Environmental Protection Agency (EPA). To make the comparison realistic, the same contractors and sampling equipment (Geoprobe{reg_sign} soil samplers) were used. In both investigations, soil samples were collected at several depths at numerous locations adjacent to burial trenches that contain low-level radioactive waste and animal carcasses; some trenches may also contain mixed waste. Neither study revealed the presence of contaminants appreciably above risk based action levels, indicating that minimal to no migration has occurred away from the trenches. The combination of Adaptive Sampling with field screening achieved a similar level of confidence compared to the Resource Conservation and Recovery Act (RCRA) investigation regarding the potential migration of contaminants at the site.

  14. Trade-off of lossless source coding error exponents Cheng Chang Anant Sahai

    E-Print Network [OSTI]

    Sahai, Anant

    Trade-off of lossless source coding error exponents. Cheng Chang (HP Labs, Palo Alto) and Anant Sahai (EECS, UC Berkeley). ISIT 2008. ... Stabilizing an unstable...

  15. A Memory Soft Error Measurement on Production Systems Xin Li Kai Shen Michael C. Huang

    E-Print Network [OSTI]

    Shen, Kai

    A Memory Soft Error Measurement on Production Systems. Xin Li, Kai Shen, Michael C. Huang (University ...). ... dealing with these soft (or transient) errors is important for system reliability. Several earlier ... for memory soft error measurement on production systems where performance impact on existing running applications...

  17. An Energy-Aware Fault Tolerant Scheduling Framework for Soft Error Resilient Cloud Computing Systems

    E-Print Network [OSTI]

    Pedram, Massoud

    I. INTRODUCTION: Soft error resiliency has become a major concern for modern computing systems as CMOS technology ... [8, 9]. Although it is impossible to entirely eliminate spontaneous soft errors, they can...

  18. Digication Error Message:"Your username is already in use by another account."

    E-Print Network [OSTI]

    Barrash, Warren

    Digication Error Message: "Your username is already in use by another account." You may need to use a different account (if you have one). If you receive the error message below, here's how to log into your Digication account. (For example, if the error message appeared when using your employee account, switch to your employee...

  19. Non-Concurrent Error Detection and Correction in Fault-Tolerant Discrete-Time LTI

    E-Print Network [OSTI]

    Hadjicostis, Christoforos

    Non-Concurrent Error Detection and Correction in Fault-Tolerant Discrete-Time LTI Dynamic Systems encoded form and allow error detection and correction to be performed through concurrent parity checks (i that allows parity checks to capture the evolution of errors in the system and, based on non-concurrent parity

  20. Error Analysis of Ia Supernova and Query on Cosmic Dark Energy

    E-Print Network [OSTI]

    Qiuhe Peng; Yiming Hu; Kun Wang; Yu Liang

    2012-01-16T23:59:59.000Z

    Some serious faults have been found in the error analysis of SNIa observations. Redoing the same error analysis of SNIa along the lines proposed here, we find that the average total observational error of SNIa is clearly greater than $0.55^m$, so we cannot decide whether the expansion of the universe is accelerating or not.