Neutron multiplication error in TRU waste measurements
Veilleux, John [Los Alamos National Laboratory]; Stanfield, Sean B. [CCP]; Wachter, Joe [CCP]; Ceo, Bob [CCP]
2009-01-01
Total Measurement Uncertainty (TMU) in neutron assays of transuranic (TRU) waste comprises several components, including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. The factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, and the concentration and geometry of the fissile sources; nevertheless, most TMU software calculations determine the measurement uncertainty as a function of the fissile mass alone, because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers.
This report will attempt to better define the error term due to neutron multiplication and arrive at values that are more realistic and accurate. To do so, measurements of standards and waste drums were performed with High Efficiency Neutron Counters (HENC) located at Los Alamos National Laboratory (LANL). The data were analyzed for multiplication effects and new estimates of the multiplication error were computed. A concluding section will present alternatives for reducing the number of rejections of TRU waste containers due to neutron multiplication error.
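The acceptance test described above (measured FGE plus twice the TMU must fall below the container limit) can be sketched as a short check. The masses below are hypothetical, chosen only to illustrate how a large multiplication-driven TMU can tip a drum into rejection:

```python
def accepts(fge_measured, tmu, limit):
    """Acceptance criterion from the abstract: measured FGE plus
    twice the TMU error must be less than the container limit
    (200 g FGE for 55-gal drums, 325 g FGE for boxed waste)."""
    return fge_measured + 2.0 * tmu < limit

# Hypothetical drum at 120 g FGE: a 35 g TMU passes the 200 g
# drum limit (120 + 70 = 190), but a 45 g TMU dominated by
# multiplication error fails it (120 + 90 = 210).
print(accepts(120.0, 35.0, 200.0))  # passes
print(accepts(120.0, 45.0, 200.0))  # rejected
```

This also shows why shrinking the multiplication error term directly reduces rejections: the drum in the example is rejected or accepted solely on the size of its TMU, not its measured mass.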
MEASUREMENT AND CORRECTION OF ULTRASONIC ANEMOMETER ERRORS
Heinemann, Detlev
Wind speed measurements with ultrasonic anemometers commonly show systematic errors that depend on wind speed, caused by inaccurate ultrasonic transducer mounting; in addition, the measured flow is distorted by the probe head. Corrections are applied to three-dimensional wind speed time series, and results for the variance and power spectra are shown.
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work finds the reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location; the accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
Ridge Regression Estimation Approach to Measurement Error Model
Shalabh
With A.K.Md. Ehsanes Saleh (Carleton University). When estimation of the regression parameters is ill conditioned, we consider Hoerl and Kennard (1970) type ridge regression (RR) modifications of the five quasi-empirical Bayes estimators of the regression parameters of a measurement error model.
Pressure Change Measurement Leak Testing Errors
Pryor, Jeff M; Walker, William C
2014-01-01
A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rates with formulas adapted to those types of tests using the same methodology.
Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications
Garcia, Tanya
2012-10-19
Many statistical models, like measurement error models, a general class of survival models, and a mixture data model with random censoring, are semiparametric where interest lies in estimating finite-dimensional parameters ...
Measuring worst-case errors in a robot workcell
Simon, R.W.; Brost, R.C.; Kholwadwala, D.K. [Sandia National Labs., Albuquerque, NM (United States). Intelligent Systems and Robotics Center
1997-10-01
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.
Optimal Estimation from Relative Measurements: Error Scaling (Extended Abstract)
Hespanha, João Pedro
Prabir Barooah; João P. Hespanha. ESTIMATION FROM RELATIVE MEASUREMENTS: We consider the problem of estimating a number of vectors x_u when a noisy "relative" measurement between x_u and x_v is available: ζ_{u,v} = x_u − x_v + ε_{u,v} ∈ R^k, (u, v) ∈ E ⊆ V × V (1).
Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors
Sarkar, Abhra
2014-06-24
Although the literature on measurement error problems is quite extensive, solutions to even the most fundamental measurement error problems like density deconvolution and regression with errors-in-covariates are available ...
Measure of Diffusion Model Error for Thermal Radiation Transport
Kumar, Akansha
2013-04-19
and computational time. However, this approximation often has significant error. Error due to the inherent nature of a physics model is called model error. Information about the model error associated with the diffusion approximation is clearly desirable...
Exposure Measurement Error in Time-Series Studies of Air Pollution: Concepts and Consequences
Dominici, Francesca
Keywords: measurement error, air pollution, time series, exposure. Measurement error may have substantial implications for interpreting time-series studies of air pollution and health.
Measurement and Analysis of the Error Characteristics of an In-Building Wireless Network
Steenkiste, Peter
There is a general belief that networks based on fiber or electrical connections have excellent error characteristics but that wireless networks do not. We present a measurement and analysis of the error characteristics of an in-building wireless network.
THE EFFECT OF EXPERIMENTAL ERROR ON BAT PERFORMANCE MEASUREMENTS
Smith, Lloyd V.
The coefficient of restitution (COR, or e) is a measure of the energy dissipated from the impact of the ball, and is found from the ratio of the rebound speed, vr, and inbound speed, vi, as e = vr/vi (1). Balls are also regulated
Investigation of Simple Linear Measurement Error Models (SLMEMS) with Correlated Data
Lu, Ming
2014-12-06
the measurement errors are normally distributed to allow development of likelihood-based methods of inference. Simulated true responses are modeled as a simple linear regression on the true response values. That is, we wish to detect if either additive...
Formalism for Simulation-based Optimization of Measurement Errors in High Energy Physics
Yuehong Xie
2009-04-29
Minimizing errors on the physical parameters of interest should be the ultimate goal of any event selection optimization in high energy physics data analysis involving parameter determination. Quick and reliable error estimation is a crucial ingredient for realizing this goal. In this paper we derive a formalism for direct evaluation of measurement errors using the signal probability density function and large, fully simulated signal and background samples, without the need for data fitting and background modelling. We illustrate the elegance of the formalism in the case of event selection optimization for CP violation measurement in B decays. The implication of this formalism for choosing event variables for data analysis is discussed.
On the Importance of Considering Measurement Errors in a Fuzzy Logic System for Scientific Applications in Nuclear Fusion
Analysis of Measurement Errors for a Superconducting Phase Qubit
Martinis, John M.
With Qin Zhang and Abraham G. Kofman. We analyze measurement errors of a superconducting flux-biased phase qubit. An insufficiently long measurement pulse may lead to nonadiabatic errors. A wide variety of groups are developing superconducting Josephson-junction circuits for quantum computation.
Schierup, Mikkel Heide
Correction for measurement error from genotyping-by-sequencing in genomic variance and genomic prediction. Center for Quantitative Genetics and Genomics, Department of Molecular Biology and Genetics, Aarhus University, Denmark; DLF-Trifolium, Store Heddinge, Denmark.
Tullos, Desiree
DOWNSTREAM CHANNEL CHANGES AFTER A SMALL DAM REMOVAL: USING AERIAL PHOTOS AND MEASUREMENT ERROR. Biological and Ecological Engineering, Oregon State University, Corvallis, OR, USA. ABSTRACT: Aerial photos and measurement error analysis were used to assess downstream channel changes associated with a small dam removal. The Brownsville Dam, a 2.1 m tall
Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells
Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.
2014-03-01
This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.
A multi-site analysis of random error in tower-based measurements of carbon and energy fluxes
Forest Service, 271 Mast Road, Durham, NH 03824, USA; LI-COR Biosciences, Inc., 4421 Superior Street.
Davidson, R. L.; Earle, G. D.; Heelis, R. A.; Klenzing, J. H.
2010-08-15
Planar retarding potential analyzers (RPAs) have been utilized numerous times on high profile missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellite Program to measure plasma composition, temperature, density, and the velocity component perpendicular to the plane of the instrument aperture. These instruments use biased grids to approximate ideal biased planes. These grids introduce perturbations in the electric potential distribution inside the instrument and, when unaccounted for, cause errors in the measured plasma parameters. Traditionally, the grids utilized in RPAs have been made of fine wires woven into a mesh. Previous studies on the errors caused by grids in RPAs have approximated woven grids with a truly flat grid. Using a commercial ion optics software package, errors in inferred parameters caused by both woven and flat grids are examined. A flat grid geometry shows the smallest temperature and density errors, while the double thick flat grid displays minimal errors for velocities over the temperature and velocity range used. Wire thickness along the dominant flow direction is found to be a critical design parameter in regard to errors in all three inferred plasma parameters. The results shown for each case provide valuable design guidelines for future RPA development.
Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra
J. B. Whitmore; M. T. Murphy
2014-11-18
We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium--argon calibration can be tracked with $\\sim$10 m s$^{-1}$ precision over the entire optical wavelength range on scales of both echelle orders ($\\sim$50--100 \\AA) and entire spectrographs arms ($\\sim$1000--3000 \\AA). Using archival spectra from the past 20 years we have probed the supercalibration history of the VLT--UVES and Keck--HIRES spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically $\\pm$200 m s$^{-1}$ per 1000 \\AA. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, $\\alpha$. The spurious deviations in $\\alpha$ produced by the model closely match important aspects of the VLT--UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in $\\alpha$ from quasar absorption lines.
Buser, Michael Dean
2004-09-30
indicated that current cotton gin emission factors could be over-estimated by about 40%. This over-estimation is a consequence of the relatively large PM associated with cotton gin exhausts. These PM sampling errors are contributing to the misappropriation...
The Measure of Human Error: Direct and Indirect Performance Shaping Factors
Ronald L. Boring; Candice D. Griffith; Jeffrey C. Joe
2007-08-01
The goal of performance shaping factors (PSFs) is to provide measures to account for human performance. PSFs fall into two categories—direct and indirect measures of human performance. While some PSFs such as “time to complete a task” are directly measurable, other PSFs, such as “fitness for duty,” can only be measured indirectly through other measures and PSFs, such as through fatigue measures. This paper explores the role of direct and indirect measures in human reliability analysis (HRA) and the implications that measurement theory has on analyses and applications using PSFs. The paper concludes with suggestions for maximizing the reliability and validity of PSFs.
Tyler, John E
1967-01-01
MEASUREMENT OF RADIANT ENERGY FOR CORRELATION WITH PRIMARY PRODUCTIVITY.
Lobach, Iryna
2009-05-15
We illustrate our approach in this simple case. We assumed that the environmental variables (X, W), genetic variant (G), and disease status (D) are binary. Given the values of (G, X) we generated a binary disease outcome D from a logistic model, with W equal to X measured with error, with misclassification probabilities pr(W = 0|X = 1) = 0.20 and pr(W = 1|X = 0) = 0.10. The results are based on a simulation study with 500 replications for 1000 cases and 1000 controls.
Automated suppression of errors in LTP-II slope measurements with x-ray optics
Ali, Zulfiqar
2011-01-01
Automated suppression of errors in LTP-II slope measurements with state-of-the-art x-ray optics. Keywords: scanning, metrology of x-ray optics, deflectometry.
Error analysis of pose measurement from sonic sensors without using speed of sound information
Lai, Chih-Chien
1999-01-01
Scott Burnett (1) demonstrated the feasibility of using acoustic sensors to locate an object without information about speed of sound. The algorithms of triangulation and pose measurement, which were introduced in his paper to fulfill the goal...
A multi-site analysis of random error in tower-based measurements of carbon and energy fluxes
Richardson, Andrew D.
Random errors for H are small, in contrast to both LE and FCO2, for which the random errors are roughly three times larger and vary with increasing wind speed. Data from two sites suggest that FCO2 random error may be slightly smaller when
Petriu, Emil M.
This is a very limited application that only considers a docking station. A helicopter fuzzy controller developed by Cavalcante in Florianópolis, Brazil [4], decomposes the movements of a helicopter into four separate blocks; the inputs represent errors and error deviations, and the fuzzy output represents corrections to the helicopter.
Reversible (unitary) gates, ancillary qubits, controlled gates (cX, cZ), measurement, and deterministic duplication. Decoding uses ancillary bits to determine what error occurred: an ancillary bit is set to 0 if the first two bits are equal, and set to 1 if not.
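The ancilla-based decoding just described can be mimicked classically for the three-bit repetition code. This is a simplified sketch of the parity-check logic only (a real quantum decoder measures these parities via ancilla qubits without reading the data qubits), not code from the source:

```python
def syndrome(bits):
    """Two parity checks as described above: the first compares
    bits 0 and 1 (0 if equal, 1 if not), the second bits 1 and 2."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(bits):
    """Correct a single bit flip; the syndrome pattern uniquely
    identifies which of the three bits (if any) was flipped."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    corrected = list(bits)
    if flip is not None:
        corrected[flip] ^= 1
    return tuple(corrected)

print(decode((0, 1, 0)))  # middle-bit flip corrected to (0, 0, 0)
print(decode((1, 1, 1)))  # clean codeword left untouched
```

Note that the decoder never needs to know the encoded value, only the parities; this is the feature that carries over to the quantum setting, where reading the data directly would destroy the superposition.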
Shirasaki, Masato; Yoshida, Naoki
2014-05-01
The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey with a sky coverage of {approx}1400 deg{sup 2} will constrain the dark energy equation of state parameter with an error of {Delta}w{sub 0} {approx} 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1{sigma} error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density {Omega}{sub m0}=0.256{sup +0.054}{sub -0.046}.
Ewers, Brent E.
Sap flux measured in stems did not lag JS measured in branches in either the time or frequency domain. Stomata respond to environmental variation, regulate water loss and carbon dioxide gain, and thus biosphere-atmosphere exchange of mass and energy. From porometry measurements, leaf conductance (gS) can be determined.
Oren, Ram
Canopy stomatal conductance (GS) is calculated from sap flux (JS) measured with Granier-type sensors. From porometry measurements, leaf conductance (gS) can be determined; the two estimates are similar, a condition that occurs for small leaves exposed to a sufficiently high wind speed (Herbst 1995).
Beddo, M.E.; Spinka, H.; Underwood, D.G.
1992-08-14
Studies of inclusive direct-{gamma} production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between {radical}s = 50 and 500 GeV. Also, rates were computed for direct-{gamma} + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.
Minimization of Divergence Error in Volumetric Velocity Measurements
Marusic, Ivan
Volumetric velocity measurements taken in incompressible fluids are typically hindered by a nonzero
Remarks on statistical errors in equivalent widths
Klaus Vollmann; Thomas Eversberg
2006-07-03
Equivalent width measurements for rapid line variability in atomic spectral lines are degraded by increasing error bars with shorter exposure times. We derive an expression for the error of the line equivalent width $\\sigma(W_\\lambda)$ with respect to pure photon noise statistics and provide a correction value for previous calculations.
Monte Carlo errors with less errors
Ulli Wolff
2006-11-29
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable to benchmark efficiencies of simulation algorithms with regard to specific observables of interest. A Matlab code is offered for download that implements the method. It can also combine independent runs (replica), allowing one to judge their consistency.
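The autocorrelation-summation idea can be illustrated in a few lines. The paper's own implementation is a Matlab code with automatic windowing; the Python sketch below uses a fixed summation window `w` and is only a simplified stand-in, not the paper's algorithm:

```python
import numpy as np

def integrated_autocorr_error(x, w=50):
    """Estimate the mean of a Monte Carlo time series x, its error,
    and the integrated autocorrelation time tau_int, by summing the
    normalized autocorrelation function up to a fixed window w."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    # normalized autocorrelation rho(t) = Gamma(t) / Gamma(0)
    rho = np.array([np.dot(d[:n - t], d[t:]) / np.dot(d, d)
                    for t in range(w)])
    tau_int = 0.5 + rho[1:].sum()
    # variance of the mean is inflated by 2 * tau_int relative to
    # the naive iid estimate sigma^2 / n
    err = np.sqrt(2.0 * tau_int / n) * x.std(ddof=0)
    return x.mean(), err, tau_int
```

For uncorrelated data tau_int is close to 0.5, recovering the familiar sigma/sqrt(n) error bar; correlated chains give tau_int > 0.5 and correspondingly larger errors. Choosing w automatically (rather than fixing it) is precisely the refinement the paper addresses.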
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
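The detection scheme in this record, running a deterministic algorithm and comparing its output against a known-good result, can be sketched as follows. The workload and the hash comparison are illustrative stand-ins chosen for this sketch, not the patented method (which additionally designs the algorithm to heat the processor):

```python
import hashlib

def run_workload(n):
    """Hypothetical deterministic stress workload standing in for the
    patent's algorithm: a long arithmetic loop whose final value is
    sensitive to any single miscomputation along the way."""
    acc = 0
    for i in range(n):
        acc = (acc * 6364136223846793005 + i) % (1 << 64)
    return acc

def detect_hardware_error(n, reference_digest):
    """Re-run the workload and compare against a known-good digest;
    any mismatch signals an error occurred somewhere during the run."""
    digest = hashlib.sha256(str(run_workload(n)).encode()).hexdigest()
    return digest != reference_digest
```

Because the comparison happens once at the end, a transient fault anywhere in the run corrupts the final digest, which is the property the record relies on to catch errors that momentary checks would miss.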
Agility metric sensitivity using linear error theory
Smith, David Matthew
2000-01-01
Aircraft agility metrics have been proposed for use to measure the performance and capability of aircraft onboard while in-flight. The sensitivity of these metrics to various types of errors and uncertainties is not ...
Zhang, Yunpeng; Li, En Guo, Gaofeng; Xu, Jiadi; Wang, Chao
2014-09-15
A pair of spot-focusing horn lens antennas is the key component in a free-space measurement system. The electromagnetic constitutive parameters of a planar sample are determined using transmitted and reflected electromagnetic beams. These parameters are obtained from the scattering parameters measured by the microwave network analyzer, the thickness of the sample, and the wavelength of the beam focused on the sample. Free-space techniques introduced by most papers take the focused wavelength to be the free-space wavelength. In fact, the incident wave projected by a lens into the sample approximates a Gaussian beam; thus, there is an elongation of the wavelength in the focused beam, and this elongation should be taken into consideration in dielectric and magnetic measurement. In this paper, the elongation of the wavelength has been analyzed and measured. Measurement results show that the focused wavelength in the vicinity of the focus has an elongation of 1%-5% relative to the free-space wavelength. The elongation's influence on the measured permittivity and permeability has been investigated. Numerical analyses show that the elongation of the focused wavelength can cause an increase in the measured value of the permeability relative to the traditionally measured value; the permittivity, however, is affected by several parameters and may increase or decrease relative to the traditionally measured value.
Goal-oriented local a posteriori error estimator for H(div)
2011-12-15
The error estimator measures the pollution effect from the outside region of D ... error estimators which account for and quantify the pollution effect.
Ali, Zulfiqar
2013-01-01
Slope measurements with x-ray optics. Part 1: Review of LTP errors. Part 2: Specification for precise reflective x-ray optics (Nucl. Inst. and Meth. A).
Thermodynamics of error correction
Pablo Sartori; Simone Pigolotti
2015-04-24
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and dissipated work of the process. Its derivation is based on the second law of thermodynamics, hence its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Abdelhamid Awad Aly Ahmed, Salah
2008-10-10
Quantum Error Control Codes, by Salah Abdelhamid Awad Aly Ahmed. Dissertation submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Doctor of Philosophy, May 2008. Major subject: Computer Science.
Error Dynamics: The Dynamic Emergence of Error Avoidance and
Bickhard, Mark H.
Standard such notions are, however, arguably limited, being based on untenable models of learning. Learning about error and handling error knowledge constitute a complex major theme in evolution. The central theme is a progressive elaboration of kinds of dynamics that manage error avoidance.
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
Error models in quantum computation: an application of model selection
Lucia Schwarz; Steven van Enk
2013-09-04
Threshold theorems for fault-tolerant quantum computing assume that errors are of certain types. But how would one detect whether errors of the "wrong" type occur in one's experiment, especially if one does not even know what type of error to look for? The problem is that for many qubits a full state description is impossible to analyze, and a full process description is harder still. As a result, one simply cannot detect all types of errors. Here we show through a quantum state estimation example (on up to 25 qubits) how to attack this problem using model selection. We use, in particular, the Akaike Information Criterion. The example indicates that the number of measurements that one has to perform before noticing errors of the wrong type scales polynomially both with the number of qubits and with the error size.
Fault-Tolerant Error Correction with the Gauge Color Code
Benjamin J. Brown; Naomi H. Nickerson; Dan E. Browne
2015-08-03
The gauge color code is a quantum error-correcting code with local syndrome measurements that, remarkably, admits a universal transversal gate set without the need for resource-intensive magic state distillation. A result of recent interest, proposed by Bombín, shows that the subsystem structure of the gauge color code admits an error-correction protocol that achieves tolerance to noisy measurements without the need for repeated measurements, so-called single-shot error correction. Here, we demonstrate the promise of single-shot error correction by designing a two-part decoder and investigate its performance. We simulate fault-tolerant error correction with the gauge color code by repeatedly applying our proposed error-correction protocol to deal with errors that occur continuously to the underlying physical qubits of the code over the duration that quantum information is stored. We estimate a sustainable error rate, i.e. the threshold for the long time limit, of ~0.31% for a phenomenological noise model using a simple decoding algorithm.
Photometric Redshifts and Photometry Errors
D. Wittman; P. Riechers; V. E. Margoniner
2007-09-21
We examine the impact of non-Gaussian photometry errors on photometric redshift performance. We find that they greatly increase the scatter, but this can be mitigated to some extent by incorporating the correct noise model into the photometric redshift estimation process. However, the remaining scatter is still equivalent to that of a much shallower survey with Gaussian photometry errors. We also estimate the impact of non-Gaussian errors on the spectroscopic sample size required to verify the photometric redshift rms scatter to a given precision. Even with Gaussian photometry errors, photometric redshift errors are sufficiently non-Gaussian to require an order of magnitude larger sample than simple Gaussian statistics would indicate. The requirements increase from this baseline if non-Gaussian photometry errors are included. Again the impact can be mitigated by incorporating the correct noise model, but only to the equivalent of a survey with much larger Gaussian photometry errors. However, these requirements may well be overestimates because they are based on a need to know the rms, which is particularly sensitive to tails. Other parametrizations of the distribution may require smaller samples.
Using error correction to determine the noise model
M. Laforest; D. Simon; J. -C. Boileau; J. Baugh; M. Ditty; R. Laflamme
2007-01-25
Quantum error correcting codes have been shown to have the ability of making quantum information resilient against noise. Here we show that we can use quantum error correcting codes as diagnostics to characterise noise. The experiment is based on a three-bit quantum error correcting code carried out on a three-qubit nuclear magnetic resonance (NMR) quantum information processor. Utilizing both engineered and natural noise, the degree of correlations present in the noise affecting a two-qubit subsystem was determined. We measured a correlation factor of c=0.5+/-0.2 using the error correction protocol, and c=0.3+/-0.2 using a standard NMR technique based on coherence pathway selection. Although the error correction method demands precise control, the results demonstrate that the required precision is achievable in the liquid-state NMR setting.
DATA COMPRESSION USING WAVELETS: ERROR ...
1910-90-11
algorithms that introduce differences between the original and compressed data in ... to choose an error metric that parallels the human visual system, so that image .... signal data along a communications channel, one sends integer codes that ...
The Challenge of Quantum Error Correction.
Fominov, Yakov
… in the design of physical bits. What we need (hardware requirements): 1. many (10^3–10^4 / R) individual bits … a. classical (flip) error; b. phase error, ~exp(-i∫E(t)dt), fluctuates … need hardware error correction … classical error correction by software and hardware … hardware error correction: Ising …
Unequal error protection of subband coded bits
Devalla, Badarinath
1994-01-01
Source coded data can be separated into different classes based on their susceptibility to channel errors. Errors in the Important bits cause greater distortion in the reconstructed signal. This thesis presents an Unequal Error Protection scheme...
Communication error detection using facial expressions
Wang, Sy Bor, 1976-
2008-01-01
Automatic detection of communication errors in conversational systems typically rely only on acoustic cues. However, perceptual studies have indicated that speakers do exhibit visual communication error cues passively ...
Error Reduction for Weigh-In-Motion
Hively, Lee M; Abercrombie, Robert K; Scudiere, Matthew B; Sheldon, Frederick T
2009-01-01
Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).
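The key idea above, that bouncing and rocking add a near-periodic term which a suitable filter can cancel, can be sketched as follows. This is not the ORNL filter, only a minimal illustration under assumed numbers: averaging the axle signal over an integer number of oscillation periods removes the sinusoidal bounce exactly.

```python
import numpy as np

# Minimal illustration of removing vehicle-oscillation error from a
# weigh-in-motion signal: bouncing adds a near-periodic term that
# averages to zero over whole oscillation periods. Amplitudes and
# frequencies are illustrative assumptions.

fs = 1000.0                     # sample rate, Hz
t = np.arange(1000) / fs        # 1 s of data
true_weight = 10000.0           # kg, the static weight we want to recover
bounce = 500.0 * np.sin(2 * np.pi * 10.0 * t)   # 10 Hz bounce, 5% amplitude

signal = true_weight + bounce

# Naive estimate: a single instantaneous sample (off by up to 5% here).
naive = signal[25]

# Filtered estimate: average over an integer number of bounce periods
# (10 full cycles in this window), cancelling the oscillation.
filtered = signal.mean()

rel_error = abs(filtered - true_weight) / true_weight
```

A real WIM filter must cope with unknown, drifting oscillation frequencies and short footprints, which is what makes the certifiable < 0.1% error figure nontrivial.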
Catastrophic photometric redshift errors: Weak-lensing survey requirements
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Bernstein, Gary; Huterer, Dragan
2010-01-11
We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number Nspec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below statistical errors of weak lensing tomography. While the straightforward estimate of Nspec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in Nspec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the z_s–z_p distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. In conclusion, the cross-correlation method is therefore unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.
ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.
Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.
2004-07-26
We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
McInerney, Peter; Adams, Paul; Hadi, Masood Z.
2014-01-01
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
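The error-rate metric used in sequencing-based fidelity comparisons reduces to mismatches per base sequenced: compare each cloned PCR product against its template and divide the mismatch count by the total bases examined. The sketch below uses toy sequences and made-up "polymerase" clone sets, not data from the study.

```python
# Sketch of the error-rate metric for comparing PCR polymerases by
# direct sequencing of cloned products: errors per base sequenced.
# All sequences below are toy data, not from the study.

def error_rate(template, clones):
    errors = sum(
        sum(1 for a, b in zip(template, clone) if a != b)
        for clone in clones
    )
    total_bases = len(template) * len(clones)
    return errors / total_bases

template = "ACGTACGTACGTACGTACGT"          # 20-base toy template

# Toy "high-fidelity" clones: 1 mutation across 5 clones (1 per 100 bases).
hifi_clones = ["ACGTACGTACGTACGTACGT"] * 4 + ["ACGTACGTACGTACGTACGA"]

# Toy "Taq-like" clones: 1 mutation per clone (5 per 100 bases).
taq_clones = ["ACGAACGTACGTACGTACGT",
              "ACGTACGTACCTACGTACGT",
              "ACGTACGTACGTACGTACGC",
              "TCGTACGTACGTACGTACGT",
              "ACGTACGGACGTACGTACGT"]

hifi_rate = error_rate(template, hifi_clones)
taq_rate = error_rate(template, taq_clones)
```

Real analyses additionally normalize per template doubling and classify each mismatch (transition vs transversion) to build the mutation spectra the abstract mentions.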
Reducing collective quantum state rotation errors with reversible dephasing
Cox, Kevin C.; Norcia, Matthew A.; Weiner, Joshua M.; Bohnet, Justin G.; Thompson, James K.
2014-12-29
We demonstrate that reversible dephasing via inhomogeneous broadening can greatly reduce collective quantum state rotation errors, and observe the suppression of rotation errors by more than 21 dB in the context of collective population measurements of the spin states of an ensemble of 2.1×10^5 laser cooled and trapped ^87Rb atoms. The large reduction in rotation noise enables direct resolution of spin state populations 13(1) dB below the fundamental quantum projection noise limit. Further, the spin state measurement projects the system into an entangled state with 9.5(5) dB of directly observed spectroscopic enhancement (squeezing) relative to the standard quantum limit, whereas no enhancement would have been obtained without the suppression of rotation errors.
Repeated quantum error correction on a continuously encoded qubit by real-time feedback
Julia Cramer; Norbert Kalb; M. Adriaan Rol; Bas Hensen; Machiel S. Blok; Matthew Markham; Daniel J. Twitchen; Ronald Hanson; Tim H. Taminiau
2015-08-06
Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits, so that errors can be detected without affecting the encoded state. To be compatible with universal fault-tolerant computations, it is essential that the states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected qubit using a diamond quantum processor. We encode a logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements using an ancilla electron spin, and apply corrections on the encoded state by real-time feedback. The actively error-corrected qubit is robust against errors and multiple rounds of error correction prevent errors from accumulating. Moreover, by correcting phase errors naturally induced by the environment, we demonstrate that encoded quantum superposition states are preserved beyond the dephasing time of the best physical qubit used in the encoding. These results establish a powerful platform for the fundamental investigation of error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing.
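The logic of encoding one logical qubit in three physical carriers and correcting a single error without reading out the logical value has a simple classical analogue: the three-bit repetition code. The sketch below shows only that classical correction logic (parity-check syndromes locating a single flipped bit); the actual experiment corrects *phase* errors on nuclear spins via non-destructive ancilla measurements, which this toy does not capture.

```python
# Classical analogue of the three-qubit repetition code: one logical bit
# encoded in three physical bits; pairwise parity checks (analogous to
# stabilizer measurements) locate a single error without revealing the
# logical value, and the error is corrected in place.

def encode(bit):
    return [bit, bit, bit]

def syndrome(code):
    # Two parity checks; their pattern identifies which bit (if any) flipped.
    return (code[0] ^ code[1], code[1] ^ code[2])

def correct(code):
    s = syndrome(code)
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}   # syndrome -> faulty bit index
    if s in flip:
        code[flip[s]] ^= 1
    return code

def decode(code):
    return 1 if sum(code) >= 2 else 0          # majority vote

encoded = encode(1)
encoded[0] ^= 1                  # inject a single bit-flip error
corrected = correct(list(encoded))
logical = decode(corrected)
```

The quantum case replaces the parity reads with ancilla-mediated stabilizer measurements so that the superposition being protected is never collapsed.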
Kernel Regression in the Presence of Correlated Errors
… in nonparametric regression is difficult in the presence of correlated errors. There exist a wide variety … vector machines for regression. Keywords: nonparametric regression, correlated errors, bandwidth choice
Time reversal in thermoacoustic tomography - an error estimate
Hristova, Yulia
2008-01-01
The time reversal method in thermoacoustic tomography is used for approximating the initial pressure inside a biological object using measurements of the pressure wave made outside the object. This article presents error estimates for the time reversal method in the cases of variable, non-trapping sound speeds.
The contour method cutting assumption: error minimization and correction
Prime, Michael B; Kastengren, Alan L
2010-01-01
The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented. The important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to impart a known profile of residual stresses.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
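The core cost-saving idea above, evaluating the error on a random subset of rays and rescaling, can be sketched numerically. The problem sizes, the least-squares formulation, and the (n/m) rescaling below are illustrative assumptions, not the patented method; the point is that the subset estimate tracks the full error closely enough to drive a conjugate-gradient line search.

```python
import numpy as np

# Sketch of the approximate-error idea: the full error sums squared
# residuals over every ray, while a random subset rescaled by
# (total rays / subset size) gives a cheap, unbiased estimate.

rng = np.random.default_rng(1)
n_rays, n_vox = 5000, 50
A = rng.normal(size=(n_rays, n_vox))          # ray/projection matrix
x_true = rng.normal(size=n_vox)
b = A @ x_true + rng.normal(0, 0.5, n_rays)   # noisy ray measurements

x = np.zeros(n_vox)                           # current reconstruction estimate

def full_error(x):
    return np.sum((A @ x - b) ** 2)           # expensive: all rays

def approx_error(x, m):
    idx = rng.choice(n_rays, size=m, replace=False)
    return (n_rays / m) * np.sum((A[idx] @ x - b[idx]) ** 2)

exact = full_error(x)
approx = approx_error(x, m=500)               # only 10% of the rays
ratio = approx / exact
```

In a constrained conjugate-gradient loop, each line minimization would call `approx_error` instead of `full_error`, trading a small amount of noise in the search direction for a tenfold reduction in per-iteration cost.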
Error field and magnetic diagnostic modeling for W7-X
Lazerson, Sam A.; Gates, David A.; NEILSON, GEORGE H.; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.
2014-07-01
The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady state (30 minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign as bootstrap current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.
Characterization of quantum dynamics using quantum error correction
S. Omkar; R. Srikanth; S. Banerjee
2015-01-27
Characterizing noisy quantum processes is important to quantum computation and communication (QCC), since quantum systems are generally open. To date, all methods of characterization of quantum dynamics (CQD), typically implemented by quantum process tomography, are off-line, i.e., QCC and CQD are not concurrent, as they require distinct state preparations. Here we introduce a method, "quantum error correction based characterization of dynamics", in which the initial state is any element from the code space of a quantum error correcting code that can protect the state from arbitrary errors acting on the subsystem subjected to the unknown dynamics. The statistics of stabilizer measurements, with possible unitary pre-processing operations, are used to characterize the noise, while the observed syndrome can be used to correct the noisy state. Our method requires at most $2(4^n-1)$ configurations to characterize arbitrary noise acting on $n$ qubits.
Structure of minimum-error quantum state discrimination
Joonwoo Bae
2013-07-19
Distinguishing different quantum states is a fundamental task having practical applications for information processing. Despite the efforts devoted so far, however, strategies for optimal discrimination are known only for specific examples. We here consider the problem of minimum-error quantum state discrimination, where the aim is to minimize the average error. We show the general structure of minimum-error state discrimination as well as useful properties to derive analytic solutions. Based on the general structure, we present a geometric formulation of the problem, which can be applied to cases where quantum state geometry is clear. We also introduce equivalence classes of sets of quantum states in terms of minimum-error discrimination: sets of quantum states in an equivalence class share the same guessing probability. In particular, for qubit states where the state geometry is found with the Bloch sphere, we illustrate that for an arbitrary set of qubit states, the minimum-error state discrimination with equal prior probabilities can be analytically solved, that is, optimal measurement and the guessing probability are explicitly obtained.
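For the two-state case, the guessing probability discussed above has a well-known closed form, the Helstrom bound: P_guess = ½(1 + ‖p₀ρ₀ − p₁ρ₁‖₁), where ‖·‖₁ is the trace norm. A minimal numpy sketch (the example states are illustrative):

```python
import numpy as np

# Minimum-error discrimination of two quantum states: the optimal
# guessing probability is the Helstrom bound,
#     P_guess = 1/2 * (1 + || p0*rho0 - p1*rho1 ||_1),
# where the trace norm ||.||_1 is the sum of absolute eigenvalues.

def helstrom(rho0, rho1, p0=0.5, p1=0.5):
    delta = p0 * rho0 - p1 * rho1
    eigvals = np.linalg.eigvalsh(delta)    # Hermitian => real eigenvalues
    trace_norm = np.sum(np.abs(eigvals))
    return 0.5 * (1.0 + trace_norm)

ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])
rho0 = ket0 @ ket0.T                       # |0><0|
rho1 = ket1 @ ket1.T                       # |1><1|

p_orthogonal = helstrom(rho0, rho1)        # orthogonal states: certain, 1.0
p_identical = helstrom(rho0, rho0)         # identical states: coin flip, 0.5
```

For more than two states no such closed form exists in general, which is what motivates the structural and geometric analysis of the paper.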
Evaluating specific error characteristics of microwave-derived cloud liquid water products
Christopher, Sundar A.
… of cloud LWP products globally using concurrent data from visible/infrared satellite sensors … Evaluating specific error characteristics of microwave-derived cloud liquid water products, Thomas J. … microwave satellite measurements; using coincident visible/infrared satellite data, errors are isolated …
Impact of Turbulence Closures and Numerical Errors for the Optimization of Flow Control Devices
Paris-Sud XI, Université de
Impact of Turbulence Closures and Numerical Errors for the Optimization of Flow Control Devices … the use of a Kriging-based global optimization method to determine optimal control parameters … conduct an optimization process and measure the impact of numerical and modeling errors on the optimum …
Group representations, error bases and quantum codes
Knill, E
1996-01-01
This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.
On a fatal error in tachyonic physics
Edward Kapuścik
2013-08-10
A fatal error in the famous paper on tachyons by Gerald Feinberg is pointed out. The correct expressions for energy and momentum of tachyons are derived.
Adjoint Error Estimation for Elastohydrodynamic Lubrication
Jimack, Peter
Adjoint Error Estimation for Elastohydrodynamic Lubrication, by Daniel Edward Hart. Submitted … elastohydrodynamic lubrication (EHL) problems. A functional is introduced, namely the friction …
SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors
Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I
2014-06-01
Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6MV using a Millennium 120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per 56 image pairs. Also, 560 positive tests (with error) with randomly selected image pairs and randomly selected in-field error location. Images were classified as errored (or error-free) according to whether the percentage of passing pixels fell below (or above) a chosen threshold; this was repeated for errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some threshold combination. Results: 20 mm^2 errors with intensity altered by ≥20% could be reliably detected, as could 10 mm^2 errors with intensity altered by ≥50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though may be detected by visual inspection. This work was not funded by Varian Oncology Systems. Some authors have other work partly funded by Varian Oncology Systems.
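The detection test described above reduces to: insert a known error region into the measured image, apply a per-pixel agreement criterion, and classify the image by the fraction of failing pixels. The sketch below uses a dose-difference-only criterion as a stand-in for the full 3%/3mm gamma analysis (a real gamma test also searches spatially within 3 mm); image sizes, the flat predicted image, and the thresholds are illustrative assumptions.

```python
import numpy as np

# Sketch of the error-detection classification with a dose-difference-only
# criterion standing in for full 3%/3mm gamma analysis.

def classify(predicted, measured, dose_tol=0.03, fail_fraction=0.005):
    # Fraction of pixels whose relative dose difference exceeds dose_tol;
    # the image is called "errored" if that fraction exceeds fail_fraction.
    rel_diff = np.abs(measured - predicted) / predicted.max()
    failing = np.mean(rel_diff > dose_tol)
    return bool(failing > fail_fraction)

predicted = np.ones((100, 100))            # idealized flat predicted image
measured = predicted.copy()

clean_flag = classify(predicted, measured) # no error inserted

# Insert a simulated delivery error: raise a 10x10 pixel region by 20%.
errored = measured.copy()
errored[40:50, 40:50] *= 1.20
error_flag = classify(predicted, errored)
```

Sweeping `fail_fraction` while recording true/false positive rates over many such trials is what generates the ROC curves used in the study.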
WIPP Weatherization: Common Errors and Innovative Solutions Presentati...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
WIPP Weatherization: Common Errors and Innovative Solutions Presentation. This presentation contains...
Distinguishing mixed quantum states: Minimum-error discrimination versus optimum unambiguous discrimination
Ulrike Herzog (Institut für Physik, Humboldt-Universität zu Berlin); János A. Bergou
2004
We consider two different optimized measurement strategies for the discrimination …
Inference for Model Error
Allan Seheult
Oakley, Jeremy
Keywords: Reservoirs, Model Error, Reification, Thermohaline Circulation. 1 Introduction: Mathematical models of complex … the uncertainties associated with both calibrating a mathematical model to observations on a physical system … specification exercise of model error with the cosmologists, linked to an extensive analysis of model error …
Nonparametric Regression with Correlated Errors
Opsomer, Jean
Wang, Yuedong
Jean Opsomer (Iowa State University); Yuedong Wang. Nonparametric regression techniques are often sensitive to the presence of correlation in the errors … splines and wavelet regression under correlation, both for short-range and long-range dependence …
Characterizing Application Memory Error Vulnerability to
Mutlu, Onur
… heterogeneous-reliability memory (HRM): store error-tolerant data in less-reliable, lower-cost memory; store error-vulnerable data … an application … Observation 2: data can be recovered by software … Heterogeneous-Reliability Memory (HRM) … Evaluation … Server …
Measurement Errors and Outliers in Seasonal Unit Root Testing
Haldrup, Niels Prof.; Montanes, Antonio; Sansó, Andreu
2000-01-01
On earthquake predictability measurement: Information score and error diagram
Kagan, Yan Y.
2007-01-01
Although the simple renewal models are widely used for … Because of this the renewal models, analyzed in this work, … apply the renewal or similar models (Davis et al., 1989; …
On earthquake predictability measurement: Information score and error diagram
Kagan, Yan Y.
2007-01-01
… and time-independent earthquake forecast models for southern … and precursor motifs in earthquake networks, Physica A, … has been since the last earthquake, the longer the expected …
Measurement Error in Spatial Modeling of Environmental Exposures
Paciorek, Chris
… (HRV) and inflammation markers (CRP/IL6) … Normative aging study: HRV and inflammation markers (CRP/IL6) …
NOx Measurement Errors in Ammonia-Containing Exhaust | Department...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Detroit, Michigan. Sponsored by the U.S. DOE's EERE FreedomCar and Fuel Partnership and 21st Century Truck Programs. (2006deerhoard.pdf)
On earthquake predictability measurement: Information score and error diagram
Kagan, Yan Y.
2007-01-01
… short-term earthquake prediction, Science, 236, 1563-1567 … G. (2006), Testing earthquake prediction methods: "The West … Strategies in strong earthquake prediction, Phys. Earth …
Verification of unfold error estimates in the unfold operator code
Fehl, D.L.; Biggs, F.
1997-01-01
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
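The comparison described above, built-in error propagation versus a Monte Carlo over Gaussian-perturbed data, can be sketched for a linear unfold. This is a generic least-squares stand-in for the UFO code (the response matrix and spectrum below are random illustrative data, not the blackbody test problem): for x = (RᵀR)⁻¹Rᵀd, the propagated covariance is P Σ Pᵀ with P the pseudo-inverse, and the Monte Carlo should reproduce its diagonal.

```python
import numpy as np

# Sketch: check a propagated ("built-in") unfold uncertainty against a
# Monte Carlo over Gaussian-perturbed data, for a linear least-squares
# unfold. R, x_true are random stand-ins for real response functions.

rng = np.random.default_rng(2)
n_data, n_bins = 20, 5
R = rng.uniform(0.1, 1.0, size=(n_data, n_bins))   # response functions
x_true = rng.uniform(1.0, 2.0, size=n_bins)        # "true" spectrum
d0 = R @ x_true
sigma = 0.05 * d0                                  # 5% data imprecision

# Built-in estimate: linear error propagation through the pseudo-inverse.
P = np.linalg.inv(R.T @ R) @ R.T                   # unfold: x = P @ d
builtin_std = np.sqrt(np.diag(P @ np.diag(sigma**2) @ P.T))

# Monte Carlo estimate: unfold many Gaussian-perturbed data sets.
samples = np.array([P @ (d0 + rng.normal(0, sigma)) for _ in range(2000)])
mc_std = samples.std(axis=0)

agreement = np.max(np.abs(mc_std - builtin_std) / builtin_std)
```

With 2000 samples the two estimates agree to a few percent; with only 100 samples, as in the paper, the statistical resolution of the comparison is correspondingly coarser.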
Quantum Error Correction for Quantum Memories
Barbara M. Terhal
2015-04-10
Active quantum error correction using qubit stabilizer codes has emerged as a promising, but experimentally challenging, engineering program for building a universal quantum computer. In this review we consider the formalism of qubit stabilizer and subsystem stabilizer codes and their possible use in protecting quantum information in a quantum memory. We review the theory of fault-tolerance and quantum error-correction, discuss examples of various codes and code constructions, the general quantum error correction conditions, the noise threshold, the special role played by Clifford gates and the route towards fault-tolerant universal quantum computation. The second part of the review is focused on providing an overview of quantum error correction using two-dimensional (topological) codes, in particular the surface code architecture. We discuss the complexity of decoding and the notion of passive or self-correcting quantum memories. The review does not focus on a particular technology but discusses topics that will be relevant for various quantum technologies.
Simulating Bosonic Baths with Error Bars
Mischa P. Woods; M. Cramer; M. B. Plenio
2015-04-07
We derive rigorous truncation-error bounds for the spin-boson model and its generalizations to arbitrary quantum systems interacting with bosonic baths. For the numerical simulation of such baths the truncation of both the number of modes and the local Hilbert-space dimensions is necessary. We derive super-exponential Lieb-Robinson-type bounds on the error when restricting the bath to finitely-many modes and show how the error introduced by truncating the local Hilbert spaces may be efficiently monitored numerically. In this way we give error bounds for approximating the infinite system by a finite-dimensional one. As a consequence, numerical simulations such as the time-evolving density with orthogonal polynomials algorithm (TEDOPA) now allow for the fully certified treatment of the system-environment interaction.
Errors and paradoxes in quantum mechanics
D. Rohrlich
2007-08-28
Errors and paradoxes in quantum mechanics, entry in the Compendium of Quantum Physics: Concepts, Experiments, History and Philosophy, ed. F. Weinert, K. Hentschel, D. Greenberger and B. Falkenburg (Springer), to appear
Quantum error-correcting codes and devices
Gottesman, Daniel (Los Alamos, NM)
2000-10-03
A method of forming quantum error-correcting codes by first forming a stabilizer for a Hilbert space. A quantum information processing device can be formed to implement such quantum codes.
Organizational Errors: Directions for Future Research
Carroll, John Stephen
The goal of this chapter is to promote research about organizational errors—i.e., the actions of multiple organizational participants that deviate from organizationally specified rules and can potentially result in adverse ...
Quantifying truncation errors in effective field theory
R. J. Furnstahl; N. Klco; D. R. Phillips; S. Wesolowski
2015-06-03
Bayesian procedures designed to quantify truncation errors in perturbative calculations of quantum chromodynamics observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions ("priors") for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We first demonstrate the calculation of Bayesian probability distributions for the EFT truncation error in some representative examples, and then focus on the application of chiral EFT to neutron-proton scattering. Epelbaum, Krebs, and Meißner recently articulated explicit rules for estimating truncation errors in such EFT calculations of few-nucleon-system properties. We find that their basic procedure emerges generically from one class of naturalness priors considered, and that all such priors result in consistent quantitative predictions for 68% DOB intervals. We then explore several methods by which the convergence properties of the EFT for a set of observables may be used to check the statistical consistency of the EFT expansion parameter.
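The "standard EFT procedure" that the Bayesian analysis formalizes can be stated concretely: for an observable X = X_ref Σᵢ cᵢ Qⁱ truncated at order k, the error is taken as the size of the first omitted term, c̄ Q^(k+1), with the natural size c̄ read off from the coefficients seen so far. The sketch below implements only that order-by-order estimate; the coefficients and expansion parameter are illustrative numbers, not values from the paper, and the full Bayesian treatment replaces the max-based c̄ with DOB intervals computed from an explicit prior.

```python
# Order-by-order EFT truncation-error estimate: the error assigned to a
# truncation at order k is the expected size of the first omitted term,
#     delta_X ~ X_ref * c_bar * Q**(k+1),
# with the naturalness scale c_bar estimated from the observed coefficients.

def truncation_error(coefficients, Q, X_ref=1.0):
    k = len(coefficients) - 1                  # highest included order
    c_bar = max(abs(c) for c in coefficients)  # crude naturalness estimate
    return X_ref * c_bar * Q ** (k + 1)

coeffs = [1.0, 0.8, -1.2, 0.5]   # c_0 .. c_3 (illustrative values)
Q = 0.3                          # expansion parameter (illustrative)
err = truncation_error(coeffs, Q)   # 1.2 * 0.3**4
```

Under the naturalness priors considered in the paper, 68% DOB intervals reproduce estimates of exactly this form, which is why the heuristic rule emerges generically.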
Evaluating operating system vulnerability to memory errors.
Ferreira, Kurt Brian; Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke; Mueller, Frank; Fiala, David; Brightwell, Ronald Brian
2012-05-01
Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.
Saeki, Hiroshi; Magome, Tamotsu
2014-10-06
To compensate for pressure-measurement errors caused by a synchrotron radiation environment, a precise method using a hot-cathode ionization-gauge head with a correcting electrode was developed and tested in a simulation experiment with excess electrons in the SPring-8 storage ring. This method improves measurement accuracy by correctly reducing pressure-measurement errors caused by electrons originating from the external environment and by electrons originating from the primary gauge filament, as influenced by the spatial conditions of the installed vacuum-gauge head. In the simulation experiment confirming performance against errors from the external environment, the pressure-measurement error using this method was less than approximately several percent in the pressure range from 10^-5 Pa to 10^-8 Pa. A subsequent experiment using a sleeve confirmed that the improved function also reduces the error caused by spatial conditions.
The Impact of Soil Sampling Errors on Variable Rate Fertilization
R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink
2004-07-01
Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. 
Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could result in a $34 per acre cost difference for the fertilization. Because of these differences, better analysis or better sampling methods may need to be done, or more samples collected, to ensure that the soil measurements are truly representative of the field’s spatial variability.
Hamlen, Kevin W.
Investigating the SANS/CWE Top 25 Programming Errors List
Experimental demonstration of error-insensitive approximate universal-NOT gates
Sang Min Lee; Jeongho Bang; Heonoh Kim; Hyunseok Jeong; Jinhyoung Lee; Han Seb Moon
2014-03-17
We propose and experimentally demonstrate an approximate universal-NOT (U-NOT) operation that is robust against operational errors. In our proposal, the U-NOT operation is composed of stochastic unitary operations represented by the vertices of regular polyhedrons. The operation is designed to be robust against random operational errors by increasing the number of unitary operations (i.e., reference axes). Remarkably, no increase in the total number of measurements nor additional resources are required to perform the U-NOT operation. Our method can be applied in general to reduce operational errors to an arbitrary degree of precision when approximating any anti-unitary operation in a stochastic manner.
Progress in Understanding Error-field Physics in NSTX Spherical Torus Plasmas
E. Menard, R.E. Bell, D.A. Gates, S.P. Gerhardt, J.-K. Park, S.A. Sabbagh, J.W. Berkery, A. Egan, J. Kallman, S.M. Kaye, B. LeBlanc, Y.Q. Liu, A. Sontag, D. Swanson, H. Yuh, W. Zhu and the NSTX Research Team
2010-05-19
The low aspect ratio, low magnetic field, and wide range of plasma beta of NSTX plasmas provide new insight into the origins and effects of magnetic field errors. An extensive array of magnetic sensors has been used to analyze error fields, to measure error field amplification, and to detect resistive wall modes in real time. The measured normalized error-field threshold for the onset of locked modes shows a linear scaling with plasma density, a weak to inverse dependence on toroidal field, and a positive scaling with magnetic shear. These results extrapolate to a favorable error field threshold for ITER. For these low-beta locked-mode plasmas, perturbed equilibrium calculations find that the plasma response must be included to explain the empirically determined optimal correction of NSTX error fields. In high-beta NSTX plasmas exceeding the n=1 no-wall stability limit where the RWM is stabilized by plasma rotation, active suppression of n=1 amplified error fields and the correction of recently discovered intrinsic n=3 error fields have led to sustained high rotation and record durations free of low-frequency core MHD activity. For sustained rotational stabilization of the n=1 RWM, both the rotation threshold and magnitude of the amplification are important. At fixed normalized dissipation, kinetic damping models predict rotation thresholds for RWM stabilization to scale nearly linearly with particle orbit frequency. Studies for NSTX find that orbit frequencies computed in general geometry can deviate significantly from those computed in the high aspect ratio and circular plasma cross-section limit, and these differences can strongly influence the predicted RWM stability. The measured and predicted RWM stability is found to be very sensitive to the E × B rotation profile near the plasma edge, and the measured critical rotation for the RWM is approximately a factor of two higher than predicted by the MARS-F code using the semi-kinetic damping model.
Error Analysis in Nuclear Density Functional Theory (Journal Article)
Office of Scientific and Technical Information (OSTI)
Authors: Schunck, N.; McDonnell, ...
Structural power flow measurement
Falter, K.J.; Keltie, R.F.
1988-12-01
Previous investigations of structural power flow through beam-like structures resulted in some unexplained anomalies in the calculated data. In order to develop structural power flow measurement as a viable technique for machine tool design, the causes of these anomalies needed to be found. Once found, techniques for eliminating the errors could be developed. Error sources were found in the experimental apparatus itself as well as in the instrumentation. Although flexural waves are the carriers of power in the experimental apparatus, at some frequencies longitudinal waves were excited which were picked up by the accelerometers and altered power measurements. Errors were found in the phase and gain response of the sensors and amplifiers used for measurement. A transfer function correction technique was employed to compensate for these instrumentation errors.
Optimal error estimates for corrected trapezoidal rules
Talvila, Erik
2012-01-01
Corrected trapezoidal rules are proved for $\int_a^b f(x)\,dx$ under the assumption that $f''\in L^p([a,b])$ for some $1\leq p\leq\infty$. Such quadrature rules involve the trapezoidal rule modified by the addition of a term $k[f'(a)-f'(b)]$. The coefficient $k$ in the quadrature formula is found that minimizes the error estimates. It is shown that when $f'$ is merely assumed to be continuous then the optimal rule is the trapezoidal rule itself. In this case error estimates are in terms of the Alexiewicz norm. This includes the case when $f''$ is integrable in the Henstock–Kurzweil sense or as a distribution. All error estimates are shown to be sharp for the given assumptions on $f''$. It is shown how to make these formulas exact for all cubic polynomials $f$. Composite formulas are computed for uniform partitions.
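The correction term $k[f'(a)-f'(b)]$ can be demonstrated numerically. The sketch below uses the classical Euler–Maclaurin choice $k = h^2/12$ for a uniform partition (one particular instance, not the paper's optimal $k$ for each $L^p$ class) and shows the exactness-for-cubics property mentioned in the abstract:

```python
# Composite trapezoidal rule and its endpoint-derivative correction.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def corrected_trapezoid(f, df, a, b, n):
    # Trapezoidal rule plus k*[f'(a) - f'(b)] with the
    # Euler-Maclaurin coefficient k = h**2 / 12.
    h = (b - a) / n
    return trapezoid(f, a, b, n) + h ** 2 / 12 * (df(a) - df(b))

f = lambda x: x ** 3            # a cubic: the corrected rule is exact
df = lambda x: 3 * x ** 2
plain = trapezoid(f, 0.0, 1.0, 4)               # 0.265625
corrected = corrected_trapezoid(f, df, 0.0, 1.0, 4)  # 0.25, the exact integral
```

With only 4 subintervals the plain rule is off by about 0.0156, while the corrected rule recovers $\int_0^1 x^3\,dx = 1/4$ to machine precision.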
Stuttgart, Universität
EGU General Assembly 2014, Vienna, Austria. Relative importance of coloured noise vs. model errors ... produced by time-variable background model errors. In particular, the effects of measurement noise ... models for every time epoch which provide the observables in the dimension of range acceleration ...
Characterization and removal of errors due to local magnetic anomalies in directional drilling
Department of Geophysics, Colorado School of Mines
Summary: Directional drilling has evolved over the last few decades and utilizes a technique known as magnetic Measurement While Drilling (MWD). Vector measurements of the geomagnetic ...
Error Analysis of Heat Transfer for Finned-Tube Heat-Exchanger Text-Board
Chen, Y.; Zhang, J.
2006-01-01
In order to reduce the measurement error of heat transfer in water and air side for finned-tube heat-exchanger as little as possible, and design a heat-exchanger test-board measurement system economically, based on the principle of test-board system...
Lateral boundary errors in regional numerical weather prediction models
Žumer, Slobodan
Author: Ana Car; Advisor: assoc. prof. dr. Nedjeljka Zagar. January 5, 2015. Abstract: Regional models are used in many national ... Every NWP model solves the same system of equations (1) ... they describe the evolution of the atmosphere - the weather forecast.
Chinese Remaindering with Errors Oded Goldreich
International Association for Cryptologic Research (IACR)
Chinese Remaindering with Errors. Oded Goldreich, Department of Computer Science, Weizmann Institute; ... 02139, USA, madhu@mit.edu. Abstract: The Chinese Remainder Theorem states that a positive integer m is uniquely specified by its remainder modulo k ...
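The theorem quoted in the abstract can be made concrete with a short reconstruction routine. This sketch handles only the error-free case; the paper's contribution concerns recovery when some residues are corrupted:

```python
from math import prod

def crt(residues, moduli):
    """Reconstruct the unique m mod prod(moduli) from its residues
    modulo pairwise-coprime moduli (standard Chinese Remainder Theorem)."""
    M = prod(moduli)
    x = 0
    for r, p in zip(residues, moduli):
        Mi = M // p
        # pow(Mi, -1, p) is the modular inverse (Python 3.8+).
        x += r * Mi * pow(Mi, -1, p)
    return x % M

moduli = [3, 5, 7]
m = 23
residues = [m % p for p in moduli]   # [2, 3, 2]
recovered = crt(residues, moduli)    # 23
```

The uniqueness guarantee holds only for m below the product of the moduli (here 105), which is exactly the setting the abstract describes.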
Distribution of Wind Power Forecasting Errors from Operational Systems (Presentation)
Hodge, B. M.; Ela, E.; Milligan, M.
2011-10-01
This presentation offers new data and statistical analysis of wind power forecasting errors in operational systems.
Analysis of Solar Two Heliostat Tracking Error Sources
Jones, S.A.; Stone, K.W.
1999-01-28
This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.
Makarenkov, Vladimir
...mental data requires an efficient automatic routine for the selection of hits. Unfortunately, random and systematic errors can ...
Detecting Soft Errors in Stencil based Computations
Sharma, V.; Gopalkrishnan, G.; Bronevetsky, G.
2015-05-06
Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
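The detection idea can be illustrated with a small hypothetical sketch (not the SORREL library itself): predict each stencil output from its inputs with a cheap linear model and flag points whose residual exceeds a threshold. For clarity the linear coefficients below are the known stencil weights rather than coefficients learned by regression, which is what the paper's detectors do.

```python
# Toy soft-error detector for a 1-D explicit heat-equation stencil:
#   u_new[i] = 0.1*u[i-1] + 0.8*u[i] + 0.1*u[i+1]
# A linear predictor with these weights has zero residual on correct
# outputs, so any large residual marks a suspected corrupted value.
import random

random.seed(0)
w = (0.1, 0.8, 0.1)

def predict(left, centre, right):
    return w[0] * left + w[1] * centre + w[2] * right

u = [random.random() for _ in range(100)]
u_new = [predict(u[i - 1], u[i], u[i + 1]) for i in range(1, 99)]

u_new[40] += 0.5   # inject a bit-flip-like corruption into one output

flagged = [i for i, v in enumerate(u_new)
           if abs(v - predict(u[i], u[i + 1], u[i + 2])) > 1e-6]
# flagged now contains only the corrupted index, 40
```

In the paper's setting the weights are fit by linear regression on training runs, so residuals are small but nonzero and the threshold trades detection accuracy against false alarms.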
Gross error detection in process data
Singh, Gurmeet
1992-01-01
..., 1991), with many optimum properties, seems to have been untapped by chemical engineers. We first review the background of the T² test and present relevant properties of the test. IV. A Hotelling's Generalization of Student's t Test. One of the most ... Thesis (Chemical Engineering): Gross Error Detection in Process Data, by Gurmeet Singh. Approved as to style and content by: Ralph E. White (Chair of Committee), Michael Nikolaou (Member), Richard B. Griffin (Member), R. W. Flummerfelt (Head ...
Improving Memory Error Handling Using Linux
Carlton, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Blanchard, Sean P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Debardeleben, Nathan A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-07-25
As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, then the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable to deal with by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and results in reducing both hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. It will not be feasible without memory error handling to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
Discrimination with error margin between two states - Case of general occurrence probabilities -
H. Sugimoto; T. Hashimoto; M. Horibe; A. Hayashi
2009-11-18
We investigate a state discrimination problem which interpolates minimum-error and unambiguous discrimination by introducing a margin for the probability of error. We closely analyze discrimination of two pure states with general occurrence probabilities. The optimal measurements are classified into three types. One of the three types of measurement is optimal depending on parameters (occurrence probabilities and error margin). We determine the three domains in the parameter space and the optimal discrimination success probability in each domain in a fully analytic form. It is also shown that when the states to be discriminated are multipartite, the optimal success probability can be attained by local operations and classical communication. For discrimination of two mixed states, an upper bound of the optimal success probability is obtained.
Verification of unfold error estimates in the UFO code
Fehl, D.L.; Biggs, F.
1996-07-01
Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetemined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
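The Monte Carlo uncertainty procedure described above can be sketched in a few lines. This is a toy one-channel "unfold", not the UFO code; the response value and source strength are invented for illustration:

```python
# Monte Carlo unfold-uncertainty estimate: perturb the measured data with
# 5% Gaussian deviates (as in the paper), repeat the inversion on each
# random data set, and take the spread of the unfolded results.
import random
import statistics

random.seed(1)
response = 2.0               # toy 1x1 "response matrix"
true_source = 10.0           # invented source strength
datum = response * true_source

unfolds = []
for _ in range(100):         # 100 random data sets, matching the paper
    noisy = datum * (1 + random.gauss(0.0, 0.05))  # 5% imprecision
    unfolds.append(noisy / response)               # trivial "unfold"

mc_error = statistics.stdev(unfolds)  # ~5% of the unfolded value
```

For a genuinely multi-channel, possibly underdetermined unfold the same loop applies with the full inversion in place of the division, which is why the abstract notes the Monte Carlo route works where the error-matrix method does not.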
Decoherence and dephasing errors caused by the dc Stark effect...
Office of Scientific and Technical Information (OSTI)
Decoherence and dephasing errors caused by the dc Stark effect in rapid ion transport Citation Details In-Document Search Title: Decoherence and dephasing errors caused by the dc...
Human error contribution to nuclear materials-handling events
Sutton, Bradley (Bradley Jordan)
2007-01-01
This thesis analyzes a sample of 15 fuel-handling events from the past ten years at commercial nuclear reactors with significant human error contributions in order to detail the contribution of human error to fuel-handling ...
Forward Error Correction and Functional Programming
Bull, Tristan Michael
2011-04-25
[Table-of-contents fragment: 6.1 Annapolis Micro Wildstar 5 DDR2 DRAM Interface; 6.2 Dual-Port DRAM Wrapper; 6.3 Kansas Lava DRAM Interface; 7 Conclusion; 7.1 Future Work] ... codewords. We ran the simulation using input data with energy-per-bit to noise power spectral density ratios (Eb/N0) of 3 dB to 6 dB in 0.5 dB increments. For each Eb/N0 value, we ran the simulation until at least 25,000 bit errors were recorded. Results ...
Unitary-process discrimination with error margin
T. Hashimoto; A. Hayashi; M. Hayashi; M. Horibe
2010-06-10
We investigate a discrimination scheme between unitary processes. By introducing a margin for the probability of erroneous guess, this scheme interpolates the two standard discrimination schemes: minimum-error and unambiguous discrimination. We present solutions for two cases. One is the case of two unitary processes with general prior probabilities. The other is the case with a group symmetry: the processes comprise a projective representation of a finite group. In the latter case, we found that unambiguous discrimination is a kind of "all or nothing": the maximum success probability is either 0 or 1. We also closely analyze how entanglement with an auxiliary system improves discrimination performance.
On the Error in QR Integration
Dieci, Luca; Van Vleck, Erik
2008-03-07
Society for Industrial and Applied Mathematics, Vol. 46, No. 3, pp. 1166-1189. On the Error in QR Integration. Luca Dieci and Erik S. Van Vleck. Abstract: An important change of variables for a linear time-varying system x' = A(t)x, t ≥ 0, is that induced ... diag(X) is the matrix comprising the diagonal part of X, the rest being all 0's; upp(X) is the matrix comprising the upper triangular part of X, the rest being all 0's; and low(X) is the matrix comprising the strictly lower triangular part of X, the rest being all 0's ...
Using CO2 spatial variability to quantify representation errors of satellite CO2 retrievals
Michalak, Anna M.
... global data of column-averaged CO2 dry-air mole fraction (XCO2) at high spatial resolutions. These data ... 2008; published 29 August 2008. [1] Satellite measurements of column-averaged CO2 dry-air mole ...
Automatic detection of dimension errors in spreadsheets Chris Chambers, Martin Erwig
Erwig, Martin
... University, USA. Keywords: Spreadsheet; Dimension; Unit of measurement; Static analysis; Inference rule; Error detection. Abstract: We present a reasoning system for inferring dimension information in spreadsheets. This system can be used to check the consistency of spreadsheet formulas and thus ...
Quantifying Errors Associated with Satellite Sampling of Offshore Wind Speeds
S.C. Pryor; R. ...
1 ..., Bloomington, IN 47405, USA. Tel: 1-812-855-5155. Fax: 1-812-855-1661. Email: spryor@indiana.edu. 2 Dept. of Wind ... an attractive proposition for measuring wind speeds over the oceans because in principle they also offer ...
Bolstered Error Estimation Ulisses Braga-Neto a,c
Braga-Neto, Ulisses
... the bolstered error estimators proposed in this paper, as part of a larger library for classification and error estimation ... It has a direct geometric interpretation and can be easily applied to any classification rule ... smoothed error estimation. In some important cases, such as a linear classification rule with a Gaussian ...
A Taxonomy of Number Entry Error
Subramanian, Sriram
Sarah Wiseman, UCLIC, MPEB, Malet Place, London, WC1E 7JE. ... and the subsequent process of creating a taxonomy of errors from the information gathered. A total of 345 errors were ... These codes are then organised into a taxonomy similar to that of Zhang et al. (2004). We show how ...
Scheme for precise correction of orbit variation caused by dipole error field of insertion device
Nakatani, T.; Agui, A.; Aoyagi, H.; Matsushita, T.; Takao, M.; Takeuchi, M.; Yoshigoe, A.; Tanaka, H.
2005-05-15
We developed a scheme for precisely correcting the orbit variation caused by a dipole error field of an insertion device (ID) in a storage ring and investigated its performance. The key point for achieving the precise correction is to extract the variation of the beam orbit caused by the change of the ID error field from the observed variation. We periodically change parameters such as the gap and phase of the specified ID with a mirror-symmetric pattern over the measurement period to modulate the variation. The orbit variation is measured using conventional wide-frequency-band detectors and then the induced variation is extracted precisely through averaging and filtering procedures. Furthermore, the mirror-symmetric pattern enables us to independently extract the orbit variations caused by a static error field and by a dynamic one, e.g., an error field induced by the dynamical change of the ID gap or phase parameter. We built a time synchronization measurement system with a sampling rate of 100 Hz and applied the scheme to the correction of the orbit variation caused by the error field of an APPLE-2-type undulator installed in the SPring-8 storage ring. The result shows that the developed scheme markedly improves the correction performance and suppresses the orbit variation caused by the ID error field down to the order of submicron. This scheme is applicable not only to the correction of the orbit variation caused by a special ID, the gap or phase of which is periodically changed during an experiment, but also to the correction of the orbit variation caused by a conventional ID which is used with a fixed gap and phase.
Integrating human related errors with technical errors to determine causes behind offshore accidents
Aamodt, Agnar
... errors were embedded as an integral part of the oil-well drilling operation. To reduce the number ... of non-productive time (NPT) during oil-well drilling. NPT exhibits a much lower declining trend than ... assessment of the failure. The method is based on a knowledge model of the oil-well drilling process. All ...
In Search of a Taxonomy for Classifying Qualitative Spreadsheet Errors
Przasnyski, Zbigniew; Seal, Kala Chand
2011-01-01
Most organizations use large and complex spreadsheets that are embedded in their mission-critical processes and are used for decision-making purposes. Identification of the various types of errors that can be present in these spreadsheets is, therefore, an important control that organizations can use to govern their spreadsheets. In this paper, we propose a taxonomy for categorizing qualitative errors in spreadsheet models that offers a framework for evaluating the readiness of a spreadsheet model before it is released for use by others in the organization. The classification was developed based on types of qualitative errors identified in the literature and errors committed by end-users in developing a spreadsheet model for Panko's (1996) "Wall problem". Closer inspection of the errors reveals four logical groupings of the errors creating four categories of qualitative errors. The usability and limitations of the proposed taxonomy and areas for future extension are discussed.
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Beckerman, M.; Jones, J.P.
1999-02-01
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.
Error analysis in cross-correlation of sky maps: application to the ISW detection
Anna Cabre; Pablo Fosalba; Enrique Gaztanaga; Marc Manera
2007-01-15
Constraining cosmological parameters from measurements of the Integrated Sachs-Wolfe effect requires developing robust and accurate methods for computing statistical errors in the cross-correlation between maps. This paper presents a detailed comparison of such error estimation applied to the case of cross-correlation of Cosmic Microwave Background (CMB) and large-scale structure data. We compare theoretical models for error estimation with Monte Carlo simulations where both the galaxy and the CMB maps vary around a fiducial auto-correlation and cross-correlation model which agrees well with the current concordance LCDM cosmology. Our analysis compares estimators both in harmonic and configuration (or real) space, quantifies the accuracy of the error analysis, and discusses the impact of partial sky survey area and the choice of input fiducial model on dark-energy constraints. We show that purely analytic approaches yield accurate errors even in surveys that cover only 10% of the sky and that parameter constraints strongly depend on the fiducial model employed. Alternatively, we discuss the advantages and limitations of error estimators that can be directly applied to data. In particular, we show that errors and covariances from the Jack-Knife method agree well with the theoretical approaches and simulations. We also introduce a novel method in real space that is computationally efficient and can be applied to real data and realistic survey geometries. Finally, we present a number of new findings and prescriptions that can be useful for analysis of real data and forecasts, and present a critical summary of the analyses done to date.
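The Jack-Knife estimator mentioned above is easy to state concretely. As a hedged illustration, the sketch below applies delete-one jackknife resampling to a simple statistic (the mean of invented data) rather than to a map cross-correlation; the resampling-and-rescaling pattern is the same:

```python
# Delete-one jackknife error estimate for a statistic theta(data).
# For the mean, this reproduces the usual standard error s/sqrt(n).
import statistics

data = [2.1, 1.9, 2.3, 2.0, 1.8, 2.2, 2.4, 1.7]   # invented measurements
n = len(data)
theta = statistics.mean(data)                      # point estimate

# Recompute the statistic with each sample left out in turn.
partials = [statistics.mean(data[:i] + data[i + 1:]) for i in range(n)]
pbar = statistics.mean(partials)

# Jackknife variance carries the (n-1)/n rescaling factor.
jk_var = (n - 1) / n * sum((p - pbar) ** 2 for p in partials)
jk_err = jk_var ** 0.5
```

For map cross-correlations the "samples" are sky patches deleted one at a time, and the same (n-1)/n-scaled spread of the partial estimates gives the error bars and covariances the abstract compares against theory.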
Quantum Error Correction with magnetic molecules
José J. Baldoví; Salvador Cardona-Serra; Juan M. Clemente-Juan; Luis Escalera-Moreno; Alejandro Gaita-Ariño; Guillermo Mínguez Espallargas
2014-08-22
Quantum algorithms often assume independent spin qubits to produce trivial $|\uparrow\rangle=|0\rangle$, $|\downarrow\rangle=|1\rangle$ mappings. This can be unrealistic in many solid-state implementations with sizeable magnetic interactions. Here we show that the lower part of the spectrum of a molecule containing three exchange-coupled metal ions with $S=1/2$ and $I=1/2$ is equivalent to nine electron-nuclear qubits. We derive the relation between spin states and qubit states in reasonable parameter ranges for the rare earth $^{159}$Tb$^{3+}$ and for the transition metal Cu$^{2+}$, and study the possibility to implement Shor's Quantum Error Correction code on such a molecule. We also discuss recently developed molecular systems that could be adequate from an experimental point of view.
Gross error detection and stage efficiency estimation in a separation process
Serth, R.W.; Srikanth, B. (Dept. of Chemical and Natural Gas Engineering); Maronga, S.J. (Dept. of Chemical and Process Engineering)
1993-10-01
Accurate process models are required for optimization and control in chemical plants and petroleum refineries. These models involve various equipment parameters, such as stage efficiencies in distillation columns, the values of which must be determined by fitting the models to process data. Since the data contain random and systematic measurement errors, some of which may be large (gross errors), they must be reconciled to obtain reliable estimates of equipment parameters. The problem thus involves parameter estimation coupled with gross error detection and data reconciliation. MacDonald and Howat (1988) studied the above problem for a single-stage flash distillation process. Their analysis was based on the definition of stage efficiency due to Hausen, which has some significant disadvantages in this context, as discussed below. In addition, they considered only data sets which contained no gross errors. The purpose of this article is to extend the above work by considering alternative definitions of stage efficiency and efficiency estimation in the presence of gross errors.
Huang, Weidong
2011-01-01
Surface slope error of the concentrator is one of the main factors influencing the performance of solar concentrating collectors: it deviates the reflected ray and reduces the intercepted radiation. This paper presents a general equation for calculating the standard deviation of the reflected-ray error from the slope error through geometric optics, applies the equation to five kinds of solar concentrating reflectors, and provides typical results. The results indicate that the slope error is transferred to the reflected ray more than two-fold when the incidence angle is greater than 0. The equation for the reflected-ray error fits all reflection surfaces in general and can also be applied to control the error when designing an off-axis (abaxial) optical system.
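The baseline two-fold transfer can be checked with a minimal vector-reflection sketch (an illustration, not the paper's general equation): tilting the surface normal by a slope error delta within the incidence plane rotates the reflected ray by exactly 2*delta; the transfer factor for slope-error components out of the incidence plane at oblique incidence differs, which is what the general equation captures.

```python
import numpy as np

def reflect(d, n):
    """Reflect ray direction d off a surface with unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

theta = np.radians(30.0)      # incidence angle measured from the normal
delta = 1e-3                  # slope error (radians), in the incidence plane

d = np.array([np.sin(theta), 0.0, -np.cos(theta)])     # incoming ray
n = np.array([0.0, 0.0, 1.0])                          # ideal normal
n_err = np.array([np.sin(delta), 0.0, np.cos(delta)])  # normal tilted by slope error

r_ideal = reflect(d, n)
r_real = reflect(d, n_err)

# Angular deviation between ideal and perturbed reflected rays.
dev = np.arccos(np.clip(np.dot(r_ideal, r_real), -1.0, 1.0))
print(dev / delta)   # ratio of reflected-ray error to slope error
```

The ratio comes out to 2 for in-plane errors at any incidence angle; the paper's larger combined factors arise when both slope-error components are considered.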
Ghezzehei, T.A.
2008-05-29
Application of time domain reflectometry (TDR) in soil hydrology often involves the conversion of TDR-measured dielectric permittivity to water content using universal calibration equations (empirical or physically based). Deviations of soil-specific calibrations from the universal calibrations have been noted and are usually attributed to peculiar composition of soil constituents, such as high content of clay and/or organic matter. Although it is recognized that soil disturbance by TDR waveguides may have impact on measurement errors, to our knowledge, there has not been any quantification of this effect. In this paper, we introduce a method that estimates this error by combining two models: one that describes soil compaction around cylindrical objects and another that translates change in bulk density to evolution of soil water retention characteristics. Our analysis indicates that the compaction pattern depends on the mechanical properties of the soil at the time of installation. The relative error in water content measurement depends on the compaction pattern as well as the water content and water retention properties of the soil. Illustrative calculations based on measured soil mechanical and hydrologic properties from the literature indicate that the measurement errors of using a standard three-prong TDR waveguide could be up to 10%. We also show that the error scales linearly with the ratio of rod radius to the interradius spacing.
State discrimination with error margin and its locality
A. Hayashi; T. Hashimoto; M. Horibe
2008-07-10
There are two common settings in a quantum-state discrimination problem. One is minimum-error discrimination where a wrong guess (error) is allowed and the discrimination success probability is maximized. The other is unambiguous discrimination where errors are not allowed but the inconclusive result "I don't know" is possible. We investigate discrimination problem with a finite margin imposed on the error probability. The two common settings correspond to the error margins 1 and 0. For arbitrary error margin, we determine the optimal discrimination probability for two pure states with equal occurrence probabilities. We also consider the case where the states to be discriminated are multipartite, and show that the optimal discrimination probability can be achieved by local operations and classical communication.
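The two limiting settings have closed-form optima for two equiprobable pure states, which bracket the paper's finite-margin results: the Helstrom bound for minimum-error discrimination (margin 1) and the Ivanovic-Dieks-Peres limit for unambiguous discrimination (margin 0). A quick numerical check:

```python
import numpy as np

def helstrom(psi, phi):
    """Minimum-error success probability for two equiprobable pure states."""
    s = abs(np.vdot(psi, phi))
    return 0.5 * (1.0 + np.sqrt(1.0 - s**2))

def unambiguous(psi, phi):
    """Optimal success probability for unambiguous discrimination (IDP limit)."""
    s = abs(np.vdot(psi, phi))
    return 1.0 - s

alpha = np.arccos(0.6)                    # states with overlap <psi|phi> = 0.6
psi = np.array([1.0, 0.0])
phi = np.array([np.cos(alpha), np.sin(alpha)])

p_me = helstrom(psi, phi)     # error margin m = 1
p_ua = unambiguous(psi, phi)  # error margin m = 0
print(p_me, p_ua)
```

Intermediate margins interpolate between these two values; the formulas above are the standard endpoint results, not the paper's general finite-margin expression.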
A technique for human error analysis (ATHEANA)
Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others]
1996-05-01
Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.
Camp, Charles H; Cicerone, Marcus T
2015-01-01
Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending ...
Mutual information, bit error rate and security in Wójcik's scheme
Zhanjun Zhang
2004-02-21
In this paper, correct calculations of the mutual information of the whole transmission and of the quantum bit error rate (QBER) are presented. Mistakes in the general conclusions concerning the mutual information, the QBER and the security in Wójcik's paper [Phys. Rev. Lett. 90, 157901 (2003)] are pointed out.
Kernel Regression with Correlated Errors
De Brabanter, K.; De Brabanter, J.; Suykens, J.A.K.
It is a well-known problem that obtaining a correct bandwidth in nonparametric regression is difficult ... support vector machines for regression. Keywords: nonparametric regression, correlated errors, ...
Solving LWE problem with bounded errors in polynomial time
International Association for Cryptologic Research (IACR)
Solving LWE problem with bounded errors in polynomial time. Jintai Ding. ... what we call the learning with bounded errors (LWBE) problems, we can solve ... with complexity O(n^D). ... this problem corresponds to the learning parity with noise (LPN) problem. There are several ways to solve ...
ERROR-TOLERANT MULTI-MODAL SENSOR FUSION
Koushanfar, Farinaz; Slijepcevic, Sasha; Potkonjak, Miodrag
... is multi-modal sensor fusion, where data from sensors of different modalities are combined in order ... applications, including multi-modal sensor fusion, is to ensure that all of the techniques and tools are error-tolerant ...
Error detection through consistency checking
Silver, Whendee
Gong, Peng; Mu, Lan. Center for Assessment & Monitoring ... Hall, University of California, Berkeley, Berkeley, CA 94720-3110 (gong@nature.berkeley.edu, mulan...). ... accessibility, and timeliness as recorded in the lineage data (Chen and Gong, 1998). Spatial error refers ...
Analysis of Probabilistic Error Checking Procedures on Storage Systems
Chen, Ing-Ray
Chen, Ing-Ray; Yen, I-Ling (irchen@iie.ncku.edu.tw). Conventionally, error checking on storage systems is performed on-the-fly (with probability 1) as the storage system is being accessed, in order to improve the reliability ...
ADJOINT AND DEFECT ERROR BOUNDING AND CORRECTION FOR FUNCTIONAL ESTIMATES
Pierce, Niles A.
... decades. Integral functionals also arise in other aerospace areas such as the calculation of radar cross sections. ... the error in a functional that results from residual errors in approximating the solution to the partial differential equation ... to handle flows with shocks; numerical experiments confirm 4th-order error estimates for a pressure integral ...
Kinematic Error Correction for Minimally Invasive Surgical Robots
... in two likely sources of kinematic error: port displacement and instrument shaft flexion. For a quasi-... To reach the surgical site near the chest wall, the instrument shaft applies significant torque to the port, ... and the instrument shaft to bend. These kinematic errors impair positioning of the robot and cause deviations from ...
Grid-scale Fluctuations and Forecast Error in Wind Power
G. Bel; C. P. Connaughton; M. Toots; M. M. Bandi
2015-03-29
The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_{\tau}$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_{\zeta}$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no a priori knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_{\tau}$) and the scaling error ($e_{\zeta}$).
Error Control of Iterative Linear Solvers for Integrated Groundwater Models
Dixon, Matthew; Brush, Charles; Chung, Francis; Dogrul, Emin; Kadir, Tariq
2010-01-01
An open problem that arises when using modern iterative linear solvers, such as the preconditioned conjugate gradient (PCG) method or Generalized Minimum RESidual method (GMRES) is how to choose the residual tolerance in the linear solver to be consistent with the tolerance on the solution error. This problem is especially acute for integrated groundwater models which are implicitly coupled to another model, such as surface water models, and resolve both multiple scales of flow and temporal interaction terms, giving rise to linear systems with variable scaling. This article uses the theory of 'forward error bound estimation' to show how rescaling the linear system affects the correspondence between the residual error in the preconditioned linear system and the solution error. Using examples of linear systems from models developed using the USGS GSFLOW package and the California State Department of Water Resources' Integrated Water Flow Model (IWFM), we observe that this error bound guides the choice of a prac...
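The correspondence the article exploits can be demonstrated with the standard forward error bound ||x - xhat|| / ||x|| <= cond(A) * ||b - A xhat|| / ||b||. In this toy 2x2 system (assumed for illustration, not taken from GSFLOW or IWFM), bad scaling makes the bound uselessly loose, while row equilibration restores a tight relationship between relative residual and solution error:

```python
import numpy as np

# Badly scaled system: the two equations differ by six orders of magnitude.
A = np.array([[1.0e6, 0.0],
              [0.0,   1.0]])
x_true = np.array([1.0, 1.0])
b = A @ x_true

# Pretend an iterative solver stopped with a small error in each component.
x_approx = x_true + np.array([1.0e-3, -1.0e-3])

def forward_bound(A, b, x_approx, x_true):
    """Actual relative error and its forward bound cond(A) * relative residual."""
    rel_err = np.linalg.norm(x_approx - x_true) / np.linalg.norm(x_true)
    rel_res = np.linalg.norm(b - A @ x_approx) / np.linalg.norm(b)
    return rel_err, np.linalg.cond(A) * rel_res

rel_err, bound = forward_bound(A, b, x_approx, x_true)

# Row-equilibrate: solve D^{-1} A x = D^{-1} b with D = diag(max row entries).
D = np.diag(np.abs(A).max(axis=1))
A_s, b_s = np.linalg.solve(D, A), np.linalg.solve(D, b)
rel_err_s, bound_s = forward_bound(A_s, b_s, x_approx, x_true)

print(rel_err, bound, bound_s)   # same error, vastly different bounds
```

After rescaling, the residual tolerance passed to a PCG- or GMRES-type solver translates directly into a solution-error tolerance, which is the practical point of the error-bound analysis.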
Hess-Flores, M
2011-11-10
Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. 
Another application is in reconstruction pre-processing, where an algorithm detects and discards frames that would lead to inaccurate feature matching, camera pose estimation degeneracies or mathematical instability in structure computation based on a residual error comparison between two different match motion models. The presented algorithms were designed for aerial video but have been proven to work across different scene types and camera motions, and for both real and synthetic scenes.
Protecting PUF Error Correction by Codeword Masking
International Association for Cryptologic Research (IACR)
... key generation. While the advantages of PUF-based key extraction and embedding have been shown ... such as Radio Frequency Identification (RFID) tags [5], but also for high-security products like smartcards [7] ... because raw PUF measurements naturally involve a certain amount of noise. During an enrollment phase ...
Errors in Quantitative Image Analysis due to
Rubin, Daniel L.
... Massachusetts General Hospital, Boston, MA. PURPOSE: To evaluate the ability of various software (SW) tools ... different SW tools to measure compartment-specific region-of-interest intensity. RESULTS: Images generated ... by the majority of tested quantitative image analysis SW tools. Incorrect image scaling leads to intensity ...
Quantum Error Correcting Codes and the Security Proof of the BB84 Protocol
Ramesh Bhandari
2014-08-30
We describe the popular BB84 protocol and critically examine its security proof as presented by Shor and Preskill. The proof requires the use of quantum error correcting codes called the Calderbank-Shor-Steane (CSS) quantum codes. These quantum codes are constructed in the quantum domain from two suitable classical linear codes, one used to correct for bit-flip errors and the other for phase-flip errors. Consequently, as a prelude to the security proof, the report reviews the essential properties of linear codes, especially the concept of cosets, before building the quantum codes that are utilized in the proof. The proof considers an entanglement-based security protocol, which is subsequently reduced to a "Prepare and Measure" protocol similar in structure to the BB84 protocol, thus establishing the security of the BB84 protocol. The proof, however, is not without assumptions, which are also enumerated. The treatment throughout is pedagogical, and this report therefore serves as a useful tutorial for researchers, practitioners and students, new to the field of quantum information science, in particular quantum cryptography, as it develops the proof in a systematic manner, starting from the properties of linear codes, and then advancing to the quantum error correcting codes, which are critical to the understanding of the security proof.
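The classical ingredient of the CSS construction is syndrome (coset) decoding: the syndrome of a received word identifies the coset of the code containing it, and hence the most likely error. As a warm-up sketch (the report builds the actual quantum codes from two such classical codes), single-bit correction with the [7,4] Hamming code:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j is the binary
# representation of j+1, so a nonzero syndrome directly names the flipped bit.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def correct(word):
    """Single-bit error correction via the syndrome (coset) of the received word."""
    syndrome = H @ word % 2
    pos = int(syndrome[0]) * 4 + int(syndrome[1]) * 2 + int(syndrome[2])  # 1-indexed
    fixed = word.copy()
    if pos:                       # nonzero syndrome: the coset leader is e_pos
        fixed[pos - 1] ^= 1
    return fixed

codeword = np.array([1, 0, 1, 0, 1, 0, 1])   # a valid Hamming codeword
assert not (H @ codeword % 2).any()

received = codeword.copy()
received[4] ^= 1                              # channel flips bit 5
print(correct(received))                      # the flip is located and undone
```

In the CSS construction, one such classical code handles bit flips and a second (its dual-containing partner) handles phase flips in the conjugate basis.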
Li, T S; Marshall, J L; Tucker, D; Kessler, R; Annis, J; Bernstein, G M; Boada, S; Burke, D L; Finley, D A; James, D J; Kent, S; Lin, H; Marriner, J; Mondrik, N; Nagasawa, D; Rykoff, E S; Scolnic, D; Walker, A R; Wester, W; Abbott, T M C; Allam, S; Benoit-Lévy, A; Bertin, E; Brooks, D; Capozzi, D; Rosell, A Carnero; Kind, M Carrasco; Carretero, J; Crocce, M; Cunha, C E; D'Andrea, C B; da Costa, L N; Desai, S; Diehl, H T; Doel, P; Flaugher, B; Fosalba, P; Frieman, J; Gaztanaga, E; Goldstein, D A; Gruen, D; Gruendl, R A; Gutierrez, G; Honscheid, K; Kuehn, K; Kuropatkin, N; Maia, M A G; Melchior, P; Miller, C J; Miquel, R; Mohr, J J; Neilsen, E; Nichol, R C; Nord, B; Ogando, R; Plazas, A A; Romer, A K; Roodman, A; Sako, M; Sanchez, E; Scarpine, V; Schubnell, M; Sevilla-Noarbe, I; Smith, R C; Soares-Santos, M; Sobreira, F; Suchyta, E; Tarle, G; Thomas, D; Vikram, V
2016-01-01
Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example...
Balancing aggregation and smoothing errors in inverse models
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Turner, A. J.; Jacob, D. J.
2015-01-13
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
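Method (1), grid coarsening, is easy to sketch: averaging blocks of adjacent state-vector elements and mapping back to native resolution imposes a uniform within-block prior, and the resulting aggregation error grows as the state dimension shrinks. A toy 1-D illustration with assumed data, not the paper's inversions:

```python
import numpy as np

# Native-resolution state vector: broad background plus one sharp local feature.
n = 64
x = 0.2 * np.ones(n)
x[30:34] += np.array([1.0, 3.0, 3.0, 1.0])

def aggregation_error(x, k):
    """Coarsen by averaging blocks of k adjacent elements, then map back."""
    coarse = x.reshape(-1, k).mean(axis=1)   # aggregation (grid coarsening)
    back = np.repeat(coarse, k)              # imposed prior: uniform within block
    return np.linalg.norm(back - x) / np.linalg.norm(x)

errors = {k: aggregation_error(x, k) for k in (1, 2, 4, 8, 16)}
print(errors)   # error is zero at native resolution and grows with coarsening
```

The GMM approach in the paper avoids much of this growth by letting the reduced basis follow local features instead of fixed blocks.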
Wind Power Forecasting Error Distributions: An International Comparison; Preprint
Hodge, B. M.; Lew, D.; Milligan, M.; Holttinen, H.; Sillanpaa, S.; Gomez-Lazaro, E.; Scharff, R.; Soder, L.; Larsen, X. G.; Giebel, G.; Flynn, D.; Dobschinski, J.
2012-09-01
Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.
A High-Precision Instrument for Mapping of Rotational Errors in Rotary Stages
Xu, W.; Lauer, K.; Chu, Y.; Nazaretski, E.
2014-10-02
A rotational stage is a key component of every X-ray instrument capable of providing tomographic or diffraction measurements. To perform accurate three-dimensional reconstructions, runout errors due to imperfect rotation (e.g. circle of confusion) must be quantified and corrected. A dedicated instrument capable of full characterization and circle of confusion mapping in rotary stages down to the sub-10 nm level has been developed. A high-stability design, with an array of five capacitive sensors, allows simultaneous measurements of wobble, radial and axial displacements. The developed instrument has been used for characterization of two mechanical stages which are part of an X-ray microscope.
Doolan, P [University College London, London (United Kingdom); Massachusetts General Hospital, Boston, MA (United States)]; Dias, M [Massachusetts General Hospital, Boston, MA (United States); Dipartimento di Elettronica, Informazione e Bioingegneria - DEIB, Politecnico di Milano (Italy)]; Collins Fekete, C [Massachusetts General Hospital, Boston, MA (United States); Département de physique, de génie physique et d'optique et Centre de recherche sur le cancer, Université Laval, Québec (Canada)]; Seco, J [Massachusetts General Hospital, Boston, MA (United States)]
2014-06-01
Purpose: The procedure for proton treatment planning involves the conversion of the patient's X-ray CT from Hounsfield units into relative stopping powers (RSP), using a stoichiometric calibration curve (Schneider 1996). In clinical practice a 3.5% margin is added to account for the range uncertainty introduced by this process and other errors. RSPs for real tissues are calculated using composition data and the Bethe-Bloch formula (ICRU 1993). The purpose of this work is to investigate the impact that systematic errors in the stoichiometric calibration have on the proton range. Methods: Seven tissue inserts of the Gammex 467 phantom were imaged using our CT scanner. Their known chemical compositions (Watanabe 1999) were then used to calculate the theoretical RSPs, using the same formula as would be used for human tissues in the stoichiometric procedure. The actual RSPs of these inserts were measured using a Bragg peak shift measurement in the proton beam at our institution. Results: The theoretical calculation of the RSP was lower than the measured RSP values, by a mean/max error of -1.5/-3.6%. For all seven inserts the theoretical approach underestimated the RSP, with errors variable across the range of Hounsfield units. Systematic errors for lung (average of two inserts), adipose and cortical bone were -3.0/-2.1/-0.5%, respectively. Conclusion: There is a systematic underestimation caused by the theoretical calculation of RSP, a crucial step in the stoichiometric calibration procedure. As such, we propose that proton calibration curves should be based on measured RSPs. Investigations will be made to see if the same systematic errors exist for biological tissues. The impact of these differences on the range of proton beams, for phantoms and patient scenarios, will be investigated. This project was funded equally by the Engineering and Physical Sciences Research Council (UK) and Ion Beam Applications (Louvain-La-Neuve, Belgium).
A complete Randomized Benchmarking Protocol accounting for Leakage Errors
T. Chasseur; F. K. Wilhelm
2015-07-09
Randomized Benchmarking allows one to efficiently and scalably characterize the average error of a unitary 2-design, such as the Clifford group $\mathcal{C}$, on a physical candidate for quantum computation, as long as there are no non-computational leakage levels in the system. We investigate the effect of leakage errors on Randomized Benchmarking, induced by an additional level per physical qubit, and provide a modified protocol that allows one to derive reliable estimates for the error per gate in their presence. We assess the variance of the sequence fidelity, which determines the number of random sequences needed for valid fidelity estimation. Our protocol allows for gate-dependent error channels without being restricted to perturbations. We show that our protocol is compatible with Interleaved Randomized Benchmarking and extend it to benchmarking of arbitrary gates. This setting is relevant for superconducting transmon qubits, among other systems.
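The zeroth-order decay model underlying standard, leakage-free Randomized Benchmarking can be sketched as follows: the sequence fidelity decays as F(m) = A * p^m + B, and for a single qubit the error per gate follows from the fitted p as r = (1 - p) / 2. The numbers below are illustrative assumptions; the paper's modified protocol generalizes this fit to handle leakage levels.

```python
import numpy as np

# Assumed single-qubit RB decay parameters (illustrative, noise-free).
p_true, A_amp, B = 0.995, 0.5, 0.5
m = np.arange(1, 101)                  # sequence lengths
F = A_amp * p_true**m + B              # idealized sequence fidelities

# With B pinned at the fully depolarized baseline 1/2, the decay constant p
# follows from a log-linear fit of F(m) - B against m.
slope, _ = np.polyfit(m, np.log(F - B), 1)
p_est = np.exp(slope)
r_est = (1.0 - p_est) / 2.0            # average error per gate, d = 2
print(p_est, r_est)
```

With experimental data one would fit A, B and p jointly (and, per the paper, an extra decay for leakage) rather than pin B, but the extraction of the error per gate from the decay constant is the same.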
Honest Confidence Intervals for the Error Variance in Stepwise Regression
Stine, Robert A.
Dean P. Foster and Robert A. Stine. ... alternatives are used. These simpler algorithms (e.g., forward or backward stepwise regression) obtain ...
Servo control booster system for minimizing following error
Wise, William L. (Mountain View, CA)
1985-01-01
A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second, unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
Removing Systematic Errors from Rotating Shadowband Pyranometer Data
Vignola, Frank (University of Oregon)
... of the pyranometer to briefly shade the pyranometer once a minute. Direct horizontal irradiance is calculated ... used in programs evaluating the performance of photovoltaic systems, and systematic errors in the data ...
Error estimation and adaptive mesh refinement for aerodynamic flows
Hartmann, Ralf
Ralf Hartmann and Paul Houston. ... 38108 Braunschweig, Germany (Ralf.Hartmann@dlr.de); School of Mathematical Sciences, University ...
MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS
Hartmann, Ralf
RALF HARTMANN. ... Institute of Scientific Computing, TU Braunschweig, Germany (Ralf.Hartmann@dlr.de).
Error estimation and adaptive mesh refinement for aerodynamic flows
Hartmann, Ralf
Ralf Hartmann, Joachim Held. ... Lilienthalplatz 7, 38108 Braunschweig, Germany (Ralf.Hartmann@dlr.de).
MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS
Hartmann, Ralf
RALF HARTMANN. ... Germany (Ralf.Hartmann@dlr.de). ... quantity under consideration. However, in many ...
Inflated applicants: Attribution errors in performance evaluation by professionals
Swift, Samuel; Moore, Don; Sharek, Zachariah; Gino, Francesca
2013-01-01
... performance among applicants from each "type" of school ... and interview performance. Each school provided multi-year ... (PLOS ONE, July 2013, Volume 8, Issue 7, e69258).
Wind Power Forecasting Error Distributions over Multiple Timescales: Preprint
Hodge, B. M.; Milligan, M.
2011-03-01
In this paper, we examine the shape of the persistence model error distribution for ten different wind plants in the ERCOT system over multiple timescales. Comparisons are made between the experimental distribution shape and that of the normal distribution.
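A persistence forecast makes the timescale dependence of forecast errors easy to reproduce on synthetic data (a seeded AR(1) stand-in for normalized wind power, not the ERCOT data of the paper): the error at horizon tau is e_tau(t) = x(t + tau) - x(t), and its spread widens with tau.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a wind power signal: a mean-reverting AR(1) series.
n, rho = 100_000, 0.99
noise = rng.normal(0.0, 0.1, n)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = rho * x[t - 1] + noise[t]

# Persistence model: forecast x(t + tau) = x(t); error e_tau = x(t+tau) - x(t).
for tau in (1, 4, 16, 64):
    e = x[tau:] - x[:-tau]
    print(tau, e.std())   # error spread grows with the forecast horizon
```

The shapes of these distributions (tails, skew) relative to a normal distribution are what the paper compares across real wind plants.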
On Student's 1908 Article "The Probable Error of a Mean"
Kim, Jong-Min
's "attention" resulted in a report, "The Application of the `Law of Error' to the work of the Brewery" dated No] and other records available in their Dublin brewery"; see Pearson 1939, p. 213.) Unable to find
Performance optimizations for compiler-based error detection
Mitropoulou, Konstantina
2015-06-29
The trend towards smaller transistor technologies and lower operating voltages stresses the hardware and makes transistors more susceptible to transient errors. In future systems, performance and power gains will come ...
Error bars for linear and nonlinear neural network regression models
Penny, Will
Error bars for linear and nonlinear neural network regression models. William D. Penny and Stephen J. Roberts, Imperial College of Science, Technology and Medicine, London SW7 2BT, U.K. w.penny@ic.ac.uk, s…
NOVELTY, CONFIDENCE & ERRORS IN CONNECTIONIST Stephen J. Roberts & William Penny
Roberts, Stephen
NOVELTY, CONFIDENCE & ERRORS IN CONNECTIONIST SYSTEMS. Stephen J. Roberts & William Penny, Imperial College of Science, Technology & Medicine, London, UK. s.j.roberts@ic.ac.uk, w.penny@ic.ac.uk. April 21, 1997.
Predicting Intentional Tax Error Using Open Source Literature and Data
for each PUMS respondent (or agent), in certain line item/taxpayer categories, allowing us to construct dis… Contents include: Results of Meta-Analysis; Intentional Error in Line Items/Taxpayer Categories.
Quantum Limits of Measurements and Uncertainty Principle
Masanao Ozawa
2015-05-19
In this paper, we show how the Robertson uncertainty relation gives certain intrinsic quantum limits of measurements in the most general and rigorous mathematical treatment. A general lower bound for the product of the root-mean-square measurement errors arising in joint measurements of noncommuting observables is established. We give a rigorous condition for holding of the standard quantum limit (SQL) for repeated measurements, and prove that if a measuring instrument has no larger root-mean-square preparational error than the root-mean-square measurement errors then it obeys the SQL. As shown previously, we can even construct many linear models of position measurement which circumvent this condition for the SQL.
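For reference, the Robertson relation underlying these limits is the standard textbook bound (stated here from general knowledge, not quoted from this abstract):

```latex
\sigma(A)\,\sigma(B) \;\ge\; \tfrac{1}{2}\,\bigl|\langle[\hat{A},\hat{B}]\rangle\bigr|
```

where \sigma(A) and \sigma(B) are the standard deviations of the observables in the given state and [\hat{A},\hat{B}] is their commutator.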
Radar range measurements in the atmosphere.
Doerry, Armin Walter
2013-02-01
The earth's atmosphere affects the velocity of propagation of microwave signals. This imparts a range error to radar range measurements that assume the typical simplistic model for propagation velocity. This range error is a function of atmospheric constituents, such as water vapor, as well as the geometry of the radar data collection, notably altitude and range. Models are presented for calculating atmospheric effects on radar range measurements, and compared against more elaborate atmospheric models.
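A minimal sketch of the kind of atmospheric range correction this abstract describes, assuming a simple exponential refractivity profile and a vertical path; the surface refractivity, scale height, and path length below are illustrative values, not the report's models:

```python
import numpy as np

def range_error_m(n_surface=320.0, scale_height_m=7000.0,
                  path_length_m=30_000.0, samples=2000):
    """One-way range error from an exponential refractivity profile.

    N(h) = N_s * exp(-h / H); the extra path delay in meters is
    1e-6 * integral of N along the path. A vertical path is assumed
    and ray bending is ignored, so this is only a rough sketch.
    """
    h = np.linspace(0.0, path_length_m, samples)
    n = n_surface * np.exp(-h / scale_height_m)
    # Trapezoidal integration of N over the path.
    integral = np.sum((n[1:] + n[:-1]) * 0.5 * np.diff(h))
    return 1e-6 * integral

print(f"range error ~ {range_error_m():.2f} m")
```

With these assumed constants the result is on the order of a couple of meters, which is why radars that assume a fixed propagation velocity need a correction of this kind.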
Suboptimal quantum-error-correcting procedure based on semidefinite programming
Naoki Yamamoto; Shinji Hara; Koji Tsumura
2006-06-13
In this paper, we consider a simplified error-correcting problem: for a fixed encoding process, to find a cascade-connected quantum channel such that the worst fidelity between the input and the output becomes maximum. Using the one-to-one parametrization of quantum channels, a procedure for finding a suboptimal error-correcting channel based on semidefinite programming is proposed. The effectiveness of our method is verified by an example of the bit-flip channel decoding.
Calculation of the Johann error for spherically bent x-ray imaging crystal spectrometers
Wang, E.; Beiersdorfer, P.; Gu, M.; Bitter, M.; Delgado-Aparicio, L.; Hill, K. W.; Reinke, M.; Rice, J. E.; Podpaly, Y.
2010-10-15
New x-ray imaging crystal spectrometers, currently operating on Alcator C-Mod, NSTX, EAST, and KSTAR, record spectral lines of highly charged ions, such as Ar^{16+}, from multiple sightlines to obtain profiles of ion temperature and of toroidal plasma rotation velocity from Doppler measurements. In the present work, we describe a new data analysis routine, which accounts for the specific geometry of the sightlines of a curved-crystal spectrometer and includes corrections for the Johann error to facilitate the tomographic inversion. Such corrections are important to distinguish velocity-induced Doppler shifts from instrumental line shifts caused by the Johann error. The importance of this correction is demonstrated using data from Alcator C-Mod.
TESLA-FEL 2009-07 Errors in Reconstruction of Difference Orbit
Contents: Introduction; Standard Least Squares Solution; Error Emittance and Error Twiss Parameters. As the position of the reconstruction point changes, we will introduce error Twiss parameters and invariant error; … error in the point of interest has to be achieved by matching error Twiss parameters in this point to the desired
A Taxonomy to Enable Error Recovery and Correction in Software Vilas Sridharan
Kaeli, David R.
A Taxonomy to Enable Error Recovery and Correction in Software. Vilas Sridharan, ECE Department. In recent years, reliability research has largely used the following taxonomy of errors: undetected errors … corrected errors (CE). While this taxonomy is suitable to characterize hardware error detection and correction
A simple real-word error detection and correction using local word bigram and trigram
A simple real-word error detection and correction using local word bigram and trigram. Pratip …, bbcisical@gmail.com. Abstract: Spelling errors are broadly classified into two categories, namely non-word errors and real-word errors. In this paper a localized real-word error detection and correction method is proposed
Compiler-Assisted Detection of Transient Memory Errors
Tavarageri, Sanket; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2014-06-09
The probability of bit flips in hardware memory systems is projected to increase significantly as memory systems continue to scale in size and complexity. Effective hardware-based error detection and correction requires that the complete data path, involving all parts of the memory system, be protected with sufficient redundancy. First, this may be costly to employ on commodity computing platforms and second, even on high-end systems, protection against multi-bit errors may be lacking. Therefore, augmenting hardware error detection schemes with software techniques is of considerable interest. In this paper, we consider software-level mechanisms to comprehensively detect transient memory faults. We develop novel compile-time algorithms to instrument application programs with checksum computation codes so as to detect memory errors. Unlike prior approaches that employ checksums on computational and architectural state, our scheme verifies every data access and works by tracking variables as they are produced and consumed. Experimental evaluation demonstrates that the proposed comprehensive error detection solution is viable as a completely software-only scheme. We also demonstrate that with limited hardware support, overheads of error detection can be further reduced.
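The produce/verify checksum idea can be illustrated in miniature; this sketch mimics the scheme in plain Python rather than the paper's compiler instrumentation, and the bit flip is simulated by mutating the array directly:

```python
import zlib
import numpy as np

class Tracked:
    """Toy sketch of checksum-on-produce / verify-on-consume detection
    of transient memory errors (hypothetical helper, not the paper's API)."""
    def __init__(self, array):
        self.array = array
        # Checksum recorded when the value is produced.
        self.checksum = zlib.crc32(array.tobytes())

    def consume(self):
        # Verify the checksum on every consuming access.
        if zlib.crc32(self.array.tobytes()) != self.checksum:
            raise RuntimeError("transient memory error detected")
        return self.array

data = Tracked(np.arange(8, dtype=np.int64))
data.consume()              # clean read passes silently
data.array[3] ^= 1 << 20    # simulate a bit flip in memory
try:
    data.consume()
except RuntimeError as e:
    print("caught:", e)
```

The compile-time version in the paper inserts the equivalent of these checks automatically at each produce/consume site.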
Estimation of the error for small-sample optimal binary filter design using prior knowledge
Sabbagh, David L
1999-01-01
Optimal binary filters estimate an unobserved ideal quantity from observed quantities. Optimality is with respect to some error criterion, usually mean absolute error (MAE) (or, equivalently for binary values, mean square error). Both...
Fault tree analysis of commonly occurring medication errors and methods to reduce them
Cherian, Sandhya Mary
1994-01-01
In-depth analysis of over two hundred actual medication error incidents. These errors were then classified according to type, in an attempt to derive a generalized fault tree for the medication delivery system that contributed to errors. This generalized fault...
EFFECT OF MANUFACTURING ERRORS ON FIELD QUALITY OF DIPOLE MAGNETS FOR THE SSC
Meuser, R.B.
2010-01-01
in Fig. 2. Table 2: Manufacturing Error Mode Groups. (Mag Note-27; 13-16, 1985.)
Ford, W.; Marshall, R.S.; Osborn, L.C.; Picard, R.; Thomas, C.C. Jr.
1982-07-01
This report describes the efforts to develop and demonstrate a solution mass measurement system for use at the Los Alamos Plutonium Facility. Because of inaccuracy of load cell measurements, our major effort was directed towards the pneumatic bubbler tube. The differential pressure between the air inlet to the bubbler tube and the glovebox interior is measured and is proportional to the solution mass in the tank. An inexpensive, reliable pressure transducer system for measuring solution mass in vertical, cylindrical tanks was developed, tested, and evaluated in a laboratory test bed. The system can withstand the over- and underpressures resulting from solution transfer operations and can prevent solution backup into the measurement pressure transducer during transfers. Drifts, noise, quantization error, and other effects limit the accuracy to 30 g. A transportable calibration system using a precision machined tank, pneumatic bubbler tubes, and a Ruska DDR 6000 electromanometer was designed, fabricated, tested, and evaluated. Resolution of the system is ±3.5 g out of 50 kg. The calibration error is 5 g, using room-temperature water as the calibrating fluid. Future efforts will be directed towards in-plant test and evaluation of the tank measurement systems. 16 figures, 3 tables.
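The bubbler-tube principle stated above (differential pressure proportional to solution mass in a vertical cylindrical tank) reduces to m = A·ΔP/g, since ΔP = ρgh and m = ρAh; a minimal sketch, ignoring the calibration offsets and noise sources the report discusses:

```python
G = 9.80665  # standard gravity, m/s^2

def solution_mass_kg(delta_p_pa, tank_area_m2):
    """Mass of liquid above the bubbler-tube tip in a vertical cylindrical
    tank: delta_P = rho*g*h and m = rho*A*h, so m = A*delta_P/g.
    Illustrative only; density cancels, so no fluid properties are needed."""
    return tank_area_m2 * delta_p_pa / G

# Example: a 0.5 m water column (rho = 1000 kg/m^3) in a 0.1 m^2 tank.
dp = 1000.0 * G * 0.5  # differential pressure in Pa
print(f"{solution_mass_kg(dp, 0.1):.1f} kg")
```

Note the density cancels out, which is why the bubbler measures mass rather than level.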
Economic penalties of problems and errors in solar energy systems
Raman, K.; Sparkes, H.R.
1983-01-01
Experience with a large number of installed solar energy systems in the HUD Solar Program has shown that a variety of problems and design/installation errors have occurred in many solar systems, sometimes resulting in substantial additional costs for repair and/or replacement. In this paper, the effect of problems and errors on the economics of solar energy systems is examined. A method is outlined for doing this in terms of selected economic indicators. The method is illustrated by a simple example of a residential solar DHW system. An example of an installed, instrumented solar energy system in the HUD Solar Program is then discussed. Detailed results are given for the effects of the problems and errors on the cash flow, cost of delivered heat, discounted payback period, and life-cycle cost of the solar energy system. Conclusions are drawn regarding the most suitable economic indicators for showing the effects of problems and errors in solar energy systems. A method is outlined for deciding on the maximum justifiable expenditure for maintenance on a solar energy system with problems or errors.
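One of the economic indicators mentioned, the discounted payback period, can be computed directly; a generic sketch with illustrative numbers, not figures from the HUD program (the paper's analysis also folds repair and replacement costs into the cash flow):

```python
def discounted_payback_years(install_cost, annual_savings, discount_rate, horizon=40):
    """Smallest year in which cumulative discounted savings cover the cost.
    Returns None if the cost is never recovered within the horizon."""
    cumulative = 0.0
    for year in range(1, horizon + 1):
        cumulative += annual_savings / (1 + discount_rate) ** year
        if cumulative >= install_cost:
            return year
    return None

# $3000 system saving $400/year at a 5% discount rate.
print(discounted_payback_years(3000.0, 400.0, 0.05))
```

Problems and errors that raise costs or cut savings lengthen this payback, which is the effect the paper quantifies.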
V-228: RealPlayer Buffer Overflow and Memory Corruption Error...
Broader source: Energy.gov (indexed) [DOE]
a memory corruption error and execute arbitrary code on the target system. IMPACT: access control error. SOLUTION: the vendor recommends upgrading to version 16.0.3.51.
Non-Gaussian numerical errors versus mass hierarchy
Y. Meurice; M. B. Oktay
2000-05-12
We probe the numerical errors made in renormalization group calculations by varying slightly the rescaling factor of the fields and rescaling back in order to get the same (if there were no round-off errors) zero momentum 2-point function (magnetic susceptibility). The actual calculations were performed with Dyson's hierarchical model and a simplified version of it. We compare the distributions of numerical values obtained from a large sample of rescaling factors with the (Gaussian by design) distribution of a random number generator and find significant departures from the Gaussian behavior. In addition, the average value differs (robustly) from the exact answer by a quantity which is of the same order as the standard deviation. We provide a simple model in which the errors made at shorter distance have a larger weight than those made at larger distance. This model explains in part the non-Gaussian features and why the central-limit theorem does not apply.
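The rescale-and-rescale-back probe can be imitated on a toy observable; this sketch only demonstrates the mechanism (round-off scatter where exact arithmetic would give identical results), not the hierarchical-model calculation itself:

```python
import numpy as np

rng = np.random.default_rng(1)
phi = rng.normal(size=1000)

def chi(field):
    # Toy "zero-momentum 2-point function": mean squared field.
    return np.mean(field ** 2)

exact = chi(phi)

# Rescale the field by s, compute, then rescale the answer back by 1/s^2.
# With exact arithmetic every s gives the same value; floating-point
# round-off makes the sample of results scatter around `exact`.
scales = 1.0 + 1e-8 * rng.normal(size=5000)
results = np.array([chi(phi * s) / s**2 for s in scales])
spread = results - exact
print(f"max |deviation| = {np.abs(spread).max():.3e}")
```

The paper examines the distribution of such deviations and finds it is not Gaussian; this toy version merely shows where the sample comes from.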
Factorization of correspondence and camera error for unconstrained dense correspondence applications
Knoblauch, D; Hess-Flores, M; Duchaineau, M; Kuester, F
2009-09-29
A correspondence and camera error analysis for dense correspondence applications such as structure from motion is introduced. This provides error introspection, opening up the possibility of adaptively and progressively applying more expensive correspondence and camera parameter estimation methods to reduce these errors. The presented algorithm evaluates the given correspondences and camera parameters based on an error generated through simple triangulation. This triangulation is based on the given dense correspondences (not constrained to epipolar lines) and estimated camera parameters. This provides an error map without requiring any information about the perfect solution or making assumptions about the scene. The resulting error is a combination of correspondence and camera parameter errors. A simple, fast low/high-pass filter error factorization is introduced, allowing for the separation of correspondence error and camera error. Further analysis of the resulting error maps is applied to allow efficient iterative improvement of correspondences and cameras.
LHC Network Measurement Joe Metzger
LHC Network Measurement. Joe Metzger, Nov 6 2007, LHCOPN Meeting at CERN, Energy Sciences Network. (Remaining slide fragments cover capacity, input errors and output drops, and measurement-archive beta/package milestones.)
Full protection of superconducting qubit systems from coupling errors
M. J. Storcz; J. Vala; K. R. Brown; J. Kempe; F. K. Wilhelm; K. B. Whaley
2005-08-09
Solid state qubits realized in superconducting circuits are potentially extremely scalable. However, strong decoherence may be transferred to the qubits by various elements of the circuits that couple individual qubits, particularly when coupling is implemented over long distances. We propose here an encoding that provides full protection against errors originating from these coupling elements, for a chain of superconducting qubits with a nearest neighbor anisotropic XY-interaction. The encoding is also seen to provide partial protection against errors deriving from general electronic noise.
When soft controls get slippery: User interfaces and human error
Stubler, W.F.; O`Hara, J.M.
1998-12-01
Many types of products and systems that have traditionally featured physical control devices are now being designed with soft controls--input formats appearing on computer-based display devices and operated by a variety of input devices. A review of complex human-machine systems found that soft controls are particularly prone to some types of errors and may affect overall system performance and safety. This paper discusses the application of design approaches for reducing the likelihood of these errors and for enhancing usability, user satisfaction, and system performance and safety.
Comment on "Optimum Quantum Error Recovery using Semidefinite Programming"
M. Reimpell; R. F. Werner; K. Audenaert
2006-06-07
In a recent paper ([1]=quant-ph/0606035) it is shown how the optimal recovery operation in an error correction scheme can be considered as a semidefinite program. As a possible future improvement it is noted that still better error correction might be obtained by optimizing the encoding as well. In this note we present the result of such an improvement, specifically for the four-bit correction of an amplitude damping channel considered in [1]. We get a strict improvement for almost all values of the damping parameter. The method (and the computer code) is taken from our earlier study of such correction schemes (quant-ph/0307138).
Error-prevention scheme with two pairs of qubits
Chu, Shih-I; Yang, Chui-Ping; Han, Siyuan
2002-09-04
The expressions for H_S and H_SB are given; H_S = e_0(sigma_z^I + sigma_z^II). The scheme uses two pairs of qubits; error prevention proceeds through a decoherence-free subspace for collective … pairs; leakage out of the encoding space due to amplitude damping is addressed. In addition, how to construct decoherence-free states for n … is discussed. (Email: cpyang@floquet.chem.ku.edu, sichu@ku.edu, han@ku.edu.) DOI: 10.1103/Phys...
Masiello, C. A; Gallagher, M. E; Randerson, J. T; Deco, R. M; Chadwick, O. A
2008-01-01
error on measurements (0.045 C_ox units). Accuracy in OR units, with the most accurate measurements resulting from the most accurate OR measurement technique (±0.011 OR units).
Processing Quantities with Heavy-Tailed Distribution of Measurement Uncertainty: How
Kreinovich, Vladik
, in the amount of oil in an oil well, etc. In such situations, in which we cannot measure y directly, we can often … Thus, under the normality assumption, to gauge the distribution of each measurement error x_i, we must … the error Δy = ỹ − y in y is based on this assumption. Specifically, since the measurement errors x_i
Contributions to Human Errors and Breaches in National Security Applications.
Pond, D. J.; Houghton, F. K.; Gilmore, W. E.
2002-01-01
Los Alamos National Laboratory has recognized that security infractions are often the consequence of various types of human errors (e.g., mistakes, lapses, slips) and/or breaches (i.e., deliberate deviations from policies or required procedures with no intention to bring about an adverse security consequence) and therefore has established an error reduction program based in part on the techniques used to mitigate hazard and accident potentials. One cornerstone of this program, definition of the situational and personal factors that increase the likelihood of employee errors and breaches, is detailed here. This information can be used retrospectively (as in accident investigations) to support and guide inquiries into security incidents or prospectively (as in hazard assessments) to guide efforts to reduce the likelihood of error/incident occurrence. Both approaches provide the foundation for targeted interventions to reduce the influence of these factors and for the formation of subsequent 'lessons learned.' Overall security is enhanced not only by reducing the inadvertent releases of classified information but also by reducing the security and safeguards resources devoted to them, thereby allowing these resources to be concentrated on acts of malevolence.
Backward Error and Condition of Polynomial Eigenvalue Problems \\Lambda
Higham, Nicholas J.
, 1999. Abstract: We develop normwise backward errors and condition numbers for the polynomial eigenvalue problem, where A_l in C^{n x n}, l = 0:m, and we refer to P as a lambda-matrix. Few direct numerical methods are available for solving the polynomial eigenvalue problem (PEP). When m
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR
Sambridge, Malcolm
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR ANALYSIS. … for the different solutions didn't even overlap. Introduction: A discrimination and classification strategy … ambiguity and possible remanent magnetization; the recovered dipole moment is compared to a library
Rate Regions for Coherent and Noncoherent Multisource Network Error Correction
Ho, Tracey
{…,tho,effros}@caltech.edu; Joerg Kliewer, New Mexico State University (jkliewer@nmsu.edu); Elona Erez, Yale University. A single error on a network link may lead to a corruption of many received packets at the destination nodes
Low Degree Test with Polynomially Small Error Dana Moshkovitz
Moshkovitz, Dana
Low Degree Test with Polynomially Small Error Dana Moshkovitz October 19, 2014 Abstract A long line of work in Theoretical Computer Science shows that a function is close to a low degree polynomial iff it is close to a low degree polynomial locally. This is known as low degree testing
Error Control Based Model Reduction for Parameter Optimization of Elliptic
of technical devices that rely on multiscale processes, such as fuel cells or batteries. As the solution … optimization of elliptic multiscale problems with macroscopic optimization functionals and microscopic material
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR
Oldenburg, Douglas W.
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR ANALYSIS. … for the different solutions didn't even overlap. Introduction: A discrimination and classification strategy … (non-UXOs dug per UXO). The discrimination and classification methodology depends on the magnitude of the recov…
Improving STT-MRAM Density Through Multibit Error Correction
Sapatnekar, Sachin
Traditional methods enhance robustness at the cost of area/energy by using larger cell sizes to improve the thermal stability of the MTJ cells. This paper employs multibit error correction … to the read operation through TX. A key attribute of an MTJ is the notion of thermal stability. Fig. 2
Designing Automation to Reduce Operator Errors Nancy G. Leveson
Leveson, Nancy
Designing Automation to Reduce Operator Errors. Nancy G. Leveson, Computer Science and Engineering, University of Washington; Everett Palmer, NASA Ames Research Center. Introduction: Advanced automation has been … mode-related problems [SW95]. After studying accidents and incidents in the new, highly automated
ARTIFICIAL INTELLIGENCE 223 A Geometric Approach to Error
Richardson, David
ARTIFICIAL INTELLIGENCE 223: A Geometric Approach to Error Detection and Recovery for Robot Motion, and uncertainty in the geometric … This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's Artificial Intelligence research
Error Control for Quincunx Multiresolution à la
Amat, Sergio
Harten's nonlinear discrete multiresolution. In multiresolution algorithms one transforms a … one obtains f̂_L, which should be close to f̄_L; therefore, the algorithms must not be unstable. In this study, we introduce error-control and stability algorithms. We obtain
Error Bounds from Extra Precise Iterative Refinement James Demmel
Li, Xiaoye Sherry
now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way … a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5
Error rate and power dissipation in nano-logic devices
Kim, Jong Un
2005-08-29
of an error-free condition on temperature in single electron logic processors is derived. The size of the quantum dot of a single electron transistor is predicted when a single electron logic processor with a billion single electron transistors works without...
Error rate and power dissipation in nano-logic devices
Kim, Jong Un
2004-01-01
-free condition on temperature in single electron logic processors is derived. The size of the quantum dot of a single electron transistor is predicted when a single electron logic processor with 10^9 single electron transistors works without error at room...
Urban Water Demand with Periodic Error Correction David R. Bell
Griffin, Ronald
them. Econometric estimates of residential demand for water abound (Dalhuisen et al. 2003). Urban Water Demand with Periodic Error Correction, by David R. Bell and Ronald C. Griffin, February; Department of Agricultural Economics, Texas A&M University. Abstract: Monthly demand for publicly supplied
Errors-in-variables problems in transient electromagnetic mineral exploration
Braslavsky, Julio H.
Errors-in-variables problems in transient electromagnetic mineral exploration. K. Lau, J. H… A specific sub-problem of interest in this area … geological surveys, diamond drilling, and airborne mineral exploration. Our interest here is with ground
Energy efficiency of error correction for wireless communication
Havinga, Paul J.M.
Error control is an important issue for mobile computing systems. This includes energy spent in the physical radio transmission and … Networking Conference 1999 [7]. … on the energy of transmission and the energy of redundancy computation. We will show that the computational cost
Error Control of Iterative Linear Solvers for Integrated Groundwater Models
California at Davis, University of
Error Control of Iterative Linear Solvers for Integrated Groundwater Models, by Matthew F. Dixon … for integrated groundwater models, which are implicitly coupled to another model, such as surface water models … in legacy groundwater modeling packages, resulting in overall simulation speedups as large as 7
Estimating the error distribution function in nonparametric regression
Mueller, Uschi
Schick, Wolfgang Wefelmeyer. Summary: We construct an efficient estimator for the error distribution … Keywords: estimator, influence function. Müller, Schick and Wefelmeyer (2004a); we refer also to the introduction of Müller, Schick and Wefelmeyer (2004b). Our proof is complicat…
Automatic Error Elimination by Horizontal Code Transfer across Multiple Applications
Polz, Martin
Automatic Error Elimination by Horizontal Code Transfer across Multiple Applications. Stelios …, CSAIL, Cambridge, MA, USA. Abstract: We present Code Phage (CP), a system for automatically transferring … To the best of our knowledge, CP is the first system to automatically transfer code across multiple
Development of an Expert System for Classification of Medical Errors
Kopec, Danny
A report published by the Institute of Medicine (IOM) indicated that between 44,000 and 98,000 unnecessary deaths per year occur in hospitals in the United States. There has been considerable speculation that these figures are either overestimated or …; what is of importance is that the number of deaths caused by such errors
Selected CRC Polynomials Can Correct Errors and Thus Reduce Retransmission
Mache, Jens
In wireless sensor networks, minimizing communication is crucial to improve energy consumption and thus lifetime. Keywords: error correction, reliability, network protocol, low power consumption. Error detection using Cyclic Redundancy Checks … instead of retransmitting the whole packet improves energy consumption and thus lifetime of wireless sensor networks
A Spline Algorithm for Modeling Cutting Errors Turning Centers
Gilsinn, David E.
… Bandy, Automated Production Technology Division, National Institute of Standards and Technology, 100 Bureau … are made up of features with profiles defined by arcs and lines. An error model for turned parts must take … In the case where there is a requirement of tangency between two features, such as a line tangent to an arc
SU-E-T-374: Sensitivity of ArcCHECK to Tomotherapy Delivery Errors: Dependence On Analysis Technique
Templeton, A; Chu, J; Turian, J
2014-06-01
Purpose: ArcCHECK (Sun Nuclear) is a cylindrical diode array detector allowing three-dimensional sampling of dose, particularly useful in treatment delivery QA of helical tomotherapy. Gamma passing rate is a common method of analyzing results from diode arrays, but is less intuitive in 3D with complex measured dose distributions. This study explores the sensitivity of gamma passing rate to choice of analysis technique in the context of its ability to detect errors introduced into the treatment delivery. Methods: Nine treatment plans were altered to introduce errors in: couch speed, gantry/sinogram synchronization, and leaf open time. Each plan was then delivered to ArcCHECK in each of the following arrangements: "offset," when the high dose area of the plan is delivered to the side of the phantom so that some diode measurements will be on the order of the prescription dose, and "centered," when the high dose is in the center of the phantom where an ion chamber measurement may be acquired, but the diode measurements are in the mid to low-dose region at the periphery of the plan. Gamma analysis was performed at 3%/3mm tolerance and both global and local gamma criteria. The threshold of detectability for each error type was calculated as the magnitude at which the gamma passing rate drops below 90%. Results: Global gamma criteria reduced the sensitivity in the offset arrangement (from 2.3% to 4.5%, 8° to 21°, and 3ms to 8ms for couch-speed decrease, gantry error, and leaf-opening increase, respectively). The centered arrangement detected changes at 3.3%, 5°, and 4ms with smaller variation. Conclusion: Each arrangement has advantages; offsetting allows more sampling of the higher dose region, while centering allows an ion chamber measurement and potentially better use of tools such as 3DVH, at the cost of positioning more of the diodes in the sometimes noisy mid-dose region.
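Gamma passing rate, the metric used throughout this study, can be sketched in one dimension; this follows the standard global-gamma concept (dose tolerance taken as a fraction of the reference maximum), with illustrative profiles rather than measured ArcCHECK data:

```python
import numpy as np

def gamma_pass_rate(ref_dose, meas_dose, positions, dose_tol=0.03, dist_tol_mm=3.0):
    """1-D sketch of a global gamma analysis: for each measured point,
    search all reference points for the best combined dose/distance
    agreement; the point passes if the minimum gamma is <= 1."""
    norm = dose_tol * ref_dose.max()  # global dose criterion
    passes = []
    for x, d in zip(positions, meas_dose):
        dose_term = (ref_dose - d) / norm
        dist_term = (positions - x) / dist_tol_mm
        gamma = np.sqrt(dose_term**2 + dist_term**2).min()
        passes.append(gamma <= 1.0)
    return float(np.mean(passes))

x = np.linspace(0.0, 100.0, 201)                 # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)          # reference profile
meas = 1.02 * np.exp(-((x - 51.0) / 20.0) ** 2)  # 2% scaled, 1 mm shifted
print(f"pass rate = {gamma_pass_rate(ref, meas, x):.3f}")
```

Switching `norm` to a per-point fraction of the local reference dose would give the local criterion, which, as the abstract reports, is generally the more sensitive choice.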
Temperature Measurements in the Magnetic Measurement Facility
Wolf, Zachary
2010-12-13
Several key LCLS undulator parameter values depend strongly on temperature, primarily because of the permanent magnet material the undulators are constructed with. The undulators will be tuned to have specific parameter values in the Magnetic Measurement Facility (MMF). Consequently, it is necessary for the temperature of the MMF to remain fairly constant. Requirements on undulator temperature have been established. When in use, the undulator temperature will be in the range 20.0 ± 0.2 C. In the MMF, the undulator tuning will be done at 20.0 ± 0.1 C. For special studies, the MMF temperature set point can be changed to a value between 18 C and 23 C with stability of ±0.1 C. In order to ensure that the MMF temperature requirements are met, the MMF must have a system to measure temperatures. The accuracy of the MMF temperature measurement system must be better than the ±0.1 C undulator tuning temperature tolerance, and is taken to be ±0.01 C. The temperature measurement system for the MMF is under construction. It is similar to a prototype system we built two years ago in the Sector 10 alignment lab at SLAC. At that time, our goal was to measure the lab temperature to ±0.1 C. The system has worked well for two years and has maintained its accuracy. For the MMF system, we propose better sensors and a more extensive calibration program to achieve the factor of 10 increase in accuracy. In this note we describe the measurement system under construction. We motivate our choice of system components and give an overview of the system. Most of the software for the system has been written and will be discussed. We discuss error sources in temperature measurements and show how these errors have been dealt with. The calibration system is described in detail. All the LCLS undulators must be tuned in the Magnetic Measurement Facility at the same temperature to within ±0.1 C. In order to ensure this, we are building a system to measure the temperature of the undulators to ±0.01 C. This note describes the temperature measurement system under construction.
Tracking granules at the Sun's surface and reconstructing velocity fields. II. Error analysis
R. Tkaczuk; M. Rieutord; N. Meunier; T. Roudier
2007-07-13
The determination of horizontal velocity fields at the solar surface is crucial to understanding the dynamics and magnetism of the convection zone of the sun. These measurements can be done by tracking granules. Tracking granules from ground-based observations, however, suffers from the Earth's atmospheric turbulence, which induces image distortion. The focus of this paper is to evaluate the influence of this noise on the maps of velocity fields. We use the coherent structure tracking algorithm developed recently and apply it to two independent series of images that contain the same solar signal. We first show that a k-ω filtering of the time series of images is highly recommended as a pre-processing step to decrease the noise, while, in contrast, using destretching should be avoided. We also demonstrate that the lifetime of granules has a strong influence on the error bars of velocities and that a threshold on the lifetime should be imposed to minimize errors. Finally, although solar flow patterns are easily recognizable and image quality is very good, it turns out that a time sampling of two images every 21 s is not frequent enough, since image distortion still pollutes velocity fields at a 30% level on the 2500 km scale, i.e. the scale on which granules start to behave like passive scalars. The coherent structure tracking algorithm is a useful tool for noise control on the measurement of surface horizontal solar velocity fields when at least two independent series are available.
Direct tests of measurement uncertainty relations: what it takes
Paul Busch; Neil Stevens
2015-01-17
The uncertainty principle being a cornerstone of quantum mechanics, it is surprising that in nearly 90 years there have been no direct tests of measurement uncertainty relations. This lacuna was due to the absence of two essential ingredients: appropriate measures of measurement error (and disturbance), and precise formulations of such relations that are {\\em universally valid} and {\\em directly testable}. We formulate two distinct forms of direct tests, based on different measures of error. We present a prototype protocol for a direct test of measurement uncertainty relations in terms of {\\em value deviation errors} (hitherto considered nonfeasible), highlighting the lack of universality of these relations. This shows that the formulation of universal, directly testable measurement uncertainty relations for {\\em state-dependent} error measures remains an important open problem. Recent experiments that were claimed to constitute invalidations of Heisenberg's error-disturbance relation are shown to conform with the spirit of Heisenberg's principle if interpreted as direct tests of measurement uncertainty relations for error measures that quantify {\\em distances between observables}.
Measurement to Error Stability: a Notion of Partial Detectability for Nonlinear Systems
Sontag, Eduardo
Ultrasonic thickness measurements on corroded steel members: a statistical analysis of error
Konen, Keith Forman
1999-01-01
Service of the Department of the Interior, Shell Deepwater Development and Mobil Technology Company. The second phase of that project will be to identify any common patterns of corrosion, which may be useful in developing a protocol for in-situ testing...
Automated suppression of errors in LTP-II slope measurements with x-ray optics
Ali, Zulfiqar
2012-01-01
Estimation of the linear-plateau segmented regression model in the presence of measurement error
Grimshaw, Scott D.
1985-01-01
The excerpt defines the linear-plateau segmented regression model with measurement error: the covariate is observed as x_i = X_i + u_i, with response errors e_i ~ N(0, sigma_e^2) and X_i, e_i, u_i mutually independent; the response follows Y_i = a_0 + a_1 X_i for X_i <= gamma and plateaus at Y_i = a_0 + a_1 gamma for X_i > gamma, and moments of the form E(w_ij) are derived for the estimators.
Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells
On the Fourier Transform Approach to Quantum Error Control
Hari Dilip Kumar
2012-08-24
Quantum codes are subspaces of the state space of a quantum system that are used to protect quantum information. Some common classes of quantum codes are stabilizer (or additive) codes, non-stabilizer (or non-additive) codes obtained from stabilizer codes, and Clifford codes. These are analyzed in a framework using the Fourier transform on finite groups, the finite group in question being a subgroup of the quantum error group considered. All the classes of codes that can be obtained in this framework are explored, including codes more general than Clifford codes. The error detection properties of one of these more general classes ("direct sums of translates of Clifford codes") are characterized. Example codes are constructed, and computer code search results are presented and analysed.
MPI Runtime Error Detection with MUST: Advances in Deadlock Detection
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; de Supinski, Bronis R.; Müller, Matthias S.
2013-01-01
The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
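Graph-based deadlock detection of the kind MUST performs can be illustrated, in highly simplified form, as cycle detection in a wait-for graph; this sketch is a generic depth-first search, not MUST's actual algorithm:

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph: wait_for[p] is the set of
    processes that process p is blocked on. A cycle means deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# Ranks 0 and 1 each block on a receive from the other: deadlock.
print(has_deadlock({0: {1}, 1: {0}}))    # True
print(has_deadlock({0: {1}, 1: set()}))  # False
```

Real MPI deadlock detection must additionally model wildcard receives and non-blocking and collective operations, which is where the graph model in the paper goes beyond a plain cycle check.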
Comparison of Wind Power and Load Forecasting Error Distributions: Preprint
Hodge, B. M.; Florita, A.; Orwig, K.; Lew, D.; Milligan, M.
2012-07-01
The introduction of large amounts of variable and uncertain power sources, such as wind power, into the electricity grid presents a number of challenges for system operations. One issue involves the uncertainty associated with scheduling power that wind will supply in future timeframes. However, this is not an entirely new challenge; load is also variable and uncertain, and is strongly influenced by weather patterns. In this work we make a comparison between the day-ahead forecasting errors encountered in wind power forecasting and load forecasting. The study examines the distribution of errors from operational forecasting systems in two different Independent System Operator (ISO) regions for both wind power and load forecasts at the day-ahead timeframe. The day-ahead timescale is critical in power system operations because it serves the unit commitment function for slow-starting conventional generators.
Method and system for reducing errors in vehicle weighing systems
Hively, Lee M. (Philadelphia, TN); Abercrombie, Robert K. (Knoxville, TN)
2010-08-24
A method and system (10, 23) for determining vehicle weight to a precision of <0.1%, uses a plurality of weight sensing elements (23), a computer (10) for reading in weighing data for a vehicle (25) and produces a dataset representing the total weight of a vehicle via programming (40-53) that is executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing, (b) by determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation from the weighing data; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% in the vehicle weight.
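The oscillation-removal idea in steps (a)-(c) can be sketched as follows; this is a simplified stand-in (an FFT estimate of the dominant mode, then averaging over a whole number of its periods so it cancels), not the patented method:

```python
import numpy as np

def static_weight(samples, dt):
    """Estimate the static weight from weighing data contaminated by a
    dominant oscillatory mode: locate the mode with an FFT, then average
    the data over a whole number of its periods so the mode cancels."""
    samples = np.asarray(samples, dtype=float)
    spec = np.fft.rfft(samples - samples.mean())
    freqs = np.fft.rfftfreq(len(samples), dt)
    f0 = freqs[np.argmax(np.abs(spec[1:])) + 1]   # dominant nonzero mode
    period = int(round(1.0 / (f0 * dt)))          # samples per period
    n = (len(samples) // period) * period         # keep whole periods only
    return samples[:n].mean()

# Synthetic example: 40,000 kg static weight plus a 3 Hz bounce
t = np.arange(0, 2.0, 0.001)
data = 40000.0 + 250.0 * np.sin(2 * np.pi * 3.0 * t)
print(round(static_weight(data, 0.001), 1))  # 40000.0
```

On this synthetic signal the recovered weight is within well under 0.1% of the true value, which is the precision target the patent states.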
Runtime Detection of C-Style Errors in UPC Code
Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D
2011-09-29
Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
BEAM RELATED SYSTEMATICS IN HIGGS BOSON MASS MEASUREMENT
A. Raspereza, DESY, Notkestrasse 85, D-22607 Hamburg. The impact of differential luminosity spectrum measurements and beam energy spread on the precision of the Higgs boson mass measurement is studied, and the possible impact of the beam-related systematic errors on the Higgs boson mass measurement is discussed.
On the efficiency of nondegenerate quantum error correction codes for Pauli channels
Gunnar Bjork; Jonas Almlof; Isabel Sainz
2009-05-19
We examine the efficiency of pure, nondegenerate quantum error-correction codes for Pauli channels. Specifically, we investigate whether correction of multiple errors in a block is more efficient than using a code that only corrects one error per block. Block coding with multiple-error correction cannot increase the efficiency when the qubit error probability is below a certain value and the code size is fixed. More surprisingly, existing multiple-error correction codes with a code length of 256 qubits or less have lower efficiency than the optimal single-error correcting codes for any value of the qubit error probability. We also investigate how efficient various proposed nondegenerate single-error correcting codes are compared to the limit set by the code redundancy and by the necessary conditions for hypothetically existing nondegenerate codes. We find that existing codes are close to optimal.
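For a nondegenerate code of length n that corrects up to t errors per block, the block success probability against independent qubit errors is the binomial tail that underlies such efficiency comparisons; a small sketch of the generic formula, not tied to any specific code from the paper:

```python
from math import comb

def block_success(n, t, p):
    """Probability that a block of n qubits suffers at most t errors,
    i.e. that a t-error-correcting code of length n decodes correctly
    (independent qubit error probability p, nondegenerate code)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))

# Length-5 single-error-correcting block at p = 1% qubit error probability
print(round(block_success(5, 1, 0.01), 6))
```

Comparing this quantity per encoded qubit across code lengths is the kind of efficiency comparison the abstract describes; the paper's conclusion is that longer multi-error blocks do not win below a threshold in p.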
From the Lab to the real world : sources of error in UF {sub 6} gas enrichment monitoring
Lombardi, Marcie L.
2012-03-01
Safeguarding uranium enrichment facilities is a serious concern for the International Atomic Energy Agency (IAEA). Safeguards methods have changed over the years, most recently switching to an improved safeguards model that calls for new technologies to help keep up with the increasing size and complexity of today’s gas centrifuge enrichment plants (GCEPs). One of the primary goals of the IAEA is to detect the production of uranium at levels greater than those an enrichment facility may have declared. In order to accomplish this goal, new enrichment monitors need to be as accurate as possible. This dissertation will look at the Advanced Enrichment Monitor (AEM), a new enrichment monitor designed at Los Alamos National Laboratory. Specifically explored are various factors that could potentially contribute to errors in a final enrichment determination delivered by the AEM. There are many factors that can cause errors in the determination of uranium hexafluoride (UF{sub 6}) gas enrichment, especially during the period when the enrichment is being measured in an operating GCEP. To measure enrichment using the AEM, a passive 186-keV (kiloelectronvolt) measurement is used to determine the {sup 235}U content in the gas, and a transmission measurement or a gas pressure reading is used to determine the total uranium content. A transmission spectrum is generated using an x-ray tube and a “notch” filter. In this dissertation, changes that could occur in the detection efficiency and the transmission errors that could result from variations in pipe-wall thickness will be explored. Additional factors that could contribute to errors in the enrichment measurement will also be examined, including changes in the gas pressure, ambient and UF{sub 6} temperature, instrumental errors, and the effects of uranium deposits on the inside of the pipe walls. The sensitivity of the enrichment calculation to these various parameters will then be evaluated.
Previously, UF{sub 6} gas enrichment monitors have required empty pipe measurements to accurately determine the pipe attenuation (the pipe attenuation is typically much larger than the attenuation in the gas). This dissertation reports on a method for determining the thickness of a pipe in a GCEP when obtaining an empty pipe measurement may not be feasible. This dissertation studies each of the components that may add to the final error in the enrichment measurement, and the factors that were taken into account to mitigate these issues are also detailed and tested. The use of an x-ray generator as a transmission source and the attending stability issues are addressed. Both analytical calculations and experimental measurements have been used. For completeness, some real-world analysis results from the URENCO Capenhurst enrichment plant have been included, where the final enrichment error has remained well below 1% for approximately two months.
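The enrichment determination described above is essentially a ratio of two measurements, so independent relative errors on the two combine in quadrature. A toy sketch; the function names, calibration constant, and numbers are illustrative, not the AEM's:

```python
import math

def enrichment(rate_186, cal_186, total_u):
    """Toy enrichment estimate: the 186-keV count rate divided by a
    calibration constant gives the 235U content; dividing by the total
    uranium from the transmission measurement gives the enrichment.
    All names and constants here are illustrative."""
    m235 = rate_186 / cal_186
    return m235 / total_u

def enrichment_rel_error(rel_err_186, rel_err_total):
    """Independent relative errors on the 186-keV and total-uranium
    measurements combine in quadrature in the enrichment ratio."""
    return math.hypot(rel_err_186, rel_err_total)

# e.g. 0.6% on the 186-keV rate and 0.8% on the transmission
print(round(100 * enrichment_rel_error(0.006, 0.008), 2))  # 1.0 (percent)
```

Keeping each component at the sub-percent level in this way is consistent with the reported final enrichment error staying below 1% at the Capenhurst plant.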
Vrettos, N.; Marzolf, Athneal; Robinson, Casandra; Fiscus, James; Krementz, Daniel; Nance, Thomas
2007-11-26
The process sumps in H-Canyon at the Savannah River Site (SRS) collect leaks from process tanks and jumpers. To prevent build-up of fissile material, the sumps are frequently flushed, which generates liquid waste and is prone to human error. The development of inserts filled with a neutron poison will allow a reduction in the frequency of flushing. Due to concrete deterioration and deformation of the sump liners, the current dimensions of the sumps are unknown. Knowledge of these dimensions is necessary for development of the inserts. To solve this problem, a remote Sump Measurement System was designed, fabricated, and tested to aid development of the sump inserts.
Errors, Correlations and Fidelity for noisy Hamilton flows. Theory and numerical examples
Giorgio Turchetti; Federico Panichi; Stefano Sinigardi; Sandro Vaienti
2015-09-25
We compare the decay of correlations and fidelity for some prototype noisy Hamiltonian flows on a compact phase space. The results obtained for maps on the torus $\\mathbb{T}^2$ or on the cylinder $\\mathbb{T} \\times{\\Bbb I}$ are recovered, in a simpler way, in the limit of vanishing time step $\\Delta t \\to 0$, if these maps are the symplectic integrators of the proposed flows. The mean square deviation $\\sigma(t)$ of the noisy flow asymptotically diverges, following a power law if the unperturbed flow is integrable, exponentially if it is chaotic. Correspondingly the fidelity, which measures the correlation at time $t$ of the noisy flow with respect to the unperturbed flow, decays as $\\exp\\bigl(-2 \\pi^2\\, \\sigma^2(t)\\bigr)$. For chaotic flows the fidelity exhibits a plateau, followed by a super-exponential decay starting at $t_* \\propto -\\log \\epsilon$, where $\\epsilon$ is the noise amplitude. We analyze numerically three simple models: the anharmonic oscillator, the H\\'enon-Heiles Hamiltonian, and the N-vortex system for $N=3,4$. The round-off error on the symplectic integrator acts as a (single realization of a) random perturbation provided that the map has a sufficiently high computational complexity. This can be checked by considering the reversibility error. Finally, we consider the effect of observational noise, showing that the decrease of correlations or fidelity can only be observed after a sequence of measurements. The multiplicative noise is more effective, at least for long enough delay between two measurements.
Absolute beam emittance measurements at RHIC using ionization profile monitors
Minty, M.; Connolly, R; Liu, C.; Summers, T.; Tepikian, S.
2014-08-15
In the past, comparisons between emittance measurements obtained using ionization profile monitors, Vernier scans (using as input the measured rates from the zero degree counters, or ZDCs), the polarimeters and the Schottky detectors evidenced significant variations of up to 100%. In this report we present studies of the RHIC ionization profile monitors (IPMs). After identifying and correcting for two systematic instrumental errors in the beam size measurements, we present experimental results showing that the remaining dominant error in beam emittance measurements at RHIC using the IPMs was imprecise knowledge of the local beta functions. After removal of the systematic errors and implementation of measured beta functions, precise emittance measurements result. Also, consistency between the emittances measured by the IPMs and those derived from the ZDCs was demonstrated.
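The role of the beta function in these measurements follows from epsilon = sigma^2/beta; a small sketch of the error propagation, with illustrative numbers rather than RHIC data:

```python
import math

def emittance(sigma, beta):
    """Transverse emittance from a measured rms beam size sigma and the
    local beta function: epsilon = sigma**2 / beta."""
    return sigma**2 / beta

def emittance_rel_error(rel_sigma, rel_beta):
    """Since epsilon = sigma^2 / beta, its relative error combines twice
    the beam-size error with the beta-function error in quadrature."""
    return math.hypot(2 * rel_sigma, rel_beta)

# A 10% beta-function uncertainty alone gives ~10% on epsilon even with
# a 1% beam-size measurement, illustrating why imprecise local beta
# knowledge can dominate the emittance error.
print(round(emittance_rel_error(0.01, 0.10), 3))
```

This is why replacing assumed beta functions with measured ones, as the report describes, tightens the emittance determination directly.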
A CHARACTERISTIC GALERKIN METHOD WITH ADAPTIVE ERROR CONTROL FOR THE CONTINUOUS CASTING PROBLEM
Nochetto, Ricardo H.
The continuous casting problem is a convection-dominated, nonlinearly degenerate diffusion problem. It is discretized with an adaptive characteristic Galerkin method. Keywords: a posteriori error estimates, continuous casting, method of characteristics.
Simulations of error in quantum adiabatic computations of random 2-SAT instances
Gill, Jay S. (Jay Singh)
2006-01-01
This thesis presents a series of simulations of quantum computations using the adiabatic algorithm. The goal is to explore the effect of error, using a perturbative approach that models 1-local errors to the Hamiltonian ...
Design techniques for graph-based error-correcting codes and their applications
Lan, Ching Fu
2006-04-12
Error-correcting (channel) coding. The main idea of error-correcting codes is to add redundancy to the information to be transmitted so that the receiver can exploit the correlation between the transmitted information and the redundancy and correct or detect errors caused...
Shared dosimetry error in epidemiological dose-response analyses
Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo
2015-03-23
Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were the true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e., the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
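The paper's baseline estimator, evaluating the linear excess relative risk model at each person's mean dose over the realizations, can be sketched as follows (the doses and slope are hypothetical, not study data):

```python
import numpy as np

def err_relative_risk(dose_realizations, beta):
    """Linear excess relative risk evaluated at each person's mean dose
    over the realizations: RR_i = 1 + beta * mean_dose_i. Using the
    per-person mean dose gives an unbiased score, per the paper; the
    information matrix still requires a separate correction."""
    mean_dose = np.mean(dose_realizations, axis=1)  # average over realizations
    return 1.0 + beta * mean_dose

# Two people, three dose realizations each (Gy), hypothetical slope beta = 2/Gy
doses = np.array([[0.10, 0.12, 0.08],
                  [0.50, 0.45, 0.55]])
print(err_relative_risk(doses, 2.0))
```

The subtle point the abstract makes is that this plug-in estimate is fine for the score and for testing β = 0, but its naive standard errors are biased when β ≠ 0 unless the realization spread is used to adjust the information matrix.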
Error-field penetration in reversed magnetic shear configurations
Wang, H. H.; Wang, Z. X.; Wang, X. Q. [MOE Key Laboratory of Materials Modification by Beams of the Ministry of Education, School of Physics and Optoelectronic Engineering, Dalian University of Technology, Dalian 116024 (China)] [MOE Key Laboratory of Materials Modification by Beams of the Ministry of Education, School of Physics and Optoelectronic Engineering, Dalian University of Technology, Dalian 116024 (China); Wang, X. G. [School of Physics, Peking University, Beijing 100871 (China)] [School of Physics, Peking University, Beijing 100871 (China)
2013-06-15
Error-field penetration in reversed magnetic shear (RMS) configurations is numerically investigated by using a two-dimensional resistive magnetohydrodynamic model in slab geometry. To explore different dynamic processes in locked modes, three equilibrium states are adopted. Stable, marginal, and unstable current profiles for double tearing modes are designed by varying the current intensity between two resonant surfaces separated by a certain distance. Further, the dynamic characteristics of locked modes in the three RMS states are identified, and the relevant physics mechanisms are elucidated. The scaling behavior of critical perturbation value with initial plasma velocity is numerically obtained, which obeys previously established relevant analytical theory in the viscoresistive regime.
Plasma parameter scaling of the error-field penetration threshold in tokamaks Richard Fitzpatrick
Fitzpatrick, Richard
Implementing an object-oriented error-sensitive GIS
Duckham, Matt
In the handling of uncertainty within GIS, the production of what has been described as an error-sensitive GIS presents opportunities, but also impediments to the implementation of such an error-sensitive GIS. An important barrier...
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
Robert Granat, Kiri ... We incorporate algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms for detecting and recovering from radiation-induced errors. A common hardware technique for achieving radiation protection...
Error Correction on a Tree: An Instanton Approach
Stepanov, Misha
We present a method for analytical or semianalytical estimation of the post-error-correction bit error rate (BER) when a forward-error-correction code is utilized for transmitting information through a noisy channel. The generic method applies to a variety...
Shota Kino; Taiki Nii; Holger F. Hofmann
2015-08-13
Joint measurements of non-commuting observables are characterized by unavoidable measurement uncertainties that can be described in terms of the error statistics for input states with well-defined values for the target observables. However, a complete characterization of measurement errors must include the correlations between the errors of the two observables. Here, we show that these correlations appear in the experimentally observable measurement statistics obtained by performing the joint measurement on maximally entangled pairs. For two-level systems, the results indicate that quantum theory requires imaginary correlations between the measurement errors of X and Y since these correlations are represented by the operator product XY=iZ in the measurement operators. Our analysis thus reveals a directly observable consequence of non-commutativity in the statistics of quantum measurements.
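The operator identity behind the imaginary error correlations can be checked directly with the Pauli matrices:

```python
import numpy as np

# Standard Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The operator product XY equals iZ, so error-correlation terms built
# from it are purely imaginary -- the non-commutativity the paper
# identifies in the joint-measurement statistics.
print(np.allclose(X @ Y, 1j * Z))  # True
```

Since YX = -iZ, the product depends on operator order, which is exactly what a classical (real, symmetric) error-correlation model cannot reproduce.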
New insights on numerical error in symplectic integration
Hugo Jiménez-Pérez; Jean-Pierre Vilotte; Barbara Romanowicz
2015-08-13
We implement and investigate the numerical properties of a new family of integrators connecting both variants of the symplectic Euler schemes, and including an alternative to the classical symplectic mid-point scheme, with some additional terms. This family is derived from a new method, introduced in a previous study, for generating symplectic integrators based on the concept of a special symplectic manifold. The use of symplectic rotations and a particular type of projection keeps the whole procedure within the symplectic framework. We show that it is possible to define a set of parameters that control the additional terms, providing a way of "tuning" these new symplectic schemes. We test the "tuned" symplectic integrators with the perturbed pendulum and compare their behavior with an explicit scheme for perturbed systems. Remarkably, for the given examples, the error in the energy integral can be reduced considerably. There is a natural geometrical explanation, sketched at the end of this paper; this is the subject of a parallel article where a finer analysis is performed. Numerical results obtained in this paper open a new point of view on symplectic integrators and Hamiltonian error.
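The bounded energy error that distinguishes symplectic schemes can already be seen with the plain symplectic Euler method on a pendulum; this is a generic sketch, not one of the paper's new "tuned" integrators:

```python
import math

def symplectic_euler(q, p, dt, steps):
    """Symplectic Euler for the pendulum H = p^2/2 - cos(q):
    kick p with the force at the current q, then drift q with the
    updated p. The energy error stays bounded instead of drifting."""
    for _ in range(steps):
        p -= dt * math.sin(q)  # force = -dH/dq = -sin(q)
        q += dt * p
    return q, p

def energy(q, p):
    return 0.5 * p * p - math.cos(q)

q0, p0 = 1.0, 0.0
E0 = energy(q0, p0)
q, p = symplectic_euler(q0, p0, 0.01, 100000)  # integrate to t = 1000
print(abs(energy(q, p) - E0) < 0.01)  # energy error stays O(dt), bounded
```

An explicit (non-symplectic) Euler step of the same size would show secular energy growth over this interval, which is the baseline against which "tuned" schemes reduce the energy error further.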
Aperiodic dynamical decoupling sequences in presence of pulse errors
Zhi-Hui Wang; V. V. Dobrovitski
2011-01-12
Dynamical decoupling (DD) is a promising tool for preserving the quantum states of qubits. However, small imperfections in the control pulses can seriously affect the fidelity of decoupling, and qualitatively change the evolution of the controlled system at long times. Using both analytical and numerical tools, we theoretically investigate the effect of pulse-error accumulation for two aperiodic DD sequences, the Uhrig DD (UDD) protocol [G. S. Uhrig, Phys. Rev. Lett. {\\bf 98}, 100504 (2007)], and the Quadratic DD (QDD) protocol [J. R. West, B. H. Fong and D. A. Lidar, Phys. Rev. Lett. {\\bf 104}, 130501 (2010)]. We consider the implementation of these sequences using the electron spins of phosphorus donors in silicon, where DD sequences are applied to suppress dephasing of the donor spins. The dependence of the decoupling fidelity on different initial states of the spins is the focus of our study. We investigate in detail the initial drop in the DD fidelity, and its long-term saturation. We also demonstrate that by applying the control pulses along different directions, the performance of QDD protocols can be noticeably improved, and explain the reason for such an improvement. Our results can be useful for future implementations of the aperiodic decoupling protocols, and for better understanding of the impact of errors on quantum control of spins.
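The aperiodic pulse instants of the UDD sequence follow Uhrig's closed-form expression t_j = T sin^2(j*pi/(2n+2)); a small sketch:

```python
import math

def udd_times(n, T):
    """Pulse instants of the n-pulse Uhrig DD (UDD) sequence on [0, T]:
    t_j = T * sin^2(j * pi / (2n + 2)), j = 1..n."""
    return [T * math.sin(j * math.pi / (2 * n + 2)) ** 2
            for j in range((1), n + 1)]

# For n = 2 the UDD instants coincide with the equidistant CPMG ones
print([round(t, 4) for t in udd_times(2, 1.0)])  # [0.25, 0.75]
```

For larger n the instants crowd toward the ends of the interval, which is what makes UDD aperiodic and is also why pulse errors accumulate differently than in periodic sequences.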
The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors
Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter [Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2, Canada; Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4, Canada; Department of Oncology, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2, Canada]
2010-07-15
Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets ({+-}1 mm in two banks, {+-}0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
Pollutant measurements Nils Mole, Finn Palmgren & Hao Zhang
Mole, Nils
We deal with measurement techniques and strategies appropriate to major pollutants in both air and water, and also with the effects of unavoidable measurement errors. Pollutant Measurements in Air The atmosphere is an important medium for transport and transformation of pollutants. Air pollutants can
QAM Adaptive Measurements Feedback Quantum Receiver Performance
Tian Chen; Ke Li; Yuan Zuo; Bing Zhu
2015-04-11
We theoretically study quantum receivers with adaptive measurement feedback for discriminating quadrature amplitude modulation (QAM) coherent states in terms of average symbol error rate. For the rectangular 16-QAM signal set, with different stages of adaptive measurements, the effects on the symbol error rate of realistic imperfection parameters, including the sub-unity quantum efficiency and the dark counts of on-off detectors, as well as the transmittance of the beam splitters and the mode mismatch factor between the signal and local oscillating fields, are separately investigated through Monte Carlo simulations. Using photon-number-resolving detectors (PNRD) instead of on-off detectors, all the effects on the symbol error rate due to the above four imperfections can be suppressed to a certain degree. The finite resolution and PNR capability of PNRDs are also considered. We find that for currently available technology, the receiver shows a reasonable gain over the standard quantum limit (SQL) with moderate stages.
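The Monte Carlo approach described above estimates symbol error rate by sampling random symbols and noisy detections. As a classical stand-in for that style of simulation (the quantum receiver model itself, with feedback stages and detector imperfections, is more involved), a sketch for rectangular 16-QAM over an additive-noise channel; `qam16_ser` and its defaults are illustrative assumptions:

```python
import numpy as np

def qam16_ser(snr_db, n_symbols=100_000, seed=0):
    """Monte Carlo symbol error rate for rectangular 16-QAM over an
    additive white Gaussian noise channel (illustrative only)."""
    rng = np.random.default_rng(seed)
    levels = np.array([-3, -1, 1, 3])
    sym_i = rng.choice(levels, n_symbols)          # in-phase symbols
    sym_q = rng.choice(levels, n_symbols)          # quadrature symbols
    es = 2 * np.mean(levels**2)                    # average symbol energy
    sigma = np.sqrt(es / (2 * 10**(snr_db / 10)))  # per-dimension noise std
    rx_i = sym_i + rng.normal(0, sigma, n_symbols)
    rx_q = sym_q + rng.normal(0, sigma, n_symbols)
    # minimum-distance decision: snap each axis to the nearest level
    dec_i = levels[np.abs(rx_i[:, None] - levels).argmin(axis=1)]
    dec_q = levels[np.abs(rx_q[:, None] - levels).argmin(axis=1)]
    return np.mean((dec_i != sym_i) | (dec_q != sym_q))

# the error rate falls as the signal-to-noise ratio rises
assert qam16_ser(20) < qam16_ser(10) < qam16_ser(0)
```

Detector imperfections of the kind listed in the abstract would enter such a simulation as additional random corruptions of the received samples.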
Mathur, Anuj
1994-01-01
In this work we study the pollution-error in the h-version of the finite element method and its effect on the local quality of a-posteriori error estimators. We show that the pollution-effect in an interior subdomain depends on the relationship...
Fossen, Haakon
Errors, 3rd printing. · Page 3, Fig. 1.2 has an error in the stratigraphic key: "Tertiary" should ... · ... "-amplitude" to "-wavelength". · Page 231, 6th and 3rd last lines of the page: add "Figure" in front of "19.5a ..."; and 3rd line: "three principal axes" (not two).
Coordinated joint motion control system with position error correction
Danko, George (Reno, NV)
2011-11-22
Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.
Error field penetration and locking to the backward propagating wave
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies w_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = w_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that real frequencies occur for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature, and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
A two-dimensional matrix correction for off-axis portal dose prediction errors
Bailey, Daniel W.; Kumaraswamy, Lalith; Bakhtiari, Mohammad; Podgorsak, Matthew B.
2013-05-15
Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. ['An effective correction algorithm for off-axis portal dosimetry errors,' Med. Phys. 36, 4089-4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Applying the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone.
As in the 1D correction case, the 2D algorithm leaves the portal dosimetry process virtually unchanged in the central portion of the detector, and thus these correction algorithms are not needed for centrally located fields of moderate size (at least, in the case of 6 MV beam energy). Conclusion: The 2D correction improves the portal dosimetry results for those fields for which the 1D correction proves insufficient, especially in the in-plane, off-axis regions of the detector. This 2D correction neglects the relatively smaller discrepancies that may be caused by backscatter from nonuniform machine components downstream from the detecting layer.
Nakatani, T.; Agui, A.; Yoshigoe, A.; Matsushita, T.; Takao, M.; Aoyagi, H.; Takeuchi, M.; Tanaka, H.
2004-05-12
We have developed a new method to extract only the orbit fluctuation caused by the changing magnetic field error of an insertion device (ID). This method consists of two main parts. (i) The orbit fluctuation is measured while modulating the error field of the ID, using the real-time beam position measuring system. (ii) The orbit fluctuation depending on the variation of the error field of the ID is extracted by a filter applying the Wavelet Transform. We call this approach the amplitude modulation method. This analysis technique was applied to measure the orbit fluctuation caused by the error field of the APPLE-2 type undulator (ID23) installed in the SPring-8 storage ring. We quantitatively measured two kinds of orbit fluctuation: the static term caused by the magnetic field error and the dynamic term caused by the eddy current on the ID23 chamber.
Atmospheric Dispersion Effects in Weak Lensing Measurements
Plazas, Andrés Alejandro; Bernstein, Gary
2012-10-01
The wavelength dependence of atmospheric refraction causes elongation of finite-bandwidth images along the elevation vector, which produces spurious signals in weak gravitational lensing shear measurements unless this atmospheric dispersion is calibrated and removed to high precision. Because astrometric solutions and PSF characteristics are typically calibrated from stellar images, differences between the reference stars' spectra and the galaxies' spectra will leave residual errors in both the astrometric positions (dr) and in the second moment (width) of the wavelength-averaged PSF (dv) for galaxies. We estimate the level of dv that will induce spurious weak lensing signals in PSF-corrected galaxy shapes that exceed the statistical errors of the DES and the LSST cosmic-shear experiments. We also estimate the dr signals that will produce unacceptable spurious distortions after stacking of exposures taken at different airmasses and hour angles. We also calculate the errors in the griz bands, and find that dispersion systematics, uncorrected, are up to 6 and 2 times larger in the g and r bands, respectively, than the requirements for the DES error budget, but can be safely ignored in the i and z bands. For the LSST requirements, the factors are about 30, 10, and 3 in the g, r, and i bands, respectively. We find that a simple correction linear in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r band for DES and the i band for LSST, but still as much as 5 times larger than the requirements for LSST r-band observations. More complex corrections will likely be able to reduce the systematic cosmic-shear errors below statistical errors for the LSST r band. But g-band effects remain large enough that it seems likely that induced systematics will dominate the statistical errors of both surveys, and cosmic-shear measurements should rely on the redder bands.
Measurement uncertainty analysis techniques applied to PV performance measurements
Wells, C.
1992-10-01
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and into tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. We define valid data as data having known and documented paths of origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.
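The analysis sketched above ultimately produces an interval about the measured value. A common concrete step is combining independent standard-uncertainty components in quadrature and applying a coverage factor, in the ISO GUM style; the function name and the sample numbers below are illustrative, not from this presentation:

```python
import math

def expanded_uncertainty(components, coverage_k=2.0):
    """Combine independent standard-uncertainty components in quadrature
    (root-sum-square) and apply a coverage factor k, giving the half-width
    of the interval believed to contain the true value."""
    combined = math.sqrt(sum(u**2 for u in components))
    return coverage_k * combined

# illustrative PV example: 1.0% calibration, 0.5% spectral-mismatch, and
# 0.3% repeatability standard uncertainties, k = 2 (~95% coverage)
u95 = expanded_uncertainty([1.0, 0.5, 0.3])
print(round(u95, 2))  # combined interval half-width, in percent
```

Pre-test analysis in this framework means running such a combination on estimated component uncertainties before the experiment, to check whether the stated objectives are achievable.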
Hinckley, C.M.
1994-01-01
The performance of Japanese products in the marketplace points to the dominant role of quality in product competition. Our focus is motivated by the tremendous pressure to improve conformance quality by reducing defects to previously unimaginable limits in the range of 1 to 10 parts per million. Toward this end, we have developed a new model of conformance quality that addresses each of the three principal defect sources: (1) variation, (2) human error, and (3) complexity. Although the role of variation in conformance quality is well documented, errors occur so infrequently that their significance is not well known. We have shown that statistical methods are not useful in characterizing and controlling errors, the most common source of defects. Excessive complexity is also a root source of defects, since it increases both errors and variation defects. A missing link in defining a global model has been the lack of a sound correlation between complexity and defects. We have used Design for Assembly (DFA) methods to quantify assembly complexity and have shown that assembly times can be described in terms of the Pareto distribution, in a clear exception to the Central Limit Theorem. Within individual companies we have found defects to be highly correlated with DFA measures of complexity in broad studies covering tens of millions of assembly operations. Applying the global concepts, we predicted that Motorola's Six Sigma method would only reduce defects by roughly a factor of two rather than by orders of magnitude, a prediction confirmed by Motorola's data. We have also shown that the potential defect rates of product concepts can be compared in the earliest stages of development. The global Conformance Quality Model has demonstrated that the best strategy for improvement depends upon the quality control strengths and weaknesses.
GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology
Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; et al
2015-05-11
The study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
Contagious error sources would need time travel to prevent quantum computation
Gil Kalai; Greg Kuperberg
2015-05-07
We consider an error model for quantum computing that consists of "contagious quantum germs" that can infect every output qubit when at least one input qubit is infected. Once a germ actively causes error, it continues to cause error indefinitely for every qubit it infects, with arbitrary quantum entanglement and correlation. Although this error model looks much worse than quasi-independent error, we show that it reduces to quasi-independent error with the technique of quantum teleportation. The construction, which was previously described by Knill, is that every quantum circuit can be converted to a mixed circuit with bounded quantum depth. We also consider the restriction of bounded quantum depth from the point of view of quantum complexity classes.
Ice thickness measurements by Raman scattering
Pershin, Sergey M; Klinkov, Vladimir K; Yulmetov, Renat N; Bunkin, Alexey F
2014-01-01
A compact Raman LIDAR system with a spectrograph was used for express ice thickness measurements. The difference between the Raman spectra of ice and liquid water is employed to locate the ice-water interface, while elastic scattering is used for air-ice surface detection. This approach yields an error of only 2 mm for an 80-mm-thick ice sample, indicating that it is a promising express noncontact thickness measurement technique for field experiments.
Method and apparatus for detecting timing errors in a system oscillator
Gliebe, Ronald J. (Library, PA); Kramer, William R. (Bethel Park, PA)
1993-01-01
A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
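The compare-against-a-delayed-copy scheme in this record can be sketched in software: sample the oscillator, delay the samples by a known amount, and flag any mismatch. A minimal illustration; the function and signal values are hypothetical, not the patented circuit:

```python
def timing_error(osc_samples, delay):
    """Flag a timing error when the live oscillator signal does not match
    its own copy delayed by a known number of samples -- a software sketch
    of the hardware compare-against-delayed-signal scheme."""
    reference = osc_samples[:-delay]   # delayed (generated) signal
    current = osc_samples[delay:]      # live oscillator signal
    return any(a != b for a, b in zip(current, reference))

# a clean square wave with period 4 matches itself delayed by one period
clean = [0, 0, 1, 1] * 8
assert not timing_error(clean, 4)

# a single glitched sample breaks the match and is detected
glitched = list(clean)
glitched[10] ^= 1
assert timing_error(glitched, 4)
```

In the hardware version the mismatch signal would latch an indicator (the LED mentioned above) rather than return a boolean.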
Plasma dynamics and a significant error of macroscopic averaging
Marek A. Szalek
2005-05-22
The methods of macroscopic averaging used to derive the macroscopic Maxwell equations from electron theory are methodologically incorrect and lead in some cases to a substantial error. For instance, these methods do not take into account the existence of a macroscopic electromagnetic field EB, HB generated by carriers of electric charge moving in a thin layer adjacent to the boundary of the physical region containing these carriers. If this boundary is impenetrable for charged particles, then in its immediate vicinity all carriers are accelerated towards the inside of the region. The existence of the privileged direction of acceleration results in the generation of the macroscopic field EB, HB. The contributions to this field from individual accelerated particles are described with a sufficient accuracy by the Lienard-Wiechert formulas. In some cases the intensity of the field EB, HB is significant not only for deuteron plasma prepared for a controlled thermonuclear fusion reaction but also for electron plasma in conductors at room temperatures. The corrected procedures of macroscopic averaging will induce some changes in the present form of plasma dynamics equations. The modified equations will help to design improved systems of plasma confinement.
Aperiodic dynamical decoupling sequences in presence of pulse errors
Wang, Zhi-Hui
2011-01-01
Dynamical decoupling (DD) is a promising tool for preserving the quantum states of qubits. However, small imperfections in the control pulses can seriously affect the fidelity of decoupling, and qualitatively change the evolution of the controlled system at long times. Using both analytical and numerical tools, we theoretically investigate the effect of pulse error accumulation for two aperiodic DD sequences: the Uhrig DD (UDD) protocol [G. S. Uhrig, Phys. Rev. Lett. 98, 100504 (2007)] and the quadratic DD (QDD) protocol [J. R. West, B. H. Fong and D. A. Lidar, Phys. Rev. Lett. 104, 130501 (2010)]. We consider the implementation of these sequences using the electron spins of phosphorus donors in silicon, where DD sequences are applied to suppress dephasing of the donor spins. The dependence of the decoupling fidelity on different initial states of the spins is the focus of our study. We investigate in detail the initial drop in the DD fidelity and its long-term saturation. We also demonstra...
The Importance of Run-time Error Detection. Glenn R. Luecke
Luecke, Glenn R.
Iowa State University's High Performance Computing Group, Iowa State University, Ames, Iowa 50011, USA; ... for evaluating run-time error detection capabilities
A Key Recovery Attack on Error Correcting Code Based a Lightweight Security Protocol
International Association for Cryptologic Research (IACR)
RFID technology has become prevalent in various fields. Manufacturing, supply chain management and inventory control are some ... Keywords: authentication, error correcting coding, lightweight, privacy, RFID, security.
Ulidowski, Irek
Eccentricity Error Correction for Automated Estimation of Polyethylene Wear after Total Hip. Wire markers are typically attached to the polyethylene acetabular component of the prosthesis so
Choose and choose again: appearance-reality errors, pragmatics and logical ability
Deák, Gedeon O; Enright, Brian
2006-01-01
Development, 62, 753–766. Speer, J.R. (1984). Two practical ... older still make errors (e.g. Speer, 1984), some preschool ...
Neutron Soft Errors in Xilinx FPGAs at Lawrence Berkeley National Laboratory
George, Jeffrey S.
2008-01-01
"Quasi-Monoenergetic Neutron Beam from Deuteron Breakup", in experiments of atmospheric neutron effects on deep sub- ... Neutron Soft Errors in Xilinx FPGAs at Lawrence Berkeley
Threshold analysis with fault-tolerant operations for nonbinary quantum error correcting codes
Kanungo, Aparna
2005-11-01
An expression to compute the gate error threshold for nonbinary quantum codes is derived and tested for different classes of codes, to find codes with the best threshold results.
Unambiguous discrimination of extremely similar states by a weak measurement
Chang Qiao; Shengjun Wu; Zeng-Bing Chen
2013-02-25
In this paper, we propose a method to discriminate two extremely similar quantum states via a weak measurement. For two states with equal prior probabilities, the optimum discrimination probability given by the Ivanovic-Dieks-Peres limit can be achieved by our protocol with an appropriate choice of the interaction strength. However, compared with the conventional method for state discrimination, our approach shows the advantage of error tolerance by achieving a better ratio of the success probability to the probability of error.
Mapping GPS positional errors with spatial linear mixed models
Militino, A. F.
Nowadays, GPS receivers are very reliable because of their good accuracy and precision; however, uncertainty is also inherent in geospatial data. Quality of GPS measurements can be influenced by atmospheric disturbances, ...
A LIDAR-based crop height measurement system for Miscanthus giganteus Lei Zhang, Tony E. Grift
... stem densities. The results showed an average error of 5.08%, with a maximum error of 8% and a minimum of ... bioenergy crop performance. Field crops such as corn and soybean are harvested for their seeds, and various flow measurements ... However, in the case of bioenergy crops, the complete above-ground plant ...
Adam Miranowicz; Sahin K. Ozdemir; Jiri Bajer; Go Yusa; Nobuyuki Imoto; Yoshiro Hirayama; Franco Nori
2014-10-09
We discuss methods of quantum state tomography for solid-state systems with a large nuclear spin $I=3/2$ in nanometer-scale semiconductor devices based on a quantum well. Due to quadrupolar interactions, the Zeeman levels of these nuclear-spin devices become nonequidistant, forming a controllable four-level quantum system (known as quartit or ququart). The occupation of these levels can be selectively and coherently manipulated by multiphoton transitions using the techniques of nuclear magnetic resonance (NMR) [Yusa et al., Nature (London) 434, 101 (2005)]. These methods are based on an unconventional approach to NMR, where the longitudinal magnetization $M_z$ is directly measured. This is in contrast to the standard NMR experiments and tomographic methods, where the transverse magnetization $M_{xy}$ is detected. The robustness against errors in the measured data is analyzed by using condition numbers. We propose several methods with optimized sets of rotations. The optimization is applied to decrease the number of NMR readouts and to improve the robustness against errors, as quantified by condition numbers. An example of state reconstruction, using Monte Carlo methods, is presented. Tomographic methods for quadrupolar nuclei with higher-spin numbers (including $I=7/2$) are also described.
A Direct Measure of Entrainment DAVID M. ROMPS
Romps, David M.
Harvard University, Cambridge, Massachusetts. ... for directly measuring convective entrainment and detrainment in a cloud-resolving simulation. This technique is used to quantify the errors in the entrainment and detrainment estimates obtained using the standard ...
A nonideal error-field response model for strongly shaped tokamak plasmas R. Fitzpatrick
Fitzpatrick, Richard
Citation: Phys. Plasmas 21, 092513 (2014); 10.1063/1.4896244. Related: ... of a rotating tokamak plasma to a resonant error-field; Kinetic description of rotating tokamak plasmas with anisotropic temperatures in the collisionless regime
Upper Bounds on ErrorCorrecting RunlengthLimited Block Codes
Ytrehus, Øyvind
Inf. Th., May 1991, pp. 941-945. Abstract: Upper bounds are derived on the number of codewords ... on the size of (d, k)-constrained, simple-error correcting block codes. Keywords: runlength-limited codes, error correction. This work was supported by the Norwegian Research Council for Science ... There are two directions in which one
Finite Element Approximation of the Acoustic Wave Equation: Error Control and Mesh Adaptation
Bangerth, Wolfgang
Wolfgang Bangerth and Rolf Rannacher, Institute ..., iwr.uni-heidelberg.de. Abstract: We present an approach to solving the acoustic wave equation by adaptive finite element methods.
Potential Hydraulic Modelling Errors Associated with Rheological Data Extrapolation in Laminar Flow
Shadday, Martin A., Jr.
1997-03-20
The potential errors associated with the modelling of flows of non-Newtonian slurries through pipes, due to inadequate rheological models and extrapolation outside of the ranges of data bases, are demonstrated. The behaviors of both dilatant and pseudoplastic fluids with yield stresses, and the errors associated with treating them as Bingham plastics, are investigated.
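The fluid classes named above (dilatant and pseudoplastic with yield stress) are commonly described by the Herschel-Bulkley model, which reduces to a Bingham plastic when the flow index n = 1. A small sketch of the extrapolation error the abstract warns about, with illustrative parameter values (not from the report):

```python
def herschel_bulkley(shear_rate, tau_y, k, n):
    """Shear stress tau = tau_y + k * (shear rate)**n. n < 1 gives a
    pseudoplastic (shear-thinning) fluid, n > 1 a dilatant one, and
    n = 1 recovers the Bingham-plastic model."""
    return tau_y + k * shear_rate**n

# hypothetical pseudoplastic slurry: yield stress 5 Pa, k = 2, n = 0.6
tau_y, k, n = 5.0, 2.0, 0.6

# fit a Bingham plastic to a single low-shear-rate data point...
g_fit = 10.0
bingham_slope = (herschel_bulkley(g_fit, tau_y, k, n) - tau_y) / g_fit

# ...then extrapolate far outside the data base: the linear Bingham
# model overpredicts the stress of the shear-thinning fluid
g_hi = 100.0
assert tau_y + bingham_slope * g_hi > herschel_bulkley(g_hi, tau_y, k, n)
```

For a dilatant fluid (n > 1) the same Bingham extrapolation would instead underpredict the stress, which is the mirror-image modelling error.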
Low-voltage, low-power, low switching error, class-AB switched current
Serdijn, Wouter A.
Low-voltage, low-power, low switching error, class-AB switched current memory cell C. Sawigun and W ... into two components by a low-voltage class-AB current splitter and subsequently processes the individual signals by two low switching error class-A memory cells. As a consequence, the output current obtained
Using system simulation to model the impact of human error in a maritime system
van Dorp, Johan René
the modeling of human error related accident event sequences in a risk assessment of maritime oil ... A framework was developed for the Prince William Sound Risk Assessment based on interviews with maritime ... William Sound; Human error; Maritime accidents; Expert judgement; Risk assessment; Risk management
Convergence Analysis of the LMS Algorithm with a General Error Nonlinearity and an IID Input
Al-Naffouri, Tareq Y.
Convergence Analysis of the LMS Algorithm with a General Error Nonlinearity and an IID Input Tareq ... of Electrical Eng. Abstract The class of least mean square (LMS) algorithms employing a general error ... are entirely consistent with those of the LMS algorithm and several of its variants. The results also
Al-Naffouri, Tareq Y.
The Optimum Error Nonlinearity in LMS Adaptation with an Independent and Identically Distributed, CA 94305 Dhahran 31261 USA Saudi Arabia Abstract The class of LMS algorithms employing a general view of error nonlinearities in LMS adaptation. In particular, it subsumes two recently developed
Outage Probability for Free-Space Optical Systems Over Slow Fading Channels With Pointing Errors
Hranilovic, Steve
Outage Probability for Free-Space Optical Systems Over Slow Fading Channels With Pointing Errors, Canada. Email: farid@grads.ece.mcmaster.ca, hranilovic@mcmaster.ca Abstract-- We investigate the outage ... errors. An expression for the outage probability is derived and we show that optimizing the transmitted
Object calculus and the object-oriented analysis and design of an error-sensitive GIS
Duckham, Matt
Object calculus and the object-oriented analysis and design of an error-sensitive GIS MATT DUCKHAM of an error-sensitive GIS Abstract. The use of object-oriented analysis and design (OOAD) in GIS research of the key contemporary issues in GIS. This paper examines the application of one particular OO formalism
State preservation by repetitive error detection in a superconducting quantum circuit J. Kelly et al.
Martinis, John M.
State preservation by repetitive error detection in a superconducting quantum circuit J. Kelly et al. ... and superconducting circuits [11-13] have demonstrated multi-qubit states that are first-order tolerant to one type of error. Recently, experiments with ion traps and superconducting circuits have shown the simultaneous de
Mitigating FPGA Interconnect Soft Errors by In-Place LUT Inversion
He, Lei
, power and performance. Recent logic re-synthesis techniques, such as ROSE [2], IPR [3], IPD [4] and R2 ... Mitigating FPGA Interconnect Soft Errors by In-Place LUT Inversion Naifeng Jing, Ju-Yueh Lee ... the Soft Error Rate (SER) at chip level, and reveal a locality and NP-Hardness of the IPV problem. We
An Energy-Aware Fault Tolerant Scheduling Framework for Soft Error Resilient Cloud Computing Systems
Pedram, Massoud
An Energy-Aware Fault Tolerant Scheduling Framework for Soft Error Resilient Cloud Computing ... has drastically increased their susceptibility to soft errors ... outputs or system crash. At the grand scale of cloud computing, this problem can only worsen [2, 3, 4, 5
A test for systematic errors in 40Ar/39Ar geochronology
Min, Kyoungwon
dating arise from uncertainties in the 40K decay constants and K/Ar isotopic data for neutron fluence monitors (standards). The activity data underlying the decay constants used in geochronology since 1977 ... These studies have shown that systematic errors outweigh typical analytical errors by at least one order
TYPOGRAPHICAL AND ORTHOGRAPHICAL SPELLING ERROR Kyongho Min*, William H. Wilson*, Yoo-Jin Moon
Wilson, Bill
Yoo-Jin Moon *School of Computer Science and Engineering, The University of New South Wales, Sydney NSW 2052 ... of spelling errors such as typographical (Damerau, 1964; Pollock and Zamora, 1983), orthographical (Sterling) ... and orthographical errors in spontaneous writings of children (Sterling, 1983; Mitton, 1987). 1.2. Approaches
A Case for Soft Error Detection and Correction in Computational Chemistry
van Dam, Hubertus JJ; Vishnu, Abhinav; De Jong, Wibe A.
2013-09-10
High performance computing platforms are expected to deliver 10^18 floating point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures becomes so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
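The abstract's detect-and-correct idea can be illustrated with a minimal sketch (this is not the authors' Hartree-Fock implementation; the solver, the range check, and all tolerances are assumptions): an iterative solver treats a physically implausible intermediate value as a suspected soft error and redoes the step.

```python
import math

def soft_error_aware_iterate(f, x0, lo, hi, tol=1e-10, max_iter=200):
    """Fixed-point iteration with a range check: a value outside [lo, hi]
    is treated as suspected silent data corruption and the step is redone."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if not (lo <= x_new <= hi):   # detection: implausible intermediate value
            x_new = f(x)              # correction: recompute the step
        if abs(x_new - x) < tol:      # converged
            return x_new
        x = x_new
    return x

# Converges to the fixed point of cos(x), approximately 0.739085.
root = soft_error_aware_iterate(math.cos, 1.0, -1.0, 1.0)
```

For deterministic arithmetic the check never fires here; in the presence of transient hardware faults, recomputing the step would replace a corrupted value with a correct one, which mirrors the paper's observation that small errors are absorbed by the iteration while large ones must be caught explicitly.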
A Non-Stationary Errors-in-Variables Method with Application to Mineral Exploration
Braslavsky, Julio H.
A Non-Stationary Errors-in-Variables Method with Application to Mineral Exploration K. Lau, J. H. ... cancellation in transient electromagnetic mineral exploration. Alternative methods for noise cancellation in these systems ... for this class of systems is proposed and applied to a problem arising in mineral exploration. An errors
Presenting JECA: A Java Error Correcting Algorithm for the Java Intelligent Tutoring System
Franek, Frantisek
Presenting JECA: A Java Error Correcting Algorithm for the Java Intelligent Tutoring System Edward context involving small Java programs. Furthermore, this paper presents JECA (Java Error Correction is to provide a foundation for the Java Intelligent Tutoring System (JITS) currently being field-tested. Key
A POSTERIORI ERROR ANALYSIS OF THE LINKED INTERPOLATION TECHNIQUE FOR PLATE BENDING PROBLEMS
Lovadina, Carlo
A POSTERIORI ERROR ANALYSIS OF THE LINKED INTERPOLATION TECHNIQUE FOR PLATE BENDING PROBLEMS CARLO LOVADINA ... `Linked Interpolation Technique' to approximate the solution of plate bending problems. We show that the proposed ... 1. Introduction. In this paper we present an a posteriori error analysis for the so-called `Linked
Integrated Control-Path Design and Error Recovery in the Synthesis of Digital
Chakrabarty, Krishnendu
Integrated Control-Path Design and Error Recovery in the Synthesis of Digital Microfluidic Lab ... that incorporates control paths and an error-recovery mechanism in the design of a digital microfluidic lab ... compared to a baseline chip design, the biochip with a control path can reduce the completion time by 30
ERROR BOUNDS FOR MONOTONE APPROXIMATION SCHEMES FOR HAMILTON-JACOBI-BELLMAN EQUATIONS
ERROR BOUNDS FOR MONOTONE APPROXIMATION SCHEMES FOR HAMILTON-JACOBI-BELLMAN EQUATIONS GUY BARLES AND ESPEN R. JAKOBSEN Abstract. We obtain error bounds for monotone approximation schemes of Hamilton-Jacobi-Bellman equations ... (almost) smooth supersolutions for the Hamilton-Jacobi-Bellman equation. 1. Introduction This paper
AN ADAPTIVE METHOD WITH RIGOROUS ERROR CONTROL FOR THE HAMILTON-JACOBI EQUATIONS.
AN ADAPTIVE METHOD WITH RIGOROUS ERROR CONTROL FOR THE HAMILTON-JACOBI EQUATIONS. PART II: THE TWO adaptive method with rigorous error control for the Hamilton-Jacobi equations. Part II: The two and study an adaptive method for finding approximations to the viscosity solution of Hamilton-Jacobi
PROBABILITY OF ERROR FOR TRAINED UNITARY SPACE-TIME MODULATION OVER A
Swindlehurst, A. Lee
PROBABILITY OF ERROR FOR TRAINED UNITARY SPACE-TIME MODULATION OVER A GAUSS-INNOVATIONS RICIAN ... probability of error for trained unitary space-time modulation over channels with a constant specular ... trained modulation, assuming that the channel is constant between training periods. All of the above
Characterization of the Impact of Indoor Doppler Errors on Pedestrian Dead Reckoning
Calgary, University of
Characterization of the Impact of Indoor Doppler Errors on Pedestrian Dead Reckoning Valérie, University of Calgary 2500 University Drive NW Calgary, Alberta, Canada, T2N 1N4 Abstract--Indoor pedestrian on a Pedestrian Dead Reckoning (PDR) navigation filter is investigated. Doppler errors are simulated using
IEEE SENSORS JOURNAL, VOL. 3, NO. 5, OCTOBER 2003 595 Active Structural Error Suppression in MEMS
Chen, Zhongping
-run perturbations are presented. Index Terms--Error suppression, microelectromechanical systems (MEMS), rate integrating gyroscopes, smart MEMS. I. INTRODUCTION As microelectromechanical systems (MEMS) inertial sensors
Almasi, Gheorghe (Ardsley, NY); Blumrich, Matthias Augustin (Ridgefield, CT); Chen, Dong (Croton-On-Hudson, NY); Coteus, Paul (Yorktown, NY); Gara, Alan (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Heidelberger, Philip (Cortlandt Manor, NY); Hoenicke, Dirk I. (Ossining, NY); Singh, Sarabjeet (Mississauga, CA); Steinmacher-Burow, Burkhard D. (Wernau, DE); Takken, Todd (Brewster, NY); Vranas, Pavlos (Bedford Hills, NY)
2008-06-03
Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values (for example, checksums) to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieve commutative error detection values associated with the node and store them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
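The comparison step described in the abstract can be sketched as follows (a hedged illustration, not the patented apparatus; using XOR of CRC32 values as one possible commutative error-detection value, so the result is independent of message ordering within a node):

```python
import zlib

def node_checksums(messages_by_node):
    """Commutative error-detection value per node: XOR of CRC32s of the
    messages a node injected, insensitive to message ordering."""
    sums = {}
    for node, msgs in messages_by_node.items():
        acc = 0
        for m in msgs:
            acc ^= zlib.crc32(m)
        sums[node] = acc
    return sums

def faulty_nodes(run_a, run_b):
    """Nodes whose checksums differ between two runs of the same
    reproducible program portion are flagged as possibly faulty."""
    a, b = node_checksums(run_a), node_checksums(run_b)
    return sorted(n for n in a if a[n] != b.get(n))

run1 = {0: [b"alpha", b"beta"], 1: [b"gamma"]}
run2 = {0: [b"beta", b"alpha"], 1: [b"gamm@"]}   # node 1 corrupted one byte
assert faulty_nodes(run1, run2) == [1]           # node 0's reordering is harmless
```

Commutativity is the key design point: network traffic from a reproducible program portion may arrive in a different order on each run, so the per-node value must not depend on ordering.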
Ginting, Victor
2014-03-15
it was demonstrated that a posteriori analyses in general and in particular one that uses adjoint methods can accurately and efficiently compute numerical error estimates and sensitivity for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities include: analysis and implementation of several time integration techniques for solving system of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solution of the second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of the residual-based a posteriori error estimates to symmetric interior penalty discontinuous Galerkin method for solving a class of second order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for system of linear ordinary differential equations; derivation of accurate a posteriori goal oriented error estimates for a user-defined quantity of interest for two classes of first and second order IMEX schemes for advection-diffusion-reaction problems; Postprocessing finite element solution; and A Bayesian Framework for Uncertain Quantification of Porous Media Flows.
Out-of-plane ultrasonic velocity measurement
Hall, Maclin S. (Marietta, GA); Brodeur, Pierre H. (Smyrna, GA); Jackson, Theodore G. (Atlanta, GA)
1998-01-01
A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated.
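The trigger-jitter idea above can be sketched numerically (a hedged illustration, not the patented method; the waveform, sampling rate, and delay are invented): cross-correlating a transmitted pulse with its echo captured in the same sample period yields a delay estimate from which any common trigger offset cancels.

```python
import numpy as np

def time_of_flight(tx, echo, fs):
    """Estimate the delay (s) between a transmitted pulse and its echo
    captured in the same sample period via cross-correlation."""
    xc = np.correlate(echo, tx, mode="full")
    lag = int(np.argmax(xc)) - (len(tx) - 1)   # delay in samples
    return lag / fs

fs = 1_000_000.0                               # 1 MHz sampling rate (assumed)
t = np.arange(200) / fs
pulse = np.sin(2 * np.pi * 50_000 * t[:40]) * np.hanning(40)
tx = np.zeros(200)
tx[10:50] = pulse                              # transmitted pulse
echo = np.zeros(200)
echo[85:125] = 0.5 * pulse                     # echo arrives 75 samples later
tof = time_of_flight(tx, echo, fs)             # 75 samples -> 75 microseconds
```

Because both signals share one digitizer trigger, any jitter shifts them identically and drops out of the lag estimate, which is the point the patent makes about same-sample-period cross-correlation.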
Walker, Scottie Wayne
1995-01-01
suffer from inherent statistical errors. Therefore, Monte Carlo estimates of absorbed dose should always, whenever possible, be compared to actual measurements to verify their accuracy. This study concentrated on the use of a gelatin-based volumetric...
Measurement calibration/tuning & topology processing in power system state estimation
Zhong, Shan
2005-02-17
State estimation plays an important role in modern power systems. The errors in the telemetered measurements and the connectivity information of the network will greatly contaminate the estimated system state. This dissertation provides solutions...
The accuracy of miniature bead thermistors in the measurement of upper air temperature
Thompson, Donald C. (Donald Charles), 1933-
1967-01-01
A laboratory study was made of the errors of miniature bead thermistors of 5, 10, and 15 mils nominal diameter when used for the measurement of atmospheric temperature. Although the study was primarily concerned with the ...
Measurement uncertainty analysis techniques applied to PV performance measurements
Wells, C.
1992-10-01
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. We define valid data as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.
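The kind of analysis described here typically reduces to root-sum-square propagation of elemental uncertainties through the measurement equation; a minimal sketch (the PV power example and every number in it are assumed for illustration, not taken from the presentation):

```python
import math

def combined_uncertainty(sensitivities, uncertainties):
    """Root-sum-square propagation: u_R = sqrt(sum((dR/dx_i * u_i)**2)),
    where the sensitivities are partial derivatives of the result R
    with respect to each measured input x_i."""
    return math.sqrt(sum((s * u) ** 2 for s, u in zip(sensitivities, uncertainties)))

# Hypothetical PV power measurement: P = V * I, so dP/dV = I and dP/dI = V.
V, I = 20.0, 5.0          # volts, amps (assumed readings)
u_V, u_I = 0.1, 0.05      # standard uncertainties of the readings (assumed)
u_P = combined_uncertainty([I, V], [u_V, u_I])   # combined uncertainty, ~1.118 W
```

The pre-test use mentioned in the abstract is exactly this calculation run before the experiment: if u_P comes out larger than the stated objective, the instrument budget (u_V, u_I) must change.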
Modern Palliative Radiation Treatment: Do Complexity and Workload Contribute to Medical Errors?
D'Souza, Neil; Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario ; Holden, Lori; Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario ; Robson, Sheila; Mah, Kathy; Di Prospero, Lisa; Wong, C. Shun; Chow, Edward; Spayne, Jacqueline; Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario
2012-09-01
Purpose: To examine whether treatment workload and complexity associated with palliative radiation therapy contribute to medical errors. Methods and Materials: In the setting of a large academic health sciences center, patient scheduling and record and verification systems were used to identify patients starting radiation therapy. All records of radiation treatment courses delivered during a 3-month period were retrieved and divided into radical and palliative intent. 'Same day consultation, planning and treatment' was used as a proxy for workload and 'previous treatment' and 'multiple sites' as surrogates for complexity. In addition, all planning and treatment discrepancies (errors and 'near-misses') recorded during the same time frame were reviewed and analyzed. Results: There were 365 new patients treated with 485 courses of palliative radiation therapy. Of those patients, 128 (35%) were same-day consultation, simulation, and treatment patients; 166 (45%) patients had previous treatment; and 94 (26%) patients had treatment to multiple sites. Four near-misses and 4 errors occurred during the audit period, giving an error per course rate of 0.82%. In comparison, there were 10 near-misses and 5 errors associated with 1100 courses of radical treatment during the audit period. This translated into an error rate of 0.45% per course. An association was found between workload and complexity and increased palliative therapy error rates. Conclusions: Increased complexity and workload may have an impact on palliative radiation treatment discrepancies. This information may help guide the necessary recommendations for process improvement for patients who require palliative radiation therapy.
Evans, Suzanne B.; Yu, James B.; Chagpar, Anees
2012-10-01
Purpose: To analyze error disclosure attitudes of radiation oncologists and to correlate error disclosure beliefs with survey-assessed disclosure behavior. Methods and Materials: With institutional review board exemption, an anonymous online survey was devised. An email invitation was sent to radiation oncologists (American Society for Radiation Oncology [ASTRO] gold medal winners, program directors and chair persons of academic institutions, and former ASTRO lecturers) and residents. A disclosure score was calculated based on the number of full, partial, or no disclosure responses chosen to the vignette-based questions, and correlation was attempted with attitudes toward error disclosure. Results: The survey received 176 responses: 94.8% of respondents considered themselves more likely to disclose in the setting of a serious medical error; 72.7% of respondents did not feel it mattered who was responsible for the error in deciding to disclose, and 3.9% felt more likely to disclose if someone else was responsible; 38.0% of respondents felt that disclosure increased the likelihood of a lawsuit, and 32.4% felt disclosure decreased the likelihood of lawsuit; 71.6% of respondents felt near misses should not be disclosed; 51.7% thought that minor errors should not be disclosed; 64.7% viewed disclosure as an opportunity for forgiveness from the patient; and 44.6% considered the patient's level of confidence in them to be a factor in disclosure. For a scenario that could be considered a non-harmful error, 78.9% of respondents would not contact the family. Respondents with high disclosure scores were more likely to feel that disclosure was an opportunity for forgiveness (P=.003) and to have never seen major medical errors (P=.004). Conclusions: The surveyed radiation oncologists chose to respond with full disclosure at a high rate, although ideal disclosure practices were not uniformly adhered to beyond the initial decision to disclose the occurrence of the error.
Performance and Error Analysis of Knill's Postselection Scheme in a Two-Dimensional Architecture
Ching-Yi Lai; Gerardo Paz; Martin Suchara; Todd A. Brun
2013-05-31
Knill demonstrated a fault-tolerant quantum computation scheme based on concatenated error-detecting codes and postselection with a simulated error threshold of 3% over the depolarizing channel. We show how to use Knill's postselection scheme in a practical two-dimensional quantum architecture that we designed with the goal of optimizing the error correction properties, while satisfying important architectural constraints. In our 2D architecture, one logical qubit is embedded in a tile consisting of $5\times 5$ physical qubits. The movement of these qubits is modeled as noisy SWAP gates and the only physical operations that are allowed are local one- and two-qubit gates. We evaluate the practical properties of our design, such as its error threshold, and compare it to the concatenated Bacon-Shor code and the concatenated Steane code. Assuming that all gates have the same error rates, we obtain a threshold of $3.06\times 10^{-4}$ in a local adversarial stochastic noise model, which is the highest known error threshold for concatenated codes in 2D. We also present a Monte Carlo simulation of the 2D architecture with depolarizing noise and we calculate a pseudo-threshold of about 0.1%. With memory error rates one-tenth of the worst gate error rates, the threshold for the adversarial noise model and the pseudo-threshold over depolarizing noise are $4.06\times 10^{-4}$ and 0.2%, respectively. In a hypothetical technology where memory error rates are negligible, these thresholds can be further increased by shrinking the tiles into a $4\times 4$ layout.
Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results
Clark, E.L.
1994-07-01
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M{infinity}, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
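A sketch of the Taylor-series model for the simplest such quantity, a ratio R = x/y (the pressure values here are invented for illustration; the report's actual sensitivity coefficients additionally involve the free-stream Mach number):

```python
import math

def ratio_uncertainty(x, u_x, y, u_y):
    """First-order Taylor-series propagation for R = x / y.
    The sensitivity coefficients are dR/dx = 1/y and dR/dy = -x/y**2,
    and the elemental contributions combine in root-sum-square."""
    r = x / y
    u_r = math.sqrt((u_x / y) ** 2 + (x * u_y / y ** 2) ** 2)
    return r, u_r

# Hypothetical pressure ratio: p = 52.0 +/- 0.3 kPa over p_ref = 101.3 +/- 0.2 kPa.
r, u_r = ratio_uncertainty(52.0, 0.3, 101.3, 0.2)
```

Evaluating the partial derivatives once and tabulating them, as the report does for its five fundamental ratios, lets the same propagation formula be reused across all test conditions.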
Sample size in factor analysis: The role of model error
MacCallum, R. C.; Widaman, K. F.; Preacher, Kristopher J.; Hong, Sehee
2001-01-01
Equation 2: Σ_yy = ΛΦΛ′ + Θ², where Σ_yy is the p × p population covariance matrix for the measured variables, Λ is the p × r matrix of factor loadings, Φ is the r × r population correlation matrix for the common factors, and Θ² is the diagonal matrix of unique variances (assuming factors are standardized in the population). This is the standard version of the common factor model for a population covariance matrix. Following similar algebraic procedures, we could derive a structure for a sample covariance matrix, C_yy. However, in such a derivation we can...
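A numerical sketch of the common factor model Σ_yy = ΛΦΛ′ + Θ² (the loadings, the single-factor structure, and the unique variances are invented for illustration):

```python
import numpy as np

# Common factor model with p = 3 measured variables and r = 1 factor.
Lam = np.array([[0.8], [0.7], [0.6]])      # p x r factor loadings (assumed)
Phi = np.array([[1.0]])                    # r x r factor correlation matrix
Theta2 = np.diag([0.36, 0.51, 0.64])       # diagonal unique variances (assumed)

Sigma = Lam @ Phi @ Lam.T + Theta2         # implied population covariance matrix

# With standardized variables the diagonal of Sigma is 1, since each
# communality plus its unique variance sums to 1 (e.g., 0.8**2 + 0.36 = 1).
```

Model error in the article's sense enters when the population Σ_yy does not exactly follow this structure, so even infinite samples would leave a nonzero discrepancy.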
Error Detection Techniques Applicable in an Architecture Framework and Design Methodology for
Ould Ahmedou, Mohameden
/environmental variations and external radiation causing so-called soft-errors. Overall, these trends result in a severe ... in analogy to the IP library of the functional layer shall eventually represent an autonomic IP library (AE
A CHARACTERISTIC GALERKIN METHOD WITH ADAPTIVE ERROR CONTROL FOR THE CONTINUOUS CASTING PROBLEM
A CHARACTERISTIC GALERKIN METHOD WITH ADAPTIVE ERROR CONTROL FOR THE CONTINUOUS CASTING PROBLEM ... Engrg. (to appear) Abstract. The continuous casting problem is a convection-dominated nonlinearly ... continuous casting, method of characteristics, convection-dominated diffusion, degenerate parabolic
Notes on Human Error Analysis and
calibration and testing as found in the US Licensee Event Reports. Available on request from Risø Library. JUDGEMENT; "HUMAN ERROR" - DEFINITION AND CLASSIFICATION; RELIABILITY AND SAFETY ANALYSIS; HUMAN FACTORS
From prediction error to incentive salience: mesolimbic computation of reward motivation
Berridge, Kent
From prediction error to incentive salience: mesolimbic computation of reward motivation Kent C. Berridge ... separable psychological components of learning, incentive motivation and pleasure. Most computational models have focused only on the learning component of reward, but the motivational component is equally
Benestad, R E
2013-01-01
Comment on Scafetta, Nicola. 'Discussion on Common Errors in Analyzing Sea Level Accelerations, Solar Trends and Global Warming.' arXiv:1305.2812 (May 13, 2013a). doi:10.5194/prp-1-37-2013.
Grid-search event location with non-Gaussian error models
Rodi, William L.
This study employs an event location algorithm based on grid search to investigate the possibility of improving seismic event location accuracy by using non-Gaussian error models. The primary departure from the Gaussian ...
Ability of stabilizer quantum error correction to protect itself from its own imperfection
Yuichiro Fujiwara
2014-12-02
The theory of stabilizer quantum error correction allows us to actively stabilize quantum states and simulate ideal quantum operations in a noisy environment. It is critical to correctly diagnose noise from its syndrome and nullify it accordingly. However, hardware that performs quantum error correction itself is inevitably imperfect in practice. Here, we show that stabilizer codes possess a built-in capability of correcting errors not only on quantum information but also on faulty syndromes extracted by themselves. Shor's syndrome extraction for fault-tolerant quantum computation is naturally improved. This opens a path to realizing the potential of stabilizer quantum error correction hidden within an innocent-looking choice of generators and stabilizer operators that have been deemed redundant.
Development of methodology to correct sampling error associated with FRM PM10 samplers
Chen, Jing
2009-05-15
to correct the sampling error associated with the FRM PM10 sampler: (1) wind tunnel testing facilities and protocol for experimental evaluation of samplers; (2) the variation of the oversampling ratios of FRM PM10 samplers for computational evaluation...
Copyright 2007 Psychonomic Society, Inc. Cross-task individual differences in error
Curran, Tim
, Arizona, and Christopher D'Lauro and Tim Curran, University of Colorado, Boulder, Colorado. The error ... including the online detection and bias (positive learners; Frank, Woroch, & Curran, 2005) ... correction
Analysis of atmospheric delays and asymmetric positioning errors in the global positioning system
Materna, Kathryn
2014-01-01
Abstract Errors in modeling atmospheric delays are one of the limiting factors in the accuracy of GPS position determination. In regions with uneven topography, atmospheric delay phenomena can be especially complicated. ...
Verification of Hurricane Irene, Isaac and Sandy's Storm Track, Intensity, and Wind Radii Errors
Miami, University of
Verification of Hurricane Irene, Isaac and Sandy's Storm Track, Intensity ... National Hurricane Center (NHC). Forecasts of the track have steadily improved over the past ... intensity (MWND) and wind radii (WRAD) errors of Hurricane Irene (2011
Effects of systematic phase errors on optimized quantum random-walk search algorithm
Yu-Chao Zhang; Wan-Su Bao; Xiang Wang; Xiang-Qun Fu
2015-01-09
This paper investigates how systematic errors in phase inversions affect the success rate and the number of iterations of the optimized quantum random-walk search algorithm. Through a geometric description of this algorithm, the model of the algorithm with phase errors is established and the relationship between the success rate of the algorithm, the database size, the number of iterations and the phase error is depicted. For a given database size, we give both the maximum success rate of the algorithm and the required number of iterations when the algorithm is in the presence of phase errors. Through analysis and numerical simulations, we show that the optimized quantum random-walk search algorithm is more robust than Grover's algorithm.
The Effect of OCR Errors on Stylistic Text Classification Sterling Stuart Stein
The Effect of OCR Errors on Stylistic Text Classification Sterling Stuart Stein Linguistic retrieval; Taghva and Coombs [1] found that a search engine could be made to work well over OCR documents
Ritz-Volterra Reconstructions and A Posteriori Error Analysis of Finite Element Method for Parabolic
Ewing, Richard E.
conduction in material with memory [10], the compression of poro-viscoelasticity media [11], nuclear reactor … meshing procedures designed to control and minimize the error. Over the last two decades, a posteriori
Efficient error correction for speech systems using constrained re-recognition
Yu, Gregory T
2008-01-01
Efficient error correction of recognition output is a major barrier in the adoption of speech interfaces. This thesis addresses this problem through a novel correction framework and user interface. The system uses constraints ...
On the evaluation of human error probabilities for post-initiating events
Presley, Mary R
2006-01-01
Quantification of human error probabilities (HEPs) for the purpose of human reliability assessment (HRA) is very complex. Because of this complexity, the state of the art includes a variety of HRA models, each with its own ...
Locatelli, R.
A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model ...
V-109: Google Chrome WebKit Type Confusion Error Lets Remote...
Broader source: Energy.gov (indexed) [DOE]
to be executed on the target user's system. A remote user can create specially crafted HTML that, when loaded by the target user, will trigger a type confusion error in WebKit and...
Scher, Aaron David
2005-08-29
In this thesis, two separate research topics are undertaken both in the general area of compact RF/microwave circuit design. The first topic involves characterizing the parasitic effects and error due to unused post-production tuning bars...
Niyogi, Partha
1994-02-01
In this paper, we bound the generalization error of a class of Radial Basis Function networks, for certain well defined function learning tasks, in terms of the number of parameters and number of examples. We show ...
Modifed Minimum Classification Error Learning and Its Application to Neural Networks
Shimodaira, Hiroshi; Rokui, Jun; Nakai, Mitsuru
A novel method to improve the generalization performance of the Minimum Classification Error (MCE) / Generalized Probabilistic Descent (GPD) learning is proposed. The MCE/GPD learning proposed by Juang and Katagiri in 1992 ...
Gilles Lachaud For detecting and correcting the inevitable errors which creep in during
Provence Aix-Marseille I, Université de
digital … by the greatest possible number of discs of the same size without any overlaps. The words of a message
Combined wavelet video coding and error control for internet streaming and multicast
Chu, Tianli
2002-01-01
In the past several years, advances in Internet video streaming have been tremendous. Originally designed without error protection, Receiver-driven layered multicast (RLM) has proved to be a very effective scheme for scalable video multicast. Though...
Willis, W.L.
1980-10-01
The discussion will be restricted to measurements of voltage and current. Also, although the measurements themselves should be as quantitative as possible, the discussion is rather nonquantitative. Emphasis is on types of instruments, how they may be used, and the inherent advantages and limitations of a given technique. A great deal of information can be obtained from good, clean voltage and current data. Power and impedance are obviously inherent if the proper time relationships are preserved. Often an associated, difficult-to-determine, physical event can be evaluated from the V-I data, such as a time-varying load characteristic, or the time of light emission, etc. The lack of active high voltage devices, such as 50-kV operational amplifiers, restricts measurement devices to passive elements, primarily R and C. There are a few more exotic techniques that are still passive in nature. There are several well-developed techniques for voltage measurements. These include: spark gaps; electrostatic meters; capacitive dividers; mixed RC dividers; and the electro-optic effect. Current is measured by either direct measurement of charge flow or by measuring the resulting magnetic field.
Havinga, Paul J.M.
Abstract -- Since high error rates are inevitable in the wireless environment, energy consumption is a key issue for portable wireless network devices such as handheld computers and PDAs; error control should not be assessed by its correction mechanisms only, but the required extra energy consumed by the wireless interface should be incorporated.
Validation of Multiple Tools for Flat Plate Photovoltaic Modeling Against Measured Data
Freeman, J.; Whitmore, J.; Blair, N.; Dobos, A. P.
2014-08-01
This report expands upon a previous work by the same authors, published in the 40th IEEE Photovoltaic Specialists conference. In this validation study, comprehensive analysis is performed on nine photovoltaic systems for which NREL could obtain detailed performance data and specifications, including three utility-scale systems and six commercial scale systems. Multiple photovoltaic performance modeling tools were used to model these nine systems, and the error of each tool was analyzed compared to quality-controlled measured performance data. This study shows that, excluding identified outliers, all tools achieve annual errors within +/-8% and hourly root mean squared errors less than 7% for all systems. It is further shown using SAM that module model and irradiance input choices can change the annual error with respect to measured data by as much as 6.6% for these nine systems, although all combinations examined still fall within an annual error range of +/-8.5%. Additionally, a seasonal variation in monthly error is shown for all tools. Finally, the effects of irradiance data uncertainty and the use of default loss assumptions on annual error are explored, and two approaches to reduce the error inherent in photovoltaic modeling are proposed.
Design consistency and driver error as reflected by driver workload and accident rates
Wooldridge, Mark Douglas
1992-01-01
Design Consistency and Driver Error as Reflected by Driver Workload and Accident Rates. A thesis by Mark Douglas Wooldridge; approved as to style and content by Daniel B. Fambro (Chair of Committee), Raymond A. Krammes (Member), Olga J. Pendleton (Member), and James T. P. Yao (Head of Department). May 1992. Abstract: Design Consistency and Driver Error as Reflected by Driver Workload and Accident Rates (May 1992), Mark Douglas Wooldridge, B.S., Texas A&M University; Chair of Advisory…
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Confirmation of standard error analysis techniques applied to EXAFS using simulations
Booth, Corwin H; Hu, Yung-Jin
2009-12-14
Systematic uncertainties, such as those in calculated backscattering amplitudes, crystal glitches, etc., not only limit the ultimate accuracy of the EXAFS technique, but also affect the covariance matrix representation of real parameter errors in typical fitting routines. Despite major advances in EXAFS analysis and in understanding all potential uncertainties, these methods are not routinely applied by all EXAFS users. Consequently, reported parameter errors are not reliable in many EXAFS studies in the literature. This situation has made many EXAFS practitioners leery of conventional error analysis applied to EXAFS data. However, conventional error analysis, if properly applied, can teach us more about our data, and even about the power and limitations of the EXAFS technique. Here, we describe the proper application of conventional error analysis to r-space fitting to EXAFS data. Using simulations, we demonstrate the veracity of this analysis by, for instance, showing that the number of independent data points from Stern's rule is balanced by the degrees of freedom obtained from a χ² statistical analysis. By applying such analysis to real data, we determine the quantitative effect of systematic errors. In short, this study is intended to remind the EXAFS community about the role of fundamental noise distributions in interpreting our final results.
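The balance the abstract describes, between Stern's rule for independent data points and the fit's degrees of freedom, can be sketched numerically. A minimal sketch, assuming the common conventions N_idp = 2ΔkΔR/π + 2 and inflation of parameter errors by √(reduced χ²) when the fit is poor; the function names and example numbers are ours, not the paper's:

```python
import math

def independent_points(dk, dr):
    """Stern's rule: number of independent data points in an
    EXAFS fit over k-range dk (1/Angstrom) and R-range dr (Angstrom)."""
    return 2.0 * dk * dr / math.pi + 2.0

def scaled_errors(cov_diag, n_idp, n_params, chi2):
    """Inflate covariance-matrix parameter errors by sqrt(reduced chi^2)
    when reduced chi^2 exceeds 1 (a poor fit)."""
    nu = n_idp - n_params              # degrees of freedom
    chi2_red = chi2 / nu
    scale = math.sqrt(chi2_red) if chi2_red > 1.0 else 1.0
    return [math.sqrt(v) * scale for v in cov_diag]

# Hypothetical example: k-range 3-13, R-range 1-3, 5 fit parameters
n_idp = independent_points(10.0, 2.0)
errs = scaled_errors([0.01, 0.0004], n_idp, 5, chi2=29.4)
```

The quadrature inflation is the usual remedy when the fit's residuals exceed the assumed noise level; it does not repair genuinely systematic errors, which is the paper's point.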
Dong Sun, Department of … Science and Engineering, York University, 4700 Keele St., Toronto, Canada M3J 1P3 … with elliptical reference orbits. It can guarantee that both the tracking errors and the synchronization errors … strategy for MSFF. This controller can guarantee the convergence of both the relative position tracking
Processing Quantities with Heavy-Tailed Distribution of Measurement Uncertainty: How
Kreinovich, Vladik
Processing Quantities with Heavy-Tailed Distribution of Measurement Uncertainty: How to Estimate … The distribution of measurement errors is sometimes heavy-tailed, when very large values have a reasonable probability, e.g., in the amount of oil in an oil well, etc. In such situations, in which we cannot measure y directly, we can often
Taming Wild Behavior: The Input Observer for Obtaining Text Entry and Mouse Pointing Measures from
Wobbrock, Jacob O.
The Input Observer can run quietly in the background of users' computers and measure their text entry and mouse pointing; it segments text entry and mouse pointing input streams into "trials." We are the first to measure errors
Software Productivity Measurement Using Multiple Size Measures
Bae, Doo-Hwan
Contents: Introduction; Background; Related work; Motivation; Productivity measurement (measurement model, productivity measure construction, productivity analysis); Conclusion; Discussion. (Software Engineering Lab, KAIST)
Wilinska, Malgorzata E.; Hovorka, Roman
2014-10-07
were reversed. The Yale protocol was most balanced, with 50% of time spent in a tight glucose range and a low risk of hypoglycemia. Additional analyses contrasted CGM imprecision and bias. In agreement with Boyd and Bruns [20], we observed a … Observations in an appropriately designed clinical study are desirable but logistically and ethically challenging. Accuracy of glucose meters in the ICU has been studied extensively [10-13], although accuracy guidelines and standards are being debated…
Ulrike Herzog
2012-09-25
We study an optimum measurement for quantum state discrimination, which maximizes the probability of correct results when the probability of inconclusive results is fixed at a given value. The measurement describes minimum-error discrimination if this value is zero, while under certain conditions it corresponds to optimized maximum-confidence discrimination, or to optimum unambiguous discrimination, respectively, when the fixed value reaches a definite minimum. Using operator conditions that determine the optimum measurement, we derive analytical solutions for the discrimination of two mixed qubit states, including the case of two pure states occurring with arbitrary prior probabilities, and for the discrimination of N symmetric states, both pure and mixed. We also consider a case where the given density operators resolve the identity operator, and we specify the optimality conditions for the case of partially symmetric states. Moreover, we show that from the complete solution for arbitrary values of the fixed rate of inconclusive results one can always obtain the optimum measurement in another strategy where the error rate is fixed, and vice versa.
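The two-pure-state case the abstract solves analytically reduces, at zero inconclusive rate, to the familiar Helstrom minimum-error bound. A minimal numerical sketch of that bound; the function name and example priors are ours, not the paper's:

```python
import math

def helstrom_error(p1, p2, overlap):
    """Minimum-error probability (Helstrom bound) for discriminating
    two pure states with prior probabilities p1, p2 (p1 + p2 = 1)
    and overlap = |<psi1|psi2>|."""
    return 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * p1 * p2 * overlap ** 2))

# Orthogonal states are perfectly distinguishable:
e0 = helstrom_error(0.5, 0.5, 0.0)   # -> 0.0
# Identical states: best strategy is to guess the more likely one:
e1 = helstrom_error(0.3, 0.7, 1.0)   # -> 0.3
```

The paper's fixed-inconclusive-rate measurements interpolate between this minimum-error limit and unambiguous discrimination.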
Huang, Weidong
2011-01-01
This paper presents a general equation to calculate the standard deviation of the reflected-ray error from the optical error through geometrical optics, applies the equation to eight kinds of concentrated solar reflector, and provides typical results. The results indicate that the slope errors in two directions are transferred to any one direction of the focused ray when the incidence angle is greater than 0 for solar trough and heliostat reflectors; for point-focus Fresnel lenses, point-focus parabolic glass mirrors, and line-focus parabolic glass mirrors, the error transfer coefficient from the optical error to the focused ray increases as the rim angle increases; for the TIR-R concentrator, it decreases; and for a glass heliostat, it depends on the incidence angle and the azimuth of the reflecting point. Keywords: optic error, standard deviation, refractive ray error, concentrated solar collector
Measurements of electron temperature in the ionosphere using a low-frequency impedance meter
Aksenov, V.I.; Modestov, A.P.; Sokolov, L.Yu.
1987-07-01
Two ways of measuring the electron temperature in the ionosphere are proposed, based on measurements in the low-frequency range of the impedance of an electrical whip antenna mounted on an earth satellite. The errors are analyzed and the sources of possible systematic errors and methods of allowing for them are discussed. Some results of measurements of electron temperature on the Interkosmos-Kopernik 500 satellite, made in the period of the autumnal equinox at temperate latitudes during a solar activity minimum, are given. The data obtained are in fully satisfactory agreement with the results of measurements of electron temperature by other methods known from the literature.
TIM3 Front-Panel 1. VE: Flash for VME bus access error OR On for Geog-Addr error (i.e. wrong slot).
University College London
-Busy (note: in Stand-Alone Mode TIM is normally busy). 4. TB: shows status of TIM-BusyOut. All LEDs (apart from power supplies) have a 60 ms pulse stretcher for better visibility. [Front-panel LED legend: -5, -12, OR, VE, SA, SC, TB, CA, BR, SP, +5, +3; Error; -5V/-12V power on; Stand-Alone Mode enabled; Stand-Alone clock present; TIM BusyOut; ROD Busy's (1
Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL
2015-01-01
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
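The error correction applied to the encrypted portion is described as Hamming or convolutional coding. A self-contained Hamming(7,4) sketch illustrates the single-bit-correction idea; this is the generic textbook code, not the paper's exact implementation:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a Hamming(7,4) codeword
    [p1, p2, d1, p3, d2, d3, d4]; corrects any single bit error."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Compute the syndrome, flip the erroneous bit (if any),
    and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[5] ^= 1                          # channel flips one bit
assert hamming74_decode(code) == word
```

Coding only the small encrypted portion, as the paper proposes, keeps the added redundancy (here 3 parity bits per 4 data bits) off the bulk of the frame.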
WIND ATLAS FOR EGYPT: MEASUREMENTS, MICRO-AND MESOSCALE MODELLING
Niels G. Mortensen, Jens … A wind atlas based on long-term reanalysis data and a mesoscale model (KAMM). The mean absolute error comparing … The observations have been
S. D. Bloom; D. A. Dale; R. Cool; K. Dupczak; C. Miller; A. Haugsjaa; C. Peters; M. Tornikoski; P. Wallace; M. Pierce
2004-04-02
We present the most recent results of an optical survey of the position error contours ("error boxes") of unidentified high energy gamma-ray sources.
Thorough approach to measurement uncertainty analysis applied to immersed heat exchanger testing
Farrington, R.B.; Wells, C.V.
1986-04-01
This paper discusses the value of an uncertainty analysis, discusses how to determine measurement uncertainty, and then details the sources of error in instrument calibration, data acquisition, and data reduction for a particular experiment. Methods are discussed to determine both the systematic (or bias) error in an experiment as well as to determine the random (or precision) error in the experiment. The detailed analysis is applied to two sets of conditions in measuring the effectiveness of an immersed coil heat exchanger. It shows the value of such analysis as well as an approach to reduce overall measurement uncertainty and to improve the experiment. This paper outlines how to perform an uncertainty analysis and then provides a detailed example of how to apply the methods discussed in the paper. The authors hope this paper will encourage researchers and others to become more concerned with their measurement processes and to report measurement uncertainty with all of their test results.
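The systematic (bias) and random (precision) components the paper distinguishes are conventionally root-sum-squared within each class and then combined in quadrature. A minimal sketch in that ANSI/ASME style; the function name, the coverage factor t ≈ 2, and the example numbers are ours, not the paper's:

```python
import math

def combined_uncertainty(bias_limits, precision_indices, t=2.0):
    """Root-sum-square the elemental bias limits and precision indices
    separately, then combine the two classes in quadrature:
    U = sqrt(B^2 + (t*S)^2)."""
    B = math.sqrt(sum(b * b for b in bias_limits))
    S = math.sqrt(sum(s * s for s in precision_indices))
    return math.sqrt(B ** 2 + (t * S) ** 2)

# Hypothetical example: calibration and data-acquisition bias limits
# of 0.3 and 0.4, a precision index of 0.25 from repeated readings
u = combined_uncertainty([0.3, 0.4], [0.25])
```

Tabulating the elemental terms this way also shows which source dominates, which is the practical payoff of the analysis the paper advocates.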
Accurate shear measurement with faint sources
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.
Chakrabarty, Krishnendu
IEEE Transactions on Instrumentation and Measurement, Vol. 52, No. 5, October 2003, p. 1353. … for which compact test sets are available. Index terms: embedded cores, error propagation, nonmodeled faults
Hodge, B. M.; Lew, D.; Milligan, M.
2013-01-01
Load forecasting in the day-ahead timescale is a critical aspect of power system operations that is used in the unit commitment process. It is also an important factor in renewable energy integration studies, where the combination of load and wind or solar forecasting techniques creates the net load uncertainty that must be managed by the economic dispatch process or with suitable reserves. An understanding of the load forecasting errors that may be expected in this process can lead to better decisions about the amount of reserves necessary to compensate for errors. In this work, we performed a statistical analysis of the day-ahead (and two-day-ahead) load forecasting errors observed in two independent system operators over a one-year period. Comparisons were made with the normal distribution commonly assumed in the power system operation simulations used for renewable power integration studies. Further analysis identified time periods when the load is more likely to be under- or over-forecast.
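One simple check of forecast errors against the assumed normal distribution is to compare sample skewness and excess kurtosis to the normal values of zero. An illustrative sketch with a made-up symmetric, heavy-tailed error sample, not the ISO data analyzed in the paper:

```python
def moments(errors):
    """Sample skewness and excess kurtosis of a list of forecast
    errors; a normal distribution has 0 for both."""
    n = len(errors)
    mean = sum(errors) / n
    m2 = sum((e - mean) ** 2 for e in errors) / n
    m3 = sum((e - mean) ** 3 for e in errors) / n
    m4 = sum((e - mean) ** 4 for e in errors) / n
    skew = m3 / m2 ** 1.5
    ex_kurt = m4 / m2 ** 2 - 3.0
    return skew, ex_kurt

# Symmetric but heavy-tailed sample: skewness ~ 0, excess kurtosis > 0,
# i.e. large errors occur more often than a normal fit would predict
skew, ex_kurt = moments([-9, -1, -1, -1, 0, 1, 1, 1, 9])
```

Positive excess kurtosis in such a test would mean reserves sized from a normal assumption underestimate the frequency of large forecast misses.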
Error correcting code with chip kill capability and power saving enhancement
Gara, Alan G. (Mount Kisco, NY); Chen, Dong (Croton On Husdon, NY); Coteus, Paul W. (Yorktown Heights, NY); Flynn, William T. (Rochester, MN); Marcella, James A. (Rochester, MN); Takken, Todd (Brewster, NY); Trager, Barry M. (Yorktown Heights, NY); Winograd, Shmuel (Scarsdale, NY)
2011-08-30
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
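The patent computes Reed-Solomon-style syndromes over data symbols; the erasure-recovery idea behind chip kill can be illustrated with a much simpler single XOR parity chip (a deliberate simplification of ours, not the patented scheme):

```python
def parity_chip(chips):
    """Bytewise XOR across a list of equal-length chips."""
    out = []
    for column in zip(*chips):
        x = 0
        for b in column:
            x ^= b                    # XOR corresponding bytes
        out.append(x)
    return out

# Three data chips plus one parity chip
data = [[0x12, 0x34], [0xAB, 0xCD], [0x55, 0xAA]]
parity = parity_chip(data)

# Chip 1 fails entirely; rebuild it from the survivors plus parity,
# since XOR of all remaining chips recovers the missing one
rebuilt = parity_chip([data[0], data[2], parity])
assert rebuilt == data[1]
```

A real chip-kill code additionally locates the failed chip from non-zero syndromes rather than being told which chip died, which is what the patent's discriminator expressions do.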
HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD
Harold S. Blackman; David I. Gertman; Ronald L. Boring
2008-09-01
This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks, and incorporating performance shaping factor multipliers upon those nominal error rates.
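The SPAR-H calculation starts from pre-defined nominal error rates and multiplies in performance shaping factor (PSF) multipliers. A minimal sketch assuming our reading of the published method (nominal rate 1E-2 for diagnosis tasks, and an adjustment factor when three or more PSFs are negative, to keep the result a valid probability); treat this as illustrative, not a normative implementation:

```python
def spar_h_hep(nominal, psfs):
    """Human error probability = nominal rate times the composite PSF
    multiplier, with the SPAR-H adjustment applied when 3 or more
    PSFs are negative (multiplier > 1)."""
    composite = 1.0
    for m in psfs:
        composite *= m
    negative = sum(1 for m in psfs if m > 1.0)
    if negative >= 3:
        # HEP = NHEP * PSF / (NHEP * (PSF - 1) + 1), bounded below 1
        return nominal * composite / (nominal * (composite - 1.0) + 1.0)
    return min(1.0, nominal * composite)

# Diagnosis task (nominal 1E-2) under high stress (x2) and
# poor ergonomics (x10):
hep = spar_h_hep(0.01, [2.0, 10.0])   # -> 0.2
```

The adjustment branch matters because a naive product of several large multipliers can exceed 1, which is not a probability.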
Stability and error analysis of nodal expansion method for convection-diffusion equation
Deng, Z.; Rizwan-Uddin; Li, F.; Sun, Y.
2012-07-01
The development, stability, and error analyses of the nodal expansion method (NEM) for the one-dimensional steady-state convection-diffusion equation are presented. Following the traditional procedure to develop the NEM, the discrete formulation of the convection-diffusion equation, which is similar to the standard finite difference scheme, is derived. The method of discrete perturbation analysis is applied to this discrete form to study the stability of the NEM. The scheme based on the NEM is found to be stable for local Peclet numbers less than 4.644. A maximum principle is proved for the NEM scheme, followed by an error analysis carried out by applying the maximum principle together with a carefully constructed comparison function. The scheme for the convection-diffusion equation is of second order. Numerical experiments are carried out, and the results agree with the conclusions of the stability and error analyses. (authors)
Error-rejecting quantum computing with solid state spins assisted by low-Q optical microcavities
Tao Li; Fu-Guo Deng
2015-10-31
We present an efficient proposal for error-rejecting quantum computing with quantum dots (QDs) embedded in single-sided optical microcavities, based on the interface between circularly polarized photons and the QDs. A unity-fidelity quantum entangling gate (EG) can be implemented with a detectable error that leads to a recycling EG procedure, which further improves the efficiency of our proposal along with its robustness to errors in the imperfect input-output process. We also discuss the performance of our proposal for an EG on two solid-state spins with currently achieved experimental parameters, showing that it is feasible with current technology. It provides a promising building block for solid-state quantum computing and quantum networks.
Fade-resistant forward error correction method for free-space optical communications systems
Johnson, Gary W. (Livermore, CA); Dowla, Farid U. (Castro Valley, CA); Ruggiero, Anthony J. (Livermore, CA)
2007-10-02
Free-space optical (FSO) laser communication systems offer exceptionally wide-bandwidth, secure connections between platforms that cannot otherwise be connected via physical means such as optical fiber or cable. However, FSO links are subject to strong channel fading due to atmospheric turbulence and beam pointing errors, limiting practical performance and reliability. We have developed a fade-tolerant architecture based on forward error correcting codes (FECs) combined with delayed, redundant sub-channels. This redundancy is made feasible through dense wavelength division multiplexing (WDM) and/or high-order M-ary modulation. Experiments and simulations show that error-free communication is feasible even when faced with fades that are tens of milliseconds long. We describe plans for practical implementation of a complete system operating at 2.5 Gbps.
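The delayed, redundant sub-channel idea can be modeled with a toy erasure channel: the same coded stream is sent twice, offset in time, so a fade shorter than the delay never wipes both copies of a symbol. A sketch under that simplified model (the FEC layer itself is omitted, and all names here are ours):

```python
def interleave_with_delay(symbols, delay):
    """Send the stream on sub-channel A and, delayed, on sub-channel B
    (e.g. a second WDM wavelength)."""
    a = list(symbols)
    b = [None] * delay + list(symbols)   # delayed redundant copy
    return a, b

def fade(channel, start, length):
    """Model an atmospheric fade as an erasure burst."""
    ch = list(channel)
    for i in range(start, min(start + length, len(ch))):
        ch[i] = None
    return ch

def combine(a, b, delay):
    """Recover each symbol from whichever sub-channel survived."""
    return [a[i] if a[i] is not None else b[i + delay]
            for i in range(len(a))]

msg = list(range(12))
a, b = interleave_with_delay(msg, delay=4)
recovered = combine(fade(a, 2, 3), b, delay=4)
assert recovered == msg
```

In the paper's architecture the delay is chosen longer than the tens-of-milliseconds fades, and the FEC then cleans up whatever residual symbol errors remain after combining.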
Aguilar-Arevalo, A A; Bazarko, A O; Brice, S J; Brown, B C; Bugel, L; Cao, J; Coney, L; Conrad, J M; Cox, D C; Curioni, A; Djurcic, Z; Finley, D A; Fleming, B T; Ford, R; Garcia, F G; Garvey, G T; Gonzales, J; Grange, J; Green, C; Green, J A; Hart, T L; Hawker, E; Imlay, R; Johnson, R A; Karagiorgi, G; Kasper, P; Katori, T; Kobilarcik, T; Kourbanis, I; Koutsoliotas, S; Laird, E M; Linden, S K; Link, J M; Liu, Y; Louis, W C; Mahn, K B M; Marsh, W; Mauger, C; McGary, V T; McGregor, G; Metcalf, W; Meyers, P D; Mills, F; Mills, G B; Monroe, J; Moore, C D; Mousseau, J; Nelson, R H; Nienaber, P; Nowak, J A; Osmanov, B; Ouedraogo, S; Patterson, R B; Pavlovic, Z; Perevalov, D; Polly, C C; Prebys, E; Raaf, J L; Ray, H; Roe, B P; Russell, A D; Sandberg, V; Schirato, R; Schmitz, D; Shaevitz, M H; Shoemaker, F C; Smith, D; Soderberg, M; Sorel, M; Spentzouris, P; Spitz, J; Stancu, I; Stefanski, R J; Sung, M; Tanaka, H A; Tayloe, R; Tzanov, M; Van de Water, R G; Wascko, M O; White, D H; Wilking, M J; Yang, H J; Zeller, G P; Zimmerman, E D
2009-01-01
MiniBooNE reports the first absolute cross sections for neutral current single \pi^0 production on CH_2 induced by neutrino and antineutrino interactions, measured from the largest sets of NC \pi^0 events collected to date. The principal result consists of differential cross sections measured as functions of \pi^0 momentum and \pi^0 angle, averaged over the neutrino flux at MiniBooNE. We find total cross sections of (4.76+/-0.05_{stat}+/-0.40_{sys})*10^{-40} cm^2/nucleon at a mean neutrino energy of 808 MeV and (1.48+/-0.05_{stat}+/-0.14_{sys})*10^{-40} cm^2/nucleon at a mean antineutrino energy of 664 MeV.
Estimating rock properties in two phase petroleum reservoirs: an error analysis
Paul, Anthony Ian
1983-01-01
by the same amount from the true porosity value. In Fig. 5, the objective function is slightly better represented by the series approximation in 1/4. A Monte Carlo study was performed using the same history matching conditions as for the permeability … estimates were used in a Monte Carlo study to calculate the predicted well values after the history matching period. The errors in the rock property estimates increase rapidly with an increasing number of unknowns. In many cases, even when large errors…
Sequence decoding in the presence of timing errors for NRZ signaling
Kinard, Barbara Kay
1990-01-01
Sequence Decoding in the Presence of Timing Errors for NRZ Signaling. A thesis by Barbara Kay Kinard, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 1990. Major subject: Electrical Engineering. Approved as to style and content by: Costas N. Georghiades (Chair of Committee)…
Low delay and area efficient soft error correction in arbitration logic
Sugawara, Yutaka
2013-09-10
There is provided an arbitration logic device for controlling access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from the plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is a soft error in the winner requestor's information.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
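The traditional hierarchical-surplus strategy this abstract compares against can be illustrated with a deliberately simplified 1-D sketch (real sparse grids are multi-dimensional; the test function, tolerance, and refinement loop here are invented for illustration): refine wherever the surplus, the gap between the function and the current interpolant at a candidate point, exceeds a tolerance.

```python
import math

def f(x):
    # invented smooth test function with a localized feature
    return math.exp(-8.0 * (x - 0.3) ** 2)

def adaptive_interpolate(f, tol=1e-3, max_pts=200):
    # Refine any interval whose hierarchical surplus (mismatch between f and
    # the current piecewise-linear interpolant at the interval midpoint)
    # exceeds tol.
    xs = [0.0, 0.5, 1.0]
    ys = [f(x) for x in xs]
    changed = True
    while changed and len(xs) < max_pts:
        changed = False
        i = 0
        while i < len(xs) - 1:
            mid = 0.5 * (xs[i] + xs[i + 1])
            surplus = f(mid) - 0.5 * (ys[i] + ys[i + 1])  # hierarchical surplus
            if abs(surplus) > tol:
                xs.insert(i + 1, mid)
                ys.insert(i + 1, f(mid))
                changed = True
                i += 2          # skip the two freshly created sub-intervals
            else:
                i += 1
    return xs, ys

def interp(xs, ys, x):
    # Evaluate the piecewise-linear interpolant.
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1.0 - t) * ys[i] + t * ys[i + 1]
    raise ValueError("x outside range")

xs, ys = adaptive_interpolate(f)
err = max(abs(interp(xs, ys, k / 1000.0) - f(k / 1000.0)) for k in range(1001))
print(len(xs), err)
```

Adjoint-enhanced strategies like the paper's replace the raw surplus with a goal-oriented error indicator, which is what drives the reported accuracy gains.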
Norton, David Jerry
1963-01-01
[Figure residue: Fig. 12, Efficiency vs. Compression Ratio, with power-load axis labels.] Fig. 13. Schematic drawing of the mechanical system. The equation of motion for this familiar system is MẌ + C(Ẋ − Ẏ) + K(X − Y) = F...
COS FUV01 Detector Errors and Recommended Actions Date: July 30, 2001
Colorado at Boulder, University of
COS FUV01 Detector Errors and Recommended Actions Date: July 30, 2001 Document Number: COS-11-0032 Revision: Initial Release Contract No.: NAS5-98043 CDRL No.: SE-05 Prepared By: K. Brownsberger, COS Sr. Software Scientist, CU/CASA Date Reviewed By: J. McPhate, COS FUV Detector Scientist, UCB Date Reviewed By
Locally Testing Direct Products in the Low Error Range Weizmann Institute
Dinur, Irit
acceptance probability of the test. We show that even if the test passes with small probability, ε > 0 … Locally Testing Direct Products in the Low Error Range. Irit Dinur, Weizmann Institute, Dept. … Given a function f : X → Σ, its k-wise direct product is the function F = f^k : X^k → Σ^k defined by F(x1, …, xk) = (f(x1), …, f(xk))...
GRAVITY ERROR COMPENSATION USING SECOND-ORDER GAUSS-MARKOV PROCESSES
Born, George
AAS 11-502 GRAVITY ERROR COMPENSATION USING SECOND-ORDER GAUSS-MARKOV PROCESSES. Jason M. Leonard … the use of a second-order Gauss-Markov process to compensate for higher-order spherical harmonic gravity errors … an improvement in POD through the use of a second-order Gauss-Markov process (GMP2) for modeling J3 gravity
A. Frommer; K. Kahl; Th. Lippert; H. Rittich
2012-12-03
The Lanczos process constructs a sequence of orthonormal vectors v_m spanning a nested sequence of Krylov subspaces generated by a hermitian matrix A and some starting vector b. In this paper we show how to cheaply recover a secondary Lanczos process starting at an arbitrary Lanczos vector v_m. This secondary process is then used to efficiently obtain computable error estimates and error bounds for the Lanczos approximations to the action of a rational matrix function on a vector. This includes, as a special case, the Lanczos approximation to the solution of a linear system Ax = b. Our approach uses the relation between the Lanczos process and quadrature as developed by Golub and Meurant. It is different from methods known so far because of its use of the secondary Lanczos process. With our approach, it is now in particular possible to efficiently obtain {\\em upper bounds} for the error in the {\\em 2-norm}, provided a lower bound on the smallest eigenvalue of $A$ is known. This holds in particular for a large class of rational matrix functions including best rational approximations to the inverse square root and the sign function. We compare our approach to other existing error estimates and bounds known from the literature and include results of several numerical experiments.
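A much simpler cousin of this idea (not the authors' secondary-Lanczos construction) is the Golub-Meurant quadrature view of CG for the special case Ax = b: the A-norm error satisfies ||x - x_k||_A^2 = sum_{i>=k} gamma_i ||r_i||^2, and truncating the sum after a small delay d gives a cheap computable lower bound. The matrix, right-hand side, and delay below are arbitrary test choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T   # SPD test matrix, cond = 100
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)

def cg_error_estimate(A, b, iters=30, k=20, delay=4):
    """Run CG, then lower-bound the A-norm error of iterate x_k via the
    truncated Golub-Meurant sum  sum_{i=k}^{k+delay-1} gamma_i ||r_i||^2."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    terms, iterates = [], []
    for _ in range(iters):
        Ap = A @ p
        gamma = (r @ r) / (p @ Ap)
        terms.append(gamma * (r @ r))          # gamma_i * ||r_i||^2
        x = x + gamma * p
        iterates.append(x.copy())
        r_new = r - gamma * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    x_k = iterates[k - 1]                      # iterate after k CG steps
    est = np.sqrt(sum(terms[k:k + delay]))     # truncated-sum lower bound
    e = x_true - x_k
    return est, np.sqrt(e @ (A @ e))

est, true_err = cg_error_estimate(A, b)
print(est, true_err)
```

In exact arithmetic the truncated sum never exceeds the true A-norm error; the paper's contribution is obtaining upper bounds as well, via the secondary Lanczos process and a lower bound on the smallest eigenvalue.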
Random vs. Deterministic Deployment of Sensors in the Presence of Failures and Placement Errors
Kumar, Santosh
Random vs. Deterministic Deployment of Sensors in the Presence of Failures and Placement Errors, and evaluation of various algorithms (e.g., sleep-wakeup), it has often been considered too expensive as compared to optimal deterministic deployment patterns when deploying sensors in real-life. Roughly speaking, a factor
Discretization error estimation and exact solution generation using the method of nearby problems.
Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.
2011-10-01
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
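The Richardson-extrapolation estimator that MNP is compared against can be sketched in a few lines: two systematically refined grids plus the formal order of accuracy p yield an estimate of the discretization error on the fine grid. The trapezoid-rule integrand below is an arbitrary test case, not from the paper:

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoid rule with n uniform intervals (formal order p = 2).
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

f = math.sin
exact = 1.0 - math.cos(1.0)            # integral of sin over [0, 1]
coarse = trapezoid(f, 0.0, 1.0, 64)
fine = trapezoid(f, 0.0, 1.0, 128)     # refinement ratio r = 2
p, r = 2, 2
est_error = (coarse - fine) / (r ** p - 1)   # Richardson estimate of fine-grid error
true_error = fine - exact
print(est_error, true_error)
```

The estimate requires the second, systematically refined solve; MNP's advantage, per the abstract, is needing only one additional solution on the same grid.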
Vibrotactile Feedback in Steering Wheel Reduces Navigation Errors during GPS-Guided Car Driving
Basdogan, Cagatay
Vibrotactile Feedback in Steering Wheel Reduces Navigation Errors during GPS-Guided Car Driving feedback displayed through the steering wheel of a car can reduce the perceptual and cognitive load with the GPS-based voice commands. KEYWORDS: vibrotactile, haptics, car navigation systems, GPS, steering wheel
The Influence of Source and Cost of Information Access on Correct and Errorful Interactive Behavior
Gray, Wayne
USA +1 703 993 1357, gray@gmu.edu. ABSTRACT: Routine interactive behavior reveals patterns of interaction … The Influence of Source and Cost of Information Access on Correct and Errorful Interactive Behavior. Wayne D. Gray & Wai-Tat Fu, Human Factors & Applied Cognition, George Mason University, Fairfax, VA 22030
Back-and-forth Operation of State Observers and Norm Estimation of Estimation Error
Back-and-forth Operation of State Observers and Norm Estimation of Estimation Error. Hyungbo Shim … with the plant, this paper proposes a state estimation algorithm that executes Luenberger observers in a back-and-forth manner … in the past have employed time-varying gains to overcome this problem [1], where the basic idea is to obtain
Practical Error Estimates for Reynolds' Lubrication Approximation and its Higher Order Corrections
Jon Wilkening
2010-06-09
Reynolds' lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds' equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio $\epsilon$ of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, $x$-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function $h(x)$ describing the geometry, or depend on $h$ and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order $2k$, the error is $O(\epsilon^{2k+2})$ and $h$ enters into the error bound only through its first and third inverse moments $\int_0^1 h(x)^{-m} dx$, $m=1,3$ and via the max norms $\big\|\frac{1}{\ell!} h^{\ell-1} \partial_x^\ell h\big\|_\infty$, $1\le\ell\le 2k+2$. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when $h$ is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.
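The geometric quantities entering that bound are cheap to evaluate for a concrete profile. A sketch for one invented x-periodic gap h(x) = 1 + 0.5 sin(2*pi*x), with the truncation order 2k = 2 and the sampling resolution both illustrative assumptions:

```python
import math

def h(x):
    # invented smooth x-periodic gap profile, bounded away from zero (min 0.5)
    return 1.0 + 0.5 * math.sin(2 * math.pi * x)

def dh(x, l):
    # l-th derivative of h, analytic for this particular profile
    return 0.5 * (2 * math.pi) ** l * math.sin(2 * math.pi * x + l * math.pi / 2)

N = 2000
pts = [i / N for i in range(N)]

# Inverse moments  int_0^1 h(x)^{-m} dx  for m = 1, 3 (periodic trapezoid rule,
# spectrally accurate for smooth periodic integrands).
inv_moments = {m: sum(h(x) ** -m for x in pts) / N for m in (1, 3)}

# Max norms  || (1/l!) h^{l-1} d^l h/dx^l ||_inf  for l = 1 .. 2k+2 with 2k = 2.
norms = {l: max(abs(h(x) ** (l - 1) * dh(x, l)) / math.factorial(l) for x in pts)
         for l in range(1, 5)}
print(inv_moments, norms)
```

For this profile the inverse moments have closed forms (1/sqrt(0.75) and 1.125/0.75^2.5), which the sampled values reproduce; in the paper these few scalars are all that the geometry contributes to the error bound.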
Low Target Prevalence Is a Stubborn Source of Errors in Visual Search Tasks
Low Target Prevalence Is a Stubborn Source of Errors in Visual Search Tasks. Jeremy M. Wolfe … are much higher at low target prevalence (1%–2%) than at high prevalence (50%). Unfortunately, low … periods of low prevalence with no feedback. Keywords: attention, visual search, airport security, low
American Journal of Botany 88(6): 10961102. 2001. HABITAT-RELATED ERROR IN ESTIMATING
Wilf, Peter
for this habitat variation to introduce error into temperature reconstructions, based on field data from a modern proportion of liana species with toothed leaves in lakeside and riverside samples appears to be responsible forests between the proportion of woody dicotyledonous spe- cies with entire-margined leaves in a flora
Leaky LMS Algorithm: Convergence of tap-weight error modes dependent on
Santhanam, Balu
Leaky LMS Algorithm: convergence of tap-weight error modes dependent … Stability and convergence time issues of concern for ill-conditioned inputs … Block LMS Algorithm: uses type-I polyphase components of the input u[n]: Block
Validating SystemLevel Error Recovery for Spacecraft Robyn R. Lutz \\Lambda
Lutz, Robyn R.
executions of the error recovery software with the software that controls the science and engineering events these intercommand constraints. A failure to do so can jeopardize the collection of scientific data, a spacecraft the ground and stored in the spacecraft's temporary memory until the time comes for each command
Iterative Dense Correspondence Correction Through Bundle Adjustment Feedback-Based Error Detection
Hess-Flores, M A; Duchaineau, M A; Goldman, M J; Joy, K I
2009-11-23
A novel method to detect and correct inaccuracies in a set of unconstrained dense correspondences between two images is presented. Starting with a robust, general-purpose dense correspondence algorithm, an initial pose estimate and dense 3D scene reconstruction are obtained and bundle-adjusted. Reprojection errors are then computed for each correspondence pair, which is used as a metric to distinguish high and low-error correspondences. An affine neighborhood-based coarse-to-fine iterative search algorithm is then applied only on the high-error correspondences to correct their positions. Such an error detection and correction mechanism is novel for unconstrained dense correspondences, for example not obtained through epipolar geometry-based guided matching. Results indicate that correspondences in regions with issues such as occlusions, repetitive patterns and moving objects can be identified and corrected, such that a more accurate set of dense correspondences results from the feedback-based process, as proven by more accurate pose and structure estimates.
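The detection half of such a pipeline reduces to a small amount of geometry: project the reconstructed 3-D points, measure the pixel distance to the observed correspondences, and threshold. Everything below (intrinsics, points, noise levels, the 3-pixel cutoff) is synthetic; the paper's correspondences come from real image pairs and bundle adjustment:

```python
import numpy as np

rng = np.random.default_rng(1)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])   # assumed intrinsics
pts3d = rng.uniform([-1, -1, 4], [1, 1, 6], size=(100, 3))    # points in front of camera

def project(K, pts):
    # Pinhole projection of 3-D points (camera frame) to pixel coordinates.
    uv = (K @ pts.T).T
    return uv[:, :2] / uv[:, 2:3]

obs = project(K, pts3d)
obs[:10] += rng.normal(0, 20.0, size=(10, 2))   # corrupt 10 correspondences
obs[10:] += rng.normal(0, 0.3, size=(90, 2))    # small noise on the rest

# Per-correspondence reprojection error, then flag outliers for re-search.
reproj_err = np.linalg.norm(project(K, pts3d) - obs, axis=1)
high_error = reproj_err > 3.0                   # pixel threshold (assumed)
print(high_error[:10].sum(), high_error[10:].sum())
```

Only the flagged correspondences would then enter the paper's coarse-to-fine correction search, which keeps the feedback loop cheap.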
Heald, Colette L.
Quantifying the impact of model errors on top-down estimates of carbon monoxide emissions … use of inverse modeling to better quantify regional surface emissions of carbon monoxide (CO), which … to or larger than the combustion source … optimizing the CO from NMVOC emissions on larger spatial scales than
Goal-Oriented A Posteriori Error Estimation for Multiple Target Functionals
Hartmann, Ralf
Goal-Oriented A Posteriori Error Estimation for Multiple Target Functionals. Ralf Hartmann, D-69120 Heidelberg, Germany, Ralf.Hartmann@iwr.uni-heidelberg.de … Department of Mathematics … Hartmann and Paul Houston … cannot be solved in closed form but needs to be approximated numerically
Goal-Oriented A Posteriori Error Estimation for Compressible Fluid Flows
Hartmann, Ralf
Goal-Oriented A Posteriori Error Estimation for Compressible Fluid Flows. Ralf Hartmann and Paul … D-69120 Heidelberg, Germany. e-mail: Ralf.Hartmann@iwr.uni-heidelberg.de … Department of Mathematics, Heidelberg … Paul Houston acknowledges the financial support of the EPSRC (GR/N24230).
Liu, Hongyu
IN A CHEMICAL TRANSPORT MODEL Abstract. We propose a new methodology to characterize errors in chemical forecasts from a global tropospheric chemical transport model I. Bey Swiss Federal Institute in the representation of transport processes in chemical transport models. We con- strain the evaluation of a global
ERRORS IN VIKING LANDER ATMOSPHERIC PROFILES DISCOVERED USING MOLA TOPOGRAPHY. Paul Withers1
Withers, Paul
ERRORS IN VIKING LANDER ATMOSPHERIC PROFILES DISCOVERED USING MOLA TOPOGRAPHY. Paul Withers1 , R. D above the spatially-varying martian topography, were used to constrain the reconstructed trajectory of martian topography pro- vided by the laser altimeter (MOLA) aboard the Mars Global Surveyor spacecraft
Approximations for Bit Error Probabilities in SSMA Communication Systems Using Spreading
Keller, Gerhard
Approximations for Bit Error Probabilities in SSMA Communication Systems Using Spreading Sequences … @mi.uni-erlangen.de. Abstract: In previous research, we considered SSMA (spread spectrum multiple access) communication systems … of spread spectrum multiple access (SSMA) communication systems, the standard Gaussian approximation (SGA
DRAM Errors in the Wild: A Large-Scale Field Study Bianca Schroeder
Toronto, University of
University of Toronto, Toronto, Canada, bianca@cs.toronto.edu; Eduardo Pinheiro, Google Inc., Mountain View, CA; Wolf-Dietrich Weber, Google Inc., Mountain View, CA. ABSTRACT: Errors in dynamic random access memory (DRAM) …
Renaut, Rosemary
describe the performance of a solid oxide fuel cell requires the solution of an inverse problem. Two at the electrodeelectrolyte interfaces of solid oxide fuel cells (SOFC) is investigated physically using ElectrochemicalStability and error analysis of the polarization estimation inverse problem for solid oxide fuel
Allocating data for broadcasting over wireless channels subject to transmission errors
Pinotti, Maria Cristina
quotes, weather infos, traffic news, where data are continuously broadcast to clients that may desire them at any instant of time. In this scenario, a server at the base-station repeatedly transmits data … Allocating data for broadcasting over wireless channels subject to transmission errors. Paolo
Bayesian Design for the Normal Linear Model with Unknown Error Variance
of specific design criteria to specific prior assumptions on the variance has been demonstrated, but a general … (…, 1985; Pilz, 1991) defined Bayesian optimal design criteria as functions φ(X) of the posterior … Bayesian Design for the Normal Linear Model with Unknown Error Variance. Isabella Verdinelli
Patel, Aniruddh D.
Effective Cue Utilization Reduces Memory Errors in Older Adults Ayanna K. Thomas and John B utilization at retrieval. Retention interval and instructions at retrieval were manipulated within at retrieval (i.e., the cue utilization deficit hypothesis). The present study sought to differentiate between
Error Bounds from Extra-Precise Iterative JAMES DEMMEL, YOZO HIDA, and WILLIAM KAHAN
Li, Xiaoye Sherry
prevented its adoption in standard subroutine libraries like LAPACK: (1) There was no standard way to access … error bound for the computed solution. The completion of the new BLAS Technical Forum Standard has … was supported in part by the NSF Cooperative Agreement No. ACI-9619020; NSF Grant Nos. ACI-9813362 and CCF
Kambhampati, Subbarao
Design Methodology to trade off Power, Output Quality and Error Resiliency: Application to Color … {…, nbanerje, kaushik}@purdue.edu, chaitali@asu.edu. Abstract: Power dissipation and tolerance to process variations pose conflicting design … sizing for process tolerance can be detrimental for power dissipation. However, for certain signal processing systems
Error Tolerant Address Configuration for Data Center Networks with Malfunctioning Devices
Chen, Yan
Error Tolerant Address Configuration for Data Center Networks with Malfunctioning Devices Xingyu Ma to correct malfunctions and it can cause substantial operation delay of the whole data center. In this paper benefits because in most cases malfunctions in data centers only account for a very small portion
1997-2001 by M. Kostic Ch.5: Uncertainty/Error Analysis
Kostic, Milivoje M.
©1997-2001 by M. Kostic. Ch.5: Uncertainty/Error Analysis · Introduction · Bias and Precision · Summation/Propagation (Expanded Combined Uncertainty) · Problem 5-30 … at corresponding Probability (%P). Remember: u = d%P = t,%P·S (@ %P); z = t = d/S … Bias
Alexander J. Silenko
2013-08-12
Analysis of spin dynamics in storage ring electric-dipole-moment (EDM) experiments ascertains that the use of initial vertical beam polarization allows cancellation of spin-dependent systematic errors imitating the EDM effect. While the use of this polarization meets certain difficulties, it should be considered as an alternative or supplementary possibility for fulfilling the EDM experiment.
ERROR-TOLERANT MULTI-MODAL SENSOR FUSION (SHORT PAPER) Farinaz Koushanfar*
ERROR-TOLERANT MULTI-MODAL SENSOR FUSION (SHORT PAPER) Farinaz Koushanfar* , Sasha Slijepcevic ESN tasks is multi-modal sensor fusion, where data from sensors of dif- ferent modalities are combined ESN applications, including multi- modal sensor fusion, is to ensure that all of the techniques
POWER SPECTRAL PARAMETERIZATIONS OF ERROR AS A FUNCTION OF RESOLUTION IN GRIDDED
Kaplan, Alexey
POWER SPECTRAL PARAMETERIZATIONS OF ERROR AS A FUNCTION OF RESOLUTION IN GRIDDED ALTIMETRY MAPS be expressed in terms of the averages over model grid box areas. In reality, however, observations are either differently by the model grid and by the observational system. This difference turns out to be a major
Detection and Prediction of Errors in EPCs of the SAP Reference Model
van der Aalst, Wil
as a blueprint for roll-out projects of SAP's ERP system. It reflects Version 4.6 of SAP R/3, which was marketed … Detection and Prediction of Errors in EPCs of the SAP Reference Model. J. Mendling, H.M.W. Verbeek … provide empirical evidence for these questions based on the SAP reference model. This model collection
Lucy, D.; Pollard, A.M. Title: Further comments on the estimation of error
Lucy, David
address. Abstract: Many researchers in the field of forensic odontology have questioned the error with the gustafson dental age estimation method Journal: Journal of Forensic Sciences Date: 1995 Volume: 40(2) Pages of papers into the forensic literature all offering improvements to the basic Gustafson age estimation
Descriptional Complexity of Error Detection Timothy Ng, David Rappaport and Kai Salomaa
Salomaa, Kai T.
by a number of applications, such as specification re- pair [1], computational biology [23], and error detection in communication channels [15, 18]. The Encyclopedia of Distances by Deza and Deza [7] contains, biology, coding theory, image processing, and physics, among others. For each of these definitions, we can
Smoothing Parameter Selection When Errors are Correlated and Application to Ozone Data
Heckman, Nancy E.
Smoothing Parameter Selection When Errors are Correlated and Application to Ozone Data, by Robert Jr … trend of daily and monthly ground ozone levels in southern Ontario. [Table-of-contents residue: Abstract; Air Pollution Data; Daily Ozone Data.]
PUBLISHED IN: PROCEEDINGS OF THE IEEE ICC 2013 1 Towards an Error Control Scheme for a
Chatziantoniou, Damianos
evaluation of its performance. An obvious use case for our scheme is the reliable delivery of software … PUBLISHED IN: PROCEEDINGS OF THE IEEE ICC 2013. Towards an Error Control Scheme for a Publish … for efficient content distribution. However, the design of efficient reliable transport protocols for multicast
Walker, Jeff
of its high albedo and thermal and water storage properties. Snow is also the largest varying landscape for the 1990-1991 snow season (November-April) have been examined. Dense vegetation, especially in the taiga snow crystals evolve with the progression of the season also contribute to the errors. In general
Torrellas, Josep
Shield: Cost-Effective Soft Error Protection For Register Files. Pablo Montesinos, Wei Liu … [slide residue: register-lifetime classification, P10 considered short vs. long, cf. Ponomarev et al., 2004; ECC-table entry allocation: entries are allocated when …] University of Illinois at Urbana-Champaign
A Scalable Model for Timing Error Prediction under Hardware and Workload Variations
Gupta, Rajesh
Conservative guardbands cause efficiency loss. Resilient techniques: 1) predict & prevent; 2) error ignorance … [slide residue: guardband-reduction percentages for the adder/multiplier benchmarks; instruction-level guardband reduction percentage at (0.72V, 0°C) / (0.85V, 50°C) regarding different …]
Critical Charge Characterization for Soft Error Rate Modeling in 90nm SRAM
Draper, Jeff
…witulski}@vanderbilt.edu. Abstract: Due to continuous technology scaling, the reduction of nodal capacitances and the lowering of power supply voltages result in an ever decreasing minimal charge capable of upsetting the logic state … fast characteristic timing parameters are shown to result in conservative soft error rate predictions
Temporal Memoization for Energy-Efficient Timing Error Recovery in GPGPUs
Gupta, Rajesh
commonly use conservative guardbands for the operating frequency or voltage to ensure error-free operation … therefore enables reduction of the minimum operating voltage [7]. Similarly, in non-volatile memory … and outperforms recent advances in resilient architectures. This technique also enhances robustness in the voltage
WEB-BASED VISUAL EXPLORATION AND ERROR DETECTION IN LARGE DATA SETS
Köbben, Barend
WEB-BASED VISUAL EXPLORATION AND ERROR DETECTION IN LARGE DATA SETS: ANTARCTIC ICEBERG TRACKING DATA AS A CASE Connie A. Blok, Ulanbek Turdukulov, Barend Köbben, Juan Luis Calle Pomares International The Netherlands blok@itc.nl; turdukulov@itc.nl Abstract Polar iceberg data are amongst others used
Error of the network approximation for densely packed composites with irregular geometry
Novikov, Alexei
the concentration of the filling inclusions is high is particularly relevant to polymer/ceramic composites, because a polymer matrix compensates for the brittle nature of ceramics, which is their main weakness. A survey … Error of the network approximation for densely packed composites with irregular geometry. Leonid
Modeling HSGPS Doppler Errors in Indoor Environments for Pedestrian Dead-Reckoning
Calgary, University of
Modeling HSGPS Doppler Errors in Indoor Environments for Pedestrian Dead-Reckoning Zhe He, Mark The use of high sensitivity GPS (HSGPS) receivers integrated with dead-reckoning sensors for pedestrian navigation has been broadly investigated and applied in the past decade. Pedestrian dead-reckoning (PDR
PROPER FILTER DESIGN PROCEDURE FOR VIBRATION SUPPRESSION USING DELAY-ERROR-ORDER CURVES
Mavroidis, Constantinos
PROPER FILTER DESIGN PROCEDURE FOR VIBRATION SUPPRESSION USING DELAY-ERROR-ORDER CURVES D. Economou of Mechanical Engineering, Mechanical Design and Control Systems Division, 9 Heroon Polytechniou Str., 15773@central.ntua.gr B Rutgers University, The State University of New Jersey, Department of Mechanical and Aerospace
Neural network predictions with error bars \\Lambda William D. Penny and Stephen J. Roberts
Roberts, Stephen
Neural network predictions with error bars \\Lambda William D. Penny and Stephen J. Roberts Neural, Technology and Medicine, London SW7 2BT., U.K. w.penny@ic.ac.uk, s.j.roberts@ic.ac.uk February 21, 1997
Detecting Concurrency Errors in Client-side JavaScript Web Applications
issues are becoming more serious for web applications because a new web standard, HTML5, allows web … Detecting Concurrency Errors in Client-side JavaScript Web Applications. Shin Hong, Yongbae Park … park@kaist.ac.kr, moonzoo@cs.kaist.ac.kr. Abstract: As web technologies have evolved, the complexity of dynamic web
Fast Error-bounded Surfaces and Derivatives Computation for Volumetric Particle Data
Frey, Pascal
Fast Error-bounded Surfaces and Derivatives Computation for Volumetric Particle Data Chandrajit Bajaj Vinay Siddavanahalli December 6, 2005 Abstract Volumetric smooth particle data arise as atomic system. An important computation performed on the volumetric particle system is that of force
Large-Scale Errors and Mesoscale Predictability in Pacific Northwest Snowstorms DALE R. DURRAN
Large-Scale Errors and Mesoscale Predictability in Pacific Northwest Snowstorms DALE R. DURRAN The development of mesoscale numerical weather prediction (NWP) models over the last two decades has made- search communities. Nevertheless, the predictability of the mesoscale features captured in such forecasts
Gross Error Detection in Chemical Plants and Refineries for On-Line Optimization
Pike, Ralph W.
Automation - FACS DOT Products, Inc. - NOVA … Distributed Control System runs control algorithm three times … Gross Error Detection in Chemical Plants and Refineries for On-Line Optimization. Xueyu Chen, Derya …, Baton Rouge, LA (February 28, 2003) … INTRODUCTION: status of on-line optimization; theoretical
GBAS Differentially Corrected Positioning Service Ionospheric Anomaly Errors Evaluated in an
Stanford University
. Young Shin Park is a Ph.D. Candidate in Aeronautics and Astronautics in the Global Positioning System after application of differential corrections is small. However, during solar storms and in geomagnetic done to mitigate the potential impact of errors induced by ionospheric anomalies on the precision
Wright, Dawn Jeannine
1 Error Analysis of Bathymetric Data Derived from IKONOS Imagery Location: Tutuila Island, American) / NOAA Fisheries' Coral Reef Ecosystem Division (CRED) Analysis Overview Bathymetric data were derived analyzed to extend the spatial coverage of the final derived bathymetry product. The imagery was provided
Schlegel, N-J.; Larour, E.; Seroussi, H.; Morlighem, M.; Box, J. E
2013-01-01
perturbations in SMB upstream and downstream from gate 8 … of ice flow both upstream and downstream. [36] By mapping … that errors far upstream and downstream of a gate could
Results of performance testing the Russian RPV temperature measurement probe used for annealing
Nakos, J.T. [Sandia National Labs., Albuquerque, NM (United States); Selsky, S. [CNIITMASH, Moscow (Russian Federation)
1998-03-01
This paper provides information on three (3) topics related to temperature measurements in an annealing procedure: (1) results of a series of experiments performed by CNIITMASH of the Russian consortium MOHT on their reactor pressure vessel (RPV) temperature measurement probe, (2) a discussion regarding uncertainties and errors in RPV temperature measurements, and (3) predictions from a thermal model of a spherical RPV temperature measurement probe. MOHT teamed with MPR Associates and was to perform the Annealing Demonstration Project (ADP) on behalf of the US Department of Energy, ESEERCo, EPRI, CRIEPI, Framatome, and Consumers Power Co. at the Midland plant. Experimental results show that the CNIITMASH probe errors are a maximum of about 27 C (49 F) during a 15 C/hr (27 F/hr) heat-up but only about 3 C (5.4 F) (0.6%) during the hold portion at 470 C (878 F). These errors are much smaller than those obtained from a similar series of experiments performed by Sandia National Laboratories (Sandia). The discussion about uncertainties and errors shows that results presented as a temperature difference provides a measure of the probe error. Qualitative agreement is shown between the model predictions, the experimental results of the CNIITMASH probe and the experimental results of a series of similar experiments performed by Sandia.
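The ramp-versus-hold pattern in those numbers is what a simple first-order thermal-lag model would predict: during a constant ramp the probe error approaches tau*R, and during a hold it decays toward zero. This is an assumed model, not the paper's thermal model; the time constant tau = 1.8 h is back-fitted so that tau*R reproduces the reported ~27 C error at the 15 C/hr heat-up rate.

```python
# First-order-lag sketch (assumed model): a probe with time constant tau
# tracking a vessel ramped at rate R develops a quasi-steady error of tau * R.
tau = 1.8          # probe time constant, hours (back-fitted assumption)
R = 15.0           # heat-up rate, C/hr (from the paper)
dt = 0.001         # explicit-Euler time step, hours
T_env = T_probe = 20.0
for _ in range(int(10 / dt)):           # simulate a 10-hour ramp
    T_env += R * dt
    T_probe += (T_env - T_probe) / tau * dt
ramp_error = T_env - T_probe
print(ramp_error)                       # approaches tau * R = 27 C
```

Holding the vessel temperature constant after the ramp lets the same model's error decay as exp(-t/tau), consistent with the much smaller 3 C error reported during the 470 C hold.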
Assessor Training Measurement Uncertainty
NVLAP Assessor Training 2009: Measurement Uncertainty … Calibration and testing labs performing … When the nature of the test precludes
UNIVERSITY OF CALIFORNIA, SAN DIEGO Measurement of the Magnetic and Temperature
California at San Diego, University of
Rate Measurement Assumptions 5.2.2 Compressional Heating Model . . . . . . .. 5.3 Balancing Compressional Heating with Cyclotron Cooling 5.4 Experimentally Measured Relaxation Rate 5.5 Error Analysis including B2 A.5 I for Small re/b ...... A.6 Three-Body Collision Rate A.7 Joule Heating when n(r, t) = n
Reducing Biases in XBT Measurements by Including Discrete Information from Pressure Switches
Reducing Biases in XBT Measurements by Including Discrete Information from Pressure Switches MARLOS underway to improve XBT probes by including pressure switches. Information from these pressure measurements error parameters, and to optimize the use of pressure switches in terms of number of switches, optimal
An estimation algorithm for 3-D pose measurement using redundant ultrasonic sensors
Branum, Brian Howell
1998-01-01
Precise but expensive sensing equipment is used with range-measuring instruments to triangulate an accurate 3-D pose, using more sensors than are necessary for a single 3-D pose measurement. The pose, by including expected errors, could be modeled with a ...
Uncertainty in terahertz time-domain spectroscopy measurement
Withayachumnankul, Withawat; Fischer, Bernd M.; Lin Hungyen; Abbott, Derek
2008-06-15
Measurements of optical constants at terahertz--or T-ray--frequencies have been performed extensively using terahertz time-domain spectroscopy (THz-TDS). Spectrometers, together with physical models explaining the interaction between a sample and T-ray radiation, are progressively being developed. Nevertheless, measurement errors in the optical constants, so far, have not been systematically analyzed. This situation calls for a comprehensive analysis of measurement uncertainty in THz-TDS systems. The sources of error existing in a terahertz spectrometer and throughout the parameter estimation process are identified. The analysis herein quantifies the impact of each source on the output optical constants. The resulting analytical model is evaluated against experimental THz-TDS data.
Neradilek, Moni Blazej; Polissar, Nayak; Einstein, Daniel R.; Glenny, Robb W.; Minard, Kevin R.; Carson, James P.; Jiao, Xiangmin; Jacob, Rick E.; Cox, Timothy C.; Postlewait, Ed; Corley, Richard A.
2012-06-01
We examine a previously published branch-based approach to modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it, and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that account for it. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. Measurement error has an important impact on the estimated morphometry models and needs to be accounted for in the analysis.
Lonardi, Stefano
Indexing with Errors. Moritz G. Maaß and Johannes Nowak ({maass,nowakj}@in.tum.de), Institut für Informatik. Snippet: ... size $n$, assuming a uniform cost model throughout this work; for exact matching, the index construction time, the lookup time, and the error model are considered. Usually, the least important ...
Fitzpatrick, Richard
Bifurcated states of a rotating tokamak plasma in the presence of a static error-field. Related works: Nonlinear response of a rotating tokamak plasma to a resonant error-field, Phys. Plasmas 21, 092513 (2014), doi:10.1063/1.4896244; Neoclassical momentum transport in an impure rotating tokamak plasma, Phys. Plasmas.
Fitzpatrick, Richard
Error-field induced electromagnetic torques in a large aspect-ratio, low-beta, weakly shaped tokamak. Related works: ... aspect-ratio tokamaks, Phys. Plasmas 17, 122504 (2010), doi:10.1063/1.3526611; A nonideal error-field response model for strongly shaped tokamak plasmas, Phys. Plasmas 17, 112502 (2010), doi:10.1063/1.3504227; Modeling the effect ...
Fitzpatrick, Richard
Drift-magnetohydrodynamical model of error-field penetration in tokamak plasmas, A. Cole and R. Fitzpatrick. Related works: ... induced by waves in tokamaks, Phys. Plasmas 20, 102105 (2013), doi:10.1063/1.4823713; A nonideal error-field response model for strongly shaped tokamak plasmas, Phys. Plasmas 17, 112502 (2010), doi:10 ...
Wobbrock, Jacob O.
Cleanroom: Edit-Time Error Detection with the Uniqueness Heuristic. Andrew J. Ko and Jacob O. Wobbrock. Snippet: ... for HTML, CSS, and JavaScript, in an interactive editor called Cleanroom, which highlights lone identifiers after each keystroke. Through an online experiment, we show that Cleanroom detects real errors ...
Decay of motor memories in the absence of error Pavan A. Vaswani1 and Reza Shadmehr2
Shadmehr, Reza
1. Department of Neuroscience, 2. Department of Biomedical Engineering, Laboratory for Computational Motor ... (@jhmi.edu). Running title: Decay of motor memories. Keywords: motor control, motor learning, decay, error
Poulakakis, Ioannis
Error Probabilities and Threshold Selection in Networked Nuclear Detection. Chetan D. Pahlajani. Snippet: ... analytical bounds on error probabilities in the setting of networked nuclear detection based on a likelihood ... (nuclear or radioactive) within a fixed time interval. Exploiting the particular modeling structure of remote nuclear ...
Whitehead, Anthony
Testing the iDoseCheck, by Jacqueline Ellis, RN, PhD, University of Ottawa, School of Nursing, and Children ... Snippet: ... a common type of pediatric drug error, with over-dose outnumbering under-dose errors. Weight-based calculations are essential for proper dosing but complex in pediatric settings, where patient weights may vary ...
The Swing Equation: Power Form, PerUnit, Error 1.0 Power Form of Swing Equation
McCalley, James D.
1.0 Power Form of Swing Equation. Recall that when the swing equation is written in per-unit, the numerical value of the torque version ... to analyze error in the power form of the swing equation. But before we do that, we need to define per-unit speed ...
Bending of the instrument shaft introduces error into models of the robot kinematics. Visual or electromagnetic tracking of the instrument tip provides correct forward kinematics, but uncertainty in shaft bending and port location leaves ... Comparison with a controller assuming a straight instrument shaft quantifies motion errors resulting from ...
Even-Parity S_(N) Adjoint Method Including SP_(N) Model Error and Iterative Efficiency
Zhang, Yunhuang
2014-08-10
In this Dissertation, we analyze an adjoint-based approach for assessing the model error of SP_(N) equations (low fidelity model) by comparing it against S_(N) equations (high fidelity model). Three model error estimation methods, namely, direct...
Measuring the dark matter equation of state
Ana Laura Serra; Mariano Javier de León Domínguez Romero
2011-05-30
The nature of the dominant component of galaxies and clusters remains unknown. While the astrophysics community supports the cold dark matter (CDM) paradigm as a key factor in the current cosmological model, no direct CDM detections have been performed. Faber and Visser (2006) suggested a simple method for measuring the dark matter equation of state that combines kinematic and gravitational lensing data to test the widely adopted assumption of pressureless dark matter. Following this formalism, we have measured the dark matter equation of state for the first time using improved techniques. We have found that the value of the equation of state parameter is consistent with pressureless dark matter within the errors. Nevertheless, the measured value is lower than expected because the masses determined with lensing are typically larger than those obtained through kinematic methods. We have tested our techniques using simulations and have also analyzed possible sources of error that could invalidate or mimic our results. In the light of this result, we can now suggest that understanding the nature of dark matter requires a complete general relativistic analysis.
Representation of the Fourier transform as a weighted sum of the complex error functions
S. M. Abrarov; B. M. Quine
2015-08-05
In this paper we show that a methodology based on sampling with a Gaussian function of the kind $h\,e^{-(t/c)^2}/(c\sqrt{\pi})$, where $c$ and $h$ are constants, leads to a Fourier transform that can be represented as a weighted sum of complex error functions. Due to a remarkable property of the complex error function, the Fourier transform based on the weighted sum can be significantly simplified and expressed in terms of a damped harmonic series. In contrast to the conventional discrete Fourier transform, this methodology results in a non-periodic wavelet approximation. Consequently, the proposed approach may be useful and convenient in algorithmic implementation.
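The Gaussian sampling kernel in this abstract has a well-known closed-form Fourier transform, which is what makes a weighted-sum representation tractable. The sketch below is a minimal numerical check of that closed form only, not an implementation of the paper's method; function names, integration limits, and step counts are illustrative assumptions.

```python
import math

def gaussian_kernel(t, c, h):
    # The sampling kernel from the abstract: h * exp(-(t/c)^2) / (c * sqrt(pi))
    return h * math.exp(-(t / c) ** 2) / (c * math.sqrt(math.pi))

def ft_numeric(c, h, omega, half_width=40.0, n=100000):
    # The kernel is even, so its Fourier transform reduces to a cosine
    # transform; midpoint rule over [-half_width, half_width]
    dt = 2.0 * half_width / n
    total = 0.0
    for k in range(n):
        t = -half_width + (k + 0.5) * dt
        total += gaussian_kernel(t, c, h) * math.cos(omega * t)
    return total * dt

def ft_analytic(c, h, omega):
    # Closed form: the transform of the normalized Gaussian is h * exp(-(c*omega/2)^2)
    return h * math.exp(-((c * omega) / 2.0) ** 2)
```

With c = h = 1 the transform at omega = 0 is exactly 1 (the kernel integrates to h), and the numerical quadrature reproduces the analytic value to high precision.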
S. M. Abrarov; B. M. Quine
2015-11-03
This paper presents a new approach to applying the Fourier transform to the complex error function, resulting in an efficient rational approximation. Specifically, a computational test shows that with only $17$ summation terms the obtained rational approximation of the complex error function provides accuracy ${10^{-15}}$ over most of the domain of practical importance, $0 \le x \le 40,000$ and ${10^{-4}} \le y \le {10^2}$, required for HITRAN-based spectroscopic applications. Since the rational approximation contains no trigonometric or exponential functions dependent upon the input parameters $x$ and $y$, it is rapid in computation. This example demonstrates that the considered methodology of the Fourier transform may be advantageous in practical applications.
Discussion on common errors in analyzing sea level accelerations, solar trends and global warming
Scafetta, Nicola
2013-01-01
Errors in applying regression models and wavelet filters used to analyze geophysical signals are discussed: (1) multidecadal natural oscillations (e.g. the quasi 60-year Atlantic Multidecadal Oscillation (AMO), North Atlantic Oscillation (NAO) and Pacific Decadal Oscillation (PDO)) need to be taken into account for properly quantifying anomalous accelerations in tide gauge records such as in New York City; (2) uncertainties and multicollinearity among climate forcing functions prevent a proper evaluation of the solar contribution to the 20th century global surface temperature warming using overloaded linear regression models during the 1900-2000 period alone; (3) when periodic wavelet filters, which require that a record is pre-processed with a reflection methodology, are improperly applied to decompose non-stationary solar and climatic time series, Gibbs boundary artifacts emerge yielding misleading physical interpretations. By correcting these errors and using optimized regression models that reduce multico...
Rasch, Kevin M.; Hu, Shuming; Mitas, Lubos [Center for High Performance Simulation and Department of Physics, North Carolina State University, Raleigh, North Carolina 27695 (United States)]
2014-01-28
We elucidate the origin of large differences (two-fold or more) in the fixed-node errors between the first- vs second-row systems for single-configuration trial wave functions in quantum Monte Carlo calculations. This significant difference in the valence fixed-node biases is studied across a set of atoms, molecules, and also Si, C solid crystals. We show that the key features which affect the fixed-node errors are the differences in electron density and the degree of node nonlinearity. The findings reveal how the accuracy of the quantum Monte Carlo varies across a variety of systems, provide new perspectives on the origins of the fixed-node biases in calculations of molecular and condensed systems, and carry implications for pseudopotential constructions for heavy elements.
Xiaofeng Wu; Guanrong Chen; Jianping Cai
2008-07-14
This paper provides a unified method for analyzing chaos synchronization of the generalized Lorenz systems. The considered synchronization scheme consists of identical master and slave generalized Lorenz systems coupled by linear state error variables. A sufficient synchronization criterion for a general linear state error feedback controller is rigorously proven by means of linearization and Lyapunov's direct methods. When a simple linear controller is used in the scheme, some easily implemented algebraic synchronization conditions are derived based on the upper and lower bounds of the master chaotic system. These criteria are further optimized to improve their sharpness. The optimized criteria are then applied to four typical generalized Lorenz systems, i.e. the classical Lorenz system, the Chen system, the Lü system and a unified chaotic system, obtaining precise corresponding synchronization conditions. The advantages of the new criteria are revealed by analytically and numerically comparing their sharpness with that of the known criteria existing in the literature.
Effect of Field Errors in Muon Collider IR Magnets on Beam Dynamics
Alexahin, Y.; Gianfelice-Wendt, E.; Kapin, V.V.; /Fermilab
2012-05-01
In order to achieve peak luminosity of a Muon Collider (MC) in the 10{sup 35} cm{sup -2}s{sup -1} range, very small values of the beta-function at the interaction point (IP) are necessary ({beta}* {le} 1 cm), while the distance from the IP to the first quadrupole cannot be made shorter than {approx}6 m, as dictated by the necessity of detector protection from backgrounds. As a result, the beta-function at the final focus quadrupoles can reach 100 km, making beam dynamics very sensitive to all kinds of errors. In the present report we consider the effects on momentum acceptance and dynamic aperture of multipole field errors in the body of IR dipoles, as well as of fringe fields in both dipoles and quadrupoles, in the case of a 1.5 TeV (c.o.m.) MC. Analysis shows these effects to be strong but correctable with dedicated multipole correctors.
MULTI-MODE ERROR FIELD CORRECTION ON THE DIII-D TOKAMAK
SCOVILLE, JT; LAHAYE, RJ
2002-10-01
OAK A271 MULTI-MODE ERROR FIELD CORRECTION ON THE DIII-D TOKAMAK. Error field optimization on DIII-D tokamak plasma discharges has routinely been done for the last ten years with the use of the external ''n = 1 coil'' or the ''C-coil''. The optimum level of correction coil current is determined by the ability to avoid the locked mode instability and access previously unstable parameter space at low densities. The locked mode typically has toroidal and poloidal mode numbers n = 1 and m = 2, respectively, and it is this component that initially determined the correction coil current and phase. Realization of the importance of nearby n = 1 mode components m = 1 and m = 3 has led to a revision of the error field correction algorithm. Viscous and toroidal mode coupling effects suggested the need for additional terms in the expression for the radial ''penetration'' field B{sub pen} that can induce a locked mode. To incorporate these effects, the low density locked mode threshold database was expanded. A database of discharges at various toroidal fields, plasma currents, and safety factors was supplemented with data from an experiment in which the fields of the n = 1 coil and C-coil were combined, allowing the poloidal mode spectrum of the error field to be varied. A multivariate regression analysis of this new low density locked mode database was done to determine the low density locked mode threshold scaling relationship n{sub e} {proportional_to} B{sub T}{sup -0.01} q{sub 95}{sup -0.79} B{sub pen} and the coefficients of the poloidal mode components in the expression for B{sub pen}. Improved plasma performance is achieved by optimizing B{sub pen} by varying the applied correction coil currents.
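The regression result quoted in this abstract is a simple power-law scaling, so it is straightforward to evaluate. A minimal sketch: the abstract gives only the exponents, so the proportionality constant k and the function name below are illustrative assumptions.

```python
def locked_mode_threshold_density(b_t, q95, b_pen, k=1.0):
    # Empirical scaling from the abstract:
    #   n_e ~ B_T^-0.01 * q95^-0.79 * B_pen
    # k is an illustrative proportionality constant (not given in the abstract)
    return k * (b_t ** -0.01) * (q95 ** -0.79) * b_pen
```

Because the B_T exponent is nearly zero, the threshold density is essentially insensitive to toroidal field, while doubling the penetration field B_pen doubles the threshold density.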
Zollanvari, Amin
2012-02-14
Analytic Study of Performance of Error Estimators for Linear Discriminant Analysis with Applications in Genomics (dissertation abstract; Major Subject: Electrical Engineering, December 2010; committee/department fragments: Aniruddha Datta; Guy L. Curry; Head of Department, Costas N. Georghiades). List of Tables fragments: I. Minimum sample size n (n0 = n1 = n) for desired (n; 0.5) in the univariate case; II. Genes selected using the validity-goodness model selection ...
Zhang, J.; Hodge, B. M.; Florita, A.
2013-05-01
Wind and solar power generations differ from conventional energy generation because of the variable and uncertain nature of their power output. This variability and uncertainty can have significant impacts on grid operations. Thus, short-term forecasting of wind and solar generation is uniquely helpful for power system operations to balance supply and demand in an electricity system. This paper investigates the correlation between wind and solar power forecasting errors.
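The correlation between wind and solar forecasting errors that this abstract investigates reduces, at its simplest, to a sample Pearson coefficient between two error time series. A minimal standard-library sketch; the function name and the sample error series are hypothetical, not data from the study.

```python
import math

def pearson_corr(xs, ys):
    # Sample Pearson correlation coefficient between two equal-length series
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# Hypothetical hourly forecast errors (MW) for a wind and a solar plant
wind_err = [12.0, -5.0, 3.5, -8.0, 6.0, -1.5]
solar_err = [4.0, -2.0, 1.0, -3.5, 2.5, -0.5]
```

A coefficient near zero would suggest the two error sources partially cancel when aggregated, which matters for balancing supply and demand as the abstract notes.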
(Fragment of tuple-scan pseudocode: while ll ≠ ERROR do tcl ← tuple of ctcl corresponding to ll.tid; cr ...)
Samet, Hanan
(... corresponding to cdb.tid; lca ← first tuple t of li cl such that t.class >= ca.name; while lca ≠ ERROR and lca.class = ca.name do lia ← tuple of logical images corresponding to lca.tid; lcb ← first tuple ...; lca ← next tuple of li cl in alphabetic order ... Plan P4P)
Tradeoff between energy and error in the discrimination of quantum-optical devices
Alessandro Bisio; Michele Dall'Arno; Giacomo Mauro D'Ariano
2011-07-11
We address the problem of the energy-error tradeoff in the discrimination between two linear passive quantum optical devices with a single use. We provide an analytical derivation of the optimal strategy for beamsplitters and an iterative algorithm converging to the optimum in the general case. We then compare the optimal strategy with a simpler strategy using coherent input states and homodyne detection. It turns out that the former requires much less energy to achieve the same performance.
Numerical errors in the presence of steep topography: analysis and alternatives
Lundquist, K A; Chow, F K; Lundquist, J K
2010-04-15
It is well known in computational fluid dynamics that grid quality affects the accuracy of numerical solutions. When assessing grid quality, properties such as aspect ratio, orthogonality of coordinate surfaces, and cell volume are considered. Mesoscale atmospheric models generally use terrain-following coordinates with large aspect ratios near the surface. As high resolution numerical simulations are increasingly used to study topographically forced flows, a high degree of non-orthogonality is introduced, especially in the vicinity of steep terrain slopes. Numerical errors associated with the use of terrain-following coordinates can adversely affect the accuracy of the solution in steep terrain. Inaccuracies from the coordinate transformation are present in each spatially discretized term of the Navier-Stokes equations, as well as in the conservation equations for scalars. In particular, errors in the computation of horizontal pressure gradients, diffusion, and horizontal advection terms have been noted in the presence of sloping coordinate surfaces and steep topography. In this work we study the effects of these spatial discretization errors on the flow solution for three canonical cases: scalar advection over a mountain, an atmosphere at rest over a hill, and forced advection over a hill. This study is completed using the Weather Research and Forecasting (WRF) model. Simulations with terrain-following coordinates are compared to those using a flat coordinate, where terrain is represented with the immersed boundary method. The immersed boundary method is used as a tool which allows us to eliminate the terrain-following coordinate transformation, and quantify numerical errors through a direct comparison of the two solutions.
Additionally, the effects of related issues such as the steepness of terrain slope and grid aspect ratio are studied in an effort to gain an understanding of numerical domains where terrain-following coordinates can successfully be used and those domains where the solution would benefit from the use of the immersed boundary method.
Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint
Florita, A.; Hodge, B. M.; Milligan, M.
2012-08-01
The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
Optimized structure and vibrational properties by error affected potential energy surfaces
Andrea Zen; Delyan Zhelyazov; Leonardo Guidoni
2013-06-18
The precise theoretical determination of the geometrical parameters of molecules at the minima of their potential energy surface, and of the corresponding vibrational properties, is of fundamental importance for the interpretation of vibrational spectroscopy experiments. Quantum Monte Carlo techniques are correlated electronic structure methods promising for large molecules, which are intrinsically affected by stochastic errors on both energy and force calculations, making the mentioned calculations more challenging with respect to other more traditional quantum chemistry tools. To circumvent this drawback, in the present work we formulate the general problem of evaluating the molecular equilibrium structures, the harmonic frequencies and the anharmonic coefficients of an error affected potential energy surface. The proposed approach, based on a multidimensional fitting procedure, is illustrated together with a critical evaluation of systematic and statistical errors. We observe that the use of forces instead of energies in the fitting procedure reduces the statistical uncertainty of the vibrational parameters by one order of magnitude. Preliminary results based on Variational Monte Carlo calculations on the water molecule demonstrate the possibility to evaluate geometrical parameters, harmonic and anharmonic coefficients at this level of theory with an affordable computational cost and a small stochastic uncertainty (<0.07% for geometries and <0.7% for vibrational properties).
Horace Yuen
2014-11-10
Privacy amplification is a necessary step in all quantum key distribution protocols, and error correction is needed in each except those where many-photon signals are used for key communication in the quantum-noise approach. No security analysis of the information leaked to the attacker through the error-correcting code has ever been provided, while an ad hoc formula is currently employed to account for such leakage in the key generation rate. It is also commonly believed that privacy amplification allows the users to establish at least a short key of arbitrarily close to perfect security. In this paper we show how the lack of a rigorous error correction analysis makes the otherwise valid privacy amplification results invalid, and that there exists a limit on how close to perfect a key generated by privacy amplification can be. In addition, there is a necessary tradeoff between key rate and security, and the best theoretical values from current theories would not generate enough near-uniform key bits to cover the message authentication key cost in disturbance-information tradeoff protocols of the BB84 variety.
On the implementation of error handling in dynamic interfaces to scientific codes
Solomon, C.J.
1993-11-01
With the advent of powerful workstations with windowing systems, the scientific community has become interested in user friendly interfaces as a means of promoting the distribution of scientific codes to colleagues. Distributing scientific codes to a wider audience can, however, be problematic because scientists, who are familiar with the problem being addressed but not aware of necessary operational details, are encouraged to use the codes. A more friendly environment that not only guides user inputs, but also helps catch errors is needed. This thesis presents a dynamic graphical user interface (GUI) creation system with user controlled support for error detection and handling. The system checks a series of constraints defining a valid input set whenever the state of the system changes and notifies the user when an error has occurred. A naive checking scheme was implemented that checks every constraint every time the system changes. However, this method examines many constraints whose values have not changed. Therefore, a minimum evaluation scheme that only checks those constraints that may have been violated was implemented. This system was implemented in a prototype and user testing was used to determine if it was a success. Users examined both the GUI creation system and the end-user environment. The users found both to be easy to use and efficient enough for practical use. Moreover, they concluded that the system would promote distribution.
A method for the quantification of model form error associated with physical systems.
Wallen, Samuel P.; Brake, Matthew Robert
2014-03-01
In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.
Method used to estimate screening-level Total Failure Probability for human error events
Burns, R.S.; Turner, J.H. [Oak Ridge National Lab., TN (United States). Engineering Technology Div.
1994-12-31
This document briefly describes the method used to estimate a screening value for the Total Failure Probability (F{sub T}) of human error events that are identified in the fault trees which describe potential liquid UF{sub 6} release accidents at two US Gaseous Diffusion Plants. A discussion is provided of the assumptions, limitations, and overall logic of the F{sub T} assignment method, and a description is presented of how the method is employed. The description herein presents the screening technique used to quantify human errors in the accident analysis portion of the Gaseous Diffusion Plant Safety Analysis Report Upgrade Program. Specifically, the basic events analyzed here are given in the fault trees for one facility at the Paducah Gaseous Diffusion Plant (PGDP) and one at the Portsmouth Gaseous Diffusion Plant (PORTS). These plants are primarily chemical processing facilities that deal with a slightly radioactive process gas, low-enriched uranium hexafluoride (UF{sub 6}). A Human Reliability Analysis (HRA) was not accomplished while drawing the fault trees; the accomplishment of an HRA would be determined by the overall study results. The method described herein provides a framework within which a conservative estimate of human error probability can be made at the screening level for use in the event trees and fault trees.
Measuring solar reflectance Part II: Review of practical methods
Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul
2010-05-14
A companion article explored how solar reflectance varies with surface orientation and solar position, and found that clear sky air mass 1 global horizontal (AM1GH) solar reflectance is a preferred quantity for estimating solar heat gain. In this study we show that AM1GH solar reflectance R{sub g,0} can be accurately measured with a pyranometer, a solar spectrophotometer, or an updated edition of the Solar Spectrum Reflectometer (version 6). Of primary concern are errors that result from variations in the spectral and angular distributions of incident sunlight. Neglecting shadow, background and instrument errors, the conventional pyranometer technique can measure R{sub g,0} to within 0.01 for surface slopes up to 5:12 [23{sup o}], and to within 0.02 for surface slopes up to 12:12 [45{sup o}]. An alternative pyranometer method minimizes shadow errors and can be used to measure R{sub g,0} of a surface as small as 1 m in diameter. The accuracy with which it can measure R{sub g,0} is otherwise comparable to that of the conventional pyranometer technique. A solar spectrophotometer can be used to determine R*{sub g,0}, a solar reflectance computed by averaging solar spectral reflectance weighted with AM1GH solar spectral irradiance. Neglecting instrument errors, R*{sub g,0} matches R{sub g,0} to within 0.006. The air mass 1.5 solar reflectance measured with version 5 of the Solar Spectrum Reflectometer can differ from R*{sub g,0} by as much as 0.08, but the AM1GH output of version 6 of this instrument matches R*{sub g,0} to within about 0.01.
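The spectrophotometer method in this abstract computes an irradiance-weighted average of spectral reflectance, R* = ∫R(λ)I(λ)dλ / ∫I(λ)dλ. A minimal sketch of that weighting using trapezoidal integration; the function name and the sample spectral grid are illustrative, not the AM1GH data.

```python
def weighted_solar_reflectance(wavelengths, reflectance, irradiance):
    # R* = integral(R(w) * I(w) dw) / integral(I(w) dw), trapezoidal rule
    num = 0.0
    den = 0.0
    for i in range(len(wavelengths) - 1):
        dw = wavelengths[i + 1] - wavelengths[i]
        num += 0.5 * (reflectance[i] * irradiance[i]
                      + reflectance[i + 1] * irradiance[i + 1]) * dw
        den += 0.5 * (irradiance[i] + irradiance[i + 1]) * dw
    return num / den
```

A sanity check: a spectrally flat surface returns its constant reflectance regardless of the irradiance spectrum, which is why the choice of weighting spectrum only matters for spectrally selective surfaces.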
LIDAR Wind Speed Measurements of Evolving Wind Fields
Simley, E.; Pao, L. Y.
2012-07-01
Light Detection and Ranging (LIDAR) systems are able to measure the speed of incoming wind before it interacts with a wind turbine rotor. These preview wind measurements can be used in feedforward control systems designed to reduce turbine loads. However, the degree to which such preview-based control techniques can reduce loads by reacting to turbulence depends on how accurately the incoming wind field can be measured. Past studies have assumed Taylor's frozen turbulence hypothesis, which implies that turbulence remains unchanged as it advects downwind at the mean wind speed. With Taylor's hypothesis applied, the only source of wind speed measurement error is distortion caused by the LIDAR. This study introduces wind evolution, characterized by the longitudinal coherence of the wind, to LIDAR measurement simulations to create a more realistic measurement model. A simple model of wind evolution is applied to a frozen wind field used in previous studies to investigate the effects of varying the intensity of wind evolution. LIDAR measurements are also evaluated with a large eddy simulation of a stable boundary layer provided by the National Center for Atmospheric Research. Simulation results show the combined effects of LIDAR errors and wind evolution for realistic turbine-mounted LIDAR measurement scenarios.
Quantum measurements of atoms using cavity QED
Dada, Adetunmise C.; Andersson, Erika [SUPA, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Jones, Martin L.; Kendon, Vivien M. [School of Physics and Astronomy, University of Leeds, Woodhouse Lane, Leeds LS2 9JT (United Kingdom); Everitt, Mark S. [School of Physics and Astronomy, University of Leeds, Woodhouse Lane, Leeds LS2 9JT (United Kingdom); National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda ku, Tokyo 101-8430 (Japan)
2011-04-15
Generalized quantum measurements are an important extension of projective or von Neumann measurements in that they can be used to describe any measurement that can be implemented on a quantum system. We describe how to realize two nonstandard quantum measurements using cavity QED. The first measurement optimally and unambiguously distinguishes between two nonorthogonal quantum states. The second example is a measurement that demonstrates superadditive quantum coding gain. The experimental tools used are single-atom unitary operations effected by Ramsey pulses and two-atom Tavis-Cummings interactions. We show how the superadditive quantum coding gain is affected by errors in the field-ionization detection of atoms and that even with rather high levels of experimental imperfections, a reasonable amount of superadditivity can still be seen. To date, these types of measurements have been realized only on photons. It would be of great interest to have realizations using other physical systems. This is for fundamental reasons but also since quantum coding gain in general increases with code word length, and a realization using atoms could be more easily scaled than existing realizations using photons.
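For the first measurement, the optimal unambiguous discrimination of two nonorthogonal pure states succeeds with probability 1 − |⟨ψ1|ψ2⟩| (the Ivanovic-Dieks-Peres bound), with the remainder of runs giving an inconclusive outcome. A small numerical sketch with hypothetical state amplitudes:

```python
import numpy as np

# Two nonorthogonal single-qubit states (hypothetical amplitudes).
theta = np.pi / 8
psi1 = np.array([np.cos(theta),  np.sin(theta)])
psi2 = np.array([np.cos(theta), -np.sin(theta)])

# Overlap |<psi1|psi2>| = |cos^2(theta) - sin^2(theta)| = cos(2*theta).
overlap = abs(np.vdot(psi1, psi2))

# Optimal unambiguous discrimination: success probability 1 - overlap;
# the measurement never misidentifies a state, but is inconclusive
# with probability equal to the overlap.
p_success = 1.0 - overlap
```

The closer the two states are to parallel, the larger the overlap and the smaller the fraction of conclusive outcomes, which is why errors in the atomic detection stage matter most for nearly indistinguishable states.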
Nash, B.; Guo, W.
2009-05-04
Successful operation of NSLS-II requires sufficient dynamic aperture for injection, as well as momentum aperture for Touschek lifetime. We explore the dependence of momentum and dynamic aperture on higher-order multipole field errors in the quadrupoles and sextupoles. We add random and systematic multipole errors to the quadrupoles and sextupoles and compute the effect on dynamic aperture. We find that the strongest effect is at negative momentum, due to larger closed orbit excursions. Adding all the errors based on the NSLS-II specifications, we find adequate dynamic and momentum aperture.
Measurement uncertainty of adsorption testing of desiccant materials
Bingham, C E; Pesaran, A A
1988-12-01
The technique of measurement uncertainty analysis as described in the current ANSI/ASME standard is applied to the testing of desiccant materials in SERI's Sorption Test Facility. This paper estimates the elemental precision and systematic errors in these tests and propagates them separately to obtain the resulting uncertainty of the test parameters, including relative humidity (±0.03) and sorption capacity (±0.002 g/g). Errors generated by instrument calibration, data acquisition, and data reduction are considered. Measurement parameters that would improve the uncertainty of the results are identified. Using the uncertainty in the moisture capacity of a desiccant, the design engineer can estimate the uncertainty in performance of a dehumidifier for desiccant cooling systems with confidence. 6 refs., 2 figs., 8 tabs.
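The ANSI/ASME approach propagates systematic (bias) and precision errors separately and then combines them; a minimal sketch of the root-sum-square combination (the elemental error values below are hypothetical, not SERI's):

```python
import math

def combined_uncertainty(bias_limits, precision_indices, t=2.0):
    """Combine elemental errors per the ANSI/ASME measurement
    uncertainty methodology: bias limits and precision indices are each
    root-sum-squared separately, then combined as
    U = sqrt(B^2 + (t*S)^2), with t the Student-t coverage factor."""
    B = math.sqrt(sum(b * b for b in bias_limits))       # total bias limit
    S = math.sqrt(sum(s * s for s in precision_indices))  # total precision index
    return math.sqrt(B ** 2 + (t * S) ** 2)

# Hypothetical elemental errors for a relative-humidity measurement:
# two bias sources (calibration, data reduction) and one precision source.
u_rh = combined_uncertainty(bias_limits=[0.02, 0.01],
                            precision_indices=[0.008])
```

Propagating the two error classes separately, as the abstract describes, lets the analyst see whether recalibration (reducing B) or repeated measurements (reducing S) would improve the result more.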
INVARIANT RADON MEASURES ON MEASURED LAMINATION SPACE
Hamenstädt, Ursula
Let S be an oriented surface of genus g ≥ 0 with m ≥ 0 punctures and 3g − 3 + m ≥ 2. We classify all Radon measures on the space ML of measured geodesic laminations that are invariant under the mapping class group MCG(S), which naturally acts on ML as a group of homeomorphisms preserving a Radon measure.
Method and apparatus for measuring lung density by Compton backscattering
Loo, Billy W. (Oakland, CA); Goulding, Frederick S. (Lafayette, CA)
1991-01-01
The density of the lung of a patient suffering from pulmonary edema is monitored by irradiating the lung by a single collimated beam of monochromatic photons and measuring the energies of photons Compton backscattered from the lung by a single high-resolution, high-purity germanium detector. A compact system geometry and a unique data extraction scheme are utilized to minimize systematic errors due to the presence of the chestwall and multiple scattering.
Method and apparatus for measuring lung density by Compton backscattering
Loo, B.W.; Goulding, F.S.
1988-03-11
The density of the lung of a patient suffering from pulmonary edema is monitored by irradiating the lung by a single collimated beam of monochromatic photons and measuring the energies of photons Compton backscattered from the lung by a single high-resolution, high-purity germanium detector. A compact system geometry and a unique data extraction scheme are utilized to minimize systematic errors due to the presence of the chestwall and multiple scattering. 11 figs., 1 tab.
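The energy of a Compton-scattered photon depends only on the incident energy and scattering angle, which is what lets a fixed backscatter geometry map detected energy to scattering site. A sketch of the standard Compton formula (illustrative values; the patent does not state its source energy):

```python
import math

def compton_scattered_energy(e0_kev, angle_deg):
    """Energy of a photon scattered through `angle_deg`, from the
    Compton formula E' = E0 / (1 + (E0 / 511 keV) * (1 - cos(theta))),
    where 511 keV is the electron rest energy."""
    theta = math.radians(angle_deg)
    return e0_kev / (1.0 + (e0_kev / 511.0) * (1.0 - math.cos(theta)))

# For a hypothetical 100 keV beam, a fully backscattered (180 deg)
# photon returns at a substantially reduced energy; multiply-scattered
# photons arrive at other energies, which is one way a high-resolution
# detector can help separate the single-backscatter signal.
e_back = compton_scattered_energy(100.0, 180.0)
```

The sharp energy-angle relationship is why a high-purity germanium detector, with its fine energy resolution, helps reject multiple-scattering events.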
Absorber Alignment Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01
As we pursue efforts to lower the capital and installation costs of parabolic trough solar collectors, it is essential to maintain high optical performance. While there are many optical tools available to measure the reflector slope errors of parabolic trough solar collectors, there are few tools to measure the absorber alignment. A new method is presented here to measure the absorber alignment in two dimensions to within 0.5 cm. The absorber alignment is measured using a digital camera and four photogrammetric targets. Physical contact with the receiver absorber or glass is not necessary. The alignment of the absorber is measured along its full length so that sagging of the absorber can be quantified with this technique. The resulting absorber alignment measurement provides critical information required to accurately determine the intercept factor of a collector.
Okura, Yuki; Futamase, Toshifumi E-mail: tof@astr.tohoku.ac.jp
2013-07-01
This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise introduces additional moments and a centroid-shift error; the first-order effects cancel in averaging, but the second-order effects do not. We derive formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.
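The moments underlying such shape measurements are quadrupole moments of the image light distribution. A minimal unweighted sketch (E-HOLICs itself uses an elliptical weight function and noise-correction terms not shown here; the Gaussian test image is an assumption):

```python
import numpy as np

def ellipticity(image):
    """Complex ellipticity from unweighted quadrupole moments:
    e = (Q11 - Q22 + 2i*Q12) / (Q11 + Q22), with moments taken about
    the image centroid. A noise-free, unweighted simplification of the
    moment-based shape measurement discussed above."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    total = image.sum()
    xc, yc = (image * x).sum() / total, (image * y).sum() / total
    q11 = (image * (x - xc) ** 2).sum() / total
    q22 = (image * (y - yc) ** 2).sum() / total
    q12 = (image * (x - xc) * (y - yc)).sum() / total
    return (q11 - q22 + 2j * q12) / (q11 + q22)

# Elliptical Gaussian test image (hypothetical galaxy, no noise),
# elongated along x (sigma_x = 6 px, sigma_y = 4 px).
y, x = np.mgrid[0:64, 0:64].astype(float)
img = np.exp(-(((x - 32) / 6.0) ** 2 + ((y - 32) / 4.0) ** 2) / 2.0)
e = ellipticity(img)   # real part > 0: elongation along the x axis
```

Adding sky noise to `img` biases these moments, which is exactly the second-order effect the paper's correction formulae address.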
Lee, Sang Hyun, 1973-
2006-01-01
Construction projects are uncertain and complex in nature. One of the major driving forces that may account for these characteristics is iterative cycles caused by errors and changes. Errors and changes worsen project ...
SU-E-T-51: Bayesian Network Models for Radiotherapy Error Detection
Kalet, A; Phillips, M; Gennari, J [University of Washington, Seattle, WA (United States)]
2014-06-01
Purpose: To develop a probabilistic model of radiotherapy plans using Bayesian networks that will detect potential errors in radiation delivery. Methods: Semi-structured interviews with medical physicists and other domain experts were employed to generate a set of layered nodes and arcs forming a Bayesian Network (BN) which encapsulates relevant radiotherapy concepts and their associated interdependencies. Concepts in the final network were limited to those whose parameters are represented in the institutional database at a level significant enough to develop mathematical distributions. The concept-relation knowledge base was constructed using the Web Ontology Language (OWL) and translated into Hugin Expert Bayes Network files via the RHugin package in the R statistical programming language. A subset of de-identified data derived from a Mosaiq relational database representing 1937 unique prescription cases was processed and pre-screened for errors and then used by the Hugin implementation of the Expectation-Maximization (EM) algorithm to learn all parameter distributions. Individual networks were generated for each of several commonly treated anatomic regions identified by ICD-9 neoplasm categories including lung, brain, lymphoma, and female breast. Results: The resulting Bayesian networks represent a large part of the probabilistic knowledge inherent in treatment planning. By populating the networks entirely with data captured from a clinical oncology information management system over the course of several years of normal practice, we were able to create accurate probability tables with no additional time spent by experts or clinicians. These probabilistic descriptions of the treatment planning process allow one to check whether a treatment plan is within the normal scope of practice, given some initial set of clinical evidence, and thereby detect potential outliers to be flagged for further investigation.
Conclusion: The networks developed here support the use of probabilistic models in clinical chart checking for improved detection of potential errors in RT plans.
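The chart-checking idea reduces to scoring a plan's parameter combination against learned probability tables and flagging unlikely combinations. A minimal sketch with hypothetical conditional probability tables (the sites, doses, probabilities, and threshold below are all invented for illustration, not the institutional distributions described above):

```python
# Hypothetical conditional probability table P(prescribed dose | site)
# and prior P(site), standing in for the EM-learned distributions.
cpt_dose_given_site = {
    ("lung", "60Gy"): 0.55, ("lung", "45Gy"): 0.40, ("lung", "20Gy"): 0.05,
    ("brain", "60Gy"): 0.30, ("brain", "45Gy"): 0.10, ("brain", "20Gy"): 0.60,
}
p_site = {"lung": 0.6, "brain": 0.4}

def plan_probability(site, dose):
    """Joint probability P(site, dose) = P(site) * P(dose | site)."""
    return p_site[site] * cpt_dose_given_site[(site, dose)]

def is_outlier(site, dose, threshold=0.05):
    """Flag a plan whose joint probability falls below a chosen
    threshold, marking it for further human review."""
    return plan_probability(site, dose) < threshold

flagged = is_outlier("lung", "20Gy")   # a rare combination gets flagged
```

A full BN adds many more nodes (fractionation, technique, laterality, and so on) and performs proper evidence propagation, but the outlier-flagging logic is the same: low joint probability under normal practice triggers review rather than rejection.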
SU-E-T-170: Evaluation of Rotational Errors in Proton Therapy Planning of Lung Cancer
Rana, S; Zhao, L; Ramirez, E; Singh, H; Zheng, Y
2014-06-01
Purpose: To investigate the impact of rotational (roll, yaw, and pitch) errors in proton therapy planning of lung cancer. Methods: A lung cancer case treated at our center was used in this retrospective study. The original plan was generated using two proton fields (posterior-anterior and left-lateral) with XiO treatment planning system (TPS) and delivered using uniform scanning proton therapy system. First, the computed tomography (CT) set of original lung treatment plan was re-sampled for rotational (roll, yaw, and pitch) angles ranging from −5° to +5°, with an increment of 2.5°. Second, 12 new proton plans were generated in XiO using the 12 re-sampled CT datasets. The same beam conditions, isocenter, and devices were used in new treatment plans as in the original plan. All 12 new proton plans were compared with original plan for planning target volume (PTV) coverage and maximum dose to spinal cord (cord Dmax). Results: PTV coverage was reduced in all 12 new proton plans when compared to that of original plan. Specifically, PTV coverage was reduced by 0.03% to 1.22% for roll, by 0.05% to 1.14% for yaw, and by 0.10% to 3.22% for pitch errors. In comparison to original plan, the cord Dmax in new proton plans was reduced by 8.21% to 25.81% for +2.5° to +5° pitch, by 5.28% to 20.71% for +2.5° to +5° yaw, and by 5.28% to 14.47% for −2.5° to −5° roll. In contrast, cord Dmax was increased by 3.80% to 3.86% for −2.5° to −5° pitch, by 0.63% to 3.25% for −2.5° to −5° yaw, and by 3.75% to 4.54% for +2.5° to +5° roll. Conclusion: PTV coverage was reduced by up to 3.22% for rotational error of 5°. The cord Dmax could increase or decrease depending on the direction of rotational error, beam angles, and the location of lung tumor.
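The effect of such rotational errors on off-isocenter anatomy can be sketched with standard rotation matrices (one possible axis convention; the planning system's convention and the point coordinates below are assumptions):

```python
import numpy as np

def rotation_matrix(roll_deg, pitch_deg, yaw_deg):
    """Combined rotation Rz(yaw) @ Ry(pitch) @ Rx(roll) about the z, y,
    and x axes, one convention for re-sampling coordinates under a
    setup rotation error; the TPS convention may differ."""
    r, p, y = np.radians([roll_deg, pitch_deg, yaw_deg])
    rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r), np.cos(r)]])
    ry = np.array([[np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y), np.cos(y), 0],
                   [0, 0, 1]])
    return rz @ ry @ rx

# Displacement of a point 100 mm from isocenter under a 5 degree pitch
# error: rotations cause no shift at isocenter, but grow linearly with
# distance from it, which is why spinal cord dose can swing either way.
point = np.array([0.0, 0.0, 100.0])   # mm, isocenter coordinates
shift = rotation_matrix(0.0, 5.0, 0.0) @ point - point
```

A 5° rotation displaces this point by roughly 9 mm, comparable to typical planning margins, which is consistent with the coverage losses and direction-dependent cord-dose changes reported above.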
Estimating market power in homogeneous product markets using a composed error model
Orea, Luis; Steinbuks, Jevgenijs
2012-04-25
for assisting us with computation of residual demand elasticities based on PX bidding data. We also thank David Newbery, Jacob LaRiviere, Mar Reguant, the anonymous reviewer, and the participants of the 3rd International Workshop on Empirical Methods in Energy... that variation in the error term is an exponential function of an intercept term, the day-ahead forecast of total demand and its square (i.e., FQ, FQ2), that are included in the model in order to capture possible demand-size effects, and a vector of days...