Systematic Errors in measurement of b1
Wood, S A
2014-10-27T23:59:59.000Z
A class of spin observables can be obtained from the relative difference of, or asymmetry between, cross sections of different spin states of beam or target particles. Such observables have the advantage that the normalization factors needed to calculate absolute cross sections from yields often divide out or cancel to a large degree in constructing asymmetries. However, normalization factors can change with time, giving different normalization factors for different target or beam spin states and leading to systematic errors in asymmetries beyond those determined from statistics. Rapidly flipping the spin orientation, as is routinely done with polarized beams, can significantly reduce the impact of these normalization fluctuations and drifts. Target spin orientations typically require minutes to hours to change, versus fractions of a second for beams, making systematic errors for observables based on target spin flips more difficult to control. Such systematic errors from normalization drifts are discussed in the context of the proposed measurement of the deuteron b1 structure function at Jefferson Lab.
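The asymmetry construction and the normalization-drift effect described in this abstract can be illustrated with a minimal simulation. Everything below (the function names, the linear drift model, the flip periods) is an illustrative assumption, not taken from the proposal:

```python
def asymmetry(yield_plus, yield_minus):
    # Asymmetry from two spin-state yields, assuming equal live time and
    # equal normalization for both states.
    return (yield_plus - yield_minus) / (yield_plus + yield_minus)

def simulate(flip_period, n_intervals=1000, true_asym=0.01, drift=0.001):
    # Accumulate yields while the overall normalization (efficiency times
    # luminosity) drifts linearly in time. flip_period is the number of
    # time intervals between spin flips.
    y_plus = y_minus = 0.0
    for t in range(n_intervals):
        norm = 1.0 + drift * t                      # slow normalization drift
        state_plus = (t // flip_period) % 2 == 0    # current spin state
        rate = 1.0 + (true_asym if state_plus else -true_asym)
        if state_plus:
            y_plus += norm * rate
        else:
            y_minus += norm * rate
    return asymmetry(y_plus, y_minus)

fast = simulate(flip_period=1)    # flip every interval (beam-like)
slow = simulate(flip_period=500)  # flip once mid-run (target-like)
```

Flipping every interval samples both spin states under nearly identical normalization, so the drift-induced bias on the extracted asymmetry is far smaller than with a single mid-run flip.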
Quantum root-mean-square error and measurement uncertainty relations
Paul Busch; Pekka Lahti; Reinhard F Werner
2014-10-10T23:59:59.000Z
Recent years have witnessed a controversy over Heisenberg's famous error-disturbance relation. Here we resolve the conflict by way of an analysis of the possible conceptualizations of measurement error and disturbance in quantum mechanics. We discuss two approaches to adapting the classic notion of root-mean-square error to quantum measurements. One is based on the concept of a noise operator; its natural operational content is that of a mean deviation of the values of two observables measured jointly, and thus its applicability is limited to cases where such joint measurements are available. The second error measure quantifies the differences between two probability distributions obtained in separate runs of measurements and is of unrestricted applicability. We show that there are no nontrivial unconditional joint-measurement bounds for state-dependent errors in the conceptual framework discussed here, while Heisenberg-type measurement uncertainty relations for state-independent errors have been proven.
Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications
Garcia, Tanya
2012-10-19T23:59:59.000Z
…to the models considered in Tsiatis and Ma (2004), our model is less stringent because it allows an unspecified model error distribution and an unspecified covariate distribution, not just the latter. With an unspecified model error distribution, the RMM… with measurement error is a very different problem compared to the model considered in Tsiatis and Ma (2004), where the model error distribution has a known parametric form. Consequently, the semiparametric treatment here is also drastically different. Our…
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01T23:59:59.000Z
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work derived the reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
MEASUREMENT AND CORRECTION OF ULTRASONIC ANEMOMETER ERRORS
Heinemann, Detlev
…commonly show systematic errors depending on wind speed, due to inaccurate ultrasonic transducer mounting… three-dimensional wind speed time series. Results for the variance and power spectra are shown. … wind speeds with ultrasonic anemometers: the measured flow is distorted by the probe head…
Robust mixtures in the presence of measurement errors
Jianyong Sun; Ata Kaban; Somak Raychaudhury
2007-09-06T23:59:59.000Z
We develop a mixture-based approach to robust density modeling and outlier detection for experimental multivariate data that includes measurement error information. Our model is designed to infer atypical measurements that are not due to errors, aiming to retrieve potentially interesting peculiar objects. Since exact inference is not possible in this model, we develop a tree-structured variational EM solution. This compares favorably against a fully factorial approximation scheme, approaching the accuracy of a Markov-Chain-EM, while maintaining computational simplicity. We demonstrate the benefits of including measurement errors in the model, in terms of improved outlier detection rates in varying measurement uncertainty conditions. We then use this approach in detecting peculiar quasars from an astrophysical survey, given photometric measurements with errors.
Pressure Change Measurement Leak Testing Errors
Pryor, Jeff M [ORNL]; Walker, William C [ORNL]
2014-01-01T23:59:59.000Z
A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monatomic or diatomic gases; however, the same principles can be applied to polyatomic gases or to liquid flow rates, with formulas altered for those types of tests using the same methodology.
Model Error Correction for Linear Methods in PET Neuroreceptor Measurements
Renaut, Rosemary
Model Error Correction for Linear Methods in PET Neuroreceptor Measurements. Hongbin Guo (hguo1@asu.edu). Preprint submitted to NeuroImage, December 11, 2008. … A new…
Universally Valid Error-Disturbance Relations in Continuous Measurements
Atsushi Nishizawa; Yanbei Chen
2015-05-31T23:59:59.000Z
In quantum physics, measurement error and disturbance were first naively thought to be simply constrained by the Heisenberg uncertainty relation. Later, more rigorous analysis showed that the error and disturbance satisfy more subtle inequalities. Several versions of universally valid error-disturbance relations (EDR) have already been obtained and experimentally verified in regimes where naive applications of the Heisenberg uncertainty relation failed. However, these EDRs were formulated for discrete measurements. In this paper, we consider continuous measurement processes and obtain new EDR inequalities in Fourier space, in terms of the power spectra of the system and probe variables. By applying our EDRs to a linear optomechanical system, we confirm that a tradeoff relation between error and disturbance leads to the existence of an optimal strength of the disturbance in a joint measurement. Interestingly, even in this optimal case, the inequality of the new EDR is not saturated because the standard quantum limit enters the inequality twice.
Fast Error Estimates For Indirect Measurements: Applications To Pavement Engineering
Kreinovich, Vladik
Fast Error Estimates For Indirect Measurements: Applications To Pavement Engineering. Carlos … y that is difficult to measure directly (e.g., lifetime of a pavement, efficiency of an engine, etc.). To estimate y … computation time. As an example of this methodology, we give pavement lifetime estimates. This work…
Exposure Measurement Error in Time-Series Studies of Air Pollution: Concepts and Consequences
Dominici, Francesca
Exposure Measurement Error in Time-Series Studies of Air Pollution: Concepts and Consequences (11/11/99). Keywords: measurement error, air pollution, time series, exposure. … of air pollution and health. Because measurement error may have substantial implications for interpreting…
Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors
Sarkar, Abhra
2014-06-24T23:59:59.000Z
BAYESIAN SEMIPARAMETRIC DENSITY DECONVOLUTION AND REGRESSION IN THE PRESENCE OF MEASUREMENT ERRORS. A Dissertation by ABHRA SARKAR. Submitted to the Office of Graduate and Professional Studies of Texas A&M University in partial fulfillment… Copyright 2014 Abhra Sarkar. ABSTRACT: Although the literature on measurement error problems is quite extensive, solutions to even the most fundamental measurement error problems like density deconvolution and regression with errors…
A Memory Soft Error Measurement on Production Systems Xin Li Kai Shen Michael C. Huang
Shen, Kai
A Memory Soft Error Measurement on Production Systems. Xin Li, Kai Shen, Michael C. Huang. University … and dealing with these soft (or transient) errors is important for system reliability. Several earlier … for memory soft error measurement on production systems where performance impact on existing running ap…
The Invariance of Score Tests to Measurement Error By CHI-LUN CHENG
Huang, Su-Yun
…for a Box-Cox power transformation. Under specific constraints, we show that the score tests for measurement … these established results when the true model is subject to measurement errors. It is known that ignoring … variable x_i is the true value ξ_i plus some random measurement error ε_i: x_i = ξ_i + ε_i (i = 1, …, n).
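A classic consequence of the additive error model x_i = ξ_i + ε_i, useful background for this entry, is that a naive regression slope is attenuated by the reliability ratio λ = Var(ξ)/(Var(ξ) + Var(ε)). The sketch below states that standard result; the function name is invented for illustration and is not from the paper:

```python
def attenuation_factor(var_true, var_error):
    # Reliability ratio: the expected multiplicative attenuation of a
    # simple regression slope when the covariate is observed with
    # additive, independent measurement error.
    return var_true / (var_true + var_error)
```

With Var(ξ) = 3 and Var(ε) = 1, the fitted slope is expected to be only 75% of the true slope, which is why tests and estimators ignoring measurement error can be badly miscalibrated.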
Efficient Small Area Estimation in the Presence of Measurement Error in Covariates
Singh, Trijya
2012-10-19T23:59:59.000Z
…for the four estimators yi, eYiS, bYiME, bYiSIMEX when the number of small areas is 100, measurement error variance Ci = 3 and σv² = 4. k is the percentage of areas having auxiliary information measured with error. … Absolute value… Jackknife estimates of the mean squared error of the Lohr-Ybarra estimator bYiME and the SIMEX estimator bYiSIMEX when the number of small areas is 100, measurement error variance Ci = 2 and σv² = 4. k is the percentage of areas having…
Measurement Errors in Visual Servoing V. Kyrki ,1
Kragic, Danica
…feedback for closed-loop control of robot motion, termed visual servoing, has received a significant amount … robot trajectory and its uncertainty. The procedures of camera calibration have improved enormously … on the modeling of an error function and thus has a major effect on the robot's trajectory. On the other hand…
Adaptive Density Estimation in the Pile-up Model Involving Measurement Errors
Paris-Sud XI, Université de
Adaptive Density Estimation in the Pile-up Model Involving Measurement Errors Fabienne Comte, Tabea of nonparametric density estimation in the pile-up model. Adaptive nonparametric estimators are proposed for the pile-up model in its simple form as well as in the case of additional measurement errors. Furthermore
Topics in measurement error and missing data problems
Liu, Lian
2009-05-15T23:59:59.000Z
reasons. In this research, the impact of missing genotypes is investigated for high resolution combined linkage and association mapping of quantitative trait loci (QTL). We assume that the genotype data are missing completely at random (MCAR). Two... and asymptotic properties. In the genetics study, a new method is proposed to account for the missing genotype in a combined linkage and association study. We have concluded that this method does not improve power but it will provide better type I error rates...
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W. (Albuquerque, NM); Heard, Freddie E. (Albuquerque, NM); Cordaro, J. Thomas (Albuquerque, NM)
2008-06-24T23:59:59.000Z
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Mints, M.Ya.; Chinkov, V.N.
1995-09-01T23:59:59.000Z
Rational algorithms are described for measuring the harmonic coefficient in microprocessor instruments for measuring nonlinear distortions, based on digital processing of the codes of the instantaneous values of the signal under investigation, and the errors of such instruments are obtained.
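One common definition of the harmonic (total harmonic distortion) coefficient is the root-sum-square of the harmonic amplitudes divided by the fundamental amplitude. The abstract does not give the instrument's exact algorithm, so the following is only a generic sketch with an assumed function name:

```python
import math

def harmonic_coefficient(amplitudes):
    # Total harmonic distortion from the amplitudes [A1, A2, A3, ...] of
    # the fundamental and its harmonics:
    #   sqrt(A2^2 + A3^2 + ...) / A1
    fundamental, harmonics = amplitudes[0], amplitudes[1:]
    return math.sqrt(sum(a * a for a in harmonics)) / fundamental
```

In a microprocessor instrument the amplitudes themselves would be estimated from the digitized instantaneous-value codes (e.g., via a DFT), which is where the measurement errors analyzed in the paper enter.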
Reducing the influence of microphone errors on in-situ ground impedance measurements
Vormann, Matthias
Reducing the influence of microphone errors on in-situ ground impedance measurements. Roland Kruse. Keywords: Ground impedance; In-situ impedance measurement. PACS 43.58.Bh. … The acoustical… This problem is not specific to in-situ measurements but also applies to impedance tube measurements [9]. Two…
Formalism for Simulation-based Optimization of Measurement Errors in High Energy Physics
Yuehong Xie
2009-04-29T23:59:59.000Z
Minimizing errors on the physical parameters of interest should be the ultimate goal of any event selection optimization in high energy physics data analysis involving parameter determination. Quick and reliable error estimation is a crucial ingredient for realizing this goal. In this paper we derive a formalism for direct evaluation of measurement errors using the signal probability density function and large, fully simulated signal and background samples, without the need for data fitting and background modelling. We illustrate the elegance of the formalism in the case of event selection optimization for CP violation measurement in B decays. The implication of this formalism for choosing event variables for data analysis is discussed.
Effect and minimization of errors in in-situ ground impedance measurements
Vormann, Matthias
Effect and minimization of errors in in-situ ground impedance measurements. Roland Kruse, Volker … method is a procedure to measure the surface impedance of grounds in situ. In this article, the influence… Keywords: Ground impedance; In-situ impedance measurement. PACS 43.58.Bh.
SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION
Lee, Khee-Gan, E-mail: lee@astro.princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)
2012-07-10T23:59:59.000Z
Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ−1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of ⟨δγ⟩ ≈ −0.1, while the error is increased to σ_γ ≈ 0.2 compared to σ_γ ≈ 0.1 in the absence of continuum errors.
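The underlying mechanism is simple: the flux entering the PDF analysis is the observed flux divided by the estimated continuum, so a fractional error in the continuum placement rescales every normalized flux by the same factor. A minimal sketch (the function name is assumed, not from the paper):

```python
def normalized_flux(observed_flux, continuum_estimate):
    # Transmitted flux used in the flux PDF: observed flux divided by the
    # estimated continuum level. A misplaced continuum rescales every
    # value by the same multiplicative factor.
    return [f / continuum_estimate for f in observed_flux]
```

An underestimated continuum (say 0.98 where the true level is 1.0) inflates every normalized flux by about 2%, shifting the whole flux PDF and hence biasing the inferred γ, which is the effect the paper quantifies.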
Detecting bit-flip errors in a logical qubit using stabilizer measurements
D. Ristè; S. Poletto; M. -Z. Huang; A. Bruno; V. Vesterinen; O. -P. Saira; L. DiCarlo
2014-11-20T23:59:59.000Z
Quantum data is susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction (QEC) to actively protect against both. In the smallest QEC codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Experimental demonstrations of QEC to date, using nuclear magnetic resonance, trapped ions, photons, superconducting qubits, and NV centers in diamond, have circumvented stabilizers at the cost of decoding at the end of a QEC cycle. This decoding leaves the quantum information vulnerable to physical qubit errors until re-encoding, violating a basic requirement for fault tolerance. Using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. We construct these stabilizers as parallelized indirect measurements using ancillary qubits, and evidence their non-demolition character by generating three-qubit entanglement from superposition states. We demonstrate stabilizer-based quantum error detection (QED) by subjecting a logical qubit to coherent and incoherent bit-flip errors on its constituent physical qubits. While increased physical qubit coherence times and shorter QED blocks are required to actively safeguard quantum information, this demonstration is a critical step toward larger codes based on multiple parity measurements.
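The syndrome logic of the three-qubit repetition code realized in this experiment can be modeled classically for pure bit-flip errors: the two parity checks locate any single flipped qubit without revealing the logical state. This is only a classical sketch of the code's lookup logic, not the superconducting hardware; the names are illustrative:

```python
def measure_stabilizers(bits):
    # Parities measured by the Z1Z2 and Z2Z3 stabilizers of the
    # three-qubit repetition code, modeled classically for bit flips.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Syndrome lookup: which physical qubit (if any) was flipped.
SYNDROME_TABLE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct_bit_flip(bits):
    # Apply the correction indicated by the syndrome, in place.
    flipped = SYNDROME_TABLE[measure_stabilizers(bits)]
    if flipped is not None:
        bits[flipped] ^= 1
    return bits
```

Both logical codewords [0,0,0] and [1,1,1] give the trivial syndrome (0,0), which is why the parity checks can signal an error without collapsing the encoded information.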
A review of the theory of Coriolis flowmeter measurement errors due to entrained particles
Basse, Nils Plesner
…is provided in Table 1. The measurement errors due to compressibility increase with decreasing speed of sound [12]. Nomenclature: a fluid is either a liquid or a gas. A particle can be either a solid or a fluid (gas bubble or liquid droplet). To date, the published bubble theory has dealt with zero particle density combined…
JPEG Quality Transcoding using Neural Networks Trained with a Perceptual Error Measure
Lazzaro, John
JPEG Quality Transcoding using Neural Networks Trained with a Perceptual Error Measure. John Lazzaro (…@cs.berkeley.edu). Abstract: A JPEG Quality Transcoder (JQT) converts a JPEG image file that was encoded with low image quality … users direct control over the compression process, supporting trade-offs between image quality…
Tullos, Desiree
DOWNSTREAM CHANNEL CHANGES AFTER A SMALL DAM REMOVAL: USING AERIAL PHOTOS AND MEASUREMENT ERROR … to assess downstream channel changes associated with a small dam removal. The Brownsville Dam, a 2.1 m tall … downstream from the dam and in an upstream control reach, using aerial photos (1994–2008) and in the field…
A multi-site analysis of random error in tower-based measurements of carbon and energy fluxes
…Forest Service, 271 Mast Road, Durham, NH 03824 USA. Richardson et al., January 13, 2006. Abstract: Measured surface-atmosphere fluxes of energy…
Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells
Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.
2014-03-01T23:59:59.000Z
This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.
Detecting arbitrary quantum errors via stabilizer measurements on a sublattice of the surface code
A. D. Córcoles; Easwar Magesan; Srikanth J. Srinivasan; Andrew W. Cross; M. Steffen; Jay M. Gambetta; Jerry M. Chow
2014-10-23T23:59:59.000Z
To build a fault-tolerant quantum computer, it is necessary to implement a quantum error correcting code. Such codes rely on the ability to extract information about the quantum error syndrome while not destroying the quantum information encoded in the system. Stabilizer codes are attractive solutions to this problem, as they are analogous to classical linear codes, have simple and easily computed encoding networks, and allow efficient syndrome extraction. In these codes, syndrome extraction is performed via multi-qubit stabilizer measurements, which are bit and phase parity checks up to local operations. Previously, stabilizer codes have been realized in nuclei, trapped ions, and superconducting qubits. However, these implementations lack the ability to perform fault-tolerant syndrome extraction, which continues to be a challenge for all physical quantum computing systems. Here we experimentally demonstrate a key step towards this problem by using a two-by-two lattice of superconducting qubits to perform syndrome extraction and arbitrary error detection via simultaneous quantum non-demolition stabilizer measurements. This lattice represents a primitive tile for the surface code, which is a promising stabilizer code for scalable quantum computing. Furthermore, we successfully show the preservation of an entangled state in the presence of an arbitrary applied error through high-fidelity syndrome measurement. Our results bolster the promise of employing lattices of superconducting qubits for larger-scale fault-tolerant quantum computing.
Optics measurement algorithms and error analysis for the proton energy frontier
Langner, A
2015-01-01T23:59:59.000Z
Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed which, due to the improved algorithms, result in a significantly higher precision of the derived optical parameters and decreased...
Doerry, Armin W. (Albuquerque, NM); Heard, Freddie E. (Albuquerque, NM); Cordaro, J. Thomas (Albuquerque, NM)
2010-07-20T23:59:59.000Z
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Bard, D.; Chang, C.; Kahn, S. M.; Gilmore, K.; Marshall, S. [KIPAC, Stanford University, 452 Lomita Mall, Stanford, CA 94309 (United States); Kratochvil, J. M.; Huffenberger, K. M. [Department of Physics, University of Miami, Coral Gables, FL 33124 (United States); May, M. [Physics Department, Brookhaven National Laboratory, Upton, NY 11973 (United States); AlSayyad, Y.; Connolly, A.; Gibson, R. R.; Jones, L.; Krughoff, S. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Lorenz, S. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Haiman, Z.; Jernigan, J. G., E-mail: djbard@slac.stanford.edu [Department of Astronomy and Astrophysics, Columbia University, New York, NY 10027 (United States); and others
2013-09-01T23:59:59.000Z
We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST Image Simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.
Ali, Zulfiqar
2013-01-01T23:59:59.000Z
synchrotron facilities, such as the Nanometer Optical Component Measuring Machines (NOM) at Helmholtz Zentrum Berlin (HZB)/BESSY-
Joseph M. Renes; Volkher B. Scholz
2014-02-26T23:59:59.000Z
We derive new Heisenberg-type uncertainty relations for both joint measurability and the error-disturbance tradeoff for arbitrary observables of finite-dimensional systems. The relations are formulated in terms of a directly operational quantity, namely the probability of distinguishing the actual operation of a device from its hypothetical ideal, by any possible testing procedure whatsoever. Moreover, they may be directly applied in information processing settings, for example to infer that devices which can faithfully transmit information regarding one observable do not leak any information about conjugate observables to the environment. Though this statement is intuitively apparent from Heisenberg's original arguments, only more limited versions of it have previously been formalized.
Lobach, Iryna
2009-05-15T23:59:59.000Z
…environmental variables we used an additive model of the form W = X + U, where U is generated from the normal distribution with zero mean and variance 0.25. Given the following haplotype frequencies (h1, h2, h3, h4, h5, h6) = (0.25, 0.15, 0.25, 0.1, 0.1, 0.15) we… Contents: 1.2 Gene-Environment Interactions; 1.3 Prospective Analysis of Case-Control Studies; 1.4 Measurement Error in Epidemiologic Studies; 1.5 Haplotype-Based Studies…
A multi-site analysis of random error in tower-based measurements of carbon and energy fluxes
A multi-site analysis of random error in tower-based measurements of carbon and energy fluxes … 2006. Abstract: Measured surface-atmosphere fluxes of energy (sensible heat, H, and latent heat, LE) … of which include "tall tower" instrumentation), one grassland site, and one agricultural site, to conduct…
Davidson, R. L.; Earle, G. D.; Heelis, R. A. [William B. Hanson Center for Space Sciences, University of Texas at Dallas, 800 W. Campbell Road, WT15, Richardson, Texas 75080 (United States); Klenzing, J. H. [Space Weather Laboratory/Code 674, Goddard Space Flight Center, Greenbelt, Maryland 20771 (United States)
2010-08-15T23:59:59.000Z
Planar retarding potential analyzers (RPAs) have been utilized numerous times on high profile missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellite Program to measure plasma composition, temperature, density, and the velocity component perpendicular to the plane of the instrument aperture. These instruments use biased grids to approximate ideal biased planes. The grids introduce perturbations in the electric potential distribution inside the instrument and, when unaccounted for, cause errors in the measured plasma parameters. Traditionally, the grids utilized in RPAs have been made of fine wires woven into a mesh. Previous studies on the errors caused by grids in RPAs have approximated woven grids with a truly flat grid. Using a commercial ion optics software package, errors in inferred parameters caused by both woven and flat grids are examined. A flat grid geometry shows the smallest temperature and density errors, while the double-thick flat grid displays minimal errors for velocities over the temperature and velocity range used. Wire thickness along the dominant flow direction is found to be a critical design parameter in regard to errors in all three inferred plasma parameters. The results shown for each case provide valuable design guidelines for future RPA development.
Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra
J. B. Whitmore; M. T. Murphy
2014-11-18T23:59:59.000Z
We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium-argon calibration can be tracked with ~10 m/s precision over the entire optical wavelength range on scales of both echelle orders (~50–100 Å) and entire spectrograph arms (~1000–3000 Å). Using archival spectra from the past 20 years we have probed the supercalibration history of the VLT-UVES and Keck-HIRES spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically ±200 m/s per 1000 Å. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, α. The spurious deviations in α produced by the model closely match important aspects of the VLT-UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in α from quasar absorption lines.
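The scale of these distortions can be read as apparent Doppler shifts: a wavelength-scale error Δλ at wavelength λ mimics a velocity shift of c·Δλ/λ. A minimal sketch of that conversion (the function name is assumed):

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def velocity_shift(lambda_measured, lambda_true):
    # Apparent Doppler velocity implied by a wavelength calibration
    # error: dv = c * (dlambda / lambda).
    return C_LIGHT * (lambda_measured - lambda_true) / lambda_true
```

A calibration error of only 0.002 Å at 5000 Å already corresponds to roughly 120 m/s, comparable to the long-range distortions the paper reports.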
ERROR MODELS FOR LIGHT SENSORS BY STATISTICAL ANALYSIS OF RAW SENSOR MEASUREMENTS
Potkonjak, Miodrag
silicon solar cell that converts light impulses directly into electrical charges that can easily-based systems including calibration, sensor fusion and power management. We developed a system of statistical the standard procedure is to use error models to enable calibration, in a variant of our approach, we use
Doerry, Armin W. (Albuquerque, NM); Heard, Freddie E. (Albuquerque, NM); Cordaro, J. Thomas (Albuquerque, NM)
2010-08-17T23:59:59.000Z
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
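The slow-time comparison of range profiles described above can be illustrated with a toy sketch: estimate the sample shift between two range profiles by a cross-correlation peak search. This pure-Python version (all names hypothetical) handles only integer shifts of real-valued magnitude profiles; an actual SAR autofocus routine operates on complex phase-history data and converts the estimated error into frequency and phase corrections before compression.

```python
def estimate_shift(ref, obs):
    """Estimate the integer sample shift between two range profiles by
    maximizing their cross-correlation (brute-force, pure-Python sketch)."""
    n = len(ref)
    best_shift, best_corr = 0, float("-inf")
    for shift in range(-n + 1, n):
        corr = sum(ref[i] * obs[i + shift]
                   for i in range(n) if 0 <= i + shift < n)
        if corr > best_corr:
            best_shift, best_corr = shift, corr
    return best_shift

profile = [0, 1, 4, 9, 4, 1, 0, 0]
shifted = [0, 0, 0, 1, 4, 9, 4, 1]   # same scatterer, 2 samples later
assert estimate_shift(profile, shifted) == 2
```

The estimated shift, accumulated across the slow-time dimension, plays the role of the range-resolution-scale motion error that the correction step then removes.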
Automated suppression of errors in LTP-II slope measurements with x-ray optics
Ali, Zulfiqar
2012-01-01T23:59:59.000Z
synchrotron facilities, such as the Nanometer Optical Component Measuring Machines (NOM) at Helmholtz Zentrum Berlin (HZB)/BESSY-
Long, David G.
of ocean wind speed and direction. Scatterometers must be calibrated before their measurements are scienti in the long-term, thus propelling them out of the crowded valley of the common and up the inclines
Automated suppression of errors in LTP-II slope measurements with x-ray optics
Ali, Zulfiqar
2011-01-01T23:59:59.000Z
slope measurements with x-ray optics Zulfiqar Ali, Curtis L.with state-of-the-art x-ray optics. Significant suppressionscanning, metrology of x-ray optics, deflectometry Abstract
Measurement Error in Progress Monitoring Data: Comparing Methods Necessary for High-Stakes Decisions
Bruhl, Susan
2012-07-16T23:59:59.000Z
-stakes decisions. The study was conducted using extant data from 92 "low performing" third graders who were progress monitored using mathematics concept and application measures. The results for the participants in this study identified 1) the number of weeks...
Detwiler, Russell
Scott E. Pringle and Robert J. Glass Flow Visualization and Processes Laboratory, Sandia National Laboratories, Albuquerque, New Mexico Abstract. Understanding of single-phase and multiphase flow and transport light transmission techniques yield quantitative measurements of aperture, solute concentration
NOx Measurement Errors in Ammonia-Containing Exhaust | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Aravind Natarajan; Andrew R. Zentner; Nicholas Battaglia; Hy Trac
2014-09-04T23:59:59.000Z
We examine the importance of baryonic feedback effects on the matter power spectrum on small scales, and the implications for the precise measurement of neutrino masses through gravitational weak lensing. Planned large galaxy surveys such as the Large Synoptic Survey Telescope (LSST) and Euclid are expected to measure the sum of neutrino masses to extremely high precision, sufficient to detect non-zero neutrino masses even in the minimal mass normal hierarchy. We show that weak lensing of galaxies, while being a very good probe of neutrino masses, is extremely sensitive to baryonic feedback processes. We use publicly available results from the Overwhelmingly Large Simulations (OWLS) project to investigate the effects of active galactic nuclei feedback, the nature of the stellar initial mass function, and gas cooling rates, on the measured weak lensing shear power spectrum. Using the Fisher matrix formalism and priors from CMB+BAO data, we show that when one does not account for feedback, the measured neutrino mass may be substantially larger or smaller than the true mass, depending on the dominant feedback mechanism, with the mass error |\\Delta m_nu| often exceeding the mass m_nu itself. We also consider gravitational lensing of the cosmic microwave background (CMB) and show that it is not sensitive to baryonic feedback on scales l < 2000, although CMB experiments that aim for sensitivities sigma(m_nu) < 0.02 eV will need to include baryonic effects in modeling the CMB lensing potential. A combination of CMB lensing and galaxy lensing can help break the degeneracy between neutrino masses and baryonic feedback processes. We conclude that future large galaxy lensing surveys such as LSST and Euclid can only measure neutrino masses accurately if the matter power spectrum can be measured to similar accuracy.
Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))
1990-01-01T23:59:59.000Z
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
Uncertainty quantification and error analysis
Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL
2010-01-01T23:59:59.000Z
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Yan, M.; Lovelock, D.; Hunt, M.; Mechalakos, J.; Hu, Y.; Pham, H.; Jackson, A., E-mail: jacksona@mskcc.org [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States)
2013-12-15T23:59:59.000Z
Purpose: To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Methods: Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions matched those seen in the cone beam scans; cochlea doses were then recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. Results: The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or −0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1–2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39–16.8) cGy, or 10.1 (0.8–32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%–9.06%) and 10.2% (0.7%–63.6%), respectively. Conclusions: Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment.
With the assistance of radiographic imaging during setup, the standard deviation of the setup error was reduced by 31%, 42%, and 54% in the RL, AP, and SI directions, respectively, and consequently the uncertainty of the mean dose to the cochlea was reduced by more than 50%. The authors estimate that the effects of these uncertainties on the probability of hearing loss for an individual patient could be as large as 10%.
Ewers, Brent E.
- tively. Sap flux measured in stems did not lag JS measured in branches, and time and frequency domain. Introduction Stomata respond to environmental variation, regulate water loss and carbon dioxide gain, and thus biosphereatmosphere exchange of mass and energy. From porometry measure- ments, leaf conductance (gS) can
Wei, Shuangqing
for Average Power Measurements in Wireless Communication Systems Shuangqing Wei, Student Member, IEEE, and Dennis L. Goeckel, Member, IEEE Abstract--The measurement of the average received power is essential for power control and dynamic channel allocation in wireless communication systems. However, due
Beddo, M.E.; Spinka, H.; Underwood, D.G.
1992-08-14T23:59:59.000Z
Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.
Finding beam focus errors automatically
Lee, M.J.; Clearwater, S.H.; Kleban, S.D.
1987-01-01T23:59:59.000Z
An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The steps involved in finding the correction factors with COMFORT-PLUS are presented, and the procedure has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is intended as an off-line tool to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors; another is that the program is not totally automated, since the user must decide a priori where to look for errors. (LEW)
Remarks on statistical errors in equivalent widths
Klaus Vollmann; Thomas Eversberg
2006-07-03T23:59:59.000Z
Equivalent width measurements for rapid line variability in atomic spectral lines are degraded by increasing error bars with shorter exposure times. We derive an expression for the error of the line equivalent width $\\sigma(W_\\lambda)$ with respect to pure photon noise statistics and provide a correction value for previous calculations.
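For context, the photon-noise error on the equivalent width derived in this paper is commonly quoted in roughly the following form (notation and exact prefactor recalled from secondary sources and should be checked against the paper), where $\bar F$ is the mean flux in the line, $\bar F_c$ the mean continuum flux, $\Delta\lambda$ the wavelength range of the integration, and $S/N$ the continuum signal-to-noise ratio:

```latex
\sigma(W_\lambda) \;\approx\; \sqrt{1 + \frac{\bar F_c}{\bar F}}\;
\frac{\Delta\lambda - W_\lambda}{S/N}
```

The $1/(S/N)$ scaling makes explicit why shorter exposures (lower signal-to-noise) inflate the error bars on $W_\lambda$ for rapid line-variability studies.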
Estimating IMU heading error from SAR images.
Doerry, Armin Walter
2009-03-01T23:59:59.000Z
Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but is done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.
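The illumination-gradient idea can be sketched in a few lines: fit a least-squares slope to image intensity versus cross-range position, where a nonzero slope indicates a pointing error. This is a minimal illustration only (function name hypothetical); the report's methods, the conversion from intensity slope to a yaw angle, and the multi-image averaging are all omitted.

```python
def fit_gradient(samples):
    """Least-squares slope of image intensity vs. cross-range sample index.
    A well-pointed antenna gives a (near-)flat illumination profile;
    a pointing error tilts it."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

# Flat illumination: slope ~ 0.  Tilted illumination: nonzero slope.
assert abs(fit_gradient([10, 10, 10, 10])) < 1e-12
assert fit_gradient([8, 9, 10, 11]) == 1.0
```

In a feedback loop, a slope estimate like this (suitably scaled to an angle) would be supplied as a measurement update to the navigation Kalman filter.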
Error handling strategies in multiphase inverse modeling
Finsterle, S.; Zhang, Y.
2010-12-01T23:59:59.000Z
Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
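One standard way to mitigate the non-normal error structures described above is a robust estimator that down-weights outlier residuals. The sketch below is not iTOUGH2's actual implementation, only an illustration of the Huber-style weighting idea (the function name and threshold are illustrative; 1.345 is the conventional Huber tuning constant for near-normal data):

```python
def huber_weights(residuals, delta=1.345):
    """Weights for iteratively reweighted least squares: residuals within
    `delta` keep full weight, larger ones are down-weighted so a few gross
    or systematic errors cannot dominate the objective function."""
    return [1.0 if abs(r) <= delta else delta / abs(r) for r in residuals]

# A gross outlier gets a small weight; well-behaved residuals keep weight 1.
w = huber_weights([0.2, -0.8, 12.0])
assert w[0] == 1.0 and w[1] == 1.0
assert w[2] < 0.2
```

Re-solving the weighted least-squares problem with such weights, and iterating, yields parameter estimates that are far less biased by the large residuals the abstract warns about.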
Olson, Eric J.
2013-06-11T23:59:59.000Z
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
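A minimal sketch of this heat-then-compare idea, assuming a deterministic, CPU-intensive kernel whose output is compared across repeated runs (all names are hypothetical; a real implementation would target specific functional units, run far longer, and control thermal conditions):

```python
import hashlib

def stress_kernel(seed: bytes, rounds: int) -> bytes:
    """Deterministic CPU-intensive workload: iterated SHA-256 hashing.
    The chained structure means any transient bit flip during the run
    propagates into the final digest rather than being masked."""
    digest = seed
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest

def detect_hardware_error(seed: bytes, rounds: int, runs: int = 2) -> bool:
    """Run the workload several times; any disagreement between outputs
    of a deterministic kernel flags a hardware error."""
    outputs = {stress_kernel(seed, rounds) for _ in range(runs)}
    return len(outputs) > 1
```

On healthy hardware `detect_hardware_error(b"probe", 10_000)` returns `False`; the scheme only pays off when the workload is long and hot enough that an upset anywhere in the run corrupts the final, compared output.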
Clustered Error Correction of Codeword-Stabilized Quantum Codes
Yunfan Li; Ilya Dumer; Leonid P. Pryadko
2010-03-08T23:59:59.000Z
Codeword stabilized (CWS) codes are a general class of quantum codes that includes stabilizer codes and many families of non-additive codes with good parameters. For such a non-additive code correcting all t-qubit errors, we propose an algorithm that employs a single measurement to test all errors located on a given set of t qubits. Compared with exhaustive error screening, this reduces the total number of measurements required for error recovery by a factor of about 3^t.
Zhang, Yunpeng; Li, En, E-mail: lien@uestc.edu.cn; Guo, Gaofeng; Xu, Jiadi; Wang, Chao [School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 611731 (China)
2014-09-15T23:59:59.000Z
A pair of spot-focusing horn lens antennas is the key component in a free-space measurement system. The electromagnetic constitutive parameters of a planar sample are determined using transmitted and reflected electromagnetic beams. These parameters are obtained from the scattering parameters measured by the microwave network analyzer, the thickness of the sample, and the wavelength of the focused beam on the sample. Free-space techniques described in most papers treat the focused wavelength as the free-space wavelength. In fact, however, the incident wave projected by a lens into the sample approximates a Gaussian beam; thus there is an elongation of the wavelength in the focused beam, and this elongation should be taken into account in dielectric and magnetic measurements. In this paper, the elongation of the wavelength has been analyzed and measured. Measurement results show that the focused wavelength in the vicinity of the focus has an elongation of 1%–5% relative to the free-space wavelength. The elongation's influence on the measured permittivity and permeability has been investigated. Numerical analyses show that the elongation of the focused wavelength can increase the measured permeability relative to the traditionally measured value; the measured permittivity, by contrast, is affected by several parameters and may increase or decrease relative to the traditionally measured value.
Ali, Zulfiqar
2013-01-01T23:59:59.000Z
measurements with x-ray optics. Part 1: Review of LTP errors ... precise reflective X-ray optics," Nucl. Inst. and Meth. A ... measurements of x-ray optics. Part 2: Specification for
Abdelhamid Awad Aly Ahmed, Sala
2008-10-10T23:59:59.000Z
QUANTUM ERROR CONTROL CODES. A Dissertation by SALAH ABDELHAMID AWAD ALY AHMED, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY, May 2008. Major Subject: Computer Science.
Thermodynamics of error correction
Pablo Sartori; Simone Pigolotti
2015-04-24T23:59:59.000Z
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and dissipated work of the process. Its derivation is based on the second law of thermodynamics, hence its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Workshop on Quantum Error Correction
Grassl, Markus
Error Correction / Avoiding Errors: Mathematical Model; decomposition of the interaction algebra. Quantum Error Correction / Designed Hamiltonians. Main idea: "perturb the system to make it more stable" · fast (local) control operations = average Hamiltonian with more symmetry (cf. techniques from NMR)
Kreinovich, Vladik
, when we need the signal to go only in one direction and do not want to waste energy on the broad beams of light of every element of this matrix; to compute the total energy I of the laser beam by adding ... ameter consists of placing a matrix of photoelements on its way, measuring the energy (power) in each
Dynamic Prediction of Concurrency Errors
Sadowski, Caitlin
2012-01-01T23:59:59.000Z
Quantum Error Correcting Subsystem Codes From Two Classical Linear Codes
Dave Bacon; Andrea Casaccino
2006-10-17T23:59:59.000Z
The essential insight of quantum error correction was that quantum information can be protected by suitably encoding this quantum information across multiple independently erring quantum systems. Recently it was realized that, since the most general method for encoding quantum information is to encode it into a subsystem, there exists a novel form of quantum error correction beyond the traditional quantum error correcting subspace codes. These new quantum error correcting subsystem codes differ from subspace codes in that their error correction routines can be considerably simpler than those of related subspace codes. Here we present a class of quantum error correcting subsystem codes constructed from two classical linear codes. These codes are the subsystem versions of the quantum error correcting subspace codes which are generalizations of Shor's original quantum error correcting subspace codes. For every Shor-type code, the codes we present give a considerable savings in the number of stabilizer measurements needed in their error recovery routines.
Sandford, II, Maxwell T. (Los Alamos, NM); Handel, Theodore G. (Los Alamos, NM); Ettinger, J. Mark (Los Alamos, NM)
1999-01-01T23:59:59.000Z
A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
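The two key ingredients described above, replacing low-order bits of host data and permuting the order in which host values are processed, can be illustrated with a much-simplified sketch. This is not the patented modular-error method itself: the function names are hypothetical, and a PRNG-keyed shuffle stands in for the patent's keyed permutation.

```python
import random

def embed(host, bits, key):
    """Embed auxiliary bits into the low-order bit of host values,
    visiting the host in a key-dependent permuted order."""
    order = list(range(len(host)))
    random.Random(key).shuffle(order)          # keyed permutation
    out = list(host)
    for bit, idx in zip(bits, order):
        out[idx] = (out[idx] & ~1) | bit       # replace the LSB
    return out

def extract(host, nbits, key):
    """Recover the auxiliary bits; only the holder of `key` reproduces
    the permutation and hence the correct bit order."""
    order = list(range(len(host)))
    random.Random(key).shuffle(order)
    return [host[idx] & 1 for idx in order[:nbits]]

pixels = [200, 13, 255, 97, 64, 128, 31, 7]
payload = [1, 0, 1, 1]
stego = embed(pixels, payload, key=42)
assert extract(stego, 4, key=42) == payload
```

Because only the LSB of each visited value changes, the perceptible content of the host is essentially preserved, which is the property the modular-error scheme then improves on by reducing the introduced error further.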
Approaches to Quantum Error Correction
Julia Kempe
2006-12-21T23:59:59.000Z
The purpose of this little survey is to give a simple description of the main approaches to quantum error correction and quantum fault-tolerance. Our goal is to convey the necessary intuitions both for the problems and their solutions in this area. After characterising quantum errors we present several error-correction schemes and outline the elements of a full fledged fault-tolerant computation, which works error-free even though all of its components can be faulty. We also mention alternative approaches to error-correction, so called error-avoiding or decoherence-free schemes. Technical details and generalisations are kept to a minimum.
STATISTICAL MODEL OF SYSTEMATIC ERRORS: LINEAR ERROR MODEL
Rudnyi, Evgenii B.
to apply. The algorithm to maximize a likelihood function in the case of a non-linear physico- ... the same variances of errors. 3.1. One-way classification. 3.2. Linear regression. 4. Real case (vaporization ...)
Soft Error Modeling and Protection for Sequential Elements
Hossein Asadi; Mehdi B. Tahoori
on system-level soft error rate. The number of clock cycles required for an error in a bistable to be propagated to system outputs is used to measure the vulnerability of bistables to soft errors. 1 Introduction: soft errors become the main reliability concern during the lifetime operation of digital systems. Soft
Unequal Error Protection Turbo Codes
Henkel, Werner
Unequal Error Protection Turbo Codes. Diploma Thesis, Neele von Deetzen, Arbeitsbereich Nachrichtentechnik, School of Engineering and Science, Bremen, February 28th, 2005. Convolutional Codes / Turbo Codes: 3.1 Structure
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15T23:59:59.000Z
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
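The recovery scheme can be caricatured in a few lines of Python: a mirrored copy plus per-register parity, with reads repaired from the mirror on a parity mismatch. This is only a behavioral sketch under stated assumptions (the patent's design detects errors in hardware and inserts a recovery instruction into the arithmetic pipeline, which is not modeled here; all names are hypothetical):

```python
class MirroredRegisterFile:
    """Behavioral sketch: primary register file, mirror copy, and
    per-register parity; a parity mismatch on read is treated as a
    soft error and repaired from the mirror."""

    def __init__(self, nregs: int):
        self.primary = [0] * nregs
        self.mirror = [0] * nregs
        self.parity = [0] * nregs

    @staticmethod
    def _parity(value: int) -> int:
        return bin(value).count("1") & 1

    def write(self, reg: int, value: int) -> None:
        self.primary[reg] = value
        self.mirror[reg] = value
        self.parity[reg] = self._parity(value)

    def read(self, reg: int) -> int:
        value = self.primary[reg]
        if self._parity(value) != self.parity[reg]:  # soft error detected
            value = self.mirror[reg]                 # recover from mirror
            self.primary[reg] = value                # scrub the bad copy
        return value

rf = MirroredRegisterFile(4)
rf.write(2, 0b1011)
rf.primary[2] ^= 0b0100      # inject a single-bit upset
assert rf.read(2) == 0b1011  # detected via parity, repaired from mirror
```

Note that single-bit parity detects only odd numbers of flipped bits; the mirror supplies the known-good value once detection fires, which is the division of labor the abstract describes.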
Identification of toroidal field errors in a modified betatron accelerator
Loschialpo, P. (Beam Physics Branch, Plasma Physics Division, Naval Research Laboratory, Washington, DC 20375 (United States)); Marsh, S.J. (SFA Inc., Landover, Maryland 20785 (United States)); Len, L.K.; Smith, T. (FM Technologies Inc., 10529-B Braddock Road, Fairfax, Virginia 22032 (United States)); Kapetanakos, C.A. (Beam Physics Branch, Plasma Physics Division, Naval Research Laboratory, Washington, DC 20375 (United States))
1993-06-01T23:59:59.000Z
A newly developed probe, having a 0.05% resolution, has been used to detect errors in the toroidal magnetic field of the NRL modified betatron accelerator. Measurements indicate that the radial field components (errors) are 0.1%--1% of the applied toroidal field. Such errors, in the typically 5 kG toroidal field, can excite resonances which drive the beam to the wall. Two sources of detected field errors are discussed. The first is due to the discrete nature of the 12 single turn coils which generate the toroidal field. Both measurements and computer calculations indicate that its amplitude varies from 0% to 0.2% as a function of radius. Displacement of the outer leg of one of the toroidal field coils by a few millimeters has a significant effect on the amplitude of this field error. Because of the uniform toroidal periodicity of these coils, this error is a good suspect for causing the excitation of the damaging l = 12 resonance seen in our experiments. The other source of field error is due to the current feed gaps in the vertical magnetic field coils. A magnetic field is induced inside the vertical field coils' conductor in the opposite direction of the applied toroidal field. Fringe fields at the gaps lead to additional field errors which have been measured as large as 1.0%. This source of field error, which exists at five toroidal locations around the modified betatron, can excite several integer resonances, including the l = 12 mode.
Franklin Trouble Shooting and Error Messages
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Trouble Shooting and Error Messages. Example entry: Message or Symptom: job hit wallclock time limit; Fault: user or system; Recommendation: Submit...
A two reservoir model of quantum error correction
James P. Clemens; Julio Gea-Banacloche
2005-08-22T23:59:59.000Z
We consider a two reservoir model of quantum error correction with a hot bath causing errors in the qubits and a cold bath cooling the ancilla qubits to a fiducial state. We consider error correction protocols both with and without measurement of the ancilla state. The error correction acts as a kind of refrigeration process to maintain the data qubits in a low entropy state by periodically moving the entropy to the ancilla qubits and then to the cold reservoir. We quantify the performance of the error correction as a function of the reservoir temperatures and cooling rate by means of the fidelity and the residual entropy of the data qubits. We also make a comparison with the continuous quantum error correction model of Sarovar and Milburn [Phys. Rev. A 72 012306].
Nested Quantum Error Correction Codes
Zhuo Wang; Kai Sun; Heng Fan; Vlatko Vedral
2009-09-28T23:59:59.000Z
The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available for constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short quantum codes with certain properties. Our method works for codes of all lengths and distances, and is quite efficient for constructing optimal or near-optimal codes. The two main known methods for constructing new codes from old codes in quantum error-correction theory, concatenation and pasting, can be understood in the framework of nested quantum error correction codes.
DATA and ERROR ANALYSIS
Mukasyan, Alexander
Performing the experiment and collecting data... Data analysis should NOT be delayed until all of the data have been collected. This will help one avoid the problem of spending an entire class collecting bad data because of a mistake.
Sensitivity of OFDM Systems to Synchronization Errors and Spatial Diversity
Zhou, Yi
2012-02-14T23:59:59.000Z
...jitter cause inter-carrier interference. The overall system performance in terms of symbol error rate is limited by the inter-carrier interference. For reliable information reception, compensatory measures must be taken. The second part...
Diagnosing multiplicative error by lensing magnification of type Ia supernovae
Zhang, Pengjie
2015-01-01T23:59:59.000Z
Weak lensing causes spatially coherent fluctuations in the flux of type Ia supernovae (SNe Ia). This lensing magnification allows for a weak lensing measurement independent of cosmic shear. It is free of the shape measurement errors associated with cosmic shear and can therefore be used to diagnose and calibrate multiplicative error. Although this lensing magnification is difficult to measure accurately in auto-correlation, its cross-correlation with cosmic shear and the galaxy distribution in overlapping areas can be measured to significantly higher accuracy. Therefore these cross-correlations can put useful constraints on multiplicative error, and the obtained constraint is free of cosmic variance in the weak lensing field. We present two methods implementing this idea and estimate their performances. We find that, with $\\sim 1$ million SNe Ia that can be achieved by the proposed D2k survey with the LSST telescope (Zhan et al. 2008), a multiplicative error of $\\sim 0.5\\%$ for source galaxies at $z_s\\sim 1$ can be detected and la...
Static Detection of Disassembly Errors
Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K.
2009-10-13T23:59:59.000Z
Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.
Dynamic Prediction of Concurrency Errors
Sadowski, Caitlin
2012-01-01T23:59:59.000Z
...errors in systems code using SMT solvers. In Computer Aided... data race witnesses by an SMT-based analysis. In NASA Formal... scalability relies on a modern SMT solver and an efficient...
Unequal error protection of subband coded bits
Devalla, Badarinath
1994-01-01T23:59:59.000Z
Source-coded data can be separated into different classes based on their susceptibility to channel errors. Errors in the important bits cause greater distortion in the reconstructed signal. This thesis presents an Unequal Error Protection scheme...
Two-Layer Error Control Codes Combining Rectangular and Hamming Product Codes for Cache Error
Zhang, Meilin
We propose a novel two-layer error control code, combining error detection capability of rectangular codes and error correction capability of Hamming product codes in an efficient way, in order to increase cache error ...
Using doppler radar images to estimate aircraft navigational heading error
Doerry, Armin W. (Albuquerque, NM); Jordan, Jay D. (Albuquerque, NM); Kim, Theodore J. (Albuquerque, NM)
2012-07-03T23:59:59.000Z
A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.
Harmonic Analysis Errors in Calculating Dipole,
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
to reduce the harmonic field calculation errors. A conformal transformation of a multipole magnet into a dipole reduces these errors. Dipole Magnet Calculations A triangular...
Distributed Error Confinement Extended Abstract
Patt-Shamir, Boaz
...These algorithms can serve as building blocks in more general reactive systems. Previous results in exploring locality in reactive systems were not error-confined, and relied on the assumption (not used in current...) that seems inherent for voting in reactive networks; its analysis leads to an interesting combinatorial...
Reducing Collective Quantum State Rotation Errors with Reversible Dephasing
Kevin C. Cox; Matthew A. Norcia; Joshua M. Weiner; Justin G. Bohnet; James K. Thompson
2014-07-16T23:59:59.000Z
We demonstrate that reversible dephasing via inhomogeneous broadening can greatly reduce collective quantum state rotation errors, and observe the suppression of rotation errors by more than 21 dB in the context of collective population measurements of the spin states of an ensemble of $2.1 \\times 10^5$ laser cooled and trapped $^{87}$Rb atoms. The large reduction in rotation noise enables direct resolution of spin state populations 13(1) dB below the fundamental quantum projection noise limit. Further, the spin state measurement projects the system into an entangled state with 9.5(5) dB of directly observed spectroscopic enhancement (squeezing) relative to the standard quantum limit, whereas no enhancement would have been obtained without the suppression of rotation errors.
Time reversal in thermoacoustic tomography - an error estimate
Hristova, Yulia
2008-01-01T23:59:59.000Z
The time reversal method in thermoacoustic tomography is used for approximating the initial pressure inside a biological object using measurements of the pressure wave made outside the object. This article presents error estimates for the time reversal method in the cases of variable, non-trapping sound speeds.
Stereoscopic Light Stripe Scanning: Interference Rejection, Error Minimization and Calibration
This paper addresses the problem of rejecting interference due to secondary specular reflections, cross structure, acquisition delay, lack of error recovery, and incorrect modelling of measurement noise. ...secondary reflections, edges and textures may have a stripe-like appearance, and cross-talk can...
Output error identification of hydrogenerator conduit dynamics
Vogt, M.A.; Wozniak, L. (Illinois Univ., Urbana, IL (USA)); Whittemore, T.R. (Bureau of Reclamation, Denver, CO (USA))
1989-09-01T23:59:59.000Z
Two output error model reference adaptive identifiers are considered for estimating the parameters in a reduced order gate position to pressure model for the hydrogenerator. This information may later be useful in an adaptive controller. Gradient and sensitivity functions identifiers are discussed for the hydroelectric application and connections are made between their structural differences and relative performance. Simulations are presented to support the conclusion that the latter algorithm is more robust, having better disturbance rejection and less plant model mismatch sensitivity. For identification from recorded plant data from step gate inputs, the other algorithm even fails to converge. A method for checking the estimated parameters is developed by relating the coefficients in the reduced order model to head, an externally measurable parameter.
Error field and magnetic diagnostic modeling for W7-X
Lazerson, Sam A. [PPPL; Gates, David A. [PPPL; NEILSON, GEORGE H. [PPPL; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.
2014-07-01T23:59:59.000Z
The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady state (30 minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign as bootstrap-current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21T23:59:59.000Z
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic: logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
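The scheme described above can be illustrated with a toy sketch. The two-unknown ray system, function names, and iteration counts below are our own illustration, not the patent's: each ray contributes one linear equation, the approximate error is the sum of squared residuals over a random subset of rays, and the reconstruction is a conjugate-gradient solve of the normal equations.

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def subset_error(A, b, x, idx):
    """Approximate error: squared residuals summed over a subset of rays."""
    return sum((dot(A[i], x) - b[i]) ** 2 for i in idx)

def cg_normal_eq(A, b, iters=25):
    """Solve min_x ||A x - b||^2 by conjugate gradient on A^T A x = A^T b."""
    n = len(A[0])
    AT = [list(col) for col in zip(*A)]
    def N(v):  # v -> A^T (A v), applied row-by-row
        Av = [dot(row, v) for row in A]
        return [dot(col, Av) for col in AT]
    x = [0.0] * n
    r = [dot(col, b) for col in AT]  # residual of the normal equations
    p = r[:]
    for _ in range(iters):
        rr = dot(r, r)
        if rr < 1e-30:
            break
        Np = N(p)
        alpha = rr / dot(p, Np)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * npi for ri, npi in zip(r, Np)]
        beta = dot(r, r) / rr  # Fletcher-Reeves update
        p = [ri + beta * pi for ri, pi in zip(r, p)]
    return x

# Three "rays" through two unknowns; the consistent solution is (1, 2).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = cg_normal_eq(A, b)
approx = subset_error(A, b, x, random.sample(range(3), 2))
```

Because the subset residuals are a subset of nonnegative terms, the approximate error never exceeds the full error, which is what makes it a cheap stand-in during the line search.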
Evaluating specific error characteristics of microwave-derived cloud liquid water products
Christopher, Sundar A.
Evaluating specific error characteristics of microwave-derived cloud liquid water products (Thomas J. ...). The approach evaluates cloud LWP products globally using concurrent data from visible/infrared satellite sensors... errors in microwave satellite measurements are isolated using coincident visible/infrared satellite data...
Henry L. Haselgrove; Peter P. Rohde
2007-07-03T23:59:59.000Z
In a recent study [Rohde et al., quant-ph/0603130 (2006)] of several quantum error correcting protocols designed for tolerance against qubit loss, it was shown that these protocols have the undesirable effect of magnifying the effects of depolarization noise. This raises the question of which general properties of quantum error-correcting codes might explain such an apparent trade-off between tolerance to located and unlocated error types. We extend the counting argument behind the well-known quantum Hamming bound to derive a bound on the weights of combinations of located and unlocated errors which are correctable by nondegenerate quantum codes. Numerical results show that the bound gives an excellent prediction of which combinations of unlocated and located errors can be corrected with high probability by certain large degenerate codes. The numerical results are explained partly by showing that the generalized bound, like the original, is closely connected to the information-theoretic quantity known as the quantum coherent information. However, we also show that as a measure of the exact performance of quantum codes, our generalized Hamming bound is provably far from tight.
Flux recovery and a posteriori error estimators
2010-05-20T23:59:59.000Z
Reliability and the local efficiency bounds for this estimator are established provided that the ... For simple model problems, the energy norm of the true error is equal.
Original Article Error Bounds and Metric Subregularity
2014-06-18T23:59:59.000Z
theory of error bounds of extended real-valued functions. Another objective is to ... Another observation is that neighbourhood V in the original definition of metric.
Wind Power Forecasting Error Distributions over Multiple Timescales (Presentation)
Hodge, B. M.; Milligan, M.
2011-07-01T23:59:59.000Z
This presentation presents some statistical analysis of wind power forecast errors and error distributions, with examples using ERCOT data.
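As a hedged illustration of the kind of multi-timescale error analysis the presentation describes (synthetic numbers below, not ERCOT data), hourly forecast errors can be aggregated over non-overlapping windows and each resulting distribution summarized:

```python
import statistics

def window_mean_errors(actual, forecast, window):
    """Mean forecast error over each non-overlapping window of hours."""
    errs = [a - f for a, f in zip(actual, forecast)]
    return [statistics.mean(errs[i:i + window])
            for i in range(0, len(errs) - window + 1, window)]

def summarize(errs):
    """Simple summary statistics of an error distribution."""
    return {"mean": statistics.mean(errs),
            "stdev": statistics.pstdev(errs),
            "min": min(errs),
            "max": max(errs)}

# Synthetic 48-hour example: actuals alternate around a ramping forecast.
forecast = [10.0 + 0.5 * h for h in range(48)]
actual = [f + ((-1) ** h) * 2.0 + 0.1 for f, h in zip(forecast, range(48))]
hourly = window_mean_errors(actual, forecast, 1)
four_hourly = window_mean_errors(actual, forecast, 4)
```

Averaging over longer windows preserves the mean error but shrinks its spread, which is one reason forecast error distributions look different across timescales.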
Error Mining on Dependency Trees Claire Gardent
Paris-Sud XI, Université de
Error Mining on Dependency Trees. Claire Gardent, CNRS, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54600, France; shashi.narayan@loria.fr. Abstract: In recent years, error mining approaches were... propose an algorithm for mining trees and apply it to detect the most likely sources of generation...
SEU induced errors observed in microprocessor systems
Asenek, V.; Underwood, C.; Oldfield, M. [Univ. of Surrey, Guildford (United Kingdom). Surrey Space Centre]; Velazco, R.; Rezgui, S.; Cheynet, P. [TIMA Lab., Grenoble (France)]; Ecoffet, R. [Centre National d`Etudes Spatiales, Toulouse (France)]
1998-12-01T23:59:59.000Z
In this paper, the authors present software tools for predicting the rate and nature of observable SEU induced errors in microprocessor systems. These tools are built around a commercial microprocessor simulator and are used to analyze real satellite application systems. Results obtained from simulating the nature of SEU induced errors are shown to correlate with ground-based radiation test data.
Stabilizer Formalism for Operator Quantum Error Correction
Poulin, D
2005-01-01T23:59:59.000Z
Operator quantum error correction is a recently developed theory that provides a generalized framework for active error correction and passive error avoiding schemes. In this paper, we describe these codes in the language of the stabilizer formalism of standard quantum error correction theory. This is achieved by adding a "gauge" group to the standard stabilizer definition of a code. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 3 of its 8 stabilizer generators, leading to a simpler decoding procedure without affecting its essential properties. This opens the path to possible improvement of the error threshold of fault tolerant quantum computing. We also derive a modified Hamming bound that applies to all stabilizer codes, including degenerate ones.
Stabilizer Formalism for Operator Quantum Error Correction
David Poulin
2006-06-14T23:59:59.000Z
Operator quantum error correction is a recently developed theory that provides a generalized framework for active error correction and passive error avoiding schemes. In this paper, we describe these codes in the stabilizer formalism of standard quantum error correction theory. This is achieved by adding a "gauge" group to the standard stabilizer definition of a code that defines an equivalence class between encoded states. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 4 of its 8 stabilizer generators, leading to a simpler decoding procedure and a wider class of logical operations without affecting its essential properties. This opens the path to possible improvements of the error threshold of fault-tolerant quantum computing.
Prediction Error and Event Boundaries
Zacks, Jeffrey M.
A computational model of event segmentation from perceptual prediction. Jeremy R. Reynolds, Jeffrey M. Zacks, and Todd S. Braver, Washington University. Manuscript... People tend...
Verification of unfold error estimates in the unfold operator code
Fehl, D.L.; Biggs, F. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)] [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)
1997-01-01T23:59:59.000Z
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
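The comparison described above, an analytic error estimate checked against Monte Carlo runs with Gaussian deviates, can be sketched for a deliberately trivial one-channel "unfold" (a single response value; the numbers are ours, not the UFO code's):

```python
import random
import statistics

def unfold(datum, response):
    """One-channel 'unfold': recover the source amplitude from one datum."""
    return datum / response

# Setup: true amplitude 10, response 2, 5%-of-datum Gaussian imprecision.
true_amp, response = 10.0, 2.0
datum = true_amp * response
sigma_d = 0.05 * datum

# Built-in (analytic) estimate: linear propagation through the unfold.
analytic_sigma = sigma_d / response

# Monte Carlo estimate: re-unfold many noisy pseudo-datasets.
random.seed(12345)
samples = [unfold(random.gauss(datum, sigma_d), response) for _ in range(1000)]
mc_sigma = statistics.pstdev(samples)
```

For this linear problem the two estimates agree up to sampling noise; the interest of the Monte Carlo route, as the abstract notes, is that it still works when the error-matrix method does not apply.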
Systematic Errors in Future Weak Lensing Surveys: Requirements and Prospects for Self-Calibration
Dragan Huterer; Masahiro Takada; Gary Bernstein; Bhuvnesh Jain
2005-06-02T23:59:59.000Z
We study the impact of systematic errors on planned weak lensing surveys and compute the requirements on their contributions so that they are not a dominant source of the cosmological parameter error budget. The generic types of error we consider are multiplicative and additive errors in measurements of shear, as well as photometric redshift errors. In general, more powerful surveys have stronger systematic requirements. For example, for a SNAP-type survey the multiplicative error in shear needs to be smaller than 1%(fsky/0.025)^{-1/2} of the mean shear in any given redshift bin, while the centroids of photometric redshift bins need to be known to better than 0.003(fsky/0.025)^{-1/2}. With about a factor of two degradation in cosmological parameter errors, future surveys can enter a self-calibration regime, where the mean systematic biases are self-consistently determined from the survey and only higher-order moments of the systematics contribute. Interestingly, once the power spectrum measurements are combined with the bispectrum, the self-calibration regime in the variation of the equation of state of dark energy w_a is attained with only a 20-30% error degradation.
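The generic multiplicative/additive shear error model used in forecasts like the one above is simple to state; the toy shear values and bias numbers below are our own illustration:

```python
def observed_shear(gamma_true, m, c):
    """Generic shear systematics model: gamma_obs = (1 + m) * gamma_true + c."""
    return (1.0 + m) * gamma_true + c

# A purely multiplicative bias m rescales any quadratic (power-spectrum-like)
# statistic of the shear field by (1 + m)^2, i.e. by roughly 2m for small m.
gammas = [0.01, -0.02, 0.015, 0.005]
m = 0.01  # a 1% multiplicative bias, the order of the requirement quoted above
biased = [observed_shear(g, m, 0.0) for g in gammas]
ratio = sum(g * g for g in biased) / sum(g * g for g in gammas)
```

This is why a sub-percent requirement on m translates into a roughly two-times-larger fractional bias on power-spectrum amplitudes.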
Beware of These Errors when Measuring Intake Rates in Waders
John D. Goss-Custard, Ralph T. Clarke, Selwyn Mcgrorty, Rajarathinavelu Nagarajan, Humphrey P. Sitters, Andy D. West ...
Measurement Errors and Outliers in Seasonal Unit Root Testing
Haldrup, Niels Prof.; Montanes, Antonio; Sansó, Andreu
2000-01-01T23:59:59.000Z
Errors Associated with Sampling and Measurement of Solids
Clark, Shirley E.
Harrisburg; Middletown, PA, USA; University of Alabama, Tuscaloosa, AL, USA.
Measure of Diffusion Model Error for Thermal Radiation Transport
Kumar, Akansha
2013-04-19T23:59:59.000Z
Equations (7.34a) and (7.34b) give the conservation equations for the left and right halves of cell $i$, balancing streaming and removal of the half-cell angular fluxes $\psi_{i,L,g}$ and $\psi_{i,R,g}$ against the scattering source; Eq. (7.35a) gives the quadrature ordinates $\mu_m = \pm 1/\sqrt{3}$.
Optimal measurement strategies for effective suppression of drift errors
Yashchuk, Valeriy V.
2009-01-01T23:59:59.000Z
"Device for X-ray Optics at BESSY," Proc. of AIP 705, 847... slope trace obtained with the BESSY NOM (courtesy of Frank...)... measuring instrument, the BESSY NOM [16], proves the accuracy...
Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling
Louisiana State University; Balman, Mehmet; Kosar, Tevfik
2010-10-27T23:59:59.000Z
Data transfer in distributed environments is prone to frequent failures resulting from back-end system level problems, like connectivity failures which are technically untraceable by users. Error messages are not logged efficiently, and sometimes are not relevant/useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.
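A minimal sketch of the rule-based error classification such a framework might perform; the categories and regular expressions below are our own illustration, not the paper's actual taxonomy:

```python
import re

# Ordered rules: the first matching pattern wins.
RULES = [
    (re.compile(r"connection (refused|reset|timed out)", re.I), "network"),
    (re.compile(r"no such file|permission denied", re.I), "filesystem"),
    (re.compile(r"quota exceeded|no space left", re.I), "storage"),
]

def classify(message):
    """Map a raw transfer-error message to a coarse failure class."""
    for pattern, category in RULES:
        if pattern.search(message):
            return category
    return "unknown"  # candidates for human triage and new rules
```

A scheduler could use the coarse class to decide between retrying (transient network faults) and failing fast (permission or quota problems).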
Saeki, Hiroshi, E-mail: saeki@spring8.or.jp; Magome, Tamotsu, E-mail: saeki@spring8.or.jp [Japan Synchrotron Radiation Research Institute, SPring-8, Kohto 1-1-1, Sayo, Hyogo 679-5198 (Japan)
2014-10-06T23:59:59.000Z
To compensate for pressure-measurement errors caused by a synchrotron radiation environment, a precise method using a hot-cathode-ionization-gauge head with a correcting electrode was developed and tested in a simulation experiment with excess electrons in the SPring-8 storage ring. This precise method to improve the measurement accuracy can correctly reduce the pressure-measurement errors caused by electrons originating from the external environment, and from the primary gauge filament influenced by spatial conditions of the installed vacuum-gauge head. As the result of the simulation experiment to confirm the performance reducing the errors caused by the external environment, the pressure-measurement error using this method was less than several percent in the pressure range from 10^{-5} Pa to 10^{-8} Pa. After the experiment, to confirm the performance reducing the error caused by spatial conditions, an additional experiment was carried out using a sleeve and showed that the improved function was available.
Quantum error-correcting codes and devices
Gottesman, Daniel (Los Alamos, NM)
2000-10-03T23:59:59.000Z
A method of forming quantum error-correcting codes by first forming a stabilizer for a Hilbert space. A quantum information processing device can be formed to implement such quantum codes.
Organizational Errors: Directions for Future Research
Carroll, John Stephen
The goal of this chapter is to promote research about organizational errors—i.e., the actions of multiple organizational participants that deviate from organizationally specified rules and can potentially result in adverse ...
Quantum Error Correction for Quantum Memories
Barbara M. Terhal
2015-01-20T23:59:59.000Z
Active quantum error correction using qubit stabilizer codes has emerged as a promising, but experimentally challenging, engineering program for building a universal quantum computer. In this review we consider the formalism of qubit stabilizer and subsystem stabilizer codes and their possible use in protecting quantum information in a quantum memory. We review the theory of fault-tolerance and quantum error-correction, discuss examples of various codes and code constructions, the general quantum error correction conditions, the noise threshold, the special role played by Clifford gates and the route towards fault-tolerant universal quantum computation. The second part of the review is focused on providing an overview of quantum error correction using two-dimensional (topological) codes, in particular the surface code architecture. We discuss the complexity of decoding and the notion of passive or self-correcting quantum memories. The review does not focus on a particular technology but discusses topics that will be relevant for various quantum technologies.
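The smallest example of the stabilizer formalism the review covers, the 3-qubit bit-flip repetition code, can be simulated classically: the stabilizers Z1Z2 and Z2Z3 become parity checks whose two-bit syndrome locates a single flip. A minimal sketch (our toy simulation, not taken from the review):

```python
def syndrome(bits):
    """Parity checks corresponding to the stabilizers Z1Z2 and Z2Z3."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Decode the syndrome and undo at most one bit flip."""
    # Each single-flip location produces a distinct nonzero syndrome.
    location = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    out = list(bits)
    if location is not None:
        out[location] ^= 1
    return tuple(out)
```

The same measure-syndrome-then-correct loop, with far richer check sets and decoders, is what surface-code architectures scale up.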
Probabilistic growth of large entangled states with low error accumulation
Yuichiro Matsuzaki; Simon C Benjamin; Joseph Fitzsimons
2009-08-03T23:59:59.000Z
The creation of complex entangled states, resources that enable quantum computation, can be achieved via simple 'probabilistic' operations which are individually likely to fail. However, typical proposals exploiting this idea carry a severe overhead in terms of the accumulation of errors. Here we describe a method that can rapidly generate large entangled states with an error accumulation that depends only logarithmically on the failure probability. We find that the approach may be practical for success rates in the sub-10% range, while ultimately becoming unfeasible at lower rates. The assumptions that we make, including parallelism and high connectivity, are appropriate for real systems including measurement-induced entanglement. This result therefore shows the feasibility for real devices based on such an approach.
Parameters and error of a theoretical model
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01T23:59:59.000Z
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs.
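The core of the prescription can be sketched in a few lines: if theory-minus-experiment residuals are modelled as Gaussian with an unknown width, maximizing the likelihood in that width gives the rms residual as the model error. This is a hedged simplification with our own toy numbers; it ignores experimental uncertainties, which the paper treats properly.

```python
import math

def ml_model_error(theory, experiment):
    """Maximum-likelihood model error under a Gaussian residual model:
    the rms of the theory-minus-experiment residuals."""
    residuals = [t - e for t, e in zip(theory, experiment)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

sigma = ml_model_error([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])
```

The resulting sigma quantifies how far the model, at its best-fit parameters, still sits from the data.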
Evaluating operating system vulnerability to memory errors.
Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Mueller, Frank (North Carolina State University); Fiala, David (North Carolina State University); Brightwell, Ronald Brian
2012-05-01T23:59:59.000Z
Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.
The Error-Pattern-Correcting Turbo Equalizer
Alhussien, Hakim
2010-01-01T23:59:59.000Z
The error-pattern correcting code (EPCC) is incorporated in the design of a turbo equalizer (TE) with the aim of correcting dominant error events of the inter-symbol interference (ISI) channel at the output of its matching Viterbi detector. By targeting the low Hamming-weight interleaved errors of the outer convolutional code, which are responsible for low Euclidean-weight errors in the Viterbi trellis, the turbo equalizer with an error-pattern correcting code (TE-EPCC) exhibits a much lower bit-error rate (BER) floor compared to the conventional non-precoded TE, especially for high rate applications. A maximum-likelihood upper bound is developed on the BER floor of the TE-EPCC for a generalized two-tap ISI channel, in order to study the TE-EPCC's signal-to-noise ratio (SNR) gain for various channel conditions and design parameters. In addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is compared to demonstrate the present TE's superiority for short interleaver lengths and high coding rates.
A systems approach to reducing utility billing errors
Ogura, Nori
2013-01-01T23:59:59.000Z
Many methods for analyzing the possibility of errors are practiced by organizations who are concerned about safety and error prevention. However, in situations where the error occurrence is random and difficult to track, ...
Error Detection and Recovery for Robot Motion Planning with Uncertainty
Donald, Bruce Randall
1987-07-01T23:59:59.000Z
Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and uncertainty in the geometry of the environment. The last, which is called model error, has ...
Progress in Understanding Error-field Physics in NSTX Spherical Torus Plasmas
E. Menard, R.E. Bell, D.A. Gates, S.P. Gerhardt, J.-K. Park, S.A. Sabbagh, J.W. Berkery, A. Egan, J. Kallman, S.M. Kaye, B. LeBlanc, Y.Q. Liu, A. Sontag, D. Swanson, H. Yuh, W. Zhu and the NSTX Research Team
2010-05-19T23:59:59.000Z
The low aspect ratio, low magnetic field, and wide range of plasma beta of NSTX plasmas provide new insight into the origins and effects of magnetic field errors. An extensive array of magnetic sensors has been used to analyze error fields, to measure error field amplification, and to detect resistive wall modes in real time. The measured normalized error-field threshold for the onset of locked modes shows a linear scaling with plasma density, a weak to inverse dependence on toroidal field, and a positive scaling with magnetic shear. These results extrapolate to a favorable error field threshold for ITER. For these low-beta locked-mode plasmas, perturbed equilibrium calculations find that the plasma response must be included to explain the empirically determined optimal correction of NSTX error fields. In high-beta NSTX plasmas exceeding the n=1 no-wall stability limit where the RWM is stabilized by plasma rotation, active suppression of n=1 amplified error fields and the correction of recently discovered intrinsic n=3 error fields have led to sustained high rotation and record durations free of low-frequency core MHD activity. For sustained rotational stabilization of the n=1 RWM, both the rotation threshold and magnitude of the amplification are important. At fixed normalized dissipation, kinetic damping models predict rotation thresholds for RWM stabilization to scale nearly linearly with particle orbit frequency. Studies for NSTX find that orbit frequencies computed in general geometry can deviate significantly from those computed in the high aspect ratio and circular plasma cross-section limit, and these differences can strongly influence the predicted RWM stability. The measured and predicted RWM stability is found to be very sensitive to the E × B rotation profile near the plasma edge, and the measured critical rotation for the RWM is approximately a factor of two higher than predicted by the MARS-F code using the semi-kinetic damping model.
Running jobs error: "inet_arp_address_lookup"
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Resolved: Running jobs error: "inet_arp_address_lookup" September 22, 2013 by Helen He (0 Comments) Symptom: After the Hopper August 14...
Global Error bounds for systems of convex polynomials over ...
2011-11-11T23:59:59.000Z
This paper is devoted to studying Lipschitzian/Hölderian-type global error ...... set is not necessarily compact, we obtain the Hölder global error bound result.
Structural power flow measurement
Falter, K.J.; Keltie, R.F.
1988-12-01T23:59:59.000Z
Previous investigations of structural power flow through beam-like structures resulted in some unexplained anomalies in the calculated data. In order to develop structural power flow measurement as a viable technique for machine tool design, the causes of these anomalies needed to be found. Once found, techniques for eliminating the errors could be developed. Error sources were found in the experimental apparatus itself as well as in the instrumentation. Although flexural waves are the carriers of power in the experimental apparatus, at some frequencies longitudinal waves were excited which were picked up by the accelerometers and altered power measurements. Errors were found in the phase and gain response of the sensors and amplifiers used for measurement. A transfer function correction technique was employed to compensate for these instrumentation errors.
Optimal error estimates for corrected trapezoidal rules
Talvila, Erik
2012-01-01T23:59:59.000Z
Corrected trapezoidal rules are proved for $\int_a^b f(x)\,dx$ under the assumption that $f''\in L^p([a,b])$ for some $1\leq p\leq\infty$. Such quadrature rules involve the trapezoidal rule modified by the addition of a term $k[f'(a)-f'(b)]$. The coefficient $k$ in the quadrature formula is found that minimizes the error estimates. It is shown that when $f'$ is merely assumed to be continuous then the optimal rule is the trapezoidal rule itself. In this case error estimates are in terms of the Alexiewicz norm. This includes the case when $f''$ is integrable in the Henstock--Kurzweil sense or as a distribution. All error estimates are shown to be sharp for the given assumptions on $f''$. It is shown how to make these formulas exact for all cubic polynomials $f$. Composite formulas are computed for uniform partitions.
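The form of such a corrected rule can be sketched for a uniform partition using the classical Euler-Maclaurin coefficient k = h^2/12 (the paper's optimal k depends on the assumptions on f''; this choice is an assumption of the sketch):

```python
def corrected_trapezoid(f, fp, a, b, n):
    """Composite trapezoidal rule plus the endpoint-derivative correction
    k*[f'(a) - f'(b)] with k = h^2/12 (Euler-Maclaurin choice).
    f is the integrand, fp its first derivative, n the number of subintervals."""
    h = (b - a) / n
    t = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    t *= h                                  # plain composite trapezoidal sum
    return t + (h * h / 12.0) * (fp(a) - fp(b))   # endpoint correction term
```

For smooth integrands the correction improves the error from O(h^2) to O(h^4); for example, integrating exp on [0, 1] with n = 16 the corrected value is far closer to e - 1 than the plain trapezoidal sum.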
Characterization and removal of errors due to local magnetic anomalies in directional drilling
Geophysics, Colorado School of Mines
Summary: Directional drilling, which has evolved over the last few decades, utilizes a technique known as magnetic Measurement While Drilling (MWD). Vector measurements of geomagnetic...
Error Analysis of Heat Transfer for Finned-Tube Heat-Exchanger Text-Board
Chen, Y.; Zhang, J.
2006-01-01T23:59:59.000Z
In order to reduce as much as possible the measurement error of heat transfer on the water and air sides of a finned-tube heat exchanger, and to design a heat-exchanger test-board measurement system economically, based on the principle of the test-board system...
An error correcting procedure for imperfect supervised, nonparametric classification
Ferrell, Dennis Ray
1973-01-01T23:59:59.000Z
ON INFORMATION THEORY ... is active). For simplicity in writing, Pr(B=B_j) will be abbreviated by Pr(B_j), and f(x|B=B_j) will be abbreviated by f(x|B_j). The basic problem is, upon observing x, to determine which class is active. If complete... the conditional risk of assigning label B_j to x, r_j(x), is r_j(x) = sum_{i=1}^{L} Pr(B_i|x) ... The conditional probability of error can be minimized over j by assigning to a measurement x the label value B_j that minimizes r_j(x). The rule which will do this is Bayes' rule, b*. The resulting...
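The decision rule described in this excerpt, assigning to an observation x the class that minimizes the conditional risk, is Bayes' rule. A minimal sketch (the Gaussian class densities and equal priors below are illustrative assumptions, not taken from the thesis):

```python
import math

def bayes_classify(x, priors, densities):
    """Pick the class B_j maximizing Pr(B_j) * f(x|B_j), which for 0-1 loss
    minimizes the conditional probability of error (Bayes' rule)."""
    return max(priors, key=lambda j: priors[j] * densities[j](x))

def gauss(mu, sigma):
    """Return a 1D Gaussian density function with mean mu and std sigma."""
    return lambda x: math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Illustrative two-class problem: equal priors, unit-variance Gaussians.
priors = {"B1": 0.5, "B2": 0.5}
densities = {"B1": gauss(0.0, 1.0), "B2": gauss(3.0, 1.0)}
```

With these toy densities the decision boundary falls midway between the class means, so observations nearer 0 are labeled B1 and those nearer 3 are labeled B2.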
Integrating human related errors with technical errors to determine causes behind offshore accidents
Aamodt, Agnar
... of offshore accidents there is a continuous focus on safety improvements. An improved evaluation method ... Concepts in the model are structured in hierarchical categories, based on well-established knowledge
Mather, Mara
Running head: STEREOTYPE THREAT REDUCES MEMORY ERRORS. Stereotype threat can reduce older adults' ... 90089-0191. Phone: 213-740-6772. Email: barbersa@usc.edu. Abstract (144 words): Stereotype threat often incurs the cost of reducing the amount of information
Uncertainty and error in computational simulations
Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.
1997-10-01T23:59:59.000Z
The present paper addresses the question: ``What are the general classes of uncertainty and error sources in complex, computational simulations?`` This is the first step of a two step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built upon combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases is discussed in general, but examples are given for the fields of computational fluid dynamics and heat transfer. They argue that a clear distinction should be made between uncertainty and error that can arise in each of these phases. The present definitions for uncertainty and error are inadequate and. therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, they discuss a coupled-physics example simulation.
Laser Phase Errors in Seeded FELs
Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC
2012-03-28T23:59:59.000Z
Harmonic seeding of free electron lasers has attracted significant attention from the promise of transform-limited pulses in the soft X-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.
On the Error in QR Integration
Dieci, Luca; Van Vleck, Erik
2008-03-07T23:59:59.000Z
[R(t_k, t_{k-1}) + E_k] ... [R(t_2, t_1) + E_2][R(t_1, t_0) + E_1] R(t_0), k = 1, 2, ..., where Q(t_k) is the exact Q-factor at t_k and the triangular transitions R(t_j, t_{j-1}) are also the exact ones. Moreover, the factors E_j, j = 1, ..., k, are bounded in norm by the local error... committed during integration of the relevant differential equations; see Theorems 3.1 and 3.16." We will henceforth simply write (2.7) ||E_j|| <= epsilon, j = 1, 2, ..., and stress that epsilon is computable, in fact controllable, in terms of local error tolerances...
A self-checking fiber optic dosimeter for monitoring common errors in brachytherapy applications
Yin, Y.; Lambert, J.; Yang, S.; McKenzie, D. R.; Jackson, M.; Suchowerska, N. [Physics School, University of Sydney, New South Wales 2006 (Australia); Physics School, University of Sydney, New South Wales 2006 (Australia) and Department of Radiation Oncology, Royal Prince Alfred Hospital, New South Wales 2050 (Australia); Physics School, University of Sydney, New South Wales 2006 (Australia); Department of Radiation Oncology, Royal Prince Alfred Hospital, New South Wales 2050 (Australia); Physics School, University of Sydney, New South Wales 2006 (Australia) and Department of Radiation Oncology, Royal Prince Alfred Hospital, New South Wales 2050 (Australia)
2009-07-15T23:59:59.000Z
Scintillation dosimetry with optical fiber readout [fiber optic dosimetry (FOD)] requires accurate measurement of light intensity. It is therefore vulnerable to loss of calibration if any changes occur in the efficiency of the optical pathway between the scintillator and the light detector. The authors show in this article that common types of errors that arise during clinical use for brachytherapy applications can be quantified using a light emitting diode to stimulate the scintillator, the so-called LED-FOD method, in an integrated and easy-to-use control unit that incorporates a compact peripheral component interconnect extension for instrumentation. Common sources of error include bending and mechanical compression of the fiber optic components and changes in the temperature of the scintillator. The authors show that the method can detect all the common errors studied in this work and that different types of errors can result in different correlations between the LED stimulated signal and the brachytherapy source signal. For a single-type error the LED-FOD can be used easily for system diagnosis and validation with the possibility to correct the dosimeter reading if the correlation between the LED stimulated signal and the brachytherapy source signal can be defined. For more complex errors, resulting from two or more errors occurring simultaneously, the LED-FOD method can also allow the clinician to make a judgment on the reliability of the dosimeter reading. This self-checking method can enhance the clinical robustness of the FOD for achieving accurate dose control.
High Performance Dense Linear System Solver with Soft Error Resilience
Dongarra, Jack
High Performance Dense Linear System Solver with Soft Error Resilience Peng Du, Piotr Luszczek systems, and in some scientific applications C/R is not applicable for soft error at all due to error) high performance dense linear system solver with soft error resilience. By adopting a mathematical
Distribution of Wind Power Forecasting Errors from Operational Systems (Presentation)
Hodge, B. M.; Ela, E.; Milligan, M.
2011-10-01T23:59:59.000Z
This presentation offers new data and statistical analysis of wind power forecasting errors in operational systems.
Verifying Volume Rendering Using Discretization Error Analysis
Kirby, Mike
Verifying Volume Rendering Using Discretization Error Analysis Tiago Etiene, Daniel Jo¨nsson, Timo--We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization
Hierarchical Classification of Documents with Error Control
King, Kuo Chin Irwin
Hierarchical Classification of Documents with Error Control Chun-hung Cheng1 , Jian Tang2 , Ada Wai is a function that matches a new object with one of the predefined classes. Document classification is characterized by the large number of attributes involved in the objects (documents). The traditional method
Hierarchical Classification of Documents with Error Control
Fu, Ada Waichee
Hierarchical Classification of Documents with Error Control Chunhung Cheng 1 , Jian Tang 2 , Ada. Classification is a function that matches a new object with one of the predefined classes. Document classification is characterized by the large number of attributes involved in the objects (documents
Corley, Megan Anne
1998-01-01T23:59:59.000Z
In many of the buildings, the ESL opted to use existing flowmeters and differential pressure transmitters installed by contractors for the University. The purpose of this study is to determine the measurement error associated with the differential pressure...
Quantum Latin squares and unitary error bases
Benjamin Musto; Jamie Vicary
2015-04-10T23:59:59.000Z
In this paper we introduce quantum Latin squares, combinatorial quantum objects which generalize classical Latin squares, and investigate their applications in quantum computer science. Our main results are on applications to unitary error bases (UEBs), basic structures in quantum information which lie at the heart of procedures such as teleportation, dense coding and error correction. We present a new method for constructing a UEB from a quantum Latin square equipped with extra data. Developing construction techniques for UEBs has been a major activity in quantum computation, with three primary methods proposed: shift-and-multiply, Hadamard, and algebraic. We show that our new approach simultaneously generalizes the shift-and-multiply and Hadamard methods. Furthermore, we explicitly construct a UEB using our technique which we prove cannot be obtained from any of these existing methods.
Improving Memory Error Handling Using Linux
Carlton, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Blanchard, Sean P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Debardeleben, Nathan A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-07-25T23:59:59.000Z
As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, then the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable to deal with by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and results in reducing both hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. It will not be feasible without memory error handling to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
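The offlining mechanism the report builds on is the standard Linux memory-hotplug sysfs interface, where each memory block has a writable state file. The sketch below maps a faulty physical address to its sysfs entry; the 128 MiB block size is an illustrative assumption (the real value is in /sys/devices/system/memory/block_size_bytes), and actually writing "offline" requires root:

```python
import os

SYSFS = "/sys/devices/system/memory"

def block_for_address(phys_addr, block_size_bytes):
    """Map a faulty physical address to its sysfs memory-block state file,
    e.g. /sys/devices/system/memory/memory42/state."""
    return os.path.join(SYSFS, "memory%d" % (phys_addr // block_size_bytes), "state")

def offline_block(phys_addr, block_size_bytes, dry_run=True):
    """Offline the memory block containing phys_addr by writing 'offline'
    to its state file. With dry_run=True only the target path is built,
    so the sketch can run without root or real faulty hardware."""
    path = block_for_address(phys_addr, block_size_bytes)
    if not dry_run:
        with open(path, "w") as f:   # kernel migrates in-use pages away first
            f.write("offline")
    return path
```

A monitoring daemon of the kind described would call offline_block whenever its error-count threshold for an address is exceeded, removing the page range from further allocation.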
Message passing in fault tolerant quantum error correction
Z. W. E. Evans; A. M. Stephens
2008-06-13T23:59:59.000Z
Inspired by Knill's scheme for message passing error detection, here we develop a scheme for message passing error correction for the nine-qubit Bacon-Shor code. We show that for two levels of concatenated error correction, where classical information obtained at the first level is used to help interpret the syndrome at the second level, our scheme will correct all cases with four physical errors. This results in a reduction of the logical failure rate relative to conventional error correction by a factor proportional to the reciprocal of the physical error rate.
Improved measurement accuracy in a Long Trace Profiler: Compensation for laser pointing instability
Irick, S.C.
1993-08-02T23:59:59.000Z
Laser pointing instability adds to the error of slope measurements taken with the Long Trace Profiler (LTP). As with carriage pitch error, this laser pointing error must be accounted for and subtracted from the surface under test (SUT) slope measurement. In the past, a separate reference beam (REF) allowed characterization of the component of slope error from carriage pitch. However, the component of slope error from laser pointing manifests itself differently in the SUT measured slope. An analysis of angle error propagation is given, and the effect of these errors on measured slope is determined. Then a method is proposed for identifying these errors and subtracting them from the measured SUT slope function. Separate measurements of carriage pitch and laser pointing instability isolate these effects, so that the effectiveness of the error identification algorithm may be demonstrated.
Tang, A. Kevin
Correction From Nonlinear Measurements With Applications in Bad Data Detection for Power Networks. Weiyu Xu. In this paper, we consider the problem of sparse error correction from general nonlinear measurements, which has ... In power networks, due to physical constraints, indirect nonlinear measurement results...
Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization
LaMar, E; Hamann, B; Joy, K I
2001-10-16T23:59:59.000Z
Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed once for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
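The table-based acceleration described above replaces a per-voxel sum with a sum over unique (original, approximation) byte pairs weighted by their frequency. A minimal sketch of that idea (the squared-difference error function in the usage below is only an example):

```python
from collections import Counter

def table_error(orig, approx, err):
    """Total approximation error sum of err(o, a) over voxel pairs,
    evaluating err only once per unique (o, a) combination and
    weighting it by that combination's frequency."""
    freq = Counter(zip(orig, approx))             # unique pairs -> counts
    return sum(err(o, a) * n for (o, a), n in freq.items())
```

For byte data there are at most 256 x 256 unique pairs regardless of volume size, so changing the transfer function (and hence err) only costs a pass over the small table, not over every voxel.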
Verification of unfold error estimates in the UFO code
Fehl, D.L.; Biggs, F.
1996-07-01T23:59:59.000Z
Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetemined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
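The Monte Carlo estimate described, rerunning the unfold on data perturbed by Gaussian deviates of the prescribed 5% width, can be sketched generically; the two-channel least-squares "unfold" below is a toy illustration, not the UFO algorithm:

```python
import random
import statistics

def mc_unfold_error(unfold, data, rel_sigma, n_trials=100, seed=1):
    """Estimate the unfold uncertainty by running `unfold` on n_trials
    data sets perturbed with Gaussian deviates of relative width rel_sigma,
    then taking the sample standard deviation of the results."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        perturbed = [d * (1.0 + rng.gauss(0.0, rel_sigma)) for d in data]
        results.append(unfold(perturbed))
    return statistics.stdev(results)

def toy_unfold(data):
    """Toy unfold: recover one source amplitude from two overlapping
    response channels by least squares (illustrative only)."""
    r = [0.6, 0.4]                       # assumed channel responses
    return sum(d * ri for d, ri in zip(data, r)) / sum(ri * ri for ri in r)
```

With simulated data [6, 4] from a unit-response amplitude of 10, a 5% relative data imprecision propagates to an amplitude spread of a few percent, which the Monte Carlo loop recovers directly.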
Error Compensation of Single-Qubit Gates in a Surface Electrode Ion Trap Using Composite Pulses
Emily Mount; Chingiz Kabytayev; Stephen Crain; Robin Harper; So-Young Baek; Geert Vrijsen; Steven Flammia; Kenneth R. Brown; Peter Maunz; Jungsang Kim
2015-04-06T23:59:59.000Z
The trapped atomic ion qubits feature desirable properties for use in a quantum computer such as long coherence times (Langer et al., 2005), high qubit measurement fidelity (Noek et al., 2013), and universal logic gates (Home et al., 2009). The quality of quantum logic gate operations on trapped ion qubits has been limited by the stability of the control fields at the ion location used to implement the gate operations. For this reason, the logic gates utilizing microwave fields (Brown et al., 2011; Shappert et al., 2013; Harty et al., 2014) have shown gate fidelities several orders of magnitude better than those using laser fields (Knill et al., 2008; Benhelm et al., 2008; Ballance et al., 2014). Here, we demonstrate low-error single-qubit gates performed using stimulated Raman transitions on an ion qubit trapped in a microfabricated chip trap. Gate errors are measured using a randomized benchmarking protocol (Knill et al., 2008; Wallman et al., 2014; Magesan et al., 2012), where amplitude error in the control beam is compensated using various pulse sequence techniques (Wimperis, 1994; Low et al., 2014). Using B2 compensation (Wimperis, 1994), we demonstrate single qubit gates with an average error per randomized Clifford group gate of $3.6(3)\\times10^{-4}$. We also show that compact palindromic pulse compensation sequences (PD$n$) (Low et al., 2014) compensate for amplitude errors as designed.
Measurement enhancement for state estimation
Chen, Jian
2009-05-15T23:59:59.000Z
in the power system. A robust state estimation should have the capability of keeping the system observable during different contingencies, as well as detecting and identifying the gross errors in measurement set and network topology. However, this capability...
Experimental techniques and measurement accuracies
Bennett, E.F.; Yule, T.J.; DiIorio, G.; Nakamura, T.; Maekawa, H.
1985-02-01T23:59:59.000Z
A brief description of the experimental tools available for fusion neutronics experiments is given. Attention is paid to error estimates mainly for the measurement of tritium breeding ratio in simulated blankets using various techniques.
Reply To "Comment on 'Quantum Convolutional Error-Correcting Codes' "
H. F. Chau
2005-06-02T23:59:59.000Z
In their comment, de Almeida and Palazzo \\cite{comment} discovered an error in my earlier paper concerning the construction of quantum convolutional codes (quant-ph/9712029). This error can be repaired by modifying the method of code construction.
Human error contribution to nuclear materials-handling events
Sutton, Bradley (Bradley Jordan)
2007-01-01T23:59:59.000Z
This thesis analyzes a sample of 15 fuel-handling events from the past ten years at commercial nuclear reactors with significant human error contributions in order to detail the contribution of human error to fuel-handling ...
Evolved Error Management Biases in the Attribution of Anger
Galperin, Andrew
2012-01-01T23:59:59.000Z
von Hippel, W., Poore, J. C., Buss, D. M., et al. (under...). Haselton, M. G., & Buss, D. M. (2000). Error... 27, 733-763.
hal-00119494,version1-10Dec2006 Error structures and parameter estimation
Boyer, Edmond
probabilistic approach we have to know the law of the pair (C, C) or equivalently the law of C and the conditional law of C given C. Thus, the study of error transmission is associated to the calculus of images of probability measures. Unfortunately, the knowledge of the law of C given C by means of experiment
© MMIII by M. Kostic, www.kostic.niu.edu: Error or Uncertainty Analysis
Kostic, Milivoje M.
[Slide residue: gas analysis instrumentation (SO2, NO, NO2, CO, CO2, THC, O2), sample tanks, particle probe, gas probe, exhaust.] © MMIII by M. Kostic, www.kostic.niu.edu. Unleashing Error or Uncertainty Analysis of Measurement. DMA: Differential Mobility Analyzer; CNC: Condensation Nuclei Counter; HPLPC: High Pressure Large Particle Counter.
Quantifying Errors Associated with Satellite Sampling of Offshore Wind Speeds
S.C. Pryor (Bloomington, IN 47405, USA; Tel: 1-812-855-5155; Fax: 1-812-855-1661; Email: spryor@indiana.edu), R. ..., Dept. of Wind ... an attractive proposition for measuring wind speeds over the oceans because in principle they also offer
Chappell, Nick A
... AND ERROR ANALYSIS. N. A. Chappell (Environmental Science Division, Lancaster) and J. L. Ternan. Assessment of slope stability, soil management or contaminant transport problems usually requires numerous point measurements...
A Probability Model For Errors in the Numerical Solutions of a Partial Differential Equation
New York at Stoney Brook, State University of
into a petroleum reservoir, and observe the outflow, through production well(s). The relevant outflow variable ... permeability. We measure the solution error as the difference between the oil production rates (oil cut) ... the extent to which the coarse grid oil production rate is sufficient to distinguish among geologies
Error Analysis in Nuclear Density Functional Theory
Nicolas Schunck; Jordan D. McDonnell; Jason Sarich; Stefan M. Wild; Dave Higdon
2014-07-11T23:59:59.000Z
Nuclear density functional theory (DFT) is the only microscopic, global approach to the structure of atomic nuclei. It is used in numerous applications, from determining the limits of stability to gaining a deep understanding of the formation of elements in the universe or the mechanisms that power stars and reactors. The predictive power of the theory depends on the amount of physics embedded in the energy density functional as well as on efficient ways to determine a small number of free parameters and solve the DFT equations. In this article, we discuss the various sources of uncertainties and errors encountered in DFT and possible methods to quantify these uncertainties in a rigorous manner.
Franklin Trouble Shooting and Error Messages
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Edison Trouble Shooting and Error Messages
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Susceptibility of Commodity Systems and Software to Memory Soft Errors
Riska, Alma
Susceptibility of Commodity Systems and Software to Memory Soft Errors Alan Messer, Member, IEEE Abstract--It is widely understood that most system downtime is acounted for by programming errors transient errors in computer system hardware due to external factors, such as cosmic rays. This work
A Taxonomy of Number Entry Error Sarah Wiseman
Cairns, Paul
A Taxonomy of Number Entry Error. Sarah Wiseman, UCLIC, MPEB, Malet Place, London, WC1E 7JE. ... and the subsequent process of creating a taxonomy of errors from the information gathered. A total of 350 errors were ... These codes are then organised into a taxonomy similar to that of Zhang et al (2004). We show how
A Taxonomy of Number Entry Error Sarah Wiseman
Subramanian, Sriram
A Taxonomy of Number Entry Error. Sarah Wiseman, UCLIC, MPEB, Malet Place, London, WC1E 7JE. ... and the subsequent process of creating a taxonomy of errors from the information gathered. A total of 345 errors were ... These codes are then organised into a taxonomy similar to that of Zhang et al (2004). We show how
Predictors of Threat and Error Management: Identification of Core
Predictors of Threat and Error Management: Identification of Core Nontechnical Skills. In normal flight operations, crews are faced with a variety of external threats and commit a range of errors. The management of these threats and errors therefore forms an essential element of enhancing performance and minimizing risk.
Error rate and power dissipation in nano-logic devices
Kim, Jong Un
2004-01-01T23:59:59.000Z
Current-controlled logic and single-electron logic processors have been investigated with respect to thermally induced bit errors. A maximal error rate for both logic processors is taken as one bit error per year per chip. A maximal clock frequency...
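The one bit-error/year/chip criterion bounds the clock frequency once an error model is fixed. Under a simple Boltzmann model (an assumption of this sketch, not a formula quoted from the thesis), a thermal bit flip occurs with probability exp(-E_b/kT) per switching event:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
YEAR = 3.156e7        # seconds per year

def max_clock_hz(barrier_j, temp_k, n_devices, errors_per_year=1.0):
    """Largest clock frequency f such that the expected error count
    n_devices * f * exp(-E_b/kT) * YEAR stays below errors_per_year
    (simple Boltzmann thermal-error model)."""
    p_flip = math.exp(-barrier_j / (K_B * temp_k))
    return errors_per_year / (n_devices * p_flip * YEAR)
```

Under this model a 10^9-device chip at 300 K needs an energy barrier of roughly 65 kT to sustain GHz clocks at the one-error-per-year target, while a 40 kT barrier restricts the clock to well below a kilohertz.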
Bolstered Error Estimation Ulisses Braga-Neto a,c
Braga-Neto, Ulisses
the bolstered error estimators proposed in this paper, as part of a larger library for classification and error estimation ... of the data. It has a direct geometric interpretation and can be easily applied to any classification rule ... as smoothed error estimation. In some important cases, such as a linear classification rule with a Gaussian
Polian, Ilia
of soft errors in modern microprocessors has been reported to never lead to a system failure. ... techniques are enhanced by a methodology to handle soft errors on address bits. Furthermore, we demonstrate ... Consequently, many state-of-the-art systems provide soft error detection and correction capabilities [Hass 89]
Technological Advancements and Error Rates in Radiation Therapy Delivery
Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)
2011-11-15T23:59:59.000Z
Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. 
There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
Locked modes and magnetic field errors in MST
Almagri, A.F.; Assadi, S.; Prager, S.C.; Sarff, J.S.; Kerst, D.W.
1992-06-01T23:59:59.000Z
In the MST reversed field pinch magnetic oscillations become stationary (locked) in the lab frame as a result of a process involving interactions between the modes, sawteeth, and field errors. Several helical modes become phase locked to each other to form a rotating localized disturbance, the disturbance locks to an impulsive field error generated at a sawtooth crash, the error fields grow monotonically after locking (perhaps due to an unstable interaction between the modes and field error), and over the tens of milliseconds of growth confinement degrades and the discharge eventually terminates. Field error control has been partially successful in eliminating locking.
Nikolopoulos, Georgios M.; Ranade, Kedar S.; Alber, Gernot (Institut für Angewandte Physik, Technische Universität Darmstadt, 64289 Darmstadt, Germany)
2006-03-15T23:59:59.000Z
We investigate the error tolerance of quantum cryptographic protocols using d-level systems. In particular, we focus on prepare-and-measure schemes that use two mutually unbiased bases and a key-distillation procedure with two-way classical communication. For arbitrary quantum channels, we obtain a sufficient condition for secret-key distillation which, in the case of isotropic quantum channels, yields an analytic expression for the maximally tolerable error rate of the cryptographic protocols under consideration. The difference between the tolerable error rate and its theoretical upper bound tends slowly to zero for sufficiently large dimensions of the information carriers.
Evaluating and Minimizing Distributed Cavity Phase Errors in Atomic Clocks
Ruoxin Li; Kurt Gibble
2010-08-09T23:59:59.000Z
We perform 3D finite element calculations of the fields in microwave cavities and analyze the distributed cavity phase errors of atomic clocks that they produce. The fields of cylindrical cavities are treated as an azimuthal Fourier series. Each of the lowest components produces clock errors with unique characteristics that must be assessed to establish a clock's accuracy. We describe the errors and how to evaluate them. We prove that sharp structures in the cavity do not produce large frequency errors, even at moderately high powers, provided the atomic density varies slowly. We model the amplitude and phase imbalances of the feeds. For larger couplings, these can lead to increased phase errors. We show that phase imbalances produce a novel distributed cavity phase error that depends on the cavity detuning. We also design improved cavities by optimizing the geometry and tuning the mode spectrum so that there are negligible phase variations, allowing this source of systematic error to be dramatically reduced.
In Search of a Taxonomy for Classifying Qualitative Spreadsheet Errors
Przasnyski, Zbigniew; Seal, Kala Chand
2011-01-01T23:59:59.000Z
Most organizations use large and complex spreadsheets that are embedded in their mission-critical processes and are used for decision-making purposes. Identification of the various types of errors that can be present in these spreadsheets is, therefore, an important control that organizations can use to govern their spreadsheets. In this paper, we propose a taxonomy for categorizing qualitative errors in spreadsheet models that offers a framework for evaluating the readiness of a spreadsheet model before it is released for use by others in the organization. The classification was developed based on types of qualitative errors identified in the literature and errors committed by end-users in developing a spreadsheet model for Panko's (1996) "Wall problem". Closer inspection reveals four logical groupings of the errors, creating four categories of qualitative errors. The usability and limitations of the proposed taxonomy and areas for future extension are discussed.
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Beckerman, M.; Jones, J.P.
1999-02-01T23:59:59.000Z
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.
E791 DATA ACQUISITION SYSTEM: error reports received; no new errors reported
Fermi National Accelerator Laboratory
Error and status displays, a mailbox for histogram requests, a VAXonline event display, event reconstruction, and detector monitoring ran on a VAX 11/780 and 3 VAX workstations, with 42 Exabyte drives recording events written to tape; the VAX 11/780 was the user interface to the VME part of the entire E791 DA system.
Graphical Quantum Error-Correcting Codes
Sixia Yu; Qing Chen; C. H. Oh
2007-09-12T23:59:59.000Z
We introduce a purely graph-theoretical object, namely the coding clique, to construct quantum error-correcting codes. Almost all quantum codes constructed so far are stabilizer (additive) codes, and the construction of nonadditive codes, which are potentially more efficient, is not as well understood as that of stabilizer codes. Our graphical approach provides a unified and classical way to construct both stabilizer and nonadditive codes. In particular, we have explicitly constructed the optimal ((10,24,3)) code and a family of 1-error-detecting nonadditive codes with the highest encoding rate so far. In the case of stabilizer codes a thorough search becomes feasible, and we have classified all the extremal stabilizer codes up to 8 qubits.
Quantum Error Correction with magnetic molecules
José J. Baldoví; Salvador Cardona-Serra; Juan M. Clemente-Juan; Luis Escalera-Moreno; Alejandro Gaita-Ariño; Guillermo Mínguez Espallargas
2014-08-22T23:59:59.000Z
Quantum algorithms often assume independent spin qubits to produce trivial $|\\uparrow\\rangle=|0\\rangle$, $|\\downarrow\\rangle=|1\\rangle$ mappings. This can be unrealistic in many solid-state implementations with sizeable magnetic interactions. Here we show that the lower part of the spectrum of a molecule containing three exchange-coupled metal ions with $S=1/2$ and $I=1/2$ is equivalent to nine electron-nuclear qubits. We derive the relation between spin states and qubit states in reasonable parameter ranges for the rare earth $^{159}$Tb$^{3+}$ and for the transition metal Cu$^{2+}$, and study the possibility to implement Shor's Quantum Error Correction code on such a molecule. We also discuss recently developed molecular systems that could be adequate from an experimental point of view.
X-ray Grazing Incidence Optics for European XFEL: Analysis
Ali, Zulfiqar
2013-01-01T23:59:59.000Z
Huang, Weidong
2011-01-01T23:59:59.000Z
Surface slope error of the concentrator is one of the main factors influencing the performance of solar concentrating collectors, since it causes deviation of the reflected ray and reduces the intercepted radiation. This paper presents a general equation, derived through geometric optics, for calculating the standard deviation of the reflected-ray error from the slope error, applies the equation to five kinds of solar concentrating reflector, and provides typical results. The results indicate that the slope error is amplified by a factor of more than 2 in the reflected ray when the incidence angle is greater than 0. The equation for the reflected-ray error holds generally for all reflecting surfaces and can also be applied to control the error when designing an off-axis (abaxial) optical system.
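The factor-of-two transfer described in the abstract can be illustrated with a minimal Monte Carlo sketch (not from the paper itself; the RMS slope error is an assumed example value, and only the simplest case of near-normal incidence on a flat mirror is modeled):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_slope = 1e-3  # assumed RMS slope error, in radians

# Law of reflection: tilting the surface normal by an angle a rotates the
# reflected ray by 2a, so at near-normal incidence slope errors are doubled.
slope_err = rng.normal(0.0, sigma_slope, size=100_000)
reflected_dev = 2.0 * slope_err

print(reflected_dev.std() / sigma_slope)  # ratio close to 2
```

At oblique incidence the geometric transfer factor grows beyond 2, consistent with the abstract's statement that the amplification exceeds 2 for incidence angles greater than 0.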
Deterministic treatment of model error in geophysical data assimilation
Carrassi, Alberto
2015-01-01T23:59:59.000Z
This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational c...
Trial application of a technique for human error analysis (ATHEANA)
Bley, D.C. [Buttonwood Consulting, Inc., Oakton, VA (United States); Cooper, S.E. [Science Applications International Corp., Reston, VA (United States); Parry, G.W. [NUS, Gaithersburg, MD (United States)] [and others
1996-10-01T23:59:59.000Z
The new method for HRA, ATHEANA, has been developed based on a study of the operating history of serious accidents and an understanding of the reasons why people make errors. Previous publications associated with the project have dealt with the theoretical framework under which errors occur and the retrospective analysis of operational events. This is the first attempt to use ATHEANA in a prospective way, to select and evaluate human errors within the PSA context.
Temperature-dependent errors in nuclear lattice simulations
Dean Lee; Richard Thomson
2007-01-17T23:59:59.000Z
We study the temperature dependence of discretization errors in nuclear lattice simulations. We find that for systems with strong attractive interactions the predominant error arises from the breaking of Galilean invariance. We propose a local "well-tempered" lattice action which eliminates much of this error. The well-tempered action can be readily implemented in lattice simulations for nuclear systems as well as cold atomic Fermi systems.
Error estimates for the Euler discretization of an optimal control ...
Joseph FrÃ©dÃ©ric Bonnans
2014-12-10T23:59:59.000Z
Abstract: We study the error introduced in the solution of an optimal control problem with first order state constraints, for which the trajectories ...
Cosmic Ray Spectral Deformation Caused by Energy Determination Errors
Per Carlson; Conny Wannemark
2005-05-10T23:59:59.000Z
Using simulation methods, distortion effects on energy spectra caused by errors in the energy determination have been investigated. For cosmic ray proton spectra, which fall steeply with kinetic energy E as E^{-2.7}, significant effects appear. When magnetic spectrometers are used to determine the energy, the relative error increases linearly with the energy, and distortions with a sinusoidal form appear, starting at an energy that depends significantly on the error distribution but is lower than that corresponding to the Maximum Detectable Rigidity of the spectrometer. The effect should be taken into consideration when comparing data from different experiments, which often have different error distributions.
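The distortion mechanism can be sketched deterministically by convolving a power-law spectrum with a Gaussian energy response (an illustrative calculation, not the paper's simulation: the spectral index is from the abstract, while the energy range, threshold, and the constant, deliberately exaggerated 50% relative resolution are assumed; in a magnetic spectrometer the relative error instead grows with energy, strengthening the effect near the Maximum Detectable Rigidity):

```python
import math

gamma = 2.7                # spectral index, dN/dE ~ E^-gamma (from the abstract)
rel_res = 0.5              # assumed constant relative energy resolution
E_min, E_max = 1.0, 300.0  # assumed energy range
thresh = 100.0             # assumed analysis threshold

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Integrate the true spectrum, and the spectrum convolved with the Gaussian
# response, above the threshold.  Because the spectrum falls steeply, more
# events smear upward across the threshold than downward.
n_steps = 200_000
dE = (E_max - E_min) / n_steps
n_true = n_obs = 0.0
for i in range(n_steps):
    E = E_min + (i + 0.5) * dE
    w = E ** -gamma * dE
    if E > thresh:
        n_true += w
    n_obs += w * norm_cdf((E - thresh) / (rel_res * E))  # P(measured > thresh)

print(n_obs / n_true)  # > 1: apparent excess of events above the threshold
```

The apparent excess above the threshold is the spectral deformation the abstract describes: comparing experiments with different error distributions means comparing differently deformed spectra.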
Optimized Learning with Bounded Error for Feedforward Neural Networks
Maggiore, Manfredi
New Fractional Error Bounds for Polynomial Systems with ...
2014-07-27T23:59:59.000Z
Our major result extends the existing error bounds from the system involving only a ... linear complementarity systems with polynomial data as well as high-order ...
Homological Error Correction: Classical and Quantum Codes
H. Bombin; M. A. Martin-Delgado
2006-05-10T23:59:59.000Z
We prove several theorems characterizing the existence of homological error correction codes both classically and quantumly. Not every classical code is homological, but we find a family of classical homological codes saturating the Hamming bound. In the quantum case, we show that for non-orientable surfaces it is impossible to construct homological codes based on qudits of dimension $D>2$, while for orientable surfaces with boundaries it is possible to construct them for arbitrary dimension $D$. We give a method to obtain planar homological codes based on the construction of quantum codes on compact surfaces without boundaries. We show how the original Shor's 9-qubit code can be visualized as a homological quantum code. We study the problem of constructing quantum codes with optimal encoding rate. In the particular case of toric codes we construct an optimal family and give an explicit proof of its optimality. For homological quantum codes on surfaces of arbitrary genus we also construct a family of codes asymptotically attaining the maximum possible encoding rate. We provide the tools of homology group theory for graphs embedded on surfaces in a self-contained manner.
A technique for human error analysis (ATHEANA)
Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others
1996-05-01T23:59:59.000Z
Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.
ERROR VISUALIZATION FOR TANDEM ACOUSTIC MODELING ON THE AURORA TASK
Ellis, Dan
Manuel J. Reyes. This structure reduces the error rate on the Aurora 2 noisy English digits task by more than 50% compared to ... ; the development of tandem systems showed an improvement in the performance of these systems on the Aurora task [2].
Numerical Construction of Likelihood Distributions and the Propagation of Errors
J. Swain; L. Taylor
1997-12-12T23:59:59.000Z
The standard method for the propagation of errors, based on a Taylor series expansion, is approximate and frequently inadequate for realistic problems. A simple and generic technique is described in which the likelihood is constructed numerically, thereby greatly facilitating the propagation of errors.
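The contrast the abstract draws can be seen in a toy example (a hedged sketch, not the paper's method: the authors construct the likelihood numerically, whereas here plain Monte Carlo sampling stands in for the numerical approach, and the input values x = 10 ± 1 and y = 5 ± 1 are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented measurements x = 10 ± 1 and y = 5 ± 1; derived quantity f = x / y.
x0, sx = 10.0, 1.0
y0, sy = 5.0, 1.0

# First-order Taylor (delta-method) propagation:
#   sigma_f^2 ≈ (df/dx)^2 sx^2 + (df/dy)^2 sy^2
sf_taylor = np.sqrt((1.0 / y0) ** 2 * sx**2 + (x0 / y0**2) ** 2 * sy**2)

# Numerical alternative: sample the inputs and examine the spread of f
# directly, capturing the nonlinearity of 1/y that the expansion truncates.
x = rng.normal(x0, sx, size=1_000_000)
y = rng.normal(y0, sy, size=1_000_000)
f = x / y

print(sf_taylor, f.std())  # the sampled spread exceeds the Taylor estimate
```

With a 20% relative error on the denominator, the nonlinearity of 1/y inflates the true spread of f beyond the first-order estimate, which is the kind of inadequacy of Taylor-series propagation the abstract refers to.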
Calibration and Error in Placental Molecular Clocks: A Conservative Approach
Hadly, Elizabeth
... for calibrating both mitogenomic and nucleogenomic placental timescales. We applied these reestimates to ... calibration error may inflate the power of the molecular clock when testing the time of ordinal ...
Error Control of Iterative Linear Solvers for Integrated Groundwater Models
Bai, Zhaojun
A key issue when using an iterative linear solver, such as the conjugate gradient method or the Generalized Minimum RESidual (GMRES) method, is how to choose the residual tolerance for integrated groundwater models, which are implicitly coupled to another model, such as a surface water model. We study the correspondence between the residual error in the preconditioned linear system and the solution error.
PROPAGATION OF ERRORS IN SPATIAL ANALYSIS
Peter P. Siska; I-Kuai Hung
The conversion of data from analog to digital form used to be an extremely time-consuming process. At present ... the resulting error is inflated by up to 20 percent for each grid cell of the final map. The magnitude of errors naturally increases with the addition of every new layer entering the overlay process.
Error detection through consistency checking
Peng Gong; Lan Mu
Silver, Whendee
Center for Assessment & Monitoring, University of California, Berkeley, CA 94720-3110. ... accessibility, and timeliness as recorded in the lineage data (Chen and Gong, 1998). Spatial error refers to ...
Mutual information, bit error rate and security in Wójcik's scheme
Zhanjun Zhang
2004-02-21T23:59:59.000Z
In this paper, the correct calculations of the mutual information of the whole transmission and the quantum bit error rate (QBER) are presented. Mistakes in the general conclusions concerning the mutual information, the QBER, and the security in Wójcik's paper [Phys. Rev. Lett. 90, 157901 (2003)] are pointed out.
Uniform and optimal error estimates of an exponential wave ...
2014-05-01T23:59:59.000Z
... of the error propagation, cut-off of the nonlinearity, and the energy method. ... gives Lemma 3.4 for the local truncation error, which is of spectral order in ... estimates; we adopt a strategy similar to the finite difference method [4].
Quasi-sparse eigenvector diagonalization and stochastic error correction
Dean Lee
2000-08-30T23:59:59.000Z
We briefly review the diagonalization of quantum Hamiltonians using the quasi-sparse eigenvector (QSE) method. We also introduce the technique of stochastic error correction, which systematically removes the truncation error of the QSE result by stochastically sampling the contribution of the remaining basis states.
Mining API Error-Handling Specifications from Source Code
Xie, Tao
Mithun Acharya; Tao Xie. ... difficult to mine error-handling specifications through manual inspection of source code. In this paper, we ... without any user input. In our framework, we adapt a trace generation technique to distinguish ...
Entanglement and Quantum Error Correction with Superconducting Qubits
A dissertation presented by David Reed. ... is to use superconducting quantum bits in the circuit quantum electrodynamics (cQED) architecture.
A Geometric Approach to Error Detection and Recovery (Artificial Intelligence, 223)
Richardson, David
... may not even exist. For this reason we investigate error detection and recovery (EDR) strategies. ... and implementational questions remain. The second contribution is a formal, geometric approach to EDR. ...
Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk [Department of Mathematics, Royal Holloway University of London, Egham TW20 0EX (United Kingdom); Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent (Belgium); Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com [Física Teòrica: Informació i Fenomens Quàntics, Universitat Autònoma de Barcelona, ES-08193 Bellaterra, Barcelona (Spain); Mathematical Institute, Budapest University of Technology and Economics, Egry József u 1., Budapest 1111 (Hungary)
2014-10-15T23:59:59.000Z
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states ρ_1, …, ρ_r. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(ρ_1, …, ρ_r), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences min_{j<k} C(ρ_j, ρ_k).
Hess-Flores, M
2011-11-10T23:59:59.000Z
Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. 
Another application is in reconstruction pre-processing, where an algorithm detects and discards frames that would lead to inaccurate feature matching, camera pose estimation degeneracies or mathematical instability in structure computation based on a residual error comparison between two different match motion models. The presented algorithms were designed for aerial video but have been proven to work across different scene types and camera motions, and for both real and synthetic scenes.
An Efficient Approach towards Mitigating Soft Errors Risks
Sadi, Muhammad Sheikh; Uddin, Md Nazim; Jürjens, Jan
2011-01-01T23:59:59.000Z
Smaller feature size, higher clock frequency, and lower power consumption are core concerns of today's nano-technology, resulting from the continuous downscaling of CMOS technologies. The resultant 'device shrinking' reduces the soft error tolerance of VLSI circuits, as very little energy is needed to change their states. Safety critical systems are very sensitive to soft errors. A bit flip due to a soft error can change the value of a critical variable, and consequently the system control flow can be completely changed, leading to system failure. To minimize soft error risks, a novel methodology is proposed to detect and recover from soft errors considering only 'critical code blocks' and 'critical variables' rather than all variables and/or blocks in the whole program. The proposed method reduces space and time overhead in comparison to existing dominant approaches.
Grid-scale Fluctuations and Forecast Error in Wind Power
G. Bel; C. P. Connaughton; M. Toots; M. M. Bandi
2015-03-29T23:59:59.000Z
The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_{\tau}$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_{\zeta}$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no a priori knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_{\tau}$) and the scaling error ($e_{\zeta}$).
Quantum Error Correcting Codes and the Security Proof of the BB84 Protocol
Ramesh Bhandari
2014-08-30T23:59:59.000Z
We describe the popular BB84 protocol and critically examine its security proof as presented by Shor and Preskill. The proof requires the use of quantum error correcting codes called the Calderbank-Shor-Steane (CSS) quantum codes. These quantum codes are constructed in the quantum domain from two suitable classical linear codes, one used to correct for bit-flip errors and the other for phase-flip errors. Consequently, as a prelude to the security proof, the report reviews the essential properties of linear codes, especially the concept of cosets, before building the quantum codes that are utilized in the proof. The proof considers an entanglement-based security protocol, which is subsequently reduced to a "Prepare and Measure" protocol similar in structure to the BB84 protocol, thus establishing the security of the BB84 protocol. The proof, however, is not without assumptions, which are also enumerated. The treatment throughout is pedagogical, and this report therefore serves as a useful tutorial for researchers, practitioners, and students new to the field of quantum information science, in particular quantum cryptography, as it develops the proof in a systematic manner, starting from the properties of linear codes and then advancing to the quantum error correcting codes, which are critical to the understanding of the security proof.
Trapped Ion Quantum Error Correcting Protocols Using Only Global Operations
Joseph F. Goodwin; Benjamin J. Brown; Graham Stutter; Howard Dale; Richard C. Thompson; Terry Rudolph
2014-07-07T23:59:59.000Z
Quantum error-correcting codes are many-body entangled states that are prepared and measured using complex sequences of entangling operations. Each element of such an entangling sequence introduces noise to delicate quantum information during the encoding or reading out of the code. It is important therefore to find efficient entangling protocols to avoid the loss of information. Here we propose an experiment that uses only global entangling operations to encode an arbitrary logical qubit to either the five-qubit repetition code or the five-qubit code, with a six-ion Coulomb crystal architecture in a Penning trap. We show that the use of global operations enables us to prepare and read out these codes using only six and ten global entangling pulses, respectively. The proposed experiment also allows the acquisition of syndrome information during readout. We provide a noise analysis for the presented protocols, estimating that we can achieve a six-fold improvement in coherence time with noise as high as $\\sim 1\\%$ on each entangling operation.
Error-Induced Beam Degradation in Fermilab's Accelerators
Yoon, Phil S.; /Rochester U.
2007-08-01T23:59:59.000Z
In Part I, three independent models of Fermilab's Booster synchrotron are presented. All three models are constructed to investigate and explore the effects of unavoidable machine errors on a proton beam under the influence of space-charge effects. The first is a stochastic noise model. Electric current fluctuations arising from power supplies are ubiquitous and unavoidable and are a source of instabilities in accelerators of all types. A new noise module for generating Ornstein-Uhlenbeck (O-U) stochastic noise is first created and incorporated into the existing Object-oriented Ring Beam Injection and Tracking (ORBIT-FNAL) package. After a preliminary model confirmed that noise, particularly non-white noise, does matter to beam quality, we proceeded to measure current ripples and common-mode voltages directly from all four Gradient Magnet Power Supplies (GMPS). The current signals are then Fourier-analyzed. Based upon the power spectra of the current signals, we tune the Ornstein-Uhlenbeck noise model; as a result, we are able to closely match the frequency spectra of the current measurements and the modeled O-U stochastic noise. The stochastic noise modeled upon measurements is applied to the Booster beam in the presence of the full space-charge effects. This noise model, accompanied by a suite of beam diagnostic calculations, demonstrates that stochastic noise, impinging upon the beam and coupled to the space-charge effects, can substantially enhance the beam degradation process throughout the injection period. The second model is a magnet misalignment model, the first to utilize the latest beamline survey data to build a magnet-by-magnet misalignment model. Given as-found survey fiducial coordinates, we calculate all types of magnet alignment errors (station error, pitch, yaw, roll, twist, etc.) and implement them in the model.
We then follow up with statistical analysis to understand how each type of alignment error is currently distributed around the Booster ring. The ORBIT-FNAL simulations with space charge included show that rolled magnets, in particular, have substantial effects on the Booster beam. This survey-data-based misalignment model can predict how much improvement in machine performance can be achieved if prioritized or selected realignment work is done. In other words, this model can help us investigate different realignment scenarios for the Booster. In addition, by calculating average angular kicks from all misaligned magnets, we expect this misalignment model to serve as a guideline for resetting the strengths of corrector magnets. The third model for the Booster is a time-structured multi-turn injection model. Microbunch-injection scenarios with different time structures are explored in the presence of the longitudinal space-charge force. Due to the radio-frequency (RF) bucket mismatch between the Booster and the 400-MeV transfer line, RF-phase offsets can be parasitically introduced during the injection process. Using microbunch multi-turn injection, we carry out combined ESME-ORBIT simulations. This combined simulation allows us to investigate realistic charge-density distributions under full space-charge effects. The growth rates of transverse emittances turned out to be 20% in both planes. This microbunch-injection scenario is also applicable to the future 8-GeV Superconducting Linac Proton Driver and the upgraded Main Injector at Fermilab. In Part II, the feasibility of momentum stacking of proton beams is investigated. When the Run2 collider program at Fermilab comes to an end around year 2009, the present antiproton source will become available for other purposes. One possible application is to convert the antiproton accumulator to a proton accumulator, so that the beam power from the Main Injector could be enhanced by a factor of four.
Through adiabatic processes and optimized parameters of synchrotron motion, we demonstrate with the aid of the ESME code that up to four proton batches can be stacked in the momentum acceptance available for the Accumulator ring.
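The Ornstein-Uhlenbeck (colored) noise generation described in this entry can be sketched with an exact one-step update. This is an illustrative sketch, not the ORBIT-FNAL module: the time step, correlation time, and amplitude below are assumed values, and the check on the lag-1 autocorrelation is the author's, not from the thesis.

```python
import numpy as np

def ornstein_uhlenbeck(n_steps, dt, tau, sigma, seed=0):
    """Generate an Ornstein-Uhlenbeck (colored) noise series using the
    exact discrete update x[k+1] = rho * x[k] + s * N(0,1), where
    rho = exp(-dt/tau) and s is chosen so the stationary variance is
    sigma**2. Unlike white noise, successive samples are correlated."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-dt / tau)               # one-step autocorrelation
    s = sigma * np.sqrt(1.0 - rho**2)     # step standard deviation
    x = np.empty(n_steps)
    x[0] = sigma * rng.standard_normal()  # start in the stationary state
    for k in range(n_steps - 1):
        x[k + 1] = rho * x[k] + s * rng.standard_normal()
    return x

# Assumed parameters for illustration: 0.1 ms steps, 1 ms correlation time.
noise = ornstein_uhlenbeck(n_steps=200_000, dt=1e-4, tau=1e-3, sigma=1.0)
print(noise.var(), np.corrcoef(noise[:-1], noise[1:])[0, 1])
```

The sample variance should sit near `sigma**2` and the lag-1 autocorrelation near `exp(-dt/tau)`, which is how a generator like this can be tuned against measured power spectra.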
Logical Error Rate Scaling of the Toric Code
Fern H. E. Watson; Sean D. Barrett
2014-09-26T23:59:59.000Z
To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behavior in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead -- the total number of physical qubits required to perform error correction.
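The overhead calculation this entry describes can be sketched with the standard low-error-rate heuristic that the logical error rate of a distance-d code falls off as a power of (p / p_th). The prefactor `A` and the use of p_th ≈ 0.103 (the matching-decoder threshold for the toric code under independent noise) are assumptions for illustration, not fitted values from the paper.

```python
import math

def required_distance(p, p_target, p_th=0.103, A=0.1):
    """Smallest odd code distance d such that the heuristic scaling
    p_L ≈ A * (p / p_th)**(d / 2) drops below p_target. Valid only
    well below threshold, where failures are dominated by the
    minimum-weight error configurations counted in the paper."""
    d = 3
    while A * (p / p_th) ** (d / 2) > p_target:
        d += 2
    return d

def toric_overhead(d):
    """Physical data qubits for one toric-code block of distance d
    (2 * d**2, ignoring measurement ancillas)."""
    return 2 * d * d

d = required_distance(p=1e-3, p_target=1e-12)
print(d, toric_overhead(d))   # → 11 242
```

With a physical error rate of 10^-3 and a target logical rate of 10^-12, this toy model calls for distance 11 and 242 data qubits per block; the point of the paper is to pin down the scaling constants such estimates depend on.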
A High-Precision Instrument for Mapping of Rotational Errors in Rotary Stages
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Xu, W.; Lauer, K.; Chu, Y.; Nazaretski, E.
2014-10-02T23:59:59.000Z
A rotational stage is a key component of every X-ray instrument capable of providing tomographic or diffraction measurements. To perform accurate three-dimensional reconstructions, runout errors due to imperfect rotation (e.g. circle of confusion) must be quantified and corrected. A dedicated instrument capable of full characterization and circle of confusion mapping in rotary stages down to the sub-10 nm level has been developed. A high-stability design, with an array of five capacitive sensors, allows simultaneous measurements of wobble, radial and axial displacements. The developed instrument has been used for characterization of two mechanical stages which are part of an X-ray microscope.
Wind Power Forecasting Error Distributions: An International Comparison; Preprint
Hodge, B. M.; Lew, D.; Milligan, M.; Holttinen, H.; Sillanpaa, S.; Gomez-Lazaro, E.; Scharff, R.; Soder, L.; Larsen, X. G.; Giebel, G.; Flynn, D.; Dobschinski, J.
2012-09-01T23:59:59.000Z
Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.
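One standard way to compare forecast-error distributions across sites, as this entry does, is to test how far they depart from a Gaussian, e.g. via excess kurtosis. The sketch below uses synthetic stand-in data (a normal and a heavy-tailed Laplace sample), not the operational forecast data from the paper.

```python
import numpy as np

def excess_kurtosis(errors):
    """Excess kurtosis of a sample: 0 for a Gaussian, positive for the
    heavy-tailed shapes often reported for wind power forecast errors."""
    e = np.asarray(errors, dtype=float)
    z = (e - e.mean()) / e.std()
    return float((z**4).mean() - 3.0)

rng = np.random.default_rng(1)
gaussian_like = rng.normal(0.0, 0.05, 50_000)   # synthetic stand-in data
heavy_tailed = rng.laplace(0.0, 0.05, 50_000)   # Laplace: excess kurtosis 3
print(excess_kurtosis(gaussian_like), excess_kurtosis(heavy_tailed))
```

A materially positive excess kurtosis means large forecast errors occur more often than a Gaussian model predicts, which matters directly for setting operating reserve levels.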
Universal Framework for Quantum Error-Correcting Codes
Zhuo Li; Li-Juan Xing
2009-01-04T23:59:59.000Z
We present a universal framework for quantum error-correcting codes, i.e., the one that applies for the most general quantum error-correcting codes. This framework is established on the group algebra, an algebraic notation for the nice error bases of quantum systems. The nicest thing about this framework is that we can characterize the properties of quantum codes by the properties of the group algebra. We show how it characterizes the properties of quantum codes as well as generates some new results about quantum codes.
Reference Undulator Measurement Results
Wolf, Zachary; Levashov, Yurii; /SLAC
2011-08-18T23:59:59.000Z
The LCLS reference undulator has been measured 22 times during the course of undulator tuning. These measurements provide estimates of various statistical errors. This note gives a summary of the reference undulator measurements and provides estimates of the undulator tuning errors. We measured the reference undulator many times during the tuning of the LCLS undulators. These data sets give estimates of the random errors in the tuned undulators. The measured trajectories in the reference undulator are stable and straight to within ±2 µm. Changes in the phase errors are less than ±2 deg between data sets. The phase advance in the cell varies by less than ±2 deg between data sets. The rms variation between data sets of the first integral of B_x is 9.98 µTm, and the rms variation of the second integral of B_x is 17.4 µTm². The rms variation of the first integral of B_y is 6.65 µTm, and the rms variation of the second integral of B_y is 12.3 µTm². The rms variation of the x-position of the fiducialized beam axis is 35 µm in the final production run. This corresponds to an rms uncertainty in the K value of ΔK/K = 2.7 × 10⁻⁵. The rms variation of the y-position of the fiducialized beam axis is 4 µm in the final production run.
Radar range measurements in the atmosphere.
Doerry, Armin Walter
2013-02-01T23:59:59.000Z
The earth's atmosphere affects the velocity of propagation of microwave signals. This imparts a range error to radar range measurements that assume the typical simplistic model for propagation velocity. This range error is a function of atmospheric constituents, such as water vapor, as well as the geometry of the radar data collection, notably altitude and range. Models are presented for calculating atmospheric effects on radar range measurements, and compared against more elaborate atmospheric models.
Servo control booster system for minimizing following error
Wise, William L. (Mountain View, CA)
1985-01-01T23:59:59.000Z
A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second, unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
A Posteriori Error Estimation for - Department of Mathematics ...
Shuhao Cao supervised under Professor Zhiqiang Cai
2013-10-31T23:59:59.000Z
Oct 19, 2013 ... the "correct" Hilbert space the true flux µ⁻¹∇×u lies in, to recover a ... The error heat map shows that the ZZ-patch recovery estimator leads.
Quantum error correcting codes based on privacy amplification
Zhicheng Luo
2008-08-10T23:59:59.000Z
Calderbank-Shor-Steane (CSS) quantum error-correcting codes are based on pairs of classical codes which are mutually dual containing. Explicit constructions of such codes for large blocklengths and with good error correcting properties are not easy to find. In this paper we propose a construction of CSS codes which combines a classical code with a two-universal hash function. We show, using the results of Renner and Koenig, that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. While the bit-flip errors can be decoded as efficiently as the classical code used, the problem of efficiently decoding the phase-flip errors remains open.
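The two-universal hash function at the heart of this construction can be illustrated with the simplest such family: multiplication by a uniformly random binary matrix, for which any two distinct inputs collide with probability exactly 2^-m. This is a generic illustration of two-universality, not the specific code construction of the paper.

```python
import numpy as np

def random_linear_hash(n_in, n_out, rng):
    """Draw a uniformly random binary matrix H. The family
    {x -> H @ x mod 2} is two-universal: for any fixed x != y,
    Pr_H[Hx = Hy] = 2**(-n_out), since H(x-y) is uniform."""
    return rng.integers(0, 2, size=(n_out, n_in), dtype=np.uint8)

def apply_hash(H, x):
    return (H @ x) % 2

rng = np.random.default_rng(7)
x = rng.integers(0, 2, 16, dtype=np.uint8)
y = x.copy()
y[3] ^= 1                                  # any distinct input
trials, collisions = 20_000, 0
for _ in range(trials):
    H = random_linear_hash(16, 8, rng)
    if np.array_equal(apply_hash(H, x), apply_hash(H, y)):
        collisions += 1
# The collision fraction should be near 2**-8 ≈ 0.0039.
print(collisions / trials)
```

Two-universality is what lets the privacy-amplification argument bound the information an adversary (or, here, the phase-error syndrome) retains about the hashed value.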
avoid vocal errors: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Error Avoiding Quantum Codes (Quantum Physics, arXiv). Summary: The existence is proved of a class of open quantum...
Rateless and rateless unequal error protection codes for Gaussian channels
Boyle, Kevin P. (Kevin Patrick)
2007-01-01T23:59:59.000Z
In this thesis we examine two different rateless codes and create a rateless unequal error protection code, all for the additive white Gaussian noise (AWGN) channel. The two rateless codes are examined through both analysis ...
An Approximation Algorithm for Constructing Error Detecting Prefix ...
2006-09-02T23:59:59.000Z
Sep 2, 2006 ... 2-bit Hamming prefix code problem. Our algorithm spends O(n log³ n) time to calculate a 2-bit Hamming prefix code with an additive error of at ...
Secured Pace Web Server with Collaboration and Error Logging Capabilities
Tao, Lixin
Covers Secure Sockets Layer (SSL) using the Java Secure Socket Extension (JSSE) API, error logging, and securing the Pace Web Server with SSL.
Transition state theory: Variational formulation, dynamical corrections, and error estimates
Van Den Eijnden, Eric
Received 18 February 2005; accepted 9 September 2005; published online 7 November 2005. Presents a variational formulation of transition state theory and methods which aim at computing dynamical corrections to the TST transition rate constant.
YELLOW SEA ACOUSTIC UNCERTAINTY CAUSED BY HYDROGRAPHIC DATA ERROR
Chu, Peter C.
Concerns acoustic uncertainty in the littoral and blue waters: after a weapon platform has detected its targets, the sensors on torpedoes depend on bathymetry, bottom type, and sound speed profiles. Here, the effect of sound speed errors (i.e., hydrographic data errors) is examined.
Strontium-90 Error Discovered in Subcontract Laboratory Spreadsheet
D. D. Brown A. S. Nagel
1999-07-31T23:59:59.000Z
West Valley Demonstration Project health physicists and environmental scientists discovered a series of errors in a subcontractor's spreadsheet being used to reduce data as part of their strontium-90 analytical process.
Sample covariance based estimation of Capon algorithm error probabilities
Richmond, Christ D.
The method of interval estimation (MIE) provides a strategy for mean squared error (MSE) prediction of algorithm performance at low signal-to-noise ratios (SNR) below estimation threshold where asymptotic predictions fail. ...
TESLA-FEL 2009-07 Errors in Reconstruction of Difference Orbit
Contents: 1. Introduction; 2. Standard Least Squares Solution; 3. Error Emittance and Error Twiss Parameters. As the position of the reconstruction point changes, we introduce error Twiss parameters and an invariant error; the error in the point of interest has to be achieved by matching error Twiss parameters in this point to the desired
A Taxonomy to Enable Error Recovery and Correction in Software Vilas Sridharan
Kaeli, David R.
In recent years, reliability research has largely used the following taxonomy of errors: Undetected Errors ... Corrected Errors (CE). While this taxonomy is suitable to characterize hardware error detection and correction
Coding Techniques for Error Correction and Rewriting in Flash Memories
Mohammed, Shoeb Ahmed
2010-10-12T23:59:59.000Z
Coding Techniques for Error Correction and Rewriting in Flash Memories. A thesis by Shoeb Ahmed Mohammed, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 2010. Major subject: Electrical Engineering.
Systematic errors in current quantum state tomography tools
Christian Schwemmer; Lukas Knips; Daniel Richart; Tobias Moroder; Matthias Kleinmann; Otfried Gühne; Harald Weinfurter
2014-07-22T23:59:59.000Z
Common tools for obtaining physical density matrices in experimental quantum state tomography are shown here to cause systematic errors. For example, using maximum likelihood or least squares optimization for state reconstruction, we observe a systematic underestimation of the fidelity and an overestimation of entanglement. A solution to this problem can be achieved by a linear evaluation of the data, yielding reliable and computationally simple bounds, including error bars.
Fault-Tolerant Thresholds for Encoded Ancillae with Homogeneous Errors
Bryan Eastin
2006-11-14T23:59:59.000Z
I describe a procedure for calculating thresholds for quantum computation as a function of error model given the availability of ancillae prepared in logical states with independent, identically distributed errors. The thresholds are determined via a simple counting argument performed on a single qubit of an infinitely large CSS code. I give concrete examples of thresholds thus achievable for both Steane and Knill style fault-tolerant implementations and investigate their relation to threshold estimates in the literature.
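The "simple counting argument" for independent, identically distributed errors amounts to a binomial tail: a block that corrects up to t errors fails when more than t of its n qubits are hit. The sketch below is a generic instance of that argument with illustrative parameters, not the specific threshold calculation of the paper.

```python
from math import comb

def failure_probability(n, t, p):
    """Probability that more than t of n qubits suffer independent,
    identically distributed errors of rate p — the binomial tail
    behind counting-argument threshold estimates."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(t + 1, n + 1))

# Illustrative: a 7-qubit block correcting t = 3 errors at p = 1%
# fails with probability of order 3e-7.
print(failure_probability(7, 3, 0.01))
```

Thresholds emerge from comparing this suppressed failure rate against the error rate fed into the next level of the protocol.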
SU-E-T-152: Error Sensitivity and Superiority of a Protocol for 3D IMRT Quality Assurance
Gueorguiev, G [Massachusetts General Hospital, Boston, MA (United States); University of Massachusetts Lowell, Lowell, MA (United States); Cotter, C; Turcotte, J; Sharp, G; Crawford, B [Massachusetts General Hospital, Boston, MA (United States); Mah'D, M [University of Massachusetts Lowell, Lowell, MA (United States)
2014-06-01T23:59:59.000Z
Purpose: To test whether the parameters included in our 3D QA protocol, with current tolerance levels, are able to detect certain errors, and to show the superiority of the 3D QA method over single ion chamber measurements and the 2D gamma test by detecting most of the introduced errors. The 3D QA protocol parameters are: TPS and measured average dose difference, a 3D gamma test with 3mm-DTA/3% test parameters, and the structure volume for which the TPS-predicted and measured absolute dose difference is greater than 6%. Methods: Two prostate and two thoracic step-and-shoot IMRT patients were investigated. The following errors were introduced into each original treatment plan: energy switched from 6MV to 10MV; linac jaws retracted to 15cm x 15cm; 1, 2, or 3 central MLC leaf pairs retracted behind the jaws; a single central MLC leaf put in or out of the treatment field; Monitor Units (MU) increased and decreased by 1 and 3%; collimator off by 5 and 15 degrees; detector shifted by 5mm to the left and right; gantry treatment angle off by 5 and 15 degrees. QA was performed on each plan using a single ion chamber, a 2D ion chamber array for 2D gamma analysis, and IBA's COMPASS system for 3D QA. Results: Of the three tested QA methods, the single ion chamber performs the worst, failing to detect subtle errors. 3D QA proves to be the superior of the three methods, detecting all of the introduced errors except the 10MV switch, the 1% MU change, and the rotated MLC (those errors were not detected by any of the QA methods tested). Conclusion: As the way radiation is delivered evolves, so must the QA. We believe a diverse set of 3D statistical parameters applied both to OAR and target plan structures provides the highest level of QA.
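The 3%/3 mm gamma criterion used in the protocol can be sketched as a 1D computation. This is an illustrative toy only: clinical gamma analysis runs on 2D/3D dose grids, and the synthetic profile and introduced shift below are the author's assumptions, not data from the study.

```python
import numpy as np

def gamma_pass_rate(reference, measured, positions, dta_mm=3.0, dose_pct=3.0):
    """1D global gamma test: for each reference point, gamma is the minimum
    over measured points of sqrt((dose_diff/dose_tol)**2 + (dist/dta)**2);
    a point passes when gamma <= 1 (the 3%/3 mm criterion)."""
    dose_tol = dose_pct / 100.0 * reference.max()   # global normalization
    passed = 0
    for i, r in enumerate(reference):
        dd = (measured - r) / dose_tol
        dx = (positions - positions[i]) / dta_mm
        if np.sqrt(dd**2 + dx**2).min() <= 1.0:
            passed += 1
    return passed / len(reference)

x = np.linspace(-10, 10, 201)              # positions in mm
ref = np.exp(-x**2 / 20.0)                 # synthetic dose profile
meas_ok = ref * 1.01                       # 1% dose error: within tolerance
meas_shift = np.exp(-(x - 5.0)**2 / 20.0)  # 5 mm shift: exceeds 3 mm DTA
print(gamma_pass_rate(ref, meas_ok, x), gamma_pass_rate(ref, meas_shift, x))
```

A small uniform dose error passes everywhere, while a spatial shift beyond the DTA fails over much of the profile, which is the behavior the protocol's tolerance levels are meant to exploit.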
Jeong, Jaehoon "Paul"
[Diagram residue: measurement systems A and B, each with a GPS receiver, coordinated by a control system via command, measurement, and data flows.] Methodology for One-way IP Performance Measurement. This paper proposes a methodology for measurement of one-way IP performance metrics such as one-way delay
A new and efficient error resilient entropy code for image and video compression
Min, Jungki
1999-01-01T23:59:59.000Z
Image and video compression standards such as JPEG, MPEG, H.263 are severely sensitive to errors. Among typical error propagation mechanisms in video compression schemes, loss of block synchronization causes the worst result. Even one bit error...
Error Monitoring: A Learning Strategy for Improving Academic Performance of LD Adolescents
Schumaker, Jean B.; Deshler, Donald D.; Nolan, Susan; Clark, Frances L.; Alley, Gordon R.; Warner, Michael M.
1981-04-01T23:59:59.000Z
Error monitoring, a learning strategy for detecting and correcting errors in written products, was taught to nine learning disabled adolescents. Students could detect and correct more errors after they received training ...
Assessing the Impact of Differential Genotyping Errors on Rare Variant Tests of Association
Fast, Shannon Marie
Genotyping errors are well-known to impact the power and type I error rate in single marker tests of association. Genotyping errors that happen according to the same process in cases and controls are known as non-differential ...
Wang, S.; Sun, Y.; Huang, G.; Zhu, N.
According to surveys in Hong Kong and elsewhere, the direct measurement of building cooling load cannot always provide reliable measurements in practice due to the noises, outliers and systematic errors in measuring the water... ρ is the water density (kg/L). In practice, the water flow M_w is usually measured by water flow meters and the temperatures are measured by temperature sensors. It is known that these measurements are easily corrupted by measurement noises, outliers or systematic errors...
SHEAN (Simplified Human Error Analysis code) and automated THERP
Wilson, J.R.
1993-06-01T23:59:59.000Z
One of the most widely used human error analysis tools is THERP (Technique for Human Error Rate Prediction). Unfortunately, this tool has disadvantages. The Nuclear Regulatory Commission, realizing these drawbacks, commissioned Dr. Swain, the author of THERP, to create a simpler, more consistent tool for deriving human error rates. That effort produced the Accident Sequence Evaluation Program Human Reliability Analysis Procedure (ASEP), which is more conservative than THERP, but a valuable screening tool. ASEP involves answering simple questions about the scenario in question, and then looking up the appropriate human error rate in the indicated table (THERP also uses look-up tables, but four times as many). The advantages of ASEP are that human factors expertise is not required, and the training to use the method is minimal. Although not originally envisioned by Dr. Swain, the ASEP approach actually begs to be computerized. That WINCO did, calling the code SHEAN, for Simplified Human Error ANalysis. The code was done in TURBO Basic for IBM or IBM-compatible MS-DOS, for fast execution. WINCO is now in the process of comparing this code against THERP for various scenarios. This report provides a discussion of SHEAN.
LHC Network Measurement Joe Metzger
LHC Network Measurement, Joe Metzger, Nov 6 2007, LHCOPN Meeting at CERN, Energy Sciences Network. [Slide residue: measurement components and milestones — capacity (RRDMA), input errors and output drops (PS-SNMPMA), on-demand visualization (AMI MA and scheduler), Hades OWAMP MP, with beta and package dates through October.]
Development of an integrated system for estimating human error probabilities
Auflick, J.L.; Hahn, H.A.; Morzinski, J.A.
1998-12-01T23:59:59.000Z
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project had as its main objective the development of a Human Reliability Analysis (HRA), knowledge-based expert system that would provide probabilistic estimates for potential human errors within various risk assessments, safety analysis reports, and hazard assessments. HRA identifies where human errors are most likely, estimates the error rate for individual tasks, and highlights the most beneficial areas for system improvements. This project accomplished three major tasks. First, several prominent HRA techniques and associated databases were collected and translated into an electronic format. Next, the project started a knowledge engineering phase where the expertise, i.e., the procedural rules and data, were extracted from those techniques and compiled into various modules. Finally, these modules, rules, and data were combined into a nearly complete HRA expert system.
Representing cognitive activities and errors in HRA trees
Gertman, D.I.
1992-01-01T23:59:59.000Z
A graphic representation method is presented herein for adapting an existing technology--human reliability analysis (HRA) event trees, used to support event sequence logic structures and calculations--to include a representation of the underlying cognitive activity and corresponding errors associated with human performance. The analyst is presented with three potential means of representing human activity: the NUREG/CR-1278 HRA event tree approach; the skill-, rule- and knowledge-based paradigm; and the slips, lapses, and mistakes paradigm. The above approaches for representing human activity are integrated in order to produce an enriched HRA event tree -- the cognitive event tree system (COGENT)-- which, in turn, can be used to increase the analyst's understanding of the basic behavioral mechanisms underlying human error and the representation of that error in probabilistic risk assessment. Issues pertaining to the implementation of COGENT are also discussed.
Non-Gaussian numerical errors versus mass hierarchy
Y. Meurice; M. B. Oktay
2000-05-12T23:59:59.000Z
We probe the numerical errors made in renormalization group calculations by varying slightly the rescaling factor of the fields and rescaling back in order to get the same (if there were no round-off errors) zero-momentum 2-point function (magnetic susceptibility). The actual calculations were performed with Dyson's hierarchical model and a simplified version of it. We compare the distributions of numerical values obtained from a large sample of rescaling factors with the (Gaussian by design) distribution of a random number generator and find significant departures from the Gaussian behavior. In addition, the average value differs (robustly) from the exact answer by a quantity of the same order as the standard deviation. We provide a simple model in which the errors made at shorter distance have a larger weight than those made at larger distance. This model explains in part the non-Gaussian features and why the central-limit theorem does not apply.
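The probe described here, rescale and rescale back, can be demonstrated in a few lines: with exact arithmetic the operation is the identity, so any spread across rescaling factors is pure round-off. This is a minimal floating-point illustration, not the hierarchical-model calculation itself; the values and factors are arbitrary.

```python
import numpy as np

def rescale_probe(values, factors):
    """Multiply by a rescaling factor s and divide it back out. With
    exact arithmetic (x * s) / s == x, so the residuals isolate the
    round-off error introduced by the two operations."""
    values = np.asarray(values)
    return np.array([np.abs((values * s) / s - values).max()
                     for s in factors])

rng = np.random.default_rng(3)
x = rng.random(1000)                       # arbitrary field values
factors = 1.0 + 1e-3 * rng.random(200)     # slightly varied rescalings
residuals = rescale_probe(x, factors)
print(residuals.max())                     # tiny but generally nonzero
```

Collecting such residuals over many factors gives the empirical error distribution whose non-Gaussian shape the paper analyzes.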
Meta learning of bounds on the Bayes classifier error
Moon, Kevin R; Hero, Alfred O
2015-01-01T23:59:59.000Z
Meta learning uses information from base learners (e.g. classifiers or estimators) as well as information about the learning problem to improve upon the performance of a single base learner. For example, the Bayes error rate of a given feature space, if known, can be used to aid in choosing a classifier, as well as in feature selection and model selection for the base classifiers and the meta classifier. Recent work in the field of f-divergence functional estimation has led to the development of simple and rapidly converging estimators that can be used to estimate various bounds on the Bayes error. We estimate multiple bounds on the Bayes error using an estimator that applies meta learning to slowly converging plug-in estimators to obtain the parametric convergence rate. We compare the estimated bounds empirically on simulated data and then estimate the tighter bounds on features extracted from an image patch analysis of sunspot continuum and magnetogram images.
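A concrete example of a Bayes-error bound of the kind estimated in this paper is the Bhattacharyya bound. The sketch below uses a crude histogram plug-in on synthetic Gaussian data rather than the meta-learned f-divergence estimators of the paper; the sample sizes and bin count are assumptions.

```python
import numpy as np

def bhattacharyya_bayes_bounds(sample0, sample1, bins=50):
    """Plug-in estimate of the Bhattacharyya coefficient
    BC = sum_i sqrt(p_i * q_i) from two samples, and the induced
    equal-prior bounds on the Bayes error:
    0.5 * (1 - sqrt(1 - BC**2)) <= Pe <= 0.5 * BC."""
    lo = min(sample0.min(), sample1.min())
    hi = max(sample0.max(), sample1.max())
    p, _ = np.histogram(sample0, bins=bins, range=(lo, hi))
    q, _ = np.histogram(sample1, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sqrt(p * q).sum()
    return 0.5 * (1 - np.sqrt(1 - bc**2)), 0.5 * bc

rng = np.random.default_rng(0)
s0 = rng.normal(0.0, 1.0, 100_000)
s1 = rng.normal(2.0, 1.0, 100_000)
lower, upper = bhattacharyya_bayes_bounds(s0, s1)
# True Bayes error for unit-variance Gaussians 2 sigma apart is
# Phi(-1) ≈ 0.159, which should fall between the two bounds.
print(lower, upper)
```

Such bounds let one judge, before training any classifier, how much of the observed error is irreducible, which is the feature-selection use case the abstract mentions.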
Hard Data on Soft Errors: A Large-Scale Assessment of Real-World Error Rates in GPGPU
Haque, Imran S
2009-01-01T23:59:59.000Z
Graphics processing units (GPUs) are gaining widespread use in computational chemistry and other scientific simulation contexts because of their huge performance advantages relative to conventional CPUs. However, the reliability of GPUs in error-intolerant applications is largely unproven. In particular, a lack of error checking and correcting (ECC) capability in the memory subsystems of graphics cards has been cited as a hindrance to the acceptance of GPUs as high-performance coprocessors, but the impact of this design has not been previously quantified. In this article we present MemtestG80, our software for assessing memory error rates on NVIDIA G80 and GT200-architecture-based graphics cards. Furthermore, we present the results of a large-scale assessment of GPU error rate, conducted by running MemtestG80 on over 20,000 hosts on the Folding@home distributed computing network. Our control experiments on consumer-grade and dedicated-GPGPU hardware in a controlled environment found no errors. However, our su...
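A memory tester like the one described works by writing known patterns and verifying them on read-back. The sketch below is a CPU-side caricature of a moving-inversions pass, not MemtestG80's CUDA kernels; the stuck-bit fault model is an assumption for illustration.

```python
def moving_inversions(memory, pattern=0xA5A5A5A5):
    """Simplified moving-inversions memory test: write a 32-bit pattern,
    read it back, then repeat with the bitwise complement so every bit
    is exercised in both states. Each mismatch counts as an error."""
    errors = 0
    for pat in (pattern, pattern ^ 0xFFFFFFFF):
        for i in range(len(memory)):
            memory[i] = pat                 # write phase
        for i in range(len(memory)):
            if memory[i] != pat:            # read/verify phase
                errors += 1
    return errors

class StuckBitRAM(list):
    """Stand-in for faulty memory: one cell's lowest bit is stuck at 1."""
    def __setitem__(self, i, v):
        super().__setitem__(i, v | 1 if i == 5 else v)

print(moving_inversions([0] * 4096))               # healthy memory → 0
print(moving_inversions(StuckBitRAM([0] * 4096)))  # stuck bit → 1
```

Testing with a pattern and its complement is essential: the stuck-at-1 cell is invisible to the first pattern (whose low bit is already 1) and only the complement pass exposes it.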
Peak, Derek
Are you getting an error message in UniFi Plus? (Suggestion: check the auto-hint line!) In most cases, UniFi Plus does not prominently display error messages; instead, the error message and processing messages
Error estimates and specification parameters for functional renormalization
Schnoerr, David (Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg, Germany); Boettcher, Igor, E-mail: I.Boettcher@thphys.uni-heidelberg.de (Institute for Theoretical Physics, University of Heidelberg); Pawlowski, Jan M. (Institute for Theoretical Physics, University of Heidelberg; ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung mbH, D-64291 Darmstadt, Germany); Wetterich, Christof (Institute for Theoretical Physics, University of Heidelberg)
2013-07-15T23:59:59.000Z
We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximate solutions obtained by truncation depend not only on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS-BEC crossover in ultracold atoms. Within a simple truncation, the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency-independent cutoff function.
JLab SRF Cavity Fabrication Errors, Consequences and Lessons Learned
Frank Marhauser
2011-09-01T23:59:59.000Z
Today, elliptical superconducting RF (SRF) cavities are preferably made from deep-drawn niobium sheets, as pursued at Jefferson Laboratory (JLab). The fabrication of a cavity incorporates various cavity cell machining, trimming and electron beam welding (EBW) steps as well as surface chemistry, which add forming errors that create geometrical deviations of the cavity shape from its design. An analysis of in-house built cavities over the last years revealed significant errors in cavity production. Past fabrication flaws are described, and lessons learned have been applied successfully to the most recent in-house series production of multi-cell cavities.
Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds
Guth, Larry, E-mail: lguth@math.mit.edu [Department of Mathematics, MIT, Cambridge, Massachusetts 02139 (United States); Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il [Institute of Mathematics, Hebrew University, Jerusalem 91904 (Israel)
2014-08-15T23:59:59.000Z
Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor ["On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction," in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259-273], who asked whether homological codes with such parameters could exist at all.
Full protection of superconducting qubit systems from coupling errors
M. J. Storcz; J. Vala; K. R. Brown; J. Kempe; F. K. Wilhelm; K. B. Whaley
2005-08-09T23:59:59.000Z
Solid state qubits realized in superconducting circuits are potentially extremely scalable. However, strong decoherence may be transferred to the qubits by various elements of the circuits that couple individual qubits, particularly when coupling is implemented over long distances. We propose here an encoding that provides full protection against errors originating from these coupling elements, for a chain of superconducting qubits with a nearest neighbor anisotropic XY-interaction. The encoding is also seen to provide partial protection against errors deriving from general electronic noise.
Laser Phase Errors in Seeded Free Electron Lasers
Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC
2012-04-17T23:59:59.000Z
Harmonic seeding of free electron lasers has attracted significant attention as a method for producing transform-limited pulses in the soft x-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but they also amplify the spectral phase errors of the initial seed laser, which may degrade the pulse quality and impede production of transform-limited pulses. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results to arbitrary seed laser envelope and phase.
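The amplification mechanism can be stated simply: in an idealized harmonic multiplication stage, the output phase at the n-th harmonic is n times the seed phase, so residual seed phase errors grow linearly with the harmonic number. A minimal numerical sketch (the sample phase values are invented for illustration; real HGHG/EEHG stages add further dynamics not modeled here):

```python
import math

def rms(xs):
    """Root-mean-square of a sequence."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def multiplied_phase(seed_phase, n):
    """Idealized harmonic multiplication: the radiated phase at the
    n-th harmonic is n times the seed phase, so phase errors scale with n."""
    return [n * phi for phi in seed_phase]

# Hypothetical residual seed phase errors (radians) along the pulse:
seed = [0.01, -0.02, 0.015, 0.0, -0.01]
for n in (1, 5, 15):
    print(n, rms(multiplied_phase(seed, n)))  # rms grows linearly with n
```

This linear scaling is why a seed that is adequate at the fundamental can fail to deliver transform-limited pulses after a large harmonic jump.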
Correctable noise of Quantum Error Correcting Codes under adaptive concatenation
Jesse Fern
2008-02-27T23:59:59.000Z
We examine the transformation of noise under a quantum error correcting code (QECC) concatenated repeatedly with itself, by analyzing the effects of a quantum channel after each level of concatenation using recovery operators that are optimally adapted to use error syndrome information from the previous levels of the code. We use the Shannon entropy of these channels to estimate the thresholds of correctable noise for QECCs and find considerable improvements under this adaptive concatenation. Similar methods could be used to increase quantum fault tolerant thresholds.
Low-Cost Hardening of Image Processing Applications Against Soft Errors
Polian, Ilia
... and their hardening against soft errors becomes an issue. We propose a methodology to identify soft errors as uncritical based on their impact on the system's functionality. We call a soft error uncritical if its impact is imperceptible to the human user of the system. We focus on soft errors in the motion estimation subsystem ...
Distinguishing congestion and error losses: an ECN/ELN based scheme
Kamakshisundaram, Raguram
2001-01-01T23:59:59.000Z
... error rates, like wireless links, packets are lost more due to error than due to congestion. But TCP does not differentiate between error and congestion losses and hence reduces the sending rate for losses due to error as well, which unnecessarily reduces...
Designing Automation to Reduce Operator Errors
Leveson, Nancy
Nancy G. Leveson (Computer Science and Engineering, University of Washington) and Everett Palmer (NASA Ames Research Center). Introduction: Advanced automation has been ... of mode-related problems [SW95]. After studying accidents and incidents in the new, highly automated ...
Energy efficiency of error correction for wireless communication
Havinga, Paul J.M.
Error control is an important issue for mobile computing systems. This includes energy spent in the physical radio transmission as well as the energy of redundancy computation. We will show that the computational cost ...
Effects of errors in the solar radius on helioseismic inferences
Sarbani Basu
1997-12-09T23:59:59.000Z
Frequencies of intermediate-degree f-modes of the Sun seem to indicate that the solar radius is smaller than what is normally used in constructing solar models. We investigate the possible consequences of an error in radius on results for solar structure obtained using helioseismic inversions. It is shown that solar sound speed will be overestimated if oscillation frequencies are inverted using reference models with a larger radius. Using solar models with radius of 695.78 Mm and new data sets, the base of the solar convection zone is estimated to be at radial distance of $0.7135\\pm 0.0005$ of the solar radius. The helium abundance in the convection zone as determined using models with OPAL equation of state is $0.248\\pm 0.001$, where the errors reflect the estimated systematic errors in the calculation, the statistical errors being much smaller. Assuming that the OPAL opacities used in the construction of the solar models are correct, the surface $Z/X$ is estimated to be $0.0245\\pm 0.0006$.
Two infinite families of nonadditive quantum error-correcting codes
Sixia Yu; Qing Chen; C. H. Oh
2009-01-14T23:59:59.000Z
We construct explicitly two infinite families of genuine nonadditive 1-error correcting quantum codes and prove that their coding subspaces are 50% larger than those of the optimal stabilizer codes of the same parameters via the linear programming bound. All these nonadditive codes can be characterized by a stabilizer-like structure and thus their encoding circuits can be designed in a straightforward manner.
Threshold error rates for the toric and surface codes
D. S. Wang; A. G. Fowler; A. M. Stephens; L. C. L. Hollenberg
2009-05-05T23:59:59.000Z
The surface code scheme for quantum computation features a 2D array of nearest-neighbor coupled qubits yet claims a threshold error rate approaching 1% (New J. Phys. 9:199, 2007). This result was obtained for the toric code, from which the surface code is derived, and surpasses all other known codes restricted to 2D nearest-neighbor architectures by several orders of magnitude. We describe in detail an error correction procedure for the toric and surface codes, which is based on polynomial-time graph matching techniques and is efficiently implementable as the classical feed-forward processing step in a real quantum computer. By direct simulation of this error correction scheme, we determine the threshold error rates for the two codes (differing only in their boundary conditions) for both ideal and non-ideal syndrome extraction scenarios. We verify that the toric code has an asymptotic threshold of p = 15.5% under ideal syndrome extraction, and p = 7.8 × 10^-3 for the non-ideal case, in agreement with prior work. Simulations of the surface code indicate that the threshold is close to that of the toric code.
RESIDUAL TYPE A POSTERIORI ERROR ESTIMATES FOR ELLIPTIC OBSTACLE PROBLEMS
Nochetto, Ricardo H.
Extensions to double obstacle problems are briefly discussed. Key words: a posteriori error estimates, residual estimators, elliptic obstacle problems. [The remainder of this record is extraction residue: grant acknowledgments (National Science Foundation grant No. 19771080 and the China National Key Project "Large Scale Scientific ...") and fragments of the problem setting, in which K denotes the convex set of admissible displacements in H^1_0(Ω).]
Multilayer Perceptron Error Surfaces: Visualization, Structure and Modelling
Gallagher, Marcus
... This is commonly formulated as a multivariate non-linear optimization problem over a very high-dimensional space. ... of analysis are not well-suited to this problem. Visualizing and describing the error surface are ... three related methods. Firstly, Principal Component Analysis (PCA) is proposed as a method ...
Analysis of possible systematic errors in the Oslo method
A. C. Larsen; M. Guttormsen; M. Krticka; E. Betak; A. Bürger; A. Görgen; H. T. Nyhus; J. Rekstad; A. Schiller; S. Siem; H. K. Toft; G. M. Tveten; A. V. Voinov; K. Wikan
2012-11-27T23:59:59.000Z
In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of level density and gamma-ray transmission coefficient from a set of particle-gamma coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.
Flexible Error Protection for Energy Efficient Reliable Architectures Timothy Miller
Xuan, Dong
Timothy Miller, Nagarjuna ... (Electrical and Computer Engineering, The Ohio State University; {millerti,teodores}@cse.ohio-state.edu, nagarjun...). To deal with these competing trends, energy-efficient solutions are needed to deal with reliability ...
A Method for Treating Discretization Error in Nondeterministic Analysis
Alvin, K.F.
1999-01-27T23:59:59.000Z
A response surface methodology-based technique is presented for treating discretization error in non-deterministic analysis. The response surface, or metamodel, is estimated from computer experiments which vary both uncertain physical parameters and the fidelity of the computational mesh. The resultant metamodel is then used to propagate the variabilities in the continuous input parameters, while the mesh size is taken to zero, its asymptotic limit. With respect to mesh size, the metamodel is equivalent to Richardson extrapolation, in which solutions on coarser and finer meshes are used to estimate discretization error. The method is demonstrated on a one dimensional prismatic bar, in which uncertainty in the third vibration frequency is estimated by propagating variations in material modulus, density, and bar length. The results demonstrate the efficiency of the method for combining non-deterministic analysis with error estimation to obtain estimates of total simulation uncertainty. The results also show the relative sensitivity of failure estimates to solution bias errors in a reliability analysis, particularly when the physical variability of the system is low.
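With respect to mesh size, the method above is equivalent to ordinary Richardson extrapolation, which can be sketched directly. A minimal illustration, assuming a solution whose discretization error is a pure h^p term (the toy function and the order p = 2 are invented for illustration, not the paper's vibrating-bar problem):

```python
def richardson(f_coarse, f_fine, r, p):
    """Richardson extrapolation: given solutions on a coarse mesh (spacing
    r*h) and a fine mesh (spacing h), and convergence order p, estimate the
    zero-mesh-size limit and the discretization error of the fine solution."""
    limit = f_fine + (f_fine - f_coarse) / (r**p - 1)
    error = limit - f_fine
    return limit, error

# Toy model: f(h) = 2.0 + 0.5*h**2, i.e. a second-order scheme (p = 2).
f = lambda h: 2.0 + 0.5 * h**2
limit, err = richardson(f(0.2), f(0.1), r=2, p=2)
print(limit, err)  # recovers the limit 2.0 (up to rounding)
```

In the paper's setting the same extrapolation is folded into the response-surface metamodel, so the mesh dimension can be driven to its asymptotic limit while the physical parameters are varied.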
Considering Workload Input Variations in Error Coverage Estimation
Karlsson, Johan
Variations in the workload input cause different parts of the workload code to be executed a different number of times. The effects of variations in the workload input when estimating error detection coverage using fault injection are investigated. A method for predicting the coverage for one input sequence based on results from fault injection experiments with another input sequence is presented.
Data aware, Low cost Error correction for Wireless Sensor Networks
California at San Diego, University of
Data aware, Low cost Error correction for Wireless Sensor Networks. Shoubhik Mukhopadhyay, Debashis ... One of the key challenges in the adoption and deployment of wireless networked sensing applications is ensuring reliable sensor ... of such applications. A wireless sensor network is inherently vulnerable to different sources of unreliability ...
Error Minimization Methods in Biproportional Apportionment Federica Ricca Andrea Scozzari
Serafini, Paolo
... as an alternative to the classical axiomatic approach introduced by Balinski and Demange in 1989. A milestone theoretical setting was given by Balinski and Demange in 1989 [5, 6] and in the statistical literature. We provide ... a class of methods for Biproportional Apportionment characterized by an "error minimization" approach ...
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR
Sambridge, Malcolm
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR ANALYSIS USING ... for the different solutions didn't even overlap. Introduction: A discrimination and classification strategy ... ambiguity and possible remanent magnetization ... the recovered dipole moment is compared to a library ...
Error Exponent for Discrete Memoryless Multiple-Access Channels
Anastasopoulos, Achilleas
Error Exponent for Discrete Memoryless Multiple-Access Channels, by Ali Nazari. A dissertation ... © Ali Nazari 2011, All Rights Reserved. [The remainder of this record is front-matter residue: committee members and acknowledgments of departmental staff.]
IPASS: Error Tolerant NMR Backbone Resonance Assignment by Linear Programming
Waterloo, University of
IPASS: Error Tolerant NMR Backbone Resonance Assignment by Linear Programming. Babak Alipanahi ... automatically picked peaks. IPASS is proposed as a novel integer linear programming (ILP) based assignment method. Although a variety of assignment approaches have been developed, none works well on noisy ...
Research Article Preschool Speech Error Patterns Predict Articulation
...-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological ... Outcomes in Children With Histories of Speech Sound Disorders, by Jonathan L. Preston and Margaret Hull ... speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method: ...
Edinburgh Research Explorer Prevalence and Causes of Prescribing Errors
Hall, Christopher
Prevalence and Causes of Prescribing Errors: The PRescribing Outcomes for Trainee Doctors Engaged in Clinical Training (PROTECT) Study. Cristín Ryan, Sarah ... (affiliations include Health Psychology, University of Aberdeen, Aberdeen, United Kingdom, and Clinical Pharmacology ...).
Achievable Error Exponents for the Private Fingerprinting Game
Merhav, Neri
Anelia Somekh-Baruch and Neri Merhav have presented and analyzed a game-theoretic model of private fingerprinting systems in the presence of ... a forgery of the data while aiming at erasing the fingerprints in order not to be detected. Their action ...
RESOLVE Upgrades for on Line Lattice Error Analysis
Lee, M.; Corbett, J.; White, G.; /SLAC; Zambre, Y.; /Unlisted
2011-08-25T23:59:59.000Z
We have increased the speed and versatility of the orbit analysis process by adding a command file, or 'script' language, to RESOLVE. This command file feature enables us to automate data analysis procedures to detect lattice errors. We describe the RESOLVE command file and present examples of practical applications.
Error Control Based Model Reduction for Parameter Optimization of Elliptic
... optimization of elliptic multiscale problems with macroscopic optimization functionals and microscopic material ..., motivated by the design of technical devices that rely on multiscale processes, such as fuel cells or batteries. As the solution ...
Development of an Expert System for Classification of Medical Errors
Kopec, Danny
A report published by the Institute of Medicine (IOM) indicated that between 44,000 and 98,000 unnecessary deaths per year occur in hospitals in the United States. There has been considerable speculation that these figures are either overestimated ... Regardless of the exact number of deaths reported in hospitals in the IOM report, what is of importance is that the number of deaths caused by such errors ...
Odometry Error Covariance Estimation for Two Wheel Robot Vehicles
Robotics Research Centre, Department of Electrical and Computer Systems Engineering, Monash University, Technical Report MECSE-95-1, 1995. Abstract: This technical report develops a simple statistical error model ... of the robot. Other paths can be composed of short segments of constant-curvature arcs without great loss ...
Quantum computing with nearest neighbor interactions and error rates over 1%
David S. Wang; Austin G. Fowler; Lloyd C. L. Hollenberg
2010-09-20T23:59:59.000Z
Large-scale quantum computation will only be achieved if experimentally implementable quantum error correction procedures are devised that can tolerate experimentally achievable error rates. We describe a quantum error correction procedure that requires only a 2-D square lattice of qubits that can interact with their nearest neighbors, yet can tolerate quantum gate error rates over 1%. The precise maximum tolerable error rate depends on the error model, and we calculate values in the range 1.1--1.4% for various physically reasonable models. Even the lowest value represents the highest threshold error rate calculated to date in a geometrically constrained setting, and a 50% improvement over the previous record.
Errors induced in triaxial stress tensor calculations using incorrect lattice parameters
Ruud, C.O. [Pennsylvania State Univ., University Park, PA (United States). Materials Research Lab.; Kozaczek, K.J. [Oak Ridge National Lab., TN (United States)
1994-06-01T23:59:59.000Z
A number of researchers have proposed that for some metallic alloys, an elaborate procedure is necessary in order to improve the accuracy of the measured triaxial stress tensor. Others have been concerned that the uncertainties in establishing the precise zero-stress lattice parameter of an alloyed and/or cold-worked engineering metal could cause significantly more error than would result from ignoring the triaxial stress state and assuming the plane stress condition. This paper illustrates the effect of uncertainties in the zero-stress lattice parameters on the calculated triaxial stress state for zero-stress powders of three common engineering alloys, i.e., 1010 steel, 304 stainless steel, and 2024 aluminum. Also, errors due to incorrect lattice spacing in experimental stress analysis are presented for three examples, i.e., a silicon powder, a 304 stainless steel cylinder, and a diamond. For cases where the plane stress assumption is justified, the uncertainties due to the stress-free lattice parameter can be reduced by a simple measurement.
Tracking granules at the Sun's surface and reconstructing velocity fields. II. Error analysis
R. Tkaczuk; M. Rieutord; N. Meunier; T. Roudier
2007-07-13T23:59:59.000Z
The determination of horizontal velocity fields at the solar surface is crucial to understanding the dynamics and magnetism of the convection zone of the Sun. These measurements can be done by tracking granules. Tracking granules from ground-based observations, however, suffers from the Earth's atmospheric turbulence, which induces image distortion. The focus of this paper is to evaluate the influence of this noise on the maps of velocity fields. We use the coherent structure tracking algorithm developed recently and apply it to two independent series of images that contain the same solar signal. We first show that a k-\omega filtering of the time series of images is highly recommended as a pre-processing step to decrease the noise, while, in contrast, using destretching should be avoided. We also demonstrate that the lifetime of granules has a strong influence on the error bars of velocities and that a threshold on the lifetime should be imposed to minimize errors. Finally, although solar flow patterns are easily recognizable and image quality is very good, it turns out that a time sampling of two images every 21 s is not frequent enough, since image distortion still pollutes velocity fields at a 30% level on the 2500 km scale, i.e. the scale on which granules start to behave like passive scalars. The coherent structure tracking algorithm is a useful tool for noise control on the measurement of surface horizontal solar velocity fields when at least two independent series are available.
Measured Quantum Fourier Transform of 1024 Qubits on Fiber Optics
Akihisa Tomita; Kazuo Nakamura
2004-01-19T23:59:59.000Z
Quantum Fourier transform (QFT) is a key function to realize quantum computers. A QFT followed by measurement was demonstrated on a simple circuit based on fiber-optics. The QFT was shown to be robust against imperfections in the rotation gate. Error probability was estimated to be 0.01 per qubit, which corresponded to error-free operation on 100 qubits. The error probability can be further reduced by taking the majority of the accumulated results. The reduction of error probability resulted in a successful QFT demonstration on 1024 qubits.
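The error reduction by "taking the majority of the accumulated results" can be quantified exactly for independent runs: the majority of n runs is wrong only when more than half of them err. A small sketch using the per-qubit error probability p = 0.01 estimated in the abstract (the run counts n = 3, 5 are illustrative):

```python
from math import comb

def majority_error(p, n):
    """Probability that the majority of n independent runs is wrong,
    when each run errs independently with probability p (n odd)."""
    k = n // 2 + 1  # smallest number of erring runs that flips the majority
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

p = 0.01  # per-qubit error probability estimated in the paper
print(majority_error(p, 3))  # ~3e-4: majority of 3 runs
print(majority_error(p, 5))  # smaller still for 5 runs
```

The residual error falls roughly as p^((n+1)/2), which is why modest repetition already pushes the effective error rate far below the raw per-run value.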
BEAM RELATED SYSTEMATICS IN HIGGS BOSON MASS MEASUREMENT
A. Raspereza, DESY, Notkestrasse 85, D-22607 ... The possible impact of beam-related systematic errors on the Higgs boson mass measurement is discussed, including the effect of differential luminosity spectrum measurements and beam energy spread on the precision of the Higgs boson mass measurement.
Measurement Errors in Visual Servoing Ville Kyrki, Danica Kragic and Henrik I Christensen
Kragic, Danica
... has received a considerable amount of attention from the robotics community. Particularly, the effect of camera ... of the system to perform tasks for a moving target. [The remainder is figure residue: block-diagram labels for pose estimation and the servoing strategy.]
Error analysis of pose measurement from sonic sensors without using speed of sound information
Lai, Chih-Chien
1999-01-01T23:59:59.000Z
[This record consists of thesis front-matter residue: table-of-contents entries (microphone and mike-box positions for testing, effect of distance between transmitters on the system, speed of sound in various substances), the caption of Figure 3.1 ("Diagram of Generation of Plane Equation"), and fragments of Table 3.1 ("Definition of Variables").]
Ultrasonic thickness measurements on corroded steel members: a statistical analysis of error
Konen, Keith Forman
1999-01-01T23:59:59.000Z
... of the Journal of Structural Engineering, ASCE. This study is the first phase of a joint industry project (JIP) funded by the Minerals Management Service of the Department of the Interior, Shell Deepwater Development, Inc., and Mobil Technology Company. ... to the numbering system used in the 1989 JIP. Note that not all members were used in this particular study. [The remainder is a flattened fragment of Table 5.1, "Description of specimens," listing member numbers, diameters (12.50–20.00 in), and wall thicknesses.]
Application of the HWVP measurement error model and feed test algorithms to pilot scale feed testing
Adams, T.L.
1996-03-01T23:59:59.000Z
The purpose of the feed preparation subsystem in the Hanford Waste Vitrification Plant (HWVP) is to provide for control of the properties of the slurry that are sent to the melter. The slurry properties are adjusted so that two classes of constraints are satisfied. Processability constraints guarantee that the process conditions required by the melter can be obtained. For example, there are processability constraints associated with electrical conductivity and viscosity. Acceptability constraints guarantee that the processed glass can be safely stored in a repository. An example of an acceptability constraint is the durability of the product glass. The primary control focus for satisfying both processability and acceptability constraints is the composition of the slurry. The primary mechanism for adjusting the composition of the slurry is mixing the waste slurry with frit of known composition. Spent frit from canister decontamination is also recycled by adding it to the melter feed. A number of processes in addition to mixing are used to condition the waste slurry prior to melting, including evaporation and the addition of formic acid. These processes also have an effect on the feed composition.
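For a single constituent, the central control mechanism, blending waste slurry with frit of known composition, reduces to a two-stream mass balance. A minimal sketch (the masses and mass fractions below are hypothetical; a real feed adjustment solves a coupled system over all glass constituents and both constraint classes at once):

```python
def frit_mass(m_waste, c_waste, c_frit, c_target):
    """Mass of frit (known composition) to blend with a waste slurry so the
    mixture reaches a target mass fraction of one constituent, from
        (m_waste*c_waste + m_frit*c_frit) / (m_waste + m_frit) = c_target.
    """
    if not min(c_waste, c_frit) < c_target < max(c_waste, c_frit):
        raise ValueError("target not reachable by blending these two streams")
    return m_waste * (c_target - c_waste) / (c_frit - c_target)

# Hypothetical: 1000 kg slurry at 5 wt% of a constituent, frit at 60 wt%,
# target 45 wt% in the blended melter feed.
m_frit = frit_mass(1000.0, 0.05, 0.60, 0.45)
mix = (1000.0 * 0.05 + m_frit * 0.60) / (1000.0 + m_frit)
print(m_frit, mix)  # ~2666.7 kg of frit; mixture hits the 0.45 target
```

The same balance, written once per constituent, is what makes frit addition the primary lever for steering the feed into the processability and acceptability windows.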
Automated suppression of errors in LTP-II slope measurements with x-ray optics
Ali, Zulfiqar
2012-01-01T23:59:59.000Z
Conjoint Degradation Model of Disablement for Survival and Longitudinal Data Measured with Errors
SPb. Math. Society Preprint 2003-02, 30 Jul 2003. ... the semiparametric analysis of several new degradation and failure time regression models, without and with time-... The models can be applied in studies of longevity, aging and degradation in survival analysis, biostatistics, epidemiology ...
A posteriori error estimates for elliptic problems with Dirac measure terms in weighted spaces
Morin, Pedro
... by a point charge, acoustic monopoles, or pollutant transport and degradation in an aquatic medium where, due to the different scales involved, the pollution source is modeled as supported on a single point.
Estimation of the linear-plateau segmented regression model in the presence of measurement error
Grimshaw, Scott D.
1985-01-01T23:59:59.000Z
[Garbled equation extraction. Recoverable content: φ(·) denotes the standard normal density; the misclassification probability (Eqs. 2.5–2.6) is written as integrals of the density f_X(t) against normal probabilities; as the number m of repeated observations is increased, lim P[misclassification] = 0, by Lemma B.1, since Φ(ν_m) and Φ(−ν_m) tend to their limiting values on either side of the join point. Therefore, in the limit, the probability of misclassification is zero. When the join point ...]
Buser, Michael Dean
2004-09-30T23:59:59.000Z
[Table-of-contents and figure-list residue: "Estimating PSD Characteristics Based on EPA's 1996 AP-42 List of Emission Factors" (p. 199); "Summary and Conclusions"; an electron microscope photograph of cotton gin exhaust particles (p. 94); Figure 16, "The EPA ideal PM10 and PM2.5 sampler penetration curves overlaid ..."]
Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Method and system for reducing errors in vehicle weighing systems
Hively, Lee M. (Philadelphia, TN); Abercrombie, Robert K. (Knoxville, TN)
2010-08-24T23:59:59.000Z
A method and system (10, 23) for determining vehicle weight to a precision of <0.1% uses a plurality of weight sensing elements (23), a computer (10) for reading in weighing data for a vehicle (25), and produces a dataset representing the total weight of a vehicle via programming (40-53) that is executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing; (b) determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation from the weighing data; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% in the vehicle weight.
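The core idea, identifying an oscillatory mode in the weighing record and removing it before averaging, can be sketched as follows. This is not the patented algorithm: here the mode frequency is assumed known, a single sinusoid is projected out by least squares, and the data are synthetic:

```python
import math

def estimate_static_weight(y, dt, freq):
    """Project a single oscillatory mode at a known frequency out of the
    weighing record (least squares), then average the residual. The
    projection is exact when the record spans an integer number of periods."""
    n = len(y)
    s = [math.sin(2 * math.pi * freq * k * dt) for k in range(n)]
    c = [math.cos(2 * math.pi * freq * k * dt) for k in range(n)]
    a = 2.0 * sum(yk * sk for yk, sk in zip(y, s)) / n
    b = 2.0 * sum(yk * ck for yk, ck in zip(y, c)) / n
    residual = [yk - a * sk - b * ck for yk, sk, ck in zip(y, s, c)]
    return sum(residual) / n

# Synthetic record: 1000 kg static weight plus a 2 Hz rocking mode.
dt, freq = 0.01, 2.0
y = [1000.0 + 5.0 * math.sin(2 * math.pi * freq * k * dt + 0.7)
     for k in range(500)]
print(estimate_static_weight(y, dt, freq))  # recovers ~1000.0
```

The patent's iteration over modes amounts to repeating this removal for each oscillation found in the data until the residual error falls below the 0.1% target.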
On the Fourier Transform Approach to Quantum Error Control
Hari Dilip Kumar
2012-08-24T23:59:59.000Z
Quantum codes are subspaces of the state space of a quantum system that are used to protect quantum information. Some common classes of quantum codes are stabilizer (or additive) codes, non-stabilizer (or non-additive) codes obtained from stabilizer codes, and Clifford codes. These are analyzed in a framework using the Fourier transform on finite groups, the finite group in question being a subgroup of the quantum error group considered. All the classes of codes that can be obtained in this framework are explored, including codes more general than Clifford codes. The error detection properties of one of these more general classes ("direct sums of translates of Clifford codes") are characterized. Example codes are constructed, and computer code-search results are presented and analysed.
Comparison of Wind Power and Load Forecasting Error Distributions: Preprint
Hodge, B. M.; Florita, A.; Orwig, K.; Lew, D.; Milligan, M.
2012-07-01T23:59:59.000Z
The introduction of large amounts of variable and uncertain power sources, such as wind power, into the electricity grid presents a number of challenges for system operations. One issue involves the uncertainty associated with scheduling power that wind will supply in future timeframes. However, this is not an entirely new challenge; load is also variable and uncertain, and is strongly influenced by weather patterns. In this work we make a comparison between the day-ahead forecasting errors encountered in wind power forecasting and load forecasting. The study examines the distribution of errors from operational forecasting systems in two different Independent System Operator (ISO) regions for both wind power and load forecasts at the day-ahead timeframe. The day-ahead timescale is critical in power system operations because it serves the unit commitment function for slow-starting conventional generators.
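A comparison of forecast-error distributions of this kind typically starts from summary statistics of each error series: bias, spread, and tail heaviness relative to a Gaussian. A minimal sketch (the sample error series below are hypothetical, not the ISO data studied in the paper):

```python
def error_stats(errors):
    """Mean (bias), standard deviation, and excess kurtosis of a
    forecast-error series; excess kurtosis > 0 indicates heavier tails
    than a Gaussian."""
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / n
    kurt = sum((e - mean) ** 4 for e in errors) / (n * var * var) - 3.0
    return mean, var ** 0.5, kurt

# Hypothetical day-ahead error series (MW), chosen so one has occasional
# large misses and the other stays near its typical size:
wind_err = [5.0, -3.0, 40.0, -2.0, 1.0, -45.0, 2.0, 3.0, -1.0, 0.0]
load_err = [4.0, -5.0, 6.0, -4.0, 5.0, -6.0, 4.0, -3.0, 3.0, -4.0]
print(error_stats(wind_err))
print(error_stats(load_err))
```

Statistics like these are the raw material for judging whether a Gaussian reserve-sizing assumption is adequate for each error source at the day-ahead timescale.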
On the efficiency of nondegenerate quantum error correction codes for Pauli channels
Gunnar Bjork; Jonas Almlof; Isabel Sainz
2009-05-19T23:59:59.000Z
We examine the efficiency of pure, nondegenerate quantum error-correcting codes for Pauli channels. Specifically, we investigate whether correction of multiple errors in a block is more efficient than using a code that only corrects one error per block. Block coding with multiple-error correction cannot increase the efficiency when the qubit error probability is below a certain value and the code size is fixed. More surprisingly, existing multiple-error correction codes with a code length equal to or less than 256 qubits have lower efficiency than the optimal single-error correcting codes for any value of the qubit error probability. We also investigate how efficient various proposed nondegenerate single-error correcting codes are compared to the limit set by the code redundancy and by the necessary conditions for hypothetically existing nondegenerate codes. We find that existing codes are close to optimal.
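The underlying quantity in such comparisons is the block success probability under i.i.d. qubit noise: a nondegenerate code correcting t errors on n qubits succeeds when at most t qubits err. A sketch (the [[5,1,3]] and [[11,1,5]] parameters are standard textbook examples, not the specific codes surveyed, and efficiency as defined in the paper also weighs the redundancy, which this sketch omits):

```python
from math import comb

def block_success(n, t, p):
    """Probability that an n-qubit block suffers at most t errors under
    i.i.d. qubit error probability p, i.e. that a nondegenerate
    t-error-correcting code decodes correctly."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1))

p = 1e-3
# Five-qubit [[5,1,3]] code: one logical qubit, corrects t = 1.
single = block_success(5, 1, p)
# [[11,1,5]] code: same logical qubit, corrects t = 2 at higher redundancy.
multi = block_success(11, 2, p)
print(1 - single, 1 - multi)  # residual failure probabilities
```

The multi-error block fails less often here, but the paper's point is that once success is normalized by the extra qubits spent, single-error codes can still come out ahead at low p.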
Scaling behavior of discretization errors in renormalization and improvement constants
Bhattacharya, T; Lee, W; Sharpe, S R; Bhattacharya, Tanmoy; Gupta, Rajan; Lee, Weonjong; Sharpe, Stephen R.
2006-01-01T23:59:59.000Z
Non-perturbative results for improvement and renormalization constants needed for on-shell and off-shell O(a) improvement of bilinear operators composed of Wilson fermions are presented. The calculations have been done in the quenched approximation at beta=6.0, 6.2 and 6.4. To quantify residual discretization errors we compare our data with results from other non-perturbative calculations and with one-loop perturbation theory.
Error message recording and reporting in the SLC control system
Spencer, N.; Bogart, J.; Phinney, N.; Thompson, K.
1985-04-01T23:59:59.000Z
Error or information messages that are signaled by control software either in the VAX host computer or the local microprocessor clusters are handled by a dedicated VAX process (PARANOIA). Messages are recorded on disk for further analysis and displayed at the appropriate console. Another VAX process (ERRLOG) can be used to sort, list and histogram various categories of messages. The functions performed by these processes and the algorithms used are discussed.
From the Lab to the real world : sources of error in UF {sub 6} gas enrichment monitoring
Lombardi, Marcie L.
2012-03-01T23:59:59.000Z
Safeguarding uranium enrichment facilities is a serious concern for the International Atomic Energy Agency (IAEA). Safeguards methods have changed over the years, most recently switching to an improved safeguards model that calls for new technologies to help keep up with the increasing size and complexity of today’s gas centrifuge enrichment plants (GCEPs). One of the primary goals of the IAEA is to detect the production of uranium at levels greater than those an enrichment facility may have declared. In order to accomplish this goal, new enrichment monitors need to be as accurate as possible. This dissertation will look at the Advanced Enrichment Monitor (AEM), a new enrichment monitor designed at Los Alamos National Laboratory. Specifically explored are various factors that could potentially contribute to errors in a final enrichment determination delivered by the AEM. There are many factors that can cause errors in the determination of uranium hexafluoride (UF{sub 6}) gas enrichment, especially during the period when the enrichment is being measured in an operating GCEP. To measure enrichment using the AEM, a passive 186-keV (kiloelectronvolt) measurement is used to determine the {sup 235}U content in the gas, and a transmission measurement or a gas pressure reading is used to determine the total uranium content. A transmission spectrum is generated using an x-ray tube and a “notch” filter. In this dissertation, changes that could occur in the detection efficiency and the transmission errors that could result from variations in pipe-wall thickness will be explored. Additional factors that could contribute to errors in the enrichment measurement will also be examined, including changes in the gas pressure, ambient and UF{sub 6} temperature, instrumental errors, and the effects of uranium deposits on the inside of the pipe walls. The sensitivity of the enrichment calculation to these various parameters will then be evaluated.
Previously, UF{sub 6} gas enrichment monitors have required empty pipe measurements to accurately determine the pipe attenuation (the pipe attenuation is typically much larger than the attenuation in the gas). This dissertation reports on a method for determining the thickness of a pipe in a GCEP when obtaining an empty pipe measurement may not be feasible. This dissertation studies each of the components that may add to the final error in the enrichment measurement, and the factors that were taken into account to mitigate these issues are also detailed and tested. The use of an x-ray generator as a transmission source and the attending stability issues are addressed. Both analytical calculations and experimental measurements have been used. For completeness, some real-world analysis results from the URENCO Capenhurst enrichment plant have been included, where the final enrichment error has remained well below 1% for approximately two months.
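The transmission part of such a measurement follows the usual exponential attenuation law, T = I/I0 = exp(-μt). A minimal sketch of inferring the traversed wall thickness from a measured transmission ratio; the attenuation coefficient and thickness below are made-up illustrative values, not numbers from the dissertation:

```python
from math import exp, log

def wall_thickness_cm(transmission, mu_wall_per_cm):
    """Invert T = exp(-mu * t) for the total wall thickness t traversed,
    assuming attenuation in the gas is negligible relative to the pipe."""
    return -log(transmission) / mu_wall_per_cm

# Hypothetical numbers: mu = 0.35 /cm, beam crossing 0.4 cm of wall in total.
mu = 0.35
t_true = 0.4
t_est = wall_thickness_cm(exp(-mu * t_true), mu)
```

In practice the inversion is only as good as the knowledge of mu at the transmission x-ray energy, which is one reason wall-thickness variation feeds directly into the enrichment error budget.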
Measurements of Faint Supernovae
Robert A. Schommer; N. B. Suntzeff; R. C. Smith
1999-09-04T23:59:59.000Z
We summarize the current status of cosmological measurements using SNe Ia. Searches to an average depth of z~0.5 have found approximately 100 SNe Ia to date, and measurements of their light curves and peak magnitudes find these objects to be about 0.25 mag fainter than predictions for an empty universe. These measurements imply low values for Omega_M and a positive cosmological constant, with high statistical significance. Searches out to z~1.0-1.2 for SNe Ia (peak magnitudes of I~24.5) will greatly aid in confirming this result, or demonstrate the existence of systematic errors. Multi-epoch spectra of SNe Ia at z~0.5 are needed to constrain possible evolutionary effects. I-band searches should be able to find SNe Ia out to z~2. We discuss some simulations of deep searches and discovery statistics at several redshifts.
Estimating the error in simulation prediction over the design space
Shinn, R. (Rachel); Hemez, F. M. (François M.); Doebling, S. W. (Scott W.)
2003-01-01T23:59:59.000Z
This study addresses the assessment of accuracy of simulation predictions. A procedure is developed to validate a simple non-linear model defined to capture the hardening behavior of a foam material subjected to a short-duration transient impact. Validation means that the predictive accuracy of the model must be established, not just in the vicinity of a single testing condition, but for all settings or configurations of the system. The notion of validation domain is introduced to designate the design region where the model's predictive accuracy is appropriate for the application of interest. Techniques brought to bear to assess the model's predictive accuracy include test-analysis correlation, calibration, bootstrapping and sampling for uncertainty propagation and metamodeling. The model's predictive accuracy is established by training a metamodel of prediction error. The prediction error is not assumed to be systematic. Instead, it depends on which configuration of the system is analyzed. Finally, the prediction error's confidence bounds are estimated by propagating the uncertainty associated with specific modeling assumptions.
Runtime Detection of C-Style Errors in UPC Code
Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D
2011-09-29T23:59:59.000Z
Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
Assessing the capabilities of patternshop measurement systems
Peters, F.E.; Voigt, R.C.
1995-12-01T23:59:59.000Z
Casting customers continue to demand tighter dimensional tolerances for casting features. The foundry then places demands on the patternshop to produce more accurate patterns. Control of all sources of dimensional variability, including measurement system variability in the foundry and patternshop, is important to ensure casting accuracy. Sources of dimensional casting errors will be reviewed, focusing on the importance of accurate patterns. The foundry and patternshop together must work within the tolerance limits established by the customer. In light of contemporary pattern tolerances, the patternshop must review its current measurement methods. The measurement instrument must have sufficient resolution to detect part variability. In addition, the measurement equipment must be used consistently by all patternmakers to ensure adequacy of the measurement system. Without these precautions, measurement error can significantly contribute to overall pattern variability. Simple robust methods to check the adequacy of pattern measurement systems are presented. These tests will determine the variability that is contributed by the measurement equipment and by the operators. Steps to control measurement variability once it has been identified are also provided. Measurement system errors for various types of measurement equipment are compared to the allowable pattern tolerances, which are established jointly by the foundry and patternshop.
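One simple way to quantify whether measurement-system variability is small relative to pattern tolerances is a crude repeatability-and-reproducibility calculation. This sketch is a generic gauge study, not the specific robust methods the paper presents, and all numbers are hypothetical:

```python
import statistics

def gauge_to_tolerance_ratio(measurements_by_operator, tolerance):
    """Repeatability = pooled within-operator spread; reproducibility =
    spread of operator means. Compare total measurement spread (using the
    common 5.15-sigma convention) to the allowed tolerance."""
    within = [statistics.pstdev(m) for m in measurements_by_operator]
    repeatability = (sum(w * w for w in within) / len(within)) ** 0.5
    operator_means = [statistics.mean(m) for m in measurements_by_operator]
    reproducibility = statistics.pstdev(operator_means)
    sigma_ms = (repeatability**2 + reproducibility**2) ** 0.5
    return 5.15 * sigma_ms / tolerance

# Hypothetical data: two patternmakers measure one feature three times each.
ratio = gauge_to_tolerance_ratio(
    [[10.01, 10.02, 10.00], [10.03, 10.05, 10.04]], tolerance=0.5
)
```

A ratio well below 1 suggests the measurement system consumes only a small share of the pattern tolerance; a ratio near 1 means measurement error alone can exhaust it.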
Index terms -- Auto Tuning; Measurement Weights; Power System State Estimation; Random Error Variances. Abstract -- This paper describes an approach for choosing and updating measurement weights used...
University of California at Santa Barbara
Evaluating radial component current measurements from CODAR high-frequency radars and moored current meters. Eight of the moorings carried vector measuring current meters (VMCMs); the ninth carried an upward-looking acoustic Doppler profiler. The effects of bearing errors on total velocity vector estimates were evaluated, including a bearing error of 19°.
Absolute beam emittance measurements at RHIC using ionization profile monitors
Minty, M. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Connolly, R [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Liu, C. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Summers, T. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Tepikian, S. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.
2014-08-15T23:59:59.000Z
In the past, comparisons between emittance measurements obtained using ionization profile monitors, Vernier scans (using as input the measured rates from the zero degree counters, or ZDCs), the polarimeters and the Schottky detectors evidenced significant variations of up to 100%. In this report we present studies of the RHIC ionization profile monitors (IPMs). After identifying and correcting for two systematic instrumental errors in the beam size measurements, we present experimental results showing that the remaining dominant error in beam emittance measurements at RHIC using the IPMs was imprecise knowledge of the local beta functions. After removal of the systematic errors and implementation of measured beta functions, precise emittance measurements result. Also, consistency between the emittances measured by the IPMs and those derived from the ZDCs was demonstrated.
Recompile if your codes run into MPICH error after the maintenance...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Recompile if your codes run into MPICH errors after the maintenance on 6/25/2014. June 27, 2014.
Design techniques for graph-based error-correcting codes and their applications
Lan, Ching Fu
2006-04-12T23:59:59.000Z
Error-correcting (channel) coding. The main idea of error-correcting codes is to add redundancy to the information to be transmitted so that the receiver can exploit the correlation between the transmitted information and the redundancy to correct or detect errors caused...
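The redundancy idea can be shown with the simplest possible code, a three-fold repetition code. This is an illustration of the principle only, not one of the graph-based codes the thesis designs:

```python
def encode_repetition(bit, n=3):
    """Add redundancy by repeating the information bit n times."""
    return [bit] * n

def decode_repetition(block):
    """Majority vote over the block corrects up to (n - 1) // 2 bit flips."""
    return int(sum(block) > len(block) // 2)

codeword = encode_repetition(1)   # transmit 1 as [1, 1, 1]
received = [1, 0, 1]              # the channel flipped the middle bit
decoded = decode_repetition(received)
```

The receiver exploits exactly the correlation the abstract mentions: all n symbols should agree, so disagreement localizes and corrects the error.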
V-109: Google Chrome WebKit Type Confusion Error Lets Remote...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
V-109: Google Chrome WebKit Type Confusion Error Lets Remote Users Execute Arbitrary Code...
T-545: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets...
T-545: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets Remote Users Execute Arbitrary Code...
Cognitive analysis of students' errors and misconceptions in variables, equations, and functions
Li, Xiaobao
2009-05-15T23:59:59.000Z
To address such issues, three basic algebra concepts – variable, equation, and function – are used to analyze students’ errors, possible buggy algorithms, and the conceptual basis of these errors: misconceptions. Through the research on these three basic concepts...
Scattering effects at near-wall flow measurements using Doppler global velocimetry
Fischer, Andreas; Haufe, Daniel; Buettner, Lars; Czarske, Juergen
2011-07-20T23:59:59.000Z
Doppler global velocimetry (DGV) is considered to be a useful optical measurement tool for acquiring flow velocity fields. Often near-wall measurements are required, which is still challenging due to errors resulting from background scattering and multiple-particle scattering. Since the magnitudes of both errors are unknown so far, they are investigated by scattering simulations and experiments. Multiple-particle scattering mainly causes a stochastic error, which can be reduced by averaging. Contrary to this, background scattering results in a relative systematic error, which is directly proportional to the ratio of the background scattered light power to the total scattered light power. After applying a correction method and optimizing the measurement arrangement, a subsonic flat plate boundary layer was successfully measured achieving a minimum wall distance of 100 {mu}m with a maximum relative error of 6%. The investigations reveal the current capabilities and perspectives of DGV for near-wall measurements.
Shared Dosimetry Error in Epidemiological Dose-Response Analyses
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo
2015-03-23T23:59:59.000Z
Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e., the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
Using Graphs for Fast Error Term Approximation of Time-varying Datasets
Nuber, C; LaMar, E C; Pascucci, V; Hamann, B; Joy, K I
2003-02-27T23:59:59.000Z
We present a method for the efficient computation and storage of approximations of error tables used for error estimation of a region between different time steps in time-varying datasets. The error between two time steps is defined as the distance between the data of these time steps. Error tables are used to look up the error between different time steps of a time-varying dataset, especially when run time error computation is expensive. However, even the generation of error tables itself can be expensive. For n time steps, the exact error look-up table (which stores the error values for all pairs of time steps in a matrix) has a memory complexity and pre-processing time complexity of O(n2), and O(1) for error retrieval. Our approximate error look-up table approach uses trees, where the leaf nodes represent original time steps, and interior nodes contain an average (or best-representative) of the children nodes. The error computed on an edge of a tree describes the distance between the two nodes on that edge. Evaluating the error between two different time steps requires traversing a path between the two leaf nodes, and accumulating the errors on the traversed edges. For n time steps, this scheme has a memory complexity and pre-processing time complexity of O(nlog(n)), a significant improvement over the exact scheme; the error retrieval complexity is O(log(n)). As we do not need to calculate all possible n2 error terms, our approach is a fast way to generate the approximation.
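A minimal sketch of the tree scheme described above: leaves hold the original time steps, each interior node holds the average of its children, each edge stores the distance between child and parent, and the error between two time steps is approximated by accumulating edge errors along the leaf-to-leaf path. The flat array layout is an assumption for illustration, not the paper's exact data structure:

```python
import math

def dist(a, b):
    """Error metric between two data arrays (Euclidean distance)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_tree(steps):
    """Build the tree bottom-up. Returns parallel arrays: node data,
    parent index (None at the root), and the error on the edge to the parent."""
    nodes = [list(s) for s in steps]
    parent = [None] * len(steps)
    edge = [0.0] * len(steps)
    level = list(range(len(steps)))
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            a, b = level[i], level[i + 1]
            avg = [(x + y) / 2 for x, y in zip(nodes[a], nodes[b])]
            nodes.append(avg)
            parent.append(None)
            edge.append(0.0)
            p = len(nodes) - 1
            parent[a], parent[b] = p, p
            edge[a], edge[b] = dist(nodes[a], avg), dist(nodes[b], avg)
            nxt.append(p)
        if len(level) % 2:
            nxt.append(level[-1])  # odd node promoted unchanged
        level = nxt
    return nodes, parent, edge

def approx_error(i, j, parent, edge):
    """Accumulate edge errors along the path from leaf i to leaf j
    through their lowest common ancestor: O(log n) per query."""
    anc = {}
    n, acc = i, 0.0
    while n is not None:
        anc[n] = acc
        if parent[n] is not None:
            acc += edge[n]
        n = parent[n]
    n, acc = j, 0.0
    while n not in anc:
        acc += edge[n]
        n = parent[n]
    return acc + anc[n]

nodes, parent, edge = build_tree([[0.0], [1.0], [2.0], [3.0]])
e03 = approx_error(0, 3, parent, edge)
e01 = approx_error(0, 1, parent, edge)
```

Storing only the n-1 edge errors gives the O(n log n) preprocessing and O(log n) lookup the abstract describes, at the cost of the path sum only approximating the true pairwise error.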
T-719:Apache mod_proxy_ajp HTTP Processing Error Lets Remote Users Deny Service
Broader source: Energy.gov [DOE]
A remote user can cause the backend server to remain in an error state until the retry timeout expires.
McReynolds, W.L. (Bonneville Power Administration, Vancouver, WA (US)); Badley, D.E. (N.W. Power Pool, Coordinating Office, Portland, OR (US))
1991-08-01T23:59:59.000Z
This paper describes an automatic generation control (AGC) system that simultaneously reduces time error and accumulated inadvertent interchange energy in an interconnected power system. This method is called automatic time error and accumulated inadvertent interchange reduction (AIIR). With this method, control areas help correct the system time error when doing so also tends to correct accumulated inadvertent interchange. Thus, in one step, accumulated inadvertent interchange and system time error are corrected.
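The decision rule described above can be sketched as a sign test: a control area biases its generation against the time error only when that correction also reduces its own accumulated inadvertent interchange. The gain, units, and sign convention below are hypothetical, not taken from the paper:

```python
def aiir_bias_mw(time_error_s, accumulated_inadvertent_mwh, gain_mw_per_s=10.0):
    """Apply a generation bias against the system time error only when the
    time error and the area's accumulated inadvertent interchange have the
    same sign, so one control action corrects both quantities at once
    (assumed convention: positive means 'fast clock' / 'net over-export')."""
    if time_error_s * accumulated_inadvertent_mwh > 0:
        return -gain_mw_per_s * time_error_s
    return 0.0
```

When the signs disagree, the area takes no corrective bias, leaving the correction to areas whose inadvertent interchange would benefit from it.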
Shota Kino; Taiki Nii; Holger F. Hofmann
2015-02-23T23:59:59.000Z
Joint measurements of non-commuting observables are characterized by unavoidable measurement uncertainties that can be described in terms of the error statistics for input states with well-defined values for the target observables. However, a complete characterization of measurement errors must include the correlations between the errors of the two observables. Here, we show that these correlations appear in the experimentally observable measurement statistics obtained by performing the joint measurement on maximally entangled pairs. For two-level systems, the results indicate that quantum theory requires imaginary correlations between the measurement errors of X and Y since these correlations are represented by the operator product XY=iZ in the measurement operators. Our analysis thus reveals a directly observable consequence of non-commutativity in the statistics of quantum measurements.
Design error diagnosis and correction in digital circuits
Nayak, Debashis
1998-01-01T23:59:59.000Z
...each primary output would impose a constraint on the on-set and off-set. These constraints should be combined to derive the final on-set and off-set of the new function. Proposition 2: [9, 18, 17] Let i be the index of the primary outputs... to this equation are deleted. The work in [17] is also based on Boolean comparisons and applies to multiple errors. Overall, their method does not guarantee a solution. Test-vector simulation methods proposed for the DEDC problem include [20, 22, 26]. In [20...
Optimum decoding of TCM in the presence of phase errors
Han, Jae Choong
1990-01-01T23:59:59.000Z
discussed. Our approach is to assume that intersymbol interference has been effectively removed by the equalizer while the phase tracking scheme has partially removed the phase jitter, in which case the output of the equalizer will have a slowly varying... The DAL [1] used the decision at the output of the Viterbi decoder to demodulate the local carrier. The performance degradation of coded 8-PSK when disturbed by recovered carrier phase error and jitter is investigated in [6], in which simulation...
Effects of color coding on keying time and errors
Wooldridge, Brenda Gail
1983-01-01T23:59:59.000Z
were to determine the effects, if any, of color coding upon the error rate and location time of special function keys on a computer keyboard. An ACT-YA CRT keyboard interfaced with a Cromemco microcomputer was used. There were 84 high school... to communicate with more and more computer-like devices. The most common computer/human interface is the terminal, consisting of a display screen and keyboard. The format and layout on the display screen of computer-generated information is generally...
Common Errors and Innovative Solutions Transcript | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Regression analysis with longitudinal measurements
Ryu, Duchwan
2005-08-29T23:59:59.000Z
For example, in the cardiotoxic effects of doxorubicin chemotherapy for the treatment of acute lymphoblastic leukemia in childhood (Lipsitz et al., 2002; Fitzmaurice et al., 2003), the design points are not pre-defined but determined by the preceding response. This outcome-dependent feature of measurements leads to biased estimation of the regression line. As noticed by Lipsitz et al. (2002) and Fitzmaurice et al. (2003), even the least squares estimates will be biased, which does not require the distributional assumption of response error...
Trade-off of lossless source coding error exponents Cheng Chang Anant Sahai
Sahai, Anant
Trade-off of lossless source coding error exponents. Cheng Chang (HP Labs, Palo Alto) and Anant Sahai (EECS, UC Berkeley). Slides presented at ISIT 2008.
An Energy-Aware Fault Tolerant Scheduling Framework for Soft Error Resilient Cloud Computing Systems
Pedram, Massoud
Soft error resiliency has become a major concern for modern computing systems as CMOS technology scales [8, 9]. Although it is impossible to entirely eliminate spontaneous soft errors, they can...
Digication Error Message:"Your username is already in use by another account."
Barrash, Warren
You may need to use a different account (if you have one). If you receive the error message "Your username is already in use by another account," here's how to log into your Digication account. (For example, if the error message appeared when using your employee account, switch to your employee...
Non-Concurrent Error Detection and Correction in Fault-Tolerant Discrete-Time LTI Dynamic Systems
Hadjicostis, Christoforos
The state of a fault-tolerant discrete-time LTI dynamic system is maintained in encoded form to allow error detection and correction to be performed through concurrent parity checks, in a scheme that allows parity checks to capture the evolution of errors in the system and, based on non-concurrent parity...
Error Analysis of Ia Supernova and Query on Cosmic Dark Energy
Qiuhe Peng; Yiming Hu; Kun Wang; Yu Liang
2012-01-16T23:59:59.000Z
Some serious faults have been found in the error analysis of SNIa observations. Redoing the same error analysis of SNIa following our approach, we find that the average total observational error of SNIa is clearly greater than $0.55^m$, so we cannot decide whether the universe is undergoing accelerating expansion or not.
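The total observational error referred to above is conventionally built from independent contributions added in quadrature; a minimal sketch with made-up component magnitudes (the actual error budget is the subject of the paper):

```python
from math import sqrt

def total_error_mag(*sigmas):
    """Combine independent error contributions (in magnitudes) in quadrature."""
    return sqrt(sum(s * s for s in sigmas))

# Hypothetical components, e.g. photometric and K-correction errors.
total = total_error_mag(0.3, 0.4)
```

The dispute over whether the total exceeds 0.55 mag thus comes down to which components are included and how large each is taken to be.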
The robustness of magic state distillation against errors in Clifford gates
Jochym-O'Connor, Tomas; Helou, Bassam; Laflamme, Raymond
2012-01-01T23:59:59.000Z
Quantum error correction and fault-tolerance have provided the possibility for large scale quantum computations without a detrimental loss of quantum information. A very natural class of gates for fault-tolerant quantum computation is the Clifford gate set and as such their usefulness for universal quantum computation is of great interest. Clifford group gates augmented by magic state preparation give the possibility of simulating universal quantum computation. However, experimentally one cannot expect to perfectly prepare magic states. Nonetheless, it has been shown that by repeatedly applying operations from the Clifford group and measurements in the Pauli basis, the fidelity of noisy prepared magic states can be increased arbitrarily close to a pure magic state [1]. We investigate the robustness of magic state distillation to perturbations of the initial states to arbitrary locations in the Bloch sphere due to noise. Additionally, we consider a depolarizing noise model on the quantum gates in the decoding ...
Information-preserving structures: A general framework for quantum zero-error information
Blume-Kohout, Robin [Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, Ontario N2L 2Y5 (Canada); Ng, Hui Khoon [Institute for Quantum Information, California Institute of Technology, Pasadena, California 91125 (United States); Poulin, David [Department de Physique, Universite de Sherbrooke, Quebec J1K 2R1 (Canada); Viola, Lorenza [Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, New Hampshire 03755 (United States)
2010-12-15T23:59:59.000Z
Quantum systems carry information. Quantum theory supports at least two distinct kinds of information (classical and quantum), and a variety of different ways to encode and preserve information in physical systems. A system's ability to carry information is constrained and defined by the noise in its dynamics. This paper introduces an operational framework, using information-preserving structures, to classify all the kinds of information that can be perfectly (i.e., with zero error) preserved by quantum dynamics. We prove that every perfectly preserved code has the same structure as a matrix algebra, and that preserved information can always be corrected. We also classify distinct operational criteria for preservation (e.g., 'noiseless','unitarily correctible', etc.) and introduce two natural criteria for measurement-stabilized and unconditionally preserved codes. Finally, for several of these operational criteria, we present efficient (polynomial in the state-space dimension) algorithms to find all of a channel's information-preserving structures.
Gardner, Christopher
2008-01-01T23:59:59.000Z
A statistical description and model of individual healthcare expenditures in the US has been developed for measuring value in healthcare. We find evidence that healthcare expenditures are quantifiable as an infusion-diffusion process, which can be thought of intuitively as a steady change in the intensity of treatment superimposed on a random process reflecting variations in the efficiency and effectiveness of treatment. The arithmetic mean represents the net average annual cost of healthcare; and when multiplied by the arithmetic standard deviation, which represents the effective risk, the result is a measure of healthcare cost control. Policymakers, providers, payors, or patients that decrease these parameters are generating value in healthcare. The model has an average absolute prediction error of approximately 10-12% across the range of expenditures which spans 6 orders of magnitude over a nearly 10-year period. For the top 1% of the population with the largest expenditures, representing 20%-30% of total ...
Aperiodic dynamical decoupling sequences in presence of pulse errors
Zhi-Hui Wang; V. V. Dobrovitski
2011-01-12T23:59:59.000Z
Dynamical decoupling (DD) is a promising tool for preserving the quantum states of qubits. However, small imperfections in the control pulses can seriously affect the fidelity of decoupling, and qualitatively change the evolution of the controlled system at long times. Using both analytical and numerical tools, we theoretically investigate the effect of pulse error accumulation for two aperiodic DD sequences: the Uhrig DD (UDD) protocol [G. S. Uhrig, Phys. Rev. Lett. {\\bf 98}, 100504 (2007)] and the Quadratic DD (QDD) protocol [J. R. West, B. H. Fong and D. A. Lidar, Phys. Rev. Lett. {\\bf 104}, 130501 (2010)]. We consider the implementation of these sequences using the electron spins of phosphorus donors in silicon, where DD sequences are applied to suppress dephasing of the donor spins. The dependence of the decoupling fidelity on different initial states of the spins is the focus of our study. We investigate in detail the initial drop in the DD fidelity and its long-term saturation. We also demonstrate that by applying the control pulses along different directions, the performance of QDD protocols can be noticeably improved, and explain the reason for this improvement. Our results can be useful for future implementations of the aperiodic decoupling protocols, and for better understanding of the impact of errors on quantum control of spins.
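For reference, the UDD pulse timing from Uhrig's paper places the j-th pulse at t_j = T sin^2(jπ / (2n + 2)) for j = 1..n; a direct transcription:

```python
from math import pi, sin

def udd_pulse_times(n, total_time):
    """Pulse times of the n-pulse Uhrig dynamical decoupling sequence over
    an evolution of duration total_time: t_j = T * sin^2(j*pi / (2n + 2))."""
    return [total_time * sin(j * pi / (2 * n + 2)) ** 2 for j in range(1, n + 1)]

times = udd_pulse_times(3, 1.0)  # three pulses, symmetric about T/2
```

The aperiodic (non-uniform) spacing is what distinguishes UDD from periodic sequences such as CPMG, and it is this structure whose robustness to pulse errors the paper examines.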
Pollutant measurements Nils Mole, Finn Palmgren & Hao Zhang
Mole, Nils
We deal with measurement techniques and strategies appropriate to major pollutants in both air and water, and also with the effects of unavoidable measurement errors. Pollutant Measurements in Air: The atmosphere is an important medium for transport and transformation of pollutants. Air pollutants can...
Global tropospheric ozone modeling: Quantifying errors due to grid resolution
Wild, Oliver; Prather, Michael J
2006-01-01T23:59:59.000Z
TRACE-P measurements representative of the western Pacific converge on a value representative of the region.
Uncertainty Analysis Technique for OMEGA Dante Measurements
May, M J; Widmann, K; Sorce, C; Park, H; Schneider, M
2010-05-07T23:59:59.000Z
The Dante is an 18-channel filtered X-ray diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the absolute calibration of each channel into a one-sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
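The Monte Carlo parameter variation idea can be sketched generically: perturb each channel reading by its one-sigma uncertainty, re-run the unfold, and take the spread of the results as the error bar. The linear "unfold" below is a hypothetical stand-in for the real Dante algorithm, and all numbers are illustrative:

```python
import random
import statistics

def unfold_flux(voltages, response):
    """Hypothetical stand-in for the Dante unfold: a weighted sum of
    channel voltages (the real unfold is far more involved)."""
    return sum(v * r for v, r in zip(voltages, response))

def monte_carlo_flux_error(voltages, sigmas, response, n_trials=1000, seed=0):
    """Draw n_trials perturbed voltage sets from per-channel Gaussian
    error functions, unfold each, and report the mean and spread."""
    rng = random.Random(seed)
    fluxes = [
        unfold_flux(
            [v + rng.gauss(0.0, s) for v, s in zip(voltages, sigmas)], response
        )
        for _ in range(n_trials)
    ]
    return statistics.mean(fluxes), statistics.stdev(fluxes)

mean_flux, flux_err = monte_carlo_flux_error(
    voltages=[1.0, 2.0, 0.5], sigmas=[0.05, 0.1, 0.02], response=[3.0, 1.0, 4.0]
)
```

Because each trial propagates all channel errors through the full unfold, correlations introduced by the unfold itself are captured automatically, which is the advantage over per-channel error formulas.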
Coordinated joint motion control system with position error correction
Danko, George (Reno, NV)
2011-11-22T23:59:59.000Z
Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.
Statistical evaluation of design-error related accidents
Ott, K.O.; Marchaterre, J.F.
1980-01-01T23:59:59.000Z
In a recently published paper (Campbell and Ott, 1979), a general methodology was proposed for the statistical evaluation of design-error related accidents. The evaluation aims at an estimate of the combined residual frequency of yet unknown types of accidents lurking in a given technological system. Here, the original methodology is extended so as to apply to a variety of systems that evolve during the development of large-scale technologies. A special categorization of incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of nuclear power reactor technology, considering serious accidents whose progression involves a particular design inadequacy.
Statistical Error analysis of Nucleon-Nucleon phenomenological potentials
R. Navarro Perez; J. E. Amaro; E. Ruiz Arriola
2014-06-10T23:59:59.000Z
Nucleon-Nucleon potentials are commonplace in nuclear physics and are determined from a finite number of experimental data, with limited precision, sampling the scattering process. We study the statistical assumptions implicit in the standard least-squares fitting procedure and apply, along with more conventional tests, a tail-sensitive quantile-quantile test as a simple and reliable tool to verify the normality of residuals. We show that the fulfilment of normality tests is linked to a judicious and consistent selection of a nucleon-nucleon database. These considerations prove crucial to a proper statistical error analysis and uncertainty propagation. We illustrate these issues by analyzing about 8000 published proton-proton and neutron-proton scattering data. This enables the construction of potentials meeting all statistical requirements necessary for statistical uncertainty estimates in nuclear structure calculations.
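The quantile-quantile idea behind the normality check above can be sketched with the standard library. This is not the tail-sensitive test of the paper, just the underlying construction: ordered residuals are compared against standard-normal quantiles, and a correlation coefficient near 1 supports normality. The residuals here are synthetic stand-ins.

```python
import math
import random
import statistics
from statistics import NormalDist

# Q-Q style normality check on fit residuals (synthetic data; the real test in
# the paper is a tail-sensitive variant of this idea).

random.seed(1)
residuals = [random.gauss(0.0, 1.0) for _ in range(500)]  # stand-in residuals

n = len(residuals)
ordered = sorted(residuals)
# theoretical standard-normal quantiles at plotting positions (i + 0.5)/n
theory = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

r = pearson(ordered, theory)  # Q-Q correlation; near 1 supports normality
```

Non-normal residuals (e.g. from an inconsistent database entry) would bend the Q-Q relation, pulling `r` away from 1, which is what motivates using such tests to select the scattering database.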
Source Coding with Mismatched Distortion Measures
Niesen, Urs; Wornell, Gregory
2008-01-01T23:59:59.000Z
We consider the problem of lossy source coding with a mismatched distortion measure. That is, we investigate what distortion guarantees can be made with respect to distortion measure $\\tilde{\\rho}$, for a source code designed such that it achieves distortion less than $D$ with respect to distortion measure $\\rho$. We find a single-letter characterization of this mismatch distortion and study properties of this quantity. These results give insight into the robustness of lossy source coding with respect to modeling errors in the distortion measure. They also provide guidelines on how to choose a good tractable approximation of an intractable distortion measure.
Kim, Leonard H.; Zhang, Miao; Howell, Roger W.; Yue, Ning J.; Khan, Atif J. [Department of Radiation Oncology, University of Medicine and Dentistry of New Jersey: Robert Wood Johnson Medical School and Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States); Department of Radiology, University of Medicine and Dentistry of New Jersey: New Jersey Medical School, Newark, New Jersey 07103 (United States)]
2013-01-15T23:59:59.000Z
Purpose: Recent recommendations by the American Association of Physicists in Medicine Task Group 186 emphasize the importance of understanding material properties and their effect on inhomogeneity-corrected dose calculation for brachytherapy. Radiographic contrast is normally injected into breast brachytherapy balloons. In this study, the authors independently estimate properties of contrast solution that were expected to be incorrectly specified in a commercial brachytherapy dose calculation algorithm. Methods: The mass density and atomic weight fractions of a clinical formulation of radiographic contrast solution were determined using manufacturers' data. The mass density was verified through measurement and compared with the density obtained by the treatment planning system's CT calibration. The atomic weight fractions were used to determine the photon interaction cross section of the contrast solution for a commercial high-dose-rate (HDR) brachytherapy source and compared with that of muscle. Results: The density of contrast solution was 10% less than that obtained from the CT calibration. The cross section of the contrast solution for the HDR source was 1.2% greater than that of muscle. Both errors could be addressed by overriding the density of the contrast solution in the treatment planning system. Conclusions: The authors estimate the error in mass density and cross section parameters used by a commercial brachytherapy dose calculation algorithm for radiographic contrast used in a clinical breast brachytherapy practice. This approach is adaptable to other clinics seeking to evaluate dose calculation errors and determine appropriate density override values if desired.
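The density bookkeeping in the study above can be illustrated with a small sketch. All numbers below are hypothetical, not the clinical formulation from the paper; the mixture rule assumes ideal volume additivity.

```python
# Sketch of estimating a contrast-solution mass density from component data
# (all numbers hypothetical). Assuming ideal volume additivity, the mixture
# density follows 1/rho_mix = sum(w_i / rho_i) for mass fractions w_i.

components = [
    ("water",    0.85, 1.000),   # (name, mass fraction, density in g/cm^3)
    ("contrast", 0.15, 2.000),   # hypothetical contrast agent density
]

inv_rho = sum(w / rho for _, w, rho in components)
rho_mix = 1.0 / inv_rho          # estimated true solution density

rho_ct = 1.20  # hypothetical density implied by the CT calibration curve
rel_error = (rho_ct - rho_mix) / rho_mix   # discrepancy a density override fixes
```

A discrepancy like `rel_error` is the kind of error the authors address by overriding the contrast-solution density in the treatment planning system.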
Measurement uncertainty analysis techniques applied to PV performance measurements
Wells, C.
1992-10-01T23:59:59.000Z
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and into tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Valid data are defined as data having known and documented paths of: origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.
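The basic bookkeeping described above, combining random and systematic contributions into an interval believed to contain the true value, can be sketched as follows. The readings and the calibration uncertainty are illustrative, and quadrature combination is one common convention, not the paper's specific procedure.

```python
import math
import statistics

# Minimal sketch of uncertainty bookkeeping (illustrative numbers only):
# random uncertainty from repeated readings, systematic uncertainty from
# calibration, combined in quadrature to form an interval for the true value.

readings = [100.2, 99.8, 100.1, 100.4, 99.9, 100.0]  # hypothetical PV power, W
mean_val = statistics.mean(readings)
u_random = statistics.stdev(readings) / math.sqrt(len(readings))  # std. error
u_system = 0.5   # hypothetical calibration (systematic) uncertainty, W

u_combined = math.sqrt(u_random**2 + u_system**2)
interval = (mean_val - 2 * u_combined, mean_val + 2 * u_combined)  # ~95% interval
```

Note how the systematic term dominates here: taking more readings shrinks `u_random` but leaves `u_combined` nearly unchanged, which is exactly why a pre-test analysis helps identify where design changes are needed.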
Hinckley, C.M.
1994-01-01T23:59:59.000Z
The performance of Japanese products in the marketplace points to the dominant role of quality in product competition. Our focus is motivated by the tremendous pressure to improve conformance quality by reducing defects to previously unimaginable limits, in the range of 1 to 10 parts per million. Toward this end, we have developed a new model of conformance quality that addresses each of the three principal defect sources: (1) variation, (2) human error, and (3) complexity. Although the role of variation in conformance quality is well documented, errors occur so infrequently that their significance is not well known. We have shown that statistical methods are not useful in characterizing and controlling errors, the most common source of defects. Excessive complexity is also a root source of defects, since it increases both errors and variation defects. A missing link in defining a global model has been the lack of a sound correlation between complexity and defects. We have used Design for Assembly (DFA) methods to quantify assembly complexity and have shown that assembly times can be described by the Pareto distribution, in a clear exception to the Central Limit Theorem. Within individual companies we have found defects to be highly correlated with DFA measures of complexity in broad studies covering tens of millions of assembly operations. Applying the global concepts, we predicted that Motorola's Six Sigma method would reduce defects only by roughly a factor of two rather than by orders of magnitude, a prediction confirmed by Motorola's data. We have also shown that the potential defect rates of product concepts can be compared in the earliest stages of development. The global Conformance Quality Model has demonstrated that the best strategy for improvement depends upon a company's quality control strengths and weaknesses.
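The heavy-tailed behavior attributed to assembly times above can be illustrated with synthetic Pareto samples. The shape parameter and sample size are made up, not the DFA data from the study; the point is only that a Pareto population behaves very differently from a normal one.

```python
import random
import statistics

# Illustrative sketch (synthetic data, not the DFA study): Pareto-distributed
# "assembly times" have a heavy tail, so the mean sits well above the median
# and a small fraction of operations carries a large share of the total time,
# unlike a normal distribution where mean and median nearly coincide.

random.seed(2)
alpha = 2.5  # hypothetical shape parameter; smaller alpha = heavier tail
times = [random.paretovariate(alpha) for _ in range(10_000)]

mean_t = statistics.mean(times)
median_t = statistics.median(times)
tail_share = sum(sorted(times)[-1000:]) / sum(times)  # share held by top 10%
```

With these parameters the top 10% of operations accounts for roughly a quarter of the total time, which hints at why a few complex operations can dominate defect behavior.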
Heid, Matthias; Luetkenhaus, Norbert [Quantum Information Theory Group, Institut fuer theoretische Physik I and Max-Planck Research Group, Institute of Optics, Information and Photonics, Universitaet Erlangen-Nuernberg, Staudtstrasse 7/B2, 91058 Erlangen (Germany)
2006-05-15T23:59:59.000Z
We investigate the performance of a continuous-variable quantum key distribution scheme in a practical setting. More specifically, we take a nonideal error reconciliation procedure into account. The quantum channel connecting the two honest parties is assumed to be lossy but noiseless. Secret key rates are given for the case that the measurement outcomes are postselected or a reverse reconciliation scheme is applied. The reverse reconciliation scheme loses its initial advantage in the practical setting. If one combines postselection with reverse reconciliation, however, much of this advantage can be recovered.
Jeff Phillips; Changhu Xing; Colby Jensen; Heng Ban
2011-07-01T23:59:59.000Z
A technique adapted from the guarded-comparative-longitudinal heat flow method was selected for measuring the thermal conductivity of a nuclear fuel compact over a temperature range characteristic of its usage. This technique fulfills the requirement for non-destructive measurement of the composite compact. Although numerous measurement systems have been built on the guarded comparative method, the comprehensive systematic (bias) and measurement (precision) uncertainties associated with this technique have not been fully analyzed. In addition to the geometric effect on the bias error, which has been analyzed previously, this paper studies the working condition, which is another potential error source. Using finite element analysis, this study showed the effect of these two types of error sources on the thermal conductivity measurement process and the limitations in the design selection of various parameters, considering their effect on the precision error. The results and conclusions provide a valuable reference for designing and operating an experimental measurement system using this technique.
Microsoft PowerPoint - Reducing Solar Resource Error Through...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Degradation (0.5-1%); Transposition to Plane of Array (0.5-2%); Energy Simulation & Plant Losses (3-5%); Solar Resource Uncertainty (Measurement, IA Variability, POR, ...)
Contagious error sources would need time travel to prevent quantum computation
Gil Kalai; Greg Kuperberg
2015-05-07T23:59:59.000Z
We consider an error model for quantum computing that consists of "contagious quantum germs" that can infect every output qubit when at least one input qubit is infected. Once a germ actively causes error, it continues to cause error indefinitely for every qubit it infects, with arbitrary quantum entanglement and correlation. Although this error model looks much worse than quasi-independent error, we show that it reduces to quasi-independent error with the technique of quantum teleportation. The construction, which was previously described by Knill, is that every quantum circuit can be converted to a mixed circuit with bounded quantum depth. We also consider the restriction of bounded quantum depth from the point of view of quantum complexity classes.
Determination of the star tracker-inertial measurement unit misalignment
Shearer, Milo Edward
1967-01-01T23:59:59.000Z
...navigation system should have the following characteristics: 1. The instrumentation is designed to measure force through the mechanical implementation of Newton's second law of motion. 2. The system's accuracy is limited by the degree of perf... navigation possible, since the inertial instruments otherwise would have accuracy requirements that are now unattainable. The two principal accelerometer errors are the bias error and the scale...
Method and apparatus for detecting timing errors in a system oscillator
Gliebe, Ronald J. (Library, PA); Kramer, William R. (Bethel Park, PA)
1993-01-01T23:59:59.000Z
A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
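The delayed-comparison idea in the patent abstract above can be modeled in software. This is a toy model with signals as boolean sample streams; the actual invention is a hardware circuit, and the delay length here is hypothetical.

```python
from collections import deque

# Toy model of delayed-signal comparison (the real apparatus is a hardware
# circuit): the oscillator output is compared against a copy of itself delayed
# by exactly one period, so any deviation from strict periodicity raises a
# timing-error flag (which would light the LED in the hardware version).

PERIOD = 8  # samples per oscillator period (hypothetical)

def detect_timing_errors(samples):
    delay = deque([None] * PERIOD)  # one-period delay line
    errors = []
    for i, s in enumerate(samples):
        delayed = delay.popleft()
        delay.append(s)
        if delayed is not None and s != delayed:
            errors.append(i)  # sample disagrees with the delayed reference
    return errors

good = ([True] * 4 + [False] * 4) * 4   # clean square wave: no errors flagged
bad = list(good)
bad[20] = not bad[20]                   # inject a single timing glitch
```

A single glitch is flagged twice: once when it disagrees with the delayed reference, and once more a period later when the clean signal disagrees with the delayed copy of the glitch.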
Kaeli, David R.
A Field Analysis of System-level Effects of Soft Errors Occurring in Microprocessors. ...will generate sufficient charge to cause a soft error. In the absence of error correction schemes, ... rates for unprotected systems [8]. Soft errors are emerging as a significant obstacle to increasing...
Kaeli, David R.
A Field Failure Analysis of Microprocessors used in Information Systems. The analysis focuses on soft error rate (SER) estimation of microprocessors used in information systems, based on error logs and error traces collected from systems in the field.
Pitch Error and Shear Web Disbond Detection on Wind Turbine Blades...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
American Institute of Aeronautics and Astronautics. Pitch Error and Shear Web Disbond Detection on Wind Turbine Blades for Offshore Structural Health and Prognostics Management...
Accounting for model error due to unresolved scales within ensemble Kalman filtering
Lewis Mitchell; Alberto Carrassi
2014-09-02T23:59:59.000Z
We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and the extended Kalman filter. The model error statistics required in the analysis update are estimated using historical reanalysis increments and a suitable model error evolution law. Two versions of the method are described: a time-constant treatment, where the same model error statistical description is used at every analysis, and a time-varying treatment, where the assumed model error statistics are randomly sampled at each analysis step. We compare both methods with the standard practice of dealing with model error through inflation and localization, and illustrate our results with numerical simulations on a low-order nonlinear system exhibiting chaotic dynamics. The results show that the filter skill is significantly improved by the proposed model error treatments, and that both methods require far less parameter tuning than the standard approach. Furthermore, the proposed approach is simple to implement within a pre-existing ensemble-based scheme. The general implications for the use of the proposed approach in the framework of square-root filters such as the ETKF are also discussed.
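The contrast between inflation and an explicit model-error term can be seen already in a scalar Kalman analysis step. This is an illustrative scalar sketch, not the ETKF machinery of the paper; the inflation factor and the model-error variance Q are hypothetical values.

```python
# Scalar sketch of two ways to account for model error in a Kalman-type
# analysis step (illustrative only): multiplicative inflation scales the
# forecast variance, while an additive treatment augments it with an explicit
# model-error variance Q (in the paper, Q would be estimated from historical
# reanalysis increments; here it is a hypothetical value).

def kalman_update(x_f, p_f, y, r):
    """One scalar analysis step: forecast (x_f, p_f), observation (y, r)."""
    k = p_f / (p_f + r)             # Kalman gain
    return x_f + k * (y - x_f), (1.0 - k) * p_f

x_f, p_f = 1.0, 0.2                 # forecast mean and (underestimated) variance
y, r = 2.0, 0.5                     # observation and its error variance

# (a) multiplicative inflation: P_f -> lambda^2 * P_f
x_infl, p_infl = kalman_update(x_f, 1.5**2 * p_f, y, r)
# (b) additive model error: P_f -> P_f + Q
x_add, p_add = kalman_update(x_f, p_f + 0.3, y, r)
```

Both treatments widen the forecast variance so the observation gets more weight than it would with the raw, overconfident `p_f`; they differ in how that widening is parameterized and tuned.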
V-172: ISC BIND RUNTIME_CHECK Error Lets Remote Users Deny Service...
Broader source: Energy.gov (indexed) [DOE]
...causes the target resolver to crash. IMPACT: Triggering this defect will cause the affected server to exit with an error, denying service to recursive DNS clients that use that...
Choose and choose again: appearance-reality errors, pragmatics and logical ability
Deák, Gedeon O; Enright, Brian
2006-01-01T23:59:59.000Z
Development, 62, 753–766. Speer, J.R. (1984). Two practicalolder still make errors (e.g. Speer, 1984), some preschool
The Importance of Run-time Error Detection Glenn R. Luecke 1
Luecke, Glenn R.
Iowa State University's High Performance Computing Group, Iowa State University, Ames, Iowa 50011, USA. Evaluating run-time error detection capabilities.
Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON; Vassilevski, Yuri [Los Alamos National Laboratory
2009-01-01T23:59:59.000Z
We present a new technology for generating meshes that minimize the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^{-1} and the gradient of the error is proportional to N_h^{-1/2}, which are the optimal asymptotics. The methodology is verified with numerical experiments.
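The quoted asymptotics can be checked in a simpler setting. The sketch below is a 1D analogue, not the paper's metric-based mesh generator: piecewise-linear interpolation error scales like h^2, so on N uniform intervals it drops by about 4x when N doubles; for a 2D triangulation with N_h triangles, h^2 ~ N_h^{-1}, which matches the abstract's rate.

```python
import math

# 1D analogue of second-order interpolation error decay (a sketch only):
# max |f - linear interpolant| on [0, 1] scales like h^2 = (1/n)^2.

def interp_error(f, n, samples_per_cell=50):
    """Max |f - piecewise-linear interpolant| on [0, 1] with n intervals."""
    h = 1.0 / n
    worst = 0.0
    for i in range(n):
        x0 = i * h
        f0, f1 = f(x0), f(x0 + h)
        for j in range(samples_per_cell + 1):
            x = x0 + h * j / samples_per_cell
            lin = f0 + (f1 - f0) * (x - x0) / h
            worst = max(worst, abs(f(x) - lin))
    return worst

e16 = interp_error(math.sin, 16)
e32 = interp_error(math.sin, 32)
ratio = e16 / e32   # close to 4: doubling n quarters the error
```

The near-4 ratio confirms second-order convergence; the paper's contribution is achieving the analogous optimal rate on adapted 2D meshes rather than uniform ones.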
West, Randy
2011-01-01T23:59:59.000Z
Noisy Channel Models in SMT ... correcting ESL errors using phrasal SMT techniques. In Proceedings of ... et al. (2006) use phrasal SMT techniques to identify and ...
A surrogate-based uncertainty quantification with quantifiable errors
Bang, Y.; Abdel-Khalik, H. S. [North Carolina State Univ., Raleigh, NC 27695 (United States)
2012-07-01T23:59:59.000Z
Surrogate models are often employed to reduce the computational cost of uncertainty quantification, where one is interested in propagating input parameter uncertainties through a complex engineering model to estimate response uncertainties. An improved surrogate construction approach is introduced here which places a premium on reducing the associated computational cost. Unlike existing methods, where the surrogate is constructed first and then employed to propagate uncertainties, the new approach combines both sensitivity and uncertainty information to render a further reduction in the computational cost. Mathematically, the reduction is described by a range-finding algorithm that identifies a subspace in the parameter space such that parameter uncertainties orthogonal to the subspace contribute a negligible amount to the propagated uncertainties. Moreover, the error resulting from the reduction can be upper-bounded. The new approach is demonstrated using a realistic nuclear assembly model and compared to existing methods in terms of computational cost and accuracy of uncertainties. Although we believe the algorithm is general, it is applied here to linear-based surrogates and Gaussian parameter uncertainties. The generalization to nonlinear models will be detailed in a separate article. (authors)
Plasma dynamics and a significant error of macroscopic averaging
Marek A. Szalek
2005-05-22T23:59:59.000Z
The methods of macroscopic averaging used to derive the macroscopic Maxwell equations from electron theory are methodologically incorrect and lead in some cases to a substantial error. For instance, these methods do not take into account the existence of a macroscopic electromagnetic field EB, HB generated by carriers of electric charge moving in a thin layer adjacent to the boundary of the physical region containing these carriers. If this boundary is impenetrable for charged particles, then in its immediate vicinity all carriers are accelerated towards the inside of the region. The existence of the privileged direction of acceleration results in the generation of the macroscopic field EB, HB. The contributions to this field from individual accelerated particles are described with a sufficient accuracy by the Lienard-Wiechert formulas. In some cases the intensity of the field EB, HB is significant not only for deuteron plasma prepared for a controlled thermonuclear fusion reaction but also for electron plasma in conductors at room temperatures. The corrected procedures of macroscopic averaging will induce some changes in the present form of plasma dynamics equations. The modified equations will help to design improved systems of plasma confinement.
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
Pevey, Ronald E.
2005-09-15T23:59:59.000Z
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
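The USL bookkeeping described above can be sketched as a simple acceptance test. The numbers and the exact margin convention below are hypothetical; actual criticality safety procedures define the bias treatment precisely, and sign conventions for the bias vary between procedures.

```python
import math

# Sketch of an upper-subcritical-limit (USL) acceptance check (illustrative
# numbers and a hypothetical margin convention): a calculated k-effective is
# accepted only if, after adding the bias allowance and an n-sigma statistical
# margin, it still falls below the USL.

def is_subcritical_ok(k_calc, sigma_calc, bias, sigma_bias, usl, n_sigma=2.0):
    """Conservative test: pad k_calc by the bias and combined n-sigma margins.

    Sign convention assumed here: a negative bias means the code underpredicts,
    so it is subtracted (i.e. added back) before comparison to the USL.
    """
    k_adjusted = k_calc - bias + n_sigma * math.hypot(sigma_calc, sigma_bias)
    return k_adjusted < usl

# hypothetical benchmarking outcomes
ok = is_subcritical_ok(k_calc=0.930, sigma_calc=0.002,
                       bias=-0.005, sigma_bias=0.004, usl=0.95)
too_close = is_subcritical_ok(k_calc=0.945, sigma_calc=0.002,
                              bias=-0.005, sigma_bias=0.004, usl=0.95)
```

Because `sigma_calc` enters the margin, running longer (smaller `sigma_calc`) trades computer time against how close to the USL a configuration can be accepted, which is the optimization tension the paper analyzes.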
Aperiodic dynamical decoupling sequences in presence of pulse errors
Wang, Zhi-Hui
2011-01-01T23:59:59.000Z
Dynamical decoupling (DD) is a promising tool for preserving the quantum states of qubits. However, small imperfections in the control pulses can seriously affect the fidelity of decoupling and qualitatively change the evolution of the controlled system at long times. Using both analytical and numerical tools, we theoretically investigate the effect of pulse-error accumulation for two aperiodic DD sequences: the Uhrig DD (UDD) protocol [G. S. Uhrig, Phys. Rev. Lett. {\\bf 98}, 100504 (2007)] and the Quadratic DD (QDD) protocol [J. R. West, B. H. Fong and D. A. Lidar, Phys. Rev. Lett. {\\bf 104}, 130501 (2010)]. We consider the implementation of these sequences using the electron spins of phosphorus donors in silicon, where DD sequences are applied to suppress dephasing of the donor spins. The dependence of the decoupling fidelity on different initial states of the spins is the focus of our study. We investigate in detail the initial drop in the DD fidelity and its long-term saturation. We also demonstra...
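For concreteness, the UDD sequence cited above prescribes pulse timings t_j = T sin^2(pi j / (2n + 2)) for n pi-pulses over total time T (the pulse-error accumulation analysis itself is beyond this sketch).

```python
import math

# Pulse timings for the Uhrig DD (UDD) sequence: for n pi-pulses over total
# time T, the j-th pulse is applied at t_j = T * sin^2(pi * j / (2n + 2)).
# For n = 1 this reduces to a single echo pulse at T/2.

def udd_times(n, total_time=1.0):
    return [total_time * math.sin(math.pi * j / (2 * n + 2)) ** 2
            for j in range(1, n + 1)]

times = udd_times(5)
# Compared with a periodic (CPMG-like) sequence at t_j = T*(j - 0.5)/n, UDD
# crowds its pulses toward the ends of the interval, symmetrically about T/2.
```

The symmetric, end-crowded spacing is what gives UDD its high-order suppression of dephasing; the paper's point is that real, imperfect pulses complicate this picture at long times.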
Error analysis of nuclear forces and effective interactions
R. Navarro Perez; J. E. Amaro; E. Ruiz Arriola
2014-09-04T23:59:59.000Z
The Nucleon-Nucleon interaction is the starting point for ab initio nuclear structure and nuclear reaction calculations. These are effectively carried out via effective interactions fitted to scattering data up to a maximal center-of-mass momentum. However, NN interactions are subject to statistical and systematic uncertainties which are expected to propagate and have some impact on the predictive power and accuracy of theoretical calculations, regardless of the numerical accuracy of the method used to solve the many-body problem. We stress the necessary conditions required for a correct and self-consistent statistical interpretation of the discrepancies between theory and experiment, which enables a subsequent statistical error propagation and correlation analysis. We comprehensively discuss a stringent and recently proposed tail-sensitive normality test and provide a simple recipe to implement it. As an application, we analyze the deduced uncertainties and correlations of effective interactions in terms of Moshinsky-Skyrme parameters and effective field theory counterterms, as derived from the bare NN potential containing One-Pion-Exchange and Chiral Two-Pion-Exchange interactions inferred from scattering data.
Improve Industrial Temperature Measurement Precision for Cost-Effective Energy Usage
Lewis, C. W.
...setting, errors between any two measurement instruments of 0.1 °F can result in an error of 4 megawatts of energy! You do not want to have too many megawatts disappearing from a nuclear power station before you start to do something about it. FIGURE...
Mapping GPS positional errors with spatial linear mixed models
Militino, A. F.
Nowadays, GPS receivers are very reliable because of their good accuracy and precision; however, uncertainty is also inherent in geospatial data. Quality of GPS measurements can be influenced by atmospheric disturbances, ...
Adam Miranowicz; Sahin K. Ozdemir; Jiri Bajer; Go Yusa; Nobuyuki Imoto; Yoshiro Hirayama; Franco Nori
2014-10-09T23:59:59.000Z
We discuss methods of quantum state tomography for solid-state systems with a large nuclear spin $I=3/2$ in nanometer-scale semiconductor devices based on a quantum well. Due to quadrupolar interactions, the Zeeman levels of these nuclear-spin devices become nonequidistant, forming a controllable four-level quantum system (known as quartit or ququart). The occupation of these levels can be selectively and coherently manipulated by multiphoton transitions using the techniques of nuclear magnetic resonance (NMR) [Yusa et al., Nature (London) 434, 101 (2005)]. These methods are based on an unconventional approach to NMR, where the longitudinal magnetization $M_z$ is directly measured. This is in contrast to the standard NMR experiments and tomographic methods, where the transverse magnetization $M_{xy}$ is detected. The robustness against errors in the measured data is analyzed by using condition numbers. We propose several methods with optimized sets of rotations. The optimization is applied to decrease the number of NMR readouts and to improve the robustness against errors, as quantified by condition numbers. An example of state reconstruction, using Monte Carlo methods, is presented. Tomographic methods for quadrupolar nuclei with higher-spin numbers (including $I=7/2$) are also described.
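The role of condition numbers as a robustness measure can be shown on a toy linear model. The 2x2 matrices below are hypothetical stand-ins, not the actual quartit tomography operators: a large condition number means small errors in the measured data are strongly amplified in the reconstructed parameters.

```python
import math

# Toy sketch: condition number (2-norm) of a 2x2 measurement matrix via a
# closed-form SVD (squared singular values = eigenvalues of A^T A). Matrices
# here are hypothetical, not the paper's tomography operators.

def cond_2x2(a, b, c, d):
    """Condition number of [[a, b], [c, d]]; math.inf if singular."""
    p = a * a + c * c
    q = b * b + d * d
    r = a * b + c * d
    disc = math.sqrt((p - q) ** 2 + 4 * r * r)
    s_max = math.sqrt((p + q + disc) / 2)
    s_min = math.sqrt(max((p + q - disc) / 2, 0.0))
    return math.inf if s_min == 0.0 else s_max / s_min

good = cond_2x2(1.0, 0.0, 0.0, 1.0)   # independent readouts: kappa = 1
bad = cond_2x2(1.0, 1.0, 1.0, 1.001)  # nearly redundant readouts: kappa >> 1
```

Choosing rotation sets that keep the condition number small is the optimization goal the abstract describes: the well-conditioned case barely amplifies readout noise, while the nearly redundant case amplifies it by orders of magnitude.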
Problems with Accurate Atomic Lifetime Measurements of Multiply Charged Ions
Trabert, E
2009-02-19T23:59:59.000Z
A number of recent atomic lifetime measurements on multiply charged ions have reported uncertainties lower than 1%. Such a level of accuracy challenges theory, which is a good thing. However, a few lessons learned from earlier precision lifetime measurements on atoms and singly charged ions suggest remaining cautious about the systematic errors of experimental techniques.
Measuring the dark matter equation of state
Serra, Ana Laura
2011-01-01T23:59:59.000Z
The nature of the dominant component of galaxies and clusters remains unknown. While the astrophysics community supports the cold dark matter (CDM) paradigm as a key factor in the current cosmological model, no direct CDM detections have been performed. Faber and Visser (2006) have suggested a simple method for measuring the dark matter equation of state. By combining kinematical and gravitational lensing data it is possible to test the widely adopted assumption of pressureless dark matter. Following this formalism, we have measured the dark matter equation of state for the first time using improved techniques. We have found that the value of the equation-of-state parameter is consistent with pressureless dark matter within the errors. Nevertheless, the measured value is lower than expected. This fact follows from the well-known differences between the masses determined by lensing and kinematical methods. We have tested our techniques using simulations and we have also analyzed possible sources of errors that c...
Integrated Control-Path Design and Error Recovery in the Synthesis of Digital
Chakrabarty, Krishnendu
Integrated Control-Path Design and Error Recovery in the Synthesis of Digital Microfluidic Lab-on-Chip. YANG ZHAO, TAO XU, and KRISHNENDU CHAKRABARTY, Duke University. Recent advances in digital microfluidics ... that incorporates control paths and an error-recovery mechanism in the design of a digital microfluidic lab...
Observability-aware Directed Test Generation for Soft Errors and Crosstalk Faults
Mishra, Prabhat
In modern System-on-Chip (SoC) design methodology, it is found that regions where errors are detected ... Directed test generation has emerged as an important component of any chip design methodology to detect both functional and electrical ...
Maintaining Standards: Differences between the Standard Deviation and Standard Error, and When to Use Each
University of California, Santa Cruz
David L Streiner, PhD. Many people confuse the standard deviation (SD) and the standard error of the mean (SE) and are unsure which, if either, to use in presenting data in graphical or tabular form.
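The distinction discussed above fits in a few lines of code (with made-up data): the SD describes the spread of individual observations, while the SE = SD/sqrt(n) describes the uncertainty of the estimated mean and shrinks as more observations are collected.

```python
import math
import statistics

# SD vs SE in a nutshell (illustrative data): the SD characterizes the sample's
# spread; the SE characterizes how precisely the mean is known.

data = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0]
sd = statistics.stdev(data)            # sample standard deviation
se = sd / math.sqrt(len(data))         # standard error of the mean
# Quadrupling n roughly halves the SE but leaves the SD essentially unchanged,
# which is why error bars must state which of the two they show.
```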
Minimum Bit Error Probability of Large Randomly Spread MC-CDMA Systems in Multipath Rayleigh Fading
Müller, Ralf R.
... to calculate the bit error probability in the large system limit for randomly assigned spreading sequences ... detection ... is accurate if the number of users and the spreading factor are large. His calculations
Threshold analysis with fault-tolerant operations for nonbinary quantum error correcting codes
Kanungo, Aparna
2005-11-01T23:59:59.000Z
Quantum error correcting codes have been introduced to encode the data bits in extra redundant bits in order to accommodate errors and correct them. However, due to the delicate nature of the quantum states or faulty gate operations, there is a...
Drift-magnetohydrodynamical model of error-field penetration in tokamak plasmas
Fitzpatrick, Richard
Drift-magnetohydrodynamical model of error-field penetration in tokamak plasmas. A. Cole and R. Fitzpatrick. A published magnetohydrodynamical (MHD) model of error-field penetration in tokamak plasmas is extended ... in ohmic tokamak plasmas. © 2006 American Institute of Physics. DOI: 10.1063/1.2178167
A Posteriori Error Estimates with Post-Processing for Nonconforming Finite Elements
Schieweck, Friedhelm
We propose an a posteriori error estimate in the energy norm which uses post-processing as an additive term, and show that it has the same asymptotic behavior as the energy norm of the real discretization error itself. ... in the global energy norm, we demonstrate the concept of using a conforming approximation ...
A System for 3D Error Visualization and Assessment of Digital Elevation Models
Gousie, Michael B.
A system that displays a DEM and possible errors in 3D, along with its associated contour or sparse data and detail. The cutting tool is semi-transparent so that the profile is seen in the context of the 3D surface.
Paris-Sud XI, Université de
Network Code Design from Unequal Error Protection Coding: Channel-Aware Receiver Design. In this paper, we propose Unequal Error Protection (UEP) coding theory as a viable and flexible method for the design of network codes for multisource multirelay ...
DysList: An Annotated Resource of Dyslexic Errors
Rello, Luz
A resource of errors extracted from texts written by people with dyslexia. Each of the errors was annotated with a set of characteristics; such a resource is valuable, especially given the difficulty of finding texts written by people with dyslexia. Keywords: Errors, Dyslexia, Visual, Phonetics, Resource. Dyslexia is a reading and spelling disorder ...
Embedded packet video transmission over wireless channels using power control and forward error
Granelli, Fabrizio
A scheme for implementing packet prioritization based on a non-uniform allocation of the available transmission energy; to cope with the high percentage of transmission errors in the wireless medium and the limited energy of portable devices, non-uniform energy distribution is jointly employed with error correction schemes in order to achieve optimal ...
Database Error Trapping and Prediction. Mike West & Robert L. Winkler
West, Mike
Examples in industrial quality and reliability control may concern manufactured items, such as electronic components or systems, or components of computer software systems, that are subject to ... Keywords: ERROR DETECTION, ERROR RATES, DATA QUALITY, DATA MAN...
A posteriori error estimates, stopping criteria, and adaptivity for multiphase compositional Darcy flows
We derive a posteriori error estimates for the compositional model of multiphase Darcy flow in porous media, consisting of a system of strongly coupled nonlinear unsteady partial differential and algebraic equations.
ASC Report No. 45/2012 A Numerical Study of Averaging Error
Melenk, Jens Markus
Averaging with polynomials of the same polynomial degree as the finite element solution leads to reliability and efficiency; averaging is a widely used method for gauging errors in finite element methods and steering adaptive mesh refinement.
Improving the Accuracy of Industrial Robots by offline Compensation of Joints Errors
Paris-Sud XI, Université de
Improving the Accuracy of Industrial Robots by offline Compensation of Joints Errors, Adel Olabi. The use of industrial robots in many fields of industry, like prototyping and pre-machining, requires compensation of joint errors. Identification methods are presented with experimental validation on a 6-axis industrial robot.
A Case for Soft Error Detection and Correction in Computational Chemistry
van Dam, Hubertus JJ; Vishnu, Abhinav; De Jong, Wibe A.
2013-09-10T23:59:59.000Z
High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
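The abstract above suggests detection and correction mechanisms for different classes of data structures. One classic scheme in that spirit, though not necessarily the mechanism used in the paper, is an algorithm-based fault-tolerance checksum in the style of Huang and Abraham; the matrix and the injected error below are hypothetical:

```python
import numpy as np

def encode(a):
    """Wrap a matrix with row, column, and total checksums."""
    a = np.asarray(a, dtype=float)
    out = np.zeros((a.shape[0] + 1, a.shape[1] + 1))
    out[:-1, :-1] = a
    out[:-1, -1] = a.sum(axis=1)   # row checksums
    out[-1, :-1] = a.sum(axis=0)   # column checksums
    out[-1, -1] = a.sum()          # total checksum
    return out

def correct_single(enc, tol=1e-6):
    """Detect and repair a single corrupted element via checksum residuals."""
    a = enc[:-1, :-1]
    r = a.sum(axis=1) - enc[:-1, -1]       # row residuals
    c = a.sum(axis=0) - enc[-1, :-1]       # column residuals
    rows = np.nonzero(np.abs(r) > tol)[0]
    cols = np.nonzero(np.abs(c) > tol)[0]
    if len(rows) == 1 and len(cols) == 1:
        # one bad row and one bad column locate the element; the row
        # residual equals the injected error, so subtract it
        a[rows[0], cols[0]] -= r[rows[0]]
    return a

a = np.arange(9, dtype=float).reshape(3, 3)
enc = encode(a)
enc[1, 1] += 5.0                 # simulate a soft error in one element
fixed = correct_single(enc)
```

This repairs any single flipped element at the cost of one extra row and column; multiple simultaneous errors would require stronger codes or recomputation.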
Perturbation-Based Error Analysis of Iterative Image Reconstruction Algorithm for X-Ray Computed Tomography
Fessler, Jeffrey A.
Iterative image reconstruction methods for X-ray computed tomography (CT) have been proposed to improve image quality and reduce dose [1]. The effects of the quantization error in forward-projection and back-projection are analyzed.
Static Detection of API Error-Handling Bugs via Mining Source Code
Young, R. Michael
Static Detection of API Error-Handling Bugs via Mining Source Code, Mithun Acharya and Tao Xie. API error specifications are mined automatically from software package repositories, without requiring any user input; error-handling code is inter-procedurally scattered and not always correctly coded by programmers, making manual inference difficult.
Approximate logic circuits for low overhead, non-intrusive concurrent error detection
Mohanram, Kartik
A low overhead, non-intrusive solution for concurrent error detection (CED) based on approximate logic circuits is proposed in this paper, together with a method for the synthesis of approximate logic circuits.
TYPOGRAPHICAL AND ORTHOGRAPHICAL SPELLING ERROR Kyongho Min*, William H. Wilson*, Yoo-Jin Moon
Wilson, Bill
School of Computer Science and Engineering, The University of New South Wales, Sydney NSW 2052. Prior work covers spelling errors such as typographical (Damerau, 1964; Pollock and Zamora, 1983) and orthographical errors in spontaneous writings of children (Sterling, 1983; Mitton, 1987).
Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States)]; Chan, Maria F. [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States)]; Jarry, Geneviève; Lemire, Matthieu [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada)]; Lowden, John [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States)]; Hampton, Carnell [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States)]; Feygelman, Vladimir [Moffitt Cancer Center, Tampa, Florida 33612 (United States)]
2013-11-15T23:59:59.000Z
Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in-depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. 
Most of the errors were correctable after detection and diagnosis, and the uncorrectable errors provided useful information about system limitations, which is another key element of system commissioning. Conclusions: Many forms of relevant systematic errors can go undetected when the currently prevalent metrics for IMRT/VMAT commissioning are used. If alternative methods and metrics are used instead of (or in addition to) the conventional metrics, these errors are more likely to be detected, and only once they are detected can they be properly diagnosed and rooted out of the system. Removing systematic errors should be a goal not only of commissioning by the end users but also product validation by the manufacturers. For any systematic errors that cannot be removed, detecting and quantifying them is important as it will help the physicist understand the limits of the system and work with the manufacturer on improvements. In summary, IMRT and VMAT commissioning, along with product validation, would benefit from the retirement of the 3%/3 mm passing rates as a primary metric of performance, and the adoption instead of tighter tolerances, more diligent diagnostics, and more thorough analysis.
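The 3%/3 mm gamma metric discussed above combines a dose-difference criterion with a distance-to-agreement (DTA) search. A minimal one-dimensional sketch with a hypothetical Gaussian profile and global normalization (clinical tools operate on 2D/3D dose grids):

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_frac=0.03):
    """Gamma index at each reference point against an evaluated profile."""
    dd_norm = dd_frac * d_ref.max()        # global dose normalization
    g = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        # generalized distance to every evaluated point; keep the minimum
        g2 = ((x_eval - xr) / dta_mm) ** 2 + ((d_eval - dr) / dd_norm) ** 2
        g[i] = np.sqrt(g2.min())
    return g

x = np.linspace(0.0, 100.0, 201)           # positions in mm
d = np.exp(-((x - 50.0) / 20.0) ** 2)      # hypothetical dose profile
g = gamma_1d(x, d, x, d)
passing_rate = 100.0 * np.mean(g <= 1.0)   # identical profiles pass everywhere
```

Tightening the criteria to 2%/2 mm with local normalization, as the authors advocate, changes only the normalization and search parameters; the structure of the computation is the same.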
Out-of-plane ultrasonic velocity measurement
Hall, M.S.; Brodeur, P.H.; Jackson, T.G.
1998-07-14T23:59:59.000Z
A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated. 20 figs.
Out-of-plane ultrasonic velocity measurement
Hall, Maclin S. (Marietta, GA); Brodeur, Pierre H. (Smyrna, GA); Jackson, Theodore G. (Atlanta, GA)
1998-01-01T23:59:59.000Z
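The trigger-jitter cancellation described in this patent abstract relies on cross-correlating a transmitted pulse with its echo captured in the same digitizer record, so a common trigger offset drops out of the lag estimate. A toy sketch with a synthetic pulse (all waveform parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100e6                                   # sample rate in Hz (assumed)
t = np.arange(2048) / fs
# synthetic transmitted pulse: gated 5 MHz tone burst
pulse = np.exp(-((t - 2e-6) / 2e-7) ** 2) * np.sin(2 * np.pi * 5e6 * t)

true_delay = 300                             # echo arrives 300 samples later
for _ in range(5):
    jitter = int(rng.integers(-20, 21))      # trigger jitter shifts the record
    rec = np.roll(pulse, jitter)             # transmitted signal as digitized
    echo = np.roll(pulse, jitter + true_delay)
    xc = np.correlate(echo, rec, mode="full")
    lag = int(xc.argmax()) - (len(rec) - 1)
    # both signals share the same trigger offset, so the lag is unaffected
    assert lag == true_delay
```

Cross-correlating records from two different trigger events instead would add the difference of two independent jitters to the estimated lag, which is exactly the error the same-sample-period scheme eliminates.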
The accuracy of miniature bead thermistors in the measurement of upper air temperature
Thompson, Donald C. (Donald Charles), 1933-
1967-01-01T23:59:59.000Z
A laboratory study was made of the errors of miniature bead thermistors of 5, 10, and 15 mils nominal diameter when used for the measurement of atmospheric temperature. Although the study was primarily concerned with the ...
Accelerator structure bead pull measurement at SLAC
Lewandowski, J R; Miller, R H; Wang, J W
2004-01-01T23:59:59.000Z
Microwave measurement and tuning of accelerator structures are important issues for the current and next generation of high energy physics machines. Application of these measurements both before and after high power processing can reveal information about the structure but may be misinterpreted if measurement conditions are not carefully controlled. For this reason extensive studies to characterize the microwave measurements have been made at SLAC. For the bead pull, a reproducible measurement of less than 1 degree of phase accuracy in total phase drift is needed in order to resolve issues such as phase changes due to structure damage during high power testing. Factors contributing to measurement errors include temperature drift, mechanical vibration, and limitations of measurement equipment such as the network analyzer. Results of this continuing effort will be presented.
Michael Joyce; Bruno Marcos; Thierry Baertschiger
2008-11-26T23:59:59.000Z
The effects of discreteness arising from the use of the N-body method on the accuracy of simulations of cosmological structure formation are not currently well understood. After a discussion of how the relevant discretisation parameters introduced should be extrapolated to recover the Vlasov-Poisson limit, we study numerically, and with analytical methods we have developed recently, the central issue of how finite particle density affects the precision of results. In particular we focus on the power spectrum at wavenumbers around and above the Nyquist wavenumber, in simulations in which the force resolution is taken smaller than the initial interparticle spacing. Using simulations of identical theoretical initial conditions sampled on four different "pre-initial" configurations (three different Bravais lattices, and a glass) we obtain a {\\it lower bound} on the real discreteness error. With the guidance of our analytical results, we establish with confidence that the measured dispersion is not contaminated either by finite box size effects or by subtle numerical effects. Our results show notably that, at wavenumbers {\\it below} the Nyquist wavenumber, the dispersion increases monotonically in time throughout the simulation, while the same is true above the Nyquist wavenumber once non-linearity sets in. For normalizations typical of cosmological simulations, we find lower bounds on errors at the Nyquist wavenumber of order of a percent, and larger above this scale. The only way this error may be reduced below these levels at these scales, and indeed convergence to the physical limit firmly established, is by extrapolation, at fixed values of the other relevant parameters, to the regime in which the mean comoving interparticle distance becomes less than the force smoothing scale.
Measurement uncertainty analysis techniques applied to PV performance measurements
Wells, C.
1992-10-01T23:59:59.000Z
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Valid data are defined as data having known and documented paths of: origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.
Performance and Error Analysis of Knill's Postselection Scheme in a Two-Dimensional Architecture
Ching-Yi Lai; Gerardo Paz; Martin Suchara; Todd A. Brun
2013-05-31T23:59:59.000Z
Knill demonstrated a fault-tolerant quantum computation scheme based on concatenated error-detecting codes and postselection with a simulated error threshold of 3% over the depolarizing channel. We show how to use Knill's postselection scheme in a practical two-dimensional quantum architecture that we designed with the goal of optimizing the error correction properties, while satisfying important architectural constraints. In our 2D architecture, one logical qubit is embedded in a tile consisting of $5\\times 5$ physical qubits. The movement of these qubits is modeled as noisy SWAP gates and the only physical operations that are allowed are local one- and two-qubit gates. We evaluate the practical properties of our design, such as its error threshold, and compare it to the concatenated Bacon-Shor code and the concatenated Steane code. Assuming that all gates have the same error rates, we obtain a threshold of $3.06\\times 10^{-4}$ in a local adversarial stochastic noise model, which is the highest known error threshold for concatenated codes in 2D. We also present a Monte Carlo simulation of the 2D architecture with depolarizing noise and we calculate a pseudo-threshold of about 0.1%. With memory error rates one-tenth of the worst gate error rates, the threshold for the adversarial noise model, and the pseudo-threshold over depolarizing noise, are $4.06\\times 10^{-4}$ and 0.2%, respectively. In a hypothetical technology where memory error rates are negligible, these thresholds can be further increased by shrinking the tiles into a $4\\times 4$ layout.
Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results
Clark, E.L.
1994-07-01T23:59:59.000Z
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
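The Taylor-series error model described above multiplies each input uncertainty by a sensitivity coefficient (a partial derivative) and combines the results in quadrature. A generic sketch using central finite differences for the sensitivities; the pressure-ratio example and its uncertainties are hypothetical:

```python
import numpy as np

def propagate(f, x, u, eps=1e-6):
    """First-order Taylor-series propagation: u_f = sqrt(sum((df/dx_i * u_i)^2)).
    Sensitivity coefficients df/dx_i come from central finite differences."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    sens = np.empty_like(x)
    for i in range(x.size):
        h = eps * max(abs(x[i]), 1.0)
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        sens[i] = (f(xp) - f(xm)) / (2.0 * h)   # df/dx_i
    return float(np.sqrt(np.sum((sens * u) ** 2))), sens

# hypothetical pressure ratio p/p0 with absolute uncertainties 0.5 and 1.0
ratio = lambda v: v[0] / v[1]
u_r, sens = propagate(ratio, x=[50.0, 100.0], u=[0.5, 1.0])
```

For ratios like those in the report, the sensitivities are available analytically; the finite-difference version is just a convenient stand-in when the partials are tedious to derive.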
Validation of Multiple Tools for Flat Plate Photovoltaic Modeling Against Measured Data
Freeman, J.; Whitmore, J.; Blair, N.; Dobos, A. P.
2014-08-01T23:59:59.000Z
This report expands upon a previous work by the same authors, published in the 40th IEEE Photovoltaic Specialists conference. In this validation study, comprehensive analysis is performed on nine photovoltaic systems for which NREL could obtain detailed performance data and specifications, including three utility-scale systems and six commercial scale systems. Multiple photovoltaic performance modeling tools were used to model these nine systems, and the error of each tool was analyzed compared to quality-controlled measured performance data. This study shows that, excluding identified outliers, all tools achieve annual errors within +/-8% and hourly root mean squared errors less than 7% for all systems. It is further shown using SAM that module model and irradiance input choices can change the annual error with respect to measured data by as much as 6.6% for these nine systems, although all combinations examined still fall within an annual error range of +/-8.5%. Additionally, a seasonal variation in monthly error is shown for all tools. Finally, the effects of irradiance data uncertainty and the use of default loss assumptions on annual error are explored, and two approaches to reduce the error inherent in photovoltaic modeling are proposed.
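The two headline metrics in this validation study, annual energy error and hourly RMSE relative to measured data, are straightforward to compute. A sketch with invented hourly arrays; normalizing the RMSE by mean measured power is one common choice, not necessarily NREL's exact definition:

```python
import numpy as np

def annual_error_pct(modeled, measured):
    """Signed annual energy error as a percent of measured annual energy."""
    return 100.0 * (modeled.sum() - measured.sum()) / measured.sum()

def hourly_rmse_pct(modeled, measured):
    """Hourly RMSE normalized by mean measured power, in percent."""
    rmse = np.sqrt(np.mean((modeled - measured) ** 2))
    return 100.0 * rmse / measured.mean()

measured = np.full(24, 2.0)        # hypothetical measured kW, one day
modeled = measured * 1.04          # model biased 4% high everywhere
ae = annual_error_pct(modeled, measured)
rmse = hourly_rmse_pct(modeled, measured)
```

A uniform bias makes the two metrics coincide, as here; scatter without bias would instead show up only in the RMSE, which is why the study reports both.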
An Analysis of the Effect of Gaussian Error in Object Recognition
Sarachik, Karen Beth
1994-02-01T23:59:59.000Z
Object recognition is complicated by clutter, occlusion, and sensor error. Since pose hypotheses are based on image feature locations, these effects can lead to false negatives and positives. In a typical recognition ...
Error Field Correction in DIII-D Ohmic Plasmas With Either Handedness
Jong-Kyu Park, Michael J. Schaffer, Robert J. La Haye, Timothy J. Scoville and Jonathan E. Menard
2011-05-16T23:59:59.000Z
Error field correction results in DIII-D plasmas are presented in various configurations. In both left-handed and right-handed plasma configurations, where the intrinsic error fields become different due to the opposite helical twist (handedness) of the magnetic field, the optimal error correction currents and the toroidal phases of internal(I)-coils are empirically established. Applications of the Ideal Perturbed Equilibrium Code to these results demonstrate that the field component to be minimized is not the resonant component of the external field, but the total field including ideal plasma responses. Consistency between experiment and theory has been greatly improved along with the understanding of ideal plasma responses, but non-ideal plasma responses still need to be understood to achieve the reliable predictability in tokamak error field correction.
Demonstration Integrated Knowledge-Based System for Estimating Human Error Probabilities
Auflick, Jack L.
1999-04-21T23:59:59.000Z
Human Reliability Analysis (HRA) currently comprises at least 40 different methods that are used to analyze, predict, and evaluate human performance in probabilistic terms. Systematic HRAs allow analysts to examine human-machine relationships, identify error-likely situations, and provide estimates of relative frequencies for human errors on critical tasks, highlighting the most beneficial areas for system improvements. Unfortunately, each of HRA's methods has a different philosophical approach, thereby producing estimates of human error probabilities (HEPs) that are a better or worse match to the error-likely situation of interest. Poor selection of methodology, or the improper application of techniques, can produce invalid HEP estimates, and erroneous estimation of potential human failure could have severe consequences in terms of the estimated occurrence of injury, death, and/or property damage.
Absolute Percent Error Based Fitness Functions for Evolving Forecast Models. Andy Novobilski, Ph.D.
Fernandez, Thomas
A strength of evolutionary computing as a method of data mining is its intrinsic ability to drive model selection according to a mixed set of criteria. Based on natural selection, evolutionary computing utilizes evaluation of candidate solutions ...
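An absolute-percent-error fitness function of the kind named in the title can be sketched as follows; mapping MAPE into (0, 1] so that selection can maximize it is our assumption, not necessarily the authors' formulation:

```python
def mape(actual, forecast):
    """Mean absolute percent error; lower means a better forecast."""
    if any(a == 0 for a in actual):
        raise ValueError("percent error is undefined when an actual value is 0")
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

def fitness(actual, forecast):
    """Convert the error into a maximizable fitness in (0, 1]."""
    return 1.0 / (1.0 + mape(actual, forecast))
```

For example, `mape([100, 200], [110, 180])` is about 10%, since both forecasts miss by 10% of their actuals.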
Efficient error correction for speech systems using constrained re-recognition
Yu, Gregory T
2008-01-01T23:59:59.000Z
Efficient error correction of recognition output is a major barrier in the adoption of speech interfaces. This thesis addresses this problem through a novel correction framework and user interface. The system uses constraints ...
V-194: Citrix XenServer Memory Management Error Lets Local Administrat...
Broader source: Energy.gov (indexed) [DOE]
IMPACT: A local user on the guest operating system can exploit a memory management page reference counting error to obtain access on the target host server.
Error analysis of motion transmission mechanisms : design of a parabolic solar trough
Koniski, Cyril (Cyril A.)
2009-01-01T23:59:59.000Z
This thesis presents the error analysis pertaining to the design of an innovative solar trough for use in solar thermal energy generation fields. The research was a collaborative effort between Stacy Figueredo from Prof. ...
Benestad, R E
2013-01-01T23:59:59.000Z
Comment on Scafetta, Nicola. 'Discussion on Common Errors in Analyzing Sea Level Accelerations, Solar Trends and Global Warming.' arXiv:1305.2812 (May 13, 2013a). doi:10.5194/prp-1-37-2013.
Methodology to Analyze the Sensitivity of Building Energy Consumption to HVAC System Sensor Error
Ma, Liang
2012-02-14T23:59:59.000Z
This thesis proposes a methodology for determining sensitivity of building energy consumption of HVAC systems to sensor error. It is based on a series of simulations of a generic building, the model for which is based on several typical input...
Minimizing Actuator-Induced Residual Error in Active Space Telescope Primary Mirrors
Smith, Matthew W.; Miller, David W. September 2010. SSL #12-10
Error analysis of the chirp-z transform when implemented using waveform synthesizers and FFTs
Bielek, T.P.
1990-11-01T23:59:59.000Z
This report analyzes the effects of finite-precision arithmetic on discrete Fourier transforms (DFTs) calculated using the chirp-z transform algorithm. An introduction to the chirp-z transform is given together with a description of how the chirp-z transform is implemented in hardware. Equations for the effects of chirp rate errors, starting frequency errors, and starting phase errors on the frequency spectrum of the chirp-z transform are derived. Finally, the maximum possible errors in the chirp rate, the starting frequencies, and starting phases are calculated and used to compute the worst case effects on the amplitude and phase spectra of the chirp-z transform. 1 ref., 6 figs.
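The chirp-z transform analyzed in this report is conventionally implemented with FFTs via Bluestein's identity nk = (n^2 + k^2 - (k-n)^2)/2, turning the transform into a chirp premultiplication, a convolution, and a chirp postmultiplication. A floating-point sketch (the report's hardware uses finite-precision arithmetic, which this does not model):

```python
import numpy as np

def czt(x, m, w, a):
    """Chirp-z transform X[k] = sum_n x[n] a^(-n) w^(n k), k = 0..m-1,
    computed with FFTs via Bluestein's identity n k = (n^2 + k^2 - (k-n)^2)/2."""
    x = np.asarray(x, dtype=complex)
    n = x.size
    nn, kk = np.arange(n), np.arange(m)
    y = x * a ** (-nn) * w ** (nn ** 2 / 2.0)      # chirp premultiplication
    L = 1
    while L < n + m - 1:                           # FFT length for the convolution
        L *= 2
    v = np.zeros(L, dtype=complex)                 # inverse-chirp kernel w^(-j^2/2)
    v[:m] = w ** (-kk ** 2 / 2.0)
    v[L - n + 1:] = w ** (-np.arange(n - 1, 0, -1) ** 2 / 2.0)
    g = np.fft.ifft(np.fft.fft(y, L) * np.fft.fft(v))
    return g[:m] * w ** (kk ** 2 / 2.0)            # chirp postmultiplication

# sanity check: with a = 1 and w = exp(-2*pi*i/n) the CZT reduces to the DFT
x = np.random.default_rng(1).standard_normal(8)
X = czt(x, 8, np.exp(-2j * np.pi / 8), 1.0)
```

Errors in the chirp rate, starting frequency, or starting phase enter through the `w` and `a` factors, which is where the report's worst-case analysis applies; recent SciPy versions also ship a `scipy.signal.czt` implementation.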
Notes on Human Error Analysis and ...
Calibration and testing as found in the US Licensee Event Reports. Available on request from Risø Library.
Grid-search event location with non-Gaussian error models
Rodi, William L.
This study employs an event location algorithm based on grid search to investigate the possibility of improving seismic event location accuracy by using non-Gaussian error models. The primary departure from the Gaussian ...
Gross Error Detection in Chemical Plants and Refineries for On-Line Optimization
Pike, Ralph W.
Gross Error Detection in Chemical Plants and Refineries for On-Line Optimization, Xueyu Chen, Derya ... Applications are mainly crude units in refineries and ethylene plants; companies include British Petroleum.
Stability of error bounds for semi-infinite convex constraint systems
2010-01-07T23:59:59.000Z
stable if all its "small" perturbations admit a (local or global) error bound, where T is a compact, possibly infinite, Hausdorff space and f_t : R^n → R, t ∈ T, are given functions ...
Gilles Lachaud
Provence Aix-Marseille I, Université de
For detecting and correcting the inevitable errors which creep in during digital ... by the greatest possible number of discs of the same size without any overlaps. The words of a message ...
On the evaluation of human error probabilities for post-initiating events
Presley, Mary R
2006-01-01T23:59:59.000Z
Quantification of human error probabilities (HEPs) for the purpose of human reliability assessment (HRA) is very complex. Because of this complexity, the state of the art includes a variety of HRA models, each with its own ...
Error and uncertainty in estimates of Reynolds stress using ADCP in an energetic ocean state
Rapo, Mark Andrew.
2006-01-01T23:59:59.000Z
(cont.) To that end, the space-time correlations of the error, turbulence, and wave processes are developed and then utilized to find the extent to which the environmental and internal processing parameters contribute to ...
Combined wavelet video coding and error control for internet streaming and multicast
Chu, Tianli
2002-01-01T23:59:59.000Z
In the past several years, advances in Internet video streaming have been tremendous. Originally designed without error protection, Receiver-driven layered multicast (RLM) has proved to be a very effective scheme for scalable video multicast. Though...
The Effect of OCR Errors on Stylistic Text Classification Sterling Stuart Stein
The Effect of OCR Errors on Stylistic Text Classification, Sterling Stuart Stein. Taghva and Coombs [1] found that a search engine could be made to work well over OCR documents.
Locatelli, R.
A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model ...
Conway, Barbara Tenney
2012-10-19T23:59:59.000Z
Theories of Spelling Development ... levels in school. Stage theory of spelling development has provided a solid structure upon which spelling curricula can be designed, and spelling error analysis serves as the foundational screening component for planning of instruction (Bear ...)
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01T23:59:59.000Z
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Gavini, Shanti
2001-01-01T23:59:59.000Z
CODE ASSIGNMENT OF RATE COMPATIBLE PUNCTURED CONVOLUTIONAL CODES FOR UNEQUAL ERROR PROTECTION REQUIREMENTS A Thesis by SHANTI GAVINI Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the... requirements for the degree of MASTER OF SCIENCE May 2001 Major Subject: Electrical Engineering CODE ASSIGNMENT OF RATE COMPATIBLE PUNCTURED CONVOLUTIONAL CODES FOR UNEQUAL ERROR PROTECTION REQUIREMENTS A Thesis by SHANTI GAVINI Submitted to Texas A...
Estimating rock properties in two phase petroleum reservoirs: an error analysis
Paul, Anthony Ian
1983-01-01T23:59:59.000Z
ESTIMATING ROCK PROPERTIES IN TWO PHASE PETROLEUM RESERVOIRS: AN ERROR ANALYSIS A Thesis by ANTHONY IAN PAUL Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER... OF SCIENCE December 1983 Major Subject: Chemical Engineering ESTIMATING ROCK PROPERTIES IN TWO PHASE PETROLEUM RESERVOIRS: AN ERROR ANALYSIS A Thesis by ANTHONY IAN PAUL Approved as to style and content by: A. T. Watson (Chairman of Committee) C. J...
Effects and Correction of Closed Orbit Magnet Errors in the SNS Ring
Bunch, S.C.; Holmes, J.
2004-01-01T23:59:59.000Z
We consider the effect and correction of three types of orbit errors in SNS: quadrupole displacement errors, dipole displacement errors, and dipole field errors. Using the ORBIT beam dynamics code, we focus on orbit deflection of a standard pencil beam and on beam losses in a high intensity injection simulation. We study the correction of these orbit errors using the proposed system of 88 (44 horizontal and 44 vertical) ring beam position monitors (BPMs) and 52 (24 horizontal and 28 vertical) dipole corrector magnets. Correction is carried out numerically by adjusting the kick strengths of the dipole corrector magnets to minimize the sum of the squares of the BPM signals for the pencil beam. In addition to using the exact BPM signals as input to the correction algorithm, we also consider the effect of random BPM signal errors. For all three types of error and for perturbations of individual magnets, the correction algorithm always chooses the three-bump method to localize the orbit displacement to the region between the magnet and its adjacent correctors. The values of the BPM signals resulting from specified settings of the dipole corrector kick strengths can be used to set up the orbit response matrix, which can then be applied to the correction in the limit that the signals from the separate errors add linearly. When high intensity calculations are carried out to study beam losses, it is seen that the SNS orbit correction system, even with BPM uncertainties, is sufficient to correct losses to less than 10^{-4} in nearly all cases, even those for which uncorrected losses constitute a large portion of the beam.
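The least-squares correction described above can be sketched numerically. This is a minimal illustration, not the ORBIT code: the response matrix here is random rather than computed from the SNS lattice, and only the monitor and corrector counts follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

n_bpm, n_corr = 88, 52                  # BPMs and dipole correctors, as quoted
R = rng.normal(size=(n_bpm, n_corr))    # hypothetical orbit response matrix

# BPM readings produced by some unknown error orbit (synthetic here)
x_err = R @ rng.normal(size=n_corr) * 0.1 + rng.normal(scale=0.01, size=n_bpm)

# choose corrector kicks that minimize the sum of squared BPM signals
kicks, *_ = np.linalg.lstsq(R, -x_err, rcond=None)
residual = x_err + R @ kicks

assert np.linalg.norm(residual) < np.linalg.norm(x_err)
```

In the linear-response limit the same matrix R corrects the superposition of several error sources at once, which is the property the abstract relies on.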
Kaeli, David R.
Case Study: Soft Error Rate Analysis in Storage Systems Brian Mullins, Hossein Asadi, Mehdi B... Soft errors due to cosmic particles are a growing reliability threat for VLSI systems. In this paper we analyze the soft error vulnerability of FPGAs used in storage systems, since the reliability requirements of such systems play a critical role in overall system reliability. We have validated soft error projections
Managing Errors to Reduce Accidents in High Consequence Networked Information Systems
Ganter, J.H.
1999-02-01T23:59:59.000Z
Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.
Entanglement-Assisted Quantum Error-Correcting Codes with Imperfect Ebits
Ching-Yi Lai; Todd A. Brun
2012-04-04T23:59:59.000Z
The scheme of entanglement-assisted quantum error-correcting (EAQEC) codes assumes that the ebits of the receiver are error-free. In practical situations, errors on these ebits are unavoidable, which diminishes the error-correcting ability of these codes. We consider two different versions of this problem. We first show that any (nondegenerate) standard stabilizer code can be transformed into an EAQEC code that can correct errors on the qubits of both sender and receiver. These EAQEC codes are equivalent to standard stabilizer codes, and hence the decoding techniques of standard stabilizer codes can be applied. Several EAQEC codes of this type are found to be optimal. In a second scheme, the receiver uses a standard stabilizer code to protect the ebits, which we call a "combination code." The performances of different quantum codes are compared in terms of the channel fidelity over the depolarizing channel. We give a formula for the channel fidelity over the depolarizing channel (or any Pauli error channel), and show that it can be efficiently approximated by a Monte Carlo calculation. Finally, we discuss the tradeoff between performing extra entanglement distillation and applying an EAQEC code with imperfect ebits.
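The Monte Carlo fidelity estimate mentioned above can be illustrated with a deliberately simplified model, not the paper's combination codes: assume a distance-3 code that corrects any single-qubit Pauli error, so the channel fidelity is bounded below by the probability of at most one errored qubit under depolarizing noise. The code size and error rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_fidelity(n_qubits=5, p=0.01, trials=100_000):
    """Monte Carlo lower bound on channel fidelity for a distance-3 code:
    count trials in which at most one qubit suffers any Pauli error."""
    errors = rng.random((trials, n_qubits)) < p   # which qubits are hit
    correctable = errors.sum(axis=1) <= 1
    return correctable.mean()

est = mc_fidelity()
exact = (1 - 0.01)**5 + 5 * 0.01 * (1 - 0.01)**4  # binomial check
assert abs(est - exact) < 0.005
```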
Processing Quantities with Heavy-Tailed Distribution of Measurement Uncertainty: How
Kreinovich, Vladik
Processing Quantities with Heavy-Tailed Distribution of Measurement Uncertainty: How to Estimate, the distribution of measurement errors is sometimes heavy-tailed, when very large values have a reasonable, in the amount of oil in an oil well, etc. In such situations in which we cannot measure y directly, we can often
Constructivism, measurement, mathematics Concepts of measurement
Hennig, Christian
Constructivism, measurement, mathematics; Concepts of measurement; Measurement and statistics; Conclusion. Measurement as a constructive act - a statistician's view. Christian Hennig, March 14, 2013
Palle E. T. Jorgensen
2007-07-23T23:59:59.000Z
While finite non-commutative operator systems lie at the foundation of quantum measurement, they are also tools for understanding geometric iterations as used in the theory of iterated function systems (IFSs) and in wavelet analysis. Key is a certain splitting of the total Hilbert space and its recursive iterations to further iterated subdivisions. This paper explores some implications for associated probability measures (in the classical sense of measure theory), specifically their fractal components. We identify a fractal scale $s$ in a family of Borel probability measures $\mu$ on the unit interval which arises independently in quantum information theory and in wavelet analysis. The scales $s$ we find satisfy $s \in \mathbb{R}_{+}$ and $s\
Reducing variance in batch partitioning measurements
Mariner, Paul E.
2010-08-11T23:59:59.000Z
The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure K{sub d} values (e.g., ASTM D 4646 and EPA 402-R-99-004A) do not explain how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
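The ratio-adjustment idea can be checked with a quick Monte Carlo sketch. The true K{sub d}, the 1% absolute concentration error, and the trial count are assumptions for illustration, not values from the report: simulate noisy concentration measurements at different sorbed fractions and compare the spread of the resulting K{sub d} estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

def kd_cv(frac_sorbed, kd_true=100.0, abs_err=0.01, trials=50_000):
    """Coefficient of variation of the estimated K_d when a fraction
    `frac_sorbed` of the sorbate partitions to the sorbent; the
    sorbent:solution ratio m/V is chosen to produce that fraction."""
    m_over_v = frac_sorbed / (kd_true * (1 - frac_sorbed))
    c0_true, c_true = 1.0, 1.0 - frac_sorbed
    # concentration measurements with a fixed absolute instrument error
    c0 = c0_true + abs_err * rng.standard_normal(trials)
    c = c_true + abs_err * rng.standard_normal(trials)
    kd = (c0 - c) / c / m_over_v
    return kd.std() / kd.mean()

# scatter is markedly smaller when about half the sorbate partitions
assert kd_cv(0.5) < kd_cv(0.05)
assert kd_cv(0.5) < kd_cv(0.95)
```

At low sorbed fractions the numerator (c0 - c) is a small difference of noisy numbers; at high fractions the denominator c is small; both extremes inflate the variance, which is why the midpoint is the error-savvy choice.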
Huang, Weidong
2011-01-01T23:59:59.000Z
This paper presents a general equation for calculating the standard deviation of the reflected-ray error from the optical error through geometric optics, applies the equation to eight kinds of concentrated solar reflector, and provides typical results. The results indicate that the slope errors in two directions are transferred to any one direction of the focused ray when the incidence angle is greater than zero for solar trough and heliostat reflectors. For point-focus Fresnel lenses, point-focus parabolic glass mirrors, and line-focus parabolic glass mirrors, the error-transfer coefficient from the optical error to the focused ray increases as the rim angle increases; for the TIR-R concentrator it decreases; for a glass heliostat it depends on the incidence angle and the azimuth of the reflecting point. Keywords: optical error, standard deviation, reflected-ray error, concentrated solar collector
Renaut, Rosemary
in PIB-PET study. Hongbin Guo, Rosemary A. Renaut, Kewei Chen, Eric M. Reiman. Arizona State. Tel: 1-480-965-8002, Fax: 1-480-965-4160. Email address: hb guo@asu.edu (Hongbin Guo). Preprint
Van Peursem, David J.
1991-01-01T23:59:59.000Z
C. Experimental Errors. IV. SPEED-OF-SOUND: A. Research Method. B. Data Reduction and Analysis: 1. Perfect Data (a. First-Order Model Consistency Test; b. Second-Order Model Consistency Test); 2. Random Error Induced Data; 3. Systematic Error Induced Data (a. Fixed Absolute Errors; b. Fixed Fractional Errors). VI. CONCLUSIONS. LIST OF SYMBOLS. REFERENCES. APPENDIX A: SIMULATION LABORATORY DATA: A. Perfect Speed-of-Sound. B...
Thorough approach to measurement uncertainty analysis applied to immersed heat exchanger testing
Farrington, R.B.; Wells, C.V.
1986-04-01T23:59:59.000Z
This paper discusses the value of an uncertainty analysis, explains how to determine measurement uncertainty, and details the sources of error in instrument calibration, data acquisition, and data reduction for a particular experiment. Methods are discussed for determining both the systematic (or bias) error in an experiment and the random (or precision) error. The detailed analysis is applied to two sets of conditions in measuring the effectiveness of an immersed coil heat exchanger. It shows the value of such analysis as well as an approach to reduce overall measurement uncertainty and to improve the experiment. This paper outlines how to perform an uncertainty analysis and then provides a detailed example of how to apply the methods discussed. The authors hope this paper will encourage researchers and others to become more concerned with their measurement processes and to report measurement uncertainty with all of their test results.
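As a concrete sketch of the bias/precision bookkeeping such analyses use, elemental bias limits can be combined by root-sum-square and then merged with the random contribution. The U = sqrt(B^2 + (t*S)^2) form below is the common ASME-style recipe; the numbers are made up for illustration, not taken from the paper.

```python
import math

def total_uncertainty(bias_limits, precision_sd, t=2.0):
    """Combine elemental systematic (bias) limits by root-sum-square,
    then combine the result with the random (precision) contribution
    using the common U = sqrt(B^2 + (t*S)^2) recipe."""
    B = math.sqrt(sum(b * b for b in bias_limits))
    return math.sqrt(B**2 + (t * precision_sd) ** 2)

# e.g. calibration, acquisition, and reduction bias limits plus random scatter
U = total_uncertainty([0.3, 0.4, 0.12], precision_sd=0.25)
assert abs(U - 0.7172) < 1e-3
```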
Sommargren, Gary E. (Santa Cruz, CA); Campbell, Eugene W. (Livermore, CA)
2005-06-21T23:59:59.000Z
To measure a convex mirror, a reference beam and a measurement beam are both provided through a single optical fiber. A positive auxiliary lens is placed in the system to give a converging wavefront onto the convex mirror under test. A measurement is taken that includes the aberrations of the convex mirror as well as the errors due to two transmissions through the positive auxiliary lens. A second measurement provides the information to eliminate this error. A negative lens can also be measured in a similar way. Again, there are two measurement set-ups. A reference beam is provided from a first optical fiber and a measurement beam is provided from a second optical fiber. A positive auxiliary lens is placed in the system to provide a converging wavefront from the reference beam onto the negative lens under test. The measurement beam is combined with the reference wavefront and is analyzed by standard methods. This measurement includes the aberrations of the negative lens, as well as the errors due to a single transmission through the positive auxiliary lens. A second measurement provides the information to eliminate this error.
Sommargren, Gary E.; Campbell, Eugene W.
2004-03-09T23:59:59.000Z
To measure a convex mirror, a reference beam and a measurement beam are both provided through a single optical fiber. A positive auxiliary lens is placed in the system to give a converging wavefront onto the convex mirror under test. A measurement is taken that includes the aberrations of the convex mirror as well as the errors due to two transmissions through the positive auxiliary lens. A second measurement provides the information to eliminate this error. A negative lens can also be measured in a similar way. Again, there are two measurement set-ups. A reference beam is provided from a first optical fiber and a measurement beam is provided from a second optical fiber. A positive auxiliary lens is placed in the system to provide a converging wavefront from the reference beam onto the negative lens under test. The measurement beam is combined with the reference wavefront and is analyzed by standard methods. This measurement includes the aberrations of the negative lens, as well as the errors due to a single transmission through the positive auxiliary lens. A second measurement provides the information to eliminate this error.
LCLS X-ray mirror measurements using a large aperture visible light interferometer
McCarville, T; Soufli, R; Pivovaroff, M
2011-03-02T23:59:59.000Z
Synchrotron or FEL X-ray mirrors are required to deliver an X-ray beam from its source to an experiment location, without contributing significantly to wave front distortion. Accurate mirror figure measurements are required prior to installation to meet this intent. This paper describes how a 300 mm aperture phasing interferometer was calibrated to <1 nm absolute accuracy and used to mount and measure 450 mm long flats for the Linear Coherent Light Source (LCLS) at Stanford Linear Accelerator Center. Measuring focus mirrors with an interferometer requires additional calibration, because high fringe density introduces systematic errors from the interferometer's imaging optics. This paper describes how these errors can be measured and corrected. The calibration approaches described here apply equally well to interferometers larger than 300 mm aperture, which are becoming more common in optics laboratories. The objective of this effort was to install LCLS flats with < 10 nm of spherical curvature, and < 2 nm rms a-sphere. The objective was met by measuring the mirrors after fabrication, coating and mounting, using a 300 mm aperture phasing interferometer calibrated to an accuracy < 1 nm. The key to calibrating the interferometer accurately was to sample the error using independent geometries that are available. The results of those measurements helped identify and reduce calibration error sources. The approach used to measure flats applies equally well to focus mirrors, provided an additional calibration is performed to measure the error introduced by fringe density. This calibration has been performed on the 300 mm aperture interferometer, and the measurement correction was evaluated for a typical focus mirror. The 300 mm aperture limitation requires stitching figure measurements together for many X-ray mirrors of interest, introducing another possible error source. Stitching is eliminated by applying the calibrations described above to larger aperture instruments. 
The authors are presently extending this work to a 600 mm instrument. Instruments with 900 mm aperture are now becoming available, which would accommodate the largest mirrors of interest.
Lamb, James M., E-mail: jlamb@mednet.ucla.edu; Agazaryan, Nzhde; Low, Daniel A.
2013-10-01T23:59:59.000Z
Purpose: To determine whether kilovoltage x-ray projection radiation therapy setup images could be used to perform patient identification and detect gross errors in patient setup using a computer algorithm. Methods and Materials: Three patient cohorts treated using a commercially available image guided radiation therapy (IGRT) system that uses 2-dimensional to 3-dimensional (2D-3D) image registration were retrospectively analyzed: a group of 100 cranial radiation therapy patients, a group of 100 prostate cancer patients, and a group of 83 patients treated for spinal lesions. The setup images were acquired using fixed in-room kilovoltage imaging systems. In the prostate and cranial patient groups, localizations using image registration were performed between computed tomography (CT) simulation images from radiation therapy planning and setup x-ray images corresponding both to the same patient and to different patients. For the spinal patients, localizations were performed to the correct vertebral body, and to an adjacent vertebral body, using planning CTs and setup x-ray images from the same patient. An image similarity measure used by the IGRT system image registration algorithm was extracted from the IGRT system log files and evaluated as a discriminant for error detection. Results: A threshold value of the similarity measure could be chosen to separate correct and incorrect patient matches and correct and incorrect vertebral body localizations with excellent accuracy for these patient cohorts. A 10-fold cross-validation using linear discriminant analysis yielded misclassification probabilities of 0.000, 0.0045, and 0.014 for the cranial, prostate, and spinal cases, respectively. Conclusions: An automated measure of the image similarity between x-ray setup images and corresponding planning CT images could be used to perform automated patient identification and detection of localization errors in radiation therapy treatments.
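A toy version of the threshold-on-similarity idea (synthetic scores, not the IGRT system's actual similarity measure): place a cut between the score distributions of correct and incorrect localizations and count misclassifications. The distributions and sample sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical similarity scores extracted from registration log files
correct = rng.normal(0.9, 0.03, 500)     # same-patient localizations
incorrect = rng.normal(0.6, 0.05, 500)   # wrong-patient / wrong-vertebra

# cut midway between the class means, a simple stand-in for the
# linear discriminant analysis used in the study
thr = (correct.mean() + incorrect.mean()) / 2
errors = (correct < thr).sum() + (incorrect >= thr).sum()
misclassification = errors / 1000
assert misclassification < 0.01
```

With well-separated score distributions, as the abstract reports, almost any threshold between the two clusters flags gross setup errors reliably.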
Hodge, B. M.; Lew, D.; Milligan, M.
2013-01-01T23:59:59.000Z
Load forecasting in the day-ahead timescale is a critical aspect of power system operations that is used in the unit commitment process. It is also an important factor in renewable energy integration studies, where the combination of load and wind or solar forecasting techniques creates the net load uncertainty that must be managed by the economic dispatch process or with suitable reserves. An understanding of the load forecasting errors that may be expected in this process can lead to better decisions about the amount of reserves necessary to compensate for errors. In this work, we performed a statistical analysis of the day-ahead (and two-day-ahead) load forecasting errors observed in two independent system operators for a one-year period. Comparisons were made with the normal distribution commonly assumed in power system operation simulations used for renewable power integration studies. Further analysis identified time periods when the load is more likely to be under- or overforecast.
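The comparison against the normal distribution can be sketched by checking excess kurtosis, which is zero for Gaussian errors and positive for fatter-tailed ones. The synthetic samples below merely stand in for real forecast errors; they are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

def excess_kurtosis(x):
    """Sample excess kurtosis: ~0 for normal data, positive for the
    fatter-tailed error distributions often seen in practice."""
    z = (x - x.mean()) / x.std()
    return (z**4).mean() - 3.0

# Gaussian errors vs. a fatter-tailed (Laplace) alternative
gauss_err = rng.normal(scale=0.02, size=100_000)
fat_err = rng.laplace(scale=0.02, size=100_000)

assert abs(excess_kurtosis(gauss_err)) < 0.1
assert excess_kurtosis(fat_err) > 1.0
```

A reserve rule sized from a normal fit underestimates the frequency of the large errors that a fat-tailed distribution produces, which is the operational concern behind this comparison.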
Ricciardi, S; Natoli, P; Polenta, G; Baccigalupi, C; Salerno, E; Kayabol, K; Bedini, L; De Zotti, G; 10.1111/j.1365-2966.2010.16819.x
2010-01-01T23:59:59.000Z
We present a data analysis pipeline for CMB polarization experiments, running from multi-frequency maps to the power spectra. We focus mainly on component separation and, for the first time, we work out the covariance matrix accounting for errors associated with the separation itself. This allows us to propagate such errors and evaluate their contributions to the uncertainties on the final products. The pipeline is optimized for intermediate and small scales, but could be easily extended to lower multipoles. We exploit realistic simulations of the sky, tailored for the Planck mission. The component separation is achieved by exploiting the Correlated Component Analysis in the harmonic domain, which we demonstrate to be superior to the real-space application (Bonaldi et al. 2006). We present two techniques to estimate the uncertainties on the spectral parameters of the separated components. The component separation errors are then propagated by means of Monte Carlo simulations to obtain the corresponding contributi...
Fade-resistant forward error correction method for free-space optical communications systems
Johnson, Gary W. (Livermore, CA); Dowla, Farid U. (Castro Valley, CA); Ruggiero, Anthony J. (Livermore, CA)
2007-10-02T23:59:59.000Z
Free-space optical (FSO) laser communication systems offer exceptionally wide-bandwidth, secure connections between platforms that cannot otherwise be connected via physical means such as optical fiber or cable. However, FSO links are subject to strong channel fading due to atmospheric turbulence and beam pointing errors, limiting practical performance and reliability. We have developed a fade-tolerant architecture based on forward error correcting codes (FECs) combined with delayed, redundant sub-channels. This redundancy is made feasible through dense wavelength division multiplexing (WDM) and/or high-order M-ary modulation. Experiments and simulations show that error-free communication is feasible even when faced with fades that are tens of milliseconds long. We describe plans for practical implementation of a complete system operating at 2.5 Gbps.
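The delayed-redundant-sub-channel idea is easy to demonstrate: if the delay between the two copies exceeds the fade duration, at least one copy of every symbol arrives clean. The fade length and delay below are illustrative, not the experimental values.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 10_000                       # symbols in the stream
delay = 500                      # delay between redundant sub-channels (symbols)
fade = np.zeros(n + delay, bool)
start = rng.integers(0, n - 400)
fade[start:start + 300] = True   # one deep fade, shorter than the delay

# a symbol is recovered if either its prompt copy or its delayed copy
# arrives outside the fade window
prompt_lost = fade[:n]
delayed_lost = fade[delay:delay + n]
lost = prompt_lost & delayed_lost

assert prompt_lost.any()         # the fade erases prompt symbols...
assert not lost.any()            # ...but every symbol survives on one copy
```

In the real system the surviving copy still carries FEC parity, so residual symbol errors from partial fades are cleaned up by the code rather than by retransmission.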
Error and jitter effect studies on the SLED for BEPCII-linac
Shi-Lun, Pei; Ou-Zheng, Xiao
2011-01-01T23:59:59.000Z
An RF pulse compressor is a device that converts a long RF pulse to a short one with a much higher peak RF magnitude. SLED can be regarded as the earliest RF pulse compressor used in large-scale linear accelerators. It has been widely studied around the world and applied in the BEPC and BEPCII linacs for many years. During routine operation, error and jitter effects deteriorate the SLED performance, affecting either the amplitude or the phase of the output electromagnetic wave. The error effects mainly include the frequency drift induced by cooling-water temperature variation and the frequency/Q_0/\beta imbalances between the two energy-storage cavities caused by mechanical fabrication or microwave tuning. The jitter effects refer to the PSK switching phase and time jitters. In this paper, we re-derive the generalized formulae for the conventional SLED used in the BEPCII linac. Finally, the error and jitter effects on the SLED performance are investigated.
HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD
Harold S. Blackman; David I. Gertman; Ronald L. Boring
2008-09-01T23:59:59.000Z
This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks, and incorporating performance shaping factor multipliers upon those nominal error rates.
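The SPAR-H calculation structure, a nominal rate scaled by performance shaping factor multipliers, can be sketched as follows. The nominal rate and multiplier values are illustrative assumptions, not the method's calibrated tables, and the real method includes further adjustments (e.g., for multiple negative PSFs) omitted here.

```python
def spar_h_hep(nominal, psf_multipliers):
    """Nominal human error probability scaled by the product of
    performance shaping factor multipliers, capped at 1.0."""
    hep = nominal
    for m in psf_multipliers:
        hep *= m
    return min(hep, 1.0)

# hypothetical action task: high stress (x2) and poor procedures (x5)
hep = spar_h_hep(0.001, [2, 5])
assert abs(hep - 0.01) < 1e-12
```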
Error correcting code with chip kill capability and power saving enhancement
Gara, Alan G. (Mount Kisco, NY); Chen, Dong (Croton On Husdon, NY); Coteus, Paul W. (Yorktown Heights, NY); Flynn, William T. (Rochester, MN); Marcella, James A. (Rochester, MN); Takken, Todd (Brewster, NY); Trager, Barry M. (Yorktown Heights, NY); Winograd, Shmuel (Scarsdale, NY)
2011-08-30T23:59:59.000Z
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
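The syndrome logic described, where a zero syndrome means clean data and a nonzero syndrome discriminates the error, can be illustrated with the classic Hamming(7,4) code rather than the patent's chip-kill code: with the parity-check matrix columns numbered 1..7 in binary, a single-bit error's syndrome spells out its position.

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code: column i is i in binary,
# so the syndrome of a single-bit error reads out the error position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(word):
    return tuple(H @ word % 2)

codeword = np.array([0, 0, 1, 0, 1, 1, 0])   # bits at positions 3, 5, 6
assert syndrome(codeword) == (0, 0, 0)        # zero syndrome: data is clean

corrupted = codeword.copy()
corrupted[4] ^= 1                             # flip bit at position 5
s = syndrome(corrupted)
assert s == (1, 0, 1)                         # binary 101 = position 5
corrupted[int(''.join(map(str, s)), 2) - 1] ^= 1   # correct the flagged bit
assert syndrome(corrupted) == (0, 0, 0)
```

The chip-kill code in the patent follows the same pattern over multi-bit symbols, with discriminator expressions distinguishing single- from double-symbol errors.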
Multiparameter measurement utilizing poloidal polarimeter for burning plasma reactor
Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi; Kusama, Yoshinori [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki, 311-0193 (Japan)
2014-08-21T23:59:59.000Z
The authors have carried out basic and applied research on the polarimeter for plasma diagnostics. Recently, the authors have proposed an application of multiparameter measurement (magnetic field, B, electron density, n{sub e}, electron temperature, T{sub e}, and total plasma current, I{sub p}) utilizing a polarimeter in future fusion reactors. In these proceedings, a brief review of the polarimeter, the principle of the multiparameter measurement, and the progress of the research on the multiparameter measurement are presented. The measurement method that the authors have proposed is suitable for a reactor for the following reasons: multiple parameters can be obtained from a small number of diagnostics, the proposed method does not depend on time history, and the far-infrared light utilized by the polarimeter is less sensitive to degradation of optical components. Taking the measurement error into account, a performance assessment of the proposed method was carried out. Assuming that the errors of ?? and ?? were 0.1° and 0.6°, respectively, the errors of the reconstructed j{sub ?}, n{sub e}, and T{sub e} were 12%, 8.4%, and 31%, respectively. This study has shown that the reconstruction error can be decreased by increasing the number of wavelengths of the probing laser and by increasing the number of viewing chords. For example, by increasing the number of viewing chords to forty-five, the errors of j{sub ?}, n{sub e}, and T{sub e} were reduced to 4.4%, 4.4%, and 17%, respectively.
Aguilar-Arevalo, A A; Bazarko, A O; Brice, S J; Brown, B C; Bugel, L; Cao, J; Coney, L; Conrad, J M; Cox, D C; Curioni, A; Djurcic, Z; Finley, D A; Fleming, B T; Ford, R; Garcia, F G; Garvey, G T; Gonzales, J; Grange, J; Green, C; Green, J A; Hart, T L; Hawker, E; Imlay, R; Johnson, R A; Karagiorgi, G; Kasper, P; Katori, T; Kobilarcik, T; Kourbanis, I; Koutsoliotas, S; Laird, E M; Linden, S K; Link, J M; Liu, Y; Louis, W C; Mahn, K B M; Marsh, W; Mauger, C; McGary, V T; McGregor, G; Metcalf, W; Meyers, P D; Mills, F; Mills, G B; Monroe, J; Moore, C D; Mousseau, J; Nelson, R H; Nienaber, P; Nowak, J A; Osmanov, B; Ouedraogo, S; Patterson, R B; Pavlovic, Z; Perevalov, D; Polly, C C; Prebys, E; Raaf, J L; Ray, H; Roe, B P; Russell, A D; Sandberg, V; Schirato, R; Schmitz, D; Shaevitz, M H; Shoemaker, F C; Smith, D; Soderberg, M; Sorel, M; Spentzouris, P; Spitz, J; Stancu, I; Stefanski, R J; Sung, M; Tanaka, H A; Tayloe, R; Tzanov, M; Van de Water, R G; Wascko, M O; White, D H; Wilking, M J; Yang, H J; Zeller, G P; Zimmerman, E D
2009-01-01T23:59:59.000Z
MiniBooNE reports the first absolute cross sections for neutral current single \pi^0 production on CH_2 induced by neutrino and antineutrino interactions, measured from the largest sets of NC \pi^0 events collected to date. The principal result consists of differential cross sections measured as functions of \pi^0 momentum and \pi^0 angle averaged over the neutrino flux at MiniBooNE. We find total cross sections of (4.76+/-0.05_{stat}+/-0.40_{sys})*10^{-40} cm^2/nucleon at a mean energy of 808 MeV and (1.48+/-0.05_{stat}+/-0.14_{sys})*10^{-40} cm^2/nucleon at a mean energy of 664 MeV for \
Earth's Magnetic Field Measurements for the LCLS Undulators
Hacker, Kirsten
2010-12-13T23:59:59.000Z
Measurements of the earth's magnetic field at several locations at SLAC were conducted to determine the possible field error contribution from tuning the undulators in a location with a different magnetic field than that which will be found in the undulator hall. An average difference of 0.08 ± 0.04 Gauss has been measured between the downward earth's field components in the test facility and SLAC tunnel locations.
Comer, K.; Gaddy, C.D.; Seaver, D.A.; Stillwell, W.G.
1985-01-01T23:59:59.000Z
The US Nuclear Regulatory Commission and Sandia National Laboratories sponsored a project to evaluate psychological scaling techniques for use in generating estimates of human error probabilities. The project evaluated two techniques: direct numerical estimation and paired comparisons. Expert estimates were found to be consistent across and within judges. Convergent validity was good, in comparison to estimates in a handbook of human reliability. Predictive validity could not be established because of the lack of actual relative frequencies of error (which will be a difficulty inherent in validation of any procedure used to estimate HEPs). Application of expert estimates in probabilistic risk assessment and in human factors is discussed.
Error Channels and the Threshold for Fault-tolerant Quantum Computation
Bryan Eastin
2007-10-15T23:59:59.000Z
This dissertation treats the topics of threshold calculation, ancilla construction, and non-standard error models. Chapter 2 introduces background material ranging from quantum mechanics to classical coding to thresholds for quantum computation. In Chapter 3 numerical and analytical means are used to generate estimates of and bounds on the threshold given an error model described by a restricted stochastic Pauli channel. Chapter 4 develops a simple, flexible means of estimating the threshold and applies it to some cases of interest. Finally, a novel method of ancilla construction is proposed in Chapter 5, and the difficulties associated with implementing it are discussed.
Low delay and area efficient soft error correction in arbitration logic
Sugawara, Yutaka
2013-09-10T23:59:59.000Z
There is provided an arbitration logic device for controlling access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor from among the requestors based on the requestors' information received from the plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is a soft error in the winner requestor's information.
Cooper, S.E. [Science Application International Corp., Reston, VA (United States); Wreathall, J. [John Wreathall & Co., Dublin, OH (United States); Thompson, C.M., Drouin, M. [Nuclear Regulatory Commission, Washington, DC (United States); Bley, D.C. [Buttonwood Consulting, Inc., Oakton, VA (United States)
1996-10-01T23:59:59.000Z
This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, "A Technique for Human Error Analysis" (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.
Dual-axis hole-drilling ESPI residual stress measurements
Steinzig, Michael [Los Alamos National Laboratory; Schajer, Gary [UNIV OF BRITISH COLUMBIA
2008-01-01T23:59:59.000Z
A novel dual-axis ESPI hole-drilling residual stress measurement method is presented. The method enables the evaluation of all the in-plane normal stress components with similar response to measurement errors, significantly lower than with single-axis measurements. A numerical method is described that takes advantage of, and compactly handles, the additional optical data that are available from the second measurement axis. Experimental tests were conducted on a calibrated specimen to demonstrate the proposed method, and the results supported theoretical expectations.
Norton, David Jerry
1963-01-01T23:59:59.000Z
Therefore, rapid voltages appeared across the cell corresponding to each of the aluminum markers. One of the aluminum markers was made larger than the others. The increased light reflected to the photoconductive cell caused a smaller voltage drop across...
Norton, David Jerry
1963-01-01T23:59:59.000Z
Fig. 12. Efficiency vs. compression ratio. Fig. 13. Schematic drawing of the mechanical system. The equation of motion for this familiar system is MẌ + C(Ẋ − Ẏ) + K(X − Y) = F...
Steven M Taylor
2007-04-10T23:59:59.000Z
Systematic error in calculation of z for high redshift type Ia supernovae could help explain unexpected luminosity values that indicate an accelerating rate of expansion of the universe.
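As an illustration of the effect the abstract describes, the sketch below propagates a small systematic redshift shift into the low-redshift luminosity distance using the standard second-order expansion. The values of H0, q0, and the shift δz are assumed for illustration only and are not taken from the abstract.

```python
C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # Hubble constant, km/s/Mpc (assumed value)
Q0 = -0.55            # deceleration parameter (assumed value)

def lum_distance(z):
    """Second-order low-z expansion of the luminosity distance, in Mpc:
    d_L ~ (c/H0) * (z + (1 - q0) z^2 / 2). Only indicative at z ~ 0.5."""
    return (C_KM_S / H0) * (z + 0.5 * (1.0 - Q0) * z * z)

def distance_bias(z, dz):
    """Fractional change in inferred d_L produced by a systematic
    redshift error dz at redshift z."""
    return lum_distance(z + dz) / lum_distance(z) - 1.0

# A 0.1% systematic shift at z = 0.5 (hypothetical numbers)
bias = distance_bias(0.5, 0.001)
```

Even a shift of δz = 0.001 changes the inferred distance, and hence the inferred luminosity, at the fraction-of-a-percent level, which is the scale at which such a systematic could mimic or mask acceleration.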
Error growth in poor ECMWF forecasts over the contiguous United States
Modlin, Norman Ray
1993-01-01T23:59:59.000Z
Good forecasts are found to have the majority of RMS growth on day 1, while poor forecasts do not experience rapid error growth until days 3 and 4. For poor forecasts, the leading EOFs reveal a wave pattern downstream of the Rocky Mountains. This pattern evolves...
Towards an Error Control Scheme for a Publish... (published in: Proceedings of the IEEE ICC 2013)
Chatziantoniou, Damianos
However, the design of efficient reliable transport protocols for multicast ... for efficient content distribution ... An obvious use case for our scheme is the reliable delivery of software ... evaluation of its performance.
The Influence of Source and Cost of Information Access on Correct and Errorful Interactive Behavior
Gray, Wayne
Routine interactive behavior reveals patterns of interaction ... to perform the task. Such interactions are difficult to study, in part, because they require collecting...
Paper No. 12A-12 ERRORS IN DESIGN LEADING TO PILE FAILURES DURING SEISMIC LIQUEFACTION
Bolton, Malcolm
University of Cambridge (U.K.). Collapse of piled foundations in liquefiable soils has been observed. The current method of pile design under earthquake loading is based on a bending mechanism where the inertia...
Using Energy-Efficient Overlays to Reduce Packet Error Rates in Wireless Ad-Hoc Networks
Khan, Bilal
A. Al-Fuqaha, G. Ben Brahim, M. Guizani, B. Khan (Western Michigan University, MI; John Jay College of Criminal Justice, City ...). In this paper we present new energy-efficient techniques ... the problem of how to balance...
Proving the Absence of Run-Time Errors in Safety-Critical Avionics Code
Cousot, Patrick
... time-triggered, real-time, safety-critical, embedded software as found in earth transportation, nuclear ... is not acceptable in safety- and mission-critical applications. An avenue is therefore opened for formal methods...
Particle-induced bit errors in high performance fiber optic data links for satellite data management
Marshall, P.W.; Carts, M.A. (Naval Research Lab., Washington, DC (United States) SFA, Inc., Landover, MD (United States)); Dale, C.J. (Naval Research Lab., Washington, DC (United States)); LaBel, K.A. (NASA Goddard Space Flight Center, Greenbelt, MD (United States))
1994-12-01T23:59:59.000Z
Experimental test methods and analysis tools are demonstrated to assess particle-induced bit errors on fiber optic link receivers for satellites. Susceptibility to direct ionization from low LET particles is quantified by analyzing proton and helium ion data as a function of particle LET. Existing single event analysis approaches are shown to apply, with appropriate modifications, to the regime of temporally (rather than spatially) distributed bits, even though the sensitivity to single events exceeds conventional memory technologies by orders of magnitude. The cross-section LET dependence follows a Weibull distribution at data rates from 200 to 1,000 Mbps and at various incident optical power levels. The LET threshold for errors is shown, through both experiment and modeling, to be 0 in all cases. The error cross-section exhibits a strong inverse dependence on received optical power in the LET range where most orbital single events would occur, thus indicating that errors can be minimized by operating links with higher incident optical power. Also, an analytic model is described which incorporates the appropriate physical characteristics of the link as well as the optical and receiver electrical characteristics. Results indicate appropriate steps to assure suitable link performance even in severe particle orbits.
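The Weibull dependence of cross section on LET mentioned above has a standard functional form; a minimal sketch follows. All parameter values below are hypothetical and chosen only to illustrate the shape, except the zero onset LET, which matches the threshold the abstract reports.

```python
import math

def weibull_cross_section(let_value, sigma_sat, let_onset, width, shape):
    """Weibull form commonly used for single-event cross sections:
    sigma(LET) = sigma_sat * (1 - exp(-((LET - let_onset)/width)**shape))
    for LET > let_onset, and 0 otherwise."""
    if let_value <= let_onset:
        return 0.0
    return sigma_sat * (1.0 - math.exp(-(((let_value - let_onset) / width) ** shape)))

# Hypothetical parameters: saturation cross section 1e-5 cm^2, onset LET 0
# (per the reported threshold), width 10 MeV*cm^2/mg, shape 1.5.
curve = [weibull_cross_section(l, 1e-5, 0.0, 10.0, 1.5) for l in range(0, 60, 10)]
```

The curve rises from zero at the onset LET and saturates at sigma_sat, which is how the fitted parameters are usually quoted for a link or device.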
Publish/Subscribe Systems on Node and Link Error-Prone
Motivations: mobile environments are error prone (wireless link, cellular wireless LAN). Comparison of pub/sub to client-server and polling models. Cost parameters include the cost of periodic publish or polling, s(n) (the effect of sharing among n subscribers), and tps (time...
Calibration of Visually Guided Reaching Is Driven by Error-Corrective Learning and Internal Dynamics
Sabes, Philip
Cheng S, Sabes PN. Submitted 22 August 2006; accepted in final form 16 December 2006; first published January 3, 2007; doi:10.1152/jn.00897.2006. The sensorimotor calibration...
A New Error Control Scheme for Packetized Voice over HighSpeed Local Area Networks
Liebeherr, Jörg
We propose a new error control mechanism for packet voice, referred to as Slack ARQ (SARQ). SARQ is based on priority channels. It does not require hardware support and imposes little overhead on network resources ... use of network resources than circuit switching. Statistical multiplexing, however, causes delay...
Power Control by Kalman Filter With Error Margin for Wireless IP Networks
Leung, Kin K.
A power-control ... due to little interference temporal correlation. In this paper, we enhance the power-control...
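The abstract is fragmentary, but its core tool is a Kalman filter tracking a slowly varying quantity from noisy measurements. The sketch below is a generic scalar Kalman filter, not the paper's algorithm; the state model, noise values, and signal level are all assumptions for illustration.

```python
def kalman_step(x_est, p_est, z, q, r):
    """One predict/update step of a scalar Kalman filter with a
    random-walk state model: x_k = x_{k-1} + w (var q), z = x + v (var r)."""
    # Predict: state unchanged, uncertainty grows by process noise q
    p_pred = p_est + q
    # Update: blend prediction and measurement by the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_est + k * (z - x_est)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Track a constant signal level of -60 (arbitrary units) from noisy
# readings, starting from a vague prior (large initial variance).
x, p = 0.0, 100.0
for z in [-58.0, -61.0, -59.5, -60.5, -60.0]:
    x, p = kalman_step(x, p, z, q=0.01, r=4.0)
```

After a few measurements the estimate settles near the true level and the posterior variance shrinks well below the measurement variance; a power-control scheme would then add an error margin on top of such an estimate.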
Low-Power and Error Coding for Network-on-Chip Traffic
Jantsch, Axel
... frameworks of simulation performance and power estimation; Section 4 presents the setup for simulation ... and telecommunication systems. There are several works describing methods to estimate power...
Bifurcated states of a rotating tokamak plasma in the presence of a static error-field
Fitzpatrick, Richard
Received 20 January 1998; accepted 1 June 1998. The bifurcated states of a rotating tokamak ... without hindrance. The response regime of a rotating tokamak plasma in the vicinity of the rational...
Deriving Human-Error Tolerance Requirements from Tasks Peter Wright, Bob Fields and Michael Harrison
Fields, Bob
... a means whereby, rather than relying on training as a means of improving operator performance, designers ... (SHARP) by employing a software engineering notation (CSP) that provides a bridge between ... to human error, describe a task notation based on CSP which helps us to elicit requirements on human...
A nonideal error-field response model for strongly shaped tokamak R. Fitzpatrick
Fitzpatrick, Richard
... damping at Alfvén and/or sound wave resonances inside the plasma. The nonresonant component ... magnetic flux-surfaces [7]. Such chains severely degrade global energy confinement [8]. Fortunately, the highly ... the relationship between the harmonic content of an error-field and the associated locking torque that is exerted...
Automated Diagnosis of Product-line Configuration Errors on Feature Models
Schmidt, Douglas C.
Jules White and Douglas Schmidt. Feature models are widely used to model software product-line (SPL) variability. SPL variants are configured ... Current trends and challenges: Software Product-Lines (SPLs) are a technique for creating...
Detection and Prediction of Errors in EPCs of the SAP Reference Model
van der Aalst, Wil
... as a blueprint for roll-out projects of SAP's ERP system. It reflects Version 4.6 of SAP R/3, which was marketed ... J. Mendling, H.M.W. Verbeek ... provide empirical evidence for these questions based on the SAP reference model. This model collection...
Allowing Errors in Speech over Wireless LANs Ian Chakeres, Hui Dong, Elizabeth Belding-Royer
Belding-Royer, Elizabeth M.
... in call quality. By forcing error-free reception of speech, scarce bandwidth and energy are unnecessarily ... standard are experiencing widespread deployment. Currently most devices with WLAN connectivity are laptops. In congested networks, fewer retransmissions reduce channel usage and result in increased packet delivery...
Speech enhancement using a minimum mean-square error short-time spectral modulation magnitude
In this paper we investigate the enhancement of speech by applying MMSE short-time spectral magnitude estimation ... on the quality of enhanced speech, and find that this method works better with speech uncertainty. Finally we...
Merchant Commodity Storage and Term Structure Model Error Nicola Secomandi,1
Sadeh, Norman M.
Nicola Secomandi, Guoming Lai, Fran... The futures term structure affects the valuation and hedging of natural gas storage. We find that even small ... impact on storage valuation and hedging. In particular, theoretically equivalent hedging strategies have...
Evaluating the Capability of Compilers and Tools to Detect Serial and Parallel Run-time Errors
Luecke, Glenn R.
... Elizabeth Kleiman, Olga Weiss, Andre Wehe, Melissa Yahya (Iowa State University's High Performance Computing Group) ... of system software to detect and issue error messages that help programmers quickly fix serial and parallel ... using the new system software to rted.project@iastate.edu so they can be posted on this web site.
A Survey of Systems for Detecting Serial Run-Time Errors
Luecke, Glenn R.
Performance Computing Group: Glenn R. Luecke, James Coyle, Jim Hoekstra, Marina Kraeva, Ying Li, Olga Taborskaia, and Yanmei Wang. Revised February ... commercial software products to detect serial run-time errors in C and C++ programs, to issue meaningful messages...
On the Characterization and Optimization of On-Chip Cache Reliability against Soft Errors
Ziavras, Sotirios G.
Soft errors induced by energetic particle strikes in on-chip cache memories have become an increasing challenge in designing new-generation reliable microprocessors. Previous efforts have exploited information ... In this paper, we propose a new framework for conducting comprehensive studies and characterization...
Vibrotactile Feedback in Steering Wheel Reduces Navigation Errors during GPS-Guided Car Driving
Basdogan, Cagatay
Vibration motors are mounted onto the steering wheel of a driving simulator and driving experiments ... based car navigation system to improve the navigation performance of a driver. In [5], vibration motors were ... auditory noise and distraction exist in the environment, the navigation errors (making a wrong turn...
Error analysis due to laser beams misalignment of a double laser self-mixing velocimeter
Tanios, Bendy; Bony, Francis; Bosch, Thierry [CNRS, LAAS, 7 avenue du colonel Roche, F-31400 Toulouse (France) and Univ de Toulouse, UPS, LAAS, F-31400 Toulouse (France); CNRS, LAAS, 7 avenue du colonel Roche, F-31400 Toulouse, France and Univ de Toulouse, INP, LAAS, F-31400 Toulouse (France)
2012-06-13T23:59:59.000Z
In this paper, we present a self-mixing double-head laser diode velocimeter. Analyses are performed to evaluate the sensitivity of this setup to misalignment and to calculate the errors due to this misalignment. The analyses and calculations are verified by experimental results.
Discretization error estimation and exact solution generation using the method of nearby problems.
Sinclair, Andrew J. (Auburn University Auburn, AL); Raju, Anil (Auburn University Auburn, AL); Kurzen, Matthew J. (Virginia Tech Blacksburg, VA); Roy, Christopher John (Virginia Tech Blacksburg, VA); Phillips, Tyrone S. (Virginia Tech Blacksburg, VA)
2011-10-01T23:59:59.000Z
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
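The abstract compares MNP against Richardson extrapolation; a minimal sketch of the Richardson error estimate from two systematically refined grids follows. The test problem (a forward difference of sin) is illustrative and not taken from the report.

```python
import math

def richardson_error_estimate(f_fine, f_coarse, refinement_ratio, order):
    """Estimate the discretization error of the fine-grid value:
    f_exact - f_fine ~ (f_fine - f_coarse) / (r**p - 1)."""
    return (f_fine - f_coarse) / (refinement_ratio ** order - 1.0)

def fwd_diff(h):
    """First-order forward-difference approximation of d/dx sin(x) at x = 1."""
    x = 1.0
    return (math.sin(x + h) - math.sin(x)) / h

f_h = fwd_diff(0.01)    # fine grid
f_2h = fwd_diff(0.02)   # coarse grid, refinement ratio r = 2
err = richardson_error_estimate(f_h, f_2h, 2.0, 1)  # p = 1 for this scheme
exact = math.cos(1.0)   # true derivative, for comparison
```

This is the baseline the abstract refers to: two solutions on systematically refined grids are needed, whereas MNP/defect correction needs only one extra solution on the same grid.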
Improving End of Life Care: an Information Systems Approach to Reducing Medical Errors
Kopec, Danny
Tamang S., CUNY, USA; Department of Research, Metropolitan Jewish Health System, NY, USA. Keywords: quality of care, end of life, electronic health records. This research will evaluate the ways ... Chronic...