Evaluating specific error characteristics of microwave-derived cloud liquid water products
Christopher, Sundar A.
Errors of cloud liquid water path (LWP) products are evaluated globally using concurrent data from visible/infrared satellite sensors; with the coincident visible/infrared satellite data, specific error characteristics of the microwave satellite measurements are isolated.
Walker, Jeff
Because of its high albedo and thermal and water storage properties, snow is an important and highly variable landscape component. Errors for the 1990-1991 snow season (November-April) have been examined. Dense vegetation, especially in the taiga, and the way snow crystals evolve with the progression of the season also contribute to the errors.
Thermodynamics of error correction
Pablo Sartori; Simone Pigolotti
2015-04-24T23:59:59.000Z
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and dissipated work of the process. Its derivation is based on the second law of thermodynamics, hence its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
On a fatal error in tachyonic physics
Edward Kapuścik
2013-08-10T23:59:59.000Z
A fatal error in the famous paper on tachyons by Gerald Feinberg is pointed out. The correct expressions for energy and momentum of tachyons are derived.
Systematic Errors of MiniBooNE
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Remarks on statistical errors in equivalent widths
Klaus Vollmann; Thomas Eversberg
2006-07-03T23:59:59.000Z
Equivalent width measurements for rapid line variability in atomic spectral lines are degraded by increasing error bars with shorter exposure times. We derive an expression for the error of the line equivalent width $\sigma(W_\lambda)$ with respect to pure photon noise statistics and provide a correction value for previous calculations.
Parameters and error of a theoretical model
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01T23:59:59.000Z
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs.
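The idea described in this abstract, estimating a model's intrinsic error by maximum likelihood from its residuals against data with known experimental errors, can be illustrated with a minimal sketch. The Gaussian-residual assumption, the function name, and the grid search below are illustrative choices, not the authors' equations:

```python
import numpy as np

def model_error_mle(data, model, sigma_exp):
    """Estimate the intrinsic error sigma_th of a theoretical model by
    maximum likelihood, assuming each residual is Gaussian with total
    variance sigma_exp_i**2 + sigma_th**2 (an illustrative sketch, not
    the paper's derivation)."""
    resid = np.asarray(data, float) - np.asarray(model, float)
    se2 = np.asarray(sigma_exp, float) ** 2

    def neg_log_like(st2):
        # negative log-likelihood up to an additive constant
        var = se2 + st2
        return 0.5 * np.sum(np.log(var) + resid ** 2 / var)

    # simple grid search over sigma_th**2; adequate for an illustration
    grid = np.linspace(0.0, 4.0 * resid.var(), 4001)
    st2 = grid[np.argmin([neg_log_like(v) for v in grid])]
    return np.sqrt(st2)
```

Running this on simulated residuals, as the abstract suggests, recovers the model error used to generate them to within statistical fluctuations.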
Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))
1990-01-01T23:59:59.000Z
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
Simulating Bosonic Baths with Error Bars
Mischa P. Woods; M. Cramer; M. B. Plenio
2015-04-07T23:59:59.000Z
We derive rigorous truncation-error bounds for the spin-boson model and its generalizations to arbitrary quantum systems interacting with bosonic baths. For the numerical simulation of such baths the truncation of both the number of modes and the local Hilbert-space dimensions is necessary. We derive super-exponential Lieb--Robinson-type bounds on the error when restricting the bath to finitely-many modes and show how the error introduced by truncating the local Hilbert spaces may be efficiently monitored numerically. In this way we give error bounds for approximating the infinite system by a finite-dimensional one. As a consequence, numerical simulations such as the time-evolving density with orthogonal polynomials algorithm (TEDOPA) now allow for the fully certified treatment of the system-environment interaction.
Stabilizer Formalism for Operator Quantum Error Correction
Poulin, D
2005-01-01T23:59:59.000Z
Operator quantum error correction is a recently developed theory that provides a generalized framework for active error correction and passive error avoiding schemes. In this paper, we describe these codes in the language of the stabilizer formalism of standard quantum error correction theory. This is achieved by adding a "gauge" group to the standard stabilizer definition of a code. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 3 of its 8 stabilizer generators, leading to a simpler decoding procedure without affecting its essential properties. This opens the path to possible improvement of the error threshold of fault tolerant quantum computing. We also derive a modified Hamming bound that applies to all stabilizer codes, including degenerate ones.
Monte Carlo errors with less errors
Ulli Wolff
2006-11-29T23:59:59.000Z
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable to benchmark efficiencies of simulation algorithms with regard to specific observables of interest. A Matlab code is offered for download that implements the method. It can also combine independent runs (replica), allowing one to judge their consistency.
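A minimal sketch of the approach: sum the estimated autocovariance function to obtain an integrated autocorrelation time and an error bar for the mean of a correlated time series. This is not the author's Matlab implementation; the truncation rule used here, cutting the sum at the first non-positive autocovariance, is a simplifying assumption:

```python
import numpy as np

def autocorr_error(data, w_max=None):
    """Error of the mean of a correlated Monte Carlo series, estimated
    by summing autocovariances (a simplified sketch of the method)."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    mean = data.mean()
    fluct = data - mean
    if w_max is None:
        w_max = n // 4
    # naive autocovariance estimates Gamma(t), t = 0 .. w_max-1
    gamma = np.array([np.dot(fluct[:n - t], fluct[t:]) / (n - t)
                      for t in range(w_max)])
    # sum until the autocovariance becomes noise-dominated
    nonpos = np.where(gamma <= 0)[0]
    w = nonpos[0] if len(nonpos) else w_max
    tau_int = 0.5 + gamma[1:w].sum() / gamma[0]
    err = np.sqrt(2.0 * tau_int * gamma[0] / n)
    return mean, err, tau_int
```

For strongly autocorrelated data the resulting error bar is larger than the naive standard error by a factor of roughly sqrt(2 tau_int), which is the point of the method.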
Olson, Eric J.
2013-06-11T23:59:59.000Z
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
Reversible (unitary) gates; ancillary qubits; controlled gates (cX, cZ); measurement; deterministic duplication. Decoding: ancillary bits are used to determine what error occurred; a syndrome bit is set to 0 if the first two bits are equal, and to 1 if not.
Deterministic treatment of model error in geophysical data assimilation
Carrassi, Alberto
2015-01-01T23:59:59.000Z
This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational cost.
Abdelhamid Awad Aly Ahmed, Salah
2008-10-10T23:59:59.000Z
QUANTUM ERROR CONTROL CODES. A Dissertation by Salah Abdelhamid Awad Aly Ahmed, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Doctor of Philosophy, May 2008. Major Subject: Computer Science.
Quantum Error Correction Workshop on
Grassl, Markus
Error correction and avoiding errors: mathematical model, decomposition of the interaction algebra. Designed Hamiltonians, main idea: "perturb the system to make it more stable"; fast (local) control operations yield an average Hamiltonian with more symmetry (cf. techniques from NMR).
Quantifying truncation errors in effective field theory
R. J. Furnstahl; N. Klco; D. R. Phillips; S. Wesolowski
2015-06-03T23:59:59.000Z
Bayesian procedures designed to quantify truncation errors in perturbative calculations of quantum chromodynamics observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions ("priors") for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We first demonstrate the calculation of Bayesian probability distributions for the EFT truncation error in some representative examples, and then focus on the application of chiral EFT to neutron-proton scattering. Epelbaum, Krebs, and Meißner recently articulated explicit rules for estimating truncation errors in such EFT calculations of few-nucleon-system properties. We find that their basic procedure emerges generically from one class of naturalness priors considered, and that all such priors result in consistent quantitative predictions for 68% DOB intervals. We then explore several methods by which the convergence properties of the EFT for a set of observables may be used to check the statistical consistency of the EFT expansion parameter.
STATISTICAL MODEL OF SYSTEMATIC ERRORS: LINEAR ERROR MODEL
Rudnyi, Evgenii B.
E.B. Rudnyi, Department of Chemistry. Covers the algorithm to maximize a likelihood function in the case of a non-linear physico-chemical model; the case of equal error variances (3.1 one-way classification, 3.2 linear regression); and 4, a real case (vaporization ...).
Annual Energy Outlook 2013 [U.S. Energy Information Administration (EIA)]
Errors of Nonobservation
Annual Energy Outlook 2013 [U.S. Energy Information Administration (EIA)]
Cold Fusion Error Unexpected
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Cold Fusion Error
Uncertainty quantification and error analysis
Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [Ohio State Univ.]; Covey, Curt [LLNL]; Ghattas, Omar [Univ. of Texas]; Graziani, Carlo [Univ. of Chicago]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC/Berkeley]; Stewart, James [SNL]
2010-01-01T23:59:59.000Z
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15T23:59:59.000Z
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
On the error estimates for the rotational pressure-correction ...
2004-06-11T23:59:59.000Z
Dec 19, 2003 ... that may be viewed as a predictor-corrector strategy aiming at ... Since for projection methods the treatment of the nonlinear term does not ... In practice, the nonlin- ... One derives immediately from the standard PDE theory that ... Let us first write the equations that control the time increments of the errors.
Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results
Clark, E.L.
1994-07-01T23:59:59.000Z
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M{infinity}, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
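The Taylor-series propagation described here can be sketched in a few lines: a sensitivity coefficient dR/dM is evaluated (below by central finite differences) and multiplied by the Mach-number uncertainty. The isentropic pressure-ratio example and the numerical values are illustrative, not taken from the report:

```python
def isentropic_pressure_ratio(mach, gamma=1.4):
    """Static-to-total pressure ratio p/p0 for isentropic flow."""
    return (1.0 + 0.5 * (gamma - 1.0) * mach ** 2) ** (-gamma / (gamma - 1.0))

def propagated_error(func, x, sigma_x, h=1e-6):
    """First-order (Taylor-series) error propagation for one input:
    sigma_R = |dR/dx| * sigma_x, with the sensitivity coefficient
    dR/dx obtained by a central finite difference."""
    sensitivity = (func(x + h) - func(x - h)) / (2.0 * h)
    return abs(sensitivity) * sigma_x, sensitivity

# Illustrative values (not from the report): M = 2.0 known to +/- 0.01
sigma_R, sens = propagated_error(isentropic_pressure_ratio, 2.0, 0.01)
```

The same pattern extends to several inputs by summing the squared contributions, which is the Taylor-series error model the report builds its tables of sensitivity coefficients for.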
DATA COMPRESSION USING WAVELETS: ERROR ...
1910-90-11T23:59:59.000Z
algorithms that introduce differences between the original and compressed data in ... to choose an error metric that parallels the human visual system, so that image ... signal data along a communications channel, one sends integer codes that ...
The Challenge of Quantum Error Correction.
Fominov, Yakov
Considerations in the design of physical bits. Hardware requirements: many (10^3 to 10^4 / R) individual bits. Error types: (a) bit-flip (classical) error; (b) phase error, in which the qubit acquires a fluctuating phase factor exp(-i ∫ E(t) dt); hence hardware error correction is needed. Classical error correction is performed by software and hardware; hardware error correction: Ising ...
Unequal error protection of subband coded bits
Devalla, Badarinath
1994-01-01T23:59:59.000Z
Source coded data can be separated into different classes based on their susceptibility to channel errors. Errors in the important bits cause greater distortion in the reconstructed signal. This thesis presents an Unequal Error Protection scheme...
A posteriori error estimates, stopping criteria, and adaptivity for multiphase compositional Darcy flow
We derive a posteriori error estimates for the compositional model of multiphase Darcy flow in porous media, consisting of a system of strongly coupled nonlinear unsteady partial differential and algebraic equations.
Outage Probability for Free-Space Optical Systems Over Slow Fading Channels With Pointing Errors
Hranilovic, Steve
We investigate the outage probability of free-space optical systems over slow fading channels with pointing errors. An expression for the outage probability is derived and we show that optimizing the transmitted power ...
Communication error detection using facial expressions
Wang, Sy Bor, 1976-
2008-01-01T23:59:59.000Z
Automatic detection of communication errors in conversational systems typically relies only on acoustic cues. However, perceptual studies have indicated that speakers do exhibit visual communication error cues passively ...
Harmonic Analysis Errors in Calculating Dipole,
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
to reduce the harmonic field calculation errors. A conformal transformation of a multipole magnet into a dipole reduces these errors. Dipole Magnet Calculations: A triangular...
Quantum Error Correction with magnetic molecules
José J. Baldoví; Salvador Cardona-Serra; Juan M. Clemente-Juan; Luis Escalera-Moreno; Alejandro Gaita-Ariño; Guillermo Mínguez Espallargas
2014-08-22T23:59:59.000Z
Quantum algorithms often assume independent spin qubits to produce trivial $|\uparrow\rangle=|0\rangle$, $|\downarrow\rangle=|1\rangle$ mappings. This can be unrealistic in many solid-state implementations with sizeable magnetic interactions. Here we show that the lower part of the spectrum of a molecule containing three exchange-coupled metal ions with $S=1/2$ and $I=1/2$ is equivalent to nine electron-nuclear qubits. We derive the relation between spin states and qubit states in reasonable parameter ranges for the rare earth $^{159}$Tb$^{3+}$ and for the transition metal Cu$^{2+}$, and study the possibility to implement Shor's Quantum Error Correction code on such a molecule. We also discuss recently developed molecular systems that could be adequate from an experimental point of view.
ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.
Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.
2004-07-26T23:59:59.000Z
We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.
Clark, E.L.
1993-08-01T23:59:59.000Z
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, calibration Mach number and Reynolds number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for nine fundamental aerodynamic ratios, most of which relate free-stream test conditions (pressure, temperature, density or velocity) to a reference condition. Tables of the ratios, R, absolute sensitivity coefficients, ∂R/∂M∞, and relative sensitivity coefficients, (M∞/R)(∂R/∂M∞), are provided as functions of M∞.
Henry L. Haselgrove; Peter P. Rohde
2007-07-03T23:59:59.000Z
In a recent study [Rohde et al., quant-ph/0603130 (2006)] of several quantum error correcting protocols designed for tolerance against qubit loss, it was shown that these protocols have the undesirable effect of magnifying the effects of depolarization noise. This raises the question of which general properties of quantum error-correcting codes might explain such an apparent trade-off between tolerance to located and unlocated error types. We extend the counting argument behind the well-known quantum Hamming bound to derive a bound on the weights of combinations of located and unlocated errors which are correctable by nondegenerate quantum codes. Numerical results show that the bound gives an excellent prediction to which combinations of unlocated and located errors can be corrected with high probability by certain large degenerate codes. The numerical results are explained partly by showing that the generalized bound, like the original, is closely connected to the information-theoretic quantity the quantum coherent information. However, we also show that as a measure of the exact performance of quantum codes, our generalized Hamming bound is provably far from tight.
Kernel Regression in the Presence of Correlated Errors
Bandwidth choice in nonparametric regression is difficult in the presence of correlated errors. There exists a wide variety ... support vector machines for regression. Keywords: nonparametric regression, correlated errors, bandwidth choice
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21T23:59:59.000Z
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
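The subset-of-rays idea in this abstract can be sketched as follows: treat the error as a sum-of-squares residual over rays (rows of a system matrix) and rescale a random-subset sum to estimate the full error. The function name, the matrix formulation, and the rescaling choice are assumptions for illustration, not the patented implementation:

```python
import numpy as np

def approximate_error(A, x, b, subset_fraction=0.1, rng=None):
    """Approximate the residual error ||A x - b||^2 using only a random
    subset of the rays (rows), rescaled to estimate the full-set error.
    A sketch of the idea, not the embodiment described in the patent."""
    rng = np.random.default_rng(rng)
    m = A.shape[0]
    k = max(1, int(subset_fraction * m))
    rows = rng.choice(m, size=k, replace=False)
    partial = np.sum((A[rows] @ x - b[rows]) ** 2)
    # rescale so the subset sum estimates the full sum of squares
    return partial * (m / k)
```

Inside a constrained conjugate gradient loop, this cheap estimate would replace the full residual evaluation when deciding how far to move along the current conjugate direction.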
Formalism for Simulation-based Optimization of Measurement Errors in High Energy Physics
Yuehong Xie
2009-04-29T23:59:59.000Z
Minimizing errors of the physical parameters of interest should be the ultimate goal of any event selection optimization in high energy physics data analysis involving parameter determination. Quick and reliable error estimation is a crucial ingredient for realizing this goal. In this paper we derive a formalism for direct evaluation of measurement errors using the signal probability density function and large fully simulated signal and background samples, without need for data fitting and background modelling. We illustrate the elegance of the formalism in the case of event selection optimization for CP violation measurement in B decays. The implication of this formalism on choosing event variables for data analysis is discussed.
Uncertainty estimates for derivatives and intercepts
Clark, E.L.
1990-01-01T23:59:59.000Z
Straight line least squares fits of experimental data are widely used in the analysis of test results to provide derivatives and intercepts. A method for evaluating the uncertainty in these parameters is described. The method utilizes conventional least squares results and is applicable to experiments where the independent variable is controlled, but not necessarily free of error. A Monte Carlo verification of the method is given. 7 refs., 2 tabs.
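For the common case of an ordinary least-squares line fit, the textbook uncertainty formulas for the slope and intercept can be written down directly. This is a sketch of the standard formulas, not the report's conventional-least-squares method or its Monte Carlo verification:

```python
import numpy as np

def linear_fit_with_uncertainty(x, y):
    """Ordinary least-squares line y = a + b*x with standard uncertainty
    estimates for intercept a and slope b (textbook formulas)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    sxx = np.sum((x - xbar) ** 2)
    b = np.sum((x - xbar) * (y - ybar)) / sxx          # slope
    a = ybar - b * xbar                                 # intercept
    resid = y - (a + b * x)
    s2 = np.sum(resid ** 2) / (n - 2)                   # residual variance
    sb = np.sqrt(s2 / sxx)                              # slope uncertainty
    sa = np.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))      # intercept uncertainty
    return a, b, sa, sb
```

These formulas assume all error resides in y; when the controlled independent variable also carries error, the uncertainties must be inflated, which is the situation the report addresses.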
Error handling strategies in multiphase inverse modeling
Finsterle, S.; Zhang, Y.
2010-12-01T23:59:59.000Z
Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
Estimating IMU heading error from SAR images.
Doerry, Armin Walter
2009-03-01T23:59:59.000Z
Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.
Original Article Error Bounds and Metric Subregularity
2014-06-18T23:59:59.000Z
theory of error bounds of extended real-valued functions. Another objective is to ... Another observation is that neighbourhood V in the original definition of metric.
Wind Power Forecasting Error Distributions over Multiple Timescales (Presentation)
Hodge, B. M.; Milligan, M.
2011-07-01T23:59:59.000Z
This presentation presents some statistical analysis of wind power forecast errors and error distributions, with examples using ERCOT data.
Error Mining on Dependency Trees Claire Gardent
Paris-Sud XI, Université de
Error Mining on Dependency Trees. Claire Gardent (CNRS, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54600, France), shashi.narayan@loria.fr. Abstract: In recent years, error mining approaches were ... We propose an algorithm for mining trees and apply it to detect the most likely sources of generation errors.
SEU induced errors observed in microprocessor systems
Asenek, V.; Underwood, C.; Oldfield, M. [Univ. of Surrey, Guildford (United Kingdom), Surrey Space Centre]; Velazco, R.; Rezgui, S.; Cheynet, P. [TIMA Lab., Grenoble (France)]; Ecoffet, R. [Centre National d'Etudes Spatiales, Toulouse (France)]
1998-12-01T23:59:59.000Z
In this paper, the authors present software tools for predicting the rate and nature of observable SEU induced errors in microprocessor systems. These tools are built around a commercial microprocessor simulator and are used to analyze real satellite application systems. Results obtained from simulating the nature of SEU induced errors are shown to correlate with ground-based radiation test data.
Inference for Model Error Allan Seheult
Oakley, Jeremy
Keywords: reservoirs, model error, reification, thermohaline circulation. Mathematical models of complex ... the uncertainties associated with both calibrating a mathematical model to observations on a physical system ... a specification exercise of model error with the cosmologists, linked to an extensive analysis of model error.
Nonparametric Regression with Correlated Errors Jean Opsomer
Wang, Yuedong
Nonparametric Regression with Correlated Errors. Jean Opsomer, Iowa State University; Yuedong Wang ... Nonparametric regression techniques are often sensitive to the presence of correlation in the errors ... splines and wavelet regression under correlation, both for short-range and long-range dependence ...
Stabilizer Formalism for Operator Quantum Error Correction
David Poulin
2006-06-14T23:59:59.000Z
Operator quantum error correction is a recently developed theory that provides a generalized framework for active error correction and passive error avoiding schemes. In this paper, we describe these codes in the stabilizer formalism of standard quantum error correction theory. This is achieved by adding a "gauge" group to the standard stabilizer definition of a code that defines an equivalence class between encoded states. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 4 of its 8 stabilizer generators, leading to a simpler decoding procedure and a wider class of logical operations without affecting its essential properties. This opens the path to possible improvements of the error threshold of fault-tolerant quantum computing.
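For reference, the standard stabilizer presentation of Shor's 9-qubit code that this construction starts from can be written as follows. This is a textbook sketch; the gauge reading in the comment follows the common subsystem-code interpretation and is not taken from the paper's own notation.

```latex
% Standard stabilizer generators of Shor's 9-qubit code (8 generators):
S = \big\langle\,
  Z_1 Z_2,\; Z_2 Z_3,\; Z_4 Z_5,\; Z_5 Z_6,\; Z_7 Z_8,\; Z_8 Z_9,\;
  X_1 X_2 X_3 X_4 X_5 X_6,\; X_4 X_5 X_6 X_7 X_8 X_9
\,\big\rangle .
% In the subsystem reading, four of the six Z-type pair operators are
% demoted to gauge generators; the encoded qubit is unaffected because
% gauge transformations act only on the virtual gauge qubits.
```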
Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling
Louisiana State University; Balman, Mehmet; Kosar, Tevfik
2010-10-27T23:59:59.000Z
Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures that are technically untraceable by users. Error messages are not logged efficiently, and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. Well-defined error detection and error reporting methods are necessary to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques, and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.
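A minimal sketch of keyword-based error classification along these lines; the categories and keywords below are illustrative assumptions, not the taxonomy used by the authors.

```python
# Minimal sketch of keyword-based error classification for transfer
# failures. Categories and keywords are illustrative assumptions.

RULES = {
    "network": ("connection refused", "timed out", "no route to host"),
    "auth": ("permission denied", "authentication failed"),
    "filesystem": ("no such file", "disk quota exceeded"),
}

def classify_error(message):
    """Map a raw error message to a coarse failure category."""
    text = message.lower()
    for category, keywords in RULES.items():
        if any(k in text for k in keywords):
            return category
    return "unknown"

classify_error("Connection refused by remote host")  # -> "network"
```

A scheduler could use such coarse categories to decide whether a retry is worthwhile (network) or pointless without intervention (auth, filesystem).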
T. Matolcsi; P. Van
2006-10-23T23:59:59.000Z
A four dimensional treatment of nonrelativistic space-time gives a natural frame to deal with objective time derivatives. In this framework some well known objective time derivatives of continuum mechanics appear as Lie derivatives. Their coordinatized forms depend on the tensorial properties of the relevant physical quantities. We calculate the particular forms of objective time derivatives for scalars, vectors, covectors and different second order tensors from the point of view of a rotating observer. The relation of substantial, material and objective time derivatives is treated.
1. DON'T confuse integral with derivative: $\int x^{-3/2}\,dx = \frac{x^{-3/2-1}}{-3/2-1} + C$ ...
2012-02-18T23:59:59.000Z
Common Error to Quiz 5. 1. DON'T confuse integral with derivative: $\int x^{-3/2}\,dx = \frac{x^{-3/2-1}}{-3/2-1} + C = -\frac{2}{5}\,x^{-5/2} + C$. Instead, $\int x^{-3/2}\,dx = \frac{x^{-3/2+1}}{-3/2+1} + C = -2\,x^{-1/2} + C$.
Quantum error-correcting codes and devices
Gottesman, Daniel (Los Alamos, NM)
2000-10-03T23:59:59.000Z
A method of forming quantum error-correcting codes by first forming a stabilizer for a Hilbert space. A quantum information processing device can be formed to implement such quantum codes.
Organizational Errors: Directions for Future Research
Carroll, John Stephen
The goal of this chapter is to promote research about organizational errors—i.e., the actions of multiple organizational participants that deviate from organizationally specified rules and can potentially result in adverse ...
Errors and paradoxes in quantum mechanics
D. Rohrlich
2007-08-28T23:59:59.000Z
Errors and paradoxes in quantum mechanics, entry in the Compendium of Quantum Physics: Concepts, Experiments, History and Philosophy, ed. F. Weinert, K. Hentschel, D. Greenberger and B. Falkenburg (Springer), to appear
Agility metric sensitivity using linear error theory
Smith, David Matthew
2000-01-01T23:59:59.000Z
Aircraft agility metrics have been proposed for use to measure the performance and capability of aircraft onboard while in-flight. The sensitivity of these metrics to various types of errors and uncertainties is not ...
Evaluating operating system vulnerability to memory errors.
Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Mueller, Frank (North Carolina State University); Fiala, David (North Carolina State University); Brightwell, Ronald Brian
2012-05-01T23:59:59.000Z
Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.
Hamlen, Kevin W.
Investigating the SANS/CWE Top 25 Programming Errors List. Running title: Investigating SANS/CWE Top 25 Programming Errors.
Deriving Displacement from a 3 axis Accelerometer Mr. Andrew Blake
Winstanley, Graham
Deriving Displacement from a 3 axis Accelerometer. Mr. Andrew Blake, University of Brighton CMIS ... 1. Introduction: The Nintendo Wii(TM), Sony's PlayStation 3(TM) and Microsoft's Xbox 360(TM) all feature a ... at 1000 seconds is 1,000,000 times greater than that at 1 second. Any small offset errors ...
Error Detection and Recovery for Robot Motion Planning with Uncertainty
Donald, Bruce Randall
1987-07-01T23:59:59.000Z
Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and uncertainty in the geometry of the environment. The last, which is called model error, has ...
A systems approach to reducing utility billing errors
Ogura, Nori
2013-01-01T23:59:59.000Z
Many methods for analyzing the possibility of errors are practiced by organizations who are concerned about safety and error prevention. However, in situations where the error occurrence is random and difficult to track, ...
Global Error bounds for systems of convex polynomials over ...
2011-11-11T23:59:59.000Z
This paper is devoted to studying the Lipschitzian/Hölderian-type global error ... set is not necessarily compact, we obtain the Hölder global error bound result.
Running jobs error: "inet_arp_address_lookup"
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Resolved: Running jobs error: "inet_arp_address_lookup". September 22, 2013, by Helen He. Symptom: After the Hopper August 14...
Neutron multiplication error in TRU waste measurements
Veilleux, John [Los Alamos National Laboratory]; Stanfield, Sean B. [CCP]; Wachter, Joe [CCP]; Ceo, Bob [CCP]
2009-01-01T23:59:59.000Z
Total Measurement Uncertainty (TMU) in neutron assays of transuranic (TRU) waste comprises several components, including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons-grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors, measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers.
This report will attempt to better define the error term due to neutron multiplication and arrive at values that are more realistic and accurate. To do so, measurements of standards and waste drums were performed with High Efficiency Neutron Counters (HENC) located at Los Alamos National Laboratory (LANL). The data were analyzed for multiplication effects and new estimates of the multiplication error were computed. A concluding section will present alternatives for reducing the number of rejections of TRU waste containers due to neutron multiplication error.
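The acceptance criterion quoted above (measured FGE plus twice the TMU must stay below the container limit) can be sketched as:

```python
# Sketch of the TRU-waste acceptance check described above: measured
# FGE plus twice the TMU must be below the container limit
# (200 g FGE for 55-gal drums, 325 g for boxed waste).

LIMITS = {"drum": 200.0, "box": 325.0}

def accepts(fge_grams, tmu_grams, container="drum"):
    """Return True if the container passes the FGE + 2*TMU criterion."""
    return fge_grams + 2.0 * tmu_grams < LIMITS[container]

# A large multiplication-driven TMU can push an otherwise-compliant
# drum over the limit:
accepts(150.0, 20.0, "drum")   # 150 + 40 = 190 < 200, passes
accepts(150.0, 30.0, "drum")   # 150 + 60 = 210, rejected
```

This makes concrete why shrinking the multiplication term of the TMU directly reduces rejections.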
Shared dosimetry error in epidemiological dose-response analyses
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo
2015-03-23T23:59:59.000Z
Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e., the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when errors in dose estimates are ignored, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
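The mean-dose strategy analyzed above (averaging each person's dose over the Monte Carlo realizations before fitting) can be sketched as follows; the realization values are invented for illustration.

```python
# Sketch: collapsing multiple dose realizations to a per-person mean
# dose, as in the analysis strategy discussed above. The dose values
# below are invented for illustration.

def mean_doses(realizations):
    """realizations: list of dose vectors, one per Monte Carlo run;
    returns the per-person mean dose across runs."""
    n_runs = len(realizations)
    n_people = len(realizations[0])
    return [sum(run[i] for run in realizations) / n_runs
            for i in range(n_people)]

runs = [
    [0.10, 0.50, 1.00],   # realization 1 (Gy)
    [0.30, 0.40, 0.80],   # realization 2
    [0.20, 0.60, 1.20],   # realization 3
]
means = mean_doses(runs)
```

The paper's point is that fitting the ERR model to `means` gives unbiased slope estimates but understates the standard errors; the spread across `runs` is what the adjusted information matrix recovers.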
Optimal error estimates for corrected trapezoidal rules
Talvila, Erik
2012-01-01T23:59:59.000Z
Corrected trapezoidal rules are proved for $\int_a^b f(x)\,dx$ under the assumption that $f''\in L^p([a,b])$ for some $1\leq p\leq\infty$. Such quadrature rules involve the trapezoidal rule modified by the addition of a term $k[f'(a)-f'(b)]$. The coefficient $k$ in the quadrature formula is found that minimizes the error estimates. It is shown that when $f'$ is merely assumed to be continuous then the optimal rule is the trapezoidal rule itself. In this case error estimates are in terms of the Alexiewicz norm. This includes the case when $f''$ is integrable in the Henstock--Kurzweil sense or as a distribution. All error estimates are shown to be sharp for the given assumptions on $f''$. It is shown how to make these formulas exact for all cubic polynomials $f$. Composite formulas are computed for uniform partitions.
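A minimal sketch of the corrected composite trapezoidal rule with the classical Euler-Maclaurin coefficient k = h^2/12 (one standard choice; the paper optimizes k under weaker assumptions on f''):

```python
# Sketch of the corrected composite trapezoidal rule: the plain rule
# plus the end correction k*[f'(a) - f'(b)] with the classical
# Euler-Maclaurin choice k = h^2/12.

def corrected_trapezoid(f, fprime, a, b, n):
    """Composite trapezoidal rule on n subintervals with end correction."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    plain = h * total
    return plain + (h * h / 12.0) * (fprime(a) - fprime(b))

# For f(x) = x^3 on [0, 1] (exact integral 1/4) the corrected rule is
# exact up to rounding, since the correction kills the h^2 error term
# and the next Euler-Maclaurin term vanishes for cubics.
approx = corrected_trapezoid(lambda x: x**3, lambda x: 3 * x**2, 0.0, 1.0, 4)
```

This illustrates the abstract's remark that the formulas can be made exact for all cubic polynomials.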
New insights on numerical error in symplectic integration
Hugo Jiménez-Pérez; Jean-Pierre Vilotte; Barbara Romanowicz
2015-08-13T23:59:59.000Z
We implement and investigate the numerical properties of a new family of integrators connecting both variants of the symplectic Euler schemes, and including an alternative to the classical symplectic mid-point scheme, with some additional terms. This family is derived from a new method, introduced in a previous study, for generating symplectic integrators based on the concept of special symplectic manifold. The use of symplectic rotations and a particular type of projection keeps the whole procedure within the symplectic framework. We show that it is possible to define a set of parameters that control the additional terms providing a way of "tuning" these new symplectic schemes. We test the "tuned" symplectic integrators with the perturbed pendulum and we compare its behavior with an explicit scheme for perturbed systems. Remarkably, for the given examples, the error in the energy integral can be reduced considerably. There is a natural geometrical explanation, sketched at the end of this paper. This is the subject of a parallel article where a finer analysis is performed. Numerical results obtained in this paper open a new point of view on symplectic integrators and Hamiltonian error.
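As a generic illustration of the kind of integrator discussed above, here is the semi-implicit (symplectic) Euler scheme applied to the simple pendulum; this is a textbook scheme used to show bounded energy error, not the paper's "tuned" family.

```python
# Sketch: symplectic Euler for the pendulum H(q, p) = p^2/2 - cos(q).
# A generic illustration of symplectic integration, not the tuned
# schemes constructed in the paper.
import math

def symplectic_euler(q, p, h, steps):
    """Semi-implicit Euler: kick p with the old q, then drift q with the new p."""
    for _ in range(steps):
        p -= h * math.sin(q)   # p_{n+1} = p_n - h * dH/dq(q_n)
        q += h * p             # q_{n+1} = q_n + h * p_{n+1}
    return q, p

def energy(q, p):
    """Pendulum Hamiltonian H(q, p) = p^2/2 - cos(q)."""
    return 0.5 * p * p - math.cos(q)

q0, p0 = 1.0, 0.0
q1, p1 = symplectic_euler(q0, p0, 0.01, 10000)
drift = abs(energy(q1, p1) - energy(q0, p0))
# drift stays bounded (no secular growth), a hallmark of symplectic schemes
```

The paper's tuned schemes aim to shrink exactly this bounded energy error further for perturbed systems.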
Mather, Mara
Running head: STEREOTYPE THREAT REDUCES MEMORY ERRORS. Stereotype threat can reduce older adults' ... 90089-0191. Phone: 213-740-6772. Email: barbersa@usc.edu. Abstract (144 words): Stereotype threat often incurs the cost of reducing the amount of information ...
On the Error in QR Integration
Dieci, Luca; Van Vleck, Erik
2008-03-07T23:59:59.000Z
... $[R(t_k, t_{k-1}) + E_k] \cdots [R(t_2, t_1) + E_2][R(t_1, t_0) + E_1]R(t_0)$, $k = 1, 2, \ldots$, where $Q(t_k)$ is the exact Q-factor at $t_k$ and the triangular transitions $R(t_j, t_{j-1})$ are also the exact ones. Moreover, the factors $E_j$, $j = 1, \ldots, k$, are bounded in norm by the local error committed during integration of the relevant differential equations; see Theorems 3.1 and 3.16." We will henceforth simply write (2.7) $\|E_j\| \le \epsilon$, $j = 1, 2, \ldots$, and stress that $\epsilon$ is computable, in fact controllable, in terms of local error tolerances ...
Recent experiences with error estimation and adaptivity
Haque, Khalid Ansar
1991-01-01T23:59:59.000Z
RECENT EXPERIENCES WITH ERROR ESTIMATION AND ADAPTIVITY. A Thesis by KHALID ANSAR HAQUE. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, December 1991. Major Subject: Aerospace Engineering. Approved as to style and content by: T. Strouboulis (Chair of Committee), W. E. ...
Laser Phase Errors in Seeded FELs
Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC
2012-03-28T23:59:59.000Z
Harmonic seeding of free electron lasers has attracted significant attention from the promise of transform-limited pulses in the soft X-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.
Analysis of Solar Two Heliostat Tracking Error Sources
Jones, S.A.; Stone, K.W.
1999-01-28T23:59:59.000Z
This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.
High Performance Dense Linear System Solver with Soft Error Resilience
Dongarra, Jack
High Performance Dense Linear System Solver with Soft Error Resilience. Peng Du, Piotr Luszczek ... systems, and in some scientific applications C/R is not applicable for soft errors at all due to error ... high performance dense linear system solver with soft error resilience. By adopting a mathematical ...
Distribution of Wind Power Forecasting Errors from Operational Systems (Presentation)
Hodge, B. M.; Ela, E.; Milligan, M.
2011-10-01T23:59:59.000Z
This presentation offers new data and statistical analysis of wind power forecasting errors in operational systems.
Lateral boundary errors in regional numerical weather
Žumer, Slobodan
Lateral boundary errors in regional numerical weather prediction models. Author: Ana Car. Advisor: ... weather services for short-range forecasts. These models cover smaller areas with higher resolution ... Introduction: Equations for numerical weather prediction (NWP) are mathematical representations of physical ...
MEASUREMENT AND CORRECTION OF ULTRASONIC ANEMOMETER ERRORS
Heinemann, Detlev
... commonly show systematic errors depending on wind speed due to inaccurate ultrasonic transducer mounting ... three-dimensional wind speed time series. Results for the variance and power spectra are shown. 1 ... wind speeds with ultrasonic anemometers: The measured flow is distorted by the probe head ...
Definitions Derived from Neutrosophics
Florentin Smarandache
2003-01-28T23:59:59.000Z
Thirty-three new definitions are presented, derived from neutrosophic set, neutrosophic probability, neutrosophic statistics, and neutrosophic logic. Each one is independent, short, with references and cross references like in a dictionary style.
Lin, Shaowei
2014-07-02T23:59:59.000Z
The enactment of derivative action was expected to be actively used by shareholders to protect their interests. In fact, it turned out that this reform effort seemed futile as the right to engage in such actions was ...
Rüther, Henrique
2007-01-01T23:59:59.000Z
The amounts outstanding of credit derivatives have grown exponentially over the past years, and these financial intruments that allow market participants to trade credit risk have become very popular in Europe and in the ...
Makarenkov, Vladimir
...mental data requires an efficient automatic routine for the selection of hits. Unfortunately, random and systematic errors can ...
Tradeoff between energy and error in the discrimination of quantum-optical devices
Alessandro Bisio; Michele Dall'Arno; Giacomo Mauro D'Ariano
2011-07-11T23:59:59.000Z
We address the problem of energy-error tradeoff in the discrimination between two linear passive quantum optical devices with a single use. We provide an analytical derivation of the optimal strategy for beamsplitters and an iterative algorithm converging to the optimum in the general case. We then compare the optimal strategy with a simpler strategy using coherent input states and homodyne detection. It turns out that the former requires much less energy in order to achieve the same performances.
Quantum Latin squares and unitary error bases
Benjamin Musto; Jamie Vicary
2015-04-10T23:59:59.000Z
In this paper we introduce quantum Latin squares, combinatorial quantum objects which generalize classical Latin squares, and investigate their applications in quantum computer science. Our main results are on applications to unitary error bases (UEBs), basic structures in quantum information which lie at the heart of procedures such as teleportation, dense coding and error correction. We present a new method for constructing a UEB from a quantum Latin square equipped with extra data. Developing construction techniques for UEBs has been a major activity in quantum computation, with three primary methods proposed: shift-and-multiply, Hadamard, and algebraic. We show that our new approach simultaneously generalizes the shift-and-multiply and Hadamard methods. Furthermore, we explicitly construct a UEB using our technique which we prove cannot be obtained from any of these existing methods.
Gross error detection in process data
Singh, Gurmeet
1992-01-01T23:59:59.000Z
..., 1991), with many optimum properties, seems to have been untapped by chemical engineers. We first review the background of the $T^2$ test, and present relevant properties of the test. IV. A Hotelling's Generalization of Student's t Test. One of the most ... Major Subject: Chemical Engineering. GROSS ERROR DETECTION IN PROCESS DATA. A Thesis by GURMEET SINGH. Approved as to style and content by: Ralph E. White (Chair of Committee), Michael Nikolaou (Member), Richard B. Griffin (Member), R. W. Flummerfelt (Head ...
Improving Memory Error Handling Using Linux
Carlton, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Blanchard, Sean P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Debardeleben, Nathan A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-07-25T23:59:59.000Z
As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, then the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable to deal with by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and results in reducing both hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. It will not be feasible without memory error handling to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
Message passing in fault tolerant quantum error correction
Z. W. E. Evans; A. M. Stephens
2008-06-13T23:59:59.000Z
Inspired by Knill's scheme for message passing error detection, here we develop a scheme for message passing error correction for the nine-qubit Bacon-Shor code. We show that for two levels of concatenated error correction, where classical information obtained at the first level is used to help interpret the syndrome at the second level, our scheme will correct all cases with four physical errors. This results in a reduction of the logical failure rate relative to conventional error correction by a factor proportional to the reciprocal of the physical error rate.
Derived Azumaya algebras and generators for twisted derived categories
Toen, Bertrand
... implies the existence of a global compact generator. We present explicit examples of derived Azumaya ...
Human error contribution to nuclear materials-handling events
Sutton, Bradley (Bradley Jordan)
2007-01-01T23:59:59.000Z
This thesis analyzes a sample of 15 fuel-handling events from the past ten years at commercial nuclear reactors with significant human error contributions in order to detail the contribution of human error to fuel-handling ...
Evolved Error Management Biases in the Attribution of Anger
Galperin, Andrew
2012-01-01T23:59:59.000Z
von Hippel, W., Poore, J. C., Buss, D. M., et al. (under ... Haselton, M. G., & Buss, D. M. (2000). Error ... 27, 733-763.
Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications
Garcia, Tanya
2012-10-19T23:59:59.000Z
... to the models considered in Tsiatis and Ma (2004), our model is less stringent because it allows an unspecified model error distribution and unspecified covariate distribution, not just the latter. With an unspecified model error distribution, the RMM ... with measurement error is a very different problem compared to the model considered in Tsiatis and Ma (2004), where the model error distribution has a known parametric form. Consequently, the semiparametric treatment here is also drastically different. Our ...
Pushing schedule derivation method
Henriquez, B. [Compania Siderurgica Huachipato S.A., Talcahuano (Chile)
1996-12-31T23:59:59.000Z
The development of a Pushing Schedule Derivation Method has allowed the company to sustain the maximum production rate at CSH's Coke Oven Battery, in spite of having single set oven machinery with a high failure index as well as a heat top tendency. The stated method provides for scheduled downtime of up to two hours for machinery maintenance purposes, periods of empty ovens for decarbonization and production loss recovery capability, while observing lower limits and uniformity of coking time.
Franklin Trouble Shooting and Error Messages
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Error Analysis in Nuclear Density Functional Theory
Nicolas Schunck; Jordan D. McDonnell; Jason Sarich; Stefan M. Wild; Dave Higdon
2014-07-11T23:59:59.000Z
Nuclear density functional theory (DFT) is the only microscopic, global approach to the structure of atomic nuclei. It is used in numerous applications, from determining the limits of stability to gaining a deep understanding of the formation of elements in the universe or the mechanisms that power stars and reactors. The predictive power of the theory depends on the amount of physics embedded in the energy density functional as well as on efficient ways to determine a small number of free parameters and solve the DFT equations. In this article, we discuss the various sources of uncertainties and errors encountered in DFT and possible methods to quantify these uncertainties in a rigorous manner.
A Taxonomy of Number Entry Error Sarah Wiseman
Subramanian, Sriram
A Taxonomy of Number Entry Error. Sarah Wiseman, UCLIC, MPEB, Malet Place, London, WC1E 7JE. sarah... ... and the subsequent process of creating a taxonomy of errors from the information gathered. A total of 345 errors were ... These codes are then organised into a taxonomy similar to that of Zhang et al (2004). We show how ...
Susceptibility of Commodity Systems and Software to Memory Soft Errors
Riska, Alma
Susceptibility of Commodity Systems and Software to Memory Soft Errors. Alan Messer, Member, IEEE. Abstract--It is widely understood that most system downtime is accounted for by programming errors ... transient errors in computer system hardware due to external factors, such as cosmic rays. This work ...
Predictors of Threat and Error Management: Identification of Core
Predictors of Threat and Error Management: Identification of Core Nontechnical Skills. In normal flight operations, crews are faced with a variety of external threats and commit a range of errors. The management of these threats and errors therefore forms an essential element of enhancing performance and minimizing risk ...
Bolstered Error Estimation Ulisses Braga-Neto a,c
Braga-Neto, Ulisses
... the bolstered error estimators proposed in this paper, as part of a larger library for classification and error ... of the data. It has a direct geometric interpretation and can be easily applied to any classification rule ... as smoothed error estimation. In some important cases, such as a linear classification rule with a Gaussian ...
Error rate and power dissipation in nano-logic devices
Kim, Jong Un
2004-01-01T23:59:59.000Z
Current-controlled logic and single-electron logic processors have been investigated with respect to thermally induced bit errors. A maximal error rate for both logic processors is regarded as one bit error/year/chip. A maximal clock frequency...
Polian, Ilia
... of soft errors in modern microprocessors has been reported to never lead to a system failure. ... techniques are enhanced by a methodology to handle soft errors on address bits. Furthermore, we demonstrate ... Consequently, many state-of-the-art systems provide soft error detection and correction capabilities [Hass 89] ...
Technological Advancements and Error Rates in Radiation Therapy Delivery
Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)
2011-11-15T23:59:59.000Z
Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. 
There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
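As an illustrative aside, the exact test used above can be computed in a few lines of Python. The per-technique fraction counts below are hypothetical: the abstract reports 155 errors among 241,546 fractions, but not the split of fractions by technique assumed here.

```python
from fractions import Fraction
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    probability of a table with first cell >= a, margins fixed."""
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)
    p = Fraction(0)
    for k in range(a, min(row1, col1) + 1):
        # hypergeometric probability of exactly k errors in group 1
        p += Fraction(comb(row1, k) * comb(n - row1, col1 - k), denom)
    return float(p)

# Hypothetical split of the reported 241,546 fractions and 155 errors:
conv_err, conv_ok = 136, 141410   # 3D/conventional: illustrative counts
imrt_err, imrt_ok = 19, 99981     # IMRT: illustrative counts
p = fisher_exact_one_sided(conv_err, conv_ok, imrt_err, imrt_ok)
print(p)
```

With any such split favoring fewer IMRT errors, the one-sided p-value lands far below 0.001, in the direction of the reported association.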
Locked modes and magnetic field errors in MST
Almagri, A.F.; Assadi, S.; Prager, S.C.; Sarff, J.S.; Kerst, D.W.
1992-06-01T23:59:59.000Z
In the MST reversed-field pinch, magnetic oscillations become stationary (locked) in the lab frame as a result of a process involving interactions between the modes, sawteeth, and field errors. Several helical modes become phase-locked to each other, forming a rotating localized disturbance; the disturbance locks to an impulsive field error generated at a sawtooth crash; the error fields grow monotonically after locking (perhaps due to an unstable interaction between the modes and the field error); and over the tens of milliseconds of growth, confinement degrades and the discharge eventually terminates. Field-error control has been partially successful in eliminating locking.
Plasma dynamics and a significant error of macroscopic averaging
Marek A. Szalek
2005-05-22T23:59:59.000Z
The methods of macroscopic averaging used to derive the macroscopic Maxwell equations from electron theory are methodologically incorrect and lead in some cases to a substantial error. For instance, these methods do not take into account the existence of a macroscopic electromagnetic field EB, HB generated by carriers of electric charge moving in a thin layer adjacent to the boundary of the physical region containing these carriers. If this boundary is impenetrable for charged particles, then in its immediate vicinity all carriers are accelerated towards the inside of the region. The existence of the privileged direction of acceleration results in the generation of the macroscopic field EB, HB. The contributions to this field from individual accelerated particles are described with a sufficient accuracy by the Lienard-Wiechert formulas. In some cases the intensity of the field EB, HB is significant not only for deuteron plasma prepared for a controlled thermonuclear fusion reaction but also for electron plasma in conductors at room temperatures. The corrected procedures of macroscopic averaging will induce some changes in the present form of plasma dynamics equations. The modified equations will help to design improved systems of plasma confinement.
Error analysis of nuclear forces and effective interactions
R. Navarro Perez; J. E. Amaro; E. Ruiz Arriola
2014-09-04T23:59:59.000Z
The Nucleon-Nucleon interaction is the starting point for ab initio nuclear structure and nuclear reaction calculations. These are effectively carried out via effective interactions fitted to scattering data up to a maximal center-of-mass momentum. However, NN interactions are subject to statistical and systematic uncertainties which are expected to propagate and to affect the predictive power and accuracy of theoretical calculations, regardless of the numerical accuracy of the method used to solve the many-body problem. We stress the necessary conditions required for a correct and self-consistent statistical interpretation of the discrepancies between theory and experiment which enable a subsequent statistical error propagation and correlation analysis. We comprehensively discuss a stringent and recently proposed tail-sensitive normality test and provide a simple recipe to implement it. As an application, we analyze the deduced uncertainties and correlations of effective interactions in terms of Moshinsky-Skyrme parameters and effective field theory counterterms as derived from the bare NN potential containing One-Pion-Exchange and Chiral Two-Pion-Exchange interactions inferred from scattering data.
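The tail-sensitive test proposed in the paper is not reproduced here; as a simpler stand-in, a moment-based (Jarque-Bera-style) normality statistic on residuals illustrates the kind of check meant, flagging a heavy-tailed residual sample while passing a Gaussian one.

```python
import random

def jarque_bera(residuals):
    """Moment-based normality statistic (Jarque-Bera form); a simpler
    stand-in for the tail-sensitive test discussed in the text."""
    n = len(residuals)
    mean = sum(residuals) / n
    m2 = sum((r - mean) ** 2 for r in residuals) / n
    m3 = sum((r - mean) ** 3 for r in residuals) / n
    m4 = sum((r - mean) ** 4 for r in residuals) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    # under normality this is approximately chi-squared with 2 dof
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

random.seed(1)
gauss_res = [random.gauss(0.0, 1.0) for _ in range(2000)]       # normal
heavy_res = [random.gauss(0.0, 1.0) ** 3 for _ in range(2000)]  # heavy-tailed
print(jarque_bera(gauss_res), jarque_bera(heavy_res))
```

The Gaussian sample gives a statistic of order a few (consistent with chi-squared, 2 dof), while the cubed-Gaussian residuals give a statistic orders of magnitude larger.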
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Beckerman, M.; Jones, J.P.
1999-02-01T23:59:59.000Z
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors, and the amplitudes of the radial and cross-track errors, increase.
In Search of a Taxonomy for Classifying Qualitative Spreadsheet Errors
Przasnyski, Zbigniew; Seal, Kala Chand
2011-01-01T23:59:59.000Z
Most organizations use large and complex spreadsheets that are embedded in their mission-critical processes and are used for decision-making purposes. Identification of the various types of errors that can be present in these spreadsheets is, therefore, an important control that organizations can use to govern their spreadsheets. In this paper, we propose a taxonomy for categorizing qualitative errors in spreadsheet models that offers a framework for evaluating the readiness of a spreadsheet model before it is released for use by others in the organization. The classification was developed based on types of qualitative errors identified in the literature and errors committed by end-users in developing a spreadsheet model for Panko's (1996) "Wall problem". Closer inspection reveals four logical groupings of the errors, creating four categories of qualitative errors. The usability and limitations of the proposed taxonomy and areas for future extension are discussed.
Integrating human related errors with technical errors to determine causes behind offshore accidents
Aamodt, Agnar
… errors were embedded as an integral part of the oil-well drilling operation … The method, based on a knowledge model of the oil-well drilling process, supports assessment of the failure and aims to reduce the amount of non-productive time (NPT) during oil-well drilling; NPT exhibits a much lower declining trend than …
Hess-Flores, M
2011-11-10T23:59:59.000Z
Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. 
Another application is in reconstruction pre-processing, where an algorithm detects and discards frames that would lead to inaccurate feature matching, camera pose estimation degeneracies or mathematical instability in structure computation based on a residual error comparison between two different match motion models. The presented algorithms were designed for aerial video but have been proven to work across different scene types and camera motions, and for both real and synthetic scenes.
Output error identification of hydrogenerator conduit dynamics
Vogt, M.A.; Wozniak, L. (Illinois Univ., Urbana, IL (USA)); Whittemore, T.R. (Bureau of Reclamation, Denver, CO (USA))
1989-09-01T23:59:59.000Z
Two output-error model reference adaptive identifiers are considered for estimating the parameters in a reduced-order gate-position-to-pressure model of the hydrogenerator. This information may later be useful in an adaptive controller. Gradient and sensitivity-functions identifiers are discussed for the hydroelectric application, and connections are made between their structural differences and relative performance. Simulations are presented to support the conclusion that the latter algorithm is more robust, having better disturbance rejection and less sensitivity to plant-model mismatch; for identification from plant data recorded during step gate inputs, the gradient algorithm fails to converge altogether. A method for checking the estimated parameters is developed by relating the coefficients in the reduced-order model to head, an externally measurable parameter.
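A hedged sketch of an output-error gradient identifier on a hypothetical first-order plant; the sensitivity-functions variant contrasted in the paper filters the gradient, which this crude version omits, and the plant coefficients and gain mu are assumptions.

```python
import random

# Hypothetical first-order plant: y[k] = a*y[k-1] + b*u[k-1]
a_true, b_true = 0.9, 0.5
random.seed(0)
u = [random.uniform(-1.0, 1.0) for _ in range(20000)]

y = [0.0]
for k in range(1, len(u)):
    y.append(a_true * y[k - 1] + b_true * u[k - 1])

# Output-error identifier: the model runs on its own past output, and the
# parameter update follows a crude (unfiltered) gradient of the squared
# output error.
a_hat, b_hat, ym_prev = 0.0, 0.0, 0.0
mu = 0.02  # adaptation gain (tuning assumption)
for k in range(1, len(u)):
    ym = a_hat * ym_prev + b_hat * u[k - 1]
    e = y[k] - ym                            # output error
    a_hat += mu * e * ym_prev                # approx. d(ym)/d(a_hat)
    b_hat += mu * e * u[k - 1]               # approx. d(ym)/d(b_hat)
    a_hat = max(-0.98, min(0.98, a_hat))     # keep the model stable
    ym_prev = ym

print(round(a_hat, 2), round(b_hat, 2))
```

The estimates settle near the true (0.9, 0.5); with noisy plant data or step inputs the unfiltered gradient is far less reliable, which is the robustness gap the abstract describes.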
Pressure Change Measurement Leak Testing Errors
Pryor, Jeff M [ORNL]; Walker, William C [ORNL]
2014-01-01T23:59:59.000Z
A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monoatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rate with altered formulas specific to those types of tests, using the same methodology.
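One of the common errors alluded to, ignoring the gas temperature change, can be illustrated with the ideal gas law; every number below (volume, pressures, temperatures, duration) is an illustrative assumption, not a value from the paper.

```python
# Temperature-compensated pressure change leak test (ideal gas sketch).
V = 2.0                       # vessel volume, m^3 (illustrative)
dt = 3600.0                   # test duration, s
P1, T1 = 500_000.0, 293.15    # start pressure (Pa) and temperature (K)
P2, T2 = 495_000.0, 291.15    # end pressure and temperature

# Naive leak rate ignores the temperature drop and over-reports the leak:
naive = V * (P1 - P2) / dt                       # Pa*m^3/s

# Compensated: compare P/T so gas cooling is not mistaken for leakage.
compensated = V * (P1 / T1 - P2 / T2) * T1 / dt  # Pa*m^3/s, referred to T1

print(round(naive, 3), round(compensated, 3))
```

Here about two thirds of the apparent pressure loss is real leakage and the rest is cooling, so the naive figure overstates the leak rate roughly threefold.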
Huang, Weidong
2011-01-01T23:59:59.000Z
Surface slope error of the concentrator is one of the main factors influencing the performance of solar concentrating collectors: it causes deviation of the reflected ray and reduces the intercepted radiation. This paper presents a general equation, derived through geometric optics, for calculating the standard deviation of the reflected-ray error from the slope error, applies the equation to five kinds of solar concentrating reflector, and provides typical results. The results indicate that the slope error is transferred to the reflected ray amplified by a factor of more than 2 when the incidence angle is greater than 0. The equation for the reflected-ray error fits all reflective surfaces in general, and can also be applied to control the error in designing an off-axis (abaxial) optical system.
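A Monte Carlo check of the factor-two slope-to-ray transfer at normal incidence (a sketch only; the paper's general oblique-incidence equation is not reproduced here, and the slope error magnitude is illustrative):

```python
import math, random

def reflect(d, n):
    """Reflect direction d about unit normal n."""
    dot = sum(a * b for a, b in zip(d, n))
    return [a - 2 * dot * b for a, b in zip(d, n)]

random.seed(0)
slope_std = 2.0e-3            # surface slope error, rad (illustrative)
d = [0.0, 0.0, -1.0]          # incoming ray along -z, normal incidence
r0 = reflect(d, [0.0, 0.0, 1.0])   # nominal reflected ray
total, N = 0.0, 20000
for _ in range(N):
    delta = random.gauss(0.0, slope_std)     # normal tilt magnitude
    phi = random.uniform(0.0, 2 * math.pi)   # tilt azimuth
    n = [math.sin(delta) * math.cos(phi),
         math.sin(delta) * math.sin(phi),
         math.cos(delta)]
    r = reflect(d, n)
    dev = math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(r, r0)))))
    total += dev ** 2
ray_std = math.sqrt(total / N)
print(ray_std / slope_std)    # ~2: the ray error doubles the slope error
```

At normal incidence the reflected ray turns by exactly twice the normal tilt, so the ratio printed converges to 2; the paper's point is that oblique incidence pushes this transfer factor above 2.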
Xiaofeng Wu; Guanrong Chen; Jianping Cai
2008-07-14T23:59:59.000Z
This paper provides a unified method for analyzing chaos synchronization of the generalized Lorenz systems. The considered synchronization scheme consists of identical master and slave generalized Lorenz systems coupled by linear state error variables. A sufficient synchronization criterion for a general linear state error feedback controller is rigorously proven by means of linearization and Lyapunov's direct methods. When a simple linear controller is used in the scheme, some easily implemented algebraic synchronization conditions are derived based on the upper and lower bounds of the master chaotic system. These criteria are further optimized to improve their sharpness. The optimized criteria are then applied to four typical generalized Lorenz systems, i.e. the classical Lorenz system, the Chen system, the Lü system and a unified chaotic system, obtaining precise corresponding synchronization conditions. The advantages of the new criteria are revealed by analytically and numerically comparing their sharpness with that of the known criteria existing in the literature.
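A minimal simulation of the scheme described above, coupling identical classical Lorenz systems through linear state-error feedback; the gain k and step size are tuning assumptions, not values from the paper's optimized criteria.

```python
# Master-slave classical Lorenz systems with linear error feedback
# u = k*(master - slave) applied to the slave.
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt, k = 0.001, 50.0
m = [1.0, 1.0, 1.0]       # master state
s = [-5.0, 4.0, 20.0]     # slave state, started far away
for _ in range(20000):    # forward Euler over 20 time units
    fm, fs = lorenz(m), lorenz(s)
    u = [k * (a - b) for a, b in zip(m, s)]   # linear state-error feedback
    m = [a + dt * f for a, f in zip(m, fm)]
    s = [b + dt * (f + c) for b, f, c in zip(s, fs, u)]

err = max(abs(a - b) for a, b in zip(m, s))
print(err)                # synchronization error after coupling
```

With a sufficiently large gain the slave tracks the chaotic master and the error collapses toward zero, which is the behavior the algebraic criteria in the paper guarantee.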
P. A. Sturrock; J. D. Scargle
2006-06-20T23:59:59.000Z
The purpose of this article is to carry out a power-spectrum analysis (based on likelihood methods) of the Super-Kamiokande 5-day dataset that takes account of the asymmetry in the error estimates. Whereas the likelihood analysis involves a linear optimization procedure for symmetrical error estimates, it involves a nonlinear optimization procedure for asymmetrical error estimates. We find that for most frequencies there is little difference between the power spectra derived from analyses of symmetrized error estimates and from asymmetrical error estimates. However, this proves not to be the case for the principal peak in the power spectra, which is found at 9.43 yr^-1. A likelihood analysis which allows for a "floating offset" and takes account of the start time and end time of each bin and of the flux estimate and the symmetrized error estimate leads to a power of 11.24 for this peak. A Monte Carlo analysis shows that there is a chance of only 1% of finding a peak this big or bigger in the frequency band 1-36 yr^-1 (the widest band that avoids artificial peaks). On the other hand, an analysis that takes account of the error asymmetry leads to a peak with power 13.24 at that frequency. A Monte Carlo analysis shows that there is a chance of only 0.1% of finding a peak this big or bigger in that frequency band 1-36 yr^-1. From this perspective, power spectrum analysis that takes account of asymmetry of the error estimates gives evidence for variability that is significant at the 99.9% level. We comment briefly on an apparent discrepancy between power spectrum analyses of the Super-Kamiokande and SNO solar neutrino experiments.
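The nonlinearity introduced by asymmetric error estimates can be seen with a two-piece Gaussian likelihood on made-up data (a generic construction, not the article's floating-offset likelihood): the effective sigma depends on which side of each data point the parameter sits, so the fit is no longer a linear problem.

```python
# (value, sigma_minus, sigma_plus): illustrative asymmetric error bars
data = [(5.2, 0.4, 0.7), (4.6, 0.3, 0.5), (5.9, 0.5, 0.9),
        (5.0, 0.4, 0.6), (4.8, 0.3, 0.5)]

def chi2(mu):
    """-2 ln L for a constant mu under a two-piece Gaussian model."""
    total = 0.0
    for x, sig_lo, sig_hi in data:
        sig = sig_hi if mu > x else sig_lo   # side-dependent sigma
        total += ((mu - x) / sig) ** 2
    return total

# Because sigma depends on mu, this is a nonlinear 1-D optimization;
# a fine grid scan is enough for this sketch.
grid = [3.0 + i * 0.001 for i in range(4001)]
best = min(grid, key=chi2)
sym = sum(x for x, _, _ in data) / len(data)  # symmetric-error answer
print(round(best, 3), round(sym, 3))
```

With larger upward error bars, deviations above a point are cheap, so the asymmetric fit lands slightly above the symmetric average; the same mechanism shifts the peak powers quoted in the abstract.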
Error models in quantum computation: an application of model selection
Lucia Schwarz; Steven van Enk
2013-09-04T23:59:59.000Z
Threshold theorems for fault-tolerant quantum computing assume that errors are of certain types. But how would one detect whether errors of the "wrong" type occur in one's experiment, especially if one does not even know what type of error to look for? The problem is that for many qubits a full state description is intractable to analyze, and a full process description even more so. As a result, one simply cannot detect all types of errors. Here we show through a quantum state estimation example (on up to 25 qubits) how to attack this problem using model selection. We use, in particular, the Akaike Information Criterion. The example indicates that the number of measurements that one has to perform before noticing errors of the wrong type scales polynomially both with the number of qubits and with the error size.
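A toy illustration of the Akaike Information Criterion on made-up Gaussian data (not the paper's qubit example): AIC = 2k - 2 ln L penalizes each extra fitted parameter, and the richer model wins only when its likelihood gain exceeds the penalty.

```python
import math, random

random.seed(3)
data = [random.gauss(0.5, 1.0) for _ in range(200)]  # true mean is 0.5
n = len(data)

def gauss_loglik(xs, mu, sigma):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

# Model A: zero-mean unit-sigma noise, 0 fitted parameters
aic_a = 2 * 0 - 2 * gauss_loglik(data, 0.0, 1.0)
# Model B: fitted mean, unit sigma, 1 fitted parameter
mu_hat = sum(data) / n
aic_b = 2 * 1 - 2 * gauss_loglik(data, mu_hat, 1.0)
print(aic_a > aic_b)   # the extra parameter is worth its AIC penalty here
```

In the paper's setting the candidate models are error models for the quantum state, but the selection rule is the same comparison of penalized likelihoods.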
A two reservoir model of quantum error correction
James P. Clemens; Julio Gea-Banacloche
2005-08-22T23:59:59.000Z
We consider a two reservoir model of quantum error correction with a hot bath causing errors in the qubits and a cold bath cooling the ancilla qubits to a fiducial state. We consider error correction protocols both with and without measurement of the ancilla state. The error correction acts as a kind of refrigeration process to maintain the data qubits in a low entropy state by periodically moving the entropy to the ancilla qubits and then to the cold reservoir. We quantify the performance of the error correction as a function of the reservoir temperatures and cooling rate by means of the fidelity and the residual entropy of the data qubits. We also make a comparison with the continuous quantum error correction model of Sarovar and Milburn [Phys. Rev. A 72 012306].
Trial application of a technique for human error analysis (ATHEANA)
Bley, D.C. [Buttonwood Consulting, Inc., Oakton, VA (United States)]; Cooper, S.E. [Science Applications International Corp., Reston, VA (United States)]; Parry, G.W. [NUS, Gaithersburg, MD (United States)]; and others
1996-10-01T23:59:59.000Z
The new method for HRA, ATHEANA, has been developed based on a study of the operating history of serious accidents and an understanding of the reasons why people make errors. Previous publications associated with the project have dealt with the theoretical framework under which errors occur and the retrospective analysis of operational events. This is the first attempt to use ATHEANA in a prospective way, to select and evaluate human errors within the PSA context.
Cosmic Ray Spectral Deformation Caused by Energy Determination Errors
Per Carlson; Conny Wannemark
2005-05-10T23:59:59.000Z
Using simulation methods, distortion effects on energy spectra caused by errors in the energy determination have been investigated. For cosmic ray proton spectra, falling steeply with kinetic energy E as E^-2.7, significant effects appear. When magnetic spectrometers are used to determine the energy, the relative error increases linearly with the energy, and distortions with a sinusoidal form appear starting at an energy that depends significantly on the error distribution, but at an energy lower than that corresponding to the Maximum Detectable Rigidity of the spectrometer. The effect should be taken into consideration when comparing data from different experiments, which often have different error distributions.
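A toy Monte Carlo in the spirit of the abstract, with illustrative numbers: sampling an E^-2.7 spectrum and smearing each energy with a Gaussian error whose relative size grows linearly with E (as for a magnetic spectrometer) makes events migrate past the true spectral endpoint.

```python
import random

random.seed(7)
g = 1.7                        # integral spectral index (2.7 - 1)
N = 100000
true_e, meas_e = [], []
for _ in range(N):
    # Inverse-transform sampling of dN/dE ~ E^-2.7 on [1, 100]
    u = random.random()
    e = (1.0 - u * (1.0 - 100.0 ** -g)) ** (-1.0 / g)
    sigma = 0.01 * e * e       # relative error grows linearly with E
    true_e.append(e)
    meas_e.append(e + random.gauss(0.0, sigma))

# Distortion signature: measured energies migrate beyond the true endpoint.
print(max(true_e), max(meas_e))
```

Because the spectrum falls steeply, bin contents are dominated by migration between neighboring energies, which is what distorts the reconstructed spectral shape in the full study.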
Error estimates for the Euler discretization of an optimal control ...
Joseph FrÃ©dÃ©ric Bonnans
2014-12-10T23:59:59.000Z
Abstract: We study the error introduced in the solution of an optimal control problem with first-order state constraints, for which the trajectories …
Identification of toroidal field errors in a modified betatron accelerator
Loschialpo, P. (Beam Physics Branch, Plasma Physics Division, Naval Research Laboratory, Washington, DC 20375 (United States)); Marsh, S.J. (SFA Inc., Landover, Maryland 20785 (United States)); Len, L.K.; Smith, T. (FM Technologies Inc., 10529-B Braddock Road, Fairfax, Virginia 22032 (United States)); Kapetanakos, C.A. (Beam Physics Branch, Plasma Physics Division, Naval Research Laboratory, Washington, DC 20375 (United States))
1993-06-01T23:59:59.000Z
A newly developed probe, having a 0.05% resolution, has been used to detect errors in the toroidal magnetic field of the NRL modified betatron accelerator. Measurements indicate that the radial field components (errors) are 0.1%--1% of the applied toroidal field. Such errors, in the typically 5 kG toroidal field, can excite resonances which drive the beam to the wall. Two sources of detected field errors are discussed. The first is due to the discrete nature of the 12 single-turn coils which generate the toroidal field. Both measurements and computer calculations indicate that its amplitude varies from 0% to 0.2% as a function of radius. Displacement of the outer leg of one of the toroidal field coils by a few millimeters has a significant effect on the amplitude of this field error. Because of the uniform toroidal periodicity of these coils, this error is a good suspect for causing the excitation of the damaging l=12 resonance seen in our experiments. The other source of field error is due to the current feed gaps in the vertical magnetic field coils. A magnetic field is induced inside the vertical field coils' conductor in the opposite direction of the applied toroidal field. Fringe fields at the gaps lead to additional field errors which have been measured as large as 1.0%. This source of field error, which exists at five toroidal locations around the modified betatron, can excite several integer resonances, including the l=12 mode.
On Error Estimates of the Penalty Method for Unsteady Navier ...
… However, the best error estimates available to the author's knowledge … AMS subject classifications.
New Fractional Error Bounds for Polynomial Systems with ...
2014-07-27T23:59:59.000Z
The techniques are largely based on variational analysis and generalized differentiation … Example 3.10 (failure of global error bounds for polynomial systems).
A technique for human error analysis (ATHEANA)
Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.; and others
1996-05-01T23:59:59.000Z
Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed that describes the links between performance shaping factors and resulting unsafe actions.
Kinematic Error Correction for Minimally Invasive Surgical Robots
Two likely sources of kinematic error are port displacement and instrument shaft flexion. To reach the surgical site near the chest wall, the instrument shaft applies significant torque to the port, causing the instrument shaft to bend. These kinematic errors impair positioning of the robot and cause deviations from …
A Geometric Approach to Error Detection and Recovery
Richardson, David
… may not even exist. For this reason we investigate error detection and recovery (EDR) strategies, though theoretical and implementational questions remain. The second contribution is a formal, geometric approach to EDR …
Error Control of Iterative Linear Solvers for Integrated Groundwater Models
Bai, Zhaojun
An open problem when using modern iterative linear solvers, such as the preconditioned conjugate gradient method or the Generalized Minimum RESidual (GMRES) method, is how to choose the residual tolerance for integrated groundwater models, which are implicitly coupled to another model, such as surface water models. The work examines the correspondence between the residual error in the preconditioned linear system and the solution error …
Numerical Construction of Likelihood Distributions and the Propagation of Errors
J. Swain; L. Taylor
1997-12-12T23:59:59.000Z
The standard method for the propagation of errors, based on a Taylor series expansion, is approximate and frequently inadequate for realistic problems. A simple and generic technique is described in which the likelihood is constructed numerically, thereby greatly facilitating the propagation of errors.
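The point can be sketched on the ratio of two measured quantities, where the Taylor (linearized) propagation is strained by a 25% relative error in the denominator; all numbers are illustrative.

```python
import math, random

# f = x / y with Gaussian uncertainties on x and y
x0, sx = 10.0, 1.0
y0, sy = 2.0, 0.5    # 25% relative error: linearization is strained

# Taylor-series (first-order) error propagation
f0 = x0 / y0
sf_taylor = f0 * math.sqrt((sx / x0) ** 2 + (sy / y0) ** 2)

# Numerical propagation: sample the inputs, evaluate f, take the
# empirical 16th-84th percentile half-width as a robust "1 sigma".
random.seed(4)
samples = [random.gauss(x0, sx) / random.gauss(y0, sy) for _ in range(200000)]
samples.sort()
lo = samples[int(0.16 * len(samples))]
hi = samples[int(0.84 * len(samples))]
sf_numeric = (hi - lo) / 2
print(round(sf_taylor, 2), round(sf_numeric, 2))
```

The numerically propagated interval is also asymmetric about the central value, information the single Taylor number cannot carry; constructing the likelihood (or sampling it, as here) retains it.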
Mining API Error-Handling Specifications from Source Code
Xie, Tao
Mithun Acharya and Tao Xie. Manual inspection of source code makes it difficult to mine error-handling specifications. In this paper, a framework is presented that mines such specifications without any user input, adapting a trace generation technique to distinguish …
Calibration and Error in Placental Molecular Clocks: A Conservative Approach
Hadly, Elizabeth
A conservative approach is used for calibrating both mitogenomic and nucleogenomic placental timescales. We applied these reestimates to the most … calibration error may inflate the power of the molecular clock when testing the time of ordinal …
Error detection through consistency checking
Peng Gong; Lan Mu
Silver, Whendee
… accessibility, and timeliness as recorded in the lineage data (Chen and Gong, 1998). Spatial error refers to …
Error-Tolerant Multi-Modal Sensor Fusion
Potkonjak, Miodrag
Farinaz Koushanfar, Sasha Slijepcevic, and Miodrag Potkonjak. Multi-modal sensor fusion combines data from sensors of different modalities. A key requirement in such applications is to ensure that all of the techniques and tools are error-tolerant …
Mutual information, bit error rate and security in Wójcik's scheme
Zhanjun Zhang
2004-02-21T23:59:59.000Z
In this paper, correct calculations of the mutual information of the whole transmission and of the quantum bit error rate (QBER) are presented. Mistakes in the general conclusions concerning the mutual information, the QBER, and the security in Wójcik's paper [Phys. Rev. Lett. 90, 157901 (2003)] are pointed out.
Kernel Regression with Correlated Errors
K. De Brabanter; J. De Brabanter; J.A.K. Suykens
It is a well-known problem that obtaining a correct bandwidth in nonparametric regression is difficult when the errors are correlated … Keywords: nonparametric regression, correlated errors, support vector machines for regression.
Ridge Regression Estimation Approach to Measurement Error Model
Shalabh
When the estimation of the regression parameters is ill-conditioned, we consider Hoerl and Kennard (1970) type ridge regression (RR) modifications of the five quasi-empirical Bayes estimators of the regression parameters of a measurement error model …
Solving LWE problem with bounded errors in polynomial time
International Association for Cryptologic Research (IACR)
Jintai Ding
For what we call the learning with bounded errors (LWBE) problem, we can solve it with complexity O(n^D) … this problem corresponds to the learning parity with noise (LPN) problem; there are several ways to solve …
Error Control of Iterative Linear Solvers for Integrated Groundwater Models
Dixon, Matthew; Brush, Charles; Chung, Francis; Dogrul, Emin; Kadir, Tariq
2010-01-01T23:59:59.000Z
An open problem that arises when using modern iterative linear solvers, such as the preconditioned conjugate gradient (PCG) method or Generalized Minimum RESidual method (GMRES) is how to choose the residual tolerance in the linear solver to be consistent with the tolerance on the solution error. This problem is especially acute for integrated groundwater models which are implicitly coupled to another model, such as surface water models, and resolve both multiple scales of flow and temporal interaction terms, giving rise to linear systems with variable scaling. This article uses the theory of 'forward error bound estimation' to show how rescaling the linear system affects the correspondence between the residual error in the preconditioned linear system and the solution error. Using examples of linear systems from models developed using the USGS GSFLOW package and the California State Department of Water Resources' Integrated Water Flow Model (IWFM), we observe that this error bound guides the choice of a prac...
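A two-variable sketch of the scaling issue, using the standard forward error bound ||dx||/||x|| <= cond(A) * ||r||/||b||: with variables scaled apart by a factor of 1e6, a tiny relative residual coexists with a large relative solution error.

```python
import math

# Badly scaled diagonal system: the two unknowns live on scales that
# differ by a factor of 1e6 (toy stand-in for variable scaling in
# coupled groundwater/surface-water systems).
A = [[1.0, 0.0], [0.0, 1e-6]]
b = [1.0, 1e-6]
x_true = [1.0, 1.0]

x_approx = [1.0, 0.5]   # badly wrong in the small-scale component
r = [b[i] - sum(A[i][j] * x_approx[j] for j in range(2)) for i in range(2)]

res_rel = math.hypot(*r) / math.hypot(*b)
err_rel = (math.hypot(x_true[0] - x_approx[0], x_true[1] - x_approx[1])
           / math.hypot(*x_true))
cond = 1.0 / 1e-6        # 2-norm condition number of this diagonal A
print(res_rel, err_rel, cond * res_rel)   # bound covers the true error
```

The residual looks converged (about 5e-7) while the solution error is about 35%; the forward error bound, cond(A) times the relative residual, is what a scaling-aware stopping criterion must respect.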
Grid-scale Fluctuations and Forecast Error in Wind Power
Bel, G; Toots, M; Bandi, M M
2015-01-01T23:59:59.000Z
The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_{\tau}$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_{\zeta}$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no a priori knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_{\tau}$) and the scaling error ($e_{\zeta}$).
An Efficient Approach towards Mitigating Soft Errors Risks
Sadi, Muhammad Sheikh; Uddin, Md Nazim; Jürjens, Jan
2011-01-01T23:59:59.000Z
Smaller feature size, higher clock frequency, and lower power consumption are core concerns of today's nano-technology, driven by the continuous downscaling of CMOS technologies. The resulting device shrinking reduces the soft-error tolerance of VLSI circuits, as very little energy is needed to change their states. Safety-critical systems are very sensitive to soft errors. A bit flip due to a soft error can change the value of a critical variable, and consequently the system control flow can be completely changed, leading to system failure. To minimize soft-error risks, a novel methodology is proposed to detect and recover from soft errors considering only 'critical code blocks' and 'critical variables' rather than all variables and/or blocks in the whole program. The proposed method reduces space and time overhead in comparison to existing dominant approaches.
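In the same selective spirit, one common software-level mitigation is to protect only the critical variables by redundancy and voting; the sketch below is a generic illustration of that idea, not the paper's method.

```python
class Protected:
    """Keep three copies of a critical value; detect a disagreement on
    read and recover the value by majority vote."""
    def __init__(self, value):
        self.a = self.b = self.c = value

    def write(self, value):
        self.a = self.b = self.c = value

    def read(self):
        if self.a == self.b:
            return self.a
        # disagreement: a soft error hit one copy; majority vote recovers
        votes = [self.a, self.b, self.c]
        return max(set(votes), key=votes.count)

v = Protected(42)
v.a ^= 1 << 3        # simulate a soft error flipping bit 3 of one copy
print(v.read())      # recovered value: 42
```

Applying such protection only to critical variables, as the paper proposes, keeps the duplication overhead far below whole-program replication.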
Grid-scale Fluctuations and Forecast Error in Wind Power
G. Bel; C. P. Connaughton; M. Toots; M. M. Bandi
2015-03-29T23:59:59.000Z
The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_{\tau}$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_{\zeta}$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no a priori knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_{\tau}$) and the scaling error ($e_{\zeta}$).
Measuring worst-case errors in a robot workcell
Simon, R.W.; Brost, R.C.; Kholwadwala, D.K. [Sandia National Labs., Albuquerque, NM (United States). Intelligent Systems and Robotics Center
1997-10-01T23:59:59.000Z
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
McInerney, Peter; Adams, Paul; Hadi, Masood Z.
2014-01-01T23:59:59.000Z
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
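The headline comparison reduces to a simple per-base, per-doubling rate. The formula below is the generic fidelity-study definition, and the mutation counts are hypothetical numbers chosen only to mirror the >10x Taq-vs-Pfu gap reported above.

```python
def pcr_error_rate(mutations, bases_sequenced, template_doublings):
    """Generic PCR fidelity metric: errors per base per template doubling.
    This is the textbook definition, not the paper's exact pipeline."""
    return mutations / (bases_sequenced * template_doublings)

# hypothetical counts, for illustration only
taq_rate = pcr_error_rate(mutations=120, bases_sequenced=1_000_000, template_doublings=20)
pfu_rate = pcr_error_rate(mutations=10, bases_sequenced=1_000_000, template_doublings=20)
```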
Logical Error Rate Scaling of the Toric Code
Fern H. E. Watson; Sean D. Barrett
2014-09-26T23:59:59.000Z
To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behavior in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead -- the total number of physical qubits required to perform error correction.
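The small-p counting regime implies an overhead estimate of roughly the following form. The prefactor of 1, the threshold value 0.1, and the exponent d/2 used here are illustrative assumptions, not the paper's fitted values.

```python
def toric_overhead(p, p_th=0.1, target=1e-12):
    """Smallest odd code distance d with (p/p_th)**(d/2) below the target
    logical error rate, and the corresponding physical qubit count 2*d*d
    for the toric code.  Prefactor and p_th are illustrative."""
    d = 1
    while (p / p_th) ** (d / 2) > target:
        d += 2  # toric code distances are taken odd here
    return d, 2 * d * d

distance, qubits = toric_overhead(p=0.01)
```

Lower physical error rates shrink the required distance, and hence the qubit overhead, quadratically in d.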
Balancing aggregation and smoothing errors in inverse models
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Turner, A. J.; Jacob, D. J.
2015-01-13T23:59:59.000Z
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
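The dimension-selection step amounts to minimizing the sum of two opposing error curves. The monotone toy curves below stand in for the actual smoothing and aggregation errors, which the paper computes from the inverse-model operators.

```python
import numpy as np

def select_dimension(smoothing_err, aggregation_err):
    """Index of the state-vector dimension minimizing combined error."""
    total = np.asarray(smoothing_err) + np.asarray(aggregation_err)
    return int(np.argmin(total))

dims = np.arange(1, 101)
smoothing = 0.01 * dims    # toy curve: grows with state-vector size
aggregation = 5.0 / dims   # toy curve: shrinks as resolution increases
best_dim = int(dims[select_dimension(smoothing, aggregation)])
```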
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01T23:59:59.000Z
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work derived the reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location, so the accuracy of the slope error measurement depended on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
Wind Power Forecasting Error Distributions: An International Comparison; Preprint
Hodge, B. M.; Lew, D.; Milligan, M.; Holttinen, H.; Sillanpaa, S.; Gomez-Lazaro, E.; Scharff, R.; Soder, L.; Larsen, X. G.; Giebel, G.; Flynn, D.; Dobschinski, J.
2012-09-01T23:59:59.000Z
Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.
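A typical way to quantify how such error distributions deviate from normal is excess kurtosis. The function below is a standard sample statistic, not code from the paper.

```python
import numpy as np

def excess_kurtosis(errors):
    """Excess kurtosis of a forecast-error sample: 0 for a normal
    distribution, positive for the heavy tails often seen in wind
    power forecast errors."""
    e = np.asarray(errors, dtype=float)
    z = (e - e.mean()) / e.std()
    return float(np.mean(z ** 4) - 3.0)
```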
Higher-derivative Schwinger model
Amaral, R.L.P.G.; Belvedere, L.V.; Lemos, N.A. (Instituto de Fisica, Universidade Federal Fluminense, Outeiro de Sao Joao Batista s/n, 24020 Centro, Niteroi, Rio de Janeiro (Brazil)); Natividade, C.P. (Departamento de Matematica, Universidade Estadual Paulista, Campus de Guaratingueta, 12500 Sao Paulo, Sao Paulo (Brazil))
1993-04-15T23:59:59.000Z
Using the operator formalism, we obtain the bosonic representation for the free fermion field satisfying an equation of motion with higher-order derivatives. Then, we consider the operator solution of a generalized Schwinger model with higher-derivative coupling. Since increasing the derivative order implies introducing an equivalent number of extra fermionic degrees of freedom, the mass acquired by the gauge field is larger than that of the standard two-dimensional QED. An analysis of the problem from the functional integration point of view corroborates the findings of canonical quantization, and corrects certain results previously announced in the literature on the basis of Fujikawa's technique.
Complex higher order derivative theories
Margalli, Carlos A.; Vergara, J. David [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, Apartado Postal 70-543, Mexico 04510 DF (Mexico)
2012-08-24T23:59:59.000Z
In this work, a complex scalar field theory with higher-order derivative terms and interactions is considered. A procedure is developed to consistently quantize this system while avoiding the presence of negative-norm states. To achieve this goal, the original real scalar higher-order field theory is extended to a complex space by attaching a complex total derivative to the theory. Next, by imposing reality conditions, the complex theory is mapped to a pair of interacting real scalar field theories free of higher-derivative terms.
Servo control booster system for minimizing following error
Wise, William L. (Mountain View, CA)
1985-01-01T23:59:59.000Z
A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
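The by-exception logic of the second loop can be sketched as follows. This is a schematic of the control decision only, with a hypothetical correction equal to the raw error, not the patented circuit.

```python
def servo_correction(command, response, delta_sr):
    """Engage the second position feedback loop only when the
    command-to-response error reaches the resolution least increment
    delta_sr; otherwise the conventional loop runs (correction 0).
    Schematic logic only."""
    error = command - response
    if abs(error) >= delta_sr:
        return error  # precise position correction signal
    return 0.0
```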
Sensitivity of OFDM Systems to Synchronization Errors and Spatial Diversity
Zhou, Yi
2012-02-14T23:59:59.000Z
jitter cause inter-carrier interference. The overall system performance in terms of symbol error rate is limited by the inter-carrier interference. For a reliable information reception, compensatory measures must be taken. The second part...
Universally Valid Error-Disturbance Relations in Continuous Measurements
Atsushi Nishizawa; Yanbei Chen
2015-05-31T23:59:59.000Z
In quantum physics, measurement error and disturbance were first naively thought to be simply constrained by the Heisenberg uncertainty relation. Later, more rigorous analysis showed that the error and disturbance satisfy more subtle inequalities. Several versions of universally valid error-disturbance relations (EDR) have already been obtained and experimentally verified in the regimes where naive applications of the Heisenberg uncertainty relation failed. However, these EDRs were formulated for discrete measurements. In this paper, we consider continuous measurement processes and obtain new EDR inequalities in Fourier space, in terms of the power spectra of the system and probe variables. By applying our EDRs to a linear optomechanical system, we confirm that a tradeoff relation between error and disturbance leads to the existence of an optimal strength of the disturbance in a joint measurement. Interestingly, even in this optimal case, the inequality of the new EDR is not saturated because two standard quantum limits appear in the inequality.
Robust mixtures in the presence of measurement errors
Jianyong Sun; Ata Kaban; Somak Raychaudhury
2007-09-06T23:59:59.000Z
We develop a mixture-based approach to robust density modeling and outlier detection for experimental multivariate data that includes measurement error information. Our model is designed to infer atypical measurements that are not due to errors, aiming to retrieve potentially interesting peculiar objects. Since exact inference is not possible in this model, we develop a tree-structured variational EM solution. This compares favorably against a fully factorial approximation scheme, approaching the accuracy of a Markov-Chain-EM, while maintaining computational simplicity. We demonstrate the benefits of including measurement errors in the model, in terms of improved outlier detection rates in varying measurement uncertainty conditions. We then use this approach in detecting peculiar quasars from an astrophysical survey, given photometric measurements with errors.
Predicting Intentional Tax Error Using Open Source Literature and Data
for each PUMS respondent (or agent), in certain line item/taxpayer categories, allowing us to construct distributions… (Table of contents residue: Likelihood; Results of Meta-Analysis; Intentional Error in Line Items/Taxpayer Categories.)
Diagnosing multiplicative error by lensing magnification of type Ia supernovae
Zhang, Pengjie
2015-01-01T23:59:59.000Z
Weak lensing causes spatially coherent fluctuations in flux of type Ia supernovae (SNe Ia). This lensing magnification allows for weak lensing measurement independent of cosmic shear. It is free of shape measurement errors associated with cosmic shear and can therefore be used to diagnose and calibrate multiplicative error. Although this lensing magnification is difficult to measure accurately in auto correlation, its cross correlation with cosmic shear and galaxy distribution in overlapping areas can be measured to significantly higher accuracy. Therefore these cross correlations can put a useful constraint on multiplicative error, and the obtained constraint is free of cosmic variance in the weak lensing field. We present two methods implementing this idea and estimate their performances. We find that, with $\sim 1$ million SNe Ia that can be achieved by the proposed D2k survey with the LSST telescope (Zhan et al. 2008), multiplicative error of $\sim 0.5\%$ for source galaxies at $z_s \sim 1$ can be detected and la...
Inflated applicants: Attribution errors in performance evaluation by professionals
Swift, Samuel; Moore, Don; Sharek, Zachariah; Gino, Francesca
2013-01-01T23:59:59.000Z
performance among applicants from each "type" of school and interview performance. Each school provided multi-year… (PLOS ONE, July 2013, Volume 8, Issue 7, e69258.)
Removing Systematic Errors from Rotating Shadowband Pyranometer Data
Vignola, Frank (University of Oregon)
…of the pyranometer to briefly shade the pyranometer once a minute. Direct horizontal irradiance is calculated… used in programs evaluating the performance of photovoltaic systems, and systematic errors in the data…
Honest Confidence Intervals for the Error Variance in Stepwise Regression
Stine, Robert A.
By Dean P. Foster and Robert A. Stine. …alternatives are used. These simpler algorithms (e.g., forward or backward stepwise regression) obtain…
Wind Power Forecasting Error Distributions over Multiple Timescales: Preprint
Hodge, B. M.; Milligan, M.
2011-03-01T23:59:59.000Z
In this paper, we examine the shape of the persistence model error distribution for ten different wind plants in the ERCOT system over multiple timescales. Comparisons are made between the experimental distribution shape and that of the normal distribution.
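The persistence baseline named above is simple to reproduce: predict that power h steps ahead equals power now, and histogram the resulting errors.

```python
import numpy as np

def persistence_errors(power, horizon):
    """Errors of the persistence forecast at the given horizon:
    actual power minus the value persisted from `horizon` steps earlier."""
    p = np.asarray(power, dtype=float)
    return p[horizon:] - p[:-horizon]
```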
A Taxonomy to Enable Error Recovery and Correction in Software
Sridharan, Vilas; Kaeli, David R.
…years, reliability research has largely used the following taxonomy of errors: Undetected Errors… Corrected Errors (CE). While this taxonomy is suitable to characterize hardware error detection and correction…
TESLA-FEL 2009-07 Errors in Reconstruction of Difference Orbit
Contents: 1. Introduction; 2. Standard Least Squares Solution; 3. Error Emittance and Error Twiss Parameters. …as the position of the reconstruction point changes, we will introduce error Twiss parameters and invariant error… in the point of interest has to be achieved by matching error Twiss parameters in this point to the desired…
Suboptimal quantum-error-correcting procedure based on semidefinite programming
Naoki Yamamoto; Shinji Hara; Koji Tsumura
2006-06-13T23:59:59.000Z
In this paper, we consider a simplified error-correcting problem: for a fixed encoding process, to find a cascade connected quantum channel such that the worst fidelity between the input and the output becomes maximum. With the use of the one-to-one parametrization of quantum channels, a procedure finding a suboptimal error-correcting channel based on a semidefinite programming is proposed. The effectiveness of our method is verified by an example of the bit-flip channel decoding.
Mesoscale predictability and background error covariance estimation through ensemble forecasting
Ham, Joy L
2002-01-01T23:59:59.000Z
A thesis by Joy L. Ham, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, December 2002. Major subject: Atmospheric Sciences.
Using doppler radar images to estimate aircraft navigational heading error
Doerry, Armin W. (Albuquerque, NM); Jordan, Jay D. (Albuquerque, NM); Kim, Theodore J. (Albuquerque, NM)
2012-07-03T23:59:59.000Z
A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.
Fault-Tolerant Thresholds for Encoded Ancillae with Homogeneous Errors
Bryan Eastin
2006-11-14T23:59:59.000Z
I describe a procedure for calculating thresholds for quantum computation as a function of error model given the availability of ancillae prepared in logical states with independent, identically distributed errors. The thresholds are determined via a simple counting argument performed on a single qubit of an infinitely large CSS code. I give concrete examples of thresholds thus achievable for both Steane and Knill style fault-tolerant implementations and investigate their relation to threshold estimates in the literature.
Coding Techniques for Error Correction and Rewriting in Flash Memories
Mohammed, Shoeb Ahmed
2010-10-12T23:59:59.000Z
A thesis by Shoeb Ahmed Mohammed, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 2010. Major subject: Electrical Engineering.
Compiler-Assisted Detection of Transient Memory Errors
Tavarageri, Sanket; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2014-06-09T23:59:59.000Z
The probability of bit flips in hardware memory systems is projected to increase significantly as memory systems continue to scale in size and complexity. Effective hardware-based error detection and correction requires that the complete data path, involving all parts of the memory system, be protected with sufficient redundancy. First, this may be costly to employ on commodity computing platforms and second, even on high-end systems, protection against multi-bit errors may be lacking. Therefore, augmenting hardware error detection schemes with software techniques is of considerable interest. In this paper, we consider software-level mechanisms to comprehensively detect transient memory faults. We develop novel compile-time algorithms to instrument application programs with checksum computation codes so as to detect memory errors. Unlike prior approaches that employ checksums on computational and architectural state, our scheme verifies every data access and works by tracking variables as they are produced and consumed. Experimental evaluation demonstrates that the proposed comprehensive error detection solution is viable as a completely software-only scheme. We also demonstrate that with limited hardware support, overheads of error detection can be further reduced.
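The variable-tracking idea can be illustrated with a checksum verified on every read. This toy class stands in for the compiler-inserted checks, and Python's `hash` stands in for the paper's checksum computation.

```python
class ChecksummedVar:
    """Detect transient memory corruption in software by storing a
    checksum alongside a value and verifying it on every read.
    A toy stand-in for the compiler-instrumented scheme described above."""

    def __init__(self, value):
        self.value = value
        self.checksum = hash(value)

    def read(self):
        if hash(self.value) != self.checksum:
            raise RuntimeError("transient memory error detected")
        return self.value

    def write(self, value):
        self.value = value
        self.checksum = hash(value)
```

Writes through `write` keep value and checksum consistent; a stray mutation of `value` (simulating a bit flip) is caught on the next read.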
EFFECT OF MANUFACTURING ERRORS ON FIELD QUALITY OF DIPOLE MAGNETS FOR THE SSC
Meuser, R.B.
2010-01-01T23:59:59.000Z
…in Fig. 2. Table 2: Manufacturing Error Mode Groups. (Mag Note-27, 1985.)
A new and efficient error resilient entropy code for image and video compression
Min, Jungki
1999-01-01T23:59:59.000Z
Image and video compression standards such as JPEG, MPEG, and H.263 are severely sensitive to errors. Among the typical error propagation mechanisms in video compression schemes, loss of block synchronization causes the worst damage. Even one bit error...
Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors
Sarkar, Abhra
2014-06-24T23:59:59.000Z
Although the literature on measurement error problems is quite extensive, solutions to even the most fundamental measurement error problems like density deconvolution and regression with errors-in-covariates are available ...
V-228: RealPlayer Buffer Overflow and Memory Corruption Error...
Broader source: Energy.gov (indexed) [DOE]
a memory corruption error and execute arbitrary code on the target system. IMPACT: access control error. SOLUTION: The vendor recommends upgrading to version 16.0.3.51.
Sigman, Michael E.; Dindal, Amy B.
2003-11-11T23:59:59.000Z
Described is a method for producing copolymerized sol-gel derived sorbent particles for the production of copolymerized sol-gel derived sorbent material. The method comprises adding a basic solution to an aqueous metal alkoxide mixture to a pH ≤ 8 to hydrolyze the metal alkoxides. The mixture is then allowed to react at room temperature for a precalculated period of time, during which it undergoes an increase in viscosity, to obtain a desired pore size and surface area. The copolymerized mixture is then added to an immiscible, nonpolar solvent that has been heated to a sufficient temperature, whereupon the copolymerized mixture forms a solid. The solid is recovered from the mixture and is ready for use in an active sampling trap, or may be activated for use in a passive sampling trap.
Magnetic cellulose-derivative structures
Walsh, Myles A. (Falmouth, MA); Morris, Robert S. (Fairhaven, MA)
1986-09-16T23:59:59.000Z
Structures to serve as selective magnetic sorbents are formed by dissolving a cellulose derivative such as cellulose triacetate in a solvent containing magnetic particles. The resulting solution is sprayed as a fine mist into a chamber containing a liquid coagulant such as n-hexane in which the cellulose derivative is insoluble but in which the coagulant is soluble or miscible. On contact with the coagulant, the mist forms free-flowing porous magnetic microspheric structures. These structures act as containers for the ion-selective or organic-selective sorption agent of choice. Some sorption agents can be incorporated during the manufacture of the structure.
In-Line-Test of Variability and Bit-Error-Rate of HfOx-Based Resistive Memory
Ji, B L; Ye, Q; Gausepohl, S; Deora, S; Veksler, D; Vivekanand, S; Chong, H; Stamper, H; Burroughs, T; Johnson, C; Smalley, M; Bennett, S; Kaushik, V; Piccirillo, J; Rodgers, M; Passaro, M; Liehr, M
2015-01-01T23:59:59.000Z
Spatial and temporal variability of HfOx-based resistive random access memory (RRAM) are investigated for manufacturing and product designs. Manufacturing variability is characterized at different levels, including lots, wafers, and chips. Bit-error-rate (BER) is proposed as a holistic parameter for the write cycle resistance statistics. Using the electrical in-line-test cycle data, a method is developed to derive BERs as functions of the design margin, to provide guidance for technology evaluation and product design. The proposed BER calculation can also be used in the off-line bench test and built-in self-test (BIST) for adaptive error correction and for other types of random access memories.
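As an illustration of deriving a BER from a design margin, assume (purely for this sketch) Gaussian cycle-to-cycle resistance statistics for the two states; the BER for a read threshold between them is then a sum of two tail probabilities.

```python
import math

def write_ber(mu_lrs, sigma_lrs, mu_hrs, sigma_hrs, threshold):
    """Bit-error-rate for a read threshold placed between the low- and
    high-resistance state distributions, assuming Gaussian statistics
    (an illustrative assumption): the LRS tail above the threshold plus
    the HRS tail below it, equally weighted."""
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))  # Gaussian tail probability
    p_lrs_misread = q((threshold - mu_lrs) / sigma_lrs)
    p_hrs_misread = q((mu_hrs - threshold) / sigma_hrs)
    return 0.5 * (p_lrs_misread + p_hrs_misread)
```

Widening the margin between the state means, or centering the threshold, drives the BER down, which is the tradeoff the in-line-test data is used to quantify.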
SU-E-T-51: Bayesian Network Models for Radiotherapy Error Detection
Kalet, A; Phillips, M; Gennari, J [University of Washington, Seattle, WA (United States)]
2014-06-01T23:59:59.000Z
Purpose: To develop a probabilistic model of radiotherapy plans using Bayesian networks that will detect potential errors in radiation delivery. Methods: Semi-structured interviews with medical physicists and other domain experts were employed to generate a set of layered nodes and arcs forming a Bayesian network (BN) which encapsulates relevant radiotherapy concepts and their associated interdependencies. Concepts in the final network were limited to those whose parameters are represented in the institutional database at a level significant enough to develop mathematical distributions. The concept-relation knowledge base was constructed using the Web Ontology Language (OWL) and translated into Hugin Expert Bayes network files via the RHugin package in the R statistical programming language. A subset of de-identified data derived from a Mosaiq relational database representing 1937 unique prescription cases was processed, pre-screened for errors, and then used by the Hugin implementation of the Expectation-Maximization (EM) algorithm to learn all parameter distributions. Individual networks were generated for each of several commonly treated anatomic regions identified by ICD-9 neoplasm categories, including lung, brain, lymphoma, and female breast. Results: The resulting Bayesian networks represent a large part of the probabilistic knowledge inherent in treatment planning. By populating the networks entirely with data captured from a clinical oncology information management system over the course of several years of normal practice, we were able to create accurate probability tables with no additional time spent by experts or clinicians. These probabilistic descriptions of treatment planning allow one to check whether a treatment plan is within the normal scope of practice, given some initial set of clinical evidence, and thereby detect potential outliers to be flagged for further investigation.
Conclusion: The networks developed here support the use of probabilistic models into clinical chart checking for improved detection of potential errors in RT plans.
Reducing Collective Quantum State Rotation Errors with Reversible Dephasing
Kevin C. Cox; Matthew A. Norcia; Joshua M. Weiner; Justin G. Bohnet; James K. Thompson
2014-07-16T23:59:59.000Z
We demonstrate that reversible dephasing via inhomogeneous broadening can greatly reduce collective quantum state rotation errors, and observe the suppression of rotation errors by more than 21 dB in the context of collective population measurements of the spin states of an ensemble of $2.1 \times 10^5$ laser cooled and trapped $^{87}$Rb atoms. The large reduction in rotation noise enables direct resolution of spin state populations 13(1) dB below the fundamental quantum projection noise limit. Further, the spin state measurement projects the system into an entangled state with 9.5(5) dB of directly observed spectroscopic enhancement (squeezing) relative to the standard quantum limit, whereas no enhancement would have been obtained without the suppression of rotation errors.
Representing cognitive activities and errors in HRA trees
Gertman, D.I.
1992-01-01T23:59:59.000Z
A graphic representation method is presented herein for adapting an existing technology--human reliability analysis (HRA) event trees, used to support event sequence logic structures and calculations--to include a representation of the underlying cognitive activity and corresponding errors associated with human performance. The analyst is presented with three potential means of representing human activity: the NUREG/CR-1278 HRA event tree approach; the skill-, rule- and knowledge-based paradigm; and the slips, lapses, and mistakes paradigm. The above approaches for representing human activity are integrated in order to produce an enriched HRA event tree -- the cognitive event tree system (COGENT)-- which, in turn, can be used to increase the analyst's understanding of the basic behavioral mechanisms underlying human error and the representation of that error in probabilistic risk assessment. Issues pertaining to the implementation of COGENT are also discussed.
Meta learning of bounds on the Bayes classifier error
Moon, Kevin R; Hero, Alfred O
2015-01-01T23:59:59.000Z
Meta learning uses information from base learners (e.g. classifiers or estimators) as well as information about the learning problem to improve upon the performance of a single base learner. For example, the Bayes error rate of a given feature space, if known, can be used to aid in choosing a classifier, as well as in feature selection and model selection for the base classifiers and the meta classifier. Recent work in the field of f-divergence functional estimation has led to the development of simple and rapidly converging estimators that can be used to estimate various bounds on the Bayes error. We estimate multiple bounds on the Bayes error using an estimator that applies meta learning to slowly converging plug-in estimators to obtain the parametric convergence rate. We compare the estimated bounds empirically on simulated data and then estimate the tighter bounds on features extracted from an image patch analysis of sunspot continuum and magnetogram images.
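For intuition about such bounds, one member of this family has a closed form in a textbook special case; the sketch below (Python, assuming two equal-variance 1-D Gaussians with equal priors, unlike the sample-based estimators the paper studies) compares the Bhattacharyya bounds with the exact Bayes error.

```python
import math

def bhattacharyya_bounds(mu0, mu1, sigma):
    """Bhattacharyya-coefficient bounds on the Bayes error for two
    equal-variance 1-D Gaussians with equal priors:
        BC = exp(-(mu1 - mu0)^2 / (8 sigma^2))
        (1 - sqrt(1 - BC^2)) / 2  <=  Bayes error  <=  BC / 2."""
    bc = math.exp(-((mu1 - mu0) ** 2) / (8.0 * sigma ** 2))
    return 0.5 * (1.0 - math.sqrt(1.0 - bc * bc)), 0.5 * bc

def bayes_error(mu0, mu1, sigma):
    """Exact Bayes error for the same pair: Phi(-|mu1 - mu0| / (2 sigma))."""
    z = abs(mu1 - mu0) / (2.0 * sigma)
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))
```

For means 0 and 2 with unit variance the exact Bayes error is about 0.159, which indeed falls between the two bounds.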
Characterization of quantum dynamics using quantum error correction
S. Omkar; R. Srikanth; S. Banerjee
2015-01-27T23:59:59.000Z
Characterizing noisy quantum processes is important to quantum computation and communication (QCC), since quantum systems are generally open. To date, all methods of characterization of quantum dynamics (CQD), typically implemented by quantum process tomography, are \textit{off-line}, i.e., QCC and CQD are not concurrent, as they require distinct state preparations. Here we introduce a method, "quantum error correction based characterization of dynamics", in which the initial state is any element from the code space of a quantum error correcting code that can protect the state from arbitrary errors acting on the subsystem subjected to the unknown dynamics. The statistics of stabilizer measurements, with possible unitary pre-processing operations, are used to characterize the noise, while the observed syndrome can be used to correct the noisy state. Our method requires at most $2(4^n-1)$ configurations to characterize arbitrary noise acting on $n$ qubits.
Non-Gaussian numerical errors versus mass hierarchy
Y. Meurice; M. B. Oktay
2000-05-12T23:59:59.000Z
We probe the numerical errors made in renormalization group calculations by varying slightly the rescaling factor of the fields and rescaling back in order to get the same (if there were no round-off errors) zero momentum 2-point function (magnetic susceptibility). The actual calculations were performed with Dyson's hierarchical model and a simplified version of it. We compare the distributions of numerical values obtained from a large sample of rescaling factors with the (Gaussian by design) distribution of a random number generator and find significant departures from the Gaussian behavior. In addition, the average value differs (robustly) from the exact answer by a quantity which is of the same order as the standard deviation. We provide a simple model in which the errors made at shorter distance have a larger weight than those made at larger distance. This model explains in part the non-Gaussian features and why the central-limit theorem does not apply.
Factorization of correspondence and camera error for unconstrained dense correspondence applications
Knoblauch, D; Hess-Flores, M; Duchaineau, M; Kuester, F
2009-09-29T23:59:59.000Z
A correspondence and camera error analysis for dense correspondence applications such as structure from motion is introduced. This provides error introspection, opening up the possibility of adaptively and progressively applying more expensive correspondence and camera parameter estimation methods to reduce these errors. The presented algorithm evaluates the given correspondences and camera parameters based on an error generated through simple triangulation. This triangulation is based on the given dense, non-epipolar-constrained correspondences and estimated camera parameters. This provides an error map without requiring any information about the perfect solution or making assumptions about the scene. The resulting error is a combination of correspondence and camera parameter errors. A simple, fast low/high-pass filter error factorization is introduced, allowing for the separation of correspondence error and camera error. Further analysis of the resulting error maps is applied to allow efficient iterative improvement of correspondences and cameras.
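A minimal sketch of that low/high-pass idea follows (Python; the box filter and radius are illustrative assumptions, not the paper's specific filter): smoothing the triangulation error map approximates the slowly varying camera-parameter error, and the residual approximates the per-pixel correspondence error.

```python
def box_blur(error_map, radius):
    """Low-pass: box average with clamped borders over a 2-D error map
    (list of lists). The box filter is an illustrative stand-in for the
    paper's low/high-pass factorization."""
    h, w = len(error_map), len(error_map[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += error_map[yy][xx]
                        count += 1
            out[y][x] = acc / count
    return out

def factorize_error(error_map, radius=1):
    """Split a triangulation error map into a smooth component (attributed
    to camera-parameter error) and a high-frequency residual (attributed
    to correspondence error)."""
    camera = box_blur(error_map, radius)
    correspondence = [[e - c for e, c in zip(row_e, row_c)]
                      for row_e, row_c in zip(error_map, camera)]
    return camera, correspondence
```

A flat error map is attributed entirely to the smooth (camera) term, while an isolated spike lands almost entirely in the correspondence residual.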
Peak, Derek
Are you getting an error message in UniFi Plus? (Suggestion: check the auto-hint line!) In most cases, UniFi Plus does not prominently display error messages; instead, the error and processing messages appear in the auto-hint line. Keyboard shortcuts: instructions for accessing other blocks, windows or forms from ...
Comment on "Optimum Quantum Error Recovery using Semidefinite Programming"
M. Reimpell; R. F. Werner; K. Audenaert
2006-06-07T23:59:59.000Z
In a recent paper ([1]=quant-ph/0606035) it is shown how the optimal recovery operation in an error correction scheme can be considered as a semidefinite program. As a possible future improvement it is noted that still better error correction might be obtained by optimizing the encoding as well. In this note we present the result of such an improvement, specifically for the four-bit correction of an amplitude damping channel considered in [1]. We get a strict improvement for almost all values of the damping parameter. The method (and the computer code) is taken from our earlier study of such correction schemes (quant-ph/0307138).
Error estimates and specification parameters for functional renormalization
Schnoerr, David; Boettcher, Igor (E-mail: I.Boettcher@thphys.uni-heidelberg.de); Pawlowski, Jan M.; Wetterich, Christof (Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg, Germany; Pawlowski also: ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung mbH, D-64291 Darmstadt, Germany)
2013-07-15T23:59:59.000Z
We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximated solutions by means of truncations do not only depend on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency independent cutoff function.
Correctable noise of Quantum Error Correcting Codes under adaptive concatenation
Jesse Fern
2008-02-27T23:59:59.000Z
We examine the transformation of noise under a quantum error correcting code (QECC) concatenated repeatedly with itself, by analyzing the effects of a quantum channel after each level of concatenation using recovery operators that are optimally adapted to use error syndrome information from the previous levels of the code. We use the Shannon entropy of these channels to estimate the thresholds of correctable noise for QECCs and find considerable improvements under this adaptive concatenation. Similar methods could be used to increase quantum fault tolerant thresholds.
Error-prevention scheme with two pairs of qubits
Chu, Shih-I; Yang, Chui-Ping; Han, Siyuan
2002-09-04T23:59:59.000Z
E_ij = |e_i⟩⟨e_j|, e_i ∈ {0,1} [6]. The expressions for H_S and H_SB are as follows: H_S = ε₀(σ_z^I + σ_z^II), ... The scheme uses two pairs of qubits: collective phase errors are prevented by encoding through a decoherence-free subspace for the pairs, and leakage out of the encoding space due to amplitude damping is handled by an error-prevention procedure. In addition, how to construct decoherence-free states for n pairs is discussed. DOI: 10.1103/Phys...
Laser Phase Errors in Seeded Free Electron Lasers
Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC
2012-04-17T23:59:59.000Z
Harmonic seeding of free electron lasers has attracted significant attention as a method for producing transform-limited pulses in the soft x-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality and impede production of transform-limited pulses. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.
Quaternion Derivatives: The GHR Calculus
Dongpo Xu; Cyrus Jahanchahi; Clive C. Took; Danilo P. Mandic
2014-09-25T23:59:59.000Z
Quaternion derivatives in the mathematical literature are typically defined only for analytic (regular) functions. However, in engineering problems, functions of interest are often real-valued and thus not analytic, such as the standard cost function. The HR calculus is a convenient way to calculate formal derivatives of both analytic and non-analytic functions of quaternion variables; however, both the HR and other functional calculi in quaternion analysis have encountered an essential technical obstacle: the traditional product rule is invalid due to the non-commutativity of the quaternion algebra. To address this issue, a generalized form of the HR derivative is proposed based on a general orthogonal system. The generalization so introduced, called the generalized HR (GHR) calculus, encompasses not just the left- and right-hand versions of the quaternion derivative, but also enables solutions to some long-standing problems, such as the novel product rule, the chain rule, the mean value theorem and Taylor's theorem. At the core of the proposed approach is the quaternion rotation, which can naturally be applied to other functional calculi in non-commutative settings. Examples of using the GHR calculus in adaptive signal processing support the analysis.
DERIVATION OF STOCHASTIC ACCELERATION MODEL CHARACTERISTICS FOR SOLAR FLARES FROM RHESSI HARD X-RAY OBSERVATIONS
Office of Scientific and Technical Information (OSTI)
Soft Error Modeling and Protection for Sequential Elements
Hossein Asadi; Mehdi B. Tahoori
Soft errors become the main reliability concern during the lifetime operation of digital systems. The number of clock cycles required for an error in a bistable to be propagated to system outputs is used to measure the vulnerability of bistables to soft errors and their impact on the system-level soft error rate.
Low-Cost Hardening of Image Processing Applications Against Soft Errors
Polian, Ilia
Hardening image processing applications against soft errors becomes an issue. We propose a methodology to identify soft errors as uncritical based on their impact on the system's functionality. We call a soft error uncritical if its impact is imperceivable to the human user of the system. We focus on soft errors in the motion estimation subsystem.
Distinguishing congestion and error losses: an ECN/ELN based scheme
Kamakshisundaram, Raguram
2001-01-01T23:59:59.000Z
On links with high error rates, like wireless links, packets are lost more often due to error than due to congestion. But TCP does not differentiate between error and congestion losses, and hence reduces the sending rate for error losses as well, which unnecessarily reduces...
Error Exponent for Discrete Memoryless Multiple-Access Channels
Anastasopoulos, Achilleas
A dissertation by Ali Nazari, 2011.
Optimal Estimation from Relative Measurements: Error Scaling (Extended Abstract)
Hespanha, João Pedro
"relative" measurement between xu and xv is available: uv = xu - xv + u,v Rk , (u, v) E V × V, (1) whereOptimal Estimation from Relative Measurements: Error Scaling (Extended Abstract) Prabir Barooah Jo~ao P. Hespanha I. ESTIMATION FROM RELATIVE MEASUREMENTS We consider the problem of estimating a number
Automatic Error Elimination by Horizontal Code Transfer across Multiple Applications
Polz, Martin
We present Code Phage (CP), a system for automatically transferring code between applications (Stelios ..., CSAIL, Cambridge, MA, USA). To the best of our knowledge, CP is the first system to automatically transfer code across multiple applications.
Error Bounds from Extra Precise Iterative Refinement James Demmel
Li, Xiaoye Sherry
Several obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] ...
Error Control for Quincunx Multiresolution à la Harten
Amat, Sergio
Harten's nonlinear discrete multiresolution ... In multiresolution algorithms one transforms a ... obtaining an approximation f̂_L, which should be close to f̄_L. Therefore, the algorithms must not be unstable. In this study, we introduce error-control and stability algorithms. ...
Urban Water Demand with Periodic Error Correction David R. Bell
Griffin, Ronald
Econometric estimates of residential demand for water abound (Dalhuisen et al. 2003). Monthly demand for publicly supplied water ... (David R. Bell and Ronald C. Griffin, Department of Agricultural Economics, Texas A&M University.)
Error Control Based Model Reduction for Parameter Optimization of Elliptic Homogenization Problems
Motivated by technical devices that rely on multiscale processes, such as fuel cells or batteries, we consider optimization of elliptic multiscale problems with macroscopic optimization functionals and microscopic material ... As the solution ...
ADJOINT AND DEFECT ERROR BOUNDING AND CORRECTION FOR FUNCTIONAL ESTIMATES
Pierce, Niles A.
Niles A. Pierce and Michael B. Giles, Applied & Computational Mathematics, California Institute of Technology. The approach is extended to handle flows with shocks; numerical experiments confirm 4th-order error estimates for a pressure integral of shocked quasi-1D Euler flow. Numerical results also demonstrate 4th-order accuracy for the drag ...
RESIDUAL TYPE A POSTERIORI ERROR ESTIMATES FOR ELLIPTIC OBSTACLE PROBLEMS
Nochetto, Ricardo H.
Extensions to double obstacle problems are briefly discussed. Key words: a posteriori error estimates, residual type. Here the obstacle ψ satisfies ψ ≤ 0 on ∂Ω, and K is the convex set of admissible displacements K := {v ∈ H¹₀(Ω) : v ≥ ψ}.
Energy efficiency of error correction for wireless communication
Havinga, Paul J.M.
Error-control is an important issue for mobile computing systems. This includes both the energy spent in the physical radio transmission and the energy of redundancy computation; we will show that the computational cost ... (... and Networking Conference 1999 [7].)
Selected CRC Polynomials Can Correct Errors and Thus Reduce Retransmission
Mache, Jens
In wireless sensor networks, minimizing communication is crucial to improve energy consumption and thus lifetime. Error detection using Cyclic Redundancy Check (CRC) codes is common; selected CRC polynomials can also correct errors, which, instead of retransmitting the whole packet, improves energy consumption and thus lifetime of wireless sensor networks. Keywords: Error Correction, Reliability, Network Protocol, Low Power Consumption.
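As a hedged illustration of the idea (a CRC-8 with a hypothetical polynomial, not the polynomials selected in the paper): a CRC with zero initial value and no final XOR is linear over GF(2), so the syndrome of a single-bit error depends only on the flipped position, and a precomputed table can correct one-bit errors without retransmission.

```python
def crc8(msg, poly=0x07):
    """Bitwise CRC-8 with a hypothetical polynomial (x^8 + x^2 + x + 1),
    zero initial value, no final XOR -- which makes it linear over GF(2)."""
    crc = 0
    for byte in msg:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def build_syndrome_table(msg_len, poly=0x07):
    """Map the syndrome of each single-bit error pattern to its bit position.
    By linearity, crc8(received) XOR received_crc equals crc8(error pattern)."""
    table = {}
    for pos in range(msg_len * 8):
        err = bytearray(msg_len)
        err[pos // 8] ^= 0x80 >> (pos % 8)
        table[crc8(err, poly)] = pos
    return table

def correct_single_bit(received, received_crc, table):
    """Correct at most one flipped bit instead of requesting retransmission."""
    syndrome = crc8(received) ^ received_crc
    if syndrome == 0:
        return bytes(received)
    fixed = bytearray(received)
    pos = table[syndrome]  # KeyError would mean an uncorrectable error burst
    fixed[pos // 8] ^= 0x80 >> (pos % 8)
    return bytes(fixed)
```

Whether all single-bit syndromes are distinct depends on the polynomial and the packet length, which is precisely why the paper's "selected" polynomials matter.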
A Spline Algorithm for Modeling Cutting Errors on Turning Centers
Gilsinn, David E.
... Bandy, Automated Production Technology Division, National Institute of Standards and Technology. Turned parts are made up of features with profiles defined by arcs and lines. An error model for turned parts must take this into account. In the case where there is a requirement of tangency between two features, such as a line tangent to an arc, ...
Time reversal in thermoacoustic tomography - an error estimate
Hristova, Yulia
2008-01-01T23:59:59.000Z
The time reversal method in thermoacoustic tomography is used for approximating the initial pressure inside a biological object using measurements of the pressure wave made outside the object. This article presents error estimates for the time reversal method in the cases of variable, non-trapping sound speeds.
IPASS: Error Tolerant NMR Backbone Resonance Assignment by Linear Programming
Waterloo, University of
IPASS (Babak Alipanahi et al.) is proposed as a novel integer linear programming (ILP) based assignment method that works with automatically picked peaks. Although a variety of assignment approaches have been developed, none works well on noisy ...
Preschool Speech Error Patterns Predict Articulation and Phonological Awareness Outcomes in Children With Histories of Speech Sound Disorders
Jonathan L. Preston; Margaret Hull
Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations and predictive of school-age clinical outcomes. The study examines whether preschool speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later.
Prevalence and Causes of Prescribing Errors: The PRescribing Outcomes for Trainee Doctors Engaged in Clinical Training (PROTECT) Study
Hall, Christopher
Cristín Ryan, Sarah ..., and colleagues; affiliations include Health Psychology, University of Aberdeen, Aberdeen, United Kingdom, and Clinical Pharmacology ...
Development of an Expert System for Classification of Medical Errors
Kopec, Danny
A report published by the Institute of Medicine (IOM) indicated that between 44,000 and 98,000 unnecessary deaths per year occur in hospitals in the United States. There has been considerable speculation that these figures are either overestimated or underestimated; regardless of the exact number in the IOM report, what is of importance is that the number of deaths caused by such errors is substantial.
Error field and magnetic diagnostic modeling for W7-X
Lazerson, Sam A. [PPPL; Gates, David A. [PPPL; NEILSON, GEORGE H. [PPPL; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.
2014-07-01T23:59:59.000Z
The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady state (30 minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign as bootstrap-current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.
Errors-in-variables problems in transient electromagnetic mineral exploration
Braslavsky, Julio H.
We consider errors-in-variables problems in transient electromagnetic mineral exploration. A specific sub-problem of interest in this area ... geological surveys, diamond drilling, and airborne mineral exploration. Our interest here is with ground ...
Improving STT-MRAM Density Through Multibit Error Correction
Sapatnekar, Sachin
Traditional methods enhance robustness at the cost of area/energy by using larger cell sizes to improve the thermal stability of the MTJ cells. This paper employs multibit error correction with DRAM ... A key attribute of an MTJ is the notion of thermal stability.
Error Minimization Methods in Biproportional Apportionment
Federica Ricca; Andrea Scozzari; Paolo Serafini
We provide a class of methods for Biproportional Apportionment characterized by an "error minimization" approach, as an alternative to the classical axiomatic approach introduced by Balinski and Demange in 1989. A milestone theoretical setting was given by Balinski and Demange in 1989 [5, 6], and in the statistical literature ...
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR ANALYSIS USING ...
Sambridge, Malcolm
... for the different solutions didn't even overlap. A discrimination and classification strategy ... Due to ambiguity and possible remanent magnetization, the recovered dipole moment is compared to a library ...
Flexible Error Protection for Energy Efficient Reliable Architectures Timothy Miller
Xuan, Dong
Timothy Miller, Nagarjuna ... (Computer Engineering, The Ohio State University). To deal with these competing trends, energy-efficient solutions are needed to deal with reliability ...
Designing Automation to Reduce Operator Errors Nancy G. Leveson
Leveson, Nancy
Nancy G. Leveson (Computer Science and Engineering, University of Washington) and Everett Palmer (NASA Ames Research Center). Advanced automation has been a source of mode-related problems [SW95]. After studying accidents and incidents in the new, highly automated ...
Fast Error Estimates For Indirect Measurements: Applications To Pavement Engineering
Kreinovich, Vladik
Carlos ... We are interested in a quantity y that is difficult to measure directly (e.g., lifetime of a pavement, efficiency of an engine, etc.). To estimate y ... computation time. As an example of this methodology, we give pavement lifetime estimates.
Data aware, Low cost Error correction for Wireless Sensor Networks
California at San Diego, University of
Shoubhik Mukhopadhyay, Debashis ... One of the key challenges in adoption and deployment of wireless networked sensing applications is ensuring reliable sensor data for such applications. A wireless sensor network is inherently vulnerable to different sources of unreliability ...
Joachim Wuttke
2012-09-01T23:59:59.000Z
The C library \texttt{libkww} provides functions to compute the Kohlrausch-Williams-Watts function, i.e.\ the Laplace-Fourier transform of the stretched (or compressed) exponential function $\exp(-t^\beta)$ for exponents $\beta$ between 0.1 and 1.9 with sixteen-digit accuracy. Analytic error bounds are derived for the low and high frequency series expansions. For intermediate frequencies the numeric integration is enormously accelerated by using the Ooura-Mori double exponential transformation. The source code is available from the project home page \url{http://apps.jcns.fz-juelich.de/doku/sc/kww}.
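For orientation, the transform in question can be sketched with naive quadrature (Python; a toy stand-in for libkww's series expansions and Ooura-Mori quadrature, and nowhere near sixteen-digit accuracy):

```python
import math

def kww_cos_transform(omega, beta, t_max=50.0, n=100000):
    """Approximate the Fourier cosine transform of the stretched exponential,
        F(omega) = integral_0^inf cos(omega * t) * exp(-t**beta) dt,
    by the composite trapezoidal rule on [0, t_max].  A naive sketch only:
    libkww itself reaches sixteen digits via series expansions and
    Ooura-Mori double-exponential quadrature."""
    h = t_max / n
    # trapezoidal endpoints: f(0) = 1, f(t_max) nearly zero for t_max = 50
    total = 0.5 * (1.0 + math.cos(omega * t_max) * math.exp(-t_max ** beta))
    for i in range(1, n):
        t = i * h
        total += math.cos(omega * t) * math.exp(-t ** beta)
    return h * total
```

For beta = 1 the transform has the closed form 1/(1 + omega^2), a convenient sanity check.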
Jewson, S
2007-01-01T23:59:59.000Z
One way to predict hurricane numbers would be to predict sea surface temperature, and then predict hurricane numbers as a function of the predicted sea surface temperature. For certain parametric models for sea surface temperature and the relationship between sea surface temperature and hurricane numbers, closed-form solutions exist for the mean and the variance of the number of predicted hurricanes, and for the standard error on the mean. We derive a number of such expressions.
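One such parametric choice can be written down and checked by simulation; the sketch below (Python) assumes a normal sea surface temperature and a log-linear Poisson rate, an illustrative assumption rather than the note's specific models.

```python
import math
import random

def predicted_moments(a, b, mu, sigma):
    """Closed-form mean and variance of the hurricane count when
    T ~ Normal(mu, sigma^2) and N | T ~ Poisson(exp(a + b*T)).
    The rate is lognormal, so
        E[N]   = exp(a + b*mu + (b*sigma)^2 / 2)
        Var[N] = E[N] + E[N]^2 * (exp((b*sigma)^2) - 1)."""
    m = math.exp(a + b * mu + 0.5 * (b * sigma) ** 2)
    v = m + m * m * (math.exp((b * sigma) ** 2) - 1.0)
    return m, v

def simulated_moments(a, b, mu, sigma, n=100000, seed=7):
    """Monte Carlo check of the closed forms."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n):
        lam = math.exp(a + b * rng.gauss(mu, sigma))
        u, k, p, cdf = rng.random(), 0, math.exp(-lam), math.exp(-lam)
        while u > cdf and k < 500:      # Poisson draw by CDF inversion
            k += 1
            p *= lam / k
            cdf += p
        total += k
        total_sq += k * k
    mean = total / n
    return mean, total_sq / n - mean * mean
```

The variance decomposition is just the law of total variance: a Poisson term plus the variance of the lognormal rate.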
Binder enhanced refuse derived fuel
Daugherty, Kenneth E. (Lewisville, TX); Venables, Barney J. (Denton, TX); Ohlsson, Oscar O. (Naperville, IL)
1996-01-01T23:59:59.000Z
A refuse derived fuel (RDF) pellet having about 11% or more particulate calcium hydroxide, which is utilized in a combustible mixture. The pellets are used in a particulate fuel being a mixture of 10% or more, on a heat equivalent basis, of the RDF pellet which contains calcium hydroxide as a binder, with 50% or more, on a heat equivalent basis, of a sulphur-containing coal. Combustion of the mixture is effective to produce an effluent gas from the combustion zone having a reduced SO.sub.2 and polycyclic aromatic hydrocarbon content relative to effluent gas from similar combustion materials not containing the calcium hydroxide.
On the Fourier Transform Approach to Quantum Error Control
Hari Dilip Kumar
2012-08-24T23:59:59.000Z
Quantum codes are subspaces of the state space of a quantum system that are used to protect quantum information. Some common classes of quantum codes are stabilizer (or additive) codes, non-stabilizer (or non-additive) codes obtained from stabilizer codes, and Clifford codes. These are analyzed in a framework using the Fourier transform on finite groups, the finite group in question being a subgroup of the quantum error group considered. All the classes of codes that can be obtained in this framework are explored, including codes more general than Clifford codes. The error detection properties of one of these more general classes ("direct sums of translates of Clifford codes") are characterized. Example codes are constructed, and computer code-search results are presented and analysed.
Method and system for reducing errors in vehicle weighing systems
Hively, Lee M. (Philadelphia, TN); Abercrombie, Robert K. (Knoxville, TN)
2010-08-24T23:59:59.000Z
A method and system (10, 23) for determining vehicle weight to a precision of <0.1% uses a plurality of weight sensing elements (23) and a computer (10) for reading in weighing data for a vehicle (25), and produces a dataset representing the total weight of the vehicle via programming (40-53) that is executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing; (b) determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation from the weighing data; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% of the vehicle weight.
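The oscillation-removal step can be illustrated with a toy version (Python; a hypothetical single known mode, not the patent's mode-identification programming (40-53)): averaging the signal over a whole number of oscillation periods cancels the sinusoidal component.

```python
import math

def estimate_weight(samples, dt, mode_freq):
    """Estimate the static weight from a signal contaminated by one
    oscillatory mode of known frequency (a toy stand-in for the patent's
    identify-and-remove steps): averaging over a whole number of
    oscillation periods cancels the sinusoid."""
    per_period = round(1.0 / (mode_freq * dt))   # samples per oscillation
    usable = (len(samples) // per_period) * per_period
    return sum(samples[:usable]) / usable

# Synthetic weighing record: true weight 1000 units plus a 2 Hz rocking mode.
dt = 0.01
readings = [1000.0 + 5.0 * math.sin(2.0 * math.pi * 2.0 * i * dt)
            for i in range(500)]
```

With 500 samples at 100 Hz the record covers exactly ten 2 Hz periods, so the rocking term averages out and the estimate returns the static weight.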
MPI Runtime Error Detection with MUST: Advances in Deadlock Detection
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; de Supinski, Bronis R.; Müller, Matthias S.
2013-01-01T23:59:59.000Z
The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
Probabilistic growth of large entangled states with low error accumulation
Yuichiro Matsuzaki; Simon C Benjamin; Joseph Fitzsimons
2009-08-03T23:59:59.000Z
The creation of complex entangled states, resources that enable quantum computation, can be achieved via simple 'probabilistic' operations which are individually likely to fail. However, typical proposals exploiting this idea carry a severe overhead in terms of the accumulation of errors. Here we describe a method that can rapidly generate large entangled states with an error accumulation that depends only logarithmically on the failure probability. We find that the approach may be practical for success rates in the sub-10% range, while ultimately becoming unfeasible at lower rates. The assumptions that we make, including parallelism and high connectivity, are appropriate for real systems including measurement-induced entanglement. This result therefore shows the feasibility for real devices based on such an approach.
Comparison of Wind Power and Load Forecasting Error Distributions: Preprint
Hodge, B. M.; Florita, A.; Orwig, K.; Lew, D.; Milligan, M.
2012-07-01T23:59:59.000Z
The introduction of large amounts of variable and uncertain power sources, such as wind power, into the electricity grid presents a number of challenges for system operations. One issue involves the uncertainty associated with scheduling power that wind will supply in future timeframes. However, this is not an entirely new challenge; load is also variable and uncertain, and is strongly influenced by weather patterns. In this work we make a comparison between the day-ahead forecasting errors encountered in wind power forecasting and load forecasting. The study examines the distribution of errors from operational forecasting systems in two different Independent System Operator (ISO) regions for both wind power and load forecasts at the day-ahead timeframe. The day-ahead timescale is critical in power system operations because it serves the unit commitment function for slow-starting conventional generators.
On the efficiency of nondegenerate quantum error correction codes for Pauli channels
Gunnar Bjork; Jonas Almlof; Isabel Sainz
2009-05-19T23:59:59.000Z
We examine the efficiency of pure, nondegenerate quantum error-correction codes for Pauli channels. Specifically, we investigate whether correction of multiple errors in a block is more efficient than using a code that only corrects one error per block. Block coding with multiple-error correction cannot increase the efficiency when the qubit error probability is below a certain value and the code size is fixed. More surprisingly, existing multiple-error correction codes with a code length equal to or less than 256 qubits have lower efficiency than the optimal single-error correcting codes for any value of the qubit error probability. We also investigate how the efficiency of various proposed nondegenerate single-error correcting codes compares to the limit set by the code redundancy and by the necessary conditions for hypothetically existing nondegenerate codes. We find that existing codes are close to optimal.
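As a back-of-the-envelope illustration of the comparison described here (not the paper's efficiency metric, which also accounts for code rate and redundancy), the probability that a nondegenerate t-error-correcting block of n qubits encounters an uncorrectable pattern under independent per-qubit errors can be sketched as:

```python
from math import comb

def block_failure_prob(n, t, p):
    """Probability that more than t of n qubits suffer an error, i.e. an
    uncorrectable pattern for a t-error-correcting nondegenerate block
    code, assuming independent errors with per-qubit probability p."""
    ok = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))
    return 1.0 - ok

# Compare a 5-qubit single-error-correcting block with a 10-qubit
# double-error-correcting block at the same qubit error probability.
p = 0.01
f1 = block_failure_prob(5, 1, p)   # ~9.8e-4
f2 = block_failure_prob(10, 2, p)
```

Raw failure probability alone favors the larger block here; the paper's point is that once rate and redundancy enter the efficiency measure, the comparison can reverse.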
Scaling behavior of discretization errors in renormalization and improvement constants
Bhattacharya, T; Lee, W; Sharpe, S R; Bhattacharya, Tanmoy; Gupta, Rajan; Lee, Weonjong; Sharpe, Stephen R.
2006-01-01T23:59:59.000Z
Non-perturbative results for improvement and renormalization constants needed for on-shell and off-shell O(a) improvement of bilinear operators composed of Wilson fermions are presented. The calculations have been done in the quenched approximation at beta=6.0, 6.2 and 6.4. To quantify residual discretization errors we compare our data with results from other non-perturbative calculations and with one-loop perturbation theory.
Error message recording and reporting in the SLC control system
Spencer, N.; Bogart, J.; Phinney, N.; Thompson, K.
1985-04-01T23:59:59.000Z
Error or information messages that are signaled by control software either in the VAX host computer or the local microprocessor clusters are handled by a dedicated VAX process (PARANOIA). Messages are recorded on disk for further analysis and displayed at the appropriate console. Another VAX process (ERRLOG) can be used to sort, list and histogram various categories of messages. The functions performed by these processes and the algorithms used are discussed.
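A minimal sketch of the ERRLOG-style sort-and-histogram functions described above; the record format, sources, and severity categories below are invented for illustration:

```python
from collections import Counter
from datetime import datetime

# Hypothetical recorded messages: (timestamp, source, severity, text).
log = [
    ("1985-04-01T10:00:00", "VAX", "WARN", "link timeout"),
    ("1985-04-01T10:00:05", "MICRO", "ERROR", "CAMAC read failed"),
    ("1985-04-01T10:00:07", "MICRO", "ERROR", "CAMAC read failed"),
    ("1985-04-01T10:01:00", "VAX", "INFO", "operator console attached"),
]

# Sort by timestamp, then histogram by (source, severity) category.
log.sort(key=lambda rec: datetime.fromisoformat(rec[0]))
histogram = Counter((src, sev) for _, src, sev, _ in log)
```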
Runtime Detection of C-Style Errors in UPC Code
Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D
2011-09-29T23:59:59.000Z
Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
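The class of C-style errors such a monitor targets can be illustrated with a toy checked-array wrapper; this is a deliberately simplified stand-in for the idea of runtime state tracking, not ROSE-CIRM's actual instrumentation mechanism:

```python
# Toy runtime checker: track validity metadata alongside the data so
# out-of-bounds and use-after-free accesses are caught when they occur
# rather than silently corrupting state.
class CheckedArray:
    def __init__(self, size):
        self.data = [0] * size
        self.alive = True

    def free(self):
        self.alive = False

    def __getitem__(self, i):
        if not self.alive:
            raise RuntimeError("use after free")
        if not 0 <= i < len(self.data):
            raise IndexError(f"out-of-bounds access at index {i}")
        return self.data[i]

a = CheckedArray(4)
caught = []
try:
    a[7]                      # would be silent corruption in raw C
except IndexError:
    caught.append("oob")
a.free()
try:
    a[0]
except RuntimeError:
    caught.append("uaf")
```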
Submitted to Math. Comp. ON THE ERROR ESTIMATES FOR THE ...
2002-02-11T23:59:59.000Z
predictor-corrector strategy aiming at uncoupling viscous diffusion and incompressibility effects. ... In practice, the nonlinear terms can be treated either implicitly, semi- ... One derives immediately from standard PDE theory that ...
Introduction: Aliphatic polyesters derived from renewable resources
Aliphatic polyesters derived from renewable resources are of increasing interest; there are also opportunities to derive lactones from biomass, which can then be converted to a wide range ...
The Fourth Partial Derivative In Transport Dynamics
Trinh Khanh Tuoc
2010-01-11T23:59:59.000Z
A new fourth partial derivative is introduced for the study of transport dynamics. It is a Lagrangian partial derivative following the path of diffusion, not the path of convection. Use of this derivative decouples the effect of diffusion and convection and simplifies the analysis of transport processes.
V-109: Google Chrome WebKit Type Confusion Error Lets Remote...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
T-545: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets...
Recompile if your codes run into MPICH error after the maintenance...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Recompile if your codes run into MPICH errors after the maintenance on 6/25/2014. June 27, 2014.
Design techniques for graph-based error-correcting codes and their applications
Lan, Ching Fu
2006-04-12T23:59:59.000Z
...error-correcting (channel) coding. The main idea of error-correcting codes is to add redundancy to the information to be transmitted so that the receiver can exploit the correlation between transmitted information and redundancy and correct or detect errors caused...
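The add-redundancy idea in its simplest form is a repetition code, where the receiver exploits the correlation between copies to correct any single error by majority vote; a minimal sketch (not one of the graph-based codes the thesis designs):

```python
# 3-fold repetition code over bits: triple every bit, decode by
# majority vote, which corrects any single flipped bit per triple.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)
    return out

msg = [1, 0, 1, 1]
tx = encode(msg)
tx[4] ^= 1          # the channel flips one bit
rx = decode(tx)     # majority vote recovers the message
```

Graph-based codes such as LDPC codes apply the same redundancy principle with far better rate/performance trade-offs than repetition.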
Simulations of error in quantum adiabatic computations of random 2-SAT instances
Gill, Jay S. (Jay Singh)
2006-01-01T23:59:59.000Z
This thesis presents a series of simulations of quantum computations using the adiabatic algorithm. The goal is to explore the effect of error, using a perturbative approach that models 1-local errors to the Hamiltonian ...
T-719:Apache mod_proxy_ajp HTTP Processing Error Lets Remote Users Deny Service
Broader source: Energy.gov [DOE]
A remote user can cause the backend server to remain in an error state until the retry timeout expires.
McReynolds, W.L. (Bonneville Power Administration, Vancouver, WA (US)); Badley, D.E. (N.W. Power Pool, Coordinating Office, Portland, OR (US))
1991-08-01T23:59:59.000Z
This paper describes an automatic generation control (AGC) system that simultaneously reduces time error and accumulated inadvertent interchange energy in an interconnected power system. The method is automatic time error and accumulated inadvertent interchange reduction (AIIR). With this method, control areas help correct the system time error when doing so also tends to correct accumulated inadvertent interchange. Thus, in one step, accumulated inadvertent interchange and system time error are corrected.
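The AIIR decision rule can be sketched as follows: a control area biases its generation to correct time error only when that bias also drives its accumulated inadvertent interchange toward zero. The sign convention and bias magnitude below are illustrative assumptions, not the paper's values:

```python
# Sketch of the AIIR rule: apply a time-error correction bias only when
# it simultaneously discharges accumulated inadvertent interchange.
def aiir_bias(time_error_s, inadvertent_mwh, bias_mw=10.0):
    # Assumed convention: positive time error (clock fast) is corrected
    # by under-generating, which also discharges positive accumulated
    # inadvertent interchange.
    if time_error_s > 0 and inadvertent_mwh > 0:
        return -bias_mw
    if time_error_s < 0 and inadvertent_mwh < 0:
        return +bias_mw
    return 0.0  # the two corrections would conflict: take no action

b1 = aiir_bias(+2.0, +50.0)   # both positive: under-generate
b2 = aiir_bias(+2.0, -50.0)   # conflict: no action
```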
Optimum decoding of TCM in the presence of phase errors
Han, Jae Choong
1990-01-01T23:59:59.000Z
discussed. Our approach is to assume that intersymbol interference has been effectively removed by the equalizer while the phase tracking scheme has partially removed the phase jitter, in which case the output of the equalizer will have a slowly varying... The DAL [1] used the decision at the output of the Viterbi decoder to demodulate the local carrier. The performance degradation of coded 8-PSK when disturbed by recovered carrier phase error and jitter is investigated in [6], in which simulation...
Effects of color coding on keying time and errors
Wooldridge, Brenda Gail
1983-01-01T23:59:59.000Z
were to determine the effects, if any, of color coding upon the error rate and location time of special function keys on a computer keyboard. An ACT-YA CRT keyboard interfaced with a Cromemco microcomputer was used. There were 84 high school... to communicate with more and more computer-like devices. The most common computer/human interface is the terminal, consisting of a display screen and keyboard. The format and layout on the display screen of computer-generated information is generally...
The Impact of Soil Sampling Errors on Variable Rate Fertilization
R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink
2004-07-01T23:59:59.000Z
Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. 
Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could result in a $34 per acre cost difference for the fertilization. Because of these differences, better analysis or better sampling methods may need to be done, or more samples collected, to ensure that the soil measurements are truly representative of the field’s spatial variability.
Error-field penetration in reversed magnetic shear configurations
Wang, H. H.; Wang, Z. X.; Wang, X. Q. [MOE Key Laboratory of Materials Modification by Beams of the Ministry of Education, School of Physics and Optoelectronic Engineering, Dalian University of Technology, Dalian 116024 (China)]; Wang, X. G. [School of Physics, Peking University, Beijing 100871 (China)]
2013-06-15T23:59:59.000Z
Error-field penetration in reversed magnetic shear (RMS) configurations is numerically investigated by using a two-dimensional resistive magnetohydrodynamic model in slab geometry. To explore different dynamic processes in locked modes, three equilibrium states are adopted. Stable, marginal, and unstable current profiles for double tearing modes are designed by varying the current intensity between two resonant surfaces separated by a certain distance. Further, the dynamic characteristics of locked modes in the three RMS states are identified, and the relevant physics mechanisms are elucidated. The scaling behavior of critical perturbation value with initial plasma velocity is numerically obtained, which obeys previously established relevant analytical theory in the viscoresistive regime.
An error correcting procedure for imperfect supervised, nonparametric classification
Ferrell, Dennis Ray
1973-01-01T23:59:59.000Z
... is active). For simplicity in writing, Pr(B = B_j) will be abbreviated by Pr(B_j), and f(x|B = B_j) will be abbreviated by f(x|B_j). The basic problem is, upon observing x, to determine which class is active. If complete... to be B_j, r_j(x), is r_j(x) = Σ_{i=1}^{L} ... Pr(B_i|x). The conditional probability of error can be minimized over j by assigning to a measurement x the label value B_j that minimizes r_j(x). The rule which will do this is Bayes rule, b*. The resulting...
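The Bayes rule in this excerpt can be sketched with made-up class priors and Gaussian class-conditional densities (all numbers are illustrative): upon observing x, choose the class B_j maximizing Pr(B_j) f(x|B_j), which minimizes the conditional probability of error:

```python
from math import exp, pi, sqrt

priors = {"B1": 0.6, "B2": 0.4}          # assumed class priors Pr(B_j)
means = {"B1": 0.0, "B2": 2.0}           # class-conditional Gaussian means

def f(x, cls, sigma=1.0):
    """Class-conditional density f(x | B_j), here unit-variance Gaussian."""
    mu = means[cls]
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def bayes_decide(x):
    """Assign the label maximizing the posterior Pr(B_j) f(x | B_j)."""
    return max(priors, key=lambda c: priors[c] * f(x, c))

d_low = bayes_decide(-0.5)    # well below both means: B1 wins
d_high = bayes_decide(2.5)    # near the B2 mean: B2 wins
```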
Trade-off of lossless source coding error exponents Cheng Chang Anant Sahai
Sahai, Anant
Trade-off of lossless source coding error exponents. Cheng Chang (HP Labs, Palo Alto) and Anant Sahai (EECS, UC Berkeley). ISIT 2008.
A Memory Soft Error Measurement on Production Systems Xin Li Kai Shen Michael C. Huang
Shen, Kai
... dealing with these soft (or transient) errors is important for system reliability. Several earlier ... for memory soft error measurement on production systems where performance impact on existing running applications ...
Matt Duckham Page 1 Implementing an object-oriented error sensitive GIS
Duckham, Matt
Implementing an object-oriented error sensitive GIS. Matt Duckham, Department ... in the handling of uncertainty within GIS, the production of what has been described as an error sensitive GIS ... of opportunities, but also impediments to the implementation of such an error sensitive GIS. An important barrier ...
Digication Error Message:"Your username is already in use by another account."
Barrash, Warren
You may need ... you have one). If you receive the error message, here's how to log into your Digication account. (For example, if the error message appeared when using your employee account, switch to your ...
Repeated quantum error correction on a continuously encoded qubit by real-time feedback
Julia Cramer; Norbert Kalb; M. Adriaan Rol; Bas Hensen; Machiel S. Blok; Matthew Markham; Daniel J. Twitchen; Ronald Hanson; Tim H. Taminiau
2015-08-06T23:59:59.000Z
Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits, so that errors can be detected without affecting the encoded state. To be compatible with universal fault-tolerant computations, it is essential that the states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected qubit using a diamond quantum processor. We encode a logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements using an ancilla electron spin, and apply corrections on the encoded state by real-time feedback. The actively error-corrected qubit is robust against errors and multiple rounds of error correction prevent errors from accumulating. Moreover, by correcting phase errors naturally induced by the environment, we demonstrate that encoded quantum superposition states are preserved beyond the dephasing time of the best physical qubit used in the encoding. These results establish a powerful platform for the fundamental investigation of error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing.
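The syndrome logic of a three-qubit phase-flip code can be sketched classically: in the X basis a phase error acts like a bit flip, so two parity checks (the ancilla measurements) locate any single error without reading out the encoded state itself. This is a classical toy of the syndrome arithmetic, not a simulation of the diamond processor:

```python
# Classical sketch of three-qubit phase-flip syndrome decoding.
# flips[i] = 1 means qubit i has suffered a phase flip (in the X basis
# this looks like a bit flip, detectable by pairwise parity checks).
def syndrome(flips):
    return (flips[0] ^ flips[1], flips[1] ^ flips[2])

# Each of the four syndromes points to at most one flipped qubit.
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(flips):
    loc = CORRECTION[syndrome(flips)]
    if loc is not None:
        flips[loc] ^= 1          # real-time feedback: undo the flip
    return flips

state = [0, 0, 0]                # encoded state, no errors yet
state[1] ^= 1                    # environment dephases qubit 1
state = correct(state)           # syndrome (1, 1) locates and fixes it
```

Repeating this detect-and-correct round, as the experiment does, prevents single errors from accumulating into uncorrectable multi-qubit errors.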
Non-Concurrent Error Detection and Correction in Fault-Tolerant Discrete-Time LTI
Hadjicostis, Christoforos
... encoded form and allow error detection and correction to be performed through concurrent parity checks ... that allows parity checks to capture the evolution of errors in the system and, based on non-concurrent parity ...
Exposure Measurement Error in Time-Series Studies of Air Pollution: Concepts and Consequences
Dominici, Francesca
... in time-series studies. 11/11/99. Keywords: measurement error, air pollution, time series, exposure ... of air pollution and health. Because measurement error may have substantial implications for interpreting ...
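One well-known consequence of classical exposure measurement error is attenuation of regression slopes toward zero by the reliability ratio var(x)/(var(x)+var(u)). The simulation below uses made-up values and is a generic illustration of that consequence, not the paper's analysis:

```python
import numpy as np

# Simulate a true exposure x, a mismeasured exposure x_obs = x + u,
# and an outcome y; then estimate the slope from the noisy exposure.
rng = np.random.default_rng(42)
n = 50_000
true_beta = 1.0
x = rng.normal(0.0, 1.0, n)            # true exposure, variance 1
u = rng.normal(0.0, 1.0, n)            # classical measurement error, variance 1
y = true_beta * x + rng.normal(0.0, 0.5, n)
x_obs = x + u

beta_hat = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
expected = true_beta * 1.0 / (1.0 + 1.0)   # reliability ratio = 0.5
```

With equal exposure and error variances, the estimated slope is biased to roughly half its true value, which is exactly the kind of interpretive hazard the paper examines for air-pollution time series.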
Unconventional fuel: Tire derived fuel
Hope, M.W. [Waste Recovery, Inc., Portland, OR (United States)
1995-09-01T23:59:59.000Z
Material recovery of scrap tires for their fuel value has moved from a pioneering concept in the early 1980s to proven and continuous use in the United States' pulp and paper, utility, industrial, and cement industries. The pulp and paper industry's use of tire derived fuel (TDF) is currently consuming tires at the rate of 35 million passenger tire equivalents (PTEs) per year. Twenty mills are known to be burning TDF on a continuous basis. The utility industry is currently consuming tires at the rate of 48 million PTEs per year. Thirteen utilities are known to be burning TDF on a continuous basis. The cement industry is currently consuming tires at the rate of 28 million PTEs per year. Twenty-two cement plants are known to be burning TDF on a continuous basis. Other industrial boilers are currently consuming tires at the rate of 6.5 million PTEs per year. Four industrial boilers are known to be burning TDF on a continuous basis. In total, 59 facilities are currently burning over 117 million PTEs per year. Although 93% of these facilities were not engineered to burn TDF, it has become clear that TDF has found acceptance as a supplemental fuel when blended with conventional fuels in existing combustion devices designed for normal operating conditions. The issues of TDF as a supplemental fuel and its proper specifications are critical to the successful development of this fuel alternative. This paper focuses primarily on TDF's use in boiler-type units.
Aperiodic dynamical decoupling sequences in presence of pulse errors
Zhi-Hui Wang; V. V. Dobrovitski
2011-01-12T23:59:59.000Z
Dynamical decoupling (DD) is a promising tool for preserving the quantum states of qubits. However, small imperfections in the control pulses can seriously affect the fidelity of decoupling, and qualitatively change the evolution of the controlled system at long times. Using both analytical and numerical tools, we theoretically investigate the effect of pulse error accumulation for two aperiodic DD sequences: the Uhrig DD (UDD) protocol [G. S. Uhrig, Phys. Rev. Lett. 98, 100504 (2007)] and the quadratic DD (QDD) protocol [J. R. West, B. H. Fong and D. A. Lidar, Phys. Rev. Lett. 104, 130501 (2010)]. We consider the implementation of these sequences using the electron spins of phosphorus donors in silicon, where DD sequences are applied to suppress dephasing of the donor spins. The dependence of the decoupling fidelity on different initial states of the spins is the focus of our study. We investigate in detail the initial drop in the DD fidelity and its long-term saturation. We also demonstrate that by applying the control pulses along different directions, the performance of QDD protocols can be noticeably improved, and we explain the reason for this improvement. Our results can be useful for future implementations of aperiodic decoupling protocols, and for better understanding of the impact of errors on quantum control of spins.
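For reference, what makes UDD aperiodic is its closed-form pulse timings, t_j = T sin²(πj/(2n+2)) for j = 1..n (Uhrig, 2007); a short sketch:

```python
from math import sin, pi

def udd_times(n, T=1.0):
    """Pulse timings of the n-pulse Uhrig DD sequence over total time T:
    t_j = T * sin^2(pi * j / (2n + 2)), j = 1..n."""
    return [T * sin(pi * j / (2 * n + 2)) ** 2 for j in range(1, n + 1)]

times = udd_times(5)
```

Unlike periodic sequences (equally spaced pulses), the UDD spacings bunch toward the ends of the interval while remaining symmetric about T/2.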
Sample size in factor analysis: The role of model error
MacCallum, R. C.; Widaman, K. F.; Preacher, Kristopher J.; Hong, Sehee
2001-01-01T23:59:59.000Z
Equation: Σ_yy = ΛΦΛ′ + Θ², where Σ_yy is the p × p population covariance matrix for the measured variables and Φ is the r × r population correlation matrix for the common factors (assuming factors are standardized... in the population). This is the standard version of the common factor model for a population covariance matrix. Following similar algebraic procedures, we could derive a structure for a sample covariance matrix, C_yy. However, in such a derivation we can...
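The common factor model Σ_yy = ΛΦΛ′ + Θ² can be checked numerically with small made-up matrices (here p = 3 measured variables and r = 1 factor; the loadings and unique variances are illustrative):

```python
import numpy as np

# Sigma_yy = Lambda Phi Lambda' + Theta^2
Lam = np.array([[0.8], [0.7], [0.6]])     # p x r factor loadings
Phi = np.array([[1.0]])                   # r x r factor correlation matrix
Theta2 = np.diag([0.36, 0.51, 0.64])      # diagonal unique variances
Sigma = Lam @ Phi @ Lam.T + Theta2
```

The unique variances are chosen so each measured variable has unit variance (e.g. 0.8² + 0.36 = 1), making Sigma a correlation matrix whose off-diagonal entries are products of loadings.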
Ginting, Victor
2014-03-15T23:59:59.000Z
it was demonstrated that a posteriori analyses in general, and in particular one that uses adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivity for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities include: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.
Fossen, Haakon
Errors, 3rd printing
· Page 3: Fig. 1.2 has an error in the stratigraphic key: "Tertiary" should ...
· ... "-amplitude" to "-wavelength".
· Page 231, 6th and 3rd last lines of the page: add "Figure" in front of "19.5a ..."; and 3rd line: "three principal axes" (not two).
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Medeiros, Stephen; Hagen, Scott; Weishampel, John; Angelo, James
2015-03-25T23:59:59.000Z
Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
Kaganovich, Igor D.; Massidda, Scottt; Startsev, Edward A.; Davidson, Ronald C.; Vay, Jean-Luc; Friedman, Alex
2012-06-21T23:59:59.000Z
Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. 
The effects of the finite gap width of the bunching module on compression are analyzed analytically.
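A toy one-dimensional model reproduces the scaling described above, in which the achievable compression is set by the relative error in the velocity tilt. Neutralization justifies ignoring space charge here; the noise model and all numbers are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def compression_ratio(rel_error, n=20_000, L=1.0, t_f=10.0, seed=1):
    """Ratio of initial to final rms pulse length for a beam given a
    linear velocity tilt with Gaussian errors of size rel_error
    relative to the head-to-tail tilt amplitude."""
    rng = np.random.default_rng(seed)
    z0 = np.linspace(0.0, L, n)        # initial slice positions
    tilt = L / t_f                     # head-to-tail velocity spread
    v = (L - z0) / t_f                 # ideal tilt: all slices meet at z = L
    v = v + rel_error * tilt * rng.standard_normal(n)
    z_final = z0 + v * t_f
    return z0.std() / z_final.std()

r1 = compression_ratio(0.01)           # 1% tilt error
r2 = compression_ratio(0.001)          # 0.1% tilt error
```

Halving the tilt error doubles the compression: with a perfect tilt every slice would arrive at the focal plane simultaneously, so the residual pulse length is just the tilt error mapped through the drift time.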
Coordinated joint motion control system with position error correction
Danko, George (Reno, NV)
2011-11-22T23:59:59.000Z
Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.
Statistical Error analysis of Nucleon-Nucleon phenomenological potentials
R. Navarro Perez; J. E. Amaro; E. Ruiz Arriola
2014-06-10T23:59:59.000Z
Nucleon-Nucleon potentials are commonplace in nuclear physics and are determined from a finite number of experimental data with limited precision sampling the scattering process. We study the statistical assumptions implicit in the standard least squares fitting procedure and apply, along with more conventional tests, a tail sensitive quantile-quantile test as a simple and confident tool to verify the normality of residuals. We show that the fulfilment of normality tests is linked to a judicious and consistent selection of a nucleon-nucleon database. These considerations prove crucial to a proper statistical error analysis and uncertainty propagation. We illustrate these issues by analyzing about 8000 proton-proton and neutron-proton scattering published data. This enables the construction of potentials meeting all statistical requirements necessary for statistical uncertainty estimates in nuclear structure calculations.
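A minimal quantile-quantile normality check on fit residuals, in the spirit of (but simpler and less powerful than) the tail-sensitive test the paper applies; the residual samples below are synthetic:

```python
import numpy as np
from statistics import NormalDist

def qq_correlation(residuals):
    """Correlation between sorted standardized residuals and standard
    normal quantiles; close to 1 when residuals are normal."""
    r = np.sort((residuals - residuals.mean()) / residuals.std())
    n = len(r)
    probs = (np.arange(1, n + 1) - 0.5) / n
    q = np.array([NormalDist().inv_cdf(p) for p in probs])
    return float(np.corrcoef(r, q)[0, 1])

rng = np.random.default_rng(7)
c_normal = qq_correlation(rng.normal(size=2000))          # well-behaved fit
c_heavy = qq_correlation(rng.standard_cauchy(size=2000))  # outlier-ridden fit
```

A depressed Q-Q correlation (as for the heavy-tailed sample) flags exactly the kind of database inconsistency that, per the paper, invalidates least-squares uncertainty propagation.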
Statistical evaluation of design-error related accidents
Ott, K.O.; Marchaterre, J.F.
1980-01-01T23:59:59.000Z
In a recently published paper (Campbell and Ott, 1979), a general methodology was proposed for the statistical evaluation of design-error related accidents. The evaluation aims at an estimate of the combined residual frequency of yet unknown types of accidents lurking in a certain technological system. Here, the original methodology is extended, as to apply to a variety of systems that evolves during the development of large-scale technologies. A special categorization of incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of the nuclear power reactor technology, considering serious accidents that involve in the accident-progression a particular design inadequacy.
Calibration by Optimization Without Using Derivatives
Markus Lazar
2015-03-06T23:59:59.000Z
Mar 6, 2015 ... Abstract: Applications in engineering frequently require the ... to upper and lower bounds without relying on the knowledge of the derivative of f.
An extension of the classical derivative
Diego Dominici
2006-03-24T23:59:59.000Z
We extend the usual definition of the derivative in a way that Calculus I students can easily comprehend and which allows calculations at branch points.
Anisotropic higher derivative gravity and inflationary universe
W. F. Kao
2006-05-21T23:59:59.000Z
The stability of the Kantowski-Sachs type universe in pure higher derivative gravity theory is analyzed in detail. The non-redundant generalized Friedmann equation of the system is derived by introducing a reduced one-dimensional generalized KS type action. This method greatly reduces the labor in deriving the field equations of any complicated model. The existence and stability of inflationary solutions in the presence of higher derivative terms are also studied in detail. Implications for the choice of physical theories are discussed in this paper.
Georg A. Gottwald; Lewis Mitchell; Sebastian Reich
2011-08-30T23:59:59.000Z
We consider the problem of an ensemble Kalman filter when only partial observations are available. In particular, we consider the situation where the observational space consists of variables which are directly observable with known observational error, and of variables for which only the climatic mean and variance are given. To limit the variance of these poorly resolved variables we derive a variance limiting Kalman filter (VLKF) in a variational setting. We analyze the variance limiting Kalman filter for a simple linear toy model and determine its range of optimal performance. We explore the variance limiting Kalman filter in an ensemble transform setting for the Lorenz-96 system, and show that incorporating information about the variance of some unobservable variables can improve the skill and also increase the stability of the data assimilation procedure.
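The variance-limiting idea can be sketched in scalar form: treat the climatological mean of an unobserved variable as a pseudo-observation whose error variance equals the climatological variance, so the analysis variance is pulled below the climatic value. This is a schematic of the mechanism, not the paper's variational derivation, and the numbers are illustrative:

```python
def kalman_update(mean, var, obs, obs_var):
    """Standard scalar Kalman analysis step."""
    gain = var / (var + obs_var)
    return mean + gain * (obs - mean), (1.0 - gain) * var

# Forecast for a poorly resolved variable: inflated variance.
m_f, v_f = 3.0, 25.0
clim_mean, clim_var = 0.0, 4.0       # the only information available

# Pseudo-observe the climatological mean with the climatological variance.
m_a, v_a = kalman_update(m_f, v_f, clim_mean, clim_var)
```

The analysis variance v_a = v_f·clim_var/(v_f + clim_var) is always below clim_var, which is the limiting behavior that keeps the unresolved variable's ensemble spread from blowing past its climatic value.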
Fabio L. Pedrocchi; N. E. Bonesteel; David P. DiVincenzo
2015-07-03T23:59:59.000Z
The Majorana code is an example of a stabilizer code where the quantum information is stored in a system supporting well-separated Majorana Bound States (MBSs). We focus on one-dimensional realizations of the Majorana code, as well as networks of such structures, and investigate their lifetime when coupled to a parity-preserving thermal environment. We apply the Davies prescription, a standard method that describes the basic aspects of a thermal environment, and derive a master equation in the Born-Markov limit. We first focus on a single wire with immobile MBSs and perform error correction to annihilate thermal excitations. In the high-temperature limit, we show both analytically and numerically that the lifetime of the Majorana qubit grows logarithmically with the size of the wire. We then study a trijunction with four MBSs when braiding is executed. We study the occurrence of dangerous error processes that prevent the lifetime of the Majorana code from growing with the size of the trijunction. The origin of the dangerous processes is the braiding itself, which separates pairs of excitations and renders the noise nonlocal; these processes arise from the basic constraints of moving MBSs in 1D structures. We confirm our predictions with Monte Carlo simulations in the low-temperature regime, i.e. the regime of practical relevance. Our results put a restriction on the degree of self-correction of this particular 1D topological quantum computing architecture.
A multi-period equilibrium pricing model of weather derivatives
Lee, Yongheon; Oren, Shmuel S.
2010-01-01T23:59:59.000Z
Y.: Valuation and hedging of weather derivatives on monthly ... J. Risk 31. Yoo, S.: Weather derivatives and seasonal ... effects and valuation of weather derivatives. Financ. Rev.
A Multi-period Equilibrium Pricing Model of Weather Derivatives
Lee, Yongheon; Oren, Shmuel S.
2008-01-01T23:59:59.000Z
(2002). On modelling and pricing weather derivatives. Applied ... (2003). Arbitrage-free pricing of weather derivatives based on ... effects and valuation of weather derivatives. The Financial
Integrating weather derivatives for managing risks
Bilski, B. [WeatherWise USA LLC, Pittsburgh, PA (United States)
1999-11-01T23:59:59.000Z
As deregulation and customer choice loom on the horizon, many energy utilities and other energy suppliers are scrambling to find new services that add value for consumers. Many are also seeking opportunities for increasing efficiency to ensure that costs remain competitive. Integrating weather derivatives with marketing programs and financial management can produce attractive new services and increase efficiency. Weather derivatives can be used to create innovative consumer services, such as a guaranteed annual energy bill that is unaffected by weather and energy price changes. They can also be used to protect the earnings of energy suppliers from one of their most significant financial risks: unpredictable weather. There are three basic types of weather derivatives available today: options (insurance-based derivatives), swaps (hedge-based derivatives), and packages in which other services are combined with one or both of the above.
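As a concrete illustration of the option type, the sketch below settles a heating-degree-day (HDD) call. The HDD definition (base 65 °F) is the conventional one; the strike, tick size, cap, and temperatures are hypothetical:

```python
def heating_degree_days(daily_mean_temps, base=65.0):
    """HDD for the period: sum of max(base - T, 0) over daily mean temps."""
    return sum(max(base - t, 0.0) for t in daily_mean_temps)

def hdd_call_payoff(hdd, strike, tick, cap=None):
    """HDD call: pays `tick` dollars per degree-day above the strike,
    optionally capped at a maximum payout."""
    payoff = tick * max(hdd - strike, 0.0)
    return min(payoff, cap) if cap is not None else payoff

temps = [30.0, 40.0, 50.0, 70.0]    # hypothetical daily mean temperatures (°F)
hdd = heating_degree_days(temps)    # 35 + 25 + 15 + 0 = 75 degree-days
payout = hdd_call_payoff(hdd, strike=50.0, tick=1000.0)
```

A utility hedging a cold winter would hold the opposite side (an HDD put or swap); the payoff structure is symmetric in the degree-day index.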
Contagious error sources would need time travel to prevent quantum computation
Gil Kalai; Greg Kuperberg
2015-05-07T23:59:59.000Z
We consider an error model for quantum computing that consists of "contagious quantum germs" that can infect every output qubit when at least one input qubit is infected. Once a germ actively causes error, it continues to cause error indefinitely for every qubit it infects, with arbitrary quantum entanglement and correlation. Although this error model looks much worse than quasi-independent error, we show that it reduces to quasi-independent error with the technique of quantum teleportation. The construction, which was previously described by Knill, is that every quantum circuit can be converted to a mixed circuit with bounded quantum depth. We also consider the restriction of bounded quantum depth from the point of view of quantum complexity classes.
Method and apparatus for detecting timing errors in a system oscillator
Gliebe, Ronald J. (Library, PA); Kramer, William R. (Bethel Park, PA)
1993-01-01T23:59:59.000Z
A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
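A software analogue of the comparison step in this patent can be sketched as follows, assuming the oscillator is sampled as a bitstream and the delay equals one full period, so a healthy signal always matches its delayed copy (the injected glitch is hypothetical):

```python
def detect_timing_error(osc_samples, delay):
    """Compare the oscillator bitstream with a delayed copy of itself;
    any index where the two disagree is flagged as a timing error."""
    errors = []
    for i in range(delay, len(osc_samples)):
        if osc_samples[i] != osc_samples[i - delay]:
            errors.append(i)   # mismatch between signal and its delayed copy
    return errors

good = [0, 1] * 8                      # clean square wave, period 2
bad = good[:10] + [1, 1] + good[12:]   # glitch: sample 10 flipped
clean_report = detect_timing_error(good, delay=2)   # expect no errors
fault_report = detect_timing_error(bad, delay=2)    # glitch flagged twice:
                                                    # at entry and as it exits
                                                    # the delay window
```

In the hardware version the mismatch would latch an LED rather than return a list, but the comparison logic is the same.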
Scher, Aaron David
2005-08-29T23:59:59.000Z
List of figures (excerpt): 3. ADS optimization screenshot. 4. Maximum (a) Zin, (b) |S21| magnitude, and (c) |S21| phase error factors; Z0 = 50 Ω, L = 200 um, substrate thickness = 100 um, VSWR = 2. 5. Maximum (a) Zin, (b) |S21| magnitude, and (c) |S21| phase error factors; Z0 = 25 Ω, L = 200 um, substrate thickness = 100 um, VSWR = 2. 6. Maximum Zin error factors for (a) L = 100 um and (b) L = 150 um.
Kaeli, David R.
A Field Analysis of System-level Effects of Soft Errors Occurring in Microprocessors ... will generate sufficient charge to cause a soft error. In the absence of error correction schemes, the system ... rates for unprotected systems [8]. Soft errors are emerging as a significant obstacle to increasing ...
Kaeli, David R.
A Field Failure Analysis of Microprocessors used in Information Systems. Abstract: Soft errors due ... from error logs and error traces of the microprocessors collected from systems in the field. ... focus on soft error rate (SER) estimation of microprocessors used in information systems by analyzing ...
The Importance of Run-time Error Detection Glenn R. Luecke 1
Luecke, Glenn R.
Iowa State University's High Performance Computing Group, Iowa State University, Ames, Iowa 50011, USA State University's High Performance Computing Group for evaluating run-time error detection capabilities
Accounting for model error due to unresolved scales within ensemble Kalman filtering
Lewis Mitchell; Alberto Carrassi
2014-09-02T23:59:59.000Z
We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and extended Kalman filter. The model error statistic required in the analysis update is estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are described; a time-constant model error treatment where the same model error statistical description is time-invariant, and a time-varying treatment where the assumed model error statistics is randomly sampled at each analysis step. We compare both methods with the standard method of dealing with model error through inflation and localization, and illustrate our results with numerical simulations on a low order nonlinear system exhibiting chaotic dynamics. The results show that the filter skill is significantly improved through the proposed model error treatments, and that both methods require far less parameter tuning than the standard approach. Furthermore, the proposed approach is simple to implement within a pre-existing ensemble based scheme. The general implications for the use of the proposed approach in the framework of square-root filters such as the ETKF are also discussed.
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W. (Albuquerque, NM); Heard, Freddie E. (Albuquerque, NM); Cordaro, J. Thomas (Albuquerque, NM)
2008-06-24T23:59:59.000Z
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Goal-oriented local a posteriori error estimator for H(div)
2011-12-15T23:59:59.000Z
Dec 15, 2011 ... error estimator measures the pollution effect from the outside region of D and provides a basis for local refinement in order to efficiently ...
V-172: ISC BIND RUNTIME_CHECK Error Lets Remote Users Deny Service...
Broader source: Energy.gov (indexed) [DOE]
the target resolver to crash IMPACT: Triggering this defect will cause the affected server to exit with an error, denying service to recursive DNS clients that use that...
Ulidowski, Irek
Eccentricity Error Correction for Automated Estimation of Polyethylene Wear after Total Hip. Wire markers are typically attached to the polyethylene acetabular component of the prosthesis so
Choose and choose again: appearance-reality errors, pragmatics and logical ability
Deák, Gedeon O; Enright, Brian
2006-01-01T23:59:59.000Z
Development, 62, 753–766. Speer, J.R. (1984). Two practical ... older still make errors (e.g. Speer, 1984), some preschool ...
Trapped Ion Quantum Error Correcting Protocols Using Only Global Operations
Joseph F. Goodwin; Benjamin J. Brown; Graham Stutter; Howard Dale; Richard C. Thompson; Terry Rudolph
2014-07-07T23:59:59.000Z
Quantum error-correcting codes are many-body entangled states that are prepared and measured using complex sequences of entangling operations. Each element of such an entangling sequence introduces noise to delicate quantum information during the encoding or reading out of the code. It is therefore important to find efficient entangling protocols to avoid the loss of information. Here we propose an experiment that uses only global entangling operations to encode an arbitrary logical qubit to either the five-qubit repetition code or the five-qubit code, with a six-ion Coulomb crystal architecture in a Penning trap. We show that the use of global operations enables us to prepare and read out these codes using only six and ten global entangling pulses, respectively. The proposed experiment also allows the acquisition of syndrome information during readout. We provide a noise analysis for the presented protocols, estimating that we can achieve a six-fold improvement in coherence time with noise as high as $\sim 1\%$ on each entangling operation.
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
Pevey, Ronald E.
2005-09-15T23:59:59.000Z
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
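Two pieces of the setup above can be sketched numerically: the 1/sqrt(N) decay of the calculational standard deviation with history count, and the screening rule that adds a multiple of the standard deviation to the calculated k-effective before comparing against the USL. This is an illustrative toy, not the paper's risk optimization; all parameter values are hypothetical:

```python
import random

def mc_keff(n_histories, seed=1):
    """Toy Monte Carlo k-effective: average noisy per-history tallies and
    report the standard deviation of the mean, which decays like 1/sqrt(N)."""
    rng = random.Random(seed)
    tallies = [rng.gauss(0.95, 0.01) for _ in range(n_histories)]
    mean = sum(tallies) / n_histories
    var = sum((t - mean) ** 2 for t in tallies) / (n_histories - 1)
    return mean, (var / n_histories) ** 0.5   # mean and its standard deviation

def accept_as_subcritical(k_calc, sd, usl=0.97, n_sigma=2.0):
    """Screening rule from the abstract: add n_sigma calculational standard
    deviations to k-effective before comparing to the upper subcritical limit."""
    return k_calc + n_sigma * sd <= usl

_, sd_small_run = mc_keff(100)      # few histories: larger uncertainty
_, sd_large_run = mc_keff(10000)    # many histories: smaller uncertainty
```

The paper's point is that because the sd enters the acceptance test, shrinking it trades computer time against the risk of misclassification, and the production-optimal and risk-optimal choices differ.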
Aperiodic dynamical decoupling sequences in presence of pulse errors
Wang, Zhi-Hui
2011-01-01T23:59:59.000Z
Dynamical decoupling (DD) is a promising tool for preserving the quantum states of qubits. However, small imperfections in the control pulses can seriously affect the fidelity of decoupling, and qualitatively change the evolution of the controlled system at long times. Using both analytical and numerical tools, we theoretically investigate the effect of pulse error accumulation for two aperiodic DD sequences: the Uhrig DD (UDD) protocol [G. S. Uhrig, Phys. Rev. Lett. 98, 100504 (2007)] and the Quadratic DD (QDD) protocol [J. R. West, B. H. Fong and D. A. Lidar, Phys. Rev. Lett. 104, 130501 (2010)]. We consider the implementation of these sequences using the electron spins of phosphorus donors in silicon, where DD sequences are applied to suppress dephasing of the donor spins. The dependence of the decoupling fidelity on different initial states of the spins is the focus of our study. We investigate in detail the initial drop in the DD fidelity, and its long-term saturation. We also demonstra...
SCM Forcing Data Derived from NWP Analyses
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Jakob, Christian
Forcing data, suitable for use with single column models (SCMs) and cloud resolving models (CRMs), have been derived from NWP analyses for the ARM (Atmospheric Radiation Measurement) Tropical Western Pacific (TWP) sites of Manus Island and Nauru.
Direct synthesis of pyridine and pyrimidine derivatives
Hill, Matthew D. (Matthew Dennis)
2008-01-01T23:59:59.000Z
I. Synthesis of Substituted Pyridine Derivatives via the Ruthenium-Catalyzed Cycloisomerization of 3-Azadienynes. The two-step conversion of various N-vinyl and N-aryl amides to the corresponding substituted pyridines and ...
Tax Credit for Forest Derived Biomass
Broader source: Energy.gov [DOE]
Forest-derived biomass includes tree tops, limbs, needles, leaves, and other woody debris leftover from activities such as timber harvesting, forest thinning, fire suppression, or forest health m...
Deriving Mathisson - Papapetrou equations from relativistic pseudomechanics
R. R. Lompay
2005-03-12T23:59:59.000Z
It is shown that the equations of motion of a test point particle with spin in a given gravitational field, the so-called Mathisson-Papapetrou equations, can be derived from the Euler-Lagrange equations of relativistic pseudomechanics: relativistic mechanics that uses, side by side, the conventional (commuting) and Grassmannian (anticommuting) variables. In this approach the known difficulties of the Mathisson-Papapetrou equations, namely the problem of the choice of supplementary conditions and the problem of higher derivatives, do not appear.
IEEE SENSORS JOURNAL, VOL. 3, NO. 5, OCTOBER 2003 595 Active Structural Error Suppression in MEMS
Chen, Zhongping
-run perturbations are presented. Index Terms: error suppression, microelectromechanical systems (MEMS), rate integrating gyroscopes, smart MEMS. I. INTRODUCTION: As microelectromechanical systems (MEMS) inertial sensors ...
A Case for Soft Error Detection and Correction in Computational Chemistry
van Dam, Hubertus JJ; Vishnu, Abhinav; De Jong, Wibe A.
2013-09-10T23:59:59.000Z
High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
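One common detection-and-correction mechanism for read-only data structures of the kind discussed above is a checksum plus a protected backup copy. The sketch below is a generic illustration of that pattern, not the paper's Hartree-Fock implementation; the data and the injected bit flip are hypothetical:

```python
import zlib

def checksum(buf):
    """CRC32 over the raw bytes of a data structure."""
    return zlib.crc32(bytes(buf))

# Protect a read-only array (e.g., tabulated constants) with a CRC and a
# backup copy; on a detected soft error, restore from the backup.
data = bytearray([1, 2, 3, 4, 5])
backup = bytes(data)
ref = checksum(data)

data[2] ^= 0x10                      # simulate a single-bit soft error
corrupted = checksum(data) != ref    # detection: CRC no longer matches

if corrupted:
    data[:] = backup                 # correction: restore the protected copy
recovered = checksum(data) == ref
```

For data that can be recomputed (rather than copied), the correction step would re-derive the structure instead of restoring a backup; the detection side is identical.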
The Invariance of Score Tests to Measurement Error By CHI-LUN CHENG
Huang, Su-Yun
for a Box-Cox power transformation. Under specific constraints, we show that the score tests for measurement ... these established results when the true model is subject to measurement errors. It is known that ignoring ... variable x_i is the true value ξ_i plus some random measurement error ε_i: x_i = ξ_i + ε_i (i = 1, ..., n). (1)
Fessler, Jeffrey A.
PERTURBATION-BASED ERROR ANALYSIS OF ITERATIVE IMAGE RECONSTRUCTION ALGORITHM FOR X-RAY COMPUTED TOMOGRAPHY ... The effects of the quantization error in forward-projection and back-projection ... computed tomography (CT) have been proposed to improve image quality and reduce dose [1]. These methods ...
Convergence Analysis of the LMS Algorithm with a General Error Nonlinearity and an IID Input
Al-Naffouri, Tareq Y.
Convergence Analysis of the LMS Algorithm with a General Error Nonlinearity and an IID Input. Tareq ... of Electrical Eng. Abstract: The class of least mean square (LMS) algorithms employing a general error ... are entirely consistent with those of the LMS algorithm and several of its variants. The results also ...
Al-Naffouri, Tareq Y.
The Optimum Error Nonlinearity in LMS Adaptation with an Independent and Identically Distributed ... Stanford, CA 94305, USA; Dhahran 31261, Saudi Arabia. Abstract: The class of LMS algorithms employing a general ... view of error nonlinearities in LMS adaptation. In particular, it subsumes two recently developed ...
Minimum Bit Error Probability of Large Randomly Spread MC-CDMA Systems in
Müller, Ralf R.
Minimum Bit Error Probability of Large Randomly Spread MC-CDMA Systems in Multipath Rayleigh Fading ... to calculate the bit error probability in the large system limit for randomly assigned spreading sequences ... detection with ... is accurate if the number of users and the spreading factor are large. His calculations ...
Using system simulation to model the impact of human error in a maritime system
van Dorp, Johan René
the modeling of human error related accident event sequences in a risk assessment of maritime oil ... A framework was developed for the Prince William Sound Risk Assessment based on interviews with maritime ... Keywords: Prince William Sound; human error; maritime accidents; expert judgement; risk assessment; risk management.
TYPOGRAPHICAL AND ORTHOGRAPHICAL SPELLING ERROR Kyongho Min*, William H. Wilson*, Yoo-Jin Moon
Wilson, Bill
-Jin Moon *School of Computer Science and Engineering The University of New South Wales Sydney NSW 2052 of spelling errors such as typographical (Damerau, 1964; Pollock and Zamora, 1983), orthographical (Sterling), and orthographical errors in spontaneous writings of children (Sterling, 1983; Mitton, 1987). 1.2. Approaches
Approximate logic circuits for low overhead, non-intrusive concurrent error detection
Mohanram, Kartik
Approximate logic circuits for low overhead, non-intrusive concurrent error detection. Mihir R... for the synthesis of approximate logic circuits. A low overhead, non-intrusive solution for concurrent error ... as proposed in this paper. A low overhead, non-intrusive solution for CED based on approximate ...
Drift-magnetohydrodynamical model of error-field penetration in tokamak plasmas
Fitzpatrick, Richard
Drift-magnetohydrodynamical model of error-field penetration in tokamak plasmas. A. Cole and R... ... published magnetohydrodynamical (MHD) model of error-field penetration in tokamak plasmas is extended to take ... in ohmic tokamak plasmas. © 2006 American Institute of Physics. DOI: 10.1063/1.2178167. I. INTRODUCTION
Observability-aware Directed Test Generation for Soft Errors and Crosstalk Faults
Mishra, Prabhat
Observability-aware Directed Test Generation for Soft Errors and Crosstalk Faults. Kanad Basu ... In modern System-on-Chip (SoC) design methodology, it is found that regions where errors are detected ... emerged as an important component of any chip design methodology to detect both functional and electrical ...
Presenting JECA: A Java Error Correcting Algorithm for the Java Intelligent Tutoring System
Franek, Frantisek
Presenting JECA: A Java Error Correcting Algorithm for the Java Intelligent Tutoring System Edward context involving small Java programs. Furthermore, this paper presents JECA (Java Error Correction is to provide a foundation for the Java Intelligent Tutoring System (JITS) currently being field-tested. Key
Impact of Turbulence Closures and Numerical Errors for the Optimization of Flow Control Devices
Paris-Sud XI, Université de
Impact of Turbulence Closures and Numerical Errors for the Optimization of Flow Control Devices J the use of a Kriging-based global optimization method to determine optimal control parameters conduct an optimization process and measure the impact of numerical and modeling errors on the optimal
ERROR BOUNDS FOR MONOTONE APPROXIMATION SCHEMES FOR HAMILTON-JACOBI-BELLMAN
ERROR BOUNDS FOR MONOTONE APPROXIMATION SCHEMES FOR HAMILTON-JACOBI-BELLMAN EQUATIONS GUY BARLES AND ESPEN R. JAKOBSEN Abstract. We obtain error bounds for monotone approximation schemes of Hamilton-Jacobi, (almost) smooth supersolutions for the Hamilton-Jacobi-Bellman equation. 1. Introduction This paper
AN ADAPTIVE METHOD WITH RIGOROUS ERROR CONTROL FOR THE HAMILTON-JACOBI EQUATIONS.
AN ADAPTIVE METHOD WITH RIGOROUS ERROR CONTROL FOR THE HAMILTON-JACOBI EQUATIONS. PART II: THE TWO adaptive method with rigorous error control for the Hamilton-Jacobi equations. Part II: The two and study an adaptive method for finding approximations to the viscosity solution of Hamilton-Jacobi
Object calculus and the object-oriented analysis and design of an error-sensitive GIS
Duckham, Matt
Object calculus and the object-oriented analysis and design of an error-sensitive GIS MATT DUCKHAM of an error-sensitive GIS Abstract. The use of object-oriented analysis and design (OOAD) in GIS research of the key contemporary issues in GIS. This paper examines the application of one particular OO formalism
Static Detection of API Error-Handling Bugs via Mining Source Code
Young, R. Michael
Static Detection of API Error-Handling Bugs via Mining Source Code Mithun Acharya and Tao Xie error specifi- cations automatically from software package repositories, without requiring any user inter-procedurally scattered and not always correctly coded by the programmers, manually inferring
Measurement and Analysis of the Error Characteristics of an In-Building Wireless Network
Steenkiste, Peter
on fiber or electrical connections have excellent error characteristics but that wireless networks ... Measurement and Analysis of the Error Characteristics of an In-Building Wireless Network. David ... {davide,prsg}@cs.cmu.edu. Abstract: There is general belief that networks based on wireless technologies ...
Adaptive Density Estimation in the Pile-up Model Involving Measurement Errors
Paris-Sud XI, Université de
Adaptive Density Estimation in the Pile-up Model Involving Measurement Errors Fabienne Comte, Tabea of nonparametric density estimation in the pile-up model. Adaptive nonparametric estimators are proposed for the pile-up model in its simple form as well as in the case of additional measurement errors. Furthermore
State preservation by repetitive error detection in a superconducting quantum circuit J. Kelly,1,
Martinis, John M.
State preservation by repetitive error detection in a superconducting quantum circuit. J. Kelly, ... and superconducting circuits [11-13] have demonstrated multi-qubit states that are first-order tolerant to one type of error. Recently, experiments with ion traps and superconducting circuits have shown the simultaneous de...
Integrated Control-Path Design and Error Recovery in the Synthesis of Digital
Chakrabarty, Krishnendu
Integrated Control-Path Design and Error Recovery in the Synthesis of Digital Microfluidic Lab ... that incorporates control paths and an error-recovery mechanism in the design of a digital microfluidic lab ... compared to a baseline chip design, the biochip with a control path can reduce the completion time by 30...
Maintaining Standards: Differences between the Standard Deviation and Standard Error, and
California at Santa Cruz, University of
Maintaining Standards: Differences between the Standard Deviation and Standard Error, and When to Use Each David L Streiner, PhD1 Many people confuse the standard deviation (SD) and the standard error of the mean (SE) and are unsure which, if either, to use in presenting data in graphical or tabular form
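The distinction Streiner draws is easy to state numerically: the SD describes the spread of the individual observations, while the SE = SD / sqrt(n) describes the precision of the estimated mean. A minimal check (the data values are arbitrary):

```python
import statistics

def sd_and_se(xs):
    """Sample standard deviation (spread of the data) versus the standard
    error of the mean (precision of the mean): SE = SD / sqrt(n)."""
    sd = statistics.stdev(xs)        # sample SD (n - 1 denominator)
    se = sd / len(xs) ** 0.5         # standard error of the mean
    return sd, se

sd, se = sd_and_se([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
```

Error bars drawn with the SE are always tighter than those drawn with the SD, which is why the choice matters when presenting data graphically.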
A Non-Stationary Errors-in-Variables Method with Application to Mineral Exploration
Braslavsky, Julio H.
A Non-Stationary Errors-in-Variables Method with Application to Mineral Exploration K. Lau 1 J. H-cancellation in transient electromagnetic mineral exploration. Alternative methods for noise cancellation in these systems for this class of systems is proposed and applied to a problem arising in mineral exploration. An errors
Threshold analysis with fault-tolerant operations for nonbinary quantum error correcting codes
Kanungo, Aparna
2005-11-01T23:59:59.000Z
Quantum error correcting codes have been introduced to encode the data bits in extra redundant bits in order to accommodate errors and correct them. However, due to the delicate nature of the quantum states or faulty gate operations, there is a...
A System for 3D Error Visualization and Assessment of Digital Elevation Models
Gousie, Michael B.
A System for 3D Error Visualization and Assessment of Digital Elevation Models Michael B. Gousie that displays a DEM and possible errors in 3D, along with its associated contour or sparse data and detail. The cutting tool is semi-transparent so that the profile is seen in the context of the 3D surface
An Energy-Aware Fault Tolerant Scheduling Framework for Soft Error Resilient Cloud Computing Systems
Pedram, Massoud
An Energy-Aware Fault Tolerant Scheduling Framework for Soft Error Resilient Cloud Computing has drastically increased their susceptibility to soft errors. At the grand scale of cloud computing outputs or system crash. At the grand scale of cloud computing, this problem can only worsen [2, 3, 4, 5
Embedded packet video transmission over wireless channels using power control and forward error
Granelli, Fabrizio
for implementing packet prioritization based on a non-uniform allocation of the available transmission energy high percentage of transmission errors in the wireless medium and the limited energy of portable energy distribution is jointly employed with error correction schemes in order to achieve optimal non
Database Error Trapping and Prediction Mike West & Robert L. Winkler \\Lambda
West, Mike
, such as electronic components or systems, or components of computer software systems, that are subject to ... regimes and reliability control being of particular note. Keywords: ERROR DETECTION, ERROR RATES, DATA QUALITY, DATA MAN... Examples in industrial quality and reliability control may concern manufactured items
ASC Report No. 45/2012 A Numerical Study of Averaging Error
Melenk, Jens Markus
polynomials of the same polynomial degree as the finite element solution leads to reliability and efficiency], is a widely used method for gauging errors in finite element methods and steering adaptive mesh refinements and M. Tutz A review of stability and error theory for collocation methods applied to linear boundary
Improving the Accuracy of Industrial Robots by offline Compensation of Joints Errors
Paris-Sud XI, Université de
Improving the Accuracy of Industrial Robots by offline Compensation of Joints Errors Adel Olabi.damak@geomnia.eu Abstract--The use of industrial robots in many fields of industry like prototyping, pre-machining and end errors. Identification methods are presented with experimental validation on a 6 axes industrial robot
Potential Hydraulic Modelling Errors Associated with Rheological Data Extrapolation in Laminar Flow
Shadday, Martin A., Jr.
1997-03-20T23:59:59.000Z
The potential errors associated with the modelling of flows of non-Newtonian slurries through pipes, due to inadequate rheological models and extrapolation outside of the ranges of data bases, are demonstrated. The behaviors of both dilatant and pseudoplastic fluids with yield stresses, and the errors associated with treating them as Bingham plastics, are investigated.
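The extrapolation hazard described in this abstract can be made concrete with two standard rheological models: a Bingham plastic (tau = tau_y + mu_p * gamma_dot) and a pseudoplastic Herschel-Bulkley fluid (tau = tau_y + K * gamma_dot**n with n < 1). The sketch below fits the Bingham slope to agree with a hypothetical pseudoplastic slurry at one shear rate and then extrapolates an order of magnitude beyond it; all parameter values are invented for illustration:

```python
def bingham_stress(gamma_dot, tau_y, mu_p):
    """Bingham plastic: linear shear stress above the yield stress tau_y."""
    return tau_y + mu_p * gamma_dot

def herschel_bulkley_stress(gamma_dot, tau_y, k, n):
    """Herschel-Bulkley: tau = tau_y + k * gamma_dot**n; n < 1 is pseudoplastic."""
    return tau_y + k * gamma_dot ** n

# Hypothetical pseudoplastic slurry (n = 0.5), with the Bingham slope chosen
# so that the two models agree exactly at gamma_dot = 100 1/s:
tau_y, k, n = 5.0, 2.0, 0.5
mu_p = k * 100.0 ** n / 100.0

# Extrapolate both models to gamma_dot = 1000 1/s, beyond the "data" range:
tau_true = herschel_bulkley_stress(1000.0, tau_y, k, n)
tau_bingham = bingham_stress(1000.0, tau_y, mu_p)
overprediction = tau_bingham / tau_true   # Bingham fit overpredicts stress
```

Inside the fitted range the two curves coincide; outside it the linear Bingham model overpredicts the shear stress of the pseudoplastic fluid severalfold, which is exactly the modelling error the report warns about.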
Paris-Sud XI, Université de
Network Code Design from Unequal Error Protection Coding: Channel-Aware Receiver Design. ... Abstract: In this paper, we propose Unequal Error Protection (UEP) coding theory as a viable and flexible method for the design of network codes for multisource multirelay
Almasi, Gheorghe (Ardsley, NY); Blumrich, Matthias Augustin (Ridgefield, CT); Chen, Dong (Croton-On-Hudson, NY); Coteus, Paul (Yorktown, NY); Gara, Alan (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Heidelberger, Philip (Cortlandt Manor, NY); Hoenicke, Dirk I. (Ossining, NY); Singh, Sarabjeet (Mississauga, CA); Steinmacher-Burow, Burkhard D. (Wernau, DE); Takken, Todd (Brewster, NY); Vranas, Pavlos (Bedford Hills, NY)
2008-06-03T23:59:59.000Z
Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values for--example, checksums--to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieve commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
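"Commutative" here means order-independent: the detection value for a set of injected packets must not depend on arrival or accumulation order, so it can be folded in incrementally. A toy version of such a value, summing per-packet CRC32s modulo 2**32, is sketched below (the packet contents are hypothetical, and the real patent does not specify this particular construction):

```python
import zlib

def commutative_edv(packets):
    """Order-independent detection value: sum of per-packet CRC32 values
    modulo 2**32, so any ordering of the same packets gives the same value."""
    return sum(zlib.crc32(p) for p in packets) % (2 ** 32)

run1 = [b"alpha", b"beta", b"gamma"]
run2 = [b"gamma", b"alpha", b"beta"]        # same traffic, different order
run_faulty = [b"alpha", b"bXta", b"gamma"]  # one silently corrupted packet

same = commutative_edv(run1) == commutative_edv(run2)        # order-invariant
fault = commutative_edv(run1) != commutative_edv(run_faulty)  # fault detected
```

Comparing a node's value across two runs of the same reproducible program section, as the patent describes, then flags the node whose values disagree.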
High ethanol producing derivatives of Thermoanaerobacter ethanolicus
Ljungdahl, Lars G. (Athens, GA); Carriera, Laura H. (Athens, GA)
1983-01-01T23:59:59.000Z
Disclosed are derivatives of the newly discovered microorganism Thermoanaerobacter ethanolicus which, under anaerobic and thermophilic conditions, continuously ferment substrates such as starch, cellobiose, glucose, xylose and other sugars to produce recoverable amounts of ethanol, solving the problem of fermentations that yield low concentrations of ethanol when using the parent strain. These new derivatives are ethanol tolerant up to 10% (v/v) ethanol during fermentation. The process includes the use of an aqueous fermentation medium containing the substrate at a substrate concentration greater than 1% (w/v).
High ethanol producing derivatives of Thermoanaerobacter ethanolicus
Ljungdahl, L.G.; Carriera, L.H.
1983-05-24T23:59:59.000Z
Disclosed are derivatives of the newly discovered microorganism Thermoanaerobacter ethanolicus which, under anaerobic and thermophilic conditions, continuously ferment substrates such as starch, cellobiose, glucose, xylose and other sugars to produce recoverable amounts of ethanol, solving the problem of fermentations that yield low concentrations of ethanol when using the parent strain. These new derivatives are ethanol tolerant up to 10% (v/v) ethanol during fermentation. The process includes the use of an aqueous fermentation medium containing the substrate at a substrate concentration greater than 1% (w/v).
SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION
Lee, Khee-Gan, E-mail: lee@astro.princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)
2012-07-10T23:59:59.000Z
Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ-1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of δ(γ) ≈ -0.1, while the error is increased to σ_γ ≈ 0.2 compared to σ_γ ≈ 0.1 in the absence of continuum errors.
A self-checking fiber optic dosimeter for monitoring common errors in brachytherapy applications
Yin, Y.; Lambert, J.; Yang, S.; McKenzie, D. R.; Jackson, M.; Suchowerska, N. [Physics School, University of Sydney, New South Wales 2006 (Australia); Physics School, University of Sydney, New South Wales 2006 (Australia) and Department of Radiation Oncology, Royal Prince Alfred Hospital, New South Wales 2050 (Australia); Physics School, University of Sydney, New South Wales 2006 (Australia); Department of Radiation Oncology, Royal Prince Alfred Hospital, New South Wales 2050 (Australia); Physics School, University of Sydney, New South Wales 2006 (Australia) and Department of Radiation Oncology, Royal Prince Alfred Hospital, New South Wales 2050 (Australia)
2009-07-15T23:59:59.000Z
Scintillation dosimetry with optical fiber readout [fiber optic dosimetry (FOD)] requires accurate measurement of light intensity. It is therefore vulnerable to loss of calibration if any changes occur in the efficiency of the optical pathway between the scintillator and the light detector. The authors show in this article that common types of errors that arise during clinical use for brachytherapy applications can be quantified using a light emitting diode to stimulate the scintillator, the so-called LED-FOD method, in an integrated and easy-to-use control unit that incorporates a compact peripheral component interconnect extension for instrumentation. Common sources of error include bending and mechanical compression of the fiber optic components and changes in the temperature of the scintillator. The authors show that the method can detect all the common errors studied in this work and that different types of errors can result in different correlations between the LED stimulated signal and the brachytherapy source signal. For a single type of error, the LED-FOD method can be used easily for system diagnosis and validation, with the possibility to correct the dosimeter reading if the correlation between the LED stimulated signal and the brachytherapy source signal can be defined. For more complex errors, resulting from two or more errors occurring simultaneously, the LED-FOD method can also allow the clinician to make a judgment on the reliability of the dosimeter reading. This self-checking method can enhance the clinical robustness of the FOD for achieving accurate dose control.
Evans, Suzanne B., E-mail: Suzannne.evans@yale.edu [Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut (United States); Yu, James B. [Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut (United States)] [Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut (United States); Chagpar, Anees [Department of Surgery, Yale University School of Medicine, New Haven, Connecticut (United States)] [Department of Surgery, Yale University School of Medicine, New Haven, Connecticut (United States)
2012-10-01T23:59:59.000Z
Purpose: To analyze error disclosure attitudes of radiation oncologists and to correlate error disclosure beliefs with survey-assessed disclosure behavior. Methods and Materials: With institutional review board exemption, an anonymous online survey was devised. An email invitation was sent to radiation oncologists (American Society for Radiation Oncology [ASTRO] gold medal winners, program directors and chair persons of academic institutions, and former ASTRO lecturers) and residents. A disclosure score was calculated based on the number of full, partial, or no-disclosure responses chosen to the vignette-based questions, and correlation was attempted with attitudes toward error disclosure. Results: The survey received 176 responses: 94.8% of respondents considered themselves more likely to disclose in the setting of a serious medical error; 72.7% of respondents did not feel it mattered who was responsible for the error in deciding to disclose, and 3.9% felt more likely to disclose if someone else was responsible; 38.0% of respondents felt that disclosure increased the likelihood of a lawsuit, and 32.4% felt disclosure decreased the likelihood of lawsuit; 71.6% of respondents felt near misses should not be disclosed; 51.7% thought that minor errors should not be disclosed; 64.7% viewed disclosure as an opportunity for forgiveness from the patient; and 44.6% considered the patient's level of confidence in them to be a factor in disclosure. For a scenario that could be considered a non-harmful error, 78.9% of respondents would not contact the family. Respondents with high disclosure scores were more likely to feel that disclosure was an opportunity for forgiveness (P=.003) and to have never seen major medical errors (P=.004). Conclusions: The surveyed radiation oncologists chose to respond with full disclosure at a high rate, although ideal disclosure practices were not uniformly adhered to beyond the initial decision to disclose the occurrence of the error.
Performance and Error Analysis of Knill's Postselection Scheme in a Two-Dimensional Architecture
Ching-Yi Lai; Gerardo Paz; Martin Suchara; Todd A. Brun
2013-05-31T23:59:59.000Z
Knill demonstrated a fault-tolerant quantum computation scheme based on concatenated error-detecting codes and postselection, with a simulated error threshold of 3% over the depolarizing channel. We show how to use Knill's postselection scheme in a practical two-dimensional quantum architecture that we designed with the goal of optimizing the error correction properties while satisfying important architectural constraints. In our 2D architecture, one logical qubit is embedded in a tile consisting of 5×5 physical qubits. The movement of these qubits is modeled as noisy SWAP gates and the only physical operations that are allowed are local one- and two-qubit gates. We evaluate the practical properties of our design, such as its error threshold, and compare it to the concatenated Bacon-Shor code and the concatenated Steane code. Assuming that all gates have the same error rates, we obtain a threshold of 3.06×10^-4 in a local adversarial stochastic noise model, which is the highest known error threshold for concatenated codes in 2D. We also present a Monte Carlo simulation of the 2D architecture with depolarizing noise and we calculate a pseudo-threshold of about 0.1%. With memory error rates one-tenth of the worst gate error rates, the threshold for the adversarial noise model and the pseudo-threshold over depolarizing noise are 4.06×10^-4 and 0.2%, respectively. In a hypothetical technology where memory error rates are negligible, these thresholds can be further increased by shrinking the tiles into a 4×4 layout.
Deriving Particle Distributions from In-Line Fraunhofer Holographic Data
C.A. Ciarcia; D.E. Johnson; D.S. Sorenson; R.H. Frederickson, A.D. Delanoy; R.M. Malone; T.W. Tunnel
1997-08-01T23:59:59.000Z
Holographic data are acquired during hydrodynamic experiments at the Pegasus Pulsed Power Facility at the Los Alamos National Laboratory. These experiments produce a fine spray of fast-moving particles. Snapshots of the spray are captured using in-line Fraunhofer holographic techniques. Roughly one cubic centimeter is recorded by the hologram. Minimum detectable particle size in the data extends down to 2 microns. In a holography reconstruction system, a laser illuminates the hologram as it rests in a three-axis actuator, recreating the snapshot of the experiment. A computer guides the actuators through an orderly sequence programmed by the user. At selected intervals, slices of this volume are captured and digitized with a CCD camera. Intermittent on-line processing of the image data and computer control of the camera functions optimizes statistics of the acquired image data for off-line processing. Tens of thousands of individual data frames (30 to 40 gigabytes of data) are required to recreate a digital representation of the snapshot. Throughput of the reduction system is 550 megabytes per hour (MB/hr). Objects and associated features from the data are subsequently extracted during off-line processing. Discrimination and correlation tests reject noise, eliminate multiple counting of particles, and build an error model to estimate performance. Objects surviving these tests are classified as particles. The particle distributions are derived from the database formed by these particles, their locations and features. Throughput of the off-line processing exceeds 500 MB/hr. This paper describes the reduction system, outlines the off-line processing procedure, summarizes the discrimination and correlation tests, and reports numerical results for a sample data set.
Biofuels and bio-products derived from
Ginzel, Matthew
Biofuels and bio-products derived from lignocellulosic biomass (plant materials) are part of efforts to improve the energy and carbon efficiencies of biofuels production from a barrel of biomass using chemical and thermal catalytic mechanisms. The Center for Direct Catalytic Conversion of Biomass to Biofuels
Wind information derived from hot air
Haak, Hein
Wind information derived from hot air balloon flights for use in short term wind forecasts. Slide outline: Introduction/Motivation; hot air balloons as wind measuring device; setup of nested HIRLAM models; results (The Netherlands). Hot air balloon: displacement per time unit = wind speed; vertical resolution 30 m; inertia (500 kg
Higher Derivative D-brane Couplings
Guo, Guangyu
2012-10-19T23:59:59.000Z
supersymmetry. In the third part, we obtain the higher derivative D-brane action by using both linearized T-duality and string disc amplitude computation. We evaluate disc amplitude of one R-R field C^(p-3) and two NS-NS fields in the presence of a single Dp...
Background and Motivation Biomass derived syngas contains
Das, Suman
Background and Motivation: Biomass-derived syngas contains CO, H2, small hydrocarbons, and H2S; shown to be effective for syngas conditioning. (Figure residue: axis ticks and a Co2+ (mol m-2) label.)
Derivation of a poroelastic flexural shell model
Mikelic, Andro
2015-01-01T23:59:59.000Z
In this paper we investigate the limit behavior of the solution to quasi-static Biot's equations in thin poroelastic flexural shells as the thickness of the shell tends to zero and extend the results obtained for the poroelastic plate by Marciniak-Czochra and Mikelić. We choose Terzaghi's time corresponding to the shell thickness and obtain the strong convergence of the three-dimensional solid displacement, fluid pressure and total poroelastic stress to the solution of the new class of shell equations. The derived bending equation is coupled with the pressure equation and it contains the bending moment due to the variation in pore pressure across the shell thickness. The effective pressure equation is parabolic only in the normal direction. As additional terms it contains the time derivative of the middle-surface flexural strain. Derivation of the model presents an extension of the results on the derivation of classical linear elastic shells by Ciarlet and collaborators to the poroelastic shells case. The n...
Derivation of a Stochastic Neutron Transport Equation
Edward J. Allen
2010-04-14T23:59:59.000Z
Stochastic difference equations and a stochastic partial differential equation (SPDE) are simultaneously derived for the time-dependent neutron angular density in a general three-dimensional medium where the neutron angular density is a function of position, direction, energy, and time. Special cases of the equations are given such as transport in one-dimensional plane geometry with isotropic scattering and transport in a homogeneous medium. The stochastic equations are derived from basic principles, i.e., from the changes that occur in a small time interval. Stochastic difference equations of the neutron angular density are constructed, taking into account the inherent randomness in scatters, absorptions, and source neutrons. As the time interval decreases, the stochastic difference equations lead to a system of Ito stochastic differential equations (SDEs). As the energy, direction, and position intervals decrease, an SPDE is derived for the neutron angular density. Comparisons between numerical solutions of the stochastic difference equations and independently formulated Monte Carlo calculations support the accuracy of the derivations.
Deriving Security Requirements from Crosscutting Threat Descriptions
Haley, Charles B.
Deriving Security Requirements from Crosscutting Threat Descriptions. Charles B. Haley, Robin C... Representing threats as crosscutting concerns aids in determining the effect of security requirements on the functional requirements. Assets (objects that have value in a system) are first enumerated, and then threats
Isatin Derivatives as Inhibitors of Microtubule Assembly
Beckman, Karen
2008-09-04T23:59:59.000Z
This thesis describes the rationale, design, and syntheses of derivatives of isatin (1-H-indole-2,3-dione). Isatin was identified, during a high throughput screen of 10,000 compounds, as a potential scaffold for microtubule-destabilizing agents...
Constraining Higher Derivative Supergravity with Scattering Amplitudes
Yifan Wang; Xi Yin
2015-03-05T23:59:59.000Z
We study supersymmetry constraints on higher derivative deformations of type IIB supergravity by consideration of superamplitudes. Combining constraints of on-shell supervertices and basic results from string perturbation theory, we give a simple argument for the non-renormalization theorem of Green and Sethi, and some of its generalizations.
High speed point derivative microseismic detector
Uhl, James Eugene (Albuquerque, NM); Warpinski, Norman Raymond (Albuquerque, NM); Whetten, Ernest Blayne (Albuquerque, NM)
1998-01-01T23:59:59.000Z
A high speed microseismic event detector constructed in accordance with the present invention uses a point derivative comb to quickly and accurately detect microseismic events. Compressional and shear waves impinging upon microseismic receiver stations disposed to collect waves are converted into digital data and analyzed using a point derivative comb including assurance of quiet periods prior to declaration of microseismic events. If a sufficient number of quiet periods have passed, the square of a two point derivative of the incoming digital signal is compared to a trip level threshold exceeding the determined noise level to declare a valid trial event. The squaring of the derivative emphasizes the differences between noise and signal, and the valid event is preferably declared when the trip threshold has been exceeded over a temporal comb width to realize a comb over a given time period. Once a trial event has been declared, the event is verified through a spatial comb, which applies the temporal event comb to additional stations. The detector according to the present invention quickly and accurately detects initial compressional waves indicative of a microseismic event which typically exceed the ambient cultural noise level by a small amount, and distinguishes the waves from subsequent larger amplitude shear waves.
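The detection logic described in this abstract (squared two-point derivative, quiet-period arming, and a temporal comb against a trip threshold) can be sketched as follows; the quiet-run length, comb width, threshold factor, and sample data are illustrative assumptions, not values from the patent:

```python
# Sketch of the point-derivative event test described above: after a
# run of quiet samples, square a two-point derivative of the signal and
# declare a trial event when it exceeds a trip threshold over a comb of
# consecutive samples. Quiet-run length, comb width, and threshold
# factor are illustrative assumptions, not the patent's values.

def detect_event(signal, noise_level, quiet_len=8, comb_width=4, factor=9.0):
    """Return the sample index where a trial event is declared, or None."""
    trip = factor * noise_level ** 2       # threshold on squared derivative
    quiet = 0
    hits = 0
    for i in range(1, len(signal)):
        d2 = (signal[i] - signal[i - 1]) ** 2  # squared 2-point derivative
        if d2 <= trip:
            quiet += 1                     # count another quiet sample
            hits = 0
        elif quiet >= quiet_len:           # only arm after a quiet period
            hits += 1
            if hits >= comb_width:         # sustained over the comb width
                return i - comb_width + 1  # start of the trial event
    return None

# Quiet noise followed by a sharp compressional arrival at index 10.
sig = [0.0, 0.1, -0.1, 0.05, 0.0, -0.05, 0.1, 0.0, -0.1, 0.0,
       2.0, 4.5, 7.0, 9.0, 10.0]
print(detect_event(sig, noise_level=0.2))  # -> 10
```

Squaring the derivative, as the abstract notes, widens the gap between small noise fluctuations and a genuine arrival, since the ratio of signal to noise is squared as well.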
High speed point derivative microseismic detector
Uhl, J.E.; Warpinski, N.R.; Whetten, E.B.
1998-06-30T23:59:59.000Z
A high speed microseismic event detector constructed in accordance with the present invention uses a point derivative comb to quickly and accurately detect microseismic events. Compressional and shear waves impinging upon microseismic receiver stations disposed to collect waves are converted into digital data and analyzed using a point derivative comb including assurance of quiet periods prior to declaration of microseismic events. If a sufficient number of quiet periods have passed, the square of a two point derivative of the incoming digital signal is compared to a trip level threshold exceeding the determined noise level to declare a valid trial event. The squaring of the derivative emphasizes the differences between noise and signal, and the valid event is preferably declared when the trip threshold has been exceeded over a temporal comb width to realize a comb over a given time period. Once a trial event has been declared, the event is verified through a spatial comb, which applies the temporal event comb to additional stations. The detector according to the present invention quickly and accurately detects initial compressional waves indicative of a microseismic event which typically exceed the ambient cultural noise level by a small amount, and distinguishes the waves from subsequent larger amplitude shear waves. 9 figs.
Jiang, Boyang
2012-02-14T23:59:59.000Z
As the forecasting models become more sophisticated in their physics and possible depictions of the nearshore hydrodynamics, they also become increasingly sensitive to errors in the inputs. These input errors include: mis-specification of the input...
Phosphine oxide derivatives as hosts for blue phosphors: A joint...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Phosphine oxide derivatives as hosts for blue phosphors: A joint theoretical and experimental study of their electronic...
Exploring Hydrogen Generation from Biomass-Derived Sugar and...
Office of Environmental Management (EM)
Exploring Hydrogen Generation from Biomass-Derived Sugar and Sugar Alcohols to Reduce Costs
Low-Emissions Burner Technology using Biomass-Derived Liquid...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Low-Emissions Burner Technology using Biomass-Derived Liquid Fuels. This factsheet describes a project that...
Progress toward Biomass and Coal-Derived Syngas Warm Cleanup...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Progress toward Biomass and Coal-Derived Syngas Warm Cleanup: Proof-of-Concept Process Demonstration of Multicontaminant Removal
Interaction of coal-derived synthesis gas impurities with solid...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Interaction of coal-derived synthesis gas impurities with solid oxide fuel cell metallic components.
Bio-Derived Liquids to Hydrogen Distributed Reforming Targets...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Bio-Derived Liquids to Hydrogen Distributed Reforming Targets (Presentation). Presented at the 2007 Bio-Derived Liquids to Hydrogen Distributed Reforming...
Agenda for the Derived Liquids to Hydrogen Distributed Reforming...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Agenda for the Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG) Hydrogen Production Technical Team Research Review
Detailed Characterization of Lubricant-Derived Ash-Related Species...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Detailed Characterization of Lubricant-Derived Ash-Related Species in Diesel Exhaust and Aftertreatment Systems
BILIWG Meeting: High Pressure Steam Reforming of Bio-Derived...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
BILIWG Meeting: High Pressure Steam Reforming of Bio-Derived Liquids (Presentation). Presented at the 2007...
On the Importance of Considering Measurement Errors in a Fuzzy Logic System for Scientific Applications in Nuclear Fusion
Ability of stabilizer quantum error correction to protect itself from its own imperfection
Yuichiro Fujiwara
2014-12-02T23:59:59.000Z
The theory of stabilizer quantum error correction allows us to actively stabilize quantum states and simulate ideal quantum operations in a noisy environment. It is critical to correctly diagnose noise from its syndrome and nullify it accordingly. However, hardware that performs quantum error correction is itself inevitably imperfect in practice. Here, we show that stabilizer codes possess a built-in capability of correcting errors not only on quantum information but also on faulty syndromes extracted by themselves. Shor's syndrome extraction for fault-tolerant quantum computation is naturally improved. This opens a path to realizing the potential of stabilizer quantum error correction hidden within an innocent-looking choice of generators and stabilizer operators that have been deemed redundant.
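The idea of protecting against faulty syndromes can be illustrated with a classical analogue (this toy sketch is not the paper's construction): with a 3-bit repetition code, a single corrupted syndrome read can point at the wrong bit, but repeating the extraction and taking a majority vote recovers the correct diagnosis.

```python
# Toy classical analogue of tolerating faulty syndrome extraction:
# repeat the (possibly corrupted) syndrome measurement of a 3-bit
# repetition code and majority-vote before decoding. This sketch is an
# illustration, not the stabilizer construction from the paper.
from collections import Counter

def syndrome(bits):
    """Parity checks of the 3-bit repetition code."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(syn):
    """Map a syndrome to the bit to flip (None if no error)."""
    return {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syn]

def majority_syndrome(reads):
    """Majority vote over repeated, possibly faulty, syndrome reads."""
    return Counter(reads).most_common(1)[0][0]

data = [1, 0, 0]                         # bit 0 flipped
good = syndrome(data)                    # (1, 0): correctly points at bit 0
faulty = (good[0] ^ 1, good[1])          # one corrupted read: looks error-free
reads = [good, faulty, good]             # repeat the extraction three times
print(decode(majority_syndrome(reads)))  # -> 0
```

Decoding the faulty read alone would report no error at all; the repeated extraction is what makes the diagnosis robust, which is the spirit of the improvement to Shor's syndrome extraction mentioned above.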
Stability of error bounds for semi-infinite convex constraint systems
2010-01-07T23:59:59.000Z
stable if all its "small" perturbations admit a (local or global) error bound. ... where T is a compact, possibly infinite, Hausdorff space, f_t : R^n → R, t ∈ T, are given ...
The Effect of OCR Errors on Stylistic Text Classification Sterling Stuart Stein
Taghva and Coombs [1] found that a search engine could be made to work well over OCR documents
An Analysis of the Effect of Gaussian Error in Object Recognition
Sarachik, Karen Beth
1994-02-01T23:59:59.000Z
Object recognition is complicated by clutter, occlusion, and sensor error. Since pose hypotheses are based on image feature locations, these effects can lead to false negatives and positives. In a typical recognition ...
Methodology to Analyze the Sensitivity of Building Energy Consumption to HVAC System Sensor Error
Ma, Liang
2012-02-14T23:59:59.000Z
This thesis proposes a methodology for determining sensitivity of building energy consumption of HVAC systems to sensor error. It is based on a series of simulations of a generic building, the model for which is based on several typical input...
Error and uncertainty in estimates of Reynolds stress using ADCP in an energetic ocean state
Rapo, Mark Andrew.
2006-01-01T23:59:59.000Z
(cont.) To that end, the space-time correlations of the error, turbulence, and wave processes are developed and then utilized to find the extent to which the environmental and internal processing parameters contribute to ...
Estimating market power in homogeneous product markets using a composed error model
Orea, Luis; Steinbuks, Jevgenijs
2012-04-25T23:59:59.000Z
This study contributes to the literature on estimating market power in homogeneous product markets. We estimate a composed error model, where the stochastic part of the firm's pricing equation is formed by two random variables...
Absolute Percent Error Based Fitness Functions for Evolving Forecast Models. Andy Novobilski, Ph.D.
Fernandez, Thomas
...computing as a method of data mining is its intrinsic ability to drive model selection according to a mixed set of criteria. Based on natural selection, evolutionary computing utilizes evaluation of candidate solutions
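An absolute-percent-error fitness function for ranking candidate forecast models can be sketched as below; the MAPE formulation and the sample series are assumptions for illustration, since the paper's exact mix of criteria is not reproduced here:

```python
# Hypothetical sketch of an absolute-percent-error fitness function for
# evolutionary model selection: candidate forecasts are scored by mean
# absolute percent error (MAPE) and the fittest (lowest-error) model
# survives. The MAPE variant and data here are illustrative assumptions.

def mape(actual, forecast):
    """Mean absolute percent error; lower means fitter."""
    if len(actual) != len(forecast) or not actual:
        raise ValueError("series must be non-empty and of equal length")
    return 100.0 * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast)
    ) / len(actual)

def fittest(candidates, actual):
    """candidates: dict name -> forecast series. Returns the best name."""
    return min(candidates, key=lambda name: mape(actual, candidates[name]))

actual = [100.0, 110.0, 120.0]
candidates = {
    "naive": [100.0, 100.0, 100.0],  # flat forecast
    "trend": [101.0, 111.0, 121.0],  # tracks the trend closely
}
print(fittest(candidates, actual))  # -> trend
```

Because percent error normalizes by the actual value, this fitness measure compares models across series of very different magnitudes, which is one reason to prefer it over raw squared error in a mixed-criteria selection scheme.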
Locatelli, R.
A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model ...
Gilles Lachaud For detecting and correcting the inevitable errors which creep in during
Provence Aix-Marseille I, Université de
For detecting and correcting the inevitable errors which creep in during digital ... by the greatest possible number of discs of the same size without any overlaps. The words of a message
On the evaluation of human error probabilities for post-initiating events
Presley, Mary R
2006-01-01T23:59:59.000Z
Quantification of human error probabilities (HEPs) for the purpose of human reliability assessment (HRA) is very complex. Because of this complexity, the state of the art includes a variety of HRA models, each with its own ...
Combined wavelet video coding and error control for internet streaming and multicast
Chu, Tianli
2002-01-01T23:59:59.000Z
an integrated approach toward Internet video streaming and multicast based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder, by incorporating packetization and layered coding, to facilitate its integration...
Combined wavelet video coding and error control for internet streaming and multicast
Chu, Tianli
2002-01-01T23:59:59.000Z
In the past several years, advances in Internet video streaming have been tremendous. Originally designed without error protection, Receiver-driven layered multicast (RLM) has proved to be a very effective scheme for scalable video multicast. Though...
V-194: Citrix XenServer Memory Management Error Lets Local Administrat...
Broader source: Energy.gov (indexed) [DOE]
A local user on the guest operating system can exploit a memory management page reference counting error to gain access on the target host server. IMPACT: A local user on the guest operating system can obtain access on the target...
Effects of systematic phase errors on optimized quantum random-walk search algorithm
Yu-Chao Zhang; Wan-Su Bao; Xiang Wang; Xiang-Qun Fu
2015-01-09T23:59:59.000Z
This paper investigates how systematic errors in phase inversions affect the success rate and the number of iterations in the optimized quantum random-walk search algorithm. Through a geometric description of this algorithm, the model of the algorithm with phase errors is established and the relationship between the success rate of the algorithm, the database size, the number of iterations and the phase error is depicted. For a database of given size, we give both the maximum success rate of the algorithm and the required number of iterations when the algorithm is in the presence of phase errors. Through analysis and numerical simulations, we show that the optimized quantum random-walk search algorithm is more robust than Grover's algorithm.
Analysis of atmospheric delays and asymmetric positioning errors in the global positioning system
Materna, Kathryn
2014-01-01T23:59:59.000Z
Abstract Errors in modeling atmospheric delays are one of the limiting factors in the accuracy of GPS position determination. In regions with uneven topography, atmospheric delay phenomena can be especially complicated. ...
Efficient error correction for speech systems using constrained re-recognition
Yu, Gregory T
2008-01-01T23:59:59.000Z
Efficient error correction of recognition output is a major barrier in the adoption of speech interfaces. This thesis addresses this problem through a novel correction framework and user interface. The system uses constraints ...
Notes on Human Error Analysis and
calibration and testing as found in the US Licensee Event Reports. Available on request from Risø Library. Contents include: Judgement; "Human Error" - Definition and Classification; Reliability and Safety Analysis; Human Factors
Grid-search event location with non-Gaussian error models
Rodi, William L.
This study employs an event location algorithm based on grid search to investigate the possibility of improving seismic event location accuracy by using non-Gaussian error models. The primary departure from the Gaussian ...
Verification of Hurricane Irene, Isaac and Sandy's Storm Track, Intensity, and Wind Radii Errors
Miami, University of
Verification of Hurricane Irene, Isaac and Sandy's storm track, intensity ... National Hurricane Center (NHC). Forecasts of the track have steadily improved over the past ... intensity (MWND) and wind radii (WRAD) errors of Hurricane Irene (2011
Error Detection Techniques Applicable in an Architecture Framework and Design Methodology for
Ould Ahmedou, Mohameden
...environmental variations and external radiation causing so-called soft errors. Overall, these trends result in a severe ... in analogy to the IP library of the functional layer, shall eventually represent an autonomic IP library (AE
Error analysis of motion transmission mechanisms : design of a parabolic solar trough
Koniski, Cyril (Cyril A.)
2009-01-01T23:59:59.000Z
This thesis presents the error analysis pertaining to the design of an innovative solar trough for use in solar thermal energy generation fields. The research was a collaborative effort between Stacy Figueredo from Prof. ...
Conway, Barbara Tenney
2012-10-19T23:59:59.000Z
Contents: Nomenclature; Chapter I, Introduction; Theories of Spelling Development... levels in school. Stage theory of spelling development has provided a solid structure upon which spelling curricula can be designed, and spelling error analysis serves as the foundational screening component for planning of instruction (Bear...
Havinga, Paul J.M.
Abstract -- Since high error rates are inevitable in the wireless environment and energy consumption is a key issue for portable wireless network devices like PDAs, ... not energy mechanisms only, but the required extra energy consumed by the wireless interface should be incorporated...
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01T23:59:59.000Z
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Estimating rock properties in two phase petroleum reservoirs: an error analysis
Paul, Anthony Ian
1983-01-01T23:59:59.000Z
ESTIMATING ROCK PROPERTIES IN TWO PHASE PETROLEUM RESERVOIRS: AN ERROR ANALYSIS A Thesis by ANTHONY IAN PAUL Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER... OF SCIENCE December 1983 Major Subject: Chemical Engineering ESTIMATING ROCK PROPERTIES IN TWO PHASE PETROLEUM RESERVOIRS: AN ERROR ANALYSIS A Thesis by ANTHONY IAN PAUL Approved as to style and content by: A. T. Watson (Chairman of Committee) C. J...
Gavini, Shanti
2001-01-01T23:59:59.000Z
CODE ASSIGNMENT OF RATE COMPATIBLE PUNCTURED CONVOLUTIONAL CODES FOR UNEQUAL ERROR PROTECTION REQUIREMENTS A Thesis by SHANTI GAVINI Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the... requirements for the degree of MASTER OF SCIENCE May 2001 Major Subject: Electrical Engineering CODE ASSIGNMENT OF RATE COMPATIBLE PUNCTURED CONVOLUTIONAL CODES FOR UNEQUAL ERROR PROTECTION REQUIREMENTS A Thesis by SHANTI GAVINI Submitted to Texas A...
Analysis of error in using fractured gas well type curves for constant pressure production
Schkade, David Wayne
1987-01-01T23:59:59.000Z
ANALYSIS OF ERROR IN USING FRACTURED GAS WELL TYPE CURVES FOR CONSTANT PRESSURE PRODUCTION A Thesis by DAVID WAYNE SCHKADE Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree... of MASTER OF SCIENCE May 1987 Major Subject: Petroleum Engineering ANALYSIS OF ERROR IN USING FRACTURED GAS WELL TYPE CURVES FOR CONSTANT PRESSURE PRODUCTION A Thesis by DAVID WAYNE SCHKADE Approved as to style and content by: S. A. Holditch...
Progress in Understanding Error-field Physics in NSTX Spherical Torus Plasmas
E. Menard, R.E. Bell, D.A. Gates, S.P. Gerhardt, J.-K. Park, S.A. Sabbagh, J.W. Berkery, A. Egan, J. Kallman, S.M. Kaye, B. LeBlanc, Y.Q. Liu, A. Sontag, D. Swanson, H. Yuh, W. Zhu and the NSTX Research Team
2010-05-19T23:59:59.000Z
The low aspect ratio, low magnetic field, and wide range of plasma beta of NSTX plasmas provide new insight into the origins and effects of magnetic field errors. An extensive array of magnetic sensors has been used to analyze error fields, to measure error field amplification, and to detect resistive wall modes in real time. The measured normalized error-field threshold for the onset of locked modes shows a linear scaling with plasma density, a weak to inverse dependence on toroidal field, and a positive scaling with magnetic shear. These results extrapolate to a favorable error field threshold for ITER. For these low-beta locked-mode plasmas, perturbed equilibrium calculations find that the plasma response must be included to explain the empirically determined optimal correction of NSTX error fields. In high-beta NSTX plasmas exceeding the n=1 no-wall stability limit where the RWM is stabilized by plasma rotation, active suppression of n=1 amplified error fields and the correction of recently discovered intrinsic n=3 error fields have led to sustained high rotation and record durations free of low-frequency core MHD activity. For sustained rotational stabilization of the n=1 RWM, both the rotation threshold and magnitude of the amplification are important. At fixed normalized dissipation, kinetic damping models predict rotation thresholds for RWM stabilization to scale nearly linearly with particle orbit frequency. Studies for NSTX find that orbit frequencies computed in general geometry can deviate significantly from those computed in the high aspect ratio and circular plasma cross-section limit, and these differences can strongly influence the predicted RWM stability. The measured and predicted RWM stability is found to be very sensitive to the E × B rotation profile near the plasma edge, and the measured critical rotation for the RWM is approximately a factor of two higher than predicted by the MARS-F code using the semi-kinetic damping model.
Design consistency and driver error as reflected by driver workload and accident rates
Wooldridge, Mark Douglas
1992-01-01T23:59:59.000Z
DESIGN CONSISTENCY AND DRIVER ERROR AS REFLECTED BY DRIVER WORKLOAD AND ACCIDENT RATES A Thesis by MARK DOUGLAS WOOLDRIDGE Approved as to style and content by: Daniel B. Fambro (Chair of Committee) Raymond A. Krammes (Member) Olga J.... Pendleton (Member) James T. P. Yao (Head of Department) May 1992 ABSTRACT Design Consistency and Driver Error as Reflected by Driver Workload and Accident Rates (May 1992) Mark Douglas Wooldridge, B. S. , Texas A&M University Chair of Advisory...
Kaeli, David R.
Case Study: Soft Error Rate Analysis in Storage Systems. Brian Mullins, Hossein Asadi, Mehdi B... Soft errors due to cosmic particles are a growing reliability threat for VLSI systems. In this paper we analyze the soft error vulnerability of FPGAs used in storage systems. Since the reliability...
Kaeli, David R.
Case Study: Soft Error Rate Analysis in Storage Systems. Brian Mullins, Hossein Asadi, Mehdi B... the soft error vulnerability of FPGAs used in storage systems. Since the reliability requirements of such systems... play a critical role in overall system reliability. We have validated soft error projections...
Confirmation of standard error analysis techniques applied to EXAFS using simulations
Booth, Corwin H; Hu, Yung-Jin
2009-12-14T23:59:59.000Z
Systematic uncertainties, such as those in calculated backscattering amplitudes, crystal glitches, etc., not only limit the ultimate accuracy of the EXAFS technique, but also affect the covariance matrix representation of real parameter errors in typical fitting routines. Despite major advances in EXAFS analysis and in understanding all potential uncertainties, these methods are not routinely applied by all EXAFS users. Consequently, reported parameter errors are not reliable in many EXAFS studies in the literature. This situation has made many EXAFS practitioners leery of conventional error analysis applied to EXAFS data. However, conventional error analysis, if properly applied, can teach us more about our data, and even about the power and limitations of the EXAFS technique. Here, we describe the proper application of conventional error analysis to r-space fitting to EXAFS data. Using simulations, we demonstrate the veracity of this analysis by, for instance, showing that the number of independent data points from Stern's rule is balanced by the degrees of freedom obtained from a χ2 statistical analysis. By applying such analysis to real data, we determine the quantitative effect of systematic errors. In short, this study is intended to remind the EXAFS community about the role of fundamental noise distributions in interpreting our final results.
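The conventional covariance-based error analysis this abstract defends can be sketched generically. Below, a straight-line fit to synthetic noisy data takes parameter errors from the scaled covariance matrix s^2 (A^T A)^{-1}; the data, model, and noise level are illustrative, not EXAFS quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "data": straight line plus Gaussian noise; fit y = a + b*x.
x = np.linspace(0.0, 10.0, 50)
sigma = 0.5
y = 2.0 + 0.3 * x + rng.normal(0.0, sigma, x.size)

# Least-squares via the design matrix. The parameter covariance is
# s^2 * (A^T A)^{-1}, with s^2 the residual variance per degree of freedom.
A = np.column_stack([np.ones_like(x), x])
params, res, *_ = np.linalg.lstsq(A, y, rcond=None)
dof = x.size - A.shape[1]          # degrees of freedom of the fit
s2 = res[0] / dof                  # estimated noise variance
cov = s2 * np.linalg.inv(A.T @ A)
errs = np.sqrt(np.diag(cov))       # 1-sigma parameter errors

print(params, errs)
```

The same covariance-matrix recipe is what typical fitting routines report; the abstract's point is that it is trustworthy only when the noise model and degrees of freedom are handled properly.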
A derivative standard for polarimeter calibration
Mulhollan, G.; Clendenin, J.; Saez, P. [and others
1996-10-01T23:59:59.000Z
A long-standing problem in polarized electron physics is the lack of a traceable standard for calibrating electron spin polarimeters. While several polarimeters are absolutely calibrated to better than 2%, the typical instrument has an inherent accuracy no better than 10%. This variability among polarimeters makes it difficult to compare advances in polarized electron sources between laboratories. The authors have undertaken an effort to establish 100 nm thick molecular beam epitaxy grown GaAs(110) as a material which may be used as a derivative standard for calibrating systems possessing a solid state polarized electron source. The near-bandgap spin polarization of photoelectrons emitted from this material has been characterized for a variety of conditions and several laboratories which possess well calibrated polarimeters have measured the photoelectron polarization of cathodes cut from a common wafer. Despite instrumentation differences, the spread in the measurements is sufficiently small that this material may be used as a derivative calibration standard.
Derivation of evolutionary payoffs from observable behavior
Feigel, Alexander; Engel, Assaf
2008-01-01T23:59:59.000Z
Interpretation of animal behavior, especially as cooperative or selfish, is a challenge for evolutionary theory. Strategy of a competition should follow from corresponding Darwinian payoffs for the available behavioral options. The payoffs and decision making processes, however, are difficult to observe and quantify. Here we present a general method for the derivation of evolutionary payoffs from observable statistics of interactions. The method is applied to combat of male bowl and doily spiders, to predator inspection by sticklebacks and to territorial defense by lions, demonstrating animal behavior as a new type of game theoretical equilibrium. Games animals play may be derived unequivocally from their observable behavior, the reconstruction, however, can be subjected to fundamental limitations due to our inability to observe all information exchange mechanisms (communication).
Enhanced Coset Symmetries and Higher Derivative Corrections
Neil Lambert; Peter West
2006-08-17T23:59:59.000Z
After dimensional reduction to three dimensions, the lowest order effective actions for pure gravity, M-theory and the Bosonic string admit an enhanced symmetry group. In this paper we initiate study of how this enhancement is affected by the inclusion of higher derivative terms. In particular we show that the coefficients of the scalar fields associated to the Cartan subalgebra are given by weights of the enhanced symmetry group.
Triamine chelants, their derivatives, complexes and conjugates
Troutner, D.E.; John, C.S.; Pillai, M.R.A.
1995-03-07T23:59:59.000Z
A group of functionalized triamine chelants and their derivatives that form complexes with radioactive metal ions are disclosed. The complexes can be covalently attached to a protein or an antibody or antibody fragment and used for therapeutic and/or diagnostic purposes. The chelants are of the formula, as shown in the accompanying diagrams, wherein n, m, R, R{sup 1}, R{sup 2} and L are defined in the specification.
Equivalence of Conventionally-Derived and Parthenote-Derived Human Embryonic Stem Cells
2011-01-01T23:59:59.000Z
Equivalence of Conventionally-Derived and Parthenote-Derived Human Embryonic Stem Cells (Volume 6 | Issue 1 | e14499)... Figure 4... to determine points of equivalence and differences between hESC and phESC
Transformation of spatial and perturbation derivatives of travel time
Cerveny, Vlastislav
Transformation of spatial and perturbation derivatives of travel time at a general interface... and perturbation parameters. We derive the explicit equations for transforming these traveltime derivatives... Hamiltonian function and are applicable to the transformation of traveltime derivatives in both isotropic...
A new method for deriving the stellar birth function of resolved stellar populations
Gennaro, Mario; Brown, Tom; Gordon, Karl
2015-01-01T23:59:59.000Z
We present a new method for deriving the stellar birth function (SBF) of resolved stellar populations. The SBF (stars born per unit mass, time, and metallicity) is the combination of the initial mass function (IMF), the star-formation history (SFH), and the metallicity distribution function (MDF). The framework of our analysis is that of Poisson Point Processes (PPPs), a class of statistical models suitable when dealing with points (stars) in a multidimensional space (the measurement space of multiple photometric bands). The theory of PPPs easily accommodates the modeling of measurement errors as well as that of incompleteness. Compared to most of the tools used to study resolved stellar populations, our method avoids binning stars in the color-magnitude diagram and uses the entirety of the information (i.e., the whole likelihood function) for each data point; the proper combination of the individual likelihoods allows the computation of the posterior probability for the global population parameters. This inc...
Detecting bit-flip errors in a logical qubit using stabilizer measurements
D. Ristè; S. Poletto; M. -Z. Huang; A. Bruno; V. Vesterinen; O. -P. Saira; L. DiCarlo
2014-11-20T23:59:59.000Z
Quantum data is susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction (QEC) to actively protect against both. In the smallest QEC codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Experimental demonstrations of QEC to date, using nuclear magnetic resonance, trapped ions, photons, superconducting qubits, and NV centers in diamond, have circumvented stabilizers at the cost of decoding at the end of a QEC cycle. This decoding leaves the quantum information vulnerable to physical qubit errors until re-encoding, violating a basic requirement for fault tolerance. Using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. We construct these stabilizers as parallelized indirect measurements using ancillary qubits, and evidence their non-demolition character by generating three-qubit entanglement from superposition states. We demonstrate stabilizer-based quantum error detection (QED) by subjecting a logical qubit to coherent and incoherent bit-flip errors on its constituent physical qubits. While increased physical qubit coherence times and shorter QED blocks are required to actively safeguard quantum information, this demonstration is a critical step toward larger codes based on multiple parity measurements.
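The stabilizer logic of the three-qubit repetition code has a simple classical analogue: two parity checks (playing the role of the Z1Z2 and Z2Z3 stabilizer measurements) locate a single bit flip without ever reading the encoded bit itself. A minimal classical sketch, not the superconducting experiment:

```python
# Classical analogue of the three-bit repetition code: two parity
# checks locate a single bit flip; the encoded value is never read.

def encode(bit):
    return [bit, bit, bit]

def syndrome(code):
    # (parity of bits 0,1 ; parity of bits 1,2)
    return (code[0] ^ code[1], code[1] ^ code[2])

# Syndrome -> index of the flipped bit (None means no error detected).
LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(code):
    idx = LOOKUP[syndrome(code)]
    if idx is not None:
        code[idx] ^= 1
    return code

# Any single bit flip is located and repaired.
for flip in range(3):
    word = encode(1)
    word[flip] ^= 1            # inject one bit-flip error
    assert correct(word) == [1, 1, 1]
```

In the quantum case the parities are measured indirectly via ancilla qubits so that the superposition is preserved; here they are just XORs.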
Huang, Weidong
2011-01-01T23:59:59.000Z
This paper presents a general equation for calculating the standard deviation of the reflected-ray error from the optical error using geometric optics, applies it to eight kinds of concentrated solar reflector, and provides typical results. The results indicate that the slope errors in the two directions are transferred to any one direction of the focused ray when the incidence angle is greater than 0 for solar trough and heliostat reflectors; for the point-focus Fresnel lens, the point-focus parabolic glass mirror, and the line-focus parabolic glass mirror, the error-transfer coefficient from the optical error to the focused ray increases with the rim angle; for the TIR-R concentrator it decreases; and for a glass heliostat it depends on the incidence angle and azimuth of the reflecting point. Keywords: optical error, standard deviation, reflected ray error, concentrated solar collector
S. D. Bloom; D. A. Dale; R. Cool; K. Dupczak; C. Miller; A. Haugsjaa; C. Peters; M. Tornikoski; P. Wallace; M. Pierce
2004-04-02T23:59:59.000Z
We present the most recent results of an optical survey of the position error contours ("error boxes") of unidentified high energy gamma-ray sources.
Derivation of an Applied Nonlinear Schroedinger Equation.
Pitts, Todd Alan; Laine, Mark Richard; Schwarz, Jens; Rambo, Patrick K.; Karelitz, David B.
2015-01-01T23:59:59.000Z
We derive from first principles a mathematical physics model useful for understanding nonlinear optical propagation (including filamentation). All assumptions necessary for the development are clearly explained. We include the Kerr effect, Raman scattering, and ionization (as well as linear and nonlinear shock, diffraction and dispersion). We explain the phenomenological sub-models and each assumption required to arrive at a complete and consistent theoretical description. The development includes the relationship between shock and ionization and demonstrates why inclusion of Drude model impedance effects alters the nature of the shock operator. Unclassified Unlimited Release
Deriving time from the geometry of space
James M. Chappell; John G. Hartnett; Nicolangelo Iannella; Derek Abbott
2015-04-08T23:59:59.000Z
The Minkowski formulation of special relativity reveals the essential four-dimensional nature of spacetime, consisting of three space and one time dimension. Recognizing its fundamental importance, a variety of arguments have been proposed over the years attempting to derive the Minkowski spacetime structure from fundamental physical principles. In this paper we illustrate how Minkowski spacetime follows naturally from the geometric properties of three dimensional Clifford space modeled with multivectors. This approach also generalizes spacetime to an eight dimensional space as well as doubling the size of the Lorentz group. This description of spacetime also provides a new geometrical interpretation of the nature of time.
Inflationary Universe in Higher Derivative Induced Gravity
W. F. Kao
2000-06-27T23:59:59.000Z
In an induced-gravity model, the stability condition of an inflationary slow-rollover solution is shown to be $\phi_0 \partial_{\phi_0}V(\phi_0)=4V(\phi_0)$. The presence of higher derivative terms will, however, act against the stability of this expanding solution unless further constraints on the field parameters are imposed. We find that these models will acquire a non-vanishing cosmological constant at the end of inflation. Some models are analyzed for their implications for the early universe.
Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL
2015-01-01T23:59:59.000Z
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
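As a generic illustration of adding error-correction coding to the small encrypted portion, the sketch below implements textbook Hamming(7,4) single-error correction. The matrices and test data are standard; nothing here reproduces the paper's actual signaling-encryption parameters.

```python
import numpy as np

# Hamming(7,4): generator G = [I | P] and parity-check H = [P^T | I]
# over GF(2); any single-bit error is corrected.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(nibble):
    """4 data bits -> 7-bit codeword."""
    return (np.array(nibble) @ G) % 2

def decode(word):
    """Correct up to one bit error, return the 4 data bits."""
    s = (H @ word) % 2
    if s.any():
        # the syndrome equals the column of H at the error position
        for i in range(7):
            if np.array_equal(H[:, i], s):
                word = word.copy()
                word[i] ^= 1
                break
    return word[:4]

cw = encode([1, 0, 1, 1])
cw[2] ^= 1                     # single-bit channel error
assert list(decode(cw)) == [1, 0, 1, 1]
```

In the paper's scheme a stronger code (Hamming or convolutional) would be applied only to the encrypted fraction of the frame, keeping the rate overhead small.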
TIM3 Front-Panel 1. VE: Flash for VME bus access error OR On for Geog-Addr error (i.e. wrong slot).
University College London
-Busy (note: in Stand-Alone Mode TIM is normally busy). 4. TB: shows status of TIM-BusyOut. All LEDs (apart from power supplies) have a 60 ms pulse stretcher for better visibility. Front-panel labels: -5, -12, OR, VE, SA, SC, TB, CA, BR, SP, +5, +3, Error. Legend: -5V, -12V power on; Stand-Alone Mode enabled; Stand-Alone clock present; TIM BusyOut; ROD Busy's (1...
Kalman-predictive-proportional-integral-derivative (KPPID)
Fluerasu, A.; Sutton, M. (McGill)
2004-12-17T23:59:59.000Z
With third generation synchrotron X-ray sources, it is possible to acquire detailed structural information about the system under study with time resolution orders of magnitude faster than was possible a few years ago. These advances have generated many new challenges for changing and controlling the state of the system on very short time scales, in a uniform and controlled manner. For our particular X-ray experiments on crystallization or order-disorder phase transitions in metallic alloys, we need to change the sample temperature by hundreds of degrees as fast as possible while avoiding over or under shooting. To achieve this, we designed and implemented a computer-controlled temperature tracking system which combines standard Proportional-Integral-Derivative (PID) feedback, thermal modeling and finite difference thermal calculations (feedforward), and Kalman filtering of the temperature readings in order to reduce the noise. The resulting Kalman-Predictive-Proportional-Integral-Derivative (KPPID) algorithm allows us to obtain accurate control, to minimize the response time and to avoid over/under shooting, even in systems with inherently noisy temperature readings and time delays. The KPPID temperature controller was successfully implemented at the Advanced Photon Source at Argonne National Laboratories and was used to perform coherent and time-resolved X-ray diffraction experiments.
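A stripped-down version of the KPPID idea can be sketched as a PID loop acting on a Kalman-filtered temperature reading. The plant model, gains, and noise levels below are all assumed for illustration, and the feedforward (finite-difference thermal model) term of the full KPPID algorithm is omitted.

```python
import random

# Toy KPPID-style loop: a scalar Kalman filter smooths a noisy
# temperature reading; PID acts on the filtered estimate.
# All parameters are illustrative, not from the paper.

def simulate(setpoint=500.0, steps=600, dt=0.1):
    random.seed(1)
    T, heater = 20.0, 0.0          # true temperature, heater drive
    est, P = 20.0, 1.0             # Kalman estimate and its variance
    q, r = 0.5, 4.0                # process / measurement noise variances
    kp, ki, kd = 1.0, 0.25, 0.05   # PID gains (assumed)
    integral, prev_err = 0.0, 0.0
    for _ in range(steps):
        # first-order plant: heating minus ambient losses
        T += dt * (heater - 0.05 * (T - 20.0))
        reading = T + random.gauss(0.0, 2.0)   # noisy sensor
        # scalar Kalman filter with a random-walk temperature model
        P += q
        K = P / (P + r)
        est += K * (reading - est)
        P *= 1.0 - K
        # PID acting on the *filtered* estimate
        err = setpoint - est
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        heater = max(0.0, kp * err + ki * integral + kd * deriv)
    return T

final = simulate()
```

The point of the filtering stage is that the derivative term, which amplifies sensor noise, acts on the smoothed estimate rather than the raw reading.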
HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD
Harold S. Blackman; David I. Gertman; Ronald L. Boring
2008-09-01T23:59:59.000Z
This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks, and incorporating performance shaping factor multipliers upon those nominal error rates.
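The HEP calculation described here can be sketched with the usual SPAR-H-style adjustment, in which a nominal error rate is scaled by the product of PSF multipliers and bounded below 1. The specific nominal rates and multipliers below are illustrative assumptions, not values from this paper.

```python
def spar_h_hep(nominal, psf_multipliers):
    """Nominal HEP scaled by the composite PSF multiplier; the
    adjustment formula keeps the result below 1 (SPAR-H style)."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    return nominal * composite / (nominal * (composite - 1.0) + 1.0)

# Example: action-type task (assumed nominal HEP 1e-3) with two
# degrading PSFs whose multipliers are 10 and 5.
hep = spar_h_hep(1e-3, [10, 5])
print(f"adjusted HEP = {hep:.4f}")
```

Note that with no PSFs the function returns the nominal rate unchanged, and even a very large composite multiplier cannot push the probability above 1.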
Fade-resistant forward error correction method for free-space optical communications systems
Johnson, Gary W. (Livermore, CA); Dowla, Farid U. (Castro Valley, CA); Ruggiero, Anthony J. (Livermore, CA)
2007-10-02T23:59:59.000Z
Free-space optical (FSO) laser communication systems offer exceptionally wide-bandwidth, secure connections between platforms that cannot otherwise be connected via physical means such as optical fiber or cable. However, FSO links are subject to strong channel fading due to atmospheric turbulence and beam pointing errors, limiting practical performance and reliability. We have developed a fade-tolerant architecture based on forward error correcting codes (FECs) combined with delayed, redundant, sub-channels. This redundancy is made feasible through dense wavelength division multiplexing (WDM) and/or high-order M-ary modulation. Experiments and simulations show that error-free communications is feasible even when faced with fades that are tens of milliseconds long. We describe plans for practical implementation of a complete system operating at 2.5 Gbps.
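The delayed-redundant-sub-channel idea can be modeled very simply: the same symbol stream is sent twice, the second copy delayed, so a fade of limited duration cannot erase both copies of any symbol. A toy erasure model (symbol counts, fade lengths, and delays are illustrative):

```python
# Two copies of the same stream; channel 2 is delayed by `delay`
# symbols, so a fade window in *time* hits different spans of the
# data on the two copies. A symbol is recovered if either copy survives.

def transmit(data, fade_start, fade_len, delay):
    n = len(data)
    # channel 1: symbol i sent at time i
    ch1 = [None if fade_start <= i < fade_start + fade_len else data[i]
           for i in range(n)]
    # channel 2: symbol i sent at time i + delay
    ch2 = [None if fade_start <= i + delay < fade_start + fade_len else data[i]
           for i in range(n)]
    return [a if a is not None else b for a, b in zip(ch1, ch2)]

data = list(range(100))
# delay longer than the fade -> every symbol survives on some copy
assert transmit(data, 40, 10, delay=20) == data
# delay shorter than the fade -> some symbols are lost on both copies
assert None in transmit(data, 40, 10, delay=5)
```

In the paper the surviving copy is additionally FEC-coded; here the model only shows why the delay must exceed the fade duration.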
Error correcting code with chip kill capability and power saving enhancement
Gara, Alan G. (Mount Kisco, NY); Chen, Dong (Croton On Husdon, NY); Coteus, Paul W. (Yorktown Heights, NY); Flynn, William T. (Rochester, MN); Marcella, James A. (Rochester, MN); Takken, Todd (Brewster, NY); Trager, Barry M. (Yorktown Heights, NY); Winograd, Shmuel (Scarsdale, NY)
2011-08-30T23:59:59.000Z
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
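The syndrome test described here (all-zero syndromes mean no error; a non-zero syndrome locates and corrects a single symbol error) can be illustrated with a minimal checksum code over a prime field. This is a generic sketch, not the patented chip-kill code.

```python
# Minimal single-symbol-error correcting code over GF(P): two check
# symbols c0 = sum(d_i), c1 = sum(i*d_i). For a single error e at
# position p the syndromes are s0 = e and s1 = p*e, so p = s1/s0.
P = 257  # prime just above the byte range, so data symbols are 0..255

def check_symbols(data):
    s0 = sum(data) % P
    s1 = sum(i * d for i, d in enumerate(data, start=1)) % P
    return s0, s1

def correct(data, checks):
    c0, c1 = checks
    s0 = (sum(data) - c0) % P
    s1 = (sum(i * d for i, d in enumerate(data, start=1)) - c1) % P
    if s0 == 0 and s1 == 0:
        return data                      # all syndromes zero: no error
    pos = s1 * pow(s0, -1, P) % P        # error location = s1 / s0
    fixed = list(data)
    fixed[pos - 1] = (fixed[pos - 1] - s0) % P
    return fixed

data = [10, 20, 30, 40]
checks = check_symbols(data)
corrupted = data.copy()
corrupted[2] = 99                        # single-symbol error
assert correct(corrupted, checks) == data
```

Real chip-kill codes work over GF(2^m) with symbols aligned to memory chips, so a whole-chip failure is one correctable symbol error; the syndrome logic is the same shape.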
Hodge, B. M.; Lew, D.; Milligan, M.
2013-01-01T23:59:59.000Z
Load forecasting in the day-ahead timescale is a critical aspect of power system operations that is used in the unit commitment process. It is also an important factor in renewable energy integration studies, where the combination of load and wind or solar forecasting techniques creates the net load uncertainty that must be managed by the economic dispatch process or with suitable reserves. An understanding of the load forecasting errors that may be expected in this process can lead to better decisions about the amount of reserves necessary to compensate for errors. In this work, we performed a statistical analysis of the day-ahead (and two-day-ahead) load forecasting errors observed in two independent system operators for a one-year period. Comparisons were made with the normal distribution commonly assumed in power system operation simulations used for renewable power integration studies. Further analysis identified time periods when the load is more likely to be under- or overforecast.
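A basic version of the paper's statistical comparison can be sketched by testing forecast errors against the Gaussian assumption, for example via sample kurtosis (3 for a normal distribution). The synthetic error sample below, with occasional large misses, merely stands in for real ISO data.

```python
import math
import random

# Synthetic day-ahead forecast errors: mostly small, occasionally large.
# Real ISO error series would replace `errors`.
random.seed(7)
errors = [random.gauss(0.0, 1.0) if random.random() < 0.9
          else random.gauss(0.0, 4.0)          # occasional large misses
          for _ in range(20000)]

n = len(errors)
mean = sum(errors) / n
var = sum((e - mean) ** 2 for e in errors) / n
kurt = sum((e - mean) ** 4 for e in errors) / n / var ** 2  # Gaussian: 3

print(f"mean={mean:.3f} std={math.sqrt(var):.3f} kurtosis={kurt:.2f}")
```

A kurtosis well above 3 indicates heavier tails than the normal distribution, i.e. extreme forecast misses are more frequent than the common Gaussian assumption implies, which matters when sizing reserves.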
Error Channels and the Threshold for Fault-tolerant Quantum Computation
Bryan Eastin
2007-10-15T23:59:59.000Z
This dissertation treats the topics of threshold calculation, ancilla construction, and non-standard error models. Chapter 2 introduces background material ranging from quantum mechanics to classical coding to thresholds for quantum computation. In Chapter 3 numerical and analytical means are used to generate estimates of and bounds on the threshold given an error model described by a restricted stochastic Pauli channel. Chapter 4 develops a simple, flexible means of estimating the threshold and applies it to some cases of interest. Finally, a novel method of ancilla construction is proposed in Chapter 5, and the difficulties associated with implementing it are discussed.
Low delay and area efficient soft error correction in arbitration logic
Sugawara, Yutaka
2013-09-10T23:59:59.000Z
There is provided an arbitration logic device for controlling an access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from a plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is the soft error in the winner requestor's information.
Lobach, Iryna
2009-05-15T23:59:59.000Z
) are binary and probability of disease is known. Environmental variable is measured with error with misclassification probabilities pr(W = 0 | X = 1) = 0.20 and pr(W = 1 | X = 0) = 0.10. The results are based on a simulation study with 500 replications for 1000... variant (G), and environmental covariate (X) are binary and probability of disease is unknown. Environmental variable is measured with error with misclassification probabilities pr(W = 0 | X = 1) = 0.20 and pr(W = 1 | X = 0) = 0.10. The results are based on a...
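The misclassification model quoted in this record can be simulated directly to confirm the stated error rates: draw a binary exposure X and an error-prone surrogate W with pr(W=0 | X=1) = 0.20 and pr(W=1 | X=0) = 0.10 (sample size here is illustrative).

```python
import random

# Simulate the binary measurement-error model from the record.
random.seed(42)

def observe(x):
    """Error-prone surrogate W for the true exposure X."""
    if x == 1:
        return 0 if random.random() < 0.20 else 1  # pr(W=0 | X=1) = 0.20
    return 1 if random.random() < 0.10 else 0      # pr(W=1 | X=0) = 0.10

xs = [random.randint(0, 1) for _ in range(100000)]
ws = [observe(x) for x in xs]

# Empirical misclassification rates should match the model.
fn = sum(1 for x, w in zip(xs, ws) if x == 1 and w == 0) / sum(xs)
fp = sum(1 for x, w in zip(xs, ws) if x == 0 and w == 1) / (len(xs) - sum(xs))
print(f"pr(W=0|X=1) ~ {fn:.3f}, pr(W=1|X=0) ~ {fp:.3f}")
```

Feeding W instead of X into a disease model biases effect estimates toward the null, which is why such studies quantify these rates explicitly.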
Estimating rock properties in two phase petroleum reservoirs: an error analysis
Paul, Anthony Ian
1983-01-01T23:59:59.000Z
by the same amount from the true porosity value. In Fig. 5, the objective function is slightly better represented by the series approximation in 1/4. A Monte Carlo study was performed using the same history matching conditions as for the permeability... estimates were used in a Monte Carlo study to calculate the predicted well values after the history matching period. The errors in the rock property estimates increase rapidly with an increasing number of unknowns. In many cases, even when large errors...
Sequence decoding in the presence of timing errors for NRZ signaling
Kinard, Barbara Kay
1990-01-01T23:59:59.000Z
SEQUENCE DECODING IN THE PRESENCE OF TIMING ERRORS FOR NRZ SIGNALING A Thesis by BARBARA KAY KINARD Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER... OF SCIENCE August 1990 Major Subject: Electrical Engineering SEQUENCE DECODING IN THE PRESENCE OF TIMING ERRORS FOR NRZ SIGNALING A Thesis by BARBARA KAY KINARD Approved as to style and content by: Costas N. Georghiades (Chair of Committee)...
Generalized Holographic Superconductors with Higher Derivative Couplings
Anshuman Dey; Subhash Mahapatra; Tapobrata Sarkar
2014-06-13T23:59:59.000Z
We introduce and study generalized holographic superconductors with higher derivative couplings between the field strength tensor and a complex scalar field, in four dimensional AdS black hole backgrounds. We study this theory in the probe limit, as well as with backreaction. There are multiple tuning parameters in the theory, and with two non-zero parameters, we show that the theory has a rich phase structure, and in particular, the transition from the normal to the superconducting phase can be tuned to be of first order or of second order within a window of one of these. This is established numerically as well as by computing the free energy of the boundary theory. We further present analytical results for the critical temperature of the model, and compare these with numerical analysis. Optical properties of this system are also studied numerically in the probe limit, and our results show evidence for negative refraction at low frequencies.
Chaotic inflation in higher derivative gravity theories
Myrzakul, Shynaray; Sebastiani, Lorenzo
2015-01-01T23:59:59.000Z
In this paper, we investigate chaotic inflation from scalar field subjected to potential in the framework of $f(R^2, P, Q)$-gravity, where we add a correction to Einstein's gravity based on a function of the square of the Ricci scalar $R^2$, the contraction of the Ricci tensor $P$, and the contraction of the Riemann tensor $Q$. The Gauss-Bonnet case is also discussed. We give the general formalism of inflation, deriving the slow-roll parameters, the $e$-folds number, and the spectral indexes. Several explicit examples are furnished, namely we will consider the cases of massive scalar field and scalar field with quartic potential and some power-law function of the curvature invariants under investigation in the gravitational action of the theory. Viable inflation according with observations is analyzed.
Scrap tire derived fuel: Markets and issues
Serumgard, J. [Scrap Tire Management Council, Washington, DC (United States)
1997-12-01T23:59:59.000Z
More than 250 million scrap tires are generated annually in the United States and their proper management continues to be a solid waste management concern. Sound markets for scrap tires are growing and are consuming an ever increasing percentage of annual generation, with market capacity reaching more than 75% of annual generation in 1996. Of the three major markets - fuel, civil engineering applications, and ground rubber markets - the use of tires as a fuel is by far the largest market. The major fuel users include cement kilns, pulp and paper mills, electrical generation facilities, and some industrial facilities. Current issues that may impact the tire fuel market include continued public concern over the use of tires as fuels, the new EPA PM 2.5 standard, possible additional Clean Air emissions standards, access to adequate supplies of scrap tires, quality of processed tire derived fuel, and the possibility of creating a commodity market through the development of ASTM TDF standards.
Cationically polymerizable monomers derived from renewable sources
Crivello, J.V.
1991-10-01T23:59:59.000Z
The objective of this project is to use products obtained from renewable plant sources as monomers for the direct production of polymers suitable for a wide range of plastic applications. This report describes progress in the synthesis and polymerization of cationically polymerizable monomers and oligomers derived from botanical oils, terpenes, natural rubber, and lignin. Nine different botanical oils were obtained from various sources, characterized, and then epoxidized. Their photopolymerization was carried out using cationic photoinitiators, and the mechanical properties of the resulting polymers were characterized. Preliminary biodegradation studies are being conducted on the photopolymerized films from several of these oils. Limonene was cationically polymerized to give dimers, and the dimers were epoxidized to yield highly reactive monomers suitable for coatings, inks, and adhesives. The direct phase transfer epoxidation of squalene and natural rubber was carried out. The modified rubbers undergo facile photocrosslinking in the presence of onium salts to give crosslinked elastomers. 12 refs., 3 figs., 10 tabs.
Higher Derivative Corrections to O-Plane Actions
Wang, Zhao
2014-11-17T23:59:59.000Z
Higher derivative corrections to effective actions are very important and of great interest in string theory. The aim of this dissertation is to develop a method to constrain the higher derivative corrections to O-plane ...
Stability of Biomass-derived Black Carbon in Soils . | EMSL
Stability of Biomass-derived Black Carbon in Soils. Abstract: Black carbon (BC) may play an important role in the global C...
Managing Derived Data in the Gaea Scientific DBMS
Ward, Matthew
Nabil I. Hachem, Ke Qiu, Michael Gennert. Describes managing scientific data derivation histories as implemented in the Gaea scientific database management system.
Quantitative Analysis of Human Salivary Gland-Derived Intact Proteome Using Top-Down Mass Spectrometry
Quantitative Analysis of Human Salivary Gland-Derived Intact Proteome Using Top-Down Mass Spectrometry.
Mjolsness, Eric
Symbolic Neural Networks Derived from Stochastic Grammar Domain Models: neural network architectures with some of the expressive power of a semantic network and also some of the pattern recognition and learning capabilities of more conventional neural networks.
Cooper, S.E. [Science Application International Corp., Reston, VA (United States); Wreathall, J. [John Wreathall & Co., Dublin, OH (United States); Thompson, C.M., Drouin, M. [Nuclear Regulatory Commission, Washington, DC (United States); Bley, D.C. [Buttonwood Consulting, Inc., Oakton, VA (United States)
1996-10-01T23:59:59.000Z
This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, "A Technique for Human Error Analysis" (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.
"DERIVATION" OF THE DE BROGLIE RELATION FROM THE DOPPLER EFFECT
Crawford, Frank S.
2013-01-01T23:59:59.000Z
Frank S. Crawford. The usual derivation of the Doppler effect gives Eq. (1).
Doerry, Armin W. (Albuquerque, NM); Heard, Freddie E. (Albuquerque, NM); Cordaro, J. Thomas (Albuquerque, NM)
2010-07-20T23:59:59.000Z
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
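The correction pipeline this abstract describes (estimate the error from range profiles across slow time, then apply a phase correction before compression) can be illustrated with a minimal numpy sketch. The single-scatterer scene, array sizes, and peak-phase estimator below are illustrative assumptions, not the patented algorithm:

```python
import numpy as np

# Motion-measurement error modeled as a pulse-to-pulse phase error, estimated
# by comparing range profiles across the slow-time (pulse) dimension.
n_pulses, n_range = 64, 128

ideal = np.zeros((n_pulses, n_range), dtype=complex)
ideal[:, 40] = 1.0                     # one dominant scatterer at range bin 40

# Slowly varying phase error across slow time (the "motion error").
true_phase = 0.5 * np.sin(2 * np.pi * np.arange(n_pulses) / n_pulses)
corrupted = ideal * np.exp(1j * true_phase)[:, None]

# Estimate the error from the brightest range bin, referenced to pulse 0.
peak_bin = np.abs(corrupted).sum(axis=0).argmax()
est_phase = np.angle(corrupted[:, peak_bin] * np.conj(corrupted[0, peak_bin]))

# Apply the phase correction; azimuth compression would follow.
corrected = corrupted * np.exp(-1j * est_phase)[:, None]
residual = np.angle(corrected[:, peak_bin] * np.conj(corrected[0, peak_bin]))
print(np.max(np.abs(residual)))        # ~0 once the error is removed
```

In a real SAR chain the estimate would come from many range profiles of uncompressed data rather than a single bright bin, but the estimate-then-correct structure is the same.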
Steven M Taylor
2007-04-10T23:59:59.000Z
Systematic error in calculation of z for high redshift type Ia supernovae could help explain unexpected luminosity values that indicate an accelerating rate of expansion of the universe.
Derivations of Marcus's formula G.F. Bertsch1
Bertsch George F.
Institute for Nuclear Theory and Dept. of Physics, University of Washington, Seattle, Washington. Abstract: Two derivations of Marcus's formula for transition rates are presented. The first derivation is based on the Landau-Zener transition rate formula.
Biomass-Derived Energy Products and Co-Products Market
This report identifies the bio-fuels and co-products markets. School of Ocean & Earth Science & Technology, University of Hawai`i at Manoa. Biomass-Derived Energy Products and Co-Products Market and Off-take Study, Hawaii.
Automation of Nested Matrix and Derivative Operations Robert Kalaba
Tesfatsion, Leigh
Robert Kalaba, Departments of Electrical Engineering... Addresses the automatic differentiation of functions expressed in terms of the derivatives of other functions. A method is introduced for the systematic exact evaluation of higher-order partial derivatives, building on a key idea...
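The core mechanism of automatic differentiation that this record discusses can be sketched with dual numbers, which propagate a value and its derivative together so the chain and product rules are applied mechanically. The `Dual` class and `derivative` helper below are illustrative, not Kalaba's notation; the paper's systematic higher-order scheme generalizes this first-order idea:

```python
# Forward-mode automatic differentiation with dual numbers.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot   # function value and derivative

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule carried automatically.
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f'(x) exactly (to machine precision), no symbolic algebra."""
    return f(Dual(x, 1.0)).dot

f = lambda x: x * x * x + 2 * x        # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2
print(derivative(f, 2.0))              # 14.0
```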
Back-and-forth Operation of State Observers and Norm Estimation of Estimation Error
Hyungbo Shim. This paper proposes a state estimation algorithm that, together with the plant, executes Luenberger observers back and forth in time. Approaches in the past have employed time-varying gains to overcome this problem [1], where the basic idea is to obtain...
The Influence of Source and Cost of Information Access on Correct and Errorful Interactive Behavior
Gray, Wayne
Wayne D. Gray & Wai-Tat Fu, Human Factors & Applied Cognition, George Mason University, Fairfax, VA 22030, USA. +1 703 993 1357, gray@gmu.edu. ABSTRACT: Routine interactive behavior reveals patterns of interaction...
ERROR MODELS FOR LIGHT SENSORS BY STATISTICAL ANALYSIS OF RAW SENSOR MEASUREMENTS
Potkonjak, Miodrag
A silicon solar cell converts light impulses directly into electrical charges that can easily be measured. We developed a system of statistical error models for sensor-based systems, supporting calibration, sensor fusion, and power management. Where the standard procedure is to use error models to enable calibration, in a variant of our approach we use...
Leaky LMS Algorithm: convergence of tap-weight error modes dependent on
Santhanam, Balu
Leaky LMS Algorithm: convergence of tap-weight error modes is input-dependent; stability and convergence time issues are of concern for ill-conditioned inputs, and leakage adds a cost. Block LMS Algorithm: uses type-I polyphase components of the input u[n].
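The leaky LMS update these slides summarize can be sketched in a few lines: the leakage factor (1 - mu*gamma) shrinks the tap weights each step, which stabilizes adaptation for ill-conditioned inputs at the price of a small bias. The step size, leakage constant, and filter length below are illustrative choices, not values from the slides:

```python
import numpy as np

# System identification with leaky LMS: adapt 3 taps to match a known FIR system.
rng = np.random.default_rng(1)
true_w = np.array([0.6, -0.3, 0.1])          # unknown system to identify
u = rng.standard_normal(5000)                # white input signal
d = np.convolve(u, true_w)[: len(u)]         # desired (noise-free) output

mu, gamma, M = 0.01, 1e-3, 3                 # step size, leakage, filter length
w = np.zeros(M)
for n in range(M, len(u)):
    x = u[n - M + 1 : n + 1][::-1]           # current input tap vector
    e = d[n] - w @ x                         # a priori error
    w = (1 - mu * gamma) * w + mu * e * x    # leaky LMS update

print(np.round(w, 2))                        # close to [0.6, -0.3, 0.1]
```

With gamma = 0 this reduces to the ordinary LMS update; the leakage term corresponds to adding a small ridge penalty on the weights.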
Using CO2 spatial variability to quantify representation errors of satellite CO2 retrievals
Michalak, Anna M.
Published 29 August 2008. Satellite measurements provide global data of column-averaged CO2 dry-air mole fraction (XCO2) at high spatial resolutions. These data...
Publish/Subscribe Systems on Node and Link Error-Prone
Motivations: mobile environments are error prone (wireless links; cellular wireless LANs). Comparison of pub/sub to client-server and polling models. Notation: cost of periodic publish or polling; s(n), the effect of sharing among n subscribers; tps (time...
Signal Processing: An Algebraic Method for Compensating for Coil-Placement Errors in Three-
MacLean, W. James
...and then expressing eye position as a function of coil position. The coil position vectors can be used to form a rigid... Two coils are used: the first employs a secondary coil in the front coil, which is effectively wound...
Zollanvari, Amin
2012-02-14T23:59:59.000Z
formulation of the joint distribution of the true error of misclassification and two of its commonly used estimators, resubstitution and leave-one-out, as well as their marginal and mixed moments, in the context of the Linear Discriminant Analysis (LDA...
Error Bounds from Extra-Precise Iterative Refinement. JAMES DEMMEL, YOZO HIDA, and WILLIAM KAHAN
Li, Xiaoye Sherry
prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to access an error bound for the computed solution. The completion of the new BLAS Technical Forum Standard has... Supported in part by NSF Cooperative Agreement No. ACI-9619020 and NSF Grant Nos. ACI-9813362 and CCF...
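The idea behind extra-precise iterative refinement can be sketched in numpy: solve in working precision, compute the residual in higher precision, and use it to correct the solution. This is a minimal illustration of the mechanism, not the LAPACK routine itself; the matrix size, seed, and iteration count are arbitrary:

```python
import numpy as np

# Mixed-precision iterative refinement: float32 solves, float64 residuals.
rng = np.random.default_rng(2)
A64 = rng.standard_normal((50, 50))
b64 = rng.standard_normal(50)

A32, b32 = A64.astype(np.float32), b64.astype(np.float32)
x = np.linalg.solve(A32, b32).astype(np.float64)   # working-precision solve

for _ in range(5):
    r = b64 - A64 @ x                              # residual in higher precision
    dx = np.linalg.solve(A32, r.astype(np.float32))
    x = x + dx.astype(np.float64)                  # correct the solution

x_ref = np.linalg.solve(A64, b64)
print(np.max(np.abs(x - x_ref)))                   # tiny: near double precision
```

Each pass multiplies the error by roughly the float32 relative error times the condition number, so a handful of iterations recovers nearly full double-precision accuracy for reasonably conditioned systems.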
Smoothing Parameter Selection When Errors are Correlated and Application to Ozone Data
Heckman, Nancy E.
Smoothing Parameter Selection When Errors are Correlated and Application to Ozone Data, by Robert... Examines the trend of daily and monthly ground ozone levels in southern Ontario. Contents: Air Pollution Data; Daily Ozone Data.
Error growth in poor ECMWF forecasts over the contiguous United States
Modlin, Norman Ray
1993-01-01T23:59:59.000Z
are found to have the majority of RMS growth on day 1, while poor forecasts do not experience rapid error growth until days 3 and 4. For poor forecasts, the leading EOFs reveal a wave pattern downstream of the Rocky Mountains. This pattern evolves...
Towards an Error Control Scheme for a Publish... (Published in: Proceedings of the IEEE ICC 2013)
Chatziantoniou, Damianos
Publish/subscribe is used for efficient content distribution; however, the design of efficient reliable transport protocols for multicast... An obvious use case for our scheme is the reliable delivery of software, and an evaluation of its performance is presented.
ERRORS IN VIKING LANDER ATMOSPHERIC PROFILES DISCOVERED USING MOLA TOPOGRAPHY. Paul Withers1
Withers, Paul
Paul Withers, R. D... Measurements above the spatially-varying martian topography were used to constrain the reconstructed trajectory, using the martian topography provided by the laser altimeter (MOLA) aboard the Mars Global Surveyor spacecraft.
COS FUV01 Detector Errors and Recommended Actions Date: July 30, 2001
Colorado at Boulder, University of
COS FUV01 Detector Errors and Recommended Actions Date: July 30, 2001 Document Number: COS-11-0032 Revision: Initial Release Contract No.: NAS5-98043 CDRL No.: SE-05 Prepared By: K. Brownsberger, COS Sr. Software Scientist, CU/CASA Date Reviewed By: J. McPhate, COS FUV Detector Scientist, UCB Date Reviewed By
1997-2001 by M. Kostic Ch.5: Uncertainty/Error Analysis
Kostic, Milivoje M.
©1997-2001 by M. Kostic. Ch.5: Uncertainty/Error Analysis: Introduction; Bias and Precision; Summation/Propagation (Expanded Combined Uncertainty); Problem 5-30. Uncertainty u = d_%P = t_%P S at the corresponding probability (%P); z = t = d/S. Bias...
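The summation/propagation step in this chapter outline is the usual root-sum-square combination of independent uncertainty contributions. A small sketch, with example quantities and uncertainty values that are purely illustrative (not from the course notes):

```python
import math

# Root-sum-square propagation of independent uncertainties
# (the "expanded combined uncertainty" idea).
def combined_uncertainty(contributions):
    """Combine independent contributions (each df/dx_i * u_i) in quadrature."""
    return math.sqrt(sum(c * c for c in contributions))

# Example: power P = V*I with V = 10 +/- 0.1 V and I = 2 +/- 0.05 A.
V, uV, I, uI = 10.0, 0.1, 2.0, 0.05
uP = combined_uncertainty([I * uV, V * uI])   # dP/dV = I, dP/dI = V
print(round(uP, 3))                           # 0.539
```

Multiplying the combined standard uncertainty by a coverage factor t (from the Student distribution at the chosen %P) gives the expanded uncertainty the outline refers to.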
Error Tolerant Address Configuration for Data Center Networks with Malfunctioning Devices
Chen, Yan
Xingyu Ma... Correcting malfunctions can cause substantial operation delay of the whole data center. The approach in this paper benefits because in most cases malfunctions in data centers only account for a very small portion...
Alexander J. Silenko
2013-08-12T23:59:59.000Z
Analysis of spin dynamics in storage ring electric-dipole-moment (EDM) experiments ascertains that the use of an initial vertical beam polarization makes it possible to cancel spin-dependent systematic errors imitating the EDM effect. While the use of this polarization meets certain difficulties, it should be considered as an alternative or supplementary possibility for performing the EDM experiment.
Low-Power and Error Coding for Network-on-Chip Traffic
Jantsch, Axel
Arseni Vitkovski, Raimo Haukilahti, Axel Jantsch. Covers frameworks for simulation performance and power estimation; Section 4 presents the setup for simulation. Several works describe methods to estimate power in telecommunication systems.
Bifurcated states of a rotating tokamak plasma in the presence of a static error-field
Fitzpatrick, Richard
Richard Fitzpatrick, Texas 78712. Received 20 January 1998; accepted 1 June 1998. The bifurcated states of a rotating tokamak plasma in the presence of a static error-field are examined... without hindrance. The response regime of a rotating tokamak plasma in the vicinity of the rational...
Tullos, Desiree
DOWNSTREAM CHANNEL CHANGES AFTER A SMALL DAM REMOVAL: USING AERIAL PHOTOS AND MEASUREMENT ERROR. ...and Ecological Engineering, Oregon State University, Corvallis, OR, USA. ABSTRACT: Dam removal is often implemented; aerial photos and measurement error analysis were used to assess downstream channel changes associated with a small dam removal. The Brownsville Dam, a 2.1 m tall...
Lucy, D.; Pollard, A.M. Title: Further comments on the estimation of error
Lucy, David
Abstract: Many researchers in the field of forensic odontology have questioned the error associated with the Gustafson dental age estimation method. Journal: Journal of Forensic Sciences; Date: 1995; Volume: 40(2). A number of papers in the forensic literature offer improvements to the basic Gustafson age estimation...
Linking Error, Passage of Time, the Cerebellum and the Primary Motor
Shadmehr, Reza
Linking Error, Passage of Time, the Cerebellum and the Primary Motor Cortex to the Multiple Timescales of Motor Memory. By Sarah Hemminger. A dissertation submitted to the Johns Hopkins University. ...could account for a large body of behavioral data in numerous motor adaptation paradigms. The idea...
Detecting Concurrency Errors in Client-side JavaScript Web Applications
Shin Hong, Yongbae Park. ...park@kaist.ac.kr, moonzoo@cs.kaist.ac.kr. Abstract: As web technologies have evolved, the complexity of dynamic web applications has grown, and concurrency issues are becoming more serious for web applications because a new web standard, HTML5, allows web...
Gross Error Detection in Chemical Plants and Refineries for On-Line Optimization
Pike, Ralph W.
Xueyu Chen, Derya... Baton Rouge, LA (February 28, 2003). INTRODUCTION: status of on-line optimization; theoretical... Slide notes mention automation products (FACS, DOT Products, Inc., NOVA) and a Distributed Control System that runs the control algorithm three times...
Paper No. 12A-12 ERRORS IN DESIGN LEADING TO PILE FAILURES DURING SEISMIC LIQUEFACTION
Bolton, Malcolm
Subhamoy... (U.K.), University of Cambridge (U.K.). ABSTRACT: Collapse of piled foundations in liquefiable soils has been observed. The current method of pile design under earthquake loading is based on a bending mechanism where the inertia...
WEB-BASED VISUAL EXPLORATION AND ERROR DETECTION IN LARGE DATA SETS
Köbben, Barend
Antarctic Iceberg Tracking Data as a Case. Connie A. Blok, Ulanbek Turdukulov, Barend Köbben, Juan Luis Calle Pomares. International..., The Netherlands. blok@itc.nl; turdukulov@itc.nl. Abstract: Polar iceberg data are, amongst others, used...
A Scalable Model for Timing Error Prediction under Hardware and Workload Variations
Gupta, Rajesh
Conservative guardbands cause efficiency loss. Resilient techniques: (1) predict and prevent; (2) error ignorance. Guardband reduction percentages are reported for the adder/multiplier benchmarks at (0.72 V, 0°C) and (0.85 V, 50°C), including instruction-level guardband reduction.
Critical Charge Characterization for Soft Error Rate Modeling in 90nm SRAM
Draper, Jeff
Abstract: Due to continuous technology scaling, the reduction of nodal capacitances and the lowering of power supply voltages result in an ever decreasing minimal charge capable of upsetting the logic state. Fast characteristic timing parameters are shown to result in conservative soft error rate predictions.
Temporal Memoization for Energy-Efficient Timing Error Recovery in GPGPUs
Gupta, Rajesh
GPGPUs commonly use conservative guardbands for the operating frequency or voltage to ensure error-free operation. The technique therefore enables reduction of the minimum operating voltage [7]. Similarly, in non-volatile memory... It outperforms recent advances in resilient architectures and also enhances robustness in the voltage...
Calibration of Visually Guided Reaching Is Driven by Error-Corrective Learning and Internal Dynamics
Sabes, Philip
Cheng S, Sabes PN. Calibration of visually guided reaching is driven by error-corrective learning and internal dynamics. Submitted 22 August 2006; accepted in final form 16 December 2006. First published January 3, 2007; doi:10.1152/jn.00897.2006. The sensorimotor calibration...
A New Error Control Scheme for Packetized Voice over HighSpeed Local Area Networks
Liebeherr, Jörg
We propose a new error control mechanism for packet voice, referred to as Slack ARQ (SARQ). SARQ is based on... It does not require hardware support or priority channels, and imposes little overhead on network resources. Packet switching makes more efficient use of network resources than circuit switching; statistical multiplexing, however, causes delay...
Iterative Dense Correspondence Correction Through Bundle Adjustment Feedback-Based Error Detection
Hess-Flores, M A; Duchaineau, M A; Goldman, M J; Joy, K I
2009-11-23T23:59:59.000Z
A novel method to detect and correct inaccuracies in a set of unconstrained dense correspondences between two images is presented. Starting with a robust, general-purpose dense correspondence algorithm, an initial pose estimate and dense 3D scene reconstruction are obtained and bundle-adjusted. Reprojection errors are then computed for each correspondence pair, which is used as a metric to distinguish high and low-error correspondences. An affine neighborhood-based coarse-to-fine iterative search algorithm is then applied only on the high-error correspondences to correct their positions. Such an error detection and correction mechanism is novel for unconstrained dense correspondences, for example not obtained through epipolar geometry-based guided matching. Results indicate that correspondences in regions with issues such as occlusions, repetitive patterns and moving objects can be identified and corrected, such that a more accurate set of dense correspondences results from the feedback-based process, as proven by more accurate pose and structure estimates.
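The error-detection step in this abstract (compute a reprojection error per correspondence, then flag high-error pairs) can be sketched with a toy camera model. The projection matrix, point cloud, corruption, and threshold below are all illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# Flag correspondences whose reprojection error exceeds a threshold.
rng = np.random.default_rng(3)
P = np.hstack([np.eye(3), np.zeros((3, 1))])       # simple projective camera

X = np.hstack([rng.uniform(-1, 1, (100, 2)),       # 3D points in front of camera
               rng.uniform(2, 5, (100, 1)),
               np.ones((100, 1))])                  # homogeneous coordinates
proj = (P @ X.T).T
reproj = proj[:, :2] / proj[:, 2:3]                # reprojected 2D points

obs = reproj.copy()
obs[:10] += 0.5                                    # corrupt 10 "observations"

err = np.linalg.norm(obs - reproj, axis=1)         # reprojection error per pair
flagged = np.nonzero(err > 0.1)[0]
print(flagged)                                     # indices 0..9
```

In the full pipeline the flagged set would then be handed to the coarse-to-fine neighborhood search for correction, and the corrected correspondences fed back into bundle adjustment.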
Power Control by Kalman Filter With Error Margin for Wireless IP Networks
Leung, Kin K.
Kin K. Leung, AT&T Labs, Room 4-120, 100 Schulz Drive, Red Bank, NJ 07701. Email: kkleung@research.att.com. ABSTRACT: A power-control... enough due to little interference temporal correlation. In this paper, we enhance the power-control...
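The combination this title describes, a Kalman filter plus an error margin, can be sketched with a scalar filter: track a slowly varying interference level from noisy measurements, then derive a margin from the filter's own error variance. All model parameters and the random-walk signal below are illustrative, not the paper's system model:

```python
import numpy as np

# Scalar Kalman filter tracking a drifting level (in dB) from noisy readings.
rng = np.random.default_rng(4)
true_level = np.cumsum(rng.normal(0, 0.05, 200))   # slowly drifting level
meas = true_level + rng.normal(0, 1.0, 200)        # noisy measurements

q, r = 0.05**2, 1.0**2                             # process / measurement variance
x, p = 0.0, 1.0                                    # state estimate and variance
estimates = []
for z in meas:
    p = p + q                                      # predict
    k = p / (p + r)                                # Kalman gain
    x = x + k * (z - x)                            # update with measurement
    p = (1 - k) * p
    estimates.append(x)

# Error margin from the filter's steady-state estimation variance.
margin = 2 * np.sqrt(p)
print(round(margin, 2))                            # ~0.44
```

Adding such a margin on top of the estimate is a simple way to buy outage protection against the residual estimation error, at the cost of some extra transmit power.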
Analysis of measurement errors for a superconducting phase qubit Qin Zhang,1 Abraham G. Kofman,1,
Martinis, John M.
Qin Zhang, Abraham G. Kofman. Measurement of a superconducting flux-biased phase qubit is analyzed: an insufficiently long measurement pulse may lead to nonadiabatic... A wide variety of groups are developing superconducting Josephson-junction circuits for quantum computation.
Vibrotactile Feedback in Steering Wheel Reduces Navigation Errors during GPS-Guided Car Driving
Basdogan, Cagatay
Vibrotactile feedback displayed through the steering wheel of a car can reduce the perceptual and cognitive load associated with GPS-based voice commands. KEYWORDS: vibrotactile, haptics, car navigation systems, GPS, steering wheel.
POWER SPECTRAL PARAMETERIZATIONS OF ERROR AS A FUNCTION OF RESOLUTION IN GRIDDED
Kaplan, Alexey
...IN GRIDDED ALTIMETRY MAPS. Errors can be expressed in terms of the averages over model grid box areas. In reality, however, observations are sampled differently by the model grid and by the observational system. This difference turns out to be a major...
Quantifying Errors Associated with Satellite Sampling of Offshore Wind S.C. Pryor1,2
Quantifying Errors Associated with Satellite Sampling of Offshore Wind Speeds. S.C. Pryor, R... Bloomington, IN 47405, USA. Tel: 1-812-855-5155; Fax: 1-812-855-1661; Email: spryor@indiana.edu. Dept. of Wind... Satellites are an attractive proposition for measuring wind speeds over the oceans because in principle they also offer...
Theory and simulations of electrostatic field error transport Daniel H. E. Dubin
California at San Diego, University of
Daniel H. E. Dubin, Department... Field errors are of central importance in plasma theory and experiment. For example, in the theory of neoclassical... theory by equating the Joule heating power to the wave energy loss rate [12,13], with the regime of linear...
A Probability Model For Errors in the Numerical Solutions of a Partial Di erential Equation
New York at Stoney Brook, State University of
into a petroleum reservoir, and the outflow is observed through production well(s). The relevant outflow variable... permeability. We measure the solution error as the difference between the oil production rates (oil cut...), assessing the extent to which the coarse grid oil production rate is sufficient to distinguish among geologies.
Development of methodology to correct sampling error associated with FRM PM10 samplers
Chen, Jing
2009-05-15T23:59:59.000Z
of the particle size distribution (PSD) and performance characteristics of the sampler (Buser, 2004). This research attempts to find a practical method to characterize and correct this error for the Federal Reference Method (FRM) PM10 sampler. First, a new dust...
Large-Scale Errors and Mesoscale Predictability in Pacific Northwest Snowstorms DALE R. DURRAN
The development of mesoscale numerical weather prediction (NWP) models over the last two decades has made... research communities. Nevertheless, the predictability of the mesoscale features captured in such forecasts...
Approximations for Bit Error Probabilities in SSMA Communication Systems Using Spreading
Keller, Gerhard
...Sequences. @mi.uni-erlangen.de. Abstract: In previous research, we considered SSMA (spread spectrum multiple access) communication systems. For the bit error probabilities of SSMA communication systems, the standard Gaussian approximation (SGA)...
ERROR-TOLERANT MULTI-MODAL SENSOR FUSION (SHORT PAPER) Farinaz Koushanfar*
Farinaz Koushanfar, Sasha Slijepcevic. One of the canonical ESN tasks is multi-modal sensor fusion, where data from sensors of different modalities are combined. A requirement in ESN applications, including multi-modal sensor fusion, is to ensure that all of the techniques...
Detection and Prediction of Errors in EPCs of the SAP Reference Model
van der Aalst, Wil
J. Mendling, H.M.W. Verbeek. The SAP reference model serves as a blueprint for roll-out projects of SAP's ERP system; it reflects Version 4.6 of SAP R/3, which was marketed... We provide empirical evidence for these questions based on the SAP reference model. This model collection...
Characterization and removal of errors due to local magnetic anomalies in directional drilling. Department of Geophysics, Colorado School of Mines. Summary: Directional drilling has evolved over the last few decades and utilizes a technique known as magnetic Measurement While Drilling (MWD). Vector measurements of geomagnetic...
Liu, Hongyu
IN A CHEMICAL TRANSPORT MODEL. I. Bey, Swiss Federal Institute... Abstract: We propose a new methodology to characterize errors in chemical forecasts from a global tropospheric chemical transport model, constraining the evaluation of errors in the representation of transport processes in chemical transport models.
Schierup, Mikkel Heide
Correction for measurement error from genotyping-by-sequencing in genomic variance and genomic... Center for Quantitative Genetics and Genomics, Department of Molecular Biology and Genetics, Aarhus University, Denmark; DLF-Trifolium, Store Heddinge, Denmark.
Automatic detection of dimension errors in spreadsheets Chris Chambers, Martin Erwig
Erwig, Martin
University, USA. Keywords: spreadsheet; dimension; unit of measurement; static analysis; inference rule; error detection. Abstract: We present a reasoning system for inferring dimension information in spreadsheets. This system can be used to check the consistency of spreadsheet formulas and thus...
PROPER FILTER DESIGN PROCEDURE FOR VIBRATION SUPPRESSION USING DELAY-ERROR-ORDER CURVES
Mavroidis, Constantinos
D. Economou... Department of Mechanical Engineering, Mechanical Design and Control Systems Division, 9 Heroon Polytechniou Str., 15773... @central.ntua.gr. Rutgers University, The State University of New Jersey, Department of Mechanical and Aerospace...
Error Analysis of Heat Transfer for Finned-Tube Heat-Exchanger Text-Board
Chen, Y.; Zhang, J.
2006-01-01T23:59:59.000Z
We substitute equation (13) into equation (10) and obtain the maximum absolute error of the air moisture content.
Error of the network approximation for densely packed composites with irregular geometry
Novikov, Alexei
Leonid... Effective properties such as the effective conductivity or the effective dielectric constant of composite materials are considered. The case when the concentration of the filling inclusions is high is particularly relevant to polymer/ceramic composites, because...
ERROR ESTIMATES FOR A TIME DISCRETIZATION METHOD FOR THE RICHARDS' EQUATION
Eindhoven, Technische Universiteit
IULIU SORIN POP. The continuity condition ∂_t θ + ∇·q = 0 combined with Darcy's law (1.1) leads to Richards' equation (1.2). Written in its saturation-based form, this nonlinear parabolic equation models water flow...
A POSTERIORI ERROR ESTIMATE FOR THE H(div) CONFORMING MIXED FINITE ELEMENT FOR THE COUPLED
Wang, Yanqiu
...proposed for the coupled Darcy-Stokes flow in [30], which imposes normal continuity... for the Darcy equation, and vice versa; special techniques usually need to be employed. In [3], a conforming...
Error Control Coding in Low-Power Wireless Sensor Networks: When is ECC
Howard, Sheryl
Sheryl L. Howard. In crowded environments and office buildings, dCR drops significantly, to 3 m or greater at 10 GHz. Interference is not considered; it would lower dCR. Analog decoders are shown to be the most energy-efficient.
Quantum Error Correcting Codes and the Security Proof of the BB84 Protocol
Ramesh Bhandari
2014-08-30T23:59:59.000Z
We describe the popular BB84 protocol and critically examine its security proof as presented by Shor and Preskill. The proof requires the use of quantum error correcting codes called the Calderbank-Shor-Steane (CSS) quantum codes. These quantum codes are constructed in the quantum domain from two suitable classical linear codes, one used to correct for bit-flip errors and the other for phase-flip errors. Consequently, as a prelude to the security proof, the report reviews the essential properties of linear codes, especially the concept of cosets, before building the quantum codes that are utilized in the proof. The proof considers a security entanglement-based protocol, which is subsequently reduced to a "Prepare and Measure" protocol similar in structure to the BB84 protocol, thus establishing the security of the BB84 protocol. The proof, however, is not without assumptions, which are also enumerated. The treatment throughout is pedagogical, and this report therefore serves as a useful tutorial for researchers, practitioners, and students new to the field of quantum information science, in particular quantum cryptography, as it develops the proof in a systematic manner, starting from the properties of linear codes and then advancing to the quantum error correcting codes, which are critical to the understanding of the security proof.
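The classical linear codes the CSS construction builds on can be illustrated with syndrome decoding for the [7,4] Hamming code, the standard single-error-correcting example (the all-zeros codeword and the flipped position below are arbitrary choices for illustration):

```python
import numpy as np

# Parity-check matrix H of the [7,4] Hamming code: its columns are the
# binary representations of 1..7, so the syndrome of a single bit flip
# reads off the flipped position directly.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

codeword = np.zeros(7, dtype=int)          # the all-zeros codeword
received = codeword.copy()
received[4] = 1                            # flip bit 5 (1-indexed)

syndrome = H @ received % 2                # [1, 0, 1] here
pos = int("".join(map(str, syndrome)), 2)  # binary syndrome -> position 5
received[pos - 1] ^= 1                     # correct the flip
print(pos, (received == codeword).all())   # 5 True
```

In the CSS construction one such code (used this way) handles bit-flip errors while a second, dual-containing code handles phase flips in the conjugate basis.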
Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells
Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.
2014-03-01T23:59:59.000Z
This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.
Kambhampati, Subbarao
Design Methodology to Trade Off Power, Output Quality and Error Resiliency: Application to Color... {...,nbanerje,kaushik}@purdue.edu; chaitali@asu.edu. Abstract: Power dissipation and tolerance to process variations pose conflicting design... Up-sizing for process tolerance can be detrimental for power dissipation. However, for certain signal processing systems...
Error estimation and anisotropic mesh refinement for 3d laminar aerodynamic flow simulations
Hartmann, Ralf
Tobias... Addresses three-dimensional laminar aerodynamic flow simulations using the optimal order symmetric interior penalty discontinuous Galerkin... For laminar flows, see Sections 2 and 3 for the governing equations and the discretization.
Three Quantities for Error Evaluation in Safety Critical Human Computer Interface 1
Schreiber, Fabio A.
Fabio A... Considers the impact on the total dependability figures in safety critical systems, with usability expressed as a function... Keywords: interface, learning difficulty, MTBF, safety critical systems, usability. 1. Introduction. The embedding...
Stability and error analysis of the polarization estimation inverse problem for solid oxide fuel cells
Renaut, Rosemary
The impedance at the electrode-electrolyte interfaces of solid oxide fuel cells (SOFC) is investigated physically using electrochemical ... Describing the performance of a solid oxide fuel cell requires the solution of an inverse problem. Two ...
DRAM Errors in the Wild: A Large-Scale Field Study
Schroeder, Bianca
Bianca Schroeder, University of Toronto, Toronto, Canada; Eduardo Pinheiro, Google Inc., Mountain View, CA; Wolf-Dietrich Weber, Google Inc., Mountain View, CA. Abstract: Errors in dynamic random access memory (DRAM) ...
Reducing the influence of microphone errors on in-situ ground impedance measurements
Vormann, Matthias
Reducing the influence of microphone errors on in-situ ground impedance measurements. Roland Kruse ... Keywords: Ground impedance; In-situ impedance measurement. PACS 43.58.Bh. Introduction: The acoustical ... This problem is not specific to in-situ measurements but also applies to impedance tube measurements [9]. Two ...
Effect and minimization of errors in in-situ ground impedance measurements
Vormann, Matthias
Effect and minimization of errors in in-situ ground impedance measurements. Roland Kruse, Volker ... method is a procedure to measure the surface impedance of grounds in situ. In this article, the influence ... Keywords: Ground impedance; In-situ impedance measurement. PACS 43.58.Bh. Introduction: The surface ...
JPEG Quality Transcoding using Neural Networks Trained with a Perceptual Error Measure
Lazzaro, John
JPEG Quality Transcoding using Neural Networks Trained with a Perceptual Error Measure. John Lazzaro ... Abstract: A JPEG Quality Transcoder (JQT) converts a JPEG image file that was encoded with low image quality ... gives users direct control over the compression process, supporting trade-offs between image quality ...
Ultrasonic thickness measurements on corroded steel members: a statistical analysis of error
Konen, Keith Forman
1999-01-01T23:59:59.000Z
... with measuring the wall thickness of a corroded tubular member and 2) determining how the strength calculations are affected by an error in a wall thickness measurement. This thesis is based on the first phase of a research project funded by Mineral Management...
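The second question above, how a thickness error propagates into a strength calculation, can be illustrated with a first-order sketch for a thin-walled tube (the formula and all numbers are illustrative assumptions, not values from the thesis):

```python
import math

# First-order propagation of a wall-thickness measurement error into the
# cross-sectional area of a thin-walled tube, A ~= pi * D * t. Because D
# is fixed, the relative error in A equals the relative error in t.

def tube_area(diameter, thickness):
    """Approximate cross-sectional area of a thin-walled tube (m^2)."""
    return math.pi * diameter * thickness

D, t_true, t_err = 0.5, 0.012, 0.001   # metres; assume a 1 mm gauge error
a_true = tube_area(D, t_true)
a_meas = tube_area(D, t_true + t_err)

rel_error = (a_meas - a_true) / a_true
print(f"{rel_error:.1%}")  # 8.3% -- same as the relative thickness error
```

A 1 mm error on a 12 mm wall therefore maps one-for-one into the area term of any strength check that is linear in wall thickness.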
Locally Testing Direct Products in the Low Error Range
Dinur, Irit
Given a function f : X → Σ, its k-wise direct product is the function F = f^k : X^k → Σ^k defined by F(x1, ..., xk) = (f(x1), ..., f(xk)). ... the acceptance probability of the test. We show that even if the test passes with small probability ε > 0 ...
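The direct-product construction itself is simple enough to sketch in a few lines (a toy illustration; `f` and `k` here are arbitrary placeholders, and this shows only the construction, not the local test):

```python
# k-wise direct product of a function f: apply f coordinate-wise, so
# F(x1, ..., xk) = (f(x1), ..., f(xk)). A local tester queries F on a
# few correlated tuples and checks them for consistency.

def direct_product(f, k):
    """Return F = f^k acting on k-tuples."""
    def F(xs):
        assert len(xs) == k, "F is only defined on k-tuples"
        return tuple(f(x) for x in xs)
    return F

f = lambda x: x * x
F = direct_product(f, 3)
print(F((1, 2, 3)))  # (1, 4, 9)
```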
Detecting arbitrary quantum errors via stabilizer measurements on a sublattice of the surface code
A. D. Córcoles; Easwar Magesan; Srikanth J. Srinivasan; Andrew W. Cross; M. Steffen; Jay M. Gambetta; Jerry M. Chow
2014-10-23T23:59:59.000Z
To build a fault-tolerant quantum computer, it is necessary to implement a quantum error correcting code. Such codes rely on the ability to extract information about the quantum error syndrome while not destroying the quantum information encoded in the system. Stabilizer codes are attractive solutions to this problem, as they are analogous to classical linear codes, have simple and easily computed encoding networks, and allow efficient syndrome extraction. In these codes, syndrome extraction is performed via multi-qubit stabilizer measurements, which are bit and phase parity checks up to local operations. Previously, stabilizer codes have been realized in nuclei, trapped ions, and superconducting qubits. However, these implementations lack the ability to perform fault-tolerant syndrome extraction, which continues to be a challenge for all physical quantum computing systems. Here we experimentally demonstrate a key step towards solving this problem by using a two-by-two lattice of superconducting qubits to perform syndrome extraction and arbitrary error detection via simultaneous quantum non-demolition stabilizer measurements. This lattice represents a primitive tile for the surface code, which is a promising stabilizer code for scalable quantum computing. Furthermore, we successfully show the preservation of an entangled state in the presence of an arbitrary applied error through high-fidelity syndrome measurement. Our results bolster the promise of employing lattices of superconducting qubits for larger-scale fault-tolerant quantum computing.
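The bit-parity checks underlying syndrome extraction have a simple classical analogue, sketched below for a 3-bit repetition code (a toy illustration of the parity-check idea only, not the surface-code experiment itself):

```python
# Classical analogue of stabilizer syndrome extraction: parity checks
# locate a single flipped bit without reading the data bits directly,
# just as ZZ-type stabilizer measurements reveal bit-flip errors
# without collapsing the encoded state.

def syndrome(bits):
    """Parities of bit pairs (0,1) and (1,2) for a 3-bit repetition code."""
    s1 = bits[0] ^ bits[1]
    s2 = bits[1] ^ bits[2]
    return (s1, s2)

print(syndrome([0, 0, 0]))  # (0, 0): no error detected
print(syndrome([0, 1, 0]))  # (1, 1): both checks fire -> middle bit flipped
print(syndrome([1, 0, 0]))  # (1, 0): only the first check fires -> bit 0
```

Each distinct single-bit error produces a distinct syndrome, which is the property the quantum stabilizer measurements reproduce for both bit-flip and phase-flip errors.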