Absolute Percent Error Based Fitness Functions for Evolving Forecast Models
Novobilski, Andy, Ph.D.; Fernandez, Thomas
One advantage of evolutionary computing, as a method of data mining, is its intrinsic ability to drive model selection according to a mixed set of criteria. Based on natural selection, evolutionary computing utilizes evaluation of candidate solutions...
J. Rodnizki, D. Berkovits, K. Lavie, I. Mardor, A. Shor and Y. Yanay (Soreq NRC, Yavne), K. Dunkel, C. Piel (ACCEL, Bergisch Gladbach), A. Facco (INFN/LNL, Legnaro, Padova), V. Zviagintsev (TRIUMF, Vancouver)
Abstract: Beam dynamics simulations of the SARAF (Soreq Applied Research Accelerator Facility) superconducting RF linear accelerator have been performed in order to establish the accelerator design. The multi-particle simulation includes 3D realistic electromagnetic field distributions, space charge forces, and fabrication, misalignment and operation errors. A 4 mA proton or deuteron beam is accelerated up to 40 MeV with moderate rms emittance growth and a high real-estate gradient of 2 MeV/m. An envelope of 40,000 macro-particles is kept under a radius of 1.1 cm, well below the beam pipe bore radius. The accelerator design of SARAF is proposed as an injector for the EURISOL driver accelerator. The ACCEL 176 MHz β0=0.09 and β0=0.15 HWR lattice was extended to 90 MeV based on the LNL 352 MHz β0=0.31 HWR. The matching between both lattices ensures a smooth transition and the possibility to extend the accelerator to the required EURISOL ion energy.
Holcomb, Daniel; Li, Wenchao; Seshia, Sanjit A.
Design as You See FIT: System-Level Soft Error Analysis of Sequential Circuits
... of the overall circuit can be computed from the CFIT and probabilities of system-level failure due to soft errors. Abstract: Soft errors in combinational and sequential elements of digital circuits are an increasing...
Campbell, Andrew T.
process
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

int main(void) {
    pid_t pid = fork();
    if (pid == -1) {
        /* fork() failed */
    } else if (pid == 0) {
        /* child process */
    } else {
        /* parent process */
    }
    return 0;
}
thread
Poinsot, Laurent
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
/* Reminders: getpid() returns a process's own pid;
   getppid() returns the pid of a process's parent. */
int main(void) {
    pid_t pid_fils;
    pid_fils = fork();
    if (pid_fils == -1) {
        printf("Error creating the child process\
de Lijser, Peter
in a thesis or dissertation. 1. The left margin must be set to 1.5 inches on every page, including appendices. 2. Use
Polynomial fits and the proton radius puzzle
E. Kraus; K. E. Mesick; A. White; R. Gilman; S. Strauch
2014-10-27T23:59:59.000Z
The Proton Radius Puzzle refers to the ~7σ discrepancy that exists between the proton charge radius determined from muonic hydrogen and that determined from electronic hydrogen spectroscopy and electron-proton scattering. One possible partial resolution to the puzzle includes errors in the extraction of the proton radius from ep elastic scattering data. This possibility is made plausible by certain fits which extract a smaller proton radius from the scattering data consistent with that determined from muonic hydrogen. The reliability of some of these fits that yield a smaller proton radius was studied. We found that fits of form factor data with a truncated polynomial fit are unreliable and systematically give values for the proton radius that are too small. Additionally, a polynomial fit with χ²_reduced ≈ 1 is not a sufficient indication of a reliable result.
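The truncation bias described above can be illustrated with a toy extraction (not the authors' actual analysis): generate exact pseudo-data from a dipole form factor, fit a truncated polynomial in Q², and read the radius off the fitted slope at Q² = 0. The dipole parameter, fit range, and polynomial degree below are arbitrary assumptions for the sketch.

```python
import numpy as np

# Dipole form factor G(Q^2) = (1 + Q^2/L2)^-2 with L2 = 0.71 GeV^2.
# The charge radius follows from the slope at Q^2 = 0:
#   G(Q^2) ~ 1 - (r^2/6) Q^2 + ...,  so  r^2 = -6 G'(0) = 12/L2.
HBARC = 0.1973  # GeV*fm, converts GeV^-1 to fm
L2 = 0.71       # GeV^2, assumed dipole mass parameter

q2 = np.linspace(0.0, 0.5, 100)          # GeV^2, assumed fit range
g = (1.0 + q2 / L2) ** -2                # exact "data", no noise

# Truncated quadratic fit in Q^2: G ~ c0 + c1*Q^2 + c2*Q^4
c2_, c1_, c0_ = np.polyfit(q2, g, 2)
r_fit = np.sqrt(-6.0 * c1_) * HBARC      # fm, radius from fitted slope
r_true = np.sqrt(12.0 / L2) * HBARC      # fm, exact dipole radius

print(f"true r = {r_true:.4f} fm, fitted r = {r_fit:.4f} fm")
```

Even with noiseless data, the fitted radius comes out below the true dipole value because the discarded higher-order terms leak into the fitted linear coefficient, which is the qualitative effect the abstract reports.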
Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))
1990-01-01T23:59:59.000Z
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15T23:59:59.000Z
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
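A behavioural sketch of the mirrored-register idea (a hypothetical Python model, not the patented circuit): every write goes to both files plus a parity bit, and a read whose parity check fails is repaired from the mirror. The class and method names are illustrative.

```python
class MirroredRegisterFile:
    """Toy model: a primary register file protected by per-entry
    parity, with a mirror file used for recovery."""

    def __init__(self, size):
        self.primary = [0] * size
        self.mirror = [0] * size
        self.parity = [0] * size

    @staticmethod
    def _parity(v):
        return bin(v).count("1") & 1

    def write(self, idx, value):
        # Writes update both copies and the stored parity bit.
        self.primary[idx] = value
        self.mirror[idx] = value
        self.parity[idx] = self._parity(value)

    def inject_bit_flip(self, idx, bit):
        # Simulate a soft error in the primary copy only.
        self.primary[idx] ^= 1 << bit

    def read(self, idx):
        v = self.primary[idx]
        if self._parity(v) != self.parity[idx]:
            # Detected corruption: this stands in for the inserted
            # error recovery instruction copying the mirror back.
            self.primary[idx] = self.mirror[idx]
            v = self.mirror[idx]
        return v

rf = MirroredRegisterFile(4)
rf.write(2, 0b1011)
rf.inject_bit_flip(2, 1)   # corrupt the primary copy
print(rf.read(2))          # recovered value: 11
```

A single parity bit detects any single-bit upset; the mirror supplies the corrected data, mirroring the detect-then-recover flow of the abstract.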
Uncertainty quantification and error analysis
Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [OHIO STATE UNIV.]; Covey, Curt [LLNL]; Ghattas, Omar [UNIV OF TEXAS]; Graziani, Carlo [UNIV OF CHICAGO]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC/BERKELEY]; Stewart, James [SNL]
2010-01-01T23:59:59.000Z
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Curve fitting methods for solar radiation data modeling
Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]
2014-10-24T23:59:59.000Z
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error measurement was calculated by using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
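The RMSE and R² statistics mentioned above can be computed as follows. This is a generic sketch on synthetic data: the two-term sine model matches one of the candidate forms, but the function, parameters, and noise level are assumptions, not the paper's solar data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-term sine model, one of the candidate fitting forms:
#   y = a1*sin(b1*x + c1) + a2*sin(b2*x + c2)
def sine2(x, a1, b1, c1, a2, b2, c2):
    return a1 * np.sin(b1 * x + c1) + a2 * np.sin(b2 * x + c2)

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y_true = 3.0 * np.sin(0.8 * x + 0.3) + 1.0 * np.sin(2.5 * x + 1.1)
y = y_true + rng.normal(0, 0.1, x.size)   # noisy "measurements"

p0 = [3, 0.8, 0.3, 1, 2.5, 1.1]           # assumed starting guess
popt, _ = curve_fit(sine2, x, y, p0=p0)
resid = y - sine2(x, *popt)

rmse = np.sqrt(np.mean(resid ** 2))                            # goodness of fit
r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)    # coefficient of determination
print(f"RMSE = {rmse:.3f}, R^2 = {r2:.4f}")
```

Candidate models (Gaussian, sine, polynomial, ...) can then be ranked by comparing their RMSE and R² values on the same data.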
Simonen, Fredric A.; Gosselin, Stephen R.; Doctor, Steven R.
2013-04-22T23:59:59.000Z
This document describes a new method to determine whether the flaws in a particular reactor pressure vessel are consistent with the assumptions regarding the number and sizes of flaws used in the analyses that formed the technical justification basis for the new voluntary alternative Pressurized Thermal Shock (PTS) rule (Draft 10 CFR 50.61a). The new methodology addresses concerns regarding prior methodology because ASME Code Section XI examinations do not detect all fabrication flaws, they have higher detection performance for some flaw types, and there are flaw sizing errors always present (e.g., significant oversizing of small flaws and systematic under sizing of larger flaws). The new methodology allows direct comparison of ASME Code Section XI examination results with values in the PTS draft rule Tables 2 and 3 in order to determine if the number and sizes of flaws detected by an ASME Code Section XI examination are consistent with those assumed in the probabilistic fracture mechanics calculations performed in support of the development of 10 CFR 50.61a.
Pickett, Patrick T. (Kettering, OH)
1981-01-01T23:59:59.000Z
A hollow fitting for use in gas spectrometry leak testing of conduit joints is divided into two generally symmetrical halves along the axis of the conduit. A clip may quickly and easily fasten and unfasten the halves around the conduit joint under test. Each end of the fitting is sealable with a yieldable material, such as a piece of foam rubber. An orifice is provided in a wall of the fitting for the insertion or detection of helium during testing. One half of the fitting also may be employed to test joints mounted against a surface.
Sandford, II, Maxwell T. (Los Alamos, NM); Handel, Theodore G. (Los Alamos, NM); Ettinger, J. Mark (Los Alamos, NM)
1999-01-01T23:59:59.000Z
A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
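A minimal sketch in the same spirit as the key-permuted embedding described above. This is plain low-order-bit replacement with a key-derived permutation, not the patented modular method that doubles capacity; all names and values are illustrative.

```python
import random

def embed(host, bits, key):
    """Embed one auxiliary bit per host value, in a key-permuted order."""
    out = list(host)
    order = list(range(len(host)))
    random.Random(key).shuffle(order)      # key-derived permutation
    for bit, idx in zip(bits, order):
        out[idx] = (out[idx] & ~1) | bit   # replace the low-order bit
    return out

def extract(stego, nbits, key):
    """Recover the auxiliary bits; requires the same key."""
    order = list(range(len(stego)))
    random.Random(key).shuffle(order)
    return [stego[idx] & 1 for idx in order[:nbits]]

host = [200, 13, 77, 154, 90, 31, 66, 128]   # e.g. pixel values
bits = [1, 0, 1, 1, 0]                        # auxiliary data
stego = embed(host, bits, key=42)
print(extract(stego, len(bits), key=42))      # [1, 0, 1, 1, 0]
```

Each host value changes by at most 1, which is why such schemes stay within the noise floor of the low-order bits and preserve human perception of the host data.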
Albert, Réka
(Slide fragments) Outline: Introduction, ERGM, Model Fitting, Simulation, References. Using social network characteristics... Model assumptions: susceptibility to disease is homogeneous across the network; the degree distribution is roughly symmetric. The observed network instead shows a highly skewed distribution of contacts (following a discretized Weibull distribution); most nodes...
Approximate error conjugate gradient minimization methods
Kallman, Jeffrey S
2013-05-21T23:59:59.000Z
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
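A rough sketch of the subset idea on a linear tomography-like system Ax = b, where each row of A is one "ray". This is an illustrative reconstruction under assumed data, not the patented algorithm: the conjugate gradient iteration here uses full data, while the error is only evaluated on a random subset of rays.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 60, 12
A = rng.normal(size=(m, n))       # one row per "ray" (assumed setup)
x_true = rng.normal(size=n)
b = A @ x_true                     # consistent data, so the residual can vanish

subset = rng.choice(m, size=10, replace=False)  # rays used for the error estimate

# Conjugate gradients on the normal equations A^T A x = A^T b.
x = np.zeros(n)
r = A.T @ (b - A @ x)
p = r.copy()
approx_err = np.linalg.norm(A[subset] @ x - b[subset])
for _ in range(n):
    rr = r @ r
    if rr < 1e-28:                 # already converged
        break
    Ap = A.T @ (A @ p)
    alpha = rr / (p @ Ap)
    x += alpha * p
    r = r - alpha * Ap
    # Approximate error: residual norm over the ray subset only,
    # which is much cheaper than the full residual for large m.
    approx_err = np.linalg.norm(A[subset] @ x - b[subset])
    p = r + ((r @ r) / rr) * p

print(f"final approximate error: {approx_err:.2e}")
print(f"final full error: {np.linalg.norm(A @ x - b):.2e}")
```

The cost saving is in the error evaluation: the subset residual touches 10 rows instead of all 60, and for realistic ray counts that difference dominates.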
Discretization error estimation and exact solution generation using the method of nearby problems.
Sinclair, Andrew J. (Auburn University Auburn, AL); Raju, Anil (Auburn University Auburn, AL); Kurzen, Matthew J. (Virginia Tech Blacksburg, VA); Roy, Christopher John (Virginia Tech Blacksburg, VA); Phillips, Tyrone S. (Virginia Tech Blacksburg, VA)
2011-10-01T23:59:59.000Z
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Beckerman, M.; Jones, J.P.
1999-02-01T23:59:59.000Z
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors, and the amplitudes of the radial and cross-track errors, increase.
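The radial / along-track / cross-track decomposition used in such analyses can be sketched as follows (the state vector and error values are illustrative numbers, not the paper's satellite data):

```python
import numpy as np

def rsw_components(r, v, err):
    """Project a position-error vector onto the radial, along-track
    and cross-track axes defined by the state (r, v)."""
    radial = r / np.linalg.norm(r)
    cross = np.cross(r, v)
    cross = cross / np.linalg.norm(cross)     # normal to the orbit plane
    along = np.cross(cross, radial)           # completes the right-handed triad
    return np.array([err @ radial, err @ along, err @ cross])

r = np.array([7000.0, 0.0, 0.0])      # km, assumed near-circular state
v = np.array([0.0, 7.5, 0.0])         # km/s
err = np.array([0.2, -1.5, 0.05])     # km, hypothetical prediction error

c = rsw_components(r, v, err)
print(c)   # [radial, along-track, cross-track] components
```

Accumulating these components over many prediction epochs yields exactly the kind of error-density statistics the abstract reports, with the along-track component typically dominating.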
In situ repair of a failed compression fitting
Wolbert, R.R.; Jandrasits, W.G.
1985-08-05T23:59:59.000Z
A method and apparatus for the in situ repair of a failed compression fitting is provided. Initially, a portion of a guide tube is inserted coaxially in the bore of the compression fitting and locked therein. A close fit dethreading device is then coaxially mounted on the guide tube to cut the threads from the fitting. Thereafter, the dethreading device and guide tube are removed and a new fitting is inserted onto the dethreaded fitting with the body of the new fitting overlaying the dethreaded portion. Finally, the main body of the new fitting is welded to the main body of the old fitting whereby a new threaded portion of the replacement fitting is precisely coaxial with the old threaded portion. If needed, a bushing is located on the dethreaded portion which is sized to fit snugly between the dethreaded portion and the new fitting. Preferably, the dethreading device includes a cutting tool which is moved incrementally in a radial direction whereby the threads are cut from the threaded portion of the failed fitting in increments.
Olson, Eric J.
2013-06-11T23:59:59.000Z
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
Huang, Weidong
2011-01-01T23:59:59.000Z
Surface slope error of a concentrator is one of the main factors influencing the performance of concentrated solar collectors: it causes deviation of the reflected ray and reduces the intercepted radiation. This paper presents a general equation to calculate the standard deviation of the reflected ray error from the slope error through geometric optics, applies the equation to calculate the standard deviation of the reflected ray error for 5 kinds of concentrated solar reflector, and provides typical results. The results indicate that the slope error is transferred to the reflected ray amplified by a factor of more than 2 when the incidence angle is greater than 0. The equation for the reflected ray error fits all reflection surfaces in general and can also be applied to control the error when designing an off-axis optical system.
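The more-than-twofold transfer has a simple limiting case: for an in-plane slope error, tilting the mirror normal by a small angle δ rotates the reflected ray by exactly 2δ (law of reflection). A quick numerical check of that limiting case (illustrative, not the paper's derivation):

```python
import numpy as np

def reflect(d, n):
    """Reflect direction d about unit normal n."""
    return d - 2.0 * (d @ n) * n

delta = 1e-3                                   # rad, small slope error
d = np.array([np.sin(0.3), -np.cos(0.3)])      # incoming ray, 0.3 rad incidence
n0 = np.array([0.0, 1.0])                      # ideal surface normal
n1 = np.array([np.sin(delta), np.cos(delta)])  # normal tilted by delta

r0 = reflect(d, n0)
r1 = reflect(d, n1)
dtheta = np.arccos(np.clip(r0 @ r1, -1.0, 1.0))
print(dtheta / delta)   # ~ 2.0: the reflected ray rotates by 2*delta
```

Out-of-plane slope components and oblique incidence add further geometric factors, which is why the paper's general standard-deviation transfer exceeds 2 for nonzero incidence angles.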
Kirchhoff, William H. [Surface and Microanalysis Science Division, National Institute of Standards and Technology, 100 Bureau Drive, Stop 8370, Gaithersburg, Maryland 20899-8370 (United States)
2012-09-15T23:59:59.000Z
The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516: 2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
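A minimal fitting sketch in the spirit of LFPF, using the symmetric special case of the logistic interface profile (the extended form adds an asymmetry parameter). The depth axis, parameters, and noise level are assumptions on synthetic data; the confidence estimates come from the fit covariance, as in the least-squares approach described above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Symmetric logistic interface profile:
#   y(x) = A / (1 + exp(-(x - x0) / w))
# parameters: amplitude A, position x0, width w.
def logistic(x, a, x0, w):
    return a / (1.0 + np.exp(-(x - x0) / w))

rng = np.random.default_rng(3)
x = np.linspace(0, 100, 120)                  # nm, assumed depth axis
y = logistic(x, 1.0, 55.0, 4.0) + rng.normal(0, 0.01, x.size)

popt, pcov = curve_fit(logistic, x, y, p0=[1, 50, 5])
perr = np.sqrt(np.diag(pcov))                 # 1-sigma parameter uncertainties
print("A, x0, w =", np.round(popt, 3))
print("std errors =", np.round(perr, 4))
```

Inspecting the residuals `y - logistic(x, *popt)` for structure is the step the abstract recommends for detecting systematic errors.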
Direction of Fit, I. L. Humberstone
Fitelson, Branden
January 1992 © Oxford University Press. 1. Three quotations, by way of introduction. In her seminal presentation of the distinction between what have since come widely to be called the two "directions of fit", Anscombe...
In situ repair of a failed compression fitting
Wolbert, Ronald R. (McKees Rocks, PA); Jandrasits, Walter G. (Pittsburgh, PA)
1986-01-01T23:59:59.000Z
A method and apparatus for the in situ repair of a failed compression fitting is provided. Initially, a portion of a guide tube is inserted coaxially in the bore of the compression fitting and locked therein. A close fit dethreading device is then coaxially mounted on the guide tube to cut the threads from the fitting. Thereafter, the dethreading device and guide tube are removed and a new fitting is inserted onto the dethreaded fitting with the body of the new fitting overlaying the dethreaded portion. Finally, the main body of the new fitting is welded to the main body of the old fitting whereby a new threaded portion of the replacement fitting is precisely coaxial with the old threaded portion. If needed, a bushing is located on the dethreaded portion which is sized to fit snugly between the dethreaded portion and the new fitting. Preferably, the dethreading device includes a cutting tool which is moved incrementally in a radial direction whereby the threads are cut from the threaded portion of the failed fitting in increments.
Thermodynamics of error correction
Sartori, Pablo
2015-01-01T23:59:59.000Z
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and dissipated work of the process. Its derivation is based on the second law of thermodynamics, hence its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon.
Paul B. Slater
2007-03-26T23:59:59.000Z
Wu and Sprung (Phys. Rev. E 48, 2595 (1993)) reproduced the first 500 nontrivial Riemann zeros, using a one-dimensional local potential model. They concluded -- and similarly van Zyl and Hutchinson (Phys. Rev. E 67, 066211 (2003)) -- that the potential possesses a fractal structure of dimension d=3/2. We model the nonsmooth fluctuating part of the potential by the alternating-sign sine series fractal of Berry and Lewis A(x,g). Setting d=3/2, we estimate the frequency parameter (gamma), plus an overall scaling parameter (sigma) we introduce. We search for that pair of parameters (gamma,sigma) which minimizes the least-squares fit S_{n}(gamma,sigma) of the lowest n eigenvalues -- obtained by solving the one-dimensional stationary (non-fractal) Schrodinger equation with the trial potential (smooth plus nonsmooth parts) -- to the lowest n Riemann zeros for n =25. For the additional cases we study, n=50 and 75, we simply set sigma=1. The fits obtained are compared to those gotten by using just the smooth part of the Wu-Sprung potential without any fractal supplementation. Some limited improvement -- 5.7261 vs. 6.39207 (n=25), 11.2672 vs. 11.7002 (n=50) and 16.3119 vs. 16.6809 (n=75) -- is found in our (non-optimized, computationally-bound) search procedures. The improvements are relatively strong in the vicinities of gamma=3 and (its square) 9. Further, we extend the Wu-Sprung semiclassical framework to include higher-order corrections from the Riemann-von Mangoldt formula (beyond the leading, dominant term) into the smooth potential.
Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint
Florita, A.; Hodge, B. M.; Milligan, M.
2012-08-01T23:59:59.000Z
The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
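The goodness-of-fit comparison among candidate error distributions can be sketched like this, with the Laplace distribution standing in for the heavier-tailed alternatives. The errors here are synthetic; nothing reproduces the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
errors = rng.laplace(loc=0.0, scale=0.05, size=5000)  # synthetic forecast errors

candidates = {
    "normal": stats.norm,
    "laplace": stats.laplace,
}
lls = {}
for name, dist in candidates.items():
    params = dist.fit(errors)                          # maximum-likelihood fit
    lls[name] = np.sum(dist.logpdf(errors, *params))   # log-likelihood score
    ks = stats.kstest(errors, dist.cdf, args=params).statistic
    print(f"{name}: log-lik = {lls[name]:.1f}, KS stat = {ks:.4f}")
```

For heavy-tailed forecast errors the Laplace fit scores a higher log-likelihood and a lower KS statistic than the normal fit, which is the kind of contrast among frequency-distribution alternatives the study quantifies. (Note the KS statistic is optimistic when parameters are fitted to the same data; a sketch only.)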
Stabilizer Formalism for Operator Quantum Error Correction
Poulin, D
2005-01-01T23:59:59.000Z
Operator quantum error correction is a recently developed theory that provides a generalized framework for active error correction and passive error avoiding schemes. In this paper, we describe these codes in the language of the stabilizer formalism of standard quantum error correction theory. This is achieved by adding a "gauge" group to the standard stabilizer definition of a code. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 3 of its 8 stabilizer generators, leading to a simpler decoding procedure without affecting its essential properties. This opens the path to possible improvement of the error threshold of fault tolerant quantum computing. We also derive a modified Hamming bound that applies to all stabilizer codes, including degenerate ones.
Abdelhamid Awad Aly Ahmed, Sala
2008-10-10T23:59:59.000Z
QUANTUM ERROR CONTROL CODES. A Dissertation by Salah Abdelhamid Awad Aly Ahmed, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Doctor of Philosophy, May 2008. Major Subject: Computer Science.
Thermodynamics of error correction
Pablo Sartori; Simone Pigolotti
2015-04-24T23:59:59.000Z
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and dissipated work of the process. Its derivation is based on the second law of thermodynamics, hence its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Model space truncation in shell-model fits
G. F. Bertsch; C. W. Johnson
2009-07-07T23:59:59.000Z
We carry out an interacting shell-model study of binding energies and spectra in the $sd$-shell nuclei to examine the effect of truncation of the shell-model spaces. Starting with a Hamiltonian defined in a larger space and truncating to the $sd$ shell, the binding energies are strongly affected by the truncation, but the effect on the excitation energies is an order of magnitude smaller. We then refit the matrix elements of the two-particle interaction to compensate for the space truncation, and find that it is easy to capture 90% of the binding energy shifts by refitting a few parameters. With the full parameter space of the two-particle Hamiltonian, we find that both the binding energies and the excitation energies can be fitted with a remaining residual error of about 5% of the average error from the truncation. Numerically, the rms initial error associated with our Hamiltonian is 3.4 MeV and the remaining residual error is 0.16 MeV. This is comparable to the empirical error found in $sd$-shell interacting shell-model fits to experimental data [br06].
Quantum Error Correction Workshop on
Grassl, Markus
Avoiding errors: a mathematical model via decomposition of the interaction algebra. Designed Hamiltonians; main idea: "perturb the system to make it more stable": fast (local) control operations yield an average Hamiltonian with more symmetry (cf. techniques from NMR)...
Clustered Error Correction of Codeword-Stabilized Quantum Codes
Yunfan Li; Ilya Dumer; Leonid P. Pryadko
2010-03-08T23:59:59.000Z
Codeword stabilized (CWS) codes are a general class of quantum codes that includes stabilizer codes and many families of non-additive codes with good parameters. For such a non-additive code correcting all t-qubit errors, we propose an algorithm that employs a single measurement to test all errors located on a given set of t qubits. Compared with exhaustive error screening, this reduces the total number of measurements required for error recovery by a factor of about 3^t.
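The factor of about 3^t comes from counting the nontrivial single-qubit Pauli errors (X, Y, Z) on each of the t fixed qubits. A one-line check of the claimed measurement-count reduction (illustrative arithmetic only, not the proposed measurement itself):

```python
from itertools import product

t = 3
paulis = ["X", "Y", "Z"]                    # nontrivial single-qubit errors
patterns = list(product(paulis, repeat=t))  # all error patterns on t fixed qubits
print(len(patterns))                        # 3**t = 27 checks, vs. 1 measurement
```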
Dynamic Prediction of Concurrency Errors
Sadowski, Caitlin
2012-01-01T23:59:59.000Z
Contents fragments: Must-Before Relation; Race Prediction; Implementation. Abstract: Dynamic Prediction of Concurrency Errors, a dissertation (UC Santa Cruz)...
Broader source: Energy.gov [DOE]
EnergyFit Nevada is a home energy retrofit program. The program assists homeowners in finding and contacting an energy assessment professional to perform an energy assessment and a certified...
Internship Contract (Includes Practicum)
Thaxton, Christopher S.
Internship Contract (Includes Practicum). Student's name / e-mail: _________. Internship Agency Contact: Agency Name: _________; Address / e-mail: _________. Location of Internship, if different from Agency: _________. Copies...
Approaches to Quantum Error Correction
Julia Kempe
2006-12-21T23:59:59.000Z
The purpose of this little survey is to give a simple description of the main approaches to quantum error correction and quantum fault-tolerance. Our goal is to convey the necessary intuitions both for the problems and their solutions in this area. After characterising quantum errors we present several error-correction schemes and outline the elements of a full fledged fault-tolerant computation, which works error-free even though all of its components can be faulty. We also mention alternative approaches to error-correction, so called error-avoiding or decoherence-free schemes. Technical details and generalisations are kept to a minimum.
Geothermal FIT Design: International Experience and U.S. Considerations
Rickerson, W.; Gifford, J.; Grace, R.; Cory, K.
2012-08-01T23:59:59.000Z
Developing power plants is a risky endeavor, whether conventional or renewable generation. Feed-in tariff (FIT) policies can be designed to address some of these risks, and their design can be tailored to geothermal electric plant development. Geothermal projects face risks similar to other generation project development, including finding buyers for power, ensuring adequate transmission capacity, competing to supply electricity and/or renewable energy certificates (RECs), securing reliable revenue streams, navigating the legal issues related to project development, and reacting to changes in existing regulations or incentives. Although FITs have not been created specifically for geothermal in the United States to date, a variety of FIT design options could reduce geothermal power plant development risks and are explored. This analysis focuses on the design of FIT incentive policies for geothermal electric projects and how FITs can be used to reduce risks (excluding drilling unproductive exploratory wells).
Pump apparatus including deconsolidator
Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew
2014-10-07T23:59:59.000Z
A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.
STATISTICAL MODEL OF SYSTEMATIC ERRORS: LINEAR ERROR MODEL
Rudnyi, Evgenii B.
Lecture notes on a statistical model of systematic errors (linear error model), E.B. Rudnyi, Department of Chemistry. Topics include an algorithm to maximize a likelihood function in the case of a non-linear physico-chemical model, the case of equal error variances, one-way classification, linear regression, and a real vaporization case study.
Unequal Error Protection Turbo Codes
Henkel, Werner
Unequal Error Protection Turbo Codes. Diploma thesis by Neele von Deetzen, Arbeitsbereich Nachrichtentechnik, School of Engineering and Science, Bremen, February 28th, 2005. Covers convolutional codes and turbo codes, including their structure.
Living Expenses (includes approximately
Maroncelli, Mark
Estimated 12-month costs for engineering programs and all other programs, plus graduate MBA/INFSY at Erie & Harrisburg (12 credits) and the Business Guarantee; figures do not include dependents' costs (Altoona, Berks, Erie, and Harrisburg campuses).
Optimal error estimates for corrected trapezoidal rules
Talvila, Erik
2012-01-01T23:59:59.000Z
Corrected trapezoidal rules are proved for $\int_a^b f(x)\,dx$ under the assumption that $f''\in L^p([a,b])$ for some $1\leq p\leq\infty$. Such quadrature rules involve the trapezoidal rule modified by the addition of a term $k[f'(a)-f'(b)]$. The coefficient $k$ in the quadrature formula is found that minimizes the error estimates. It is shown that when $f'$ is merely assumed to be continuous then the optimal rule is the trapezoidal rule itself. In this case error estimates are in terms of the Alexiewicz norm. This includes the case when $f''$ is integrable in the Henstock--Kurzweil sense or as a distribution. All error estimates are shown to be sharp for the given assumptions on $f''$. It is shown how to make these formulas exact for all cubic polynomials $f$. Composite formulas are computed for uniform partitions.
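The composite form of such a corrected rule can be sketched as follows. Here $k$ is taken to be the classical Euler-Maclaurin value $h^2/12$ for a uniform partition, an illustrative assumption rather than the paper's optimal coefficient; the function name is also an assumption.

```python
import math

def corrected_trapezoid(f, fprime, a, b, n):
    """Composite trapezoidal rule plus the endpoint-derivative correction
    (h**2 / 12) * (f'(a) - f'(b)); this is the classical Euler-Maclaurin
    choice of k for a uniform partition of [a, b] into n subintervals."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    trap = h * (0.5 * f(xs[0]) + sum(f(x) for x in xs[1:-1]) + 0.5 * f(xs[-1]))
    return trap + (h * h / 12.0) * (fprime(a) - fprime(b))

# For smooth f the correction lifts the error from O(h^2) to O(h^4):
approx = corrected_trapezoid(math.exp, math.exp, 0.0, 1.0, 8)
# compare against the exact value e - 1
```

For merely continuous $f'$ the abstract notes that this correction buys nothing and the plain trapezoidal rule is already optimal.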
Franklin Trouble Shooting and Error Messages
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Troubleshooting guide listing error messages by message or symptom, fault (user or system), and recommendation; for example, a job hitting the wallclock time limit (user or system fault): Submit...
Evaluation of respirator fit training by quantitative fit testing
Chute, Daniel Otis
1981-01-01T23:59:59.000Z
Thesis front matter and appendices: an instrument to obtain informed consent (Appendix A), a health screening form (Appendix B), a respirator quantitative fit test record (Appendix C), and respirator training materials (Appendix D), followed by tabulated log protection factor (PF) data for experimental and control subjects by test session.
Nested Quantum Error Correction Codes
Zhuo Wang; Kai Sun; Hen Fan; Vlatko Vedral
2009-09-28T23:59:59.000Z
The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available for constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short-length quantum codes with certain properties. Our method works for codes of all lengths and all distances, and is quite efficient for constructing optimal or near-optimal codes. The two main known methods for constructing new codes from old codes in quantum error-correction theory, concatenation and pasting, can be understood in the framework of nested quantum error correction codes.
Learning from FITS: Limitations in use in modern astronomical research
Thomas, Brian; Economou, Frossie; Greenfield, Perry; Hirst, Paul; Berry, David S; Bray, Erik; Gray, Norman; Muna, Demitri; Turner, James; de Val-Borro, Miguel; Santander-Vela, Juande; Shupe, David; Good, John; Berriman, G Bruce; Kitaeff, Slava; Fay, Jonathan; Laurino, Omar; Alexov, Anastasia; Landry, Walter; Masters, Joe; Brazier, Adam; Schaaf, Reinhold; Edwards, Kevin; Redman, Russell O; Marsh, Thomas R; Streicher, Ole; Norris, Pat; Pascual, Sergio; Davie, Matthew; Droettboom, Michael; Robitaille, Thomas; Campana, Riccardo; Hagen, Alex; Hartogh, Paul; Klaes, Dominik; Craiga, Matthew W; Homeier, Derek
2015-01-01T23:59:59.000Z
The Flexible Image Transport System (FITS) standard has been a great boon to astronomy, allowing observatories, scientists and the public to exchange astronomical information easily. The FITS standard, however, is showing its age. Developed in the late 1970s, the FITS authors made a number of implementation choices that, while common at the time, are now seen to limit its utility with modern data. The authors of the FITS standard could not anticipate the challenges which we are facing today in astronomical computing. Difficulties we now face include, but are not limited to, addressing the need to handle an expanded range of specialized data product types (data models), being more conducive to the networked exchange and storage of data, handling very large datasets, and capturing significantly more complex metadata and data relationships. There are members of the community today who find some or all of these limitations unworkable, and have decided to move ahead with storing data in other formats. If this frag...
Finding beam focus errors automatically
Lee, M.J.; Clearwater, S.H.; Kleban, S.D.
1987-01-01T23:59:59.000Z
An automated method for finding beam focus errors uses an optimization program called COMFORT-PLUS. The procedure for finding the correction factors with COMFORT-PLUS has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is intended as an off-line tool to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors. Another is that the program is not fully automated, since the user must decide a priori where to look for errors. (LEW)
Data& Error Analysis 1 DATA and ERROR ANALYSIS
Mukasyan, Alexander
Data & Error Analysis 1: DATA and ERROR ANALYSIS. Notes on performing the experiment and collecting data. Data analysis should NOT be delayed until all of the data have been collected; analyzing as you go helps one avoid spending an entire class collecting bad data because of a mistake.
Fitting orbits to tidal streams
James Binney
2008-02-11T23:59:59.000Z
Recent years have seen the discovery of many tidal streams through the Galaxy. Relatively straightforward observations of a stream allow one to deduce three phase-space coordinates of an orbit. An algorithm is presented that reconstructs the missing phase-space coordinates from these data. The reconstruction starts from assumed values of the Galactic potential and a distance to one point on the orbit, but with noise-free data the condition that energy be conserved on the orbit enables one to reject incorrect assumptions. The performance of the algorithm is investigated when errors are added to the input data that are comparable to those in published data for the streams of Pal 5. It is found that the algorithm returns distances and proper motions that are accurate to of order one percent, and enables one to reject quite reasonable but incorrect trial potentials. In practical applications it will be important to minimize errors in the input data, and there is considerable scope for doing this.
People Strategy Fit for Our Future
People Strategy: Fit for Our Future. People Strategy 2011-2016. The implications of the Comprehensive Spending Review settlements in each country will mean big changes for many of our people. Fit for Our Future: People Strategy 2011-2016, page 1, The Executive Board and the rest of my...
Identification of toroidal field errors in a modified betatron accelerator
Loschialpo, P. (Beam Physics Branch, Plasma Physics Division, Naval Research Laboratory, Washington, DC 20375 (United States)); Marsh, S.J. (SFA Inc., Landover, Maryland 20785 (United States)); Len, L.K.; Smith, T. (FM Technologies Inc., 10529-B Braddock Road, Fairfax, Virginia 22032 (United States)); Kapetanakos, C.A. (Beam Physics Branch, Plasma Physics Division, Naval Research Laboratory, Washington, DC 20375 (United States))
1993-06-01T23:59:59.000Z
A newly developed probe, having a 0.05% resolution, has been used to detect errors in the toroidal magnetic field of the NRL modified betatron accelerator. Measurements indicate that the radial field components (errors) are 0.1%--1% of the applied toroidal field. Such errors, in the typically 5 kG toroidal field, can excite resonances which drive the beam to the wall. Two sources of detected field errors are discussed. The first is due to the discrete nature of the 12 single turn coils which generate the toroidal field. Both measurements and computer calculations indicate that its amplitude varies from 0% to 0.2% as a function of radius. Displacement of the outer leg of one of the toroidal field coils by a few millimeters has a significant effect on the amplitude of this field error. Because of the uniform toroidal periodicity of these coils, this error is a good suspect for causing the excitation of the damaging l = 12 resonance seen in our experiments. The other source of field error is due to the current feed gaps in the vertical magnetic field coils. A magnetic field is induced inside the vertical field coils' conductor in the opposite direction of the applied toroidal field. Fringe fields at the gaps lead to additional field errors which have been measured as large as 1.0%. This source of field error, which exists at five toroidal locations around the modified betatron, can excite several integer resonances, including the l = 12 mode.
Static Detection of Disassembly Errors
Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K.
2009-10-13T23:59:59.000Z
Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.
Dynamic Prediction of Concurrency Errors
Sadowski, Caitlin
2012-01-01T23:59:59.000Z
Abstract fragments referencing SMT-based analyses: detecting errors in systems code using SMT solvers (Computer Aided Verification); producing data race witnesses by an SMT-based analysis (NASA Formal Methods); scalability relies on a modern SMT solver and an efficient...
Unequal error protection of subband coded bits
Devalla, Badarinath
1994-01-01T23:59:59.000Z
Source-coded data can be separated into different classes based on their susceptibility to channel errors. Errors in the important bits cause greater distortion in the reconstructed signal. This thesis presents an unequal error protection scheme...
Robust mixtures in the presence of measurement errors
Jianyong Sun; Ata Kaban; Somak Raychaudhury
2007-09-06T23:59:59.000Z
We develop a mixture-based approach to robust density modeling and outlier detection for experimental multivariate data that includes measurement error information. Our model is designed to infer atypical measurements that are not due to errors, aiming to retrieve potentially interesting peculiar objects. Since exact inference is not possible in this model, we develop a tree-structured variational EM solution. This compares favorably against a fully factorial approximation scheme, approaching the accuracy of a Markov-Chain-EM, while maintaining computational simplicity. We demonstrate the benefits of including measurement errors in the model, in terms of improved outlier detection rates in varying measurement uncertainty conditions. We then use this approach in detecting peculiar quasars from an astrophysical survey, given photometric measurements with errors.
Two-Layer Error Control Codes Combining Rectangular and Hamming Product Codes for Cache Error
Zhang, Meilin
We propose a novel two-layer error control code, combining error detection capability of rectangular codes and error correction capability of Hamming product codes in an efficient way, in order to increase cache error ...
Systematic errors in current quantum state tomography tools
Christian Schwemmer; Lukas Knips; Daniel Richart; Tobias Moroder; Matthias Kleinmann; Otfried Gühne; Harald Weinfurter
2014-07-22T23:59:59.000Z
Common tools for obtaining physical density matrices in experimental quantum state tomography are shown here to cause systematic errors. For example, using maximum likelihood or least squares optimization for state reconstruction, we observe a systematic underestimation of the fidelity and an overestimation of entanglement. A solution for this problem can be achieved by a linear evaluation of the data, yielding reliable and computationally simple bounds including error bars.
Harmonic Analysis Errors in Calculating Dipole,
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
to reduce the harmonic field calculation errors. A conformal transformation of a multipole magnet into a dipole reduces these errors. Dipole Magnet Calculations: A triangular...
Formalism for Simulation-based Optimization of Measurement Errors in High Energy Physics
Yuehong Xie
2009-04-29T23:59:59.000Z
Minimizing the errors on the physical parameters of interest should be the ultimate goal of any event selection optimization in high energy physics data analysis involving parameter determination. Quick and reliable error estimation is a crucial ingredient for realizing this goal. In this paper we derive a formalism for direct evaluation of measurement errors using the signal probability density function and large, fully simulated signal and background samples, without the need for data fitting and background modelling. We illustrate the elegance of the formalism in the case of event selection optimization for CP violation measurement in B decays. The implication of this formalism for choosing event variables for data analysis is discussed.
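The core idea, evaluating a parameter error directly from the signal probability density over simulated events rather than from a fit, can be sketched as an observed-Fisher-information estimate. The function name and one-parameter setup are illustrative assumptions; the paper's full formalism also folds in background samples.

```python
import math

def direct_error_estimate(dlnp_dtheta, simulated_events):
    """Estimate the statistical error on a parameter theta directly from
    fully simulated signal events: sigma ~ 1 / sqrt(observed Fisher
    information), with no data-fitting or background-modelling step.
    dlnp_dtheta(e) is the derivative of ln P(e; theta) for event e."""
    information = sum(dlnp_dtheta(e) ** 2 for e in simulated_events)
    return 1.0 / math.sqrt(information)

# Toy check: for a unit-width Gaussian mean, d ln P / d mu = (x - mu),
# so 100 events each at unit distance from mu give sigma = 1/sqrt(100) = 0.1
sigma = direct_error_estimate(lambda x: x, [1.0] * 100)
```

Because the estimate is a single pass over the simulated sample, it can be re-evaluated cheaply inside a selection-optimization loop, which is the use case the abstract highlights.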
Residual stresses and stress corrosion cracking in pipe fittings
Parrington, R.J.; Scott, J.J.; Torres, F.
1994-06-01T23:59:59.000Z
Residual stresses can play a key role in the SCC performance of susceptible materials in PWR primary water applications. Residual stresses are stresses stored within the metal that develop during deformation and persist in the absence of external forces or temperature gradients. Sources of residual stresses in pipe fittings include fabrication processes, installation and welding. There are a number of methods to characterize the magnitude and orientation of residual stresses. These include numerical analysis, chemical cracking tests, and measurement (e.g., X-ray diffraction, neutron diffraction, strain gage/hole drilling, strain gage/trepanning, strain gage/section and layer removal, and acoustics). This paper presents 400 °C steam SCC test results demonstrating that residual stresses in as-fabricated Alloy 600 pipe fittings are sufficient to induce SCC. Residual stresses present in as-fabricated pipe fittings are characterized by chemical cracking tests (stainless steel fittings tested in boiling magnesium chloride solution) and by the sectioning and layer removal (SLR) technique.
SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION
Lee, Khee-Gan, E-mail: lee@astro.princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)
2012-07-10T23:59:59.000Z
Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ−1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of ⟨δγ⟩ ≈ −0.1, while the error is increased to σ_γ ≈ 0.2 compared to σ_γ ≈ 0.1 in the absence of continuum errors.
Distributed Error Confinement Extended Abstract
Patt-Shamir, Boaz
These algorithms can serve as building blocks in more general reactive systems. Previous results in exploring locality in reactive systems were not error-confined, and relied on an assumption (not used in the current work) that seems inherent for voting in reactive networks; its analysis leads to an interesting combinatorial...
Install Removable Insulation on Valves and Fittings
Not Available
2006-01-01T23:59:59.000Z
This revised ITP tip sheet on installing removable insulation on valves and fittings provides how-to advice for improving the system using low-cost, proven practices and technologies.
Developing the next "wow" fitness product
Renjifo, Jorge F. (Renjifo-Mundo)
2007-01-01T23:59:59.000Z
The fitness industry has not seen a commercially successful revolution since the elliptical trainer in the mid 1990s. Newer products such as the Cybex Arc Trainer are vying to replicate this success, but are only slowly ...
Power of Alternative Fit Indices for Multiple Group Longitudinal Tests of Measurement Invariance
Short, Stephen David
2014-05-31T23:59:59.000Z
a Monte Carlo simulation to examine the power of change in alternative fit indices to detect two types of measurement invariance, weak and strong, across a variety of manipulated study conditions including sample size, sample size ratio, lack...
Acquaviva, Viviana; Gawiser, Eric
2015-01-01T23:59:59.000Z
We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties on the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the mu...
Energy efficiency of error correction for wireless communication
Havinga, Paul J.M.
Error control is an important issue for mobile computing systems. This includes energy spent in the physical radio transmission as well as the energy of redundancy computation. We will show that the computational cost...
Shrink fit effects on rotordynamic stability: experimental and theoretical study
Jafri, Syed Muhammad Mohsin
2007-09-17T23:59:59.000Z
, which acts as the interference fit joint. The unstable sub-synchronous vibrations originate from slippage in the shrink fit and the interference fit interfaces that develop friction forces, which act as destabilizing cross-coupled moments when the rotor...
Method and apparatus for detecting timing errors in a system oscillator
Gliebe, Ronald J. (Library, PA); Kramer, William R. (Bethel Park, PA)
1993-01-01T23:59:59.000Z
A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
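The comparison step described above can be modelled in software as follows; the sampled-signal representation and the function name are assumptions for illustration, not the patented circuit itself.

```python
def detect_timing_errors(oscillator, delayed):
    """Compare the system oscillator signal against the delayed generated
    signal, sample by sample; any position where the two differ is
    reported as a timing error (in the apparatus, a mismatch would
    produce the error signal that drives the LED indicator)."""
    return [i for i, (a, b) in enumerate(zip(oscillator, delayed)) if a != b]

# identical signals -> no error; a glitch at index 3 is flagged
assert detect_timing_errors([1, 0, 1, 0, 1, 0], [1, 0, 1, 0, 1, 0]) == []
assert detect_timing_errors([1, 0, 1, 0, 1, 0], [1, 0, 1, 1, 1, 0]) == [3]
```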
Biomass Resources Overview and Perspectives on Best Fits for...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Biomass Resources Overview and Perspectives on Best Fits for Fuel Cells. Biomass resources overview and...
ability physical fitness: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Summary (University of Cambridge, DSpace): MCMC Fits to LVS (Allanach, Dolan, Weber); The Standard Model and Beyond; From the Standard Model to String Theory; Global Fitting...
A Fitting Robot for Variational Analysis
Alan Ó Cais; Derek Leinweber; Selim Mahbub; Tony Williams
2008-12-18T23:59:59.000Z
We develop a robot algorithm to maximise the number of distinct states reliably extracted from correlator data using the variational analysis method. The robot explores the variational parameter space and attempts to remove, as far as possible, the human element from the fitting of the subsequent orthogonalised data.
Optimal data fitting: a moment approach
2007-01-06T23:59:59.000Z
Data fitting problems have long been very useful in many different application areas. A well-known .... natural to ask how good this moment relaxation could be as compared to the original problem and ... In this section, let us assume that fixed.
Neural network approach to parton distributions fitting
Andrea Piccione; Joan Rojo; for the NNPDF Collaboration
2005-10-18T23:59:59.000Z
We will show an application of neural networks to extract information on the structure of hadrons. A Monte Carlo over experimental data is performed to correctly reproduce data errors and correlations. A neural network is then trained on each Monte Carlo replica via a genetic algorithm. Results on the proton and deuteron structure functions, and on the nonsinglet parton distribution will be shown.
Representing cognitive activities and errors in HRA trees
Gertman, D.I.
1992-01-01T23:59:59.000Z
A graphic representation method is presented herein for adapting an existing technology--human reliability analysis (HRA) event trees, used to support event sequence logic structures and calculations--to include a representation of the underlying cognitive activity and corresponding errors associated with human performance. The analyst is presented with three potential means of representing human activity: the NUREG/CR-1278 HRA event tree approach; the skill-, rule- and knowledge-based paradigm; and the slips, lapses, and mistakes paradigm. The above approaches for representing human activity are integrated in order to produce an enriched HRA event tree -- the cognitive event tree system (COGENT)-- which, in turn, can be used to increase the analyst's understanding of the basic behavioral mechanisms underlying human error and the representation of that error in probabilistic risk assessment. Issues pertaining to the implementation of COGENT are also discussed.
Cosmographic Hubble fits to the supernova data
Celine Cattoen; Matt Visser
2008-09-03T23:59:59.000Z
The Hubble relation between distance and redshift is a purely cosmographic relation that depends only on the symmetries of a FLRW spacetime, but does not intrinsically make any dynamical assumptions. This suggests that it should be possible to estimate the parameters defining the Hubble relation without making any dynamical assumptions. To test this idea, we perform a number of inter-related cosmographic fits to the legacy05 and gold06 supernova datasets. Based on this supernova data, the "preponderance of evidence" certainly suggests an accelerating universe. However we would argue that (unless one uses additional dynamical and observational information) this conclusion is not currently supported "beyond reasonable doubt". As part of the analysis we develop two particularly transparent graphical representations of the redshift-distance relation -- representations in which acceleration versus deceleration reduces to the question of whether the relevant graph slopes up or down. Turning to the details of the cosmographic fits, three issues in particular concern us: First, the fitted value for the deceleration parameter changes significantly depending on whether one performs a chi^2 fit to the luminosity distance, proper motion distance or other suitable distance surrogate. Second, the fitted value for the deceleration parameter changes significantly depending on whether one uses the traditional redshift variable z, or what we shall argue is on theoretical grounds an improved parameterization y=z/(1+z). Third, the published estimates for systematic uncertainties are sufficiently large that they certainly impact on, and to a large extent undermine, the usual purely statistical tests of significance. We conclude that the supernova data should be treated with some caution.
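The improved redshift parameterization mentioned in the abstract is a one-line transformation; a small sketch (function names assumed for illustration) makes the motivation visible: y stays bounded in [0, 1) however large z grows, keeping series expansions of the Hubble relation under control at high redshift.

```python
def z_to_y(z):
    """Map the traditional redshift z in [0, inf) to y = z / (1 + z) in [0, 1)."""
    return z / (1.0 + z)

def y_to_z(y):
    """Inverse map, z = y / (1 - y), valid for y in [0, 1)."""
    return y / (1.0 - y)

# even an extreme redshift stays inside the unit interval
assert z_to_y(1089.0) < 1.0
```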
Countries Gasoline Prices Including Taxes
Gasoline and Diesel Fuel Update (EIA)
Selected Countries (U.S. dollars per gallon, including taxes)
Date      Belgium  France  Germany  Italy  Netherlands  UK    US
5/11/15   6.15     6.08    6.28     6.83   6.96         6.75  3.06
5/4/15    6.14     6.06    ...
Sponsorship includes: Agriculture in the
Nebraska-Lincoln, University of
Sponsorship includes: Agriculture in the Classroom; Douglas County Farm Bureau; Gifford Farm; University of Nebraska Agricultural Research and Development Center; University of Nebraska-Lincoln. The Awareness Coalition's purpose is to help youth, primarily from urban communities, become aware of agriculture.
Degeneracy and Discreteness in Cosmological Model Fitting
Teng, Huan-Yu; Hu, Huan-Chen; Zhang, Tong-Jie
2015-01-01T23:59:59.000Z
We explore the degeneracy and discreteness problems in the standard cosmological model (ΛCDM). We use the Observational Hubble Data (OHD) and the type Ia supernova (SNe Ia) data to study this issue. In order to describe the discreteness in fitting of data, we define a factor G to test the influence from each single data point and analyze the goodness of G. Our results indicate that a higher absolute value of G shows a better capability of distinguishing models, which means the parameters are restricted into smaller confidence intervals with a larger figure-of-merit evaluation. Consequently, we claim that the factor G is an effective way in model differentiation when using different models to fit the observational data.
Error handling strategies in multiphase inverse modeling
Finsterle, S.; Zhang, Y.
2010-12-01T23:59:59.000Z
Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
Statistical Error analysis of Nucleon-Nucleon phenomenological potentials
R. Navarro Perez; J. E. Amaro; E. Ruiz Arriola
2014-06-10T23:59:59.000Z
Nucleon-Nucleon potentials are commonplace in nuclear physics and are determined from a finite number of experimental data with limited precision sampling the scattering process. We study the statistical assumptions implicit in the standard least squares fitting procedure and apply, along with more conventional tests, a tail sensitive quantile-quantile test as a simple and confident tool to verify the normality of residuals. We show that the fulfilment of normality tests is linked to a judicious and consistent selection of a nucleon-nucleon database. These considerations prove crucial to a proper statistical error analysis and uncertainty propagation. We illustrate these issues by analyzing about 8000 proton-proton and neutron-proton scattering published data. This enables the construction of potentials meeting all statistical requirements necessary for statistical uncertainty estimates in nuclear structure calculations.
Automated ligand fitting by core-fragment fitting and extension into density
Terwilliger, Thomas C., E-mail: terwilliger@lanl.gov [Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545 (United States); Klei, Herbert [Bristol-Myers Squibb Pharmaceutical Research Institute, PO Box 4000, Princeton, New Jersey 08543-4000 (United States); Adams, Paul D.; Moriarty, Nigel W. [Lawrence Berkeley National Laboratory, One Cyclotron Road, BLDG 64R0121, Berkeley, CA 94720 (United States); Cohn, Judith D. [Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545 (United States)
2006-08-01T23:59:59.000Z
An automated ligand-fitting procedure has been developed and tested on 9327 ligands and (F_o − F_c)exp(iφ_c) difference density from macromolecular structures in the Protein Data Bank. A procedure for fitting ligands to electron-density maps by first fitting a core fragment of the ligand to density and then extending the remainder of the ligand into density is presented. The approach was tested by fitting 9327 ligands over a wide range of resolutions (most are in the range 0.8–4.8 Å) from the Protein Data Bank (PDB) into (F_o − F_c)exp(iφ_c) difference density calculated using entries from the PDB without these ligands. The procedure was able to place 58% of these 9327 ligands within 2 Å (r.m.s.d.) of the coordinates of the atoms in the original PDB entry for that ligand. The success of the fitting procedure was relatively insensitive to the size of the ligand in the range 10–100 non-H atoms and was only moderately sensitive to resolution, with the percentage of ligands placed near the coordinates of the original PDB entry in the range 58–73% over all resolution ranges tested.
Estimating IMU heading error from SAR images.
Doerry, Armin Walter
2009-03-01T23:59:59.000Z
Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but is done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.
Flux recovery and a posteriori error estimators
2010-05-20T23:59:59.000Z
bility and the local efficiency bounds for this estimator are established provided that the ... For simple model problems, the energy norm of the true error is equal.
Error Bounds and Metric Subregularity
2014-06-18T23:59:59.000Z
theory of error bounds of extended real-valued functions. Another objective is to ... Another observation is that neighbourhood V in the original definition of metric.
Wind Power Forecasting Error Distributions over Multiple Timescales (Presentation)
Hodge, B. M.; Milligan, M.
2011-07-01T23:59:59.000Z
This presentation presents some statistical analysis of wind power forecast errors and error distributions, with examples using ERCOT data.
On the evaluation of human error probabilities for post-initiating events
Presley, Mary R
2006-01-01T23:59:59.000Z
Quantification of human error probabilities (HEPs) for the purpose of human reliability assessment (HRA) is very complex. Because of this complexity, the state of the art includes a variety of HRA models, each with its own ...
Probabilistic growth of large entangled states with low error accumulation
Yuichiro Matsuzaki; Simon C Benjamin; Joseph Fitzsimons
2009-08-03T23:59:59.000Z
The creation of complex entangled states, resources that enable quantum computation, can be achieved via simple 'probabilistic' operations which are individually likely to fail. However, typical proposals exploiting this idea carry a severe overhead in terms of the accumulation of errors. Here we describe a method that can rapidly generate large entangled states with an error accumulation that depends only logarithmically on the failure probability. We find that the approach may be practical for success rates in the sub-10% range, while ultimately becoming unfeasible at lower rates. The assumptions that we make, including parallelism and high connectivity, are appropriate for real systems including measurement-induced entanglement. This result therefore shows the feasibility of real devices based on such an approach.
Error Mining on Dependency Trees Claire Gardent
Paris-Sud XI, Université de
Error Mining on Dependency Trees. Claire Gardent, CNRS, LORIA, UMR 7503, Vandœuvre-lès-Nancy, F-54600, France. shashi.narayan@loria.fr. Abstract: In recent years, error mining approaches were ... propose an algorithm for mining trees and apply it to detect the most likely sources of generation ...
SEU induced errors observed in microprocessor systems
Asenek, V.; Underwood, C.; Oldfield, M. [Univ. of Surrey, Guildford (United Kingdom), Surrey Space Centre]; Velazco, R.; Rezgui, S.; Cheynet, P. [TIMA Lab., Grenoble (France)]; Ecoffet, R. [Centre National d'Etudes Spatiales, Toulouse (France)]
1998-12-01T23:59:59.000Z
In this paper, the authors present software tools for predicting the rate and nature of observable SEU induced errors in microprocessor systems. These tools are built around a commercial microprocessor simulator and are used to analyze real satellite application systems. Results obtained from simulating the nature of SEU induced errors are shown to correlate with ground-based radiation test data.
Remarks on statistical errors in equivalent widths
Klaus Vollmann; Thomas Eversberg
2006-07-03T23:59:59.000Z
Equivalent width measurements for rapid line variability in atomic spectral lines are degraded by increasing error bars with shorter exposure times. We derive an expression for the error of the line equivalent width $\sigma(W_\lambda)$ with respect to pure photon noise statistics and provide a correction value for previous calculations.
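The way per-pixel photon noise propagates into an equivalent-width error can be illustrated by Monte Carlo on a toy absorption line (all quantities below are hypothetical; this is a numerical sketch, not the paper's analytic derivation):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy Gaussian absorption line on a flat unit continuum.
wavelength = np.linspace(-10.0, 10.0, 201)    # arbitrary wavelength units
dlam = wavelength[1] - wavelength[0]
continuum = 1.0
line = 1.0 - 0.5 * np.exp(-0.5 * (wavelength / 2.0) ** 2)

def equivalent_width(flux):
    # W_lambda = integral of (1 - F/F_c) d(lambda), as a discrete sum
    return np.sum(1.0 - flux / continuum) * dlam

sigma_flux = 0.02                              # assumed per-pixel photon noise
trials = np.array([
    equivalent_width(line + rng.normal(0.0, sigma_flux, line.size))
    for _ in range(2000)
])
w_true = equivalent_width(line)                # ~2.507 for this toy profile
w_err = trials.std()                           # Monte Carlo estimate of sigma(W)
```

For independent pixel noise the scatter should match the simple propagation result sigma(W) = sigma_flux * dlam * sqrt(N_pix), here about 0.028; the paper's point is that the photon-noise statistics must be treated carefully when exposure times get short.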
Stabilizer Formalism for Operator Quantum Error Correction
David Poulin
2006-06-14T23:59:59.000Z
Operator quantum error correction is a recently developed theory that provides a generalized framework for active error correction and passive error avoiding schemes. In this paper, we describe these codes in the stabilizer formalism of standard quantum error correction theory. This is achieved by adding a "gauge" group to the standard stabilizer definition of a code that defines an equivalence class between encoded states. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 4 of its 8 stabilizer generators, leading to a simpler decoding procedure and a wider class of logical operations without affecting its essential properties. This opens the path to possible improvements of the error threshold of fault-tolerant quantum computing.
The FIT Model - Fuel-cycle Integration and Tradeoffs
Steven J. Piet; Nick R. Soelberg; Samuel E. Bays; Candido Pereira; Layne F. Pincock; Eric L. Shaber; Melissa C. Teague; Gregory M. Teske; Kurt G. Vedros
2010-09-01T23:59:59.000Z
All mass streams from fuel separation and fabrication are products that must meet some set of product criteria – fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the “system losses study” team that developed it [Shropshire2009, Piet2010] are an initial step by the FCR&D program toward a global analysis that accounts for the requirements and capabilities of each component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. The question originally posed to the “system losses study” was the cost of separation, fuel fabrication, waste management, etc. versus the separation efficiency. In other words, are the costs associated with marginal reductions in separations losses (or improvements in product recovery) justified by the gains in the performance of other systems? We have learned that that is the wrong question. The right question is: how does one adjust the compositions and quantities of all mass streams, given uncertain product criteria, to balance competing objectives including cost? FIT is a method to analyze different fuel cycles using common bases to determine how chemical performance changes in one part of a fuel cycle (say used fuel cooling times or separation efficiencies) affect other parts of the fuel cycle. FIT estimates impurities in fuel and waste via a rough estimate of physics and mass balance for a set of technologies. If feasibility is an issue for a set, as it is for “minimum fuel treatment” approaches such as melt refining and AIROX, it can help to make an estimate of how performances would have to change to achieve feasibility.
A computational model of event segmentation from perceptual prediction
Zacks, Jeffrey M.
Jeremy R. Reynolds, Jeffrey M. Zacks, and Todd S. Braver, Washington University. People tend ...
A technique for human error analysis (ATHEANA)
Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others
1996-05-01T23:59:59.000Z
Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.
Extended Fractal Fits to Riemann Zeros
Paul B. Slater
2007-05-21T23:59:59.000Z
We extend to the first 300 Riemann zeros the form of analysis reported by us in arXiv:math-ph/0606005, in which the largest study had involved the first 75 zeros. Again, we model the nonsmooth fluctuating part of the Wu-Sprung potential, which reproduces the Riemann zeros, by the alternating-sign sine series fractal of Berry and Lewis A(x,g). Setting the fractal dimension equal to 3/2, we estimate the frequency parameter (g), plus an overall scaling parameter (s) introduced. We search for that pair of parameters (g,s) which minimizes the least-squares fit of the lowest 300 eigenvalues -- obtained by solving the one-dimensional stationary (non-fractal) Schrodinger equation with the trial potential (smooth plus nonsmooth parts) -- to the first 300 Riemann zeros. We randomly sample values within the rectangle 0 ... fractal supplementation. Some limited improvement is again found. There are two (primary and secondary) quite distinct subdomains, in which the values giving improvements in fit are concentrated.
Rapid world modeling: Fitting range data to geometric primitives
Feddema, J.; Little, C.
1996-12-31T23:59:59.000Z
For the past seven years, Sandia National Laboratories has been active in the development of robotic systems to help remediate DOE's waste sites and decommissioned facilities. Some of these facilities have high levels of radioactivity which prevent manual clean-up. Tele-operated and autonomous robotic systems have been envisioned as the only suitable means of removing the radioactive elements. World modeling is defined as the process of creating a numerical geometric model of a real world environment or workspace. This model is often used in robotics to plan robot motions which perform a task while avoiding obstacles. In many applications where the world model does not exist ahead of time, structured lighting, laser range finders, and even acoustical sensors have been used to create three-dimensional maps of the environment. These maps consist of thousands of range points which are difficult to handle and interpret. This paper presents a least squares technique for fitting range data to planar and quadric surfaces, including cylinders and ellipsoids. Once fit to these primitive surfaces, the amount of data associated with a surface is greatly reduced, by up to three orders of magnitude, thus allowing for more rapid handling and analysis of world data.
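The data reduction described above can be sketched for the simplest primitive, fitting a plane z = a·x + b·y + c to synthetic range points by linear least squares (illustrative data and parameter values, not Sandia's implementation; quadric surfaces would add second-order terms to the design matrix):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "range data": 500 noisy samples of the plane z = 2x - 3y + 5.
x = rng.uniform(-1.0, 1.0, 500)
y = rng.uniform(-1.0, 1.0, 500)
z = 2.0 * x - 3.0 * y + 5.0 + rng.normal(0.0, 0.01, 500)

# Least-squares fit of z = a*x + b*y + c: hundreds of range points are
# reduced to three plane parameters.
A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
```

The recovered (a, b, c) closely match the generating plane, and storing three coefficients instead of 500 points is exactly the several-orders-of-magnitude reduction the abstract refers to.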
Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling
Louisiana State University; Balman, Mehmet; Kosar, Tevfik
2010-10-27T23:59:59.000Z
Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently, and sometimes are not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.
Quantum error-correcting codes and devices
Gottesman, Daniel (Los Alamos, NM)
2000-10-03T23:59:59.000Z
A method of forming quantum error-correcting codes by first forming a stabilizer for a Hilbert space. A quantum information processing device can be formed to implement such quantum codes.
Organizational Errors: Directions for Future Research
Carroll, John Stephen
The goal of this chapter is to promote research about organizational errors—i.e., the actions of multiple organizational participants that deviate from organizationally specified rules and can potentially result in adverse ...
Quantum Error Correction for Quantum Memories
Barbara M. Terhal
2015-01-20T23:59:59.000Z
Active quantum error correction using qubit stabilizer codes has emerged as a promising, but experimentally challenging, engineering program for building a universal quantum computer. In this review we consider the formalism of qubit stabilizer and subsystem stabilizer codes and their possible use in protecting quantum information in a quantum memory. We review the theory of fault-tolerance and quantum error-correction, discuss examples of various codes and code constructions, the general quantum error correction conditions, the noise threshold, the special role played by Clifford gates and the route towards fault-tolerant universal quantum computation. The second part of the review is focused on providing an overview of quantum error correction using two-dimensional (topological) codes, in particular the surface code architecture. We discuss the complexity of decoding and the notion of passive or self-correcting quantum memories. The review does not focus on a particular technology but discusses topics that will be relevant for various quantum technologies.
On the Fourier Transform Approach to Quantum Error Control
Hari Dilip Kumar
2012-08-24T23:59:59.000Z
Quantum codes are subspaces of the state space of a quantum system that are used to protect quantum information. Some common classes of quantum codes are stabilizer (or additive) codes, non-stabilizer (or non-additive) codes obtained from stabilizer codes, and Clifford codes. These are analyzed in a framework using the Fourier transform on finite groups, the finite group in question being a subgroup of the quantum error group considered. All the classes of codes that can be obtained in this framework are explored, including codes more general than Clifford codes. The error detection properties of one of these more general classes ("direct sums of translates of Clifford codes") are characterized. Example codes are constructed, and computer code search results are presented and analysed.
Parameters and error of a theoretical model
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01T23:59:59.000Z
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs.
Evaluating operating system vulnerability to memory errors.
Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Mueller, Frank (North Carolina State University); Fiala, David (North Carolina State University); Brightwell, Ronald Brian
2012-05-01T23:59:59.000Z
Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.
The Error-Pattern-Correcting Turbo Equalizer
Alhussien, Hakim
2010-01-01T23:59:59.000Z
The error-pattern correcting code (EPCC) is incorporated in the design of a turbo equalizer (TE) with the aim of correcting dominant error events of the inter-symbol interference (ISI) channel at the output of its matching Viterbi detector. By targeting the low Hamming-weight interleaved errors of the outer convolutional code, which are responsible for low Euclidean-weight errors in the Viterbi trellis, the turbo equalizer with an error-pattern correcting code (TE-EPCC) exhibits a much lower bit-error rate (BER) floor compared to the conventional non-precoded TE, especially for high-rate applications. A maximum-likelihood upper bound is developed on the BER floor of the TE-EPCC for a generalized two-tap ISI channel, in order to study the TE-EPCC's signal-to-noise ratio (SNR) gain for various channel conditions and design parameters. In addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is compared to demonstrate the present TE's superiority for short interleaver lengths and high coding rates.
Computation of the Fourier parameters of RR Lyrae stars by template fitting
G. Kovacs; G. Kupi
2006-10-27T23:59:59.000Z
Due to the importance of accurate Fourier parameters, we devise a method that is more appropriate for deriving these parameters from low-quality data than the traditional Fourier fitting. Based on the accurate light curves of 248 fundamental mode RR Lyrae stars, we test the power of a full-fledged implementation of the template method in the computation of the Fourier decomposition. The applicability of the method is demonstrated also on datasets of filter passbands different from that of the template set. We examine in more detail the question of the estimation of Fourier-based iron abundance [Fe/H] and average brightness. We get, for example, for light curves sampled randomly in 30 data points with sigma=0.03 mag observational noise that optimized direct Fourier fits yield sigma_[Fe/H]=0.33, whereas the template fits result in sigma_[Fe/H]=0.18. Tests made on the RR Lyrae database of the Large Magellanic Cloud (LMC) of the Optical Gravitational Lensing Experiment (OGLE) support the applicability of the method on real photometric time series. These tests also show that the dominant part of the error in estimating the average brightness comes from other sources, most probably from crowding effects, even for under-sampled light curves.
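The "direct Fourier fit" the template method is compared against can be sketched as a linear least-squares problem at a fixed period (a synthetic light curve with hypothetical parameters, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic periodic light curve with known low-order Fourier structure.
period = 0.5                        # days (hypothetical)
t = rng.uniform(0.0, 10.0, 300)     # irregular sampling times
phase = 2.0 * np.pi * t / period
mag = (15.0 + 0.30 * np.sin(phase) + 0.10 * np.sin(2.0 * phase + 0.5)
       + rng.normal(0.0, 0.005, t.size))

# Direct Fourier fit: m(t) = A0 + sum_k [a_k sin(k*phase) + b_k cos(k*phase)],
# which is linear in the coefficients once the period is fixed.
order = 2
cols = [np.ones_like(t)]
for k in range(1, order + 1):
    cols += [np.sin(k * phase), np.cos(k * phase)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, mag, rcond=None)

A0 = coef[0]                        # mean magnitude
A1 = np.hypot(coef[1], coef[2])     # amplitude of the first harmonic
```

With dense, low-noise sampling this recovers the injected amplitudes well; the paper's point is that with only ~30 noisy points such a direct fit degrades, and fitting a template light-curve shape instead roughly halves the resulting [Fe/H] scatter.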
A systems approach to reducing utility billing errors
Ogura, Nori
2013-01-01T23:59:59.000Z
Many methods for analyzing the possibility of errors are practiced by organizations who are concerned about safety and error prevention. However, in situations where the error occurrence is random and difficult to track, ...
Error Detection and Recovery for Robot Motion Planning with Uncertainty
Donald, Bruce Randall
1987-07-01T23:59:59.000Z
Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and uncertainty in the geometry of the environment. The last, which is called model error, has ...
Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results
Clark, E.L.
1994-07-01T23:59:59.000Z
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
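The Taylor-series propagation described above can be illustrated for the simplest case, a ratio of two measured pressures with independent errors (a generic sketch; the report's equations cover many more coefficients and the Mach-number sensitivities):

```python
import math

def propagate_ratio_error(p1, p2, sigma_p1, sigma_p2):
    """First-order Taylor-series error propagation for r = p1 / p2
    with independent measurement errors on p1 and p2."""
    # Sensitivity coefficients: partial derivatives of r with respect
    # to each measured quantity.
    dr_dp1 = 1.0 / p2
    dr_dp2 = -p1 / p2**2
    # Root-sum-square combination of the independent contributions.
    return math.hypot(dr_dp1 * sigma_p1, dr_dp2 * sigma_p2)

# Example: 1% errors on both pressures give a sqrt(2) * 1% ~ 1.4% ratio error.
r = 2.0 / 4.0
sigma_r = propagate_ratio_error(2.0, 4.0, 0.02, 0.04)
```

The root-sum-square form follows directly from truncating the Taylor expansion of r at first order and assuming uncorrelated errors, which is the model the report's sensitivity coefficients formalize.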
Lee, Khee-Gan; Spergel, David N. [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Suzuki, Nao, E-mail: lee@astro.princeton.edu [E.O. Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States)
2012-02-15T23:59:59.000Z
Continuum fitting is an important aspect of Lyα forest science, since errors in the derived optical depths scale with the fractional continuum error. However, traditional methods of estimating continua in noisy and moderate-resolution spectra (e.g., Sloan Digital Sky Survey, SDSS; S/N ≲ 10 per pixel and R ≈ 2000), such as power-law extrapolation or dividing by the mean spectrum, achieve no better than ≈15% rms accuracy. To improve on this, we introduce mean-flux-regulated principal component analysis (MF-PCA) continuum fitting. In this technique, PCA fitting is carried out redward of the quasar Lyα line in order to provide a prediction for the shape of the Lyα forest continuum. The slope and amplitude of this continuum prediction are then corrected using external constraints on the Lyα forest mean flux. This requires prior knowledge of the mean flux, ⟨F⟩, but significantly improves the accuracy of the flux transmission, F ≡ exp(−τ), estimated from each pixel. From tests on mock spectra, we find that MF-PCA reduces the errors to 8% rms in S/N ≈ 2 spectra, and <5% rms in spectra with S/N ≳ 5. The residual Fourier power in the continuum is decreased by a factor of a few in comparison with dividing by the mean continuum, enabling Lyα flux power spectrum measurements to be extended to ≈2× larger scales. Using this new technique, we make available continuum fits for 12,069 z > 2.3 Lyα forest spectra from SDSS Data Release 7 for use by the community. This technique is also applicable to future releases of the ongoing Baryon Oscillation Spectroscopic Survey, which obtains ≈150,000 Lyα forest spectra at low signal-to-noise (S/N ≈ 2).
Gershgorin, B. [Department of Mathematics and Center for Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, NY 10012 (United States); Harlim, J. [Department of Mathematics, North Carolina State University, NC 27695 (United States)], E-mail: jharlim@ncsu.edu; Majda, A.J. [Department of Mathematics and Center for Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, NY 10012 (United States)
2010-01-01T23:59:59.000Z
The filtering and predictive skill for turbulent signals is often limited by the lack of information about the true dynamics of the system and by our inability to resolve the assumed dynamics with sufficiently high resolution using the current computing power. The standard approach is to use a simple yet rich family of constant parameters to account for model errors through parameterization. This approach can have significant skill by fitting the parameters to some statistical feature of the true signal; however, in the context of real-time prediction, such a strategy performs poorly when intermittent transitions to instability occur. Alternatively, we need a set of dynamic parameters. One strategy for estimating parameters on the fly is stochastic parameter estimation through partial observations of the true signal. In this paper, we extend our newly developed stochastic parameter estimation strategy, the Stochastic Parameterization Extended Kalman Filter (SPEKF), to filtering sparsely observed spatially extended turbulent systems which exhibit abrupt stability transitions from time to time despite a stable average behavior. For our primary numerical example, we consider a turbulent system of externally forced barotropic Rossby waves with instability introduced through intermittent negative damping. We find high filtering skill of SPEKF applied to this toy model even in the case of very sparse observations (with only 15 out of 105 grid points observed) and with unspecified external forcing and damping. Additive and multiplicative bias corrections are used to learn the unknown features of the true dynamics from observations. We also present a comprehensive study of predictive skill in the one-mode context, including robustness toward variation of stochastic parameters, imperfect initial conditions and finite ensemble effects. Furthermore, the proposed stochastic parameter estimation scheme applied to the same spatially extended Rossby wave system demonstrates high predictive skill, comparable with the skill of the perfect model for a duration of many eddy turnover times, especially in the unstable regime.
Running jobs error: "inet_arp_address_lookup"
Resolved: Running jobs error: "inet_arp_address_lookup". September 22, 2013, by Helen He. Symptom: After the Hopper August 14...
Global Error bounds for systems of convex polynomials over ...
2011-11-11T23:59:59.000Z
This paper is devoted to studying the Lipschitzian/Hölderian type global error ... set is not necessarily compact, we obtain the Hölder global error bound result.
Estimating the error in simulation prediction over the design space
Shinn, R. (Rachel); Hemez, F. M. (François M.); Doebling, S. W. (Scott W.)
2003-01-01T23:59:59.000Z
This study addresses the assessment of accuracy of simulation predictions. A procedure is developed to validate a simple non-linear model defined to capture the hardening behavior of a foam material subjected to a short-duration transient impact. Validation means that the predictive accuracy of the model must be established, not just in the vicinity of a single testing condition, but for all settings or configurations of the system. The notion of validation domain is introduced to designate the design region where the model's predictive accuracy is appropriate for the application of interest. Techniques brought to bear to assess the model's predictive accuracy include test-analysis correlation, calibration, bootstrapping and sampling for uncertainty propagation and metamodeling. The model's predictive accuracy is established by training a metamodel of prediction error. The prediction error is not assumed to be systematic. Instead, it depends on which configuration of the system is analyzed. Finally, the prediction error's confidence bounds are estimated by propagating the uncertainty associated with specific modeling assumptions.
Integrating human related errors with technical errors to determine causes behind offshore accidents
Aamodt, Agnar
Integrating human related errors with technical errors to determine causes behind offshore accidents. In the analysis of offshore accidents there is a continuous focus on safety improvements. An improved evaluation method ... The concepts in the model are structured in hierarchical categories, based on well-established knowledge
Mather, Mara
Running head: STEREOTYPE THREAT REDUCES MEMORY ERRORS. Stereotype threat can reduce older adults' ... Email: barbersa@usc.edu. Abstract (144 words): Stereotype threat often incurs the cost of reducing the amount of information
Uncertainty and error in computational simulations
Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.
1997-10-01T23:59:59.000Z
The present paper addresses the question: ``What are the general classes of uncertainty and error sources in complex, computational simulations?`` This is the first step of a two step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built upon combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general, but examples are given for the fields of computational fluid dynamics and heat transfer. They argue that a clear distinction should be made between uncertainty and error that can arise in each of these phases. The present definitions for uncertainty and error are inadequate and, therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, they discuss a coupled-physics example simulation.
Design error diagnosis and correction in digital circuits
Nayak, Debashis
1998-01-01T23:59:59.000Z
, each primary output would impose a constraint on the on-set and off-set. These constraints should be combined together to derive the final on-set and off-set of the new function. Proposition 2: [9, 18, 17] Let i be the index of the primary outputs... to this equation are deleted. The work in [17] is also based on Boolean comparisons and applies to multiple errors. Overall, their method does not guarantee a solution. Test-vector simulation methods proposed for the DEDC problem include [20, 22, 26]. In [20...
Laser Phase Errors in Seeded FELs
Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC
2012-03-28T23:59:59.000Z
Harmonic seeding of free electron lasers has attracted significant attention from the promise of transform-limited pulses in the soft X-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.
On the Error in QR Integration
Dieci, Luca; Van Vleck, Erik
2008-03-07T23:59:59.000Z
] . . . [R(t2, t1) + E2][R(t1, t0) + E1]R(t0), k = 1, 2, . . . , where Q(tk) is the exact Q-factor at tk and the triangular transitions R(tj, tj-1) are also the exact ones. Moreover, the factors Ej, j = 1, . . . , k, are bounded in norm by the local error... committed during integration of the relevant differential equations; see Theorems 3.1 and 3.16. We will henceforth simply write (2.7) ‖Ej‖ ≤ ε, j = 1, 2, . . . , and stress that ε is computable, in fact controllable, in terms of local error tolerances...
Fitting the Galaxy Rotation Curves: Strings versus NFW profile
Yeuk-Kwan E. Cheung; Feng Xu
2008-10-14T23:59:59.000Z
A remarkable fit of galaxy rotation curves is achieved using a simple model from string theory. The rotation curves of the same group of galaxies are also fit using a dark matter model with the generalized Navarro-Frenk-White profile for comparison. The string model utilizes three free parameters vs. five in the dark matter model. The average chi-squared of the string model fit is 1.649 while that of the dark matter model is 1.513. The generalized NFW profile fits marginally better at the price of two more free parameters.
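The trade-off described above (a marginally better chi-squared at the price of extra free parameters) is commonly quantified with the reduced chi-squared, which divides by the degrees of freedom. A minimal sketch, using synthetic data and hypothetical parameter counts rather than the paper's actual galaxies or models:

```python
import numpy as np

def reduced_chi_squared(observed, predicted, sigma, n_params):
    """Chi-squared per degree of freedom for a fitted model."""
    chi2 = np.sum(((observed - predicted) / sigma) ** 2)
    dof = len(observed) - n_params
    return chi2 / dof

# Synthetic rotation-curve data (illustrative only).
radii = np.linspace(1.0, 20.0, 40)
noise = np.random.default_rng(0).normal(0.0, 5.0, 40)
v_obs = 200.0 * radii / np.sqrt(radii**2 + 4.0) + noise
sigma = np.full(40, 5.0)

# The same prediction scored against two hypothetical parameter counts:
# with fewer parameters there are more degrees of freedom, so an equally
# good raw chi-squared yields a smaller reduced chi-squared.
v_model = 200.0 * radii / np.sqrt(radii**2 + 4.0)
chi2_simple = reduced_chi_squared(v_obs, v_model, sigma, n_params=3)
chi2_complex = reduced_chi_squared(v_obs, v_model, sigma, n_params=5)
```

This is why a model with two fewer parameters can be competitive even when its raw residuals are slightly larger.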
A self-checking fiber optic dosimeter for monitoring common errors in brachytherapy applications
Yin, Y.; Lambert, J.; Yang, S.; McKenzie, D. R.; Jackson, M.; Suchowerska, N. (Physics School, University of Sydney, New South Wales 2006, Australia; Department of Radiation Oncology, Royal Prince Alfred Hospital, New South Wales 2050, Australia)
2009-07-15T23:59:59.000Z
Scintillation dosimetry with optical fiber readout [fiber optic dosimetry (FOD)] requires accurate measurement of light intensity. It is therefore vulnerable to loss of calibration if any changes occur in the efficiency of the optical pathway between the scintillator and the light detector. The authors show in this article that common types of errors that arise during clinical use for brachytherapy applications can be quantified using a light emitting diode to stimulate the scintillator, the so-called LED-FOD method, in an integrated and easy-to-use control unit that incorporates a compact peripheral component interconnect extension for instrumentation. Common sources of error include bending and mechanical compression of the fiber optic components and changes in the temperature of the scintillator. The authors show that the method can detect all the common errors studied in this work and that different types of errors can result in different correlations between the LED stimulated signal and the brachytherapy source signal. For a single-type error the LED-FOD can be used easily for system diagnosis and validation with the possibility to correct the dosimeter reading if the correlation between the LED stimulated signal and the brachytherapy source signal can be defined. For more complex errors, resulting from two or more errors occurring simultaneously, the LED-FOD method can also allow the clinician to make a judgment on the reliability of the dosimeter reading. This self-checking method can enhance the clinical robustness of the FOD for achieving accurate dose control.
Quantification of model mismatch errors of the dynamic energy distribution in a stirred-tank reactor
Kimmich, Mark Raymond
1987-01-01T23:59:59.000Z
experiments Moo-Young and Chan (1971) proposed a model which consisted of a dual series of well-mixed regions and dead space in series with a plug-flow region. The system studied consisted of a viscous fluid flowing in a cylindrical tank fitted with four... QUANTIFICATION OF MODEL MISMATCH ERRORS OF THE DYNAMIC ENERGY DISTRIBUTION IN A STIRRED-TANK REACTOR. A Thesis by MARK RAYMOND KIMMICH, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement...
Error analysis of nuclear forces and effective interactions
R. Navarro Perez; J. E. Amaro; E. Ruiz Arriola
2014-09-04T23:59:59.000Z
The Nucleon-Nucleon interaction is the starting point for ab initio Nuclear Structure and Nuclear reactions calculations. Those are effectively carried out via effective interactions fitting scattering data up to a maximal center of mass momentum. However, NN interactions are subject to statistical and systematic uncertainties which are expected to propagate and have some impact on the predictive power and accuracy of theoretical calculations, regardless of the numerical accuracy of the method used to solve the many body problem. We stress the necessary conditions required for a correct and self-consistent statistical interpretation of the discrepancies between theory and experiment which enable a subsequent statistical error propagation and correlation analysis. We comprehensively discuss a stringent and recently proposed tail-sensitive normality test and provide a simple recipe to implement it. As an application, we analyze the deduced uncertainties and correlations of effective interactions in terms of Moshinsky-Skyrme parameters and effective field theory counterterms as derived from the bare NN potential containing One-Pion-Exchange and Chiral Two-Pion-Exchange interactions inferred from scattering data.
High Performance Dense Linear System Solver with Soft Error Resilience
Dongarra, Jack
High Performance Dense Linear System Solver with Soft Error Resilience. Peng Du, Piotr Luszczek ... systems, and in some scientific applications C/R is not applicable for soft errors at all due to error ... high performance dense linear system solver with soft error resilience. By adopting a mathematical
Distribution of Wind Power Forecasting Errors from Operational Systems (Presentation)
Hodge, B. M.; Ela, E.; Milligan, M.
2011-10-01T23:59:59.000Z
This presentation offers new data and statistical analysis of wind power forecasting errors in operational systems.
Verifying Volume Rendering Using Discretization Error Analysis
Kirby, Mike
Verifying Volume Rendering Using Discretization Error Analysis. Tiago Etiene, Daniel Jönsson, Timo... We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization
MEASUREMENT AND CORRECTION OF ULTRASONIC ANEMOMETER ERRORS
Heinemann, Detlev
commonly show systematic errors depending on wind speed due to inaccurate ultrasonic transducer mounting ... three-dimensional wind speed time series. Results for the variance and power spectra are shown. ... wind speeds with ultrasonic anemometers: the measured flow is distorted by the probe head
Hierarchical Classification of Documents with Error Control
King, Kuo Chin Irwin
Hierarchical Classification of Documents with Error Control. Chun-hung Cheng, Jian Tang, Ada Wai... Classification is a function that matches a new object with one of the predefined classes. Document classification is characterized by the large number of attributes involved in the objects (documents). The traditional method
Error Field Correction in DIII-D Ohmic Plasmas With Either Handedness
Jong-Kyu Park, Michael J. Schaffer, Robert J. La Haye,Timothy J. Scoville and Jonathan E. Menard
2011-05-16T23:59:59.000Z
Error field correction results in DIII-D plasmas are presented in various configurations. In both left-handed and right-handed plasma configurations, where the intrinsic error fields become different due to the opposite helical twist (handedness) of the magnetic field, the optimal error correction currents and the toroidal phases of internal (I) coils are empirically established. Applications of the Ideal Perturbed Equilibrium Code to these results demonstrate that the field component to be minimized is not the resonant component of the external field, but the total field including ideal plasma responses. Consistency between experiment and theory has been greatly improved along with the understanding of ideal plasma responses, but non-ideal plasma responses still need to be understood to achieve reliable predictability in tokamak error field correction.
Pearson's Goodness of Fit Statistic as a Score Test Statistic
Smyth, Gordon K.
Pearson's Goodness of Fit Statistic as a Score Test Statistic. Gordon K. Smyth. Abstract: For any generalized linear model, the Pearson goodness of fit statistic is the score test statistic for testing ... and the residual deviance ... is therefore the relationship between the score test and the likelihood ratio test
Metrics Are Fitness Functions Too Mark Harman John Clark
Singer, Jeremy
that there is an alternative, complementary view of a metric: as a fitness function, used to guide a search for ... 'optimal' ... (MAFF) approach offers a number of additional benefits to metrics research and practice because ... systems. It describes the properties of a metric which make it a good fitness function and explains
Searching the Clinical Fitness Landscape Margaret J. Eppstein1
Eppstein, Margaret J.
Searching the Clinical Fitness Landscape. Margaret J. Eppstein, Jeffrey D. Horbar, Jeffrey S... Abstract: Widespread unexplained variations in clinical practices and patient outcomes suggest major ... in expected patient outcomes than more traditional approaches in searching simulated clinical fitness
Effect of shrink fits on threshold speeds of rotordynamic instability
Al-Baz, Khalid A
2001-01-01T23:59:59.000Z
The purpose of this thesis is to study the effect of shrink fits on the threshold speeds of rotor instability. Shrink or press fit components in built-up rotors are known sources of internal friction damping. The internal friction damping increases...
Press fit design : force and torque testing of steel dowel pins in brass and nylon samples
Nelson, Alexandra T
2006-01-01T23:59:59.000Z
An experimental study was conducted to determine the accuracy of current press fit theory when applied to press fit design. Brass and nylon hex samples were press fitted with hardened steel dowel pins. Press fit force and ...
Specified pipe fittings susceptible to sulfide stress cracking
McIntyre, D.R.; Moore, E.M. Jr. [Saudi Aramco, Dhahran (Saudi Arabia)
1996-01-01T23:59:59.000Z
The NACE Standard MR0175 limit of HRC 22 is too high for cold-forged and stress-relieved ASTM A234 WPB pipe fittings. Hardness surveys and sulfide stress cracking test results per ASTM G 39 and NACE TM0177 Method B are presented to support this contention. More stringent inspection and a hardness limit of HB 197 (for cold-forged and stress-relieved fittings only) are recommended. The paper describes a case in which fittings were welded in place in wet sour service flow lines and gas-oil separating plants which were ready to start. The failure of a welded fitting shortly after start-up led to extensive field hardness testing on all fittings from this manufacturer.
Heat Pump Water Heaters and American Homes: A Good Fit?
Franco, Victor; Lekov, Alex; Meyers, Steve; Letschert, Virginie
2010-05-14T23:59:59.000Z
Heat pump water heaters (HPWHs) are over twice as energy-efficient as conventional electric resistance water heaters, with the potential to save substantial amounts of electricity. Drawing on analysis conducted for the U.S. Department of Energy's recently-concluded rulemaking on amended standards for water heaters, this paper evaluates key issues that will determine how well, and to what extent, this technology will fit in American homes. The key issues include: 1) equipment cost of HPWHs; 2) cooling of the indoor environment by HPWHs; 3) size and air flow requirements of HPWHs; 4) performance of HPWH under different climate conditions and varying hot water use patterns; and 5) operating cost savings under different electricity prices and hot water use. The paper presents the results of a life-cycle cost analysis of the adoption of HPWHs in a representative sample of American homes, as well as national impact analysis for different market share scenarios. Assuming equipment costs that would result from high production volume, the results show that HPWHs can be cost effective in all regions for most single family homes, especially when the water heater is not installed in a conditioned space. HPWHs are not cost effective for most manufactured home and multi-family installations, due to lower average hot water use and the water heater in the majority of cases being installed in conditioned space, where cooling of the indoor environment and size and air flow requirements of HPWHs increase installation costs.
Jack R. Gabel; Nahum Arav; Jelle S. Kaastra; Gerard A. Kriss; Ehud Behar; Elisa Costantini; C. Martin Gaskell; Kirk T. Korista; Ari Laor; Frits Paerels; Daniel Proga; Jessica Kim Quijano; Masao Sako; Jennifer E. Scott; Katrien C. Steenbrugge
2005-01-25T23:59:59.000Z
We present an analysis of the intrinsic UV absorption in the Seyfert 1 galaxy Mrk 279 based on simultaneous long observations with the Hubble Space Telescope (41 ks) and the Far Ultraviolet Spectroscopic Explorer (91 ks). To extract the line-of-sight covering factors and ionic column densities, we separately fit two groups of absorption lines: the Lyman series and the CNO lithium-like doublets. For the CNO doublets we assume that all three ions share the same covering factors. The fitting method applied here overcomes some limitations of the traditional method using individual doublet pairs; it allows for the treatment of more complex, physically realistic scenarios for the absorption-emission geometry and eliminates systematic errors that we show are introduced by spectral noise. We derive velocity-dependent solutions based on two models of geometrical covering -- a single covering factor for all background emission sources, and separate covering factors for the continuum and emission lines. Although both models give good statistical fits to the observed absorption, we favor the model with two covering factors because: (a) the best-fit covering factors for both emission sources are similar for the independent Lyman series and CNO doublet fits; (b) the fits are consistent with full coverage of the continuum source and partial coverage of the emission lines by the absorbers, as expected from the relative sizes of the nuclear emission components; and (c) it provides a natural explanation for variability in the Lyα absorption detected in an earlier epoch. We also explore physical and geometrical constraints on the outflow from these results.
Improving Planck calibration by including frequency-dependent relativistic corrections
Quartin, Miguel
2015-01-01T23:59:59.000Z
The Planck satellite detectors are calibrated in the 2015 release using the "orbital dipole", which is the time-dependent dipole generated by the Doppler effect due to the motion of the satellite around the Sun. Such an effect also has relativistic time-dependent corrections of relative magnitude 10^(-3), due to coupling with the "solar dipole" (the motion of the Sun compared to the CMB rest frame), which are included in the data calibration by the Planck collaboration. We point out that such corrections are subject to a frequency-dependent multiplicative factor. This factor differs from unity especially at the highest frequencies, relevant for the HFI instrument. Since currently Planck calibration errors are dominated by systematics, to the point that polarization data is currently unreliable at large scales, such a correction can in principle be highly relevant for future data releases.
Quantum Latin squares and unitary error bases
Benjamin Musto; Jamie Vicary
2015-04-10T23:59:59.000Z
In this paper we introduce quantum Latin squares, combinatorial quantum objects which generalize classical Latin squares, and investigate their applications in quantum computer science. Our main results are on applications to unitary error bases (UEBs), basic structures in quantum information which lie at the heart of procedures such as teleportation, dense coding and error correction. We present a new method for constructing a UEB from a quantum Latin square equipped with extra data. Developing construction techniques for UEBs has been a major activity in quantum computation, with three primary methods proposed: shift-and-multiply, Hadamard, and algebraic. We show that our new approach simultaneously generalizes the shift-and-multiply and Hadamard methods. Furthermore, we explicitly construct a UEB using our technique which we prove cannot be obtained from any of these existing methods.
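The shift-and-multiply construction mentioned above can be illustrated by its simplest instance, the generalized Pauli basis built from a cyclic shift operator and a phase-multiplication operator. This sketch is only that standard special case, not the paper's quantum-Latin-square construction:

```python
import numpy as np

def shift_and_multiply_basis(d):
    """Generalized Pauli (shift-and-multiply) unitary error basis in dimension d."""
    omega = np.exp(2j * np.pi / d)
    X = np.roll(np.eye(d), 1, axis=0)   # cyclic shift: X|j> = |j+1 mod d>
    Z = np.diag(omega ** np.arange(d))  # phase multiply: Z|j> = omega^j |j>
    return [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
            for a in range(d) for b in range(d)]

# d^2 unitaries that are pairwise trace-orthogonal: the defining
# properties of a unitary error basis.
basis = shift_and_multiply_basis(3)
```

The Hadamard and algebraic constructions the abstract also names are structurally different; this block only shows what "shift-and-multiply" refers to.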
Global fits to neutrino oscillation data
Thomas Schwetz
2006-06-06T23:59:59.000Z
I summarize the determination of neutrino oscillation parameters within the three-flavor framework from world neutrino oscillation data with date of May 2006, including the first results from the MINOS long-baseline experiment. It is illustrated how the determination of the leading "solar" and "atmospheric" parameters, as well as the bound on $\theta_{13}$ emerge from an interplay of various complementary data sets. Furthermore, I discuss possible implications of sub-leading three-flavor effects in present atmospheric neutrino data induced by $\Delta m^2_{21}$ and $\theta_{13}$ for the bound on $\theta_{13}$ and non-maximal values of $\theta_{23}$, emphasizing, however, that these effects are not statistically significant at present. Finally, in view of the upcoming MiniBooNE results I briefly comment on the problem to reconcile the LSND signal.
Improving Memory Error Handling Using Linux
Carlton, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Blanchard, Sean P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Debardeleben, Nathan A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-07-25T23:59:59.000Z
As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, then the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable to deal with by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and results in reducing both hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. It will not be feasible without memory error handling to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
Systematic Errors in measurement of b1
Wood, S A
2014-10-27T23:59:59.000Z
A class of spin observables can be obtained from the relative difference of or asymmetry between cross sections of different spin states of beam or target particles. Such observables have the advantage that the normalization factors needed to calculate absolute cross sections from yields often divide out or cancel to a large degree in constructing asymmetries. However, normalization factors can change with time, giving different normalization factors for different target or beam spin states, leading to systematic errors in asymmetries in addition to those determined from statistics. Rapidly flipping spin orientation, such as what is routinely done with polarized beams, can significantly reduce the impact of these normalization fluctuations and drifts. Target spin orientations typically require minutes to hours to change, versus fractions of a second for beams, making systematic errors for observables based on target spin flips more difficult to control. Such systematic errors from normalization drifts are discussed in the context of the proposed measurement of the deuteron b(1) structure function at Jefferson Lab.
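The cancellation of normalization factors in an asymmetry, and the systematic error introduced when the normalization drifts between slow target spin flips, can be sketched as follows (all numbers are hypothetical):

```python
def asymmetry(yield_plus, yield_minus):
    """Relative asymmetry between two spin-state yields."""
    return (yield_plus - yield_minus) / (yield_plus + yield_minus)

# Hypothetical true cross sections for two spin states.
sigma_plus, sigma_minus = 1.05, 0.95
a_true = asymmetry(sigma_plus, sigma_minus)

# A common normalization factor (luminosity x efficiency) divides out exactly,
# so absolute normalization is not needed to form the observable.
N = 3.7e4
a_measured = asymmetry(N * sigma_plus, N * sigma_minus)

# But if the normalization drifts by 1% between the two spin states, as can
# happen when spin flips take minutes to hours, the asymmetry picks up a
# systematic error that statistics alone will not reveal.
a_drift = asymmetry(N * 1.01 * sigma_plus, N * sigma_minus)
systematic = a_drift - a_true
```

Rapid spin flipping shrinks the drift between the two states, which is why beam-spin asymmetries control this error more easily than target-spin asymmetries.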
Shared Dosimetry Error in Epidemiological Dose-Response Analyses
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo
2015-03-23T23:59:59.000Z
Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it was true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e. the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
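The plug-in use of mean dose in a linear ERR model can be sketched as follows; the realization values and the slope β are hypothetical, and real systems like those described above produce on the order of 100 realizations per subject:

```python
import statistics

def mean_dose(realizations):
    """Collapse a dosimetry system's multiple realizations to a per-subject mean."""
    return statistics.fmean(realizations)

def relative_risk(dose, beta):
    """Linear ERR model: relative risk = 1 + beta * dose."""
    return 1.0 + beta * dose

# Hypothetical subject with five dose realizations, in Gy.
realizations = [0.8, 1.2, 0.9, 1.1, 1.0]
d_bar = mean_dose(realizations)
rr = relative_risk(d_bar, beta=0.5)

# Under the null hypothesis beta = 0, the relative risk is 1 no matter how
# the dose is mis-estimated, which is consistent with the paper's point that
# tests of beta = 0 remain valid when mean dose is plugged in.
rr_null = relative_risk(d_bar, beta=0.0)
```

The paper's actual contribution, adjusting the information matrix for shared errors when β ≠ 0, requires the full set of realizations, not just this mean.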
Message passing in fault tolerant quantum error correction
Z. W. E. Evans; A. M. Stephens
2008-06-13T23:59:59.000Z
Inspired by Knill's scheme for message passing error detection, here we develop a scheme for message passing error correction for the nine-qubit Bacon-Shor code. We show that for two levels of concatenated error correction, where classical information obtained at the first level is used to help interpret the syndrome at the second level, our scheme will correct all cases with four physical errors. This results in a reduction of the logical failure rate relative to conventional error correction by a factor proportional to the reciprocal of the physical error rate.
Managing Errors to Reduce Accidents in High Consequence Networked Information Systems
Ganter, J.H.
1999-02-01T23:59:59.000Z
Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.
Progress in Understanding Error-field Physics in NSTX Spherical Torus Plasmas
E. Menard, R.E. Bell, D.A. Gates, S.P. Gerhardt, J.-K. Park, S.A. Sabbagh, J.W. Berkery, A. Egan, J. Kallman, S.M. Kaye, B. LeBlanc, Y.Q. Liu, A. Sontag, D. Swanson, H. Yuh, W. Zhu and the NSTX Research Team
2010-05-19T23:59:59.000Z
The low aspect ratio, low magnetic field, and wide range of plasma beta of NSTX plasmas provide new insight into the origins and effects of magnetic field errors. An extensive array of magnetic sensors has been used to analyze error fields, to measure error field amplification, and to detect resistive wall modes in real time. The measured normalized error-field threshold for the onset of locked modes shows a linear scaling with plasma density, a weak to inverse dependence on toroidal field, and a positive scaling with magnetic shear. These results extrapolate to a favorable error field threshold for ITER. For these low-beta locked-mode plasmas, perturbed equilibrium calculations find that the plasma response must be included to explain the empirically determined optimal correction of NSTX error fields. In high-beta NSTX plasmas exceeding the n=1 no-wall stability limit where the RWM is stabilized by plasma rotation, active suppression of n=1 amplified error fields and the correction of recently discovered intrinsic n=3 error fields have led to sustained high rotation and record durations free of low-frequency core MHD activity. For sustained rotational stabilization of the n=1 RWM, both the rotation threshold and magnitude of the amplification are important. At fixed normalized dissipation, kinetic damping models predict rotation thresholds for RWM stabilization to scale nearly linearly with particle orbit frequency. Studies for NSTX find that orbit frequencies computed in general geometry can deviate significantly from those computed in the high aspect ratio and circular plasma cross-section limit, and these differences can strongly influence the predicted RWM stability. The measured and predicted RWM stability is found to be very sensitive to the E × B rotation profile near the plasma edge, and the measured critical rotation for the RWM is approximately a factor of two higher than predicted by the MARS-F code using the semi-kinetic damping model.
Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization
LaMar, E; Hamann, B; Joy, K I
2001-10-16T23:59:59.000Z
Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed once for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
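The acceleration described above, tabulating the unique (original, approximating) voxel-value pairs once and then re-weighting by frequency whenever the transfer function changes, can be sketched with toy data and a hypothetical error function:

```python
from collections import Counter

def build_pair_table(original, approximated):
    """Tabulate unique (original, approximating) voxel-value pairs once.
    For byte data there are at most 256*256 unique pairs, however many voxels."""
    return Counter(zip(original, approximated))

def total_error(pair_table, error_fn):
    """Evaluate error_fn once per unique pair, weighted by its frequency.
    Cheap to re-run whenever the transfer function (hence error_fn) changes."""
    return sum(error_fn(a, b) * count for (a, b), count in pair_table.items())

# Toy byte-valued volume and a coarser approximation of it (hypothetical data).
original = [10, 10, 10, 200, 200, 55]
approx = [12, 12, 12, 198, 198, 55]
table = build_pair_table(original, approx)

# A stand-in "transfer-function-dependent" error: absolute difference.
err = total_error(table, lambda a, b: abs(a - b))  # 3*2 + 2*2 + 1*0 = 10
```

The saving comes from `total_error` looping over unique pairs (here 3) rather than all voxels, while the expensive table construction happens only once per approximation.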
Multi-Ridge Fitting for Ring-Diagram Helioseismology
Greer, Benjamin J; Toomre, Juri
2014-01-01T23:59:59.000Z
Inferences of sub-surface flow velocities using local domain ring-diagram helioseismology depend on measuring the frequency splittings of oscillation modes seen in acoustic power spectra. Current methods for making these measurements utilize maximum-likelihood fitting techniques to match a model of modal power to the spectra. The model typically describes a single oscillation mode, and each mode in a given power spectrum is fit independently. We present a new method that produces measurements with greater reliability and accuracy by fitting multiple modes simultaneously. We demonstrate how this method permits measurements of sub-surface flows deeper into the Sun while providing higher uniformity in data coverage and velocity response closer to the limb of the solar disk. While the previous fitting method performs better for some measurements of low-phase-speed modes, we find this new method to be particularly useful for high phase-speed modes and small spatial areas.
Shrink fit effects on rotordynamic stability: experimental and theoretical study
Jafri, Syed Muhammad Mohsin
2007-09-17T23:59:59.000Z
This dissertation presents an experimental and theoretical study of subsynchronous rotordynamic instability in rotors caused by interference and shrink fit interfaces. The experimental studies show the presence of strong unstable subsynchronous...
LADWP- Feed-in Tariff (FiT) Program
Broader source: Energy.gov [DOE]
LADWP is providing a Feed-in Tariff (FiT) program to support the development of renewable energy projects in its territory. All technologies eligible for compliance with the state's renewables po...
Fitting In: Extreme Corporate Wellness and Organizational Communication
James, Eric Preston
2014-07-31T23:59:59.000Z
In this dissertation I examine the intersection of organizational communication and what I name extreme corporate wellness. I define extreme corporate wellness as the push towards more radical fitness and workplace health promotion via the exercise...
Course: Learn to Swim Level 6: Fitness Swimmer
Azevedo, Ricardo
Purpose: To refine strokes so participants swim them with more ease, efficiency, power and smoothness. Equipment: pull buoy, fins, pace clock, paddles. Objective: describe the principles of setting up a fitness program.
Structural connections in plywood friction-fit construction
Wagner, Mali E. (Mali Esther)
2014-01-01T23:59:59.000Z
CNC mills allow precise fabrication of planar parts with embedded joinery which can be assembled into complex 3D geometries without the use of foreign mechanical fasteners. This thesis studies the behavior of the friction-fit ...
AHA Recognizes Fit-Friendly Worksites at SRS
Broader source: Energy.gov [DOE]
AIKEN, S.C. – Two contractors supporting the EM program at the Savannah River Site (SRS) were recognized recently as Fit-Friendly Worksites by the American Heart Association (AHA).
Fitness Uniform Deletion: A Simple Way to Preserve Diversity
Hutter, Marcus
A common problem in evolutionary computation is the gradual decline in population diversity that tends to occur over time. This can slow a system's progress. In this paper we present the Fitness Uniform Deletion Scheme (FUDS), a simple but somewhat unconventional approach…
LADWP- Feed-in Tariff (FiT) Program (California)
Broader source: Energy.gov [DOE]
Note: LADWP accepted applications for the second 20 MW allocation of the 100 MW FiT Set Pricing Program between July 8 and July 12, 2013. This program is the first component of a 150 megawatt (MW)...
A Note on Fitting ideals Jonathan A. Huang
Yorke, James
Fitting ideals were introduced by H. Fitting; the canonical reference is D. G. Northcott's textbook Finite Free Resolutions. … The ideal F, created from Gauss sums, is contained in the annihilator ideal of Cl_F; a more general…
Averaging cross section data so we can fit it
Brown, D. [Brookhaven National Lab. (BNL), Upton, NY (United States). NNDC
2014-10-23T23:59:59.000Z
The ^{56}Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE, a Hauser-Feshbach theory based nuclear reaction code, requires the cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say above 500 keV).
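EMPIRE's exact smoothing prescription is not spelled out in the abstract; the sketch below shows the generic operation, convolving a tabulated cross section with a Lorentzian kernel renormalized on the finite grid. The grid, fluctuation shape, and width are made up for illustration:

```python
import numpy as np

def lorentzian_smooth(E, xs, gamma):
    """Smooth a tabulated cross section xs(E) on a grid E by convolving
    with a Lorentzian of half-width gamma, renormalized per window."""
    out = np.empty_like(xs, dtype=float)
    for i, E0 in enumerate(E):
        k = (gamma / np.pi) / ((E - E0) ** 2 + gamma ** 2)
        k /= k.sum()              # discrete normalization over the window
        out[i] = (k * xs).sum()
    return out

E = np.linspace(0.5, 2.0, 301)            # MeV, hypothetical grid
xs = 1.0 + 0.3 * np.sin(80 * E)           # rapidly fluctuating cross section
smooth = lorentzian_smooth(E, xs, 0.05)   # fluctuations largely averaged out
```

The per-window renormalization keeps a constant cross section exactly constant, which matters near the edges of the tabulated energy range.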
The effect of temperature and humidity on respirator fit
Niekerk, Gary
1986-01-01T23:59:59.000Z
THE EFFECT OF TEMPERATURE AND HUMIDITY ON RESPIRATOR FIT. A Thesis by Gary Niekerk, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of Master of Science, August 1986. Major Subject: Industrial Hygiene.
Quantum Error Correcting Subsystem Codes From Two Classical Linear Codes
Dave Bacon; Andrea Casaccino
2006-10-17T23:59:59.000Z
The essential insight of quantum error correction was that quantum information can be protected by suitably encoding this quantum information across multiple independently erred quantum systems. Recently it was realized that, since the most general method for encoding quantum information is to encode it into a subsystem, there exists a novel form of quantum error correction beyond the traditional quantum error correcting subspace codes. These new quantum error correcting subsystem codes differ from subspace codes in that their quantum correcting routines can be considerably simpler than related subspace codes. Here we present a class of quantum error correcting subsystem codes constructed from two classical linear codes. These codes are the subsystem versions of the quantum error correcting subspace codes which are generalizations of Shor's original quantum error correcting subspace codes. For every Shor-type code, the codes we present give a considerable savings in the number of stabilizer measurements needed in their error recovery routines.
Reply To "Comment on 'Quantum Convolutional Error-Correcting Codes' "
H. F. Chau
2005-06-02T23:59:59.000Z
In their comment, de Almeida and Palazzo \cite{comment} discovered an error in my earlier paper concerning the construction of quantum convolutional codes (quant-ph/9712029). This error can be repaired by modifying the method of code construction.
Human error contribution to nuclear materials-handling events
Sutton, Bradley (Bradley Jordan)
2007-01-01T23:59:59.000Z
This thesis analyzes a sample of 15 fuel-handling events from the past ten years at commercial nuclear reactors with significant human error contributions in order to detail the contribution of human error to fuel-handling ...
Evolved Error Management Biases in the Attribution of Anger
Galperin, Andrew
2012-01-01T23:59:59.000Z
MHK technologies include current energy conversion
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
research leverages decades of experience in engineering and design and analysis (D&A) of wind power technologies, and its vast research complex, including high-performance...
Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications
Garcia, Tanya
2012-10-19T23:59:59.000Z
Compared to the models considered in Tsiatis and Ma (2004), our model is less stringent because it allows an unspecified model error distribution and unspecified covariate distribution, not just the latter. With an unspecified model error distribution, the RMM... with measurement error is a very different problem compared to the model considered in Tsiatis and Ma (2004), where the model error distribution has a known parametric form. Consequently, the semiparametric treatment here is also drastically different. Our...
Isolation and Analysis of Optimization Errors MICKEY R. BOYD AND DAVID B. WHALLEY
Whalley, David
Features of the optimization viewer include reverse viewing (or undoing) of transformations and the ability to examine the program state during and after each transformation performed by the optimizer. An optimization error isolator is presented that can automatically determine the first invalid transformation. One can easily examine the invalid…
ERROR MODELS FOR LIGHT SENSORS BY STATISTICAL ANALYSIS OF RAW SENSOR MEASUREMENTS
Potkonjak, Miodrag
… silicon solar cell that converts light impulses directly into electrical charges that can easily … systems including calibration, sensor fusion and power management. We developed a system of statistical error models from raw sensor measurements … the standard procedure is to use error models to enable calibration; in a variant of our approach, we use …
Error Analysis in Nuclear Density Functional Theory
Nicolas Schunck; Jordan D. McDonnell; Jason Sarich; Stefan M. Wild; Dave Higdon
2014-07-11T23:59:59.000Z
Nuclear density functional theory (DFT) is the only microscopic, global approach to the structure of atomic nuclei. It is used in numerous applications, from determining the limits of stability to gaining a deep understanding of the formation of elements in the universe or the mechanisms that power stars and reactors. The predictive power of the theory depends on the amount of physics embedded in the energy density functional as well as on efficient ways to determine a small number of free parameters and solve the DFT equations. In this article, we discuss the various sources of uncertainties and errors encountered in DFT and possible methods to quantify these uncertainties in a rigorous manner.
Franklin Trouble Shooting and Error Messages
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Edison Trouble Shooting and Error Messages
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
ForceFit: a code to fit classical force fields to ab-initio potential energy surfaces
Henson, Neil Jon [Los Alamos National Laboratory]; Waldher, Benjamin [WSU]; Kuta, Jadwiga [WSU]; Clark, Aurora E. [WSU]
2009-01-01T23:59:59.000Z
The ForceFit program package has been developed for fitting classical force field parameters based upon a force matching algorithm to quantum mechanical gradients of configurations that span the potential energy surface of the system. The program, which runs under Unix and is written in C++, is an easy to use, nonproprietary platform that enables gradient fitting of a wide variety of functional force field forms to quantum mechanical information obtained from an array of common electronic structure codes. All aspects of the fitting process are run from a graphical user interface, from the parsing of quantum mechanical data, assembling of a potential energy surface database, setting the force field and variables to be optimized, choosing a molecular mechanics code for comparison to the reference data, and finally, the initiation of a least squares minimization algorithm. Furthermore, the code is based on a modular templated code design that enables the facile addition of new functionality to the program.
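ForceFit handles general functional forms and real quantum mechanical gradient data; as a minimal illustration of the underlying force-matching idea, the sketch below fits a single harmonic bond term to reference forces by linear least squares. All names and data are hypothetical, not part of the ForceFit package:

```python
def fit_harmonic_bond(r, f_ref):
    """Least-squares force matching for F(r) = -k * (r - r0).
    The model is linear in (a, b) via F = a*r + b, with a = -k, b = k*r0."""
    n = len(r)
    sx, sy = sum(r), sum(f_ref)
    sxx = sum(x * x for x in r)
    sxy = sum(x * y for x, y in zip(r, f_ref))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # ordinary least squares
    b = (sy - a * sx) / n
    k = -a
    return k, b / k          # force constant and equilibrium distance

# Hypothetical "quantum mechanical" forces sampled along a bond stretch:
r_ref = [0.8, 0.9, 1.0, 1.1, 1.2]
f_ref = [-2.0 * (x - 1.0) for x in r_ref]    # generated with k=2, r0=1
k, r0 = fit_harmonic_bond(r_ref, f_ref)      # recovers k ~ 2.0, r0 ~ 1.0
```

Real force fields are nonlinear in most parameters, which is why ForceFit wraps the comparison in an iterative least-squares minimization rather than a closed-form solve.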
Susceptibility of Commodity Systems and Software to Memory Soft Errors
Riska, Alma
Alan Messer, Member, IEEE. It is widely understood that most system downtime is accounted for by programming errors; this work, however, examines transient errors in computer system hardware caused by external factors, such as cosmic rays.
A Taxonomy of Number Entry Error Sarah Wiseman
Cairns, Paul
UCLIC, MPEB, Malet Place, London, WC1E 7JE. … and the subsequent process of creating a taxonomy of errors from the information gathered. A total of 350 errors were collected. These codes are then organised into a taxonomy similar to that of Zhang et al (2004). We show how…
A Taxonomy of Number Entry Error Sarah Wiseman
Subramanian, Sriram
UCLIC, MPEB, Malet Place, London, WC1E 7JE. … and the subsequent process of creating a taxonomy of errors from the information gathered. A total of 345 errors were collected. These codes are then organised into a taxonomy similar to that of Zhang et al (2004). We show how…
Predictors of Threat and Error Management: Identification of Core Nontechnical Skills
In normal flight operations, crews are faced with a variety of external threats and commit a range of errors. The management of these threats and errors therefore forms an essential element of enhancing performance and minimizing risk.
Error rate and power dissipation in nano-logic devices
Kim, Jong Un
2004-01-01T23:59:59.000Z
Current-controlled logic and single-electron logic processors have been investigated with respect to thermally-induced bit errors. A maximal error rate for both logic processors is regarded as one bit-error/year/chip. A maximal clock frequency...
Bolstered Error Estimation Ulisses Braga-Neto a,c
Braga-Neto, Ulisses
… the bolstered error estimators proposed in this paper, as part of a larger library for classification and error estimation … The estimator has a direct geometric interpretation and can be easily applied to any classification rule … as smoothed error estimation. In some important cases, such as a linear classification rule with a Gaussian …
Polian, Ilia
… of soft errors in modern microprocessors has been reported to never lead to a system failure. … The techniques are enhanced by a methodology to handle soft errors on address bits. Furthermore, we demonstrate … Consequently, many state-of-the-art systems provide soft error detection and correction capabilities [Hass 89] …
Fitness for duty in the nuclear industry: Update of the technical issues 1996
Durbin, N.; Grant, T. [eds.] [Battelle Seattle Research Center, WA (United States)
1996-05-01T23:59:59.000Z
The purpose of this report is to provide an update of information on the technical issues surrounding the creation, implementation, and maintenance of fitness-for-duty (FFD) policies and programs. It has been prepared as a resource for Nuclear Regulatory Commission (NRC) and nuclear power plant personnel who deal with FFD programs. It contains a general overview and update on the technical issues that the NRC considered prior to the publication of its original FFD rule and the revisions to that rule (presented in earlier NUREG/CRs). It also includes chapters that address issues about which there is growing concern and/or about which there have been substantial changes since NUREG/CR-5784 was published. Although this report is intended to support the NRC's rule making on fitness for duty, the conclusions of the authors of this report are their own and do not necessarily represent the opinions of the NRC.
aMCfast: automation of fast NLO computations for PDF fits
Valerio Bertone; Rikkert Frederix; Stefano Frixione; Juan Rojo; Mark Sutton
2014-06-30T23:59:59.000Z
We present the interface between MadGraph5_aMC@NLO, a self-contained program that calculates cross sections up to next-to-leading order accuracy in an automated manner, and APPLgrid, a code that parametrises such cross sections in the form of look-up tables which can be used for the fast computations needed in the context of PDF fits. The main characteristic of this interface, which we dub aMCfast, is its being fully automated as well, which removes the need to extract manually the process-specific information for additional physics processes, as is the case with other matrix element calculators, and renders it straightforward to include any new process in the PDF fits. We demonstrate this by studying several cases which are easily measured at the LHC, have a good constraining power on PDFs, and some of which were previously unavailable in the form of a fast interface.
Technological Advancements and Error Rates in Radiation Therapy Delivery
Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)
2011-11-15T23:59:59.000Z
Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. 
There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
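The association test behind this study's p-values is Fisher's exact test on a 2x2 table of errors versus error-free fractions per technique. The abstract reports only aggregate rates, so the counts below are illustrative stand-ins, and the implementation is a generic textbook version, not the authors' analysis code:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].
    Sums the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    n, r1, c1 = a + b + c + d, a + b, a + c
    def p(x):                      # P(top-left cell = x | fixed margins)
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)
    p_obs = p(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(q for q in (p(x) for x in range(lo, hi + 1))
               if q <= p_obs * (1 + 1e-9))

# Hypothetical error / error-free counts for two delivery techniques:
p_val = fisher_exact_two_sided(19, 981, 136, 864)   # strongly significant
```

For tables as large as the study's 241,546 fractions one would use a library routine, but the logic is the same: enumerate the top-left cell over its feasible range with the margins fixed.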
Locked modes and magnetic field errors in MST
Almagri, A.F.; Assadi, S.; Prager, S.C.; Sarff, J.S.; Kerst, D.W.
1992-06-01T23:59:59.000Z
In the MST reversed field pinch magnetic oscillations become stationary (locked) in the lab frame as a result of a process involving interactions between the modes, sawteeth, and field errors. Several helical modes become phase locked to each other to form a rotating localized disturbance, the disturbance locks to an impulsive field error generated at a sawtooth crash, the error fields grow monotonically after locking (perhaps due to an unstable interaction between the modes and field error), and over the tens of milliseconds of growth confinement degrades and the discharge eventually terminates. Field error control has been partially successful in eliminating locking.
EE Regional Technology Roadmap Includes comparison
EE Regional Technology Roadmap Includes comparison against 6th Power Plan (Update cyclically Data Clearinghouse BPA/RTF NEEA/Regional Programs Group Update Regional EE Technology Roadmap Lighting
DIDACTICAL HOLOGRAPHIC EXHIBIT INCLUDING (HOLOGRAPHIC TELEVISION)
de Aguiar, Marcus A. M.
DIDACTICAL HOLOGRAPHIC EXHIBIT INCLUDING HoloTV (HOLOGRAPHIC TELEVISION). José J. Lunazzi, Daniel …, Campinas, SP, Brasil. Abstract: Our Institute of Physics has presented didactical exhibitions of holography since 1980 in Brazil, where…
Sessions include: Beginning Farmer and Rancher
Watson, Craig A.
Sessions include: Beginning Farmer and Rancher; New Markets and Regulations; Food Safety; Good Bug, Bad Bug ID; Horticulture; Hydroponics; Livestock and Pastured Poultry; Mushrooms; Organic …; Live animal exhibits; Saturday evening social; and Local foods. Florida Small Farms and Alternative…
Gas storage materials, including hydrogen storage materials
Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji
2014-11-25T23:59:59.000Z
A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material, such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.
Gas storage materials, including hydrogen storage materials
Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji
2013-02-19T23:59:59.000Z
A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.
Bard, D.; Chang, C.; Kahn, S. M.; Gilmore, K.; Marshall, S. [KIPAC, Stanford University, 452 Lomita Mall, Stanford, CA 94309 (United States); Kratochvil, J. M.; Huffenberger, K. M. [Department of Physics, University of Miami, Coral Gables, FL 33124 (United States); May, M. [Physics Department, Brookhaven National Laboratory, Upton, NY 11973 (United States); AlSayyad, Y.; Connolly, A.; Gibson, R. R.; Jones, L.; Krughoff, S. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Lorenz, S. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Haiman, Z.; Jernigan, J. G., E-mail: djbard@slac.stanford.edu [Department of Astronomy and Astrophysics, Columbia University, New York, NY 10027 (United States); and others
2013-09-01T23:59:59.000Z
We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST Image Simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.
Fitting Single Particle Energies in $sdgh$ Major Shell
Dikmen, E; Cengiz, Y
2015-01-01T23:59:59.000Z
We have performed two kinds of non-linear fitting procedures to the single-particle energies in the $sdgh$ major shell to obtain better shell model results. The low-lying energy eigenvalues of the light Sn isotopes with $A=103-110$ in the $sdgh$-shell are calculated in the framework of the nuclear shell model by using CD-Bonn two-body effective nucleon-nucleon interaction. The obtained energy eigenvalues are fitted to the corresponding experimental values by using two different non-linear fitting procedures, i.e., downhill simplex method and clonal selection method. The unknown single-particle energies of the states $2s_{1/2}$, $1d_{3/2}$, and $0h_{11/2}$ are used in the fitting methods to obtain better spectra of the $^{104,106,108,110}$Sn isotopes. We compare the energy spectra of the $^{104,106,108,110}$Sn and $^{103,105,107,109}$Sn isotopes with/without a nonlinear fit to the experimental results.
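As a sketch of the downhill simplex method named above, here is a minimal Nelder-Mead minimizer applied to a toy two-parameter chi-square. The real fit varies single-particle energies inside a shell-model calculation, which is not reproduced here; the objective below is a hypothetical stand-in:

```python
def nelder_mead(f, x0, step=0.1, tol=1e-10, max_iter=1000):
    """Minimal downhill-simplex (Nelder-Mead) minimizer with reflection,
    expansion, contraction, and shrink steps."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                         # initial simplex: n+1 vertices
        p = list(x0); p[i] += step; simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if f(worst) - f(best) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2 * c - w for c, w in zip(centroid, worst)]
        if f(refl) < f(best):                  # try expanding further
            expd = [3 * c - 2 * w for c, w in zip(centroid, worst)]
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):         # accept plain reflection
            simplex[-1] = refl
        else:                                  # contract toward centroid
            contr = [0.5 * (c + w) for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                              # shrink toward best vertex
                simplex = [best] + [[0.5 * (b + x) for b, x in zip(best, p)]
                                    for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

# Toy chi-square standing in for |calculated - experimental| level energies:
chi2 = lambda e: (e[0] - 2.1) ** 2 + (e[1] + 0.3) ** 2
e_fit = nelder_mead(chi2, [0.0, 0.0])          # converges near [2.1, -0.3]
```

The method is derivative-free, which is why it suits fits where each objective evaluation is itself a full shell-model diagonalization.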
Evaluating and Minimizing Distributed Cavity Phase Errors in Atomic Clocks
Li, Ruoxin
2010-01-01T23:59:59.000Z
We perform 3D finite element calculations of the fields in microwave cavities and analyze the distributed cavity phase errors of atomic clocks that they produce. The fields of cylindrical cavities are treated as an azimuthal Fourier series. Each of the lowest components produces clock errors with unique characteristics that must be assessed to establish a clock's accuracy. We describe the errors and how to evaluate them. We prove that sharp structures in the cavity do not produce large frequency errors, even at moderately high powers, provided the atomic density varies slowly. We model the amplitude and phase imbalances of the feeds. For larger couplings, these can lead to increased phase errors. We show that phase imbalances produce a novel distributed cavity phase error that depends on the cavity detuning. We also design improved cavities by optimizing the geometry and tuning the mode spectrum so that there are negligible phase variations, allowing this source of systematic error to be dramatically reduced.
In Search of a Taxonomy for Classifying Qualitative Spreadsheet Errors
Przasnyski, Zbigniew; Seal, Kala Chand
2011-01-01T23:59:59.000Z
Most organizations use large and complex spreadsheets that are embedded in their mission-critical processes and are used for decision-making purposes. Identification of the various types of errors that can be present in these spreadsheets is, therefore, an important control that organizations can use to govern their spreadsheets. In this paper, we propose a taxonomy for categorizing qualitative errors in spreadsheet models that offers a framework for evaluating the readiness of a spreadsheet model before it is released for use by others in the organization. The classification was developed based on types of qualitative errors identified in the literature and errors committed by end-users in developing a spreadsheet model for Panko's (1996) "Wall problem". Closer inspection of the errors reveals four logical groupings of the errors creating four categories of qualitative errors. The usability and limitations of the proposed taxonomy and areas for future extension are discussed.
E791 DATA ACQUISITION SYSTEM Error reports received; no new errors reported
Fermi National Accelerator Laboratory
… of events written to tape. [Diagram residue: Error and Status Displays; Mailbox for Histogram Requests; VAXonline Event Display; VAX 11/780; Event Reconstruction; Event Display; Detector Monitoring; 3 VAX Workstations; 42 EXABYTE drives of the entire E791 DA system.] The VAX 11/780 was the user interface to the VME part of the system, via the DA…
Error and jitter effect studies on the SLED for BEPCII-linac
Shi-Lun, Pei; Ou-Zheng, Xiao
2011-01-01T23:59:59.000Z
RF pulse compressor is a device to convert a long RF pulse to a short one with a much higher peak RF magnitude. SLED can be regarded as the earliest RF pulse compressor used in large-scale linear accelerators. It has been widely studied around the world and applied in the BEPC and BEPCII linacs for many years. During routine operation, error and jitter effects will deteriorate the SLED performance in either the amplitude or the phase of the output electromagnetic wave. The error effects mainly include the frequency drift induced by cooling water temperature variation and the frequency/Q0/β imbalances between the two energy storage cavities caused by mechanical fabrication or microwave tuning. The jitter effects refer to the PSK switching phase and time jitters. In this paper, we re-derive the generalized formulae for the conventional SLED used in the BEPCII linac. Finally, the error and jitter effects on the SLED performance are investigated.
Graphical Quantum Error-Correcting Codes
Sixia Yu; Qing Chen; C. H. Oh
2007-09-12T23:59:59.000Z
We introduce a purely graph-theoretical object, namely the coding clique, to construct quantum error-correcting codes. Almost all quantum codes constructed so far are stabilizer (additive) codes, and the construction of nonadditive codes, which are potentially more efficient, is not as well understood as that of stabilizer codes. Our graphical approach provides a unified and classical way to construct both stabilizer and nonadditive codes. In particular, we have explicitly constructed the optimal ((10,24,3)) code and a family of 1-error-detecting nonadditive codes with the highest encoding rate so far. In the case of stabilizer codes a thorough search becomes tangible, and we have classified all the extremal stabilizer codes up to 8 qubits.
Output error identification of hydrogenerator conduit dynamics
Vogt, M.A.; Wozniak, L. (Illinois Univ., Urbana, IL (USA)); Whittemore, T.R. (Bureau of Reclamation, Denver, CO (USA))
1989-09-01T23:59:59.000Z
Two output-error model-reference adaptive identifiers are considered for estimating the parameters in a reduced-order gate-position-to-pressure model for the hydrogenerator. This information may later be useful in an adaptive controller. Gradient and sensitivity-function identifiers are discussed for the hydroelectric application, and connections are made between their structural differences and relative performance. Simulations are presented to support the conclusion that the latter algorithm is more robust, having better disturbance rejection and less sensitivity to plant-model mismatch. For identification from plant data recorded during step gate inputs, the gradient algorithm even fails to converge. A method for checking the estimated parameters is developed by relating the coefficients in the reduced-order model to head, an externally measurable parameter.
Pressure Change Measurement Leak Testing Errors
Pryor, Jeff M [ORNL]; Walker, William C [ORNL]
2014-01-01T23:59:59.000Z
A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While it may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monoatomic or diatomic gases; the same principles can be applied to polyatomic gases or to liquid flow rates with formulas altered for those types of tests using the same methodology.
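One of the errors the abstract alludes to is neglecting temperature drift during the hold period. A minimal sketch of a temperature-compensated leak-rate calculation under the stated ideal-gas assumption (the rigid test volume, units, and reference temperature here are illustrative choices, not taken from the paper):

```python
# Sketch: temperature-compensated pressure-change leak rate (ideal gas).
# Assumptions (illustrative, not from the paper): rigid test volume,
# ideal-gas behaviour, leak rate reported as a pV-flow in Pa*m^3/s
# referred to a reference temperature.

R = 8.314  # J/(mol*K), universal gas constant

def leak_rate(volume_m3, p1_pa, t1_k, p2_pa, t2_k, dt_s, t_ref_k=293.15):
    """Leak rate in Pa*m^3/s at t_ref_k.

    Naively using (p1 - p2)/dt ignores temperature drift; compensating
    each pressure reading by its own temperature (p/T) removes that error.
    """
    # moles of gas lost over the test interval: n = pV/(RT)
    dn = (volume_m3 / R) * (p1_pa / t1_k - p2_pa / t2_k)
    # express as a pV-flow at the reference temperature
    return dn * R * t_ref_k / dt_s

# A 1 K warm-up with no real leak: the naive calculation reports a
# spurious (negative) leak, the compensated one reads ~0.
V, P1, T1, dt = 0.5, 101325.0, 293.15, 3600.0
T2 = T1 + 1.0
P2 = P1 * T2 / T1          # isochoric heating, no gas lost
naive = V * (P1 - P2) / dt
compensated = leak_rate(V, P1, T1, P2, T2, dt)
print(naive, compensated)
```

The example illustrates why the engineering is "more complex than is apparent": without the p/T compensation, ordinary temperature variation is indistinguishable from a leak.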
Quantum Error Correction with magnetic molecules
José J. Baldoví; Salvador Cardona-Serra; Juan M. Clemente-Juan; Luis Escalera-Moreno; Alejandro Gaita-Ariño; Guillermo Mínguez Espallargas
2014-08-22T23:59:59.000Z
Quantum algorithms often assume independent spin qubits to produce trivial $|\uparrow\rangle=|0\rangle$, $|\downarrow\rangle=|1\rangle$ mappings. This can be unrealistic in many solid-state implementations with sizeable magnetic interactions. Here we show that the lower part of the spectrum of a molecule containing three exchange-coupled metal ions with $S=1/2$ and $I=1/2$ is equivalent to nine electron-nuclear qubits. We derive the relation between spin states and qubit states in reasonable parameter ranges for the rare earth $^{159}$Tb$^{3+}$ and for the transition metal Cu$^{2+}$, and study the possibility of implementing Shor's Quantum Error Correction code on such a molecule. We also discuss recently developed molecular systems that could be adequate from an experimental point of view.
Nondestructive inspection of Piper PA25 forward spar fittings
Moore, D.G. [Sandia National Labs., Albuquerque, NM (United States)
1995-07-01T23:59:59.000Z
The Federal Aviation Administration's (FAA's) Aging Aircraft NDI Validation Center (AANC) at Sandia National Laboratories applied two nondestructive inspection (NDI) techniques to inspect a forward spar fuselage attachment fitting. The techniques used were based on radiography and ultrasonic test methods. The combination of these techniques revealed material thinning of two spar fittings from Piper PA25 aircraft. However, crack detection near a notch design feature could not be performed. Based on the results of these experiments, an ultrasonic test procedure was subsequently developed for detecting the material thinning. The procedure has since been incorporated by the FAA into a revision of Airworthiness Directive 93-21-12.
Electric Power Monthly, August 1990 [Glossary included]
Not Available
1990-11-29T23:59:59.000Z
The Electric Power Monthly (EPM) presents monthly summaries of electric utility statistics at the national, Census division, and State level. The purpose of this publication is to provide energy decisionmakers with accurate and timely information that may be used in forming various perspectives on electric issues that lie ahead. Data include generation by energy source (coal, oil, gas, hydroelectric, and nuclear); generation by region; consumption of fossil fuels for power generation; sales of electric power; cost data; and unusual occurrences. A glossary is included.
Communication in automation, including networking and wireless
Antsaklis, Panos
Communication in automation, including networking and wireless. Nicholas Kottenstette and Panos J. Antsaklis. … An overview of communication and networking in automation is given. Digital communication fundamentals are reviewed and networked control results are presented. 1 Introduction. 1.1 Why communication is necessary in automated systems. Automated systems use …
Electrochemical cell including ribbed electrode substrates
Breault, R.D.; Goller, G.J.; Roethlein, R.J.; Sprecher, G.C.
1981-07-21T23:59:59.000Z
An electrochemical cell including an electrolyte retaining matrix layer located between and in contact with cooperating anode and cathode electrodes is disclosed herein. Each of the electrodes is comprised of a ribbed (or grooved) substrate including a gas porous body as its main component and a catalyst layer located between the substrate and one side of the electrolyte retaining matrix layer. Each substrate body includes a ribbed section for receiving reactant gas and lengthwise side portions on opposite sides of the ribbed section. Each of the side portions includes a channel extending along its entire length from one surface thereof (e.g., its outer surface) to but stopping short of an opposite surface (e.g., its inner surface) so as to provide a web directly between the channel and the opposite surface. Each of the channels is filled with a gas impervious substance and each of the webs is impregnated with a gas impervious substance so as to provide a gas impervious seal along the entire length of each side portion of each substrate and between the opposite faces thereof (e.g., across the entire thickness thereof).
Prices include compostable serviceware and linen tablecloths
California at Davis, University of
… SQUASH & BLACK BEAN ENCHILADAS: Fresh corn tortillas stuffed with tender brown-butter-sautéed butternut squash, black beans and yellow onions, garnished with avocado and sour cream. $33 per person. EDAMAME & CORN … SQUASH & BLACK BEAN ENCHILADA … FREE RANGE CHICKEN SANDWICH … PLATED ENTREES: All plated entrees include …
Energy Consumption of Personal Computing Including Portable
Namboodiri, Vinod
Energy Consumption of Personal Computing Including Portable Communication Devices. Pavel Somavat. … consumption, questions are being asked about the energy contribution of computing equipment. Although studies have documented the share of energy consumption by this type of equipment over the years, research …
2006-01-01T23:59:59.000Z
This document concerns the award of a contract for minor metalwork, metal fittings, cladding and roofing at CERN. The Finance Committee is invited to agree to the negotiation of a contract with the firm INIZIATIVE INDUSTRIALI SRL (IT), the lowest bidder, for the provision of minor metalwork, metal fittings, cladding and roofing at CERN for three years for a total amount not exceeding 1 467 895 euros (2 258 301 Swiss francs), not subject to revision for two years. The contract will include options for two one-year extensions beyond the initial three-year period.
C-parameter distribution at N3LL′ including power corrections
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Hoang, André H.; Kolodrubetz, Daniel W.; Mateu, Vicent; Stewart, Iain W.
2015-05-01T23:59:59.000Z
We compute the $e^+e^-$ C-parameter distribution using the soft-collinear effective theory with a resummation to next-to-next-to-next-to-leading-log prime accuracy of the most singular partonic terms. This includes the known fixed-order QCD results up to $O(\alpha_s^3)$, a numerical determination of the two-loop nonlogarithmic term of the soft function, and all logarithmic terms in the jet and soft functions up to three loops. Our result holds for C in the peak, tail, and far tail regions. Additionally, we treat hadronization effects using a field theoretic nonperturbative soft function, with moments $\Omega_n$. To eliminate an $O(\Lambda_{\rm QCD})$ renormalon ambiguity in the soft function, we switch from the $\overline{\rm MS}$ to a short distance “Rgap” scheme to define the leading power correction parameter $\Omega_1$. We show how to simultaneously account for running effects in $\Omega_1$ due to renormalon subtractions and hadron-mass effects, enabling power correction universality between C-parameter and thrust to be tested in our setup. We discuss in detail the impact of resummation and renormalon subtractions on the convergence. In the relevant fit region for $\alpha_s(m_Z)$ and $\Omega_1$, the perturbative uncertainty in our cross section is $\simeq 2.5\%$ at $Q=m_Z$.
Goodness-of-Fit Tests to study the Gaussianity of the MAXIMA data
L. Cayon; F. Argueso; E. Martinez-Gonzalez; J. L. Sanz
2003-06-09T23:59:59.000Z
Goodness-of-Fit tests, including Smooth ones, are introduced and applied to detect non-Gaussianity in Cosmic Microwave Background simulations. We study the power of three different tests: the Shapiro-Francia test (1972), the uncategorised smooth test developed by Rayner and Best (1990), and Neyman's Smooth Goodness-of-Fit test for composite hypotheses (Thomas and Pierce 1979). The Smooth Goodness-of-Fit tests are designed to be sensitive to the presence of ``smooth'' deviations from a given distribution. We study the power of these tests based on the discrimination between Gaussian and non-Gaussian simulations. Non-Gaussian cases are simulated using the Edgeworth expansion and assuming pixel-to-pixel independence. Results show these tests behave similarly and are more powerful than tests directly based on cumulants of order 3, 4, 5 and 6. We have applied these tests to the released MAXIMA data. The applied tests are built to be powerful against deviations from univariate Gaussianity. The Cholesky matrix corresponding to signal (based on an assumed cosmological model) plus noise is used to decorrelate the observations prior to the analysis. Results indicate that the MAXIMA data are compatible with Gaussianity.
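For orientation, the kind of cumulant-based test the abstract uses as its baseline can be sketched in a few lines. This is an illustrative stdlib-only check on sample skewness (the order-3 cumulant), not the smooth tests or Edgeworth simulations of the paper; the exponential alternative and the 3-sigma threshold are arbitrary choices:

```python
# Sketch: a cumulant-based Gaussianity check of the kind the Smooth
# Goodness-of-Fit tests above are compared against (order-3 cumulant).
import random, math

def skewness(xs):
    # standardized third central moment g1
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / n / s2 ** 1.5

random.seed(0)
gauss = [random.gauss(0, 1) for _ in range(20000)]
# mildly non-Gaussian alternative: recentred exponential (skew ~ 2)
skewed = [random.expovariate(1.0) - 1.0 for _ in range(20000)]

# Under Gaussianity the sample skewness is ~N(0, 6/n); |g1| beyond
# 3*sqrt(6/n) flags a deviation at roughly the 3-sigma level.
thresh = 3 * math.sqrt(6 / 20000)
print(round(skewness(gauss), 3), round(skewness(skewed), 3), round(thresh, 3))
```

The smooth tests of the paper are more powerful precisely because they combine several such moment directions into one statistic rather than testing one cumulant at a time.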
SU-E-J-85: Leave-One-Out Perturbation (LOOP) Fitting Algorithm for Absolute Dose Film Calibration
Chu, A; Ahmad, M; Chen, Z; Nath, R [Yale New Haven Hospital/School of Medicine Yale University, New Haven, CT (United States); Feng, W [New York Presbyterian Hospital, Tenafly, NJ (United States)
2014-06-01T23:59:59.000Z
Purpose: To introduce an outlier-recognizing fitting routine for film dosimetry. It is not only flexible enough to work with any linear or non-linear regression, but can also provide information on the minimal number of sampling points, critical sampling distributions, and the evaluation of analytical functions for absolute film-dose calibration. Methods: The technique of leave-one-out (LOO) cross validation is often used for statistical analyses of model performance. We used LOO analyses with perturbed bootstrap fitting, called leave-one-out perturbation (LOOP), for film-dose calibration. Given a threshold, the LOO process detects unfit points (“outliers”) relative to the other cohorts, and a bootstrap fitting process follows to seek possible further improvement through perturbations. Outliers were then reconfirmed by traditional t-test statistics and eliminated, and another LOOP feedback produced the final result. An over-sampled film-dose-calibration dataset was collected as a reference (dose range: 0-800 cGy), and various simulated conditions for outliers and sampling distributions were derived from the reference. Comparisons were made over the various conditions, and the performance of the fitting functions, polynomial and rational, was evaluated. Results: (1) LOOP demonstrates sensitive outlier recognition through the statistical correlation of a left-out outlier with an exceptionally better goodness-of-fit. (2) With sufficient statistical information, LOOP can correct outliers under some low-sampling conditions that other “robust fits”, e.g. Least Absolute Residuals, cannot. (3) Complete cross-validated analyses of LOOP indicate that the rational function demonstrates much superior performance compared to the polynomial. Even with 5 data points including one outlier, LOOP with a rational function can restore more than 95% of the reference values, while the polynomial fitting completely fails under the same conditions.
Conclusion: LOOP can be combined with any fitting routine to function as a “robust fit”. In addition, it can serve as a benchmark for film-dose calibration fitting performance.
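The core LOO idea — an outlier reveals itself because leaving it out yields an exceptionally better goodness-of-fit — can be sketched without the bootstrap and t-test stages. The linear model, the synthetic calibration data, and the gross outlier below are illustrative stand-ins for the paper's rational/polynomial calibration functions:

```python
# Sketch of the leave-one-out step behind LOOP: refit with each point
# left out and flag the point whose omission gives the best fit.
def fit_line(pts):
    # ordinary least squares for y = a + b*x
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def sse(pts, a, b):
    return sum((y - (a + b * x)) ** 2 for x, y in pts)

# calibration-like data (illustrative) with one gross outlier at x = 400
data = [(0, 0.02), (100, 1.01), (200, 2.03), (300, 2.98),
        (400, 1.50), (500, 5.01), (600, 6.00), (700, 6.97)]

loo_sse = []
for i in range(len(data)):
    rest = data[:i] + data[i + 1:]
    a, b = fit_line(rest)
    loo_sse.append(sse(rest, a, b))

# leaving out the outlier yields the exceptionally good (smallest) SSE
outlier_index = loo_sse.index(min(loo_sse))
print(outlier_index, [round(s, 3) for s in loo_sse])
```

In the full LOOP routine this detection step would be followed by perturbed bootstrap refits and a t-test confirmation before the flagged point is discarded.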
Numerical studies of the metamodel fitting and validation processes
Boyer, Edmond
… algorithms and application to a nuclear safety computer code show the relevance of this new sequential approach. … This problem consists in replacing CPU-time-expensive computer models by CPU-inexpensive mathematical functions. … the design used to fit the metamodel has to provide adequate space-filling properties. We adopt a numerical approach …
Fitness Space Structure of a Neuromechanical Randall D. Beer
Beer, Randall D.
Fitness Space Structure of a Neuromechanical System. Randall D. Beer, Cognitive Science Program … and the impact of network architecture on walking performance and evolvability (Beer, 1995a; Beer, Chiel, & Gallagher, 1999; Beer & Gallagher, 1992; Chiel, Beer, & Gallagher, 1999; Psujek, Ames, & Beer, 2006) …
Dynamic Cooperative Coevolutionary Sensor Deployment via Localized Fitness Evaluation
Chen, Yuanzhu Peter
Dynamic Cooperative Coevolutionary Sensor Deployment via Localized Fitness Evaluation. Xingyan Jiang … used to evaluate the quality of sensor placement. The first one is sensing coverage, which is the area … interest in autonomous sensor deployment, where a sensor can only communicate with those within a limited …
Structured Probabilistic Models of Proteins across Spatial and Fitness Landscapes
… amino acid composition in response to changing fitness landscapes. The thesis of this dissertation … interactions quickly and accurately. We then develop a method of learning generative models of amino acid … cocktails that remain effective against natural possible mutant variants of the target. Towards this …
Phenotypic Plasticity Opposes Species Invasions by Altering Fitness Surface
Phenotypic Plasticity Opposes Species Invasions by Altering Fitness Surface. Scott D. Peacor … ecological processes. However, the influence on invasions of phenotypic plasticity, a key component of many species interactions, is unknown. We present a model in which phenotypic plasticity of a resident species …
Tutorial, GECCO'05, Washington D.C. Fitness Approximation
Yang, Shengxiang
… the potentially best individuals with the help of an estimated error bound (Emmerich et al., 2002, 2005; Ulmer et al. …). · Lower confidence bound (Emmerich et al., 2002): $\tilde f = \hat f(x') - \omega\sigma(x')$ ($\omega > 0$) · PoI (Probability of Improvement) (Ulmer et al., 2003; Ong et al., 2005) · Expected Improvement (Schonlau, 1998; Emmerich et al., 2005) (Abstracted
Subterranean barriers including at least one weld
Nickelson, Reva A.; Sloan, Paul A.; Richardson, John G.; Walsh, Stephanie; Kostelnik, Kevin M.
2007-01-09T23:59:59.000Z
A subterranean barrier and method for forming same are disclosed, the barrier including a plurality of casing strings wherein at least one casing string of the plurality of casing strings may be affixed to at least another adjacent casing string of the plurality of casing strings through at least one weld, at least one adhesive joint, or both. A method and system for nondestructively inspecting a subterranean barrier is disclosed. For instance, a radiographic signal may be emitted from within a casing string toward an adjacent casing string and the radiographic signal may be detected from within the adjacent casing string. A method of repairing a barrier including removing at least a portion of a casing string and welding a repair element within the casing string is disclosed. A method of selectively heating at least one casing string forming at least a portion of a subterranean barrier is disclosed.
Power generation method including membrane separation
Lokhandwala, Kaaeid A. (Union City, CA)
2000-01-01T23:59:59.000Z
A method for generating electric power, such as at, or close to, natural gas fields. The method includes conditioning natural gas containing C.sub.3+ hydrocarbons and/or acid gas by means of a membrane separation step. This step creates a leaner, sweeter, drier gas, which is then used as combustion fuel to run a turbine, which is in turn used for power generation.
Rotor assembly including superconducting magnetic coil
Snitchler, Gregory L. (Shrewsbury, MA); Gamble, Bruce B. (Wellesley, MA); Voccio, John P. (Somerville, MA)
2003-01-01T23:59:59.000Z
Superconducting coils and methods of manufacture include a superconductor tape wound concentrically about and disposed along an axis of the coil to define an opening having a dimension which gradually decreases, in the direction along the axis, from a first end to a second end of the coil. Each turn of the superconductor tape has a broad surface maintained substantially parallel to the axis of the coil.
Electric Power Monthly, September 1990 [Glossary included]
Not Available
1990-12-17T23:59:59.000Z
The purpose of this report is to provide energy decision makers with accurate and timely information that may be used in forming various perspectives on electric issues. The power plants considered include coal, petroleum, natural gas, hydroelectric, and nuclear power plants. Data are presented for power generation, fuel consumption, fuel receipts and cost, sales of electricity, and unusual occurrences at power plants. Data are compared at the national, Census division, and state levels. 4 figs., 52 tabs. (CK)
Quantum root-mean-square error and measurement uncertainty relations
Paul Busch; Pekka Lahti; Reinhard F Werner
2014-10-10T23:59:59.000Z
Recent years have witnessed a controversy over Heisenberg's famous error-disturbance relation. Here we resolve the conflict by way of an analysis of the possible conceptualizations of measurement error and disturbance in quantum mechanics. We discuss two approaches to adapting the classic notion of root-mean-square error to quantum measurements. One is based on the concept of noise operator; its natural operational content is that of a mean deviation of the values of two observables measured jointly, and thus its applicability is limited to cases where such joint measurements are available. The second error measure quantifies the differences between two probability distributions obtained in separate runs of measurements and is of unrestricted applicability. We show that there are no nontrivial unconditional joint-measurement bounds for state-dependent errors in the conceptual framework discussed here, while Heisenberg-type measurement uncertainty relations for state-independent errors have been proven.
Deterministic treatment of model error in geophysical data assimilation
Carrassi, Alberto
2015-01-01T23:59:59.000Z
This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational c...
A two reservoir model of quantum error correction
James P. Clemens; Julio Gea-Banacloche
2005-08-22T23:59:59.000Z
We consider a two reservoir model of quantum error correction with a hot bath causing errors in the qubits and a cold bath cooling the ancilla qubits to a fiducial state. We consider error correction protocols both with and without measurement of the ancilla state. The error correction acts as a kind of refrigeration process to maintain the data qubits in a low entropy state by periodically moving the entropy to the ancilla qubits and then to the cold reservoir. We quantify the performance of the error correction as a function of the reservoir temperatures and cooling rate by means of the fidelity and the residual entropy of the data qubits. We also make a comparison with the continuous quantum error correction model of Sarovar and Milburn [Phys. Rev. A 72 012306].
Trial application of a technique for human error analysis (ATHEANA)
Bley, D.C. [Buttonwood Consulting, Inc., Oakton, VA (United States)]; Cooper, S.E. [Science Applications International Corp., Reston, VA (United States)]; Parry, G.W. [NUS, Gaithersburg, MD (United States)] [and others]
1996-10-01T23:59:59.000Z
The new method for HRA, ATHEANA, has been developed based on a study of the operating history of serious accidents and an understanding of the reasons why people make errors. Previous publications associated with the project have dealt with the theoretical framework under which errors occur and the retrospective analysis of operational events. This is the first attempt to use ATHEANA in a prospective way, to select and evaluate human errors within the PSA context.
Temperature-dependent errors in nuclear lattice simulations
Dean Lee; Richard Thomson
2007-01-17T23:59:59.000Z
We study the temperature dependence of discretization errors in nuclear lattice simulations. We find that for systems with strong attractive interactions the predominant error arises from the breaking of Galilean invariance. We propose a local "well-tempered" lattice action which eliminates much of this error. The well-tempered action can be readily implemented in lattice simulations for nuclear systems as well as cold atomic Fermi systems.
Hofland, G.S.; Barton, C.C.
1990-10-01T23:59:59.000Z
The computer program FREQFIT is designed to perform regression and statistical chi-squared goodness-of-fit analysis on one-dimensional or two-dimensional data. The program features an interactive user dialogue, numerous help messages, an option for screen or line printer output, and the flexibility to use practically any commercially available graphics package to create plots of the program's results. FREQFIT is written in Microsoft QuickBASIC, for IBM-PC compatible computers. A listing of the QuickBASIC source code for the FREQFIT program, a user manual, and sample input data, output, and plots are included. 6 refs., 1 fig.
Error estimates for the Euler discretization of an optimal control ...
Joseph FrÃ©dÃ©ric Bonnans
2014-12-10T23:59:59.000Z
Dec 10, 2014 ... Abstract: We study the error introduced in the solution of an optimal control problem with first order state constraints, for which the trajectories ...
Cosmic Ray Spectral Deformation Caused by Energy Determination Errors
Per Carlson; Conny Wannemark
2005-05-10T23:59:59.000Z
Using simulation methods, distortion effects on energy spectra caused by errors in the energy determination have been investigated. For cosmic ray proton spectra, falling steeply with kinetic energy E as $E^{-2.7}$, significant effects appear. When magnetic spectrometers are used to determine the energy, the relative error increases linearly with energy, and distortions of sinusoidal form appear starting at an energy that depends significantly on the error distribution, but is lower than the energy corresponding to the Maximum Detectable Rigidity of the spectrometer. The effect should be taken into consideration when comparing data from different experiments, which often have different error distributions.
Optimized Learning with Bounded Error for Feedforward Neural Networks
Maggiore, Manfredi
Optimized Learning with Bounded Error for Feedforward Neural Networks. A. Alessandri, M. Sanguineti … -based learnings. A. Alessandri is with the Naval Automation …
New Fractional Error Bounds for Polynomial Systems with ...
2014-07-27T23:59:59.000Z
Our major result extends the existing error bounds from the system involving only a ... linear complementarity systems with polynomial data as well as high-order ...
Homological Error Correction: Classical and Quantum Codes
H. Bombin; M. A. Martin-Delgado
2006-05-10T23:59:59.000Z
We prove several theorems characterizing the existence of homological error correction codes both classically and quantumly. Not every classical code is homological, but we find a family of classical homological codes saturating the Hamming bound. In the quantum case, we show that for non-orientable surfaces it is impossible to construct homological codes based on qudits of dimension $D>2$, while for orientable surfaces with boundaries it is possible to construct them for arbitrary dimension $D$. We give a method to obtain planar homological codes based on the construction of quantum codes on compact surfaces without boundaries. We show how the original Shor's 9-qubit code can be visualized as a homological quantum code. We study the problem of constructing quantum codes with optimal encoding rate. In the particular case of toric codes we construct an optimal family and give an explicit proof of its optimality. For homological quantum codes on surfaces of arbitrary genus we also construct a family of codes asymptotically attaining the maximum possible encoding rate. We provide the tools of homology group theory for graphs embedded on surfaces in a self-contained manner.
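The classical Hamming (sphere-packing) bound that the homological code family above is said to saturate is easy to check numerically. A minimal sketch; the [7,4] Hamming and [23,12] Golay codes are standard examples of equality (perfect codes), not codes taken from this paper:

```python
# Sketch: the binary Hamming bound. An [n, k] code correcting t errors
# must satisfy sum_{j<=t} C(n, j) <= 2^(n-k); equality means the code
# is perfect (its decoding spheres tile the whole space).
from math import comb

def hamming_bound_ok(n, k, t):
    return sum(comb(n, j) for j in range(t + 1)) <= 2 ** (n - k)

def is_perfect(n, k, t):
    return sum(comb(n, j) for j in range(t + 1)) == 2 ** (n - k)

print(hamming_bound_ok(7, 4, 1), is_perfect(7, 4, 1))     # [7,4] Hamming: True True
print(hamming_bound_ok(23, 12, 3), is_perfect(23, 12, 3)) # [23,12] Golay: True True
```

A family "saturating the Hamming bound", as in the abstract, is one for which the inequality above approaches equality.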
Multiverse rate equation including bubble collisions
Michael P. Salem
2013-02-19T23:59:59.000Z
The volume fractions of vacua in an eternally inflating multiverse are described by a coarse-grain rate equation, which accounts for volume expansion and vacuum transitions via bubble formation. We generalize the rate equation to account for bubble collisions, including the possibility of classical transitions. Classical transitions can modify the details of the hierarchical structure among the volume fractions, with potential implications for the staggering and Boltzmann-brain issues. Whether or not our vacuum is likely to have been established by a classical transition depends on the detailed relationships among transition rates in the landscape.
Stress and domestication traits increase the relative fitness of crop-wild hybrids in sunflower
Keywords: crop-wild hybrid, domestication, G × E interactions, GM crops, herbicide, introgression, relative fitness
Alabama in Huntsville, University of
HPE Fitness and Wellness Certificate Completion Form Instructions. As you are nearing completion of (or have already completed) your Fitness and Wellness Credit Certificate, this is the final step … met, individuals must submit a completed HPE Fitness and Wellness Credit Certificate Completion Form …
Optical panel system including stackable waveguides
DeSanto, Leonard (Dunkirk, MD); Veligdan, James T. (Manorville, NY)
2007-11-20T23:59:59.000Z
An optical panel system including stackable waveguides is provided. The optical panel system displays a projected light image and comprises a plurality of planar optical waveguides in a stacked state. The optical panel system further comprises a support system that aligns and supports the waveguides in the stacked state. In one embodiment, the support system comprises at least one rod, wherein each waveguide contains at least one hole, and wherein each rod is positioned through a corresponding hole in each waveguide. In another embodiment, the support system comprises at least two opposing edge structures having the waveguides positioned therebetween, wherein each opposing edge structure contains a mating surface, wherein opposite edges of each waveguide contain mating surfaces which are complementary to the mating surfaces of the opposing edge structures, and wherein each mating surface of the opposing edge structures engages a corresponding complementary mating surface of the opposite edges of each waveguide.
Optical panel system including stackable waveguides
DeSanto, Leonard; Veligdan, James T.
2007-03-06T23:59:59.000Z
An optical panel system including stackable waveguides is provided. The optical panel system displays a projected light image and comprises a plurality of planar optical waveguides in a stacked state. The optical panel system further comprises a support system that aligns and supports the waveguides in the stacked state. In one embodiment, the support system comprises at least one rod, wherein each waveguide contains at least one hole, and wherein each rod is positioned through a corresponding hole in each waveguide. In another embodiment, the support system comprises at least two opposing edge structures having the waveguides positioned therebetween, wherein each opposing edge structure contains a mating surface, wherein opposite edges of each waveguide contain mating surfaces which are complementary to the mating surfaces of the opposing edge structures, and wherein each mating surface of the opposing edge structures engages a corresponding complementary mating surface of the opposite edges of each waveguide.
Thermovoltaic semiconductor device including a plasma filter
Baldasaro, Paul F. (Clifton Park, NY)
1999-01-01T23:59:59.000Z
A thermovoltaic energy conversion device and related method for converting thermal energy into an electrical potential. An interference filter is provided on a semiconductor thermovoltaic cell to pre-filter black body radiation. The semiconductor thermovoltaic cell includes a P/N junction supported on a substrate which converts incident thermal energy below the semiconductor junction band gap into electrical potential. The semiconductor substrate is doped to provide a plasma filter which reflects back energy having a wavelength which is above the band gap and which is ineffectively filtered by the interference filter, through the P/N junction to the source of radiation thereby avoiding parasitic absorption of the unusable portion of the thermal radiation energy.
Exploring NK Fitness Landscapes Using an Imitative Learning Search
Fontanari, José F
2015-01-01T23:59:59.000Z
The idea that a group of cooperating agents can solve problems more efficiently than when those agents work independently is hardly controversial, despite the little quantitative groundwork to support it. Here we investigate the performance of a group of agents in locating the global maxima of NK fitness landscapes with varying degrees of ruggedness. Cooperation is taken into account through imitative learning and the broadcasting of messages informing on the fitness of each agent. We find a trade-off between the group size and the frequency of imitation: for rugged landscapes, too much imitation or too large a group yield a performance poorer than that of independent agents. By decreasing the diversity of the group, imitative learning may lead to duplication of work and hence to a decrease of its effective size. However, when the parameters are set to optimal values the cooperative group substantially outperforms the independent agents.
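The NK landscape underlying this study is itself a compact algorithm: each of the N loci contributes a random table value indexed by its own state and the states of K epistatic neighbours. A minimal sketch of Kauffman's model; the adjacent-neighbour convention and the parameter values are one common choice, not necessarily the paper's:

```python
# Sketch: fitness of a binary string on an NK landscape. Ruggedness
# (number of local maxima) grows with K; K = 0 gives a smooth,
# single-peaked landscape.
import random

def make_nk(n, k, seed=0):
    rng = random.Random(seed)
    # one lookup table per locus: 2^(k+1) random contributions in [0, 1)
    tables = [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]
    def fitness(bits):
        total = 0.0
        for i in range(n):
            # locus i together with its k cyclic neighbours forms an index
            idx = 0
            for j in range(k + 1):
                idx = (idx << 1) | bits[(i + j) % n]
            total += tables[i][idx]
        return total / n
    return fitness

f = make_nk(n=12, k=3)
rng = random.Random(42)
pop = [[rng.randrange(2) for _ in range(12)] for _ in range(5)]
for s in pop:
    print("".join(map(str, s)), round(f(s), 4))
```

In the imitative-learning setup of the abstract, each agent would broadcast its fitness under `f` and copy bits from the current best agent, which is where the group-size/imitation-frequency trade-off arises.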
Canect: Matching You the Best-fit Translation Service
Guo, Yujie
2011-05-31T23:59:59.000Z
Information about the project: http://www.torry- ue.com/project/Thesis2011.htm. You will find the proposal file, research data, design materials and the HTML prototype for the system. I also attached the project presentation, scenario video, user experience map and testing Adobe SWF files as supporting materials for the project. CANECT: MATCHING YOU THE BEST-FIT TRANSLATION SERVICE. Design Research: As the first part of the thesis, research was carried out in three different directions …
Effect of shrink fits on threshold speeds of rotordynamic instability
Mir, MD. Mofazzal Hossain
2001-01-01T23:59:59.000Z
. . . 11
CHAPTER IV TEST APPARATUS . . . 13
CHAPTER V RESULTS AND DISCUSSION . . . 17
5.1 Rap Test . . . 17
5.2 Running Test . . . 30
5.3 Modeling and the Prediction of Threshold Speed of Instability . . . 42
5.3.1 Matching the Base Case . . . 42
5.3.2 Gunter's Prediction Using C_q . . . 47
5.3.3 Modeling and Prediction of the Threshold Speed Using the XLTRC Code . . . 51
5.4 Prediction of the Onset Speed of Instability for a Tight Interference Fit . . .
ERROR VISUALIZATION FOR TANDEM ACOUSTIC MODELING ON THE AURORA TASK
Ellis, Dan
Manuel J. Reyes ... This structure reduces the error rate on the Aurora 2 noisy English digits task by more than 50% compared ... development of tandem systems showed an improvement in the performance of these systems on the Aurora task [2]
Numerical Construction of Likelihood Distributions and the Propagation of Errors
J. Swain; L. Taylor
1997-12-12T23:59:59.000Z
The standard method for the propagation of errors, based on a Taylor series expansion, is approximate and frequently inadequate for realistic problems. A simple and generic technique is described in which the likelihood is constructed numerically, thereby greatly facilitating the propagation of errors.
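A toy illustration of the point above: for a nonlinear function, first-order (Taylor-series) propagation can badly underestimate the spread that a numerically constructed distribution reveals directly. The sketch below, with hypothetical helper names, propagates x ~ N(0, 0.5) through exp(x); simple sampling stands in for the paper's numerical likelihood construction.

```python
import math
import random

def taylor_sigma(f, x0, sigma_x, h=1e-6):
    """First-order error propagation: sigma_y = |f'(x0)| * sigma_x,
    with the derivative taken by central difference."""
    deriv = (f(x0 + h) - f(x0 - h)) / (2 * h)
    return abs(deriv) * sigma_x

def sampled_sigma(f, x0, sigma_x, n=200_000, seed=0):
    """Numerical alternative: push samples of x through f and read the
    spread of y off the resulting distribution directly."""
    rng = random.Random(seed)
    ys = [f(rng.gauss(x0, sigma_x)) for _ in range(n)]
    mean = sum(ys) / n
    return math.sqrt(sum((y - mean) ** 2 for y in ys) / (n - 1))

print(taylor_sigma(math.exp, 0.0, 0.5))   # 0.5: the linearized estimate
print(sampled_sigma(math.exp, 0.0, 0.5))  # noticeably larger: nonlinearity matters
```

For exp(x) the sampled distribution is lognormal, whose true standard deviation (about 0.60 here) exceeds the linearized 0.5, illustrating why the Taylor approach is "frequently inadequate".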
Calibration and Error in Placental Molecular Clocks: A Conservative
Hadly, Elizabeth
Calibration and Error in Placental Molecular Clocks: A Conservative Approach Using for calibrating both mitogenomic and nucleogenomic placental timescales. We applied these reestimates to the most calibration error may inflate the power of the molecular clock when testing the time of ordinal
Error Control of Iterative Linear Solvers for Integrated Groundwater Models
Bai, Zhaojun
gradient method or Generalized Minimum RESidual (GMRES) method, is how to choose the residual tolerance for integrated groundwater models, which are implicitly coupled to another model, such as surface water models the correspondence between the residual error in the preconditioned linear system and the solution error. Using
PROPAGATION OF ERRORS IN SPATIAL ANALYSIS Peter P. Siska
Hung, I-Kuai
, the conversion of data from analog to digital form used to be an extremely time-consuming process. At present process then the resulting error is inflated up to 20 percent for each grid cell of the final map. The magnitude of errors naturally increases with an addition of every new layer entering the overlay process
Error detection through consistency checking Peng Gong* Lan Mu#
Silver, Whendee
Error detection through consistency checking Peng Gong* Lan Mu# *Center for Assessment & Monitoring Hall, University of California, Berkeley, Berkeley, CA 94720-3110 gong@nature.berkeley.edu mulan, accessibility, and timeliness as recorded in the lineage data (Chen and Gong, 1998). Spatial error refers
Mutual information, bit error rate and security in Wójcik's scheme
Zhanjun Zhang
2004-02-21T23:59:59.000Z
In this paper the correct calculations of the mutual information of the whole transmission and the quantum bit error rate (QBER) are presented. Mistakes in the general conclusions about the mutual information, the quantum bit error rate (QBER) and the security in Wójcik's paper [Phys. Rev. Lett. 90, 157901 (2003)] have been pointed out.
Uniform and optimal error estimates of an exponential wave ...
2014-05-01T23:59:59.000Z
of the error propagation, cut-off of the nonlinearity, and the energy method. ...... gives Lemma 3.4 for the local truncation error, which is of spectral order in ... estimates, we adopt a strategy similar to the finite difference method [4] (cf. diagram.
Quasi-sparse eigenvector diagonalization and stochastic error correction
Dean Lee
2000-08-30T23:59:59.000Z
We briefly review the diagonalization of quantum Hamiltonians using the quasi-sparse eigenvector (QSE) method. We also introduce the technique of stochastic error correction, which systematically removes the truncation error of the QSE result by stochastically sampling the contribution of the remaining basis states.
Mining API Error-Handling Specifications from Source Code
Xie, Tao
Mining API Error-Handling Specifications from Source Code Mithun Acharya and Tao Xie Department ... it difficult to mine error-handling specifications through manual inspection of source code. In this paper, we ... without any user input. In our framework, we adapt a trace generation technique to distinguish
Entanglement and Quantum Error Correction with Superconducting Qubits
Entanglement and Quantum Error Correction with Superconducting Qubits. A Dissertation Presented ... David Reed. All rights reserved. ... is to use superconducting quantum bits in the circuit quantum electrodynamics (cQED) architecture. There
ARTIFICIAL INTELLIGENCE 223 A Geometric Approach to Error
Richardson, David
may not even exist. For this reason we investigate error detection and recovery (EDR) strategies. We ... and implementational questions remain. The second contribution is a formal, geometric approach to EDR. While EDR
Natural Priors, CMSSM Fits and LHC Weather Forecasts
Ben C Allanach; Kyle Cranmer; Christopher G Lester; Arne M Weber
2007-07-05T23:59:59.000Z
Previous LHC forecasts for the constrained minimal supersymmetric standard model (CMSSM), based on current astrophysical and laboratory measurements, have used priors that are flat in the parameter tan beta, while being constrained to postdict the central experimental value of MZ. We construct a new, more natural prior with a measure in mu and B (the more fundamental MSSM parameters from which tan beta and MZ are actually derived). We find that this choice leads to a well defined fine-tuning measure in the parameter space. We investigate its effect on global CMSSM fits to indirect constraints, providing posterior probability distributions for Large Hadron Collider (LHC) sparticle production cross sections. The change in priors has a significant effect, strongly suppressing the pseudoscalar Higgs boson dark matter annihilation region and diminishing the probable values of sparticle masses. We also show how to interpret fit information from a Markov Chain Monte Carlo in a frequentist fashion, namely by using the profile likelihood. Bayesian and frequentist interpretations of CMSSM fits are compared and contrasted.
Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk [Department of Mathematics, Royal Holloway University of London, Egham TW20 0EX (United Kingdom); Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent (Belgium); Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com [Física Teòrica: Informació i Fenomens Quàntics, Universitat Autònoma de Barcelona, ES-08193 Bellaterra, Barcelona (Spain); Mathematical Institute, Budapest University of Technology and Economics, Egry József u 1., Budapest 1111 (Hungary)
2014-10-15T23:59:59.000Z
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states ρ_1, …, ρ_r. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(ρ_1, …, ρ_r), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences min_{j
An Efficient Approach towards Mitigating Soft Errors Risks
Sadi, Muhammad Sheikh; Uddin, Md Nazim; Jürjens, Jan
2011-01-01T23:59:59.000Z
Smaller feature size, higher clock frequency and lower power consumption are core concerns of today's nano-technology, a result of the continuous downscaling of CMOS technologies. The resulting 'device shrinking' reduces the soft error tolerance of VLSI circuits, as very little energy is needed to change their states. Safety critical systems are very sensitive to soft errors. A bit flip due to a soft error can change the value of a critical variable, and consequently the system control flow can change completely, leading to system failure. To minimize soft error risks, a novel methodology is proposed to detect and recover from soft errors considering only 'critical code blocks' and 'critical variables' rather than all variables and/or blocks in the whole program. The proposed method reduces space and time overhead in comparison to existing dominant approaches.
Grid-scale Fluctuations and Forecast Error in Wind Power
G. Bel; C. P. Connaughton; M. Toots; M. M. Bandi
2015-03-29T23:59:59.000Z
The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_{\tau}$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_{\zeta}$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no a priori knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_{\tau}$) and the scaling error ($e_{\zeta}$).
Engine lubrication circuit including two pumps
Lane, William H.
2006-10-03T23:59:59.000Z
A lubrication pump coupled to the engine is sized such that it can supply the engine with a predetermined flow volume as soon as the engine reaches a peak torque engine speed. In engines that operate predominately at speeds above the peak torque engine speed, the lubrication pump often produces lubrication fluid in excess of the predetermined flow volume, and the excess is bypassed back to a lubrication fluid source. This arguably results in wasted power. In order to lubricate an engine more efficiently, a lubrication circuit includes a lubrication pump and a variable delivery pump. The lubrication pump is operably coupled to the engine, and the variable delivery pump is in communication with a pump output controller that is operable to vary the lubrication fluid output from the variable delivery pump as a function of at least one of engine speed, lubrication flow volume, or system pressure. Thus, the lubrication pump can be sized to produce the predetermined flow volume at the speed range at which the engine predominately operates, while the variable delivery pump supplements lubrication fluid delivery at engine speeds below the predominant engine speed range.
Models of Procyon A including seismic constraints
P. Eggenberger; F. Carrier; F. Bouchy
2005-01-14T23:59:59.000Z
Detailed models of Procyon A based on new asteroseismic measurements by Eggenberger et al. (2004) have been computed using the Geneva evolution code including shellular rotation and atomic diffusion. By combining all non-asteroseismic observables now available for Procyon A with these seismological data, we find that the observed mean large spacing of 55.5 ± 0.5 µHz favours a mass of 1.497 M_sol for Procyon A. We also determine the following global parameters of Procyon A: an age of t = 1.72 ± 0.30 Gyr, an initial helium mass fraction Y_i = 0.290 ± 0.010, a nearly solar initial metallicity (Z/X)_i = 0.0234 ± 0.0015 and a mixing-length parameter alpha = 1.75 ± 0.40. Moreover, we show that the effects of rotation on the inner structure of the star may be revealed by asteroseismic observations if frequencies can be determined with a high precision. Existing seismological data of Procyon A are unfortunately not accurate enough to really test these differences in the input physics of our models.
Structure of Bright 2MASS Galaxies: 2D Fits to the Ks-band Surface Brightness Profiles
Daniel H. McIntosh; Ari H. Maller; Neal Katz; Martin D. Weinberg
2002-09-01T23:59:59.000Z
The unprecedented sky coverage and photometric uniformity of the Two Micron All Sky Survey (2MASS) provides a rich resource for obtaining a detailed understanding of the galaxies populating our local (z<0.1) Universe. A full characterization of the physical structure of nearby galaxies is essential for theoretical and observational studies of galaxy evolution and structure formation. We have begun a quantified description of the internal structure and morphology of 10,000 bright (10
Zhao, Gong-Bo, E-mail: gongbo@icosmology.info [National Astronomy Observatories, Chinese Academy of Science, Beijing 100012 (China); Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom)
2014-04-01T23:59:59.000Z
Based on a suite of N-body simulations of the Hu-Sawicki model of f(R) gravity with different sets of model and cosmological parameters, we develop a new fitting formula with a numeric code, MGHalofit, to calculate the nonlinear matter power spectrum P(k) for the Hu-Sawicki model. We compare the MGHalofit predictions at various redshifts (z ≤ 1) to the f(R) simulations and find that the relative error of the MGHalofit fitting formula for P(k) is no larger than 6% at k ≤ 1 h Mpc^-1 and 12% at k in (1, 10] h Mpc^-1, respectively. Based on a sensitivity study of an ongoing and a future spectroscopic survey, we estimate the detectability of a signal of modified gravity described by the Hu-Sawicki model using the power spectrum up to quasi-nonlinear scales.
New axion and hidden photon constraints from a solar data global fit
Vinyoles, Núria; Villante, Francesco L; Basu, Sarbani; Redondo, Javier; Isern, Jordi
2015-01-01T23:59:59.000Z
We present a new statistical analysis that combines helioseismology (sound speed, surface helium and convective radius) and solar neutrino observations (boron and beryllium fluxes) to place upper limits to the properties of non standard weakly interacting particles. Our analysis includes theoretical and observational errors, accounts for tensions between input parameters of solar models and can be easily extended to include other observational constraints. We present two applications to test the method: the well studied case of axions and axion-like particles and the more novel case of low mass hidden photons. For axions we obtain an upper limit at 3 sigma for the axion-photon coupling constant of g_a-gamma solar constraints based on the Standard Solar Models showing the power of our global statistical ap...
Logical Error Rate Scaling of the Toric Code
Fern H. E. Watson; Sean D. Barrett
2014-09-26T23:59:59.000Z
To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behavior in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead -- the total number of physical qubits required to perform error correction.
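The two regimes are easiest to see in a code simple enough to decode by hand. The sketch below uses a distance-d repetition code under i.i.d. bit flips rather than the toric code, as a stand-in illustration of the small-p counting argument: with t = (d-1)//2 correctable errors, the logical error rate approaches C(d, t+1) p^(t+1) as p → 0. All names are illustrative.

```python
import math
import random

def logical_error_rate(d, p, trials=100_000, seed=0):
    """Monte Carlo logical error rate of a distance-d repetition code under
    i.i.d. bit-flip noise with a majority-vote decoder."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(d))
        if flips > d // 2:  # majority of bits flipped: the decoder fails
            fails += 1
    return fails / trials

def leading_order(d, p):
    """Small-p prediction from counting minimal failing configurations:
    P_L ~ C(d, t+1) * p^(t+1), with t = (d-1)//2 correctable errors."""
    t = (d - 1) // 2
    return math.comb(d, t + 1) * p ** (t + 1)

for p in (0.05, 0.01):
    print(p, logical_error_rate(5, p), leading_order(5, p))
```

As p shrinks, the Monte Carlo estimate converges onto the counting prediction, which is the same style of argument the abstract applies to the toric code's low-error-rate regime.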
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01T23:59:59.000Z
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
Wind Power Forecasting Error Distributions: An International Comparison; Preprint
Hodge, B. M.; Lew, D.; Milligan, M.; Holttinen, H.; Sillanpaa, S.; Gomez-Lazaro, E.; Scharff, R.; Soder, L.; Larsen, X. G.; Giebel, G.; Flynn, D.; Dobschinski, J.
2012-09-01T23:59:59.000Z
Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.
Universal Framework for Quantum Error-Correcting Codes
Zhuo Li; Li-Juan Xing
2009-01-04T23:59:59.000Z
We present a universal framework for quantum error-correcting codes, i.e., the one that applies for the most general quantum error-correcting codes. This framework is established on the group algebra, an algebraic notation for the nice error bases of quantum systems. The nicest thing about this framework is that we can characterize the properties of quantum codes by the properties of the group algebra. We show how it characterizes the properties of quantum codes as well as generates some new results about quantum codes.
Wave forces on monotower structures fitted with icebreaking cones
Harrington, Michael Gerard
1987-01-01T23:59:59.000Z
FIG. 7. Schematic of the Model Icebreaking Cones.
Table 5. Characteristics of the Model Icebreaking Cones (cone angle, diameters, mass, volume).
... and Imposed Ice Loads on a Vertical-sided Structure and a Structure Fitted with an Icebreaking Cone.
Table 2. Common Spreading Functions and Their Normalization Factors.
Table 3. Bandwidth Factors and Their Corresponding Structural Responses.
Natural Priors, CMSSM Fits and LHC Weather Forecasts
Allanach, B C; Cranmer, Kyle; Lester, Christopher G; Weber, Arne M
2007-08-07T23:59:59.000Z
arXiv:0705.0487v3 [hep-ph] 5 Jul 2007. Preprint typeset in JHEP style - HYPER VERSION. DAMTP-2007-18, Cavendish-HEP-2007-03, MPP-2007-36. Natural Priors, CMSSM Fits and LHC Weather Forecasts. Benjamin C Allanach, Kyle Cranmer... 's likely discoveries. There are big differences between the nature of the questions answered by a forecast and the questions that will be answered by the experiments themselves when they have acquired compelling data. A weather forecast predicting "severe...
Fitting Parton Distribution Data with Multiplicative Normalization Uncertainties
The NNPDF Collaboration; Richard D. Ball; Luigi Del Debbio; Stefano Forte; Alberto Guffanti; Jose I. Latorre; Juan Rojo; Maria Ubiali
2010-06-01T23:59:59.000Z
We consider the generic problem of performing a global fit to many independent data sets each with a different overall multiplicative normalization uncertainty. We show that the methods in common use to treat multiplicative uncertainties lead to systematic biases. We develop a method which is unbiased, based on a self--consistent iterative procedure. We demonstrate the use of this method by applying it to the determination of parton distribution functions with the NNPDF methodology, which uses a Monte Carlo method for uncertainty estimation.
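The bias and the iterative cure can be demonstrated on a two-point average. The sketch below is not the NNPDF code: it implements a plain covariance-matrix average with a common multiplicative normalization uncertainty (which pulls the naive result low, the well-known bias the abstract refers to) and a self-consistent iteration that references the normalization term to the current fit instead of the data. All names are illustrative.

```python
def covariance_average(y, s, frac_norm, ref):
    """Weighted average with covariance C = diag(s^2) + frac_norm^2 * r r^T,
    where r is the reference vector used for the normalization term.
    C is inverted in closed form via the Sherman-Morrison identity."""
    dinv = [1.0 / si ** 2 for si in s]
    r = ref
    denom = 1.0 + frac_norm ** 2 * sum(di * ri * ri for di, ri in zip(dinv, r))

    def quad(a, b):  # a^T C^{-1} b
        ab = sum(di * ai * bi for di, ai, bi in zip(dinv, a, b))
        ar = sum(di * ai * ri for di, ai, ri in zip(dinv, a, r))
        br = sum(di * bi * ri for di, bi, ri in zip(dinv, b, r))
        return ab - frac_norm ** 2 * ar * br / denom

    ones = [1.0] * len(y)
    return quad(ones, y) / quad(ones, ones)

y = [9.0, 11.0]   # two compatible measurements of the same quantity
s = [0.5, 0.5]    # equal statistical errors
fn = 0.10         # 10% common multiplicative normalization uncertainty

naive = covariance_average(y, s, fn, ref=y)  # normalization scaled by the data
est = sum(y) / len(y)
for _ in range(10):                          # self-consistent iteration: scale
    est = covariance_average(y, s, fn, ref=[est] * len(y))  # by the current fit
print(naive, est)
```

The naive average lands well below the simple mean of 10, while the iterated, self-consistently referenced average recovers it: a minimal analogue of the unbiased iterative procedure the abstract describes.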
Property:Incentive/PVResFitDolKWh | Open Energy Information
INJECTION STRAIGHT PULSED MAGNET ERROR TOLERANCE STUDY FOR TOP-OFF INJECTION
Wang, G.M.; Shaftan, T.; Fliller, R.; Parker, B.; Heese, R.; Kowalski, S.; Willeke, F.
2011-03-28T23:59:59.000Z
NSLS-II is designed to work in top-off injection mode. The injection straight includes a septum and four fast kicker magnets. The pulsed magnet errors will excite a betatron oscillation. This paper gives the formulas for each error's contribution to the oscillation amplitude at various source points in the ring. These are compared with simulation results. Based on the simple formulas, we can specify the error tolerances on the pulsed magnets with the goal of minimizing the injection transient, and scale them to similar machines. The NSLS-II is a 3 GeV third generation synchrotron light source under construction at Brookhaven National Laboratory. Due to its short lifetime, the NSLS-II storage ring requires top-off injection (once per minute), during which the stored beam orbit should ideally remain undisturbed. But errors from the SR pulsed magnets at the injection straight - the kickers (non-closed injection bump) and the pulsed septum (time-dependent stray field) - excite a betatron oscillation of the stored beam. The magnitude of the perturbation can be large, disturbing some of the user experiments. In the 2010 injection straight review, based on experts' experiences at ALS, DIAMOND, SLS and SPEAR, we came to the conclusion that the acceptable oscillation amplitude at the long straight is 100 µm (i.e. 0.7 σx) in the horizontal plane and 12 µm (2.5 σy) in the vertical plane for NSLS-II. This paper gives analytic estimates of the tolerances for the different error sources from the pulsed magnets and scales them to our requirements. The result is compared with simulation.
Pendulum Shifts, Context, Error, and Personal Accountability
Harold Blackman; Oren Hester
2011-09-01T23:59:59.000Z
This paper describes a series of tools that were developed to achieve a balance in understanding LOWs and the human component of events (including accountability) as the INL continues its shift to a learning culture where people report, are accountable, and are interested in making a positive difference - and want to report because information is handled correctly and the result benefits both the reporting individual and the organization. We present our model for understanding these interrelationships and the initiatives that were undertaken to improve overall performance.
Forward Error Correction and Functional Programming
Bull, Tristan Michael
2011-04-25T23:59:59.000Z
defined which provide an interface to Fabric. A subset of these are included below:

inStdLogic :: String -> Fabric (Seq Bool)
inStdLogicVector :: (Size x) => String -> Fabric (Seq (Unsigned x))
outStdLogic :: String -> Seq Bool -> Fabric ()
outStdLogicVector :: (Size x) => String -> Seq (Unsigned x) -> Fabric ()

inStdLogic and inStdLogicVector each name an input, while outStdLogic and outStdLogicVector each name an output and return a Fabric. This interface can be used to build a Fabric for the counter example...
Error Prevention as Developed in Airlines
Logan, Timothy J. [Operational Safety, Southwest Airlines, Dallas, TX (United States)], E-mail: tim.logan@wnco.com
2008-05-01T23:59:59.000Z
The airline industry is a high-risk endeavor. Tens of thousands of flights depart each day carrying millions of passengers with the potential for catastrophic consequences. To manage and mitigate this risk, airline operators, labor unions, and the Federal Aviation Administration have developed a partnership approach to improving safety. This partnership includes cooperative programs such as the Aviation Safety Action Partnership and the Flight Operational Quality Assurance. It also involves concentrating on the key aspects of aircraft maintenance reliability and employee training. This report discusses recent enhancements within the airline industry in the areas of proactive safety programs and the move toward safety management systems that will drive improvements in the future.
M. Zalewski; J. Dobaczewski; W. Satula; T. R. Werner
2008-01-07T23:59:59.000Z
A new strategy of fitting the coupling constants of the nuclear energy density functional is proposed, which shifts attention from ground-state bulk to single-particle properties. The latter are analyzed in terms of the bare single-particle energies and mass, shape, and spin core-polarization effects. Fit of the isoscalar spin-orbit and both isoscalar and isovector tensor coupling constants directly to the f5/2-f7/2 spin-orbit splittings in 40Ca, 56Ni, and 48Ca is proposed as a practical realization of this new programme. It is shown that this fit requires drastic changes in the isoscalar spin-orbit strength and the tensor coupling constants as compared to the commonly accepted values but it considerably and systematically improves basic single-particle properties including spin-orbit splittings and magic-gap energies. Impact of these changes on nuclear binding energies is also discussed.
Hung, I-Kuai
Prediction of kriging errors. Copyright © 2005 John Wiley & Sons, Ltd. Earth Surface Processes and Landforms 30, 601-612 (2005). The construction of continuous surfaces including the digital elevation and terrain models (DEM, DTM) can
A multi-site analysis of random error in tower-based measurements of carbon and energy fluxes
2006. Abstract: Measured surface-atmosphere fluxes of energy (sensible heat, H, and latent heat, LE) ... of which include "tall tower" instrumentation), one grassland site, and one agricultural site, to conduct
Faraday rotation data analysis with least-squares elliptical fitting
White, Adam D.; McHale, G. Brent; Goerz, David A.; Speer, Ron D. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)
2010-10-15T23:59:59.000Z
A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting to measured data. The least-squares fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.
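As a rough illustration of fitting conic parameters to Lissajous-type data, the sketch below performs an algebraic least-squares conic fit with the constant term normalized to 1. This is simpler than the direct elliptical fitting the paper describes (no ellipse constraint is enforced during the solve), and all names and the synthetic data are illustrative.

```python
import math

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_conic(pts):
    """Least-squares fit of a x^2 + b xy + c y^2 + d x + e y = 1 to (x, y) data.
    (Normalizing the constant term to 1 assumes the conic misses the origin.)"""
    rows = [[x * x, x * y, y * y, x, y] for x, y in pts]
    n = 5
    # normal equations: (R^T R) p = R^T 1
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    atb = [sum(r[i] for r in rows) for i in range(n)]
    return solve(ata, atb)

# synthetic Lissajous-style data: an ellipse sampled without noise
pts = [(3 * math.cos(t) + 0.5 * math.sin(t), 2 * math.sin(t))
       for t in (0.3 * k for k in range(40))]
a, b, c, d, e = fit_conic(pts)
```

The recovered conic coefficients can then be used to rotate, translate, and rescale the data, as in the analysis the abstract describes; checking the sign of the discriminant b^2 - 4ac confirms the fitted conic is an ellipse.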
Servo control booster system for minimizing following error
Wise, William L. (Mountain View, CA)
1985-01-01T23:59:59.000Z
A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
A Posteriori Error Estimation for - Department of Mathematics ...
Shuhao Cao supervised under Professor Zhiqiang Cai
2013-10-31T23:59:59.000Z
Oct 19, 2013 ... the "correct" Hilbert space the true flux µ^{-1}∇×u lies in, to recover a ...... The error heat map shows that the ZZ-patch recovery estimator leads.
Quantum error correcting codes based on privacy amplification
Zhicheng Luo
2008-08-10T23:59:59.000Z
Calderbank-Shor-Steane (CSS) quantum error-correcting codes are based on pairs of classical codes which are mutually dual containing. Explicit constructions of such codes for large blocklengths and with good error correcting properties are not easy to find. In this paper we propose a construction of CSS codes which combines a classical code with a two-universal hash function. We show, using the results of Renner and Koenig, that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. While the bit-flip errors can be decoded as efficiently as the classical code used, the problem of efficiently decoding the phase-flip errors remains open.
avoid vocal errors: Topics by E-print Network
Error Avoiding Quantum Codes (Quantum Physics, arXiv). Summary: The existence is proved of a class of open quantum...
Rateless and rateless unequal error protection codes for Gaussian channels
Boyle, Kevin P. (Kevin Patrick)
2007-01-01T23:59:59.000Z
In this thesis we examine two different rateless codes and create a rateless unequal error protection code, all for the additive white Gaussian noise (AWGN) channel. The two rateless codes are examined through both analysis ...
An Approximation Algorithm for Constructing Error Detecting Prefix ...
2006-09-02T23:59:59.000Z
Sep 2, 2006 ... 2-bit Hamming prefix code problem. Our algorithm spends O(n log³ n) time to calculate a 2-bit Hamming prefix code with an additive error of at ...
Secured Pace Web Server with Collaboration and Error Logging Capabilities
Tao, Lixin
Secure Sockets Layer (SSL) using the Java Secure Socket Extension (JSSE) API; error logging (p. 8); Chapter 3, Secure Pace Web Server with SSL (p. 29); 3.1, Introduction to SSL
Transition state theory: Variational formulation, dynamical corrections, and error estimates
Van Den Eijnden, Eric
Transition state theory: Variational formulation, dynamical corrections, and error estimates. Received 18 February 2005; accepted 9 September 2005; published online 7 November 2005. ... which aim at computing dynamical corrections to the TST transition rate constant. The theory
YELLOW SEA ACOUSTIC UNCERTAINTY CAUSED BY HYDROGRAPHIC DATA ERROR
Chu, Peter C.
the littoral and blue waters. After a weapon platform has detected its targets, the sensors on torpedoes, bathymetry, bottom type, and sound speed profiles. Here, the effect of sound speed errors (i.e., hydrographic
Strontium-90 Error Discovered in Subcontract Laboratory Spreadsheet
D. D. Brown A. S. Nagel
1999-07-31T23:59:59.000Z
West Valley Demonstration Project health physicists and environment scientists discovered a series of errors in a subcontractor's spreadsheet being used to reduce data as part of their strontium-90 analytical process.
Sample covariance based estimation of Capon algorithm error probabilities
Richmond, Christ D.
The method of interval estimation (MIE) provides a strategy for mean squared error (MSE) prediction of algorithm performance at low signal-to-noise ratios (SNR) below estimation threshold where asymptotic predictions fail. ...
Sensitivity of OFDM Systems to Synchronization Errors and Spatial Diversity
Zhou, Yi
2012-02-14T23:59:59.000Z
jitter cause inter-carrier interference. The overall system performance in terms of symbol error rate is limited by the inter-carrier interference. For a reliable information reception, compensatory measures must be taken. The second part...
Diagnosing multiplicative error by lensing magnification of type Ia supernovae
Zhang, Pengjie
2015-01-01T23:59:59.000Z
Weak lensing causes spatially coherent fluctuations in flux of type Ia supernovae (SNe Ia). This lensing magnification allows for weak lensing measurement independent of cosmic shear. It is free of shape measurement errors associated with cosmic shear and can therefore be used to diagnose and calibrate multiplicative error. Although this lensing magnification is difficult to measure accurately in auto correlation, its cross correlation with cosmic shear and galaxy distribution in overlapping area can be measured to significantly higher accuracy. Therefore these cross correlations can put useful constraint on multiplicative error, and the obtained constraint is free of cosmic variance in weak lensing field. We present two methods implementing this idea and estimate their performances. We find that, with $\sim 1$ million SNe Ia that can be achieved by the proposed D2k survey with the LSST telescope (Zhan et al. 2008), multiplicative error of $\sim 0.5\%$ for source galaxies at $z_s\sim 1$ can be detected and la...
Model Error Correction for Linear Methods in PET Neuroreceptor Measurements
Renaut, Rosemary
Model Error Correction for Linear Methods in PET Neuroreceptor Measurements. Hongbin Guo (address: hguo1@asu.edu). Preprint submitted to NeuroImage, December 11, 2008. ... reached. A new
Universally Valid Error-Disturbance Relations in Continuous Measurements
Atsushi Nishizawa; Yanbei Chen
2015-05-31T23:59:59.000Z
In quantum physics, measurement error and disturbance were first naively thought to be simply constrained by the Heisenberg uncertainty relation. Later, more rigorous analysis showed that the error and disturbance satisfy more subtle inequalities. Several versions of universally valid error-disturbance relations (EDR) have already been obtained and experimentally verified in the regimes where naive applications of the Heisenberg uncertainty relation failed. However, these EDRs were formulated for discrete measurements. In this paper, we consider continuous measurement processes and obtain new EDR inequalities in the Fourier space: in terms of the power spectra of the system and probe variables. By applying our EDRs to a linear optomechanical system, we confirm that a tradeoff relation between error and disturbance leads to the existence of an optimal strength of the disturbance in a joint measurement. Interestingly, even in this optimal case, the inequality of the new EDR is not saturated, because the standard quantum limit enters the inequality twice.
TESLA-FEL 2009-07 Errors in Reconstruction of Difference Orbit
Contents: 1 Introduction; 2 Standard Least Squares Solution; 3 Error Emittance and Error Twiss Parameters. ... as the position of the reconstruction point changes, we will introduce error Twiss parameters and invariant error ... in the point of interest has to be achieved by matching error Twiss parameters in this point to the desired
A Taxonomy to Enable Error Recovery and Correction in Software Vilas Sridharan
Kaeli, David R.
A Taxonomy to Enable Error Recovery and Correction in Software. Vilas Sridharan, ECE Department. ... years, reliability research has largely used the following taxonomy of errors: Undetected Errors ... Corrected Errors (CE). While this taxonomy is suitable to characterize hardware error detection and correction
Test models for improving filtering with model errors through stochastic parameter estimation
Gershgorin, B. [Department of Mathematics and Center for Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, NY 10012 (United States); Harlim, J. [Department of Mathematics, North Carolina State University, NC 27695 (United States)], E-mail: jharlim@ncsu.edu; Majda, A.J. [Department of Mathematics and Center for Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, NY 10012 (United States)
2010-01-01T23:59:59.000Z
The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
Using doppler radar images to estimate aircraft navigational heading error
Doerry, Armin W. (Albuquerque, NM); Jordan, Jay D. (Albuquerque, NM); Kim, Theodore J. (Albuquerque, NM)
2012-07-03T23:59:59.000Z
A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.
Coding Techniques for Error Correction and Rewriting in Flash Memories
Mohammed, Shoeb Ahmed
2010-10-12T23:59:59.000Z
CODING TECHNIQUES FOR ERROR CORRECTION AND REWRITING IN FLASH MEMORIES. A Thesis by SHOEB AHMED MOHAMMED. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, August 2010. Major Subject: Electrical Engineering.
Fault-Tolerant Thresholds for Encoded Ancillae with Homogeneous Errors
Bryan Eastin
2006-11-14T23:59:59.000Z
I describe a procedure for calculating thresholds for quantum computation as a function of error model given the availability of ancillae prepared in logical states with independent, identically distributed errors. The thresholds are determined via a simple counting argument performed on a single qubit of an infinitely large CSS code. I give concrete examples of thresholds thus achievable for both Steane and Knill style fault-tolerant implementations and investigate their relation to threshold estimates in the literature.
Fitting Narrow Spectral Lines in High Energy Astrophysics Using Incompatible Gibbs Samplers
van Dyk, David
Fitting Narrow Spectral Lines in High Energy Astrophysics Using Incompatible Gibbs Samplers ... for the data degradation processes (van Dyk et al., 2001). Efficient X-ray spectral fitting ... Siemiginowska (Harvard-Smithsonian Center for Astrophysics, USA, aneta
Kimbrough, Steven Orla
Using Interactive Evolutionary Computation (IEC) with Validated Surrogate Fitness Functions. Interactive Evolutionary Computation (IEC) is a natural approach here, if practicable. The paper proposes development of Validated Surrogate Fitness (VSF) functions as a workable and generalizable form of IEC.
Testing Lack-of-Fit of Generalized Linear Models via Laplace Approximation
Glab, Daniel Laurence
2012-07-16T23:59:59.000Z
... the use of noninformative priors produces a new omnibus lack-of-fit statistic. We present a thorough numerical study of the proposed test and the various existing orthogonal series-based tests in the context of the logistic regression model. Simula... Contents: 1.4.1 The Lack-of-Fit Test, 14; 1.4.2 Smoothing-based Tests of Fit, 15; 1.5 Discussion, 16; II Tests of Fit for Logistic...
Texas at Arlington, University of
Different strategies have been reported for the quantification and isolation of CTCs including polycarbonate on the polycarbonate membranes af
Error-Induced Beam Degradation in Fermilab's Accelerators
Yoon, Phil S.; /Rochester U.
2007-08-01T23:59:59.000Z
In Part I, three independent models of Fermilab's Booster synchrotron are presented. All three models are constructed to investigate the effects of unavoidable machine errors on a proton beam under the influence of space-charge effects. The first is a stochastic noise model. Electric current fluctuations arising from power supplies are ubiquitous and unavoidable and are a source of instabilities in accelerators of all types. A new noise module for generating Ornstein-Uhlenbeck (O-U) stochastic noise is first created and incorporated into the existing Object-oriented Ring Beam Injection and Tracking (ORBIT-FNAL) package. After a preliminary model confirmed that the noise, particularly non-white noise, does affect beam quality, we proceeded to measure current ripples and common-mode voltages directly from all four Gradient Magnet Power Supplies (GMPS). The current signals are then Fourier-analyzed. Based upon the power spectra of the current signals, we tune the Ornstein-Uhlenbeck noise model. As a result, we are able to closely match the frequency spectra between the current measurements and the modeled O-U stochastic noise. The stochastic noise modeled upon measurements is applied to the Booster beam in the presence of full space-charge effects. This noise model, accompanied by a suite of beam diagnostic calculations, demonstrates that the stochastic noise, impinging upon the beam and coupled to the space-charge effects, can substantially enhance the beam degradation process throughout the injection period. The second model is a magnet misalignment model. It is the first model to utilize the latest beamline survey data to build a magnet-by-magnet misalignment model. Given as-found survey fiducial coordinates, all types of magnet alignment errors (station error, pitch, yaw, roll, twists, etc.) are calculated and implemented in the model.
We then follow up with statistical analysis to understand how each type of alignment error is currently distributed around the Booster ring. The ORBIT-FNAL simulations with space charge included show that rolled magnets, in particular, have substantial effects on the Booster beam. This survey-data-based misalignment model can predict how much improvement in machine performance can be achieved if prioritized or selected realignment work is done. In other words, this model can help us investigate different realignment scenarios for the Booster. In addition, by calculating average angular kicks from all misaligned magnets, we expect this misalignment model to serve as a guideline for resetting the strengths of corrector magnets. The third model for the Booster is a time-structured multi-turn injection model. Microbunch-injection scenarios with different time structures are explored in the presence of longitudinal space-charge forces. Due to the radio-frequency (RF) bucket mismatch between the Booster and the 400-MeV transfer line, RF-phase offsets can be parasitically introduced during the injection process. Using microbunch multi-turn injection, we carry out combined ESME-ORBIT simulations. This combined simulation allows us to investigate realistic charge-density distributions under full space-charge effects. The growth rates of transverse emittances turned out to be 20% in both planes. These microbunch-injection scenarios are also applicable to the future 8-GeV Superconducting Linac Proton Driver and the upgraded Main Injector at Fermilab. In Part II, the feasibility of the momentum-stacking method for proton beams is investigated. When the Run2 collider program at Fermilab comes to an end around the year 2009, the present antiproton source will become available for other purposes. One possible application is to convert the antiproton accumulator to a proton accumulator, so that the beam power from the Main Injector could be enhanced by a factor of four.
Through adiabatic processes and optimized parameters of synchrotron motion, we demonstrate with the aid of the ESME code that up to four proton batches can be stacked in the momentum acceptance available for the Accumulator ri
Extending ACNET communication types to include multicast semantics
Neswold, R.; King, C.; /Fermilab
2009-10-01T23:59:59.000Z
In Fermilab's accelerator control system, multicast communication was not properly incorporated into ACNET's transport layer or its programming API. We present some recent work that makes multicasts fit naturally in the ACNET network environment. We also show how these additions provide high availability for ACNET services.
Using Brain Weight to Predict Gestation in Mammals Bivariate Fit of Gestation By Brain Weight
Carriquiry, Alicia
Using Brain Weight to Predict Gestation in Mammals: bivariate fit of gestation by brain weight. Linear fit (all 50 mammals): Predicted Gestation = 85.248543 + 0.299867 × Brain Weight. Summary of fit: RSquare 0.372483, RSquare Adj 0.35941, Root Mean
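The reported fit can be applied directly. The sketch below evaluates the stated regression equation; the input brain weights are hypothetical examples, not the original 50-mammal data.

```python
# Applying the reported linear fit:
#   Predicted Gestation = 85.248543 + 0.299867 * Brain Weight  (R^2 = 0.372)
# Brain weights below are hypothetical inputs for illustration.
def predicted_gestation(brain_weight):
    return 85.248543 + 0.299867 * brain_weight

for w in (100, 500, 1000):
    print(f"brain weight {w} -> predicted gestation {predicted_gestation(w):.1f} days")
```

Note that with R² ≈ 0.37 the fit explains only about a third of the variance in gestation, so individual predictions carry wide scatter.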
Zanker, Johannes M.
Interactive Evolutionary Computation (IEC) has been applied to art and design problems; the fitness of an individual ... and their consequences for future IEC applications are discussed. ... IEC thus allows for true phenotypic fitness assessment, where the overall fitness
"Least Squares Fitting" Using Artificial Neural Networks YARON DANON and MARK J. EMBRECHTS
Danon, Yaron
"Least Squares Fitting" Using Artificial Neural Networks YARON DANON and MARK J. EMBRECHTS process changes the internal parameters (weights) of the network such that the neural net can represent a backpropagation fit to various continuous functions will be presented, showing properties of neural network fitted
Maddox, W. Todd
..., D.A., Maddox, W.T., & Markman, A.B. (in press). Regulatory fit effects in a choice task. Psychonomic ... Motivation influences choice in the exploration/exploitation dilemma: regulatory fit effects ... appealing options. In category learning, a regulatory fit has been shown to increase exploration
ASSEMBLY ANALYSIS OF INTERFERENCE FITS IN ELASTIC Kannan Subramanian Edward P. Morse
Paris-Sud XI, Université de
... In this approach, an ideal press-fit type interference assembly is considered initially and a solution methodology ... components, it may be desirable to have a small, but non-zero, interference between the components. ... of the press-fit assemblies. A commercially available finite element analysis package, ANSYS 11.0 [2], has been
A new and efficient error resilient entropy code for image and video compression
Min, Jungki
1999-01-01T23:59:59.000Z
Image and video compression standards such as JPEG, MPEG, H.263 are severely sensitive to errors. Among typical error propagation mechanisms in video compression schemes, loss of block synchronization causes the worst result. Even one bit error...
Error Monitoring: A Learning Strategy for Improving Academic Performance of LD Adolescents
Schumaker, Jean B.; Deshler, Donald D.; Nolan, Susan; Clark, Frances L.; Alley, Gordon R.; Warner, Michael M.
1981-04-01T23:59:59.000Z
Error monitoring, a learning strategy for detecting and correcting errors in written products, was taught to nine learning disabled adolescents. Students could detect and correct more errors after they received training ...
Assessing the Impact of Differential Genotyping Errors on Rare Variant Tests of Association
Fast, Shannon Marie
Genotyping errors are well-known to impact the power and type I error rate in single marker tests of association. Genotyping errors that happen according to the same process in cases and controls are known as non-differential ...
Clark, E.L.
1993-08-01T23:59:59.000Z
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, calibration Mach number and Reynolds number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for nine fundamental aerodynamic ratios, most of which relate free-stream test conditions (pressure, temperature, density or velocity) to a reference condition. Tables of the ratios, R, absolute sensitivity coefficients, ∂R/∂M∞, and relative sensitivity coefficients, (M∞/R)(∂R/∂M∞), are provided as functions of M∞.
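The first-order error model can be sketched numerically. The stagnation pressure ratio used below is an assumed example of one such aerodynamic ratio (isentropic flow, γ = 1.4), not necessarily one of the nine tabulated in the report.

```python
# Taylor-series error model for one aerodynamic ratio.
# Assumed example: isentropic stagnation pressure ratio
#   R(M) = (1 + 0.2 M^2)^3.5   (gamma = 1.4)
def R(M):
    return (1.0 + 0.2 * M * M) ** 3.5

def dR_dM(M):
    # analytic absolute sensitivity coefficient dR/dM
    return 1.4 * M * (1.0 + 0.2 * M * M) ** 2.5

M, dM = 2.0, 0.01                   # Mach number and its uncertainty (assumed)
rel_sens = (M / R(M)) * dR_dM(M)    # relative sensitivity (M/R)(dR/dM)
dR_over_R = rel_sens * (dM / M)     # first-order relative error in R
print(round(rel_sens, 4), round(dR_over_R, 5))  # -> 3.1111 0.01556
```

The relative sensitivity coefficient plays the role of the tabulated (M∞/R)(∂R/∂M∞): multiplying it by the fractional Mach-number error gives the fractional error in the ratio.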
SHEAN (Simplified Human Error Analysis code) and automated THERP
Wilson, J.R.
1993-06-01T23:59:59.000Z
One of the most widely used human error analysis tools is THERP (Technique for Human Error Rate Prediction). Unfortunately, this tool has disadvantages. The Nuclear Regulatory Commission, realizing these drawbacks, commissioned Dr. Swain, the author of THERP, to create a simpler, more consistent tool for deriving human error rates. That effort produced the Accident Sequence Evaluation Program Human Reliability Analysis Procedure (ASEP), which is more conservative than THERP, but a valuable screening tool. ASEP involves answering simple questions about the scenario in question, and then looking up the appropriate human error rate in the indicated table (THERP also uses look-up tables, but four times as many). The advantages of ASEP are that human factors expertise is not required, and the training to use the method is minimal. Although not originally envisioned by Dr. Swain, the ASEP approach actually begs to be computerized. That WINCO did, calling the code SHEAN, for Simplified Human Error ANalysis. The code was done in TURBO Basic for IBM or IBM-compatible MS-DOS, for fast execution. WINCO is now in the process of comparing this code against THERP for various scenarios. This report provides a discussion of SHEAN.
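The look-up-table pattern the abstract describes can be sketched as follows. The questions and the human error probability (HEP) values below are placeholders for illustration only, not actual ASEP or SHEAN data.

```python
# Illustrative sketch of the ASEP/SHEAN pattern: answer simple yes/no
# questions about the scenario, then read a screening human error
# probability (HEP) from a look-up table.
# Table values are PLACEHOLDERS, not actual ASEP data.
HEP_TABLE = {
    # (diagnosis_required, time_pressure): nominal screening HEP
    (False, False): 0.001,
    (False, True):  0.01,
    (True,  False): 0.01,
    (True,  True):  0.1,
}

def shean_style_hep(diagnosis_required, time_pressure):
    """Return a screening HEP from yes/no answers about the scenario."""
    return HEP_TABLE[(diagnosis_required, time_pressure)]

print(shean_style_hep(True, True))  # most conservative case -> 0.1
```

The appeal of the approach, as the abstract notes, is exactly this simplicity: no human-factors expertise is needed beyond answering the scenario questions, which is why it computerizes so naturally.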
Theoretical inputs and errors in the new hadronic currents in TAUOLA
Roig, P.; Nugent, I. M.; Przedzinski, T.; Shekhovtsova, O.; Was, Z. [Grup de Fisica Teorica, Institut de Fisica d'Altes Energies, Universitat Autonoma de Barcelona, E-08193 Bellaterra, Barcelona (Spain); RWTH Aachen University, III. Physikalisches Institut B, Aachen (Germany); Faculty of Physics, Astronomy and Applied Computer Science, Jagellonian University, Reymonta 4, 30-059 Cracow, Poland and Institute of Nuclear Physics, PAN, Cracow, ul. Radzikowskiego 152 (Poland); IFIC, Universitat de Valencia-CSIC, Apt. Correus 22085, E-46071, Valencia (Spain); CERN PH-TH, CH-1211 Geneva 23, Switzerland and Institute of Nuclear Physics, PAN, Cracow, ul. Radzikowskiego 152 (Poland)
2012-10-23T23:59:59.000Z
The new hadronic currents implemented in the TAUOLA library are obtained in the unified and consistent framework of Resonance Chiral Theory: a Lagrangian approach in which the resonances exchanged in the hadronic tau decays are active degrees of freedom included in a way that reproduces the low-energy results of Chiral Perturbation Theory. The short-distance QCD constraints on the imaginary part of the spin-one correlators yield relations among the couplings that render the theory predictive. In this communication, the derivation of the two- and three-meson form factors is sketched. One of the criticisms of our framework is that the error may be as large as 1/3, since it is a realization of the large-N_C limit of QCD in a meson theory. A number of arguments are given which disfavor that claim, pointing to smaller errors, which would explain the phenomenological success of our description of these decays. Finally, other minor sources of error and current improvements of the code are discussed.
Updated User's Guide for Sammy: Multilevel R-Matrix Fits to Neutron Data Using Bayes' Equations
Larson, Nancy M [ORNL
2008-10-01T23:59:59.000Z
In 1980 the multilevel multichannel R-matrix code SAMMY was released for use in analysis of neutron-induced cross section data at the Oak Ridge Electron Linear Accelerator. Since that time, SAMMY has evolved to the point where it is now in use around the world for analysis of many different types of data. SAMMY is not limited to incident neutrons but can also be used for incident protons, alpha particles, or other charged particles; likewise, Coulomb exit channels can be included. Corrections for a wide variety of experimental conditions are available in the code: Doppler and resolution broadening, multiple-scattering corrections for capture or reaction yields, normalizations and backgrounds, to name but a few. The fitting procedure is Bayes' method, and data and parameter covariance matrices are properly treated within the code. Pre- and post-processing capabilities are also available, including (but not limited to) connections with the Evaluated Nuclear Data Files. Though originally designed for use in the resolved resonance region, SAMMY also includes a treatment for data analysis in the unresolved resonance region.
Development of an integrated system for estimating human error probabilities
Auflick, J.L.; Hahn, H.A.; Morzinski, J.A.
1998-12-01T23:59:59.000Z
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project had as its main objective the development of a Human Reliability Analysis (HRA), knowledge-based expert system that would provide probabilistic estimates for potential human errors within various risk assessments, safety analysis reports, and hazard assessments. HRA identifies where human errors are most likely, estimates the error rate for individual tasks, and highlights the most beneficial areas for system improvements. This project accomplished three major tasks. First, several prominent HRA techniques and associated databases were collected and translated into an electronic format. Next, the project started a knowledge engineering phase where the expertise, i.e., the procedural rules and data, were extracted from those techniques and compiled into various modules. Finally, these modules, rules, and data were combined into a nearly complete HRA expert system.
Non-Gaussian numerical errors versus mass hierarchy
Y. Meurice; M. B. Oktay
2000-05-12T23:59:59.000Z
We probe the numerical errors made in renormalization group calculations by varying slightly the rescaling factor of the fields and rescaling back in order to get the same (if there were no round-off errors) zero momentum 2-point function (magnetic susceptibility). The actual calculations were performed with Dyson's hierarchical model and a simplified version of it. We compare the distributions of numerical values obtained from a large sample of rescaling factors with the (Gaussian by design) distribution of a random number generator and find significant departures from the Gaussian behavior. In addition, the average value differs (robustly) from the exact answer by a quantity which is of the same order as the standard deviation. We provide a simple model in which the errors made at shorter distance have a larger weight than those made at larger distance. This model explains in part the non-Gaussian features and why the central-limit theorem does not apply.
Reducing Collective Quantum State Rotation Errors with Reversible Dephasing
Kevin C. Cox; Matthew A. Norcia; Joshua M. Weiner; Justin G. Bohnet; James K. Thompson
2014-07-16T23:59:59.000Z
We demonstrate that reversible dephasing via inhomogeneous broadening can greatly reduce collective quantum state rotation errors, and observe the suppression of rotation errors by more than 21 dB in the context of collective population measurements of the spin states of an ensemble of $2.1 \times 10^5$ laser cooled and trapped $^{87}$Rb atoms. The large reduction in rotation noise enables direct resolution of spin state populations 13(1) dB below the fundamental quantum projection noise limit. Further, the spin state measurement projects the system into an entangled state with 9.5(5) dB of directly observed spectroscopic enhancement (squeezing) relative to the standard quantum limit, whereas no enhancement would have been obtained without the suppression of rotation errors.
Meta learning of bounds on the Bayes classifier error
Moon, Kevin R; Hero, Alfred O
2015-01-01T23:59:59.000Z
Meta learning uses information from base learners (e.g. classifiers or estimators) as well as information about the learning problem to improve upon the performance of a single base learner. For example, the Bayes error rate of a given feature space, if known, can be used to aid in choosing a classifier, as well as in feature selection and model selection for the base classifiers and the meta classifier. Recent work in the field of f-divergence functional estimation has led to the development of simple and rapidly converging estimators that can be used to estimate various bounds on the Bayes error. We estimate multiple bounds on the Bayes error using an estimator that applies meta learning to slowly converging plug-in estimators to obtain the parametric convergence rate. We compare the estimated bounds empirically on simulated data and then estimate the tighter bounds on features extracted from an image patch analysis of sunspot continuum and magnetogram images.
Henry L. Haselgrove; Peter P. Rohde
2007-07-03T23:59:59.000Z
In a recent study [Rohde et al., quant-ph/0603130 (2006)] of several quantum error correcting protocols designed for tolerance against qubit loss, it was shown that these protocols have the undesirable effect of magnifying the effects of depolarization noise. This raises the question of which general properties of quantum error-correcting codes might explain such an apparent trade-off between tolerance to located and unlocated error types. We extend the counting argument behind the well-known quantum Hamming bound to derive a bound on the weights of combinations of located and unlocated errors which are correctable by nondegenerate quantum codes. Numerical results show that the bound gives an excellent prediction to which combinations of unlocated and located errors can be corrected with high probability by certain large degenerate codes. The numerical results are explained partly by showing that the generalized bound, like the original, is closely connected to the information-theoretic quantity the quantum coherent information. However, we also show that as a measure of the exact performance of quantum codes, our generalized Hamming bound is provably far from tight.
Hard Data on Soft Errors: A Large-Scale Assessment of Real-World Error Rates in GPGPU
Haque, Imran S
2009-01-01T23:59:59.000Z
Graphics processing units (GPUs) are gaining widespread use in computational chemistry and other scientific simulation contexts because of their huge performance advantages relative to conventional CPUs. However, the reliability of GPUs in error-intolerant applications is largely unproven. In particular, a lack of error checking and correcting (ECC) capability in the memory subsystems of graphics cards has been cited as a hindrance to the acceptance of GPUs as high-performance coprocessors, but the impact of this design has not been previously quantified. In this article we present MemtestG80, our software for assessing memory error rates on NVIDIA G80 and GT200-architecture-based graphics cards. Furthermore, we present the results of a large-scale assessment of GPU error rate, conducted by running MemtestG80 on over 20,000 hosts on the Folding@home distributed computing network. Our control experiments on consumer-grade and dedicated-GPGPU hardware in a controlled environment found no errors. However, our su...
Peak, Derek
Are you getting an error message in UniFi Plus? (Suggestion: check the auto-hint line!) In most cases, UniFi Plus does not prominently display error messages; instead, the error message and processing messages ... Keyboard shortcuts ... instructions for accessing other blocks, windows or forms from
Error estimates and specification parameters for functional renormalization
Schnoerr, David [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]; Boettcher, Igor, E-mail: I.Boettcher@thphys.uni-heidelberg.de [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]; Pawlowski, Jan M. [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung mbH, D-64291 Darmstadt (Germany)]; Wetterich, Christof [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]
2013-07-15T23:59:59.000Z
We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximate solutions obtained by means of truncations depend not only on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation, the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency-independent cutoff function.
JLab SRF Cavity Fabrication Errors, Consequences and Lessons Learned
Frank Marhauser
2011-09-01T23:59:59.000Z
Today, elliptical superconducting RF (SRF) cavities are preferably made from deep-drawn niobium sheets, as pursued at Jefferson Laboratory (JLab). The fabrication of a cavity incorporates various cavity cell machining, trimming and electron beam welding (EBW) steps, as well as surface chemistry, that add to forming errors, creating geometrical deviations of the cavity shape from its design. An analysis of in-house built cavities over the last years revealed significant errors in cavity production. Past fabrication flaws are described, and lessons learned were applied successfully to the most recent in-house series production of multi-cell cavities.
Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds
Guth, Larry, E-mail: lguth@math.mit.edu [Department of Mathematics, MIT, Cambridge, Massachusetts 02139 (United States); Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il [Institute of Mathematics, Hebrew University, Jerusalem 91904 (Israel)
2014-08-15T23:59:59.000Z
Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n{sup ε}. Their rate is evaluated via Euler characteristic arguments and their distance using Z{sub 2}-systolic geometry. This construction answers a question of Zémor ["On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction," in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259–273], who asked whether homological codes with such parameters could exist at all.
Full protection of superconducting qubit systems from coupling errors
M. J. Storcz; J. Vala; K. R. Brown; J. Kempe; F. K. Wilhelm; K. B. Whaley
2005-08-09T23:59:59.000Z
Solid state qubits realized in superconducting circuits are potentially extremely scalable. However, strong decoherence may be transferred to the qubits by various elements of the circuits that couple individual qubits, particularly when coupling is implemented over long distances. We propose here an encoding that provides full protection against errors originating from these coupling elements, for a chain of superconducting qubits with a nearest neighbor anisotropic XY-interaction. The encoding is also seen to provide partial protection against errors deriving from general electronic noise.
Laser Phase Errors in Seeded Free Electron Lasers
Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC
2012-04-17T23:59:59.000Z
Harmonic seeding of free electron lasers has attracted significant attention as a method for producing transform-limited pulses in the soft x-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality and impede production of transform-limited pulses. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.
Correctable noise of Quantum Error Correcting Codes under adaptive concatenation
Jesse Fern
2008-02-27T23:59:59.000Z
We examine the transformation of noise under a quantum error correcting code (QECC) concatenated repeatedly with itself, by analyzing the effects of a quantum channel after each level of concatenation using recovery operators that are optimally adapted to use error syndrome information from the previous levels of the code. We use the Shannon entropy of these channels to estimate the thresholds of correctable noise for QECCs and find considerable improvements under this adaptive concatenation. Similar methods could be used to increase quantum fault tolerant thresholds.
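A single application of the entropy criterion can be sketched without any concatenation: for a depolarizing Pauli channel, the hashing bound deems noise correctable at a positive rate while the channel's Shannon entropy stays below one bit, giving a threshold near p ≈ 0.19. A minimal illustrative sketch (the function and variable names are invented here, and this is not the adaptive recovery construction of the paper):

```python
import math

def pauli_entropy(p):
    """Shannon entropy (bits) of a depolarizing Pauli channel:
    identity with probability 1-p, and X, Y, Z each with probability p/3."""
    probs = [1.0 - p, p / 3.0, p / 3.0, p / 3.0]
    return -sum(q * math.log2(q) for q in probs if q > 0)

# Hashing bound: noise is correctable at positive rate while H < 1 bit.
# Bisect for the threshold p* where H(p*) = 1 (H is increasing here).
lo, hi = 0.01, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if pauli_entropy(mid) < 1.0:
        lo = mid
    else:
        hi = mid
p_threshold = 0.5 * (lo + hi)   # ~0.1893 for depolarizing noise
```

The resulting p_threshold reproduces the well-known hashing-bound value of about 18.9% for depolarizing noise; adaptive concatenation, as studied in the paper, improves on such channel-level estimates.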
Bozkaya, Uğur, E-mail: ugur.bozkaya@atauni.edu.tr [Department of Chemistry, Atatürk University, Erzurum 25240, Turkey and Department of Chemistry, Middle East Technical University, Ankara 06800 (Turkey)]
2014-09-28T23:59:59.000Z
General analytic gradient expressions (with the frozen-core approximation) are presented for density-fitted post-HF methods. An efficient implementation of frozen-core analytic gradients for second-order Møller–Plesset perturbation theory (MP2) with the density-fitting (DF) approximation (applied to both reference and correlation energies), denoted DF-MP2, is reported. The DF-MP2 method is applied to a set of alkanes, conjugated dienes, and noncovalent interaction complexes to compare the computational cost of single-point analytic gradients with MP2 with the resolution-of-the-identity approach (RI-MP2) [F. Weigend and M. Häser, Theor. Chem. Acc. 97, 331 (1997); R. A. Distasio, R. P. Steele, Y. M. Rhee, Y. Shao, and M. Head-Gordon, J. Comput. Chem. 28, 839 (2007)]. In the RI-MP2 method, the DF approach is used only for the correlation energy. Our results demonstrate that the DF-MP2 method substantially accelerates the RI-MP2 method for analytic gradient computations due to the reduced input/output (I/O) time. Because in the DF-MP2 method the DF approach is used for both reference and correlation energies, the storage of 4-index electron repulsion integrals (ERIs) is avoided; 3-index ERI tensors are employed instead. Further, as in the case of the integrals, our gradient equation completely avoids construction or storage of the 4-index two-particle density matrix (TPDM); instead we use 2- and 3-index TPDMs. Hence, the I/O bottleneck of a gradient computation is significantly reduced. Therefore, the costs of the generalized-Fock matrix (GFM), the TPDM, the solution of the Z-vector equations, the back-transformation of the TPDM, and the integral derivatives are substantially reduced when the DF approach is used for the entire energy expression. Further application results show that the DF approach introduces negligible errors for closed-shell reaction energies and equilibrium bond lengths.
Global Fits of the Minimal Universal Extra Dimensions Scenario
Bertone, Gianfranco; /Zurich U. /Paris, Inst. Astrophys.; Kong, Kyoungchul; /SLAC /Kansas U.; de Austri, Roberto Ruiz; /Valencia U., IFIC; Trotta, Roberto; /Imperial Coll., London
2012-06-22T23:59:59.000Z
In theories with Universal Extra-Dimensions (UED), the {gamma}{sub 1} particle, first excited state of the hypercharge gauge boson, provides an excellent Dark Matter (DM) candidate. Here we use a modified version of the SuperBayeS code to perform a Bayesian analysis of the minimal UED scenario, in order to assess its detectability at accelerators and with DM experiments. We derive in particular the most probable range of mass and scattering cross sections off nucleons, taking into account cosmological and electroweak precision constraints. The consequences for the detectability of the {gamma}{sub 1} with direct and indirect experiments are dramatic. The spin-independent cross section probability distribution peaks at {approx} 10{sup -11} pb, i.e. below the sensitivity of ton-scale experiments. The spin-dependent cross-section drives the predicted neutrino flux from the center of the Sun below the reach of present and upcoming experiments. The only strategy that remains open appears to be direct detection with ton-scale experiments sensitive to spin-dependent cross-sections. On the other hand, the LHC with 1 fb{sup -1} of data should be able to probe the current best-fit UED parameters.
Chatzopoulos, E.; Wheeler, J. Craig; Vinko, J. [Department of Astronomy, University of Texas at Austin, Austin, TX (United States); Horvath, Z. L.; Nagy, A., E-mail: manolis@astro.as.utexas.edu [Department of Optics and Quantum Electronics, University of Szeged (Hungary)
2013-08-10T23:59:59.000Z
We present fits of generalized semi-analytic supernova (SN) light curve (LC) models for a variety of power inputs including {sup 56}Ni and {sup 56}Co radioactive decay, magnetar spin-down, and forward and reverse shock heating due to supernova ejecta-circumstellar matter (CSM) interaction. We apply our models to the observed LCs of the H-rich superluminous supernovae (SLSN-II) SN 2006gy, SN 2006tf, SN 2008am, SN 2008es, CSS100217, the H-poor SLSN-I SN 2005ap, SCP06F6, SN 2007bi, SN 2010gx, and SN 2010kd, as well as to the interacting SN 2008iy and PTF 09uj. Our goal is to determine the dominant mechanism that powers the LCs of these extraordinary events and the physical conditions involved in each case. We also present a comparison of our semi-analytical results with recent results from numerical radiation hydrodynamics calculations in the particular case of SN 2006gy in order to explore the strengths and weaknesses of our models. We find that CS shock heating produced by ejecta-CSM interaction provides a better fit to the LCs of most of the events we examine. We discuss the possibility that collision of supernova ejecta with hydrogen-deficient CSM accounts for some of the hydrogen-deficient SLSNe (SLSN-I) and may be a plausible explanation for the explosion mechanism of SN 2007bi, the pair-instability supernova candidate. We characterize and discuss issues of parameter degeneracy.
Heat Pump Water Heaters and American Homes: A Good Fit?
Franco, Victor
2011-01-01T23:59:59.000Z
Central Air Conditioners and Heat Pumps Including. May, pump technology to extract heat from the surrounding air (air flow requirements of HPWHs increase installation costs. Introduction A heat pump
Soft Error Modeling and Protection for Sequential Elements Hossein Asadi and Mehdi B. Tahoori
on system-level soft error rate. The number of clock cycles required for an error in a bistable to be propagated to system outputs is used to measure the vulnerability of bistables to soft errors. Soft errors become the main reliability concern during the lifetime operation of digital systems.
Low-Cost Hardening of Image Processing Applications Against Soft Errors Ilia Polian1,2
Polian, Ilia
, and their hardening against soft errors becomes an issue. We propose a methodology to identify soft errors as uncritical based on their impact on the system's functionality. We call a soft error uncritical if its impact is imperceptible to the human user of the system. We focus on soft errors in the motion estimation subsystem
Distinguishing congestion and error losses: an ECN/ELN based scheme
Kamakshisundaram, Raguram
2001-01-01T23:59:59.000Z
error rates, like wireless links, packets are lost more due to error than due to congestion. But TCP does not differentiate between error and congestion losses and hence reduces the sending rate for losses due to error as well, which unnecessarily reduces...
Designing Automation to Reduce Operator Errors Nancy G. Leveson
Leveson, Nancy
Designing Automation to Reduce Operator Errors Nancy G. Leveson Computer Science and Engineering University of Washington Everett Palmer NASA Ames Research Center Introduction Advanced automation has been of mode-related problems [SW95]. After studying accidents and incidents in the new, highly automated
Measurement Errors in Visual Servoing V. Kyrki ,1
Kragic, Danica
feedback for closed loop control of a robot motion termed visual servoing has received a significant amount robot trajectory and its uncertainty. The procedures of camera calibration have improved enormously over on the modeling of an error function and thus has a major effect on the robot's trajectory. On the other hand
Effects of errors in the solar radius on helioseismic inferences
Sarbani Basu
1997-12-09T23:59:59.000Z
Frequencies of intermediate-degree f-modes of the Sun seem to indicate that the solar radius is smaller than what is normally used in constructing solar models. We investigate the possible consequences of an error in radius on results for solar structure obtained using helioseismic inversions. It is shown that solar sound speed will be overestimated if oscillation frequencies are inverted using reference models with a larger radius. Using solar models with radius of 695.78 Mm and new data sets, the base of the solar convection zone is estimated to be at radial distance of $0.7135\\pm 0.0005$ of the solar radius. The helium abundance in the convection zone as determined using models with OPAL equation of state is $0.248\\pm 0.001$, where the errors reflect the estimated systematic errors in the calculation, the statistical errors being much smaller. Assuming that the OPAL opacities used in the construction of the solar models are correct, the surface $Z/X$ is estimated to be $0.0245\\pm 0.0006$.
Error field and magnetic diagnostic modeling for W7-X
Lazerson, Sam A. [PPPL; Gates, David A. [PPPL; NEILSON, GEORGE H. [PPPL; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.
2014-07-01T23:59:59.000Z
The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady state (30 minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign as bootstrap current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.
Two infinite families of nonadditive quantum error-correcting codes
Sixia Yu; Qing Chen; C. H. Oh
2009-01-14T23:59:59.000Z
We construct explicitly two infinite families of genuine nonadditive 1-error correcting quantum codes and prove that their coding subspaces are 50% larger than those of the optimal stabilizer codes of the same parameters via the linear programming bound. All these nonadditive codes can be characterized by a stabilizer-like structure and thus their encoding circuits can be designed in a straightforward manner.
Threshold error rates for the toric and surface codes
D. S. Wang; A. G. Fowler; A. M. Stephens; L. C. L. Hollenberg
2009-05-05T23:59:59.000Z
The surface code scheme for quantum computation features a 2d array of nearest-neighbor coupled qubits yet claims a threshold error rate approaching 1% (NJoP 9:199, 2007). This result was obtained for the toric code, from which the surface code is derived, and surpasses all other known codes restricted to 2d nearest-neighbor architectures by several orders of magnitude. We describe in detail an error correction procedure for the toric and surface codes, which is based on polynomial-time graph matching techniques and is efficiently implementable as the classical feed-forward processing step in a real quantum computer. By direct simulation of this error correction scheme, we determine the threshold error rates for the two codes (differing only in their boundary conditions) for both ideal and non-ideal syndrome extraction scenarios. We verify that the toric code has an asymptotic threshold of p = 15.5% under ideal syndrome extraction, and p = 7.8 × 10^-3 for the non-ideal case, in agreement with prior work. Simulations of the surface code indicate that the threshold is close to that of the toric code.
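The kind of threshold estimate obtained by direct simulation can be illustrated with a far simpler code than the toric or surface code: a distance-d repetition code under independent bit-flip noise, decoded by majority vote. This toy sketch (not the polynomial-time matching decoder described in the paper) shows the defining below-threshold behavior, logical errors shrinking as distance grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def logical_error_rate(p, distance, trials=50000):
    """Monte Carlo estimate of the logical error rate of a distance-d
    repetition code under independent bit-flip noise of probability p,
    decoded by majority vote."""
    flips = rng.random((trials, distance)) < p
    # A logical error occurs when more than half of the bits flip.
    return np.mean(flips.sum(axis=1) > distance // 2)

# Below threshold (p = 0.1 < 0.5 for this toy code), increasing the
# code distance suppresses the logical error rate.
rates = [logical_error_rate(0.1, d) for d in (3, 7, 11)]
```

Real threshold studies, as in the paper, repeat such simulations over a grid of physical error rates and distances and locate the crossing point of the logical-error curves.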
RESIDUAL TYPE A POSTERIORI ERROR ESTIMATES FOR ELLIPTIC OBSTACLE PROBLEMS
Nochetto, Ricardo H.
to double obstacle problems are briefly discussed. Key words: a posteriori error estimates, residual. Science Foundation under grant No. 19771080 and China National Key Project ``Large Scale Scientific''. The obstacle ψ satisfies ψ ≤ 0 on ∂Ω and K is the convex set of admissible displacements K := {v ∈ H¹₀(Ω) : v
Multilayer Perceptron Error Surfaces: Visualization, Structure and Modelling
Gallagher, Marcus
. This is commonly formulated as a multivariate non-linear optimization problem over a very high-dimensional space of analysis are not well-suited to this problem. Visualizing and describing the error surface are also three related methods. Firstly, Principal Component Analysis (PCA) is proposed as a method
Analysis of possible systematic errors in the Oslo method
A. C. Larsen; M. Guttormsen; M. Krticka; E. Betak; A. Bürger; A. Görgen; H. T. Nyhus; J. Rekstad; A. Schiller; S. Siem; H. K. Toft; G. M. Tveten; A. V. Voinov; K. Wikan
2012-11-27T23:59:59.000Z
In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of level density and gamma-ray transmission coefficient from a set of particle-gamma coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.
Flexible Error Protection for Energy Efficient Reliable Architectures Timothy Miller
Xuan, Dong
Flexible Error Protection for Energy Efficient Reliable Architectures Timothy Miller , Nagarjuna and Computer Engineering The Ohio State University {millerti,teodores}@cse.ohio-state.edu, nagarjun. To deal with these com- peting trends, energy-efficient solutions are needed to deal with reli- ability
Fast Error Estimates For Indirect Measurements: Applications To Pavement Engineering
Kreinovich, Vladik
Fast Error Estimates For Indirect Measurements: Applications To Pavement Engineering Carlos that is difficult to measure directly (e.g., lifetime of a pavement, efficiency of an engine, etc). To estimate y computation time. As an example of this methodology, we give pavement lifetime estimates. This work
A Method for Treating Discretization Error in Nondeterministic Analysis
Alvin, K.F.
1999-01-27T23:59:59.000Z
A response surface methodology-based technique is presented for treating discretization error in non-deterministic analysis. The response surface, or metamodel, is estimated from computer experiments which vary both uncertain physical parameters and the fidelity of the computational mesh. The resultant metamodel is then used to propagate the variabilities in the continuous input parameters, while the mesh size is taken to zero, its asymptotic limit. With respect to mesh size, the metamodel is equivalent to Richardson extrapolation, in which solutions on coarser and finer meshes are used to estimate discretization error. The method is demonstrated on a one dimensional prismatic bar, in which uncertainty in the third vibration frequency is estimated by propagating variations in material modulus, density, and bar length. The results demonstrate the efficiency of the method for combining non-deterministic analysis with error estimation to obtain estimates of total simulation uncertainty. The results also show the relative sensitivity of failure estimates to solution bias errors in a reliability analysis, particularly when the physical variability of the system is low.
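Holding the uncertain physical parameters fixed, the mesh-size limit described above reduces to classical Richardson extrapolation. A minimal sketch, assuming a known refinement ratio r and asymptotic convergence order p (the model problem below is invented for illustration):

```python
def richardson_extrapolate(u_coarse, u_fine, r=2.0, p=2.0):
    """Estimate the mesh-converged solution and the discretization error
    on the fine mesh, given solutions on two meshes with refinement
    ratio r and asymptotic convergence order p."""
    u_limit = u_fine + (u_fine - u_coarse) / (r**p - 1.0)  # h -> 0 limit
    return u_limit, u_limit - u_fine

# Toy model problem: u(h) = 1 + c*h^2, whose exact h -> 0 limit is 1.
c = 0.3
u_coarse = 1.0 + c * 0.10**2   # solution on mesh size h = 0.10
u_fine   = 1.0 + c * 0.05**2   # solution on mesh size h = 0.05
u_limit, err_fine = richardson_extrapolate(u_coarse, u_fine)
```

In the response-surface setting of the abstract, the same limit is taken along the mesh-size axis of the metamodel while the physical parameters remain random.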
Considering Workload Input Variations in Error Coverage Estimation
Karlsson, Johan
different parts of the workload code to be executed a different number of times. By using the results from in the workload input when estimating error detection coverage using fault injection are investigated. Results sequence based on results from fault injection experiments with another input sequence is presented
Data aware, Low cost Error correction for Wireless Sensor Networks
California at San Diego, University of
Data aware, Low cost Error correction for Wireless Sensor Networks Shoubhik Mukhopadhyay, Debashis challenges in adoption and deployment of wireless networked sensing applications is ensuring reliable sensor of such applications. A wireless sensor network is inherently vulnerable to different sources of unreliability
Error Minimization Methods in Biproportional Apportionment Federica Ricca Andrea Scozzari
Serafini, Paolo
as an alternative to the classical axiomatic approach introduced by Balinski and Demange in 1989. We provide and in the statistical literature. A milestone theoretical setting was given by Balinski and Demange in 1989 [5, 6 a class of methods for Biproportional Apportionment characterized by an "error minimization" approach
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR
Sambridge, Malcolm
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR ANALYSIS USING for the different solutions didn't even overlap. Introduction A discrimination and classification strategy ambiguity and possible remanent magnetization the recovered dipole moment is compared to a library
Error Exponent for Discrete Memoryless Multiple-Access Channels
Anastasopoulos, Achilleas
Error Exponent for Discrete Memoryless Multiple-Access Channels by Ali Nazari A dissertation Bayraktar Associate Professor Jussi Keppo #12;c Ali Nazari 2011 All Rights Reserved #12;To my parents. ii Becky Turanski, Nancy Goings, Michele Feldkamp, Ann Pace, Karen Liska and Beth Lawson for efficiently
Time reversal in thermoacoustic tomography - an error estimate
Hristova, Yulia
2008-01-01T23:59:59.000Z
The time reversal method in thermoacoustic tomography is used for approximating the initial pressure inside a biological object using measurements of the pressure wave made outside the object. This article presents error estimates for the time reversal method in the cases of variable, non-trapping sound speeds.
IPASS: Error Tolerant NMR Backbone Resonance Assignment by Linear Programming
Waterloo, University of
IPASS: Error Tolerant NMR Backbone Resonance Assignment by Linear Programming Babak Alipanahi1 automatically picked peaks. IPASS is proposed as a novel integer linear programming (ILP) based assignment method. Although a variety of assignment approaches have been developed, none works well on noisy
Research Article Preschool Speech Error Patterns Predict Articulation
-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological Outcomes in Children With Histories of Speech Sound Disorders Jonathan L. Preston,a,b Margaret Hull disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method
Edinburgh Research Explorer Prevalence and Causes of Prescribing Errors
Hall, Christopher
of Prescribing Errors: The PRescribing Outcomes for Trainee Doctors Engaged in Clinical Training (PROTECT) Study. Cristín Ryan1, Sarah Kingdom, 7 Health Psychology, University of Aberdeen, Aberdeen, United Kingdom, 8 Clinical Pharmacology
Verification of unfold error estimates in the unfold operator code
Fehl, D.L.; Biggs, F. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)] [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)
1997-01-01T23:59:59.000Z
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
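The comparison described above can be mimicked on a toy linear unfold: propagate a 5% Gaussian data imprecision analytically through the error (covariance) matrix and, independently, by Monte Carlo over perturbed data sets; the two estimates should agree within sampling resolution. A hedged sketch, with a 3x3 response matrix invented for illustration (UFO itself uses measured response functions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-channel linear unfold: data = R @ spectrum + noise.
R = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.5],
              [0.1, 0.3, 1.0]])     # overlapping response functions
s_true = np.array([10.0, 5.0, 2.0])
d0 = R @ s_true
sigma = 0.05 * d0                   # 5% (one standard deviation) imprecision

# Built-in ("error matrix") estimate: cov(s) = R^-1 C_d (R^-1)^T.
Rinv = np.linalg.inv(R)
err_matrix = np.sqrt(np.diag(Rinv @ np.diag(sigma**2) @ Rinv.T))

# Monte Carlo estimate: unfold many Gaussian-perturbed data sets.
n_sets = 2000
unfolds = np.array([Rinv @ (d0 + rng.normal(0.0, sigma))
                    for _ in range(n_sets)])
err_mc = unfolds.std(axis=0)
```

For a linear, well-determined unfold the two estimates coincide up to Monte Carlo noise; the interesting cases in the abstract are the underdetermined ones, where only the Monte Carlo route remains available.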
Achievable Error Exponents for the Private Fingerprinting Game
Merhav, Neri
Achievable Error Exponents for the Private Fingerprinting Game Anelia Somekh-Baruch and Neri Merhav a forgery of the data while aiming at erasing the fingerprints in order not to be detected. Their action have presented and analyzed a game-theoretic model of private2 fingerprinting systems in the presence
RESOLVE Upgrades for on Line Lattice Error Analysis
Lee, M.; Corbett, J.; White, G.; /SLAC; Zambre, Y.; /Unlisted
2011-08-25T23:59:59.000Z
We have increased the speed and versatility of the orbit analysis process by adding a command file, or 'script' language, to RESOLVE. This command file feature enables us to automate data analysis procedures to detect lattice errors. We describe the RESOLVE command file and present examples of practical applications.
Stereoscopic Light Stripe Scanning: Interference Rejection, Error Minimization and Calibration
This paper addresses the problem of rejecting interference due to secondary specular reflections, cross structure, acquisition delay, lack of error recovery, and incorrect modelling of measurement noise. We cause secondary reflections, edges and textures may have a stripe-like appearance, and cross-talk can
Error Control Based Model Reduction for Parameter Optimization of Elliptic
of technical devices that rely on multiscale processes, such as fuel cells or batteries. As the solution Error Control Based Model Reduction for Parameter Optimization of Elliptic Homogenization Problems optimization of elliptic multiscale problems with macroscopic optimization functionals and microscopic material
Development of an Expert System for Classification of Medical Errors
Kopec, Danny
in the United States. There has been considerable speculation that these figures are either overestimated published by the Institute of Medicine (IOM) indicated that between 44,000 and 98,000 unnecessary deaths per year occur in hospitals. In the IOM report, what is of importance is that the number of deaths caused by such errors
Odometry Error Covariance Estimation for Two Wheel Robot Vehicles
Robotics Research Centre Department of Electrical and Computer Systems Engineering Monash University Technical Report MECSE-95-1 1995 ABSTRACT This technical report develops a simple statistical error model of the robot. Other paths can be composed of short segments of constant curvature arcs without great loss
Quantum computing with nearest neighbor interactions and error rates over 1%
David S. Wang; Austin G. Fowler; Lloyd C. L. Hollenberg
2010-09-20T23:59:59.000Z
Large-scale quantum computation will only be achieved if experimentally implementable quantum error correction procedures are devised that can tolerate experimentally achievable error rates. We describe a quantum error correction procedure that requires only a 2-D square lattice of qubits that can interact with their nearest neighbors, yet can tolerate quantum gate error rates over 1%. The precise maximum tolerable error rate depends on the error model, and we calculate values in the range 1.1--1.4% for various physically reasonable models. Even the lowest value represents the highest threshold error rate calculated to date in a geometrically constrained setting, and a 50% improvement over the previous record.
Nuclear Arms Control R&D Consortium includes Los Alamos
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Nuclear Arms Control R&D Consortium includes Los Alamos A consortium led by the University of Michigan that includes LANL as...
The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications
Foo, Jasmine; Wan Xiaoliang [Division of Applied Mathematics, Brown University, 182 George Street, Box F, Providence, RI 02912 (United States); Karniadakis, George Em [Division of Applied Mathematics, Brown University, 182 George Street, Box F, Providence, RI 02912 (United States)], E-mail: gk@dam.brown.edu
2008-11-20T23:59:59.000Z
Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L{sup 2} error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
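On a single element with a smooth integrand, the collocation estimate of a statistical moment reduces to Gauss quadrature in the random parameter. A minimal one-dimensional sketch (not the ME-PCM code itself; function names are invented), computing the mean of f(X) for a standard normal input via Gauss-Hermite nodes:

```python
import numpy as np

def collocation_mean(f, n_nodes):
    """Mean of f(X) for X ~ N(0, 1) by Gauss-Hermite collocation.
    hermgauss integrates against exp(-t^2), so substitute x = sqrt(2)*t."""
    t, w = np.polynomial.hermite.hermgauss(n_nodes)
    return np.dot(w, f(np.sqrt(2.0) * t)) / np.sqrt(np.pi)

# Smooth test integrand with a known answer: E[exp(X)] = exp(1/2).
approx = collocation_mean(np.exp, 12)
exact = np.exp(0.5)
```

The rapid convergence with the number of nodes for smooth integrands reflects the degree-of-exactness dependence proved in the paper; the multi-element construction applies such rules element by element over a partitioned parameter space.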
Results of a nuclear power plant Application of a new technique for human error analysis (ATHEANA)
Forester, J.A.; Whitehead, D.W.; Kolaczkowski, A.M.; Thompson, C.M.
1997-10-01
A new method to analyze human errors has been demonstrated at a pressurized water reactor (PWR) nuclear power plant. This was the first application of the new method referred to as A Technique for Human Error Analysis (ATHEANA). The main goals of the demonstration were to test the ATHEANA process as described in the frame-of-reference manual and the implementation guideline, test a training package developed for the method, test the hypothesis that plant operators and trainers have significant insight into the error-forcing-contexts (EFCs) that can make unsafe actions (UAs) more likely, and to identify ways to improve the method and its documentation. A set of criteria to evaluate the "success" of the ATHEANA method as used in the demonstration was identified. A human reliability analysis (HRA) team was formed that consisted of an expert in probabilistic risk assessment (PRA) with some background in HRA (not ATHEANA) and four personnel from the nuclear power plant. Personnel from the plant included two individuals from their PRA staff and two individuals from their training staff. Both individuals from training are currently licensed operators and one of them was a senior reactor operator "on shift" until a few months before the demonstration. The demonstration was conducted over a 5 month period and was observed by members of the Nuclear Regulatory Commission's ATHEANA development team, who also served as consultants to the HRA team when necessary. Example results of the demonstration to date, including identified human failure events (HFEs), UAs, and EFCs are discussed. Also addressed is how simulator exercises are used in the ATHEANA demonstration project.
SU-E-T-152: Error Sensitivity and Superiority of a Protocol for 3D IMRT Quality Assurance
Gueorguiev, G [Massachusetts General Hospital, Boston, MA (United States); University of Massachusetts Lowell, Lowell, MA (United States); Cotter, C; Turcotte, J; Sharp, G; Crawford, B [Massachusetts General Hospital, Boston, MA (United States); Mah'D, M [University of Massachusetts Lowell, Lowell, MA (United States)
2014-06-01
Purpose: To test if the parameters included in our 3D QA protocol with current tolerance levels are able to detect certain errors and show the superiority of the 3D QA method over single ion chamber measurements and the 2D gamma test by detecting most of the introduced errors. The 3D QA protocol parameters are: TPS and measured average dose difference, 3D gamma test with 3mmDTA/3% test parameters, and structure volume for which the TPS predicted and measured absolute dose difference is greater than 6%. Methods: Two prostate and two thoracic step-and-shoot IMRT patients were investigated. The following errors were introduced to each original treatment plan: energy switched from 6MV to 10MV, linac jaws retracted to 15cmx15cm, 1, 2, or 3 central MLC leaf pairs retracted behind the jaws, single central MLC leaf put in or out of the treatment field, Monitor Units (MU) increased and decreased by 1 and 3%, collimator off by 5 and 15 degrees, detector shifted by 5mm to the left and right, gantry treatment angle off by 5 and 15 degrees. QA was performed on each plan using a single ion chamber, a 2D ion chamber array for 2D gamma analysis, and IBA's COMPASS system for 3D QA. Results: Of the three tested QA methods, the single ion chamber performs the worst, failing to detect subtle errors. 3D QA proves to be the superior of the three methods, detecting all of the introduced errors except the 10MV energy switch, the 1% MU change, and the MLC rotation (errors that were not detected by any of the tested QA methods). Conclusion: As the way radiation is delivered evolves, so must the QA. We believe a diverse set of 3D statistical parameters applied both to OAR and target plan structures provides the highest level of QA.
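The 3D gamma test in the protocol combines a dose-difference tolerance with a distance-to-agreement (DTA) tolerance. Below is a minimal 1D illustration of the gamma index with global 3%/3mm criteria; this is a generic textbook formulation, not the COMPASS implementation, and the dose profile is invented:

```python
import numpy as np

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose,
                dose_tol=0.03, dist_tol=3.0):
    """1D gamma index with global normalization: for each reference point,
    minimize the combined dose-difference / distance-to-agreement metric
    over all evaluated points (dose_tol fractional, dist_tol in mm)."""
    d_max = ref_dose.max()  # global normalization dose
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        dd = (eval_dose - rd) / (dose_tol * d_max)   # dose-difference term
        dx = (eval_pos - rp) / dist_tol              # distance term
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return np.array(gammas)

pos = np.linspace(0.0, 50.0, 101)            # positions in mm
dose = np.exp(-((pos - 25.0) / 10.0) ** 2)   # arbitrary Gaussian profile
g = gamma_index(pos, dose, pos, dose)        # identical profiles
print((g <= 1.0).mean())  # -> 1.0, a 100% pass rate
```

A point passes when its gamma value is at most 1; a measured profile identical to the reference passes everywhere.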
Use of event-level neutrino telescope data in global fits for theories of new physics
Scott, P. [Dept. of Physics, McGill University, 3600 rue University, Montréal, QC, H3A 2T8 (Canada); Savage, C.; Edsjö, J. [Oskar Klein Centre for Cosmoparticle Physics and Dept. of Physics, Stockholm University, SE-10691 Stockholm (Sweden); Abbasi, R.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Baker, M. [Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin, Madison, WI 53706 (United States); Abdou, Y. [Dept. of Physics and Astronomy, University of Gent, B-9000 Gent (Belgium); Ackermann, M. [DESY, D-15735 Zeuthen (Germany); Adams, J. [Dept. of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch (New Zealand); Aguilar, J.A. [Département de physique nucléaire et corpusculaire, Université de Genève, CH-1211 Genève (Switzerland); Altmann, D. [Institut für Physik, Humboldt-Universität zu Berlin, D-12489 Berlin (Germany); Bai, X. [Bartol Research Institute and Department of Physics and Astronomy, University of Delaware, Newark, DE 19716 (United States); Barwick, S.W. [Dept. of Physics and Astronomy, University of California, Irvine, CA 92697 (United States); Baum, V. [Institute of Physics, University of Mainz, Staudinger Weg 7, D-55099 Mainz (Germany); Bay, R. [Dept. of Physics, University of California, Berkeley, CA 94720 (United States); Beattie, K. [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Beatty, J.J. [Dept. of Physics and Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH 43210 (United States); Bechet, S., E-mail: patscott@physics.mcgill.ca, E-mail: danning@fysik.su.se, E-mail: savage@fysik.su.se [Université Libre de Bruxelles, Science Faculty CP230, B-1050 Brussels (Belgium); and others
2012-11-01
We present a fast likelihood method for including event-level neutrino telescope data in parameter explorations of theories for new physics, and announce its public release as part of DarkSUSY 5.0.6. Our construction includes both angular and spectral information about neutrino events, as well as their total number. We also present a corresponding measure for simple model exclusion, which can be used for single models without reference to the rest of a parameter space. We perform a number of supersymmetric parameter scans with IceCube data to illustrate the utility of the method: example global fits and a signal recovery in the constrained minimal supersymmetric standard model (CMSSM), and a model exclusion exercise in a 7-parameter phenomenological version of the MSSM. The final IceCube detector configuration will probe almost the entire focus-point region of the CMSSM, as well as a number of MSSM-7 models that will not otherwise be accessible to e.g. direct detection. Our method accurately recovers the mock signal, and provides tight constraints on model parameters and derived quantities. We show that the inclusion of spectral information significantly improves the accuracy of the recovery, providing motivation for its use in future IceCube analyses.
A Roadmap to Success: Hiring, Retaining, and Including People...
Broader source: Energy.gov (indexed) [DOE]
A Roadmap to Success: Hiring, Retaining, and Including People with Disabilities. December 5, 2014...
Article 1 of 7: Motivates and Includes the Consumer
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
and include the consumer exist. Some examples include advanced two-way metering (AMI), demand response (DR), and distributed energy resources (DER). A common misconception is...
Including Retro-Commissioning in Federal Energy Savings Performance...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Including Retro-Commissioning in Federal Energy Savings Performance Contracts. Document describes...
Investigations into the Nature of Halogen Bonding Including Symmetry...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Investigations into the Nature of Halogen Bonding Including Symmetry Adapted Perturbation Theory Analyses.
Data Fitting in Partial Differential Algebraic Equations: Some Academic and Industrial
Schittkowski, Klaus
Applications include the dynamics of hydro systems, MCFC fuel cells, and horn radiators for satellite communication. Key words: parameter estimation, data fitting, least squares optimization, partial differential algebraic equations.
A simple extension of two-phase characteristic curves to include the dry region
WEBB,STEPHEN W.
2000-01-25
Two-phase characteristic curves are necessary for the simulation of water and vapor flow in porous media. Existing functions such as van Genuchten, Brooks and Corey, and Luckner et al. have significant limitations in the dry region as the liquid saturation goes to zero. This region, which is important in a number of applications including liquid and vapor flow and vapor-solid sorption, has been the subject of a number of previous investigations. Most previous studies extended standard capillary pressure curves into the adsorption region to zero water content and required a refitting of the revised curves to the data. In contrast, the present method provides for a simple extension of existing capillary pressure curves without the need to refit the experimental data. Therefore, previous curve fits can be used, and the transition between the existing fit and the relationship in the adsorption region is easily calculated. The data-model comparison shows good agreement. This extension is a simple and convenient way to extend existing curves to the dry region.
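The extension idea can be sketched numerically. The block below evaluates a van Genuchten capillary pressure curve and attaches a simple log-linear adsorption branch below a matching saturation; this is a variant in the spirit of the abstract, not the paper's exact construction, and the curve parameters, matching saturation S_m, and oven-dry pressure are all assumptions for the sketch:

```python
import numpy as np

# van Genuchten capillary pressure with illustrative parameters
# (alpha, n, and the saturations are NOT taken from the paper)
alpha, n = 1e-4, 2.0           # 1/Pa and shape parameter
m = 1.0 - 1.0 / n
S_r, S_s = 0.10, 1.0           # residual and saturated liquid saturation

def pc_vg(S):
    Se = np.clip((S - S_r) / (S_s - S_r), 1e-12, 1.0)  # effective saturation
    return (Se ** (-1.0 / m) - 1.0) ** (1.0 / n) / alpha

# dry-region extension: keep the fitted curve above a matching saturation
# S_m and, below it, interpolate linearly in (S, log Pc) down to an
# assumed oven-dry pressure at zero saturation
S_m, Pc_dry = 0.15, 1e9        # matching point and oven-dry pressure (assumed)

def pc_extended(S):
    S = np.asarray(S, dtype=float)
    logp = np.where(
        S >= S_m,
        np.log10(pc_vg(np.maximum(S, S_m))),
        np.log10(Pc_dry) + (np.log10(pc_vg(S_m)) - np.log10(Pc_dry)) * S / S_m,
    )
    return 10.0 ** logp

print(pc_extended(0.0))   # oven-dry limit, 1e9 Pa
print(pc_extended(0.5))   # unchanged van Genuchten value in the wet region
```

The key property, matching the abstract's goal, is that the existing fit is untouched above the matching point, so no refitting of the wet-region data is needed.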
Including robustness in multi-criteria optimization for intensity-modulated proton therapy
Chen, Wei; Trofimov, Alexei; Madden, Thomas; Kooy, Hanne; Bortfeld, Thomas; Craft, David
2011-01-01
We present a method to include robustness in a multi-criteria optimization (MCO) framework for intensity-modulated proton therapy (IMPT). The approach allows one to simultaneously explore the trade-off between different objectives as well as the trade-off between robustness and nominal plan quality. In MCO, a database of plans, each emphasizing different treatment planning objectives, is pre-computed to approximate the Pareto surface. An IMPT treatment plan that strikes the best balance between the different objectives can be selected by navigating on the Pareto surface. In our approach, robustness is integrated into MCO by adding robustified objectives and constraints to the MCO problem. Uncertainties of the robust problem are modeled by pre-calculated dose-influence matrices for a nominal scenario and a number of pre-defined error scenarios. A robustified objective represents the worst objective function value that can be realized for any of the error scenarios. The optimization method is based on a linear...
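The robustified objective, i.e. the worst objective value over the pre-defined error scenarios, can be illustrated directly. In this toy sketch the dose-influence matrices, scenario scalings, and quadratic dose objective are all invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy dose-influence matrices: nominal scenario plus two error scenarios
# (numbers are invented; a real IMPT problem is far larger)
n_vox, n_beamlets = 50, 10
D_nominal = rng.uniform(0.5, 1.0, (n_vox, n_beamlets))
scenarios = [D_nominal,
             D_nominal * 0.95,   # e.g. systematic undershoot
             D_nominal * 1.05]   # e.g. systematic overshoot

target = np.full(n_vox, 60.0)    # prescribed dose per voxel

def nominal_objective(x):
    return np.mean((D_nominal @ x - target) ** 2)

def robust_objective(x):
    # robustified objective: worst value realized over the error scenarios
    return max(np.mean((D @ x - target) ** 2) for D in scenarios)

# least-squares fluence for the nominal scenario, clipped to be physical
x, *_ = np.linalg.lstsq(D_nominal, target, rcond=None)
x = np.clip(x, 0.0, None)
print(robust_objective(x) >= nominal_objective(x))  # -> True by construction
```

Because the scenario set contains the nominal case, the robustified objective always bounds the nominal one from above, which is exactly the trade-off the MCO navigation exposes.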
System Losses Study - FIT (Fuel-cycle Integration and Tradeoffs)
Steven J. Piet; Nick R. Soelberg; Samuel E. Bays; Robert S. Cherry; Denia Djokic; Candido Pereira; Layne F. Pincock; Eric L. Shaber; Melissa C. Teague; Gregory M. Teske; Kurt G. Vedros
2010-09-01
This team aimed to understand the broad implications of changes of operating performance and parameters of a fuel cycle component on the entire system. In particular, this report documents the study of the impact of changing the loss of fission products into recycled fuel and the loss of actinides into waste. When the effort started in spring 2009, an over-simplified statement of the objective was “the number of nines” – how would the cost of separation, fuel fabrication, and waste management change as the number of nines of separation efficiency changed. The intent was to determine the optimum “losses” of TRU into waste for the single system that had been the focus of the Global Nuclear Energy Program (GNEP), namely sustained recycle in burner fast reactors, fed by transuranic (TRU) material recovered from used LWR UOX-51 fuel. That objective proved to be neither possible (insufficient details or attention to the former GNEP options, change in national waste management strategy from a Yucca Mountain focus) nor appropriate given the 2009-2010 change to a science-based program considering a wider range of options. Indeed, the definition of “losses” itself changed from the loss of TRU into waste to a generic definition that a “loss” is any material that ends up where it is undesired. All streams from either separation or fuel fabrication are products; fuel feed streams must lead to fuels with tolerable impurities and waste streams must meet waste acceptance criteria (WAC) for one or more disposal sites. And, these losses are linked in the sense that as the loss of TRU into waste is reduced, often the loss or carryover of waste into TRU or uranium is increased. The effort has provided a mechanism for connecting these three Campaigns at a technical level that had not previously occurred – asking smarter and smarter questions, sometimes answering them, discussing assumptions, identifying R&D needs, and gaining new insights. 
The FIT model has been a forcing function, helping the team in this endeavor. Models don’t like “TBD” as an input, forcing us to make assumptions and see if they matter. A major addition in FY 2010 was exploratory analysis of “modified open fuel” cycles, employing “minimum fuel treatment” as opposed to full aqueous or electrochemical separation treatment. This increased complexity in our analysis and analytical tool development because equilibrium conditions do not appear sustainable in minimum fuel treatment cases, as was assumed in FY 2009 work with conventional aqueous and electrochemical separation. It is no longer reasonable to assume an equilibrium situation exists in all cases.
Method and system for reducing errors in vehicle weighing systems
Hively, Lee M. (Philadelphia, TN); Abercrombie, Robert K. (Knoxville, TN)
2010-08-24
A method and system (10, 23) for determining vehicle weight to a precision of <0.1%, uses a plurality of weight sensing elements (23), a computer (10) for reading in weighing data for a vehicle (25) and produces a dataset representing the total weight of a vehicle via programming (40-53) that is executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing, (b) by determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation from the weighing data; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% in the vehicle weight.
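One way to read steps (a)-(c) is as identifying and subtracting the dominant oscillatory mode from the weighing record. The sketch below does this with a simple FFT notch on synthetic data; it is an illustration of the idea, not the patented algorithm, and the sampling rate, mode frequency, and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 4, 0.01)                        # 4 s record at 100 Hz
true_weight = 20000.0                            # kg (invented)
signal = (true_weight
          + 400.0 * np.sin(2 * np.pi * 2.5 * t)  # vehicle-motion oscillation
          + rng.normal(0.0, 5.0, t.size))        # sensor noise

def remove_dominant_mode(x):
    """Zero the strongest non-DC spectral line and invert the FFT,
    i.e. subtract the single dominant oscillatory mode from the record."""
    X = np.fft.rfft(x)
    k = np.argmax(np.abs(X[1:])) + 1   # skip the DC bin (the weight itself)
    X[k] = 0.0
    return np.fft.irfft(X, n=x.size)

cleaned = remove_dominant_mode(signal)
print(np.std(signal - true_weight))   # scatter before mode removal
print(np.std(cleaned - true_weight))  # scatter after mode removal
```

Repeating the removal until the residual stops shrinking mirrors the iterate-until-converged structure of steps (a)-(d).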
Comparison of Wind Power and Load Forecasting Error Distributions: Preprint
Hodge, B. M.; Florita, A.; Orwig, K.; Lew, D.; Milligan, M.
2012-07-01
The introduction of large amounts of variable and uncertain power sources, such as wind power, into the electricity grid presents a number of challenges for system operations. One issue involves the uncertainty associated with scheduling power that wind will supply in future timeframes. However, this is not an entirely new challenge; load is also variable and uncertain, and is strongly influenced by weather patterns. In this work we make a comparison between the day-ahead forecasting errors encountered in wind power forecasting and load forecasting. The study examines the distribution of errors from operational forecasting systems in two different Independent System Operator (ISO) regions for both wind power and load forecasts at the day-ahead timeframe. The day-ahead timescale is critical in power system operations because it serves the unit commitment function for slow-starting conventional generators.
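A simple way to compare two forecast-error distributions is through their tail behavior. The sketch below contrasts excess kurtosis for synthetic stand-ins (Gaussian for load errors, heavier-tailed Laplace for wind errors); the distributions are assumptions for illustration, not the operational ISO data from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic stand-ins for day-ahead forecast errors: roughly Gaussian
# load errors versus heavier-tailed wind power errors
load_err = rng.normal(0.0, 1.0, 10000)
wind_err = rng.laplace(0.0, 1.0, 10000)

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for a Gaussian, positive for fat tails."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

print(round(excess_kurtosis(load_err), 2))  # near 0 for the Gaussian errors
print(round(excess_kurtosis(wind_err), 2))  # near 3 for the Laplace errors
```

Fatter tails in the error distribution mean large day-ahead misses are more frequent than a Gaussian assumption would suggest, which matters for unit commitment reserve decisions.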
On the efficiency of nondegenerate quantum error correction codes for Pauli channels
Gunnar Bjork; Jonas Almlof; Isabel Sainz
2009-05-19
We examine the efficiency of pure, nondegenerate quantum error-correction codes for Pauli channels. Specifically, we investigate if correction of multiple errors in a block is more efficient than using a code that only corrects one error per block. Block coding with multiple-error correction cannot increase the efficiency when the qubit error probability is below a certain value and the code size is fixed. More surprisingly, existing multiple-error correction codes with a code length equal to or less than 256 qubits have lower efficiency than the optimal single-error correcting codes for any value of the qubit error probability. We also investigate how efficient various proposed nondegenerate single-error correcting codes are compared to the limit set by the code redundancy and by the necessary conditions for hypothetically existing nondegenerate codes. We find that existing codes are close to optimal.
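For a nondegenerate code correcting up to t errors per block with independent qubit error probability p, the block failure probability is one minus the probability of at most t errors. A small sketch of that comparison (the 5-qubit single-error-correcting code is standard; the 11-qubit, t=2 parameters are chosen purely for illustration, and efficiency in the paper's sense also weighs in the code rate):

```python
from math import comb

def block_failure_prob(n, t, p):
    """Failure probability of a nondegenerate code on n qubits correcting
    up to t errors, with independent qubit error probability p: the block
    fails whenever more than t qubits are hit."""
    ok = sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(t + 1))
    return 1.0 - ok

p = 1e-3
# illustrative comparison: the 5-qubit single-error-correcting code versus
# an 11-qubit double-error-correcting code (parameters chosen for the sketch)
print(block_failure_prob(5, 1, p))   # ~ C(5,2) p^2 = 1e-5
print(block_failure_prob(11, 2, p))  # ~ C(11,3) p^3 = 1.65e-7
```

For small p, failure probability scales as p^(t+1), which is why the paper's comparison hinges on where the code redundancy outweighs that gain.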
Scaling behavior of discretization errors in renormalization and improvement constants
Bhattacharya, T; Lee, W; Sharpe, S R; Bhattacharya, Tanmoy; Gupta, Rajan; Lee, Weonjong; Sharpe, Stephen R.
2006-01-01
Non-perturbative results for improvement and renormalization constants needed for on-shell and off-shell O(a) improvement of bilinear operators composed of Wilson fermions are presented. The calculations have been done in the quenched approximation at beta=6.0, 6.2 and 6.4. To quantify residual discretization errors we compare our data with results from other non-perturbative calculations and with one-loop perturbation theory.
Error message recording and reporting in the SLC control system
Spencer, N.; Bogart, J.; Phinney, N.; Thompson, K.
1985-04-01
Error or information messages that are signaled by control software either in the VAX host computer or the local microprocessor clusters are handled by a dedicated VAX process (PARANOIA). Messages are recorded on disk for further analysis and displayed at the appropriate console. Another VAX process (ERRLOG) can be used to sort, list and histogram various categories of messages. The functions performed by these processes and the algorithms used are discussed.
Topics in measurement error and missing data problems
Liu, Lian
2009-05-15
In this research, the impact of missing genotypes is investigated for high resolution combined linkage and association mapping of quantitative trait loci (QTL). We assume that the genotype data are missing completely at random (MCAR). Two... and asymptotic properties. In the genetics study, a new method is proposed to account for the missing genotype in a combined linkage and association study. We have concluded that this method does not improve power but it will provide better type I error rates...
Lined sampling vessel including a filter to separate solids from liquids on exit
Shurtliff, Rodney M. (Idaho Falls, ID); Klingler, Kerry M. (Idaho Falls, ID); Turner, Terry D. (Ammon, ID)
2001-01-01
A filtering apparatus has an open canister with an inlet port. A canister lid is provided which includes an outlet port for the passage of fluids from the canister. Liners are also provided which are shaped to fit the interiors of the canister and the lid, with at least the canister liner preferably being flexible. The sample to be filtered is positioned inside the canister liner, with the lid and lid liner being put in place thereafter. A filter element is located between the sample and the outlet port. Seals are formed between the canister liner and lid liner, and around the outlet port to prevent fluid leakage. A pressure differential is created between the canister and the canister liner so that the fluid in the sample is ejected from the outlet port and the canister liner collapses around the retained solids.
Runtime Detection of C-Style Errors in UPC Code
Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D
2011-09-29
Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
Goal pursuit is more than planning: the moderating role of regulatory fit
Tam, Wing Yin Leona
2006-10-30
Research indicates that planning helps consumers in their goal pursuit, but little is known about how and when such beneficial effects change with regulatory fit, that is, fit between consumers' regulatory orientation and goal pursuit means...
Fitting Narrow Emission Lines in X-ray Spectra Taeyoung Park
Wolfe, Patrick J.
Department of Statistics, Harvard University, October 25, 2005. ...X-ray luminosity, and the emission of photons with energies is represented by a spectrum...
SHRINK-FITTING AND DOWEL WELDING IN MORTISE AND TENON STRUCTURAL WOOD JOINTS
Université de Paris-Sud XI
E. Mougel, C. Segovia. Increasing the number of welded dowels, however, produced joints of higher strength than those bonded just by shrink-fitting. Combining in the same joint both dowel welding and shrink-fitting...
Split Bregman Method for Minimization of Region-Scalable Fitting Energy for Image Segmentation
Soatto, Stefano
The Ohio State University, OH 43202, U.S.; Department of Mathematics, Harbin Institute of Technology. ...convex segmentation method and the split Bregman technique into the region-scalable fitting energy...
The Mimicking Octopus: Towards a one-size-fits-all Database Architecture
Alekh Jindal. In this paper we discuss building a new type of database system which fits several use cases. Database systems started off as monolithic systems; however, database engineers soon started tuning their performance...
Triantaphyllou, Evangelos
Prediction of Diabetes by Employing a New Data Mining Approach Which Balances Fitting... (...disease) is called diabetes. The cause of diabetes is still a mystery, although obesity and lack of exercise...
Body Fitted Grid Generation Method with Moving Boundaries and Its Application for analysis of MEMS
Tentzeris, Manos
A method for analyzing these MEMS devices using a body-fitted grid generation method with moving boundaries is proposed. This technique is based on the finite-difference time-domain (FD-TD) method and a kind of grid generation...
SU-E-T-51: Bayesian Network Models for Radiotherapy Error Detection
Kalet, A; Phillips, M; Gennari, J [University of Washington, Seattle, WA (United States)]
2014-06-01
Purpose: To develop a probabilistic model of radiotherapy plans using Bayesian networks that will detect potential errors in radiation delivery. Methods: Semi-structured interviews with medical physicists and other domain experts were employed to generate a set of layered nodes and arcs forming a Bayesian Network (BN) which encapsulates relevant radiotherapy concepts and their associated interdependencies. Concepts in the final network were limited to those whose parameters are represented in the institutional database at a level significant enough to develop mathematical distributions. The concept-relation knowledge base was constructed using the Web Ontology Language (OWL) and translated into Hugin Expert Bayes Network files via the RHugin package in the R statistical programming language. A subset of de-identified data derived from a Mosaiq relational database representing 1937 unique prescription cases was processed and pre-screened for errors and then used by the Hugin implementation of the Expectation-Maximization (EM) algorithm for machine learning of all parameter distributions. Individual networks were generated for each of several commonly treated anatomic regions identified by ICD-9 neoplasm categories including lung, brain, lymphoma, and female breast. Results: The resulting Bayesian networks represent a large part of the probabilistic knowledge inherent in treatment planning. By populating the networks entirely with data captured from a clinical oncology information management system over the course of several years of normal practice, we were able to create accurate probability tables with no additional time spent by experts or clinicians. These probabilistic descriptions of treatment planning allow one to check whether a treatment plan is within the normal scope of practice, given some initial set of clinical evidence, and thereby detect potential outliers to be flagged for further investigation.
Conclusion: The networks developed here support the integration of probabilistic models into clinical chart checking for improved detection of potential errors in RT plans.
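A toy version of such a network-based plan check can make the flagging idea concrete. In this sketch the node names, conditional probability tables, and threshold are all invented stand-ins for the distributions the paper learns from clinical data:

```python
# toy Bayesian-network plan check: P(site) and P(dose_bin | site) tables
# stand in for distributions learned from an oncology information system
# (all names, probabilities, and the threshold are invented)
p_site = {"lung": 0.5, "breast": 0.5}
p_dose_given_site = {
    "lung":   {"low": 0.10, "standard": 0.85, "high": 0.05},
    "breast": {"low": 0.20, "standard": 0.78, "high": 0.02},
}

def plan_likelihood(site, dose_bin):
    """Joint probability of the evidence under the network; plans whose
    likelihood falls below a threshold are flagged for review."""
    return p_site[site] * p_dose_given_site[site][dose_bin]

THRESHOLD = 0.02
for plan in [("lung", "standard"), ("breast", "high")]:
    lik = plan_likelihood(*plan)
    print(plan, round(lik, 3), "FLAG" if lik < THRESHOLD else "ok")
```

A real network has many more nodes and arcs, but the chart-check logic is the same: score the observed plan evidence under the learned joint distribution and flag low-likelihood outliers.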
acid analysis including: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Nairn, John A. A bottom-up analysis of including aviation within the EU's Emissions Trading Scheme. Summary: A bottom-up analysis of including aviation...
analysis including quantification: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Ausloos, 2004-12-31. A bottom-up analysis of including aviation within the EU's Emissions Trading Scheme. Summary: A bottom-up analysis of including aviation...
Biomarkers Core Lab Price List Does NOT Include
Grishok, Alla
Biomarkers Core Lab Price List Does NOT Include Kit Cost PURCHASED by INVESTIGATOR
Example Retro-Commissioning Scope of Work to Include Services...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Example Retro-Commissioning Scope of Work to Include Services as Part of an ESPC Investment-Grade Audit
Meyer, Jeff, E-mail: jmeye3@utsouthwestern.ed [University of Texas-M.D. Anderson Cancer Center, Houston, TX (United States); Bluett, Jaques; Amos, Richard [University of Texas-M.D. Anderson Cancer Center, Houston, TX (United States)
2010-10-01
Purpose: Conventional proton therapy with passively scattered beams is used to treat a number of tumor sites, including prostate cancer. Spot scanning proton therapy is a treatment delivery means that improves conformal coverage of the clinical target volume (CTV). Placement of individual spots within a target is dependent on traversed tissue density. Errors in patient alignment perturb dose distributions. Moreover, there is a need for a rational planning approach that can mitigate the dosimetric effect of random alignment errors. We propose a treatment planning approach and then analyze the consequences of various simulated alignment errors on prostate treatments. Methods and Materials: Ten control patients with localized prostate cancer underwent treatment planning for spot scanning proton therapy. After delineation of the clinical target volume, a scanning target volume (STV) was created to guide dose coverage. Errors in patient alignment in two axes (rotational and yaw) as well as translational errors in the anteroposterior direction were then simulated, and dose to the CTV and normal tissues were reanalyzed. Results: Coverage of the CTV remained high even in the setting of extreme rotational and yaw misalignments. Changes in the rectum and bladder V45 and V70 were similarly minimal, except in the case of translational errors, where, as a result of opposed lateral beam arrangements, much larger dosimetric perturbations were observed. Conclusions: The concept of the STV as applied to spot scanning radiation therapy and as presented in this report leads to robust coverage of the CTV even in the setting of extreme patient misalignments.
Super-Kamiokande hep neutrino best fit: a possible signal of nonmaxwellian solar plasma
Massimo Coraddu; Marcello Lissia; Giuseppe Mezzorani; Piero Quarati
2002-12-03
The Super-Kamiokande best global fit, which includes data from the SNO, Gallium and Chlorine experiments, results in a hep neutrino contribution to the signals that, even after oscillation, is greater than the SSM prediction. The solar hep neutrino flux that would yield this contribution is four times larger than the one predicted by the SSM. Recent detailed calculations exclude that the astrophysical factor S_{hep}(0) could be wrong by such a large factor. Given the reliability of the temperature and density profiles inside the Sun, this experimental result indicates that plasma effects are important for this reaction. We show that a slight enhancement of the high-energy tail, of the order of the deviations from the Maxwell-Boltzmann distribution expected in the solar core plasma, produces an increment of the hep rate of the magnitude required. We verified that the other neutrino fluxes remain compatible with the experimental signals and SSM predictions. Better measurements of the high-energy tail of the neutrino spectrum would improve our understanding of reaction rates in the solar plasma.
A Radiatively Light Stop Saves the Best Global Fit for Higgs Boson Mass and Decays
Zhaofeng Kang; Tianjun Li; Jinmian Li; Yandong Liu
2012-08-13
The LHC discovered the Standard Model (SM) like Higgs boson with mass around 125 GeV. However, there exist hints of deviations in the Higgs decays. Including the Tevatron data, the deviations can be explained by an extremely mixed stop sector in the sense of the best global fit (BGF). We analyze the relations among the competing reduced coupling hGG, the Higgs boson mass, and the LHC stop mass m_{\wt t_1} lower bound at the tree and one-loop level. In particular, we point out that we use the light stop running mass in the Higgs boson mass calculation but the light stop pole mass in the Higgs decays. The gluino radiative correction to the light stop mass therefore plays a crucial role. Its large negative correction saves the BGF in the Minimal Supersymmetric SM (MSSM) and the next-to-MSSM (NMSSM) constrained by perturbativity. Moreover, a light stop is predicted: in the MSSM, if we set the gluino mass M_3\lesssim4 TeV, we have m_{\wt t_1}
Cappelli, M. [UTFISST, ENEA Casaccia, via Anguillarese 301, Rome (Italy); Gadomski, A. M. [ECONA, Centro Interuniversitario Elaborazione Cognitiva Sistemi Naturali e Artificiali, via dei Marsi 47, Rome (Italy); Sepiellis, M. [UTFISST, ENEA Casaccia, via Anguillarese 301, Rome (Italy); Wronikowska, M. W. [UTFISST, ENEA Casaccia, via Anguillarese 301, Rome (Italy); Poznan School of Social Sciences (Poland)
2012-07-01T23:59:59.000Z
In the field of nuclear power plant (NPP) safety modeling, the perceived role of socio-cognitive engineering (SCE) is continuously increasing. Today, the focus is especially on the identification of human and organizational decisional errors caused by operators and managers under high-risk conditions, as is evident from analyzing reports on nuclear incidents that occurred in the past. At present, the engineering and social safety requirements need to enlarge their domain of interest so as to include all possible loss-generating events that could be the consequences of an abnormal state of an NPP. Socio-cognitive modeling of Integrated Nuclear Safety Management (INSM) using the TOGA meta-theory was discussed during the ICCAP 2011 Conference. In this paper, more detailed aspects of cognitive decision-making and its possible human errors and organizational vulnerability are presented. The formal TOGA-based network model for cognitive decision-making makes it possible to indicate and analyze the nodes and arcs in which plant operators' and managers' errors may appear. TOGA's multi-level IPK (Information, Preferences, Knowledge) model of abstract intelligent agents (AIAs) is applied. In the NPP context, a super-safety approach is also discussed, taking into consideration unexpected events and managing them from a systemic perspective. As the nature of human errors depends on the specific properties of the decision-maker and the decisional context of operation, a classification of decision-making using IPK is suggested. Several types of initial decision-making situations useful for the diagnosis of NPP operators' and managers' errors are considered. The developed models can be used as a basis for applications to NPP educational or engineering simulators for training the NPP executive staff. (authors)
Recompile if your codes run into MPICH error after the maintenance...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Recompile if your codes run into MPICH errors after the maintenance on 6/25/2014. June 27, 2014 (0...
Design techniques for graph-based error-correcting codes and their applications
Lan, Ching Fu
2006-04-12T23:59:59.000Z
-correcting (channel) coding. The main idea of error-correcting codes is to add redundancy to the information to be transmitted so that the receiver can exploit the correlation between the transmitted information and the redundancy and correct or detect errors caused...
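The redundancy idea in this excerpt can be illustrated with the simplest possible code, a single even-parity check bit (a toy sketch, not one of the graph-based codes the dissertation designs; the function names are illustrative):

```python
def encode(bits):
    # Append one parity bit so every valid codeword has even parity.
    return bits + [sum(bits) % 2]

def has_error(codeword):
    # Odd parity means some single bit was flipped in transit.
    return sum(codeword) % 2 == 1
```

Flipping any single bit of a codeword is detected, though not corrected; correction requires more redundancy, as in Hamming codes or the graph-based constructions studied here.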
V-109: Google Chrome WebKit Type Confusion Error Lets Remote...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
V-109: Google Chrome WebKit Type Confusion Error Lets Remote Users Execute Arbitrary Code...
T-545: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets...
T-545: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets Remote Users Execute Arbitrary Code...
Cognitive analysis of students' errors and misconceptions in variables, equations, and functions
Li, Xiaobao
2009-05-15T23:59:59.000Z
such issues, three basic algebra concepts - variable, equation, and function – are used to analyze students’ errors, possible buggy algorithms, and the conceptual basis of these errors: misconceptions. Through the research on these three basic concepts...
Using Graphs for Fast Error Term Approximation of Time-varying Datasets
Nuber, C; LaMar, E C; Pascucci, V; Hamann, B; Joy, K I
2003-02-27T23:59:59.000Z
We present a method for the efficient computation and storage of approximations of error tables used for error estimation of a region between different time steps in time-varying datasets. The error between two time steps is defined as the distance between the data of these time steps. Error tables are used to look up the error between different time steps of a time-varying dataset, especially when run-time error computation is expensive. However, even the generation of error tables itself can be expensive. For n time steps, the exact error look-up table (which stores the error values for all pairs of time steps in a matrix) has a memory complexity and pre-processing time complexity of O(n^2), and O(1) for error retrieval. Our approximate error look-up table approach uses trees, where the leaf nodes represent original time steps, and interior nodes contain an average (or best-representative) of the children nodes. The error computed on an edge of a tree describes the distance between the two nodes on that edge. Evaluating the error between two different time steps requires traversing a path between the two leaf nodes and accumulating the errors on the traversed edges. For n time steps, this scheme has a memory complexity and pre-processing time complexity of O(n log n), a significant improvement over the exact scheme; the error retrieval complexity is O(log n). As we do not need to calculate all possible n^2 error terms, our approach is a fast way to generate the approximation.
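The tree scheme described in this abstract can be sketched as follows. This is a minimal reading of the method, assuming an RMS distance metric and mean-of-children representatives; the paper's exact metric and representative choice may differ:

```python
def dist(a, b):
    # Error metric between two time steps' data arrays: RMS distance
    # (an assumption; the paper only requires some distance function).
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def build(steps):
    # Recursively build a binary tree over the time steps; interior
    # nodes store the average of their children, and each edge stores
    # the parent-child distance (the per-edge error).
    if len(steps) == 1:
        return {"data": steps[0], "children": []}
    mid = len(steps) // 2
    left, right = build(steps[:mid]), build(steps[mid:])
    avg = [(a + b) / 2 for a, b in zip(left["data"], right["data"])]
    node = {"data": avg, "children": [left, right]}
    node["edges"] = [dist(avg, c["data"]) for c in node["children"]]
    return node

def path_error(node, i, j, lo, hi):
    # Approximate error between time steps i < j: accumulate edge
    # errors along the leaf-to-leaf path through their lowest common
    # ancestor. [lo, hi) is the index range this node covers.
    if not node["children"]:
        return 0.0
    mid = (lo + hi) // 2
    if i < mid and j < mid:
        return path_error(node["children"][0], i, j, lo, mid)
    if i >= mid and j >= mid:
        return path_error(node["children"][1], i, j, mid, hi)
    # i and j lie in different subtrees: descend both sides.
    return (node["edges"][0] + down(node["children"][0], i, lo, mid)
            + node["edges"][1] + down(node["children"][1], j, mid, hi))

def down(node, i, lo, hi):
    # Accumulate edge errors from this node down to leaf i.
    if not node["children"]:
        return 0.0
    mid = (lo + hi) // 2
    if i < mid:
        return node["edges"][0] + down(node["children"][0], i, lo, mid)
    return node["edges"][1] + down(node["children"][1], i, mid, hi)
```

Building the tree computes O(n) edge errors instead of all n^2 pairs, and each retrieval walks one root-to-leaf path per endpoint, i.e. O(log n) edge additions.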
T-719:Apache mod_proxy_ajp HTTP Processing Error Lets Remote Users Deny Service
Broader source: Energy.gov [DOE]
A remote user can cause the backend server to remain in an error state until the retry timeout expires.
Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors
Sarkar, Abhra
2014-06-24T23:59:59.000Z
BAYESIAN SEMIPARAMETRIC DENSITY DECONVOLUTION AND REGRESSION IN THE PRESENCE OF MEASUREMENT ERRORS. A Dissertation by ABHRA SARKAR. Submitted to the Office of Graduate and Professional Studies of Texas A&M University in partial fulfillment... Copyright 2014 Abhra Sarkar. ABSTRACT: Although the literature on measurement error problems is quite extensive, solutions to even the most fundamental measurement error problems like density deconvolution and regression with errors...
McReynolds, W.L. (Bonneville Power Administration, Vancouver, WA (US)); Badley, D.E. (N.W. Power Pool, Coordinating Office, Portland, OR (US))
1991-08-01T23:59:59.000Z
This paper describes an automatic generation control (AGC) system that simultaneously reduces time error and accumulated inadvertent interchange energy in an interconnected power system. The method is called automatic time error and accumulated inadvertent interchange reduction (AIIR). With this method, control areas help correct the system time error when doing so also tends to correct their accumulated inadvertent interchange. Thus, in one step, accumulated inadvertent interchange and system time error are corrected.
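The AIIR decision described above reduces to a sign test: an area biases its generation to correct time error only when that correction also reduces its own accumulated inadvertent interchange. A toy sketch (the gain constant, units, and function name are assumptions, not values from the paper):

```python
def aiir_bias(time_error_s, inadvertent_mwh, gain_mw_per_s=10.0):
    """Generation bias (MW) for one control area under AIIR.

    The area helps correct system time error only when doing so also
    tends to correct its accumulated inadvertent interchange, i.e.
    when the two quantities have the same sign.
    """
    if time_error_s * inadvertent_mwh > 0:
        # System fast (positive time error) with positive accumulated
        # inadvertent interchange: back off generation; and vice versa.
        return -gain_mw_per_s * time_error_s
    return 0.0
```

An area whose inadvertent accumulation opposes the time error contributes nothing, so both quantities move toward zero together across the interconnection.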
Lichtenbelt, J.H.; Schram, B.J.
1985-04-01T23:59:59.000Z
The availability of accurate equilibrium data is of high importance in chemical engineering practice, both for design and research purposes. It appeared that for the gas absorption system water-acetone-air, in the range of special interest for absorption and desorption operations, neither literature data nor calculations following UNIFAC gave sufficient accuracy. An experimental program was set up to determine equilibrium data with an accuracy within 2% for low acetone concentrations (up to 7 wt % gas phase) at ambient temperature (16-30°C) and atmospheric pressure (740-860 mmHg). From experiments, the activity coefficient at infinite dilution of acetone, γ, is found to be 6.79 (0.01) at 20°C and 7.28 (0.01) at 25°C, while the total error in γ is 1.5%. The equilibrium constant can be calculated from γ and shows the same error. The experimental data fitting with the procedures of Margules (two parameters) and Van Laar was successful, but NRTL, Wilson, and UNIQUAC failed, probably because of the small concentration range used.
An error correcting procedure for imperfect supervised, nonparametric classification
Ferrell, Dennis Ray
1973-01-01T23:59:59.000Z
ON INFORMATION THEORY ... (B_j is active). For simplicity in writing, Pr(B=B_j) will be abbreviated by Pr(B_j), and f(x|B=B_j) will be abbreviated by f(x|B_j). The basic problem is, upon observing x, to determine which class is active. If complete... the risk of assigning x the label B_j, r_j(x), is r_j(x) = sum_{i=1}^{L} [...] Pr(B_i|x). The conditional probability of error can be minimized over j by assigning to a measurement x the label value B_j that minimizes r_j(x). The rule which will do this is Bayes rule, b*. The resulting...
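The Bayes rule b* that this excerpt builds up to can be sketched directly: assign x the label with the largest posterior Pr(B_j)·f(x|B_j), which minimizes the conditional probability of error under 0-1 loss. The class names and densities below are hypothetical:

```python
import math

def bayes_label(x, priors, likelihoods):
    # b*: pick the class j maximizing Pr(B_j) * f(x | B_j), i.e. the
    # posterior up to a normalizing constant, which minimizes r_j(x)
    # under 0-1 loss.
    return max(priors, key=lambda j: priors[j] * likelihoods[j](x))

# Two hypothetical unit-variance Gaussian classes centered at 0 and 4.
gauss = lambda mu: (lambda x: math.exp(-(x - mu) ** 2 / 2))
priors = {"B1": 0.5, "B2": 0.5}
likelihoods = {"B1": gauss(0.0), "B2": gauss(4.0)}
```

With equal priors the rule reduces to picking the class whose density is larger at x, so the decision boundary here sits at the midpoint x = 2.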
Optimum decoding of TCM in the presence of phase errors
Han, Jae Choong
1990-01-01T23:59:59.000Z
discussed. Our approach is to assume that intersymbol interference has been effectively removed by the equalizer while the phase tracking scheme has partially removed the phase jitter, in which case the output of the equalizer will have a slowly varying... The DAL [1] used the decision at the output of the Viterbi decoder to demodulate the local carrier. The performance degradation of coded 8-PSK when disturbed by recovered carrier phase error and jitter is investigated in [...], in which simulation...
Effects of color coding on keying time and errors
Wooldridge, Brenda Gail
1983-01-01T23:59:59.000Z
were to determine the effects, if any, of color coding upon the error rate and location time of special function keys on a computer keyboard. An ACT-YA CRT keyboard interfaced with a Kromemco microcomputer was used. There were 84 high school... to communicate with more and more computer-like devices. The most common computer/human interface is the terminal, consisting of a display screen and keyboard. The format and layout on the display screen of computer-generated information is generally...
Common Errors and Innovative Solutions Transcript | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Trade-off of lossless source coding error exponents Cheng Chang Anant Sahai
Sahai, Anant
Trade-off of lossless source coding error exponents. Cheng Chang (HP Labs, Palo Alto) and Anant Sahai (EECS, UC Berkeley), ISIT 2008. Stabilizing an unstable
A Memory Soft Error Measurement on Production Systems Xin Li Kai Shen Michael C. Huang
Shen, Kai
A Memory Soft Error Measurement on Production Systems. Xin Li, Kai Shen, Michael C. Huang. Dealing with these soft (or transient) errors is important for system reliability. Several earlier ... for memory soft error measurement on production systems where performance impact on existing running ap
An Energy-Aware Fault Tolerant Scheduling Framework for Soft Error Resilient Cloud Computing Systems
Pedram, Massoud
An Energy-Aware Fault Tolerant Scheduling Framework for Soft Error Resilient Cloud Computing Systems. INTRODUCTION: Soft error resiliency has become a major concern for modern computing systems as CMOS technology scales [8, 9]. Although it is impossible to entirely eliminate spontaneous soft errors, they can
Digication Error Message:"Your username is already in use by another account."
Barrash, Warren
Digication Error Message: "Your username is already in use by another account." If you receive the error message below, here's how to log into your Digication account. For example, if the error message appeared when using your employee account, switch to your employee
Non-Concurrent Error Detection and Correction in Fault-Tolerant Discrete-Time LTI
Hadjicostis, Christoforos
Non-Concurrent Error Detection and Correction in Fault-Tolerant Discrete-Time LTI Dynamic Systems. Systems operate in encoded form and allow error detection and correction to be performed through parity checks, in a way that allows the parity checks to capture the evolution of errors in the system and, based on non-concurrent parity
Error Analysis of Ia Supernova and Query on Cosmic Dark Energy
Qiuhe Peng; Yiming Hu; Kun Wang; Yu Liang
2012-01-16T23:59:59.000Z
Some serious faults have been found in the error analysis of SNIa observations. Redoing the same error analysis of SNIa following our approach, we find that the average total observational error of SNIa is clearly greater than $0.55^m$, so we cannot decide whether the expansion of the universe is accelerating or not.
Exposure Measurement Error in Time-Series Studies of Air Pollution: Concepts and Consequences
Dominici, Francesca
Exposure Measurement Error in Time-Series Studies of Air Pollution: Concepts and Consequences (11/11/99). Keywords: measurement error, air pollution, time series, exposure. ... studies of air pollution and health. Because measurement error may have substantial implications for interpreting
Introduction to Small-Scale Wind Energy Systems (Including RETScreen...
Introduction to Small-Scale Wind Energy Systems (Including RETScreen Case Study) (Webinar) Jump to: navigation, search Tool Summary LAUNCH TOOL Name: Introduction to Small-Scale...
Laboratory Curiosity rover ChemCam team, including Los Alamos...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
MEXICO, August 23, 2012-Members of the Mars Science Laboratory Curiosity rover ChemCam team, including Los Alamos National Laboratory scientists, squeezed in a little extra target...
PLOT: A UNIX PROGRAM FOR INCLUDING GRAPHICS IN DOCUMENTS
Curtis, Pavel
2013-01-01T23:59:59.000Z
A simple, easy-to-read graphics language designed specifically for including graphics in documents; definitions have the same meanings as in the GRAFPAC graphics system.
analysis including plasma: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Assembly 2010 Space Plasmas in the Solar System, including Planetary Magnetospheres (D) Solar Variability, Cosmic Rays and Climate (D21) GEOMAGNETIC ACTIVITY AT HIGH-LATITUDE:...
Energy Department Expands Gas Gouging Reporting System to Include...
Washington, DC - Energy Secretary Samuel W. Bodman announced today that the Department of Energy has expanded its gas gouging reporting system to include a toll-free telephone...
arch dams including: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Summary: insight into the gamut of shallow water waves, including kinematic, diffusion, dynamic, and gravity waves. Dam-Breach Flood Wave Propagation Using...
Aperiodic dynamical decoupling sequences in presence of pulse errors
Zhi-Hui Wang; V. V. Dobrovitski
2011-01-12T23:59:59.000Z
Dynamical decoupling (DD) is a promising tool for preserving the quantum states of qubits. However, small imperfections in the control pulses can seriously affect the fidelity of decoupling, and qualitatively change the evolution of the controlled system at long times. Using both analytical and numerical tools, we theoretically investigate the effect of pulse error accumulation for two aperiodic DD sequences, the Uhrig DD (UDD) protocol [G. S. Uhrig, Phys. Rev. Lett. {\\bf 98}, 100504 (2007)] and the Quadratic DD (QDD) protocol [J. R. West, B. H. Fong and D. A. Lidar, Phys. Rev. Lett. {\\bf 104}, 130501 (2010)]. We consider the implementation of these sequences using the electron spins of phosphorus donors in silicon, where DD sequences are applied to suppress dephasing of the donor spins. The dependence of the decoupling fidelity on different initial states of the spins is the focus of our study. We investigate in detail the initial drop in the DD fidelity and its long-term saturation. We also demonstrate that by applying the control pulses along different directions, the performance of QDD protocols can be noticeably improved, and explain the reason for this improvement. Our results can be useful for future implementations of the aperiodic decoupling protocols, and for better understanding of the impact of errors on quantum control of spins.
Verification of unfold error estimates in the UFO code
Fehl, D.L.; Biggs, F.
1996-07-01T23:59:59.000Z
Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low-energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
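The Monte Carlo error estimate described here can be sketched generically: perturb the data with the prescribed Gaussian deviates, redo the unfold on each perturbed set, and take the spread of the results. The diagonal `unfold` below is a stand-in for the real inversion, and all names are illustrative, not UFO's API:

```python
import random

def unfold(data, response):
    # Toy "unfold": invert a diagonal response exactly. A stand-in for
    # the real inversion; actual response matrices overlap and the
    # problem may be under-determined.
    return [d / r for d, r in zip(data, response)]

def mc_unfold_error(data, response, rel_sigma=0.05, trials=1000, seed=1):
    """Estimate unfold uncertainty by perturbing the data with Gaussian
    deviates (relative sigma rel_sigma) and measuring the spread of the
    unfolded spectra, channel by channel."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        noisy = [d * (1 + rng.gauss(0, rel_sigma)) for d in data]
        samples.append(unfold(noisy, response))
    n = len(data)
    means = [sum(s[i] for s in samples) / trials for i in range(n)]
    return [(sum((s[i] - means[i]) ** 2 for s in samples) / trials) ** 0.5
            for i in range(n)]
```

For this linear toy unfold, a 5% relative data imprecision propagates to roughly a 5% relative spread in each unfolded channel, which the Monte Carlo spread recovers.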
Articles which include chevron film cooling holes, and related processes
Bunker, Ronald Scott; Lacy, Benjamin Paul
2014-12-09T23:59:59.000Z
An article is described, including an inner surface which can be exposed to a first fluid; an inlet; and an outer surface spaced from the inner surface, which can be exposed to a hotter second fluid. The article further includes at least one row or other pattern of passage holes. Each passage hole includes an inlet bore extending through the substrate from the inlet at the inner surface to a passage hole-exit proximate to the outer surface, with the inlet bore terminating in a chevron outlet adjacent the hole-exit. The chevron outlet includes a pair of wing troughs having a common surface region between them. The common surface region includes a valley which is adjacent the hole-exit; and a plateau adjacent the valley. The article can be an airfoil. Related methods for preparing the passage holes are also described.
Discrete optimization methods to fit piecewise-affine models to data ...
2015-03-09T23:59:59.000Z
(a) A piecewise affine model with k = 2, fitting the eight data points. A = {a_i}_{i∈I} .... where: i) for every j ∈ J, each group A_j is completely contained in the subdo-.
Discrete optimization methods to fit piecewise-affine models to data ...
Edoardo Amaldi
2015-03-08T23:59:59.000Z
Mar 8, 2015 ... Abstract: Fitting piecewise affine models to data points is a pervasive task in many scientific disciplines. In this work, we address the k- ...
Fast curve fitting using neural networks C. M. Bishop and C. M. Roach
Fast curve fitting using neural networks. C. M. Bishop and C. M. Roach. Citation: Rev. Sci. Instrum. C. M. Roach: AEA Technology, Culham Laboratory (Euratom/UKAEA Fusion Association), Oxon OX14 3DB, United Kingdom.
Bias Reduction and Goodness-of-Fit Tests in Conditional Logistic Regression Models
Sun, Xiuzhen
2011-10-21T23:59:59.000Z
in conditional logistic regression by solving a modified score equation. The resultant estimator not only reduces bias but also prevents producing infinite values. Furthermore, we propose a method to calculate the standard error of the resultant estimator. A...
Sulfide stress cracking susceptible pipe fittings bought to NACE MR0175
McIntyre, D.R.; Moore, E.M. Jr. [Saudi Aramco, Dhahran (Saudi Arabia)
1995-09-01T23:59:59.000Z
The NACE MR0175 limit of Rc 22 is non-conservative for cold-forged and stress-relieved ASTM A234 WPB pipe fittings. Hardness surveys and sulfide stress cracking test results per ASTM G39 and NACE TM0177 Method B are presented. More stringent inspection and a hardness limit of BHN 197 (for cold-forged and stress-relieved fittings only) are recommended to rectify this situation.
Zhong, Jing; Hou, Jinliang; Shen, Shyin; Yuan, Haibo; Huo, Zhiying; Zhang, Huihua; Xiang, Maosheng; Zhang, Huawai; Liu, Xiaowe
2015-01-01T23:59:59.000Z
We develop a template-fit method to automatically identify and classify late-type K and M dwarfs in spectra from LAMOST. A search of the commissioning data, acquired in 2009-2010, yields the identification of 2612 late-K and M dwarfs. The template-fit method also provides spectral classification to half a subtype, classifies the stars along the dwarf-subdwarf metallicity sequence, and provides improved metallicity/gravity information on a finer scale. The automated search and classification is performed using a set of cool star templates assembled from the Sloan Digital Sky Survey spectroscopic database. We show that the stars can be efficiently classified despite shortcomings in the LAMOST commissioning data, which include bright sky lines in the red. In particular we find that the absolute and relative strengths of the critical TiO and CaH molecular bands around 7000 Å are cleanly measured, which provides accurate spectral typing from late-K to mid-M and makes it possible to estimate metallicities in a w...
Kim, Taejeong
Fig. 1: Flow chart of the encoding process (wavelet coefficients; neural signal; fitted spike; noise region; spike region). Spikes are detected with the multi-resolution Teager energy operator (MTEO) method, and then each spike is fitted by a multi-
Turbomachine injection nozzle including a coolant delivery system
Zuo, Baifang (Simpsonville, SC)
2012-02-14T23:59:59.000Z
An injection nozzle for a turbomachine includes a main body having a first end portion that extends to a second end portion defining an exterior wall having an outer surface. A plurality of fluid delivery tubes extend through the main body. Each of the plurality of fluid delivery tubes includes a first fluid inlet for receiving a first fluid, a second fluid inlet for receiving a second fluid and an outlet. The injection nozzle further includes a coolant delivery system arranged within the main body. The coolant delivery system guides a coolant along at least one of a portion of the exterior wall and around the plurality of fluid delivery tubes.
approach including back-translation: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
fits, inflaton excursions can safely take sub-Planckian values. Boubekeur, Lotfi; Mena, Olga; Ramírez, Héctor. 2014-01-01. Software Engineering Approaches to Semantic Web...
[Article 1 of 7: Motivates and Includes the Consumer
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
surges; the extra cost of these premium features can be included in the electric service contract. The Smart Grid will mitigate PQ events that originate in the transmission and...
Including costs of supply chain risk in strategic sourcing decisions
Jain, Avani
2009-01-01T23:59:59.000Z
Cost evaluations do not always include the costs associated with risks when organizations make strategic sourcing decisions. This research was conducted to establish and quantify the impact of risks and risk-related costs ...
Carvill, Anna; Bushman, Kate; Ellsworth, Amy
2014-06-17T23:59:59.000Z
The EnergyFit Nevada (EFN) Better Buildings Neighborhood Program (BBNP, referred to in this document as the EFN program) currently encourages Nevada residents to make whole-house energy-efficient improvements by providing rebates, financing, and access to a network of qualified home improvement contractors. The BBNP funding, consisting of 34 Energy Efficiency Conservation Block Grants (EECBG) and seven State Energy Program (SEP) grants, was awarded for a three-year period to the State of Nevada in 2010 and used for initial program design and implementation. By the end of the first quarter of 2014, the program had achieved upgrades in 553 homes, with an average energy reduction of 32% per home. Other achievements included:
- Completed 893 residential energy audits and installed upgrades in 0.05% of all Nevada single-family homes [1]
- Achieved an overall conversion rate of 38.1% [2]
- 7,089,089 kWh of modeled energy savings [3]
- Total annual homeowner energy savings of approximately $525,752 [3]
- Efficiency upgrades completed on 1,100,484 square feet of homes [3]
- $139,992 granted in loans to homeowners for energy-efficiency upgrades
- 29,285 hours of labor and $3,864,272 worth of work conducted by Nevada auditors and contractors [4]
- 40 contractors trained in Nevada
- 37 contractors with Building Performance Institute (BPI) certification in Nevada
- 19 contractors actively participating in the EFN program in Nevada
[1] Calculated using 2012 U.S. Census data reporting 1,182,870 homes in Nevada. [2] Conversion rate through March 31, 2014, for all Nevada Retrofit Initiative (NRI)-funded projects, calculated using the EFN tracking database. [3] OptiMiser energy modeling, based on current utility rates. [4] This is the sum of $3,596,561 in retrofit invoice value and $247,711 in audit invoice value.
Hou, Zhangshuan; Makarov, Yuri V.; Samaan, Nader A.; Etingov, Pavel V.
2013-03-19T23:59:59.000Z
Given the multi-scale variability and uncertainty of wind generation and forecast errors, it is a natural choice to use a time-frequency representation (TFR) as a view of the corresponding time series represented over both time and frequency. Here we use the wavelet transform (WT) to expand the signal in terms of wavelet functions that are localized in both time and frequency. Each WT component is more stationary and has a consistent auto-correlation pattern. We combined wavelet analyses with time series forecast approaches such as ARIMA and tested the approach at three wind farms located far away from each other. The prediction capability is satisfactory: the day-ahead predictions of errors match the original error values very well, including the patterns. The observations are well located within the predictive intervals. Integrating our wavelet-ARIMA ('stochastic') model with the weather forecast model ('deterministic') will significantly improve our ability to predict wind power generation and reduce predictive uncertainty.
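The decompose-forecast-recombine idea can be sketched with a one-level Haar transform and an AR(1) extrapolation standing in for the full wavelet-ARIMA models (a toy sketch; the paper uses deeper wavelet decompositions and complete ARIMA fits):

```python
def haar_decompose(x):
    """One-level Haar transform: split x (even length) into a smooth
    (approximation) component and a detail component."""
    smooth = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return smooth, detail

def ar1_forecast(series, steps):
    """Fit a zero-mean AR(1) by least squares and extrapolate it
    (a stand-in for the ARIMA models applied per component)."""
    pairs = list(zip(series[:-1], series[1:]))
    num = sum(a * b for a, b in pairs)
    den = sum(a * a for a, _ in pairs) or 1.0
    phi = num / den
    out, last = [], series[-1]
    for _ in range(steps):
        last = phi * last
        out.append(last)
    return out

def wavelet_ar_forecast(x, steps):
    """Forecast each (more stationary) wavelet component separately,
    then invert the transform: x[2i] = s[i] + d[i], x[2i+1] = s[i] - d[i]."""
    smooth, detail = haar_decompose(x)
    fs = ar1_forecast(smooth, steps)
    fd = ar1_forecast(detail, steps)
    pred = []
    for s, d in zip(fs, fd):
        pred.extend([s + d, s - d])
    return pred[:2 * steps]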
Limited Personal Use of Government Office Equipment including Information Technology
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
2005-01-07T23:59:59.000Z
The Order establishes requirements and assigns responsibilities for employees' limited personal use of Government resources (office equipment and other resources including information technology) within DOE, including NNSA. The Order is required to provide guidance on appropriate and inappropriate uses of Government resources. This Order was certified 04/23/2009 as accurate and continues to be relevant and appropriate for use by the Department. Certified 4-23-09. No cancellation.
Hybrid powertrain system including smooth shifting automated transmission
Beaty, Kevin D.; Nellums, Richard A.
2006-10-24T23:59:59.000Z
A powertrain system is provided that includes a prime mover and a change-gear transmission having an input, at least two gear ratios, and an output. The powertrain system also includes a power shunt configured to route power applied to the transmission by one of the input and the output to the other one of the input and the output. A transmission system and a method for facilitating shifting of a transmission system are also provided.
Taylor, Peter
THE CENTRAL CONCEPTS OF INCLUSIVE FITNESS (3000-word article for the Oxford University Press ...). Individuals with alternative traits have lower fitness than "normal" individuals who exhibit the established trait. For example, the neighbours of the taller individual would have slightly less fitness and, because of limited dispersal
Coordinated joint motion control system with position error correction
Danko, George (Reno, NV)
2011-11-22T23:59:59.000Z
Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between the actual end effector trajectory and the desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance work quality and productivity.
Statistical evaluation of design-error related accidents
Ott, K.O.; Marchaterre, J.F.
1980-01-01T23:59:59.000Z
In a recently published paper (Campbell and Ott, 1979), a general methodology was proposed for the statistical evaluation of design-error related accidents. The evaluation aims at an estimate of the combined residual frequency of yet unknown types of accidents lurking in a certain technological system. Here, the original methodology is extended so as to apply to a variety of systems that evolve during the development of large-scale technologies. A special categorization of incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of nuclear power reactor technology, considering serious accidents that involve a particular design inadequacy in the accident progression.
Sayer, R.O.
2003-07-29T23:59:59.000Z
RSAP [1] is a computer code for display and manipulation of neutron cross section data and selected SAMMY output. SAMMY [2] is a multilevel R-matrix code for fitting neutron time-of-flight cross-section data using Bayes' method. This users' guide provides documentation for the recently updated RSAP code (version 6). The code has been ported to the Linux platform, and several new features have been added, including the capability to read cross section data from ASCII pointwise ENDF files as well as double-precision PLT output from SAMMY. A number of bugs have been found and corrected, and the input formats have been improved. Input items are parsed so that items may be separated by spaces or commas.
Optical pattern recognition architecture implementing the mean-square error correlation algorithm
Molley, Perry A. (Albuquerque, NM)
1991-01-01T23:59:59.000Z
An optical architecture implementing the mean-square error correlation algorithm, MSE = Σ[I − R]², for discriminating the presence of a reference image R in an input image scene I by computing the mean-square error between a time-varying reference image signal s_1(t) and a time-varying input image signal s_2(t). It includes a laser diode light source which is temporally modulated by a double-sideband suppressed-carrier source modulation signal I_1(t) having the form
I_1(t) = A_1[1 + √2 m_1 s_1(t) cos(2π f_o t)],
and the modulated light output from the laser diode source is diffracted by an acousto-optic deflector (AOD). The resultant intensity of the +1 diffracted order from the acousto-optic device is given by
I_2(t) = A_2[a_o + 2 m_2² s_2²(t) − 2√2 m_2 s_2(t) cos(2π f_o t)].
The time integration of the two signals I_1(t) and I_2(t) on the CCD detector plane produces the result R(τ) of the mean-square error, having the form
R(τ) = A_1 A_2 {[T] + [2 m_2² · ∫ s_2²(t − τ) dt] − [2 m_1 m_2 cos(2π f_o τ) · ∫ s_1(t) s_2(t − τ) dt]},
where: s_1(t) is the signal input to the diode modulation source; s_2(t) is the signal input to the AOD modulation source; A_1 is the light intensity; A_2 is the diffraction efficiency; m_1 and m_2 are constants that determine the signal-to-bias ratio; f_o is the frequency offset between the oscillator at f_c and the modulation at f_c + f_o; and a_o and a_1 are constants chosen to bias the diode source and the acousto-optic deflector into their respective linear operating regions, so that the diode source exhibits a linear intensity characteristic and the AOD exhibits a linear amplitude characteristic.
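Digitally, the same expansion of the mean-square error into two energy terms minus twice a cross-correlation can be sketched as follows (illustrative names; the patent realizes these sums optically via the modulated intensity terms above):

```python
def mse_at_shift(scene, ref, tau):
    # MSE = sum(ref^2) + sum(seg^2) - 2*sum(ref*seg): the three
    # bracketed terms of R(tau), evaluated over the overlap at shift tau.
    seg = scene[tau:tau + len(ref)]
    e_ref = sum(r * r for r in ref)
    e_seg = sum(s * s for s in seg)
    cross = sum(r * s for r, s in zip(ref, seg))
    return e_ref + e_seg - 2 * cross

def best_match(scene, ref):
    # The reference is declared "present" at the shift minimizing the MSE.
    return min(range(len(scene) - len(ref) + 1),
               key=lambda t: mse_at_shift(scene, ref, t))
```

Note that for a fixed reference energy, minimizing the MSE is dominated by maximizing the cross-correlation term, which is why the architecture can discriminate R in I with a correlator-style integration.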
Solar Energy Education. Renewable energy: a background text. [Includes glossary
Not Available
1985-01-01T23:59:59.000Z
Some of the most common forms of renewable energy are presented in this textbook for students. The topics include solar energy, wind power, hydroelectric power, biomass, ocean thermal energy, and tidal and geothermal energy. The main emphasis of the text is on the sun and the solar energy that it yields. Discussions of the sun's composition and of the relationship between the earth, sun and atmosphere are provided. Insolation, active and passive solar systems, and solar collectors are the subtopics included under solar energy. (BCS)
Methods of producing adsorption media including a metal oxide
Mann, Nicholas R; Tranter, Troy J
2014-03-04T23:59:59.000Z
Methods of producing a metal oxide are disclosed. The method comprises dissolving a metal salt in a reaction solvent to form a metal salt/reaction solvent solution. The metal salt is converted to a metal oxide and a caustic solution is added to the metal oxide/reaction solvent solution to adjust the pH of the metal oxide/reaction solvent solution to less than approximately 7.0. The metal oxide is precipitated and recovered. A method of producing adsorption media including the metal oxide is also disclosed, as is a precursor of an active component including particles of a metal oxide.
Metal vapor laser including hot electrodes and integral wick
Ault, Earl R. (Livermore, CA); Alger, Terry W. (Tracy, CA)
1995-01-01T23:59:59.000Z
A metal vapor laser, specifically one utilizing copper vapor, is disclosed herein. This laser utilizes a plasma tube assembly including a thermally insulated plasma tube containing a specific metal, e.g., copper, and a buffer gas therein. The laser also utilizes means including hot electrodes located at opposite ends of the plasma tube for electrically exciting the metal vapor and heating its interior to a sufficiently high temperature to cause the metal contained therein to vaporize and for subjecting the vapor to an electrical discharge excitation in order to lase. The laser also utilizes external wicking arrangements, that is, wicking arrangements located outside the plasma tube.
Metal vapor laser including hot electrodes and integral wick
Ault, E.R.; Alger, T.W.
1995-03-07T23:59:59.000Z
A metal vapor laser, specifically one utilizing copper vapor, is disclosed herein. This laser utilizes a plasma tube assembly including a thermally insulated plasma tube containing a specific metal, e.g., copper, and a buffer gas therein. The laser also utilizes means including hot electrodes located at opposite ends of the plasma tube for electrically exciting the metal vapor and heating its interior to a sufficiently high temperature to cause the metal contained therein to vaporize and for subjecting the vapor to an electrical discharge excitation in order to lase. The laser also utilizes external wicking arrangements, that is, wicking arrangements located outside the plasma tube. 5 figs.
Watson Library enhancements to include new service desk
2008-01-01T23:59:59.000Z
The University of Kansas Libraries is adding a new service desk to Watson Library to enhance the user experience and draw attention to new and existing resources. The desk, which...
Thin film solar cell including a spatially modulated intrinsic layer
Guha, Subhendu (Troy, MI); Yang, Chi-Chung (Troy, MI); Ovshinsky, Stanford R. (Bloomfield Hills, MI)
1989-03-28T23:59:59.000Z
One or more thin film solar cells in which the intrinsic layer of substantially amorphous semiconductor alloy material thereof includes at least a first band gap portion and a narrower band gap portion. The band gap of the intrinsic layer is spatially graded through a portion of the bulk thickness, said graded portion including a region removed from the intrinsic layer-dopant layer interfaces. The band gap of the intrinsic layer is always less than the band gap of the doped layers. The gradation of the intrinsic layer is effected such that the open circuit voltage and/or the fill factor of the one or plural solar cell structure is enhanced.
Kraan, Aafke C., E-mail: aafke.kraan@pi.infn.it [Erasmus MC Daniel den Hoed Cancer Center, Rotterdam (Netherlands); Water, Steven van de; Teguh, David N.; Al-Mamgani, Abrahim [Erasmus MC Daniel den Hoed Cancer Center, Rotterdam (Netherlands); Madden, Tom; Kooy, Hanne M. [Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States); Heijmen, Ben J.M.; Hoogeman, Mischa S. [Erasmus MC Daniel den Hoed Cancer Center, Rotterdam (Netherlands)
2013-12-01T23:59:59.000Z
Purpose: Setup, range, and anatomical uncertainties influence the dose delivered with intensity modulated proton therapy (IMPT), but clinical quantification of these errors for oropharyngeal cancer is lacking. We quantified these factors and investigated treatment fidelity, that is, robustness, as influenced by adaptive planning and by applying more beam directions. Methods and Materials: We used an in-house treatment planning system with multicriteria optimization of pencil beam energies, directions, and weights to create treatment plans for 3-, 5-, and 7-beam directions for 10 oropharyngeal cancer patients. The dose prescription was a simultaneously integrated boost scheme, prescribing 66 Gy to primary tumor and positive neck levels (clinical target volume-66 Gy; CTV-66 Gy) and 54 Gy to elective neck levels (CTV-54 Gy). Doses were recalculated in 3700 simulations of setup, range, and anatomical uncertainties. Repeat computed tomography (CT) scans were used to evaluate an adaptive planning strategy using nonrigid registration for dose accumulation. Results: For the recalculated 3-beam plans including all treatment uncertainty sources, only 69% (CTV-66 Gy) and 88% (CTV-54 Gy) of the simulations had a dose received by 98% of the target volume (D98%) >95% of the prescription dose. Doses to organs at risk (OARs) showed considerable spread around planned values. Causes for major deviations were mixed. Adaptive planning based on repeat imaging positively affected dose delivery accuracy: in the presence of the other errors, percentages of treatments with D98% >95% increased to 96% (CTV-66 Gy) and 100% (CTV-54 Gy). Plans with more beam directions were not more robust. Conclusions: For oropharyngeal cancer patients, treatment uncertainties can result in significant differences between planned and delivered IMPT doses. 
Given the mixed causes for major deviations, we advise repeat diagnostic CT scans during treatment, recalculation of the dose, and if required, adaptive planning to improve adequate IMPT dose delivery.
The FIT 2.0 Model - Fuel-cycle Integration and Tradeoffs
Steven J. Piet; Nick R. Soelberg; Layne F. Pincock; Eric L. Shaber; Gregory M Teske
2011-06-01T23:59:59.000Z
All mass streams from fuel separation and fabrication are products that must meet some set of product criteria – fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the “system losses study” team that developed it [Shropshire2009, Piet2010b] are steps by the Fuel Cycle Technology program toward an analysis that accounts for the requirements and capabilities of each fuel cycle component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. This report describes FIT 2, an update of the original FIT model [Piet2010c]. FIT is a method to analyze different fuel cycles; in particular, to determine how changes in one part of a fuel cycle (say, fuel burnup, cooling, or separation efficiencies) chemically affect other parts of the fuel cycle. FIT provides the following: a rough estimate of the physics and mass-balance feasibility of combinations of technologies (if feasibility is an issue, it provides an estimate of how performance would have to change to achieve feasibility), and an estimate of impurities in fuel and in waste as a function of separation performance, fuel fabrication, reactor, uranium source, etc.
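The kind of bookkeeping FIT performs can be illustrated with a toy mass balance (all numbers and element choices here are hypothetical, not FIT data): per-element separation efficiencies determine how feed mass splits between the recycled-fuel stream and the waste stream, and hence the impurity level in the fuel product.

```python
# Toy separation mass balance: split a feed stream between a fuel product
# and a waste stream according to per-element separation efficiencies,
# then compute the lanthanide impurity level in the fuel.
feed_kg = {"U": 940.0, "Pu": 10.0, "Ln": 1.0}     # lanthanides as impurity
to_fuel = {"U": 0.999, "Pu": 0.999, "Ln": 0.001}  # fraction routed to fuel

fuel = {el: m * to_fuel[el] for el, m in feed_kg.items()}
waste = {el: m - fuel[el] for el, m in feed_kg.items()}

# Impurity concentration in the fuel product, in parts per million by mass
ln_ppm_in_fuel = 1e6 * fuel["Ln"] / sum(fuel.values())
```

Changing a single efficiency (say, lanthanide removal) immediately propagates to both the fuel impurity level and the waste stream composition, which is the cross-coupling the FIT model tracks across a whole fuel cycle.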
account positioning errors: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Websites Summary: analysis and providing assistance to division Financial Analysts. Support Finance group in General Accounting functions including accounts payable, accounts...
Including Blind Students in Computer Science Through Access to Graphs
Young, R. Michael
Including Blind Students in Computer Science Through Access to Graphs. Suzanne Balik and Sean Mealin present the SKetching tool, GSK, which provides blind and sighted people with a means to create, examine, and share graphs (node-link diagrams) in real time. GSK proved very effective for one blind computer science student.
Bayesian hierarchical reconstruction of protein profiles including a digestion model
Paris-Sud XI, Université de
Bayesian hierarchical reconstruction of protein profiles including a digestion model. The aim is to recover the protein biomarker content in a robust way. The focus is on the digestion step; each branch corresponds to a molecular processing step such as digestion, ionisation and LC-MS separation.
Biomass Potentials from California Forest and Shrublands Including Fuel
Biomass Potentials from California Forest and Shrublands Including Fuel Reduction Potentials. -04-004, February 2005; revised October 2005. Arnold Schwarzenegger, Governor, State of California. Tiangco, CEC; Bryan M. Jenkins, University of California.
Optimal Energy Management Strategy including Battery Health through Thermal
Paris-Sud XI, Université de
Optimal Energy Management Strategy including Battery Health through Thermal Management for Hybrid... Keywords: energy management strategy, plug-in hybrid electric vehicles, Li-ion battery aging, thermal management, Pontryagin's Minimum Principle. 1. INTRODUCTION: The interest in energy management strategies (EMS) for hybrid...
Area of cooperation includes: Joint research and development on
Buyya, Rajkumar
August 2, 2006: HCL Technologies Ltd (HCL), India's leading global IT services company, has signed... Projects currently using this technology include BioGrid in Japan and the National Grid Service in the UK. Areas of cooperation include: joint research and development on Grid computing technologies...
Energy Transitions: A Systems Approach Including Marcellus Shale Gas Development
Walter, M.Todd
Energy Transitions: A Systems Approach Including Marcellus Shale Gas Development. A Report... August 2011 version. ...sources globally, some very strong short-term drivers of energy transitions reflect rising concerns over...
1 INTRODUCTION A typical flexible pavement system includes four
Zornberg, Jorge G.
1 INTRODUCTION: A typical flexible pavement system includes four distinct layers: asphalt concrete course... in order to reduce costs or to minimize capillary action under the pavement. Figure 1: Cross-section of flexible pavement system (Muench 2006). Pavement distress may occur due to either traffic or environmental...
SAFETY AND HEALTH PROGRAM Including the Chemical Hygiene Plan
Evans, Paul G.
SAFETY AND HEALTH PROGRAM Including the Chemical Hygiene Plan. Wisconsin Center for Applied... Technical Staff & Chemical Hygiene Officer, kakupcho@wisc.edu, 262-2982. Lab Facility Website: http... CHEMICAL HYGIENE PLAN. III. Work-site Analysis and Hazard Identification. 3.1 Hazardous Chemical...
HTS Conductor Design Issues Including Quench and Stability,
HTS Conductor Design Issues Including Quench and Stability, AC Losses, and Fault Currents. M. J... Objective and technical approach: the purpose of this collaborative R&D project is an investigation of HTS conductor design optimization with emphasis on stability and protection issues for YBCO wires and coils.
Free Energy Efficiency Kit includes CFL light bulbs,
Rose, Annkatrin
Free Energy Efficiency Kit. Kit includes CFL light bulbs, spray foam, low-flow shower head, and more, plus discounted energy assessments. FREE HOME ENERGY EFFICIENCY SEMINAR: New River Light & Power and... Building Science 101 Presentation: BPI Certified Building Professionals will present home energy efficiency...
DO NOT INCLUDE: flatten cardboard staples, tape & envelope windows ok
Wolfe, Patrick J.
DO NOT INCLUDE: aerosol cans, books, bottles, metal items other than cans/foil, napkins, paper towels, plastic bags, plastic films, plastic utensils... (flatten cardboard; staples, tape & envelope windows OK). PDAs, inkjet cartridges, CFL bulbs (cushioned, sealed in plastic), computers, printers, printer...
cDNA encoding a polypeptide including a hevein sequence
Raikhel, N.V.; Broekaert, W.F.; Namhai Chua; Kush, A.
1993-02-16T23:59:59.000Z
A cDNA clone (HEV1) encoding hevein was isolated via polymerase chain reaction (PCR) using mixed oligonucleotides corresponding to two regions of hevein as primers and a Hevea brasiliensis latex cDNA library as a template. HEV1 is 1,018 nucleotides long and includes an open reading frame of 204 amino acids.
Perhaps federal research grants can include infrastructure costs.
Sur, Mriganka
Perhaps federal research grants can include infrastructure costs. There are signs... to find favour in China, a country beset by similar problems. The particular structure of Indian science and... healthy start-up packages. The government could contribute to these costs. NATURE, Vol 436, 28 July 2005, p. 487.
Rawle, Tim
...to known cool-core clusters. The green line shows the best-fit template from the Rieke+09 library. E. Egami, A. Edge, M. Rex, for the Herschel Lensing Survey (HLS) and Local Cluster Substructure Survey (LoCuSS) collaborations. The sample includes known...
Contagious error sources would need time travel to prevent quantum computation
Gil Kalai; Greg Kuperberg
2015-05-07T23:59:59.000Z
We consider an error model for quantum computing that consists of "contagious quantum germs" that can infect every output qubit when at least one input qubit is infected. Once a germ actively causes error, it continues to cause error indefinitely for every qubit it infects, with arbitrary quantum entanglement and correlation. Although this error model looks much worse than quasi-independent error, we show that it reduces to quasi-independent error with the technique of quantum teleportation. The construction, which was previously described by Knill, is that every quantum circuit can be converted to a mixed circuit with bounded quantum depth. We also consider the restriction of bounded quantum depth from the point of view of quantum complexity classes.
Kaeli, David R.
A Field Analysis of System-level Effects of Soft Errors Occurring in Microprocessors. ...will generate sufficient charge to cause a soft error. In the absence of error correction schemes, the system... rates for unprotected systems [8]. Soft errors are emerging as a significant obstacle to increasing...
Kaeli, David R.
A Field Failure Analysis of Microprocessors used in Information Systems. Abstract: Soft errors due... from error logs and error traces of the microprocessors collected from systems in the field. ...focus on soft error rate (SER) estimation of microprocessors used in information systems by analyzing...
Pitch Error and Shear Web Disbond Detection on Wind Turbine Blades...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
American Institute of Aeronautics and Astronautics 1 Pitch Error and Shear Web Disbond Detection on Wind Turbine Blades for Offshore Structural Health and Prognostics Management...
Accounting for model error due to unresolved scales within ensemble Kalman filtering
Lewis Mitchell; Alberto Carrassi
2014-09-02T23:59:59.000Z
We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and the extended Kalman filter. The model error statistic required in the analysis update is estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are described: a time-constant treatment, in which the same model error statistical description is used at every analysis, and a time-varying treatment, in which the assumed model error statistics are randomly sampled at each analysis step. We compare both methods with the standard method of dealing with model error through inflation and localization, and illustrate our results with numerical simulations on a low-order nonlinear system exhibiting chaotic dynamics. The results show that the filter skill is significantly improved through the proposed model error treatments, and that both methods require far less parameter tuning than the standard approach. Furthermore, the proposed approach is simple to implement within a pre-existing ensemble based scheme. The general implications for the use of the proposed approach in the framework of square-root filters such as the ETKF are also discussed.
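The time-constant treatment described in this abstract can be sketched in a few lines. This is an illustrative perturbed-observation ensemble Kalman update rather than the authors' square-root ETKF, and all dimensions and statistics are made up: a model error covariance Q is estimated from historical increments, and draws from N(0, Q) are added to the forecast ensemble before the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 50  # state dimension, ensemble size (hypothetical)

# Forecast ensemble (columns are members).
Xf = rng.normal(0.0, 1.0, (n, m))

# Historical reanalysis increments stand in for the unresolved-scale model
# error; their sample covariance gives the time-constant estimate Q.
increments = rng.normal(0.0, 0.3, (n, 200))
Q = np.cov(increments)

# Time-constant treatment: perturb forecast members with draws from N(0, Q).
Xf_err = Xf + rng.multivariate_normal(np.zeros(n), Q, m).T

# Observe the first state component.
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.1]])
y = np.array([0.5])

def enkf_update(X, H, R, y, rng):
    """Perturbed-observation ensemble Kalman analysis step."""
    m = X.shape[1]
    Pf = np.cov(X)                                   # forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    y_pert = y[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), (1, m))
    return X + K @ (y_pert - H @ X)

Xa = enkf_update(Xf_err, H, R, y, rng)
```

The time-varying variant of the paper would resample the assumed model error statistics at each analysis step instead of reusing the same Q throughout.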
V-172: ISC BIND RUNTIME_CHECK Error Lets Remote Users Deny Service...
Broader source: Energy.gov (indexed) [DOE]
...the target resolver to crash. IMPACT: Triggering this defect will cause the affected server to exit with an error, denying service to recursive DNS clients that use that...
Efficient Small Area Estimation in the Presence of Measurement Error in Covariates
Singh, Trijya
2012-10-19T23:59:59.000Z
1. ...for the four estimators yi, eYiS, bYiME, bYiSIMEX when the number of small areas is 100, measurement error variance Ci = 3 and σ²v = 4; k is the percentage of areas having auxiliary information measured with error. 2. Absolute value... 3. Jackknife estimates of the mean squared error of the Lohr-Ybarra estimator bYiME and the SIMEX estimator bYiSIMEX when the number of small areas is 100, measurement error variance Ci = 2 and σ²v = 4; k is the percentage of areas having...
Choose and choose again: appearance-reality errors, pragmatics and logical ability
Deák, Gedeon O; Enright, Brian
2006-01-01T23:59:59.000Z
Development, 62, 753–766. Speer, J.R. (1984). Two practical... older still make errors (e.g. Speer, 1984), some preschool...
Choose and choose again: appearance-reality errors, pragmatics and logical ability.
Deák, Gedeon O; Enright, Brian
2006-01-01T23:59:59.000Z
Development, 62, 753-766. Speer, J. R. (1984). Two practical... older still make errors (e.g., Speer, 1984), some preschool...
The Importance of Run-time Error Detection Glenn R. Luecke 1
Luecke, Glenn R.
Iowa State University's High Performance Computing Group, Ames, Iowa 50011, USA. ...Iowa State University's High Performance Computing Group for evaluating run-time error detection capabilities...
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W. (Albuquerque, NM); Heard, Freddie E. (Albuquerque, NM); Cordaro, J. Thomas (Albuquerque, NM)
2008-06-24T23:59:59.000Z
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
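A toy version of the comparison-across-slow-time idea (illustrative only, not the patented procedure; the profile shape and the shift are made up): estimate a residual range shift by cross-correlating two range profiles, then remove it with a linear phase ramp in the range-frequency domain before compression.

```python
import numpy as np

n = 256
x = np.arange(n)
profile = np.exp(-0.5 * ((x - 100) / 3.0) ** 2)  # point-like target response

true_shift = 7                     # assumed uncorrected motion error, in range bins
shifted = np.roll(profile, true_shift)

# Estimate the shift by circular cross-correlation of the two range profiles.
corr = np.fft.ifft(np.fft.fft(shifted) * np.conj(np.fft.fft(profile)))
est_shift = int(np.argmax(np.abs(corr)))
if est_shift > n // 2:
    est_shift -= n                 # unwrap to a signed shift

# Remove the shift with a linear phase ramp (Fourier shift theorem).
f = np.fft.fftfreq(n, d=1.0 / n)   # integer frequency bins
corrected = np.fft.ifft(np.fft.fft(shifted) * np.exp(2j * np.pi * f * est_shift / n))
```

In the patent's setting the estimate comes from comparing range profiles across the slow-time dimension at a coarsened range resolution; the phase-ramp correction above is the standard frequency-domain way to apply the measured offset to uncompressed data.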
Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON; Vassilevski, Yuri [Los Alamos National Laboratory
2009-01-01T23:59:59.000Z
We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^-1 and the gradient of the error is proportional to N_h^-1/2, which are the optimal asymptotics. The methodology is verified with numerical experiments.
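The quoted asymptotics can be checked in a simplified 1D analogue (illustrative; the abstract's result is for 2D triangular meshes, where N_h ∝ h⁻² and the error scales as N_h⁻¹): piecewise-linear interpolation of a smooth function on N uniform intervals has maximum error O(h²) = O(N⁻²), so doubling N should divide the error by about 4.

```python
import numpy as np

def interp_error(N):
    """Max error of piecewise-linear interpolation of sin on [0, pi] with N intervals."""
    xs = np.linspace(0.0, np.pi, N + 1)
    fine = np.linspace(0.0, np.pi, 20 * N + 1)   # fine grid includes midpoints
    approx = np.interp(fine, xs, np.sin(xs))
    return np.max(np.abs(approx - np.sin(fine)))

e1, e2 = interp_error(50), interp_error(100)
ratio = e1 / e2   # expected to be close to 4 for the O(h^2) rate
```

The same refinement experiment on a 2D triangulation would show the N_h⁻¹ and N_h^(-1/2) rates the abstract cites for the error and its gradient.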
West, Randy
2011-01-01T23:59:59.000Z
2.1 Noisy Channel Models in SMT... ESL errors using phrasal SMT techniques. In Proceedings of... et al. (2006) use phrasal SMT techniques to identify and...
A probabilistic formulation of evolutionary synthesis models: implications for SED fittings
M. Cervino; V. Luridiana
2007-02-15T23:59:59.000Z
Evolutionary synthesis models (ESM) have been extensively used to obtain the star formation history in galaxies by means of SED fitting. Implicit in this use of ESM is that (a) for given evolutionary parameters, the shape of the SED is fixed whatever the size of the observed cluster, and (b) all regions of the observed SED have the same weight in the fit. However, Nature does not follow these two assumptions, as is implied by the existence of Surface Brightness Fluctuations in galaxies and as can be shown by simple logical arguments.
Light curve of a source orbiting around a black hole: A fitting-formula
V. Karas
1996-05-15T23:59:59.000Z
A simple, analytical fitting-formula for the photometric light curve of a source of light orbiting around a black hole is presented. The formula is applicable to sources on a circular orbit with radius smaller than 45 gravitational radii from the black hole. This range of radii requires gravitational focusing of light rays and the Doppler effect to be taken into account with care. The fitting-formula is therefore useful for modelling the X-ray variability of the inner regions of active galactic nuclei.