Plasma dynamics and a significant error of macroscopic averaging
Marek A. Szalek
2005-05-22
The methods of macroscopic averaging used to derive the macroscopic Maxwell equations from electron theory are methodologically incorrect and lead in some cases to a substantial error. For instance, these methods do not take into account the existence of a macroscopic electromagnetic field EB, HB generated by carriers of electric charge moving in a thin layer adjacent to the boundary of the physical region containing these carriers. If this boundary is impenetrable to charged particles, then in its immediate vicinity all carriers are accelerated towards the inside of the region. The existence of this privileged direction of acceleration results in the generation of the macroscopic field EB, HB. The contributions to this field from individual accelerated particles are described with sufficient accuracy by the Liénard-Wiechert formulas. In some cases the intensity of the field EB, HB is significant not only for deuteron plasma prepared for a controlled thermonuclear fusion reaction but also for electron plasma in conductors at room temperature. The corrected procedures of macroscopic averaging will induce some changes in the present form of the plasma dynamics equations. The modified equations will help to design improved systems of plasma confinement.
Even-Parity S_N Adjoint Method Including SP_N Model Error and Iterative Efficiency
Zhang, Yunhuang
2014-08-10
In this dissertation, we analyze an adjoint-based approach for assessing the model error of the SP_N equations (low-fidelity model) by comparing them against the S_N equations (high-fidelity model). Three model error estimation methods, namely, direct...
Julien M. E. Fraïsse; Daniel Braun
2015-04-13
We investigate in detail a recently introduced "coherent averaging scheme" in terms of its usefulness for achieving Heisenberg limited sensitivity in the measurement of different parameters. In the scheme, $N$ quantum probes in a product state interact with a quantum bus. Instead of measuring the probes directly and then averaging as in classical averaging, one measures the quantum bus or the entire system and tries to estimate the parameters from these measurement results. Combining analytical results from perturbation theory and an exactly solvable dephasing model with numerical simulations, we draw a detailed picture of the scaling of the best achievable sensitivity with $N$, the dependence on the initial state, the interaction strength, the part of the system measured, and the parameter under investigation.
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
Spacetime Averaged Null Energy Condition
Douglas Urban; Ken D. Olum
2010-06-13
The averaged null energy condition has known violations for quantum fields in curved space, even if one considers only achronal geodesics. Many such examples involve rapid variation in the stress-energy tensor in the vicinity of the geodesic under consideration, giving rise to the possibility that averaging in additional dimensions would yield a principle universally obeyed by quantum fields. However, after discussing various procedures for additional averaging, including integrating over all dimensions of the manifold, we give a class of examples that violate any such averaged condition.
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.
Simonen, Fredric A.; Gosselin, Stephen R.; Doctor, Steven R.
2013-04-22
This document describes a new method to determine whether the flaws in a particular reactor pressure vessel are consistent with the assumptions regarding the number and sizes of flaws used in the analyses that formed the technical justification basis for the new voluntary alternative Pressurized Thermal Shock (PTS) rule (Draft 10 CFR 50.61a). The new methodology addresses concerns regarding the prior methodology: ASME Code Section XI examinations do not detect all fabrication flaws, they have higher detection performance for some flaw types than for others, and flaw-sizing errors are always present (e.g., significant oversizing of small flaws and systematic undersizing of larger flaws). The new methodology allows direct comparison of ASME Code Section XI examination results with the values in Tables 2 and 3 of the draft PTS rule in order to determine whether the number and sizes of flaws detected by an ASME Code Section XI examination are consistent with those assumed in the probabilistic fracture mechanics calculations performed in support of the development of 10 CFR 50.61a.
H. Essen
2004-01-28
This paper addresses the problem of the separation of rotational and internal motion. It introduces the concept of average angular velocity as the moment-of-inertia-weighted average of particle angular velocities. It extends and elucidates the concept of Jellinek and Li (1989) of separation of the energy of overall rotation in an arbitrary (non-linear) $N$-particle system. It generalizes the so-called König theorem on the two parts of the kinetic energy (center of mass plus internal) to three parts: center of mass, rotational, plus the remaining internal energy relative to an optimally translating and rotating frame.
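For the planar case, the moment-of-inertia-weighted average the abstract describes reduces to total angular momentum over total moment of inertia. The following sketch illustrates this (the function name and the 2D restriction are ours, not the paper's):

```python
import numpy as np

def average_angular_velocity(m, r, v):
    """Moment-of-inertia-weighted average of particle angular
    velocities about the origin, for planar (2D) motion.

    m : (N,) masses; r, v : (N, 2) positions and velocities.
    Each particle's angular velocity is w_i = (r_i x v_i) / |r_i|^2
    with weight I_i = m_i |r_i|^2, so the weighted average reduces
    to total angular momentum over total moment of inertia.
    """
    Lz = np.sum(m * (r[:, 0] * v[:, 1] - r[:, 1] * v[:, 0]))
    I = np.sum(m * np.sum(r**2, axis=1))
    return Lz / I
```

For a rigidly rotating configuration every particle shares the same angular velocity, so the weighted average recovers it exactly.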
Photometric Redshifts and Photometry Errors
D. Wittman; P. Riechers; V. E. Margoniner
2007-09-21
We examine the impact of non-Gaussian photometry errors on photometric redshift performance. We find that they greatly increase the scatter, but this can be mitigated to some extent by incorporating the correct noise model into the photometric redshift estimation process. However, the remaining scatter is still equivalent to that of a much shallower survey with Gaussian photometry errors. We also estimate the impact of non-Gaussian errors on the spectroscopic sample size required to verify the photometric redshift rms scatter to a given precision. Even with Gaussian photometry errors, photometric redshift errors are sufficiently non-Gaussian to require an order of magnitude larger sample than simple Gaussian statistics would indicate. The requirements increase from this baseline if non-Gaussian photometry errors are included. Again, the impact can be mitigated by incorporating the correct noise model, but only to the equivalent of a survey with much larger Gaussian photometry errors. However, these requirements may well be overestimates because they are based on a need to know the rms, which is particularly sensitive to tails. Other parametrizations of the distribution may require smaller samples.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
Neutralino relic density including coannihilations
Paolo Gondolo; Joakim Edsjo
1997-11-25
We give an overview of our precise calculation of the relic density of the lightest neutralino, in which we included relativistic Boltzmann averaging, subthreshold and resonant annihilations, and coannihilation processes with charginos and neutralinos.
Average SHD Results - Child - Sample Size 500
Brown, Laura E.
[Figure residue: Structural Hamming Distance (SHD) plots comparing MMHC, OR1 (k=5), GES, and TPDA on the Child, Child3, and Hailfinder10 networks at sample size 500; error bars = +/- std. dev.]
Monte Carlo errors with less errors
Ulli Wolff
2006-11-29
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable for benchmarking the efficiency of simulation algorithms with regard to specific observables of interest. A Matlab code that implements the method is offered for download. It can also combine independent runs (replica), allowing one to judge their consistency.
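A minimal numpy sketch of the central quantity, the integrated autocorrelation time summed up to a fixed window (Wolff's method chooses the window automatically and propagates errors through arbitrary derived functions; the function names here are illustrative):

```python
import numpy as np

def tau_int(x, w):
    """Integrated autocorrelation time of the series x, summing the
    normalized autocorrelation function up to a fixed window w
    (a simplification of Wolff's automatic windowing)."""
    x = np.asarray(x, float)
    n = len(x)
    d = x - x.mean()
    c0 = np.mean(d * d)
    tau = 0.5  # the t = 0 term contributes 1/2
    for t in range(1, w + 1):
        tau += np.mean(d[:n - t] * d[t:]) / c0
    return tau

def mc_error(x, w):
    """Error of the mean: the naive independent-sample estimate
    inflated by the factor 2 * tau_int."""
    x = np.asarray(x, float)
    return np.sqrt(2.0 * tau_int(x, w) * x.var() / len(x))
```

For independent data tau_int is close to 1/2 and mc_error reduces to the familiar sigma/sqrt(N); correlated data give a larger tau_int and hence a larger, more honest error bar.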
Averaging Hypotheses in Newtonian Cosmology
T. Buchert
1995-12-20
Average properties of general inhomogeneous cosmological models are discussed in the Newtonian framework. It is shown under which circumstances the average flow reduces to a member of the standard Friedmann-Lemaître cosmologies. Possible choices of global boundary conditions of inhomogeneous cosmologies as well as consequences for the interpretation of cosmological parameters are put into perspective.
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
[Slide residue: quantum error correction basics - reversible (unitary) gates, ancillary qubits, controlled gates (cX, cZ), measurement; decoding uses the ancillary bits to determine which error occurred, e.g. a syndrome bit set to 0 if the first two data bits are equal and to 1 if not.]
The 2009 World Average of $\\alpha_s$
Siegfried Bethke
2009-08-15
Measurements of $\\alpha_s$, the coupling strength of the Strong Interaction between quarks and gluons, are summarised and an updated value of the world average of $\\alpha_s (M_Z)$ is derived. Building on previous reviews, special emphasis is laid on the most recent determinations of $\\alpha_s$. These are obtained from $\\tau$-decays, from global fits of electroweak precision data and from measurements of the proton structure function $F_2$, which are based on perturbative QCD calculations up to $O(\\alpha_s^4)$; from hadronic event shapes and jet production in $e^+e^-$ annihilation, based on $O(\\alpha_s^3)$ QCD; from jet production in deep inelastic scattering and from $\\Upsilon$ decays, based on $O(\\alpha_s^2)$ QCD; and from heavy quarkonia based on unquenched QCD lattice calculations. Applying pragmatic methods to deal with possibly underestimated errors and/or unknown correlations, the world average value of $\\alpha_s (M_Z)$ results in $\\alpha_s (M_Z) = 0.1184 \\pm 0.0007$. The measured values of $\\alpha_s (Q)$, covering energy scales from $Q \\equiv m_\\tau = 1.78$ GeV to 209 GeV, exactly follow the energy dependence predicted by QCD and therefore significantly test the concept of Asymptotic Freedom.
Thermodynamics of error correction
Pablo Sartori; Simone Pigolotti
2015-04-24
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and dissipated work of the process. Its derivation is based on the second law of thermodynamics, hence its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Average SHD Results - Child - Sample Size 500
Brown, Laura E.
[Figure residue: Structural Hamming Distance (SHD) plots comparing GS, PC, TPDA, GES, MMHC, OR1/OR2 (k = 5, 10, 20), and SC on the Child and Child3 networks at sample size 500; error bars = +/- std. dev.]
Quantum Averages of Weak Values
Yakir Aharonov; Alonso Botero
2005-08-23
We re-examine the status of the weak value of a quantum mechanical observable as an objective physical concept, addressing its physical interpretation and general domain of applicability. We show that the weak value can be regarded as a \\emph{definite} mechanical effect on a measuring probe specifically designed to minimize the back-reaction on the measured system. We then present a new framework for general measurement conditions (where the back-reaction on the system may not be negligible) in which the measurement outcomes can still be interpreted as \\emph{quantum averages of weak values}. We show that in the classical limit, there is a direct correspondence between quantum averages of weak values and posterior expectation values of classical dynamical properties according to the classical inference framework.
Aly Ahmed, Salah Abdelhamid Awad
2008-10-10
Quantum Error Control Codes. Dissertation by Salah Abdelhamid Awad Aly Ahmed, B.Sc. (Mansoura), submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Doctor of Philosophy, May 2008. Major subject: Computer Science. Committee members: Mahmoud M. El-Halwagi, Anxiao (Andrew) Jiang, Rabi N. Mahapatra; head of department: Valerie Taylor. ...
Average gluon and quark jet multiplicities
A. V. Kotikov
2014-11-30
We present the results of [1,2] for computing the QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new results became possible due to recent progress in timelike small-x resummation obtained in the MSbar factorization scheme. They depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets demonstrates by its goodness how our results solve a longstanding problem of QCD. Including all the available theoretical input within our approach, alphas(Mz) = 0.1199 +- 0.0026 has been obtained in the MSbar scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln x terms through the NNLL level and of ln Q2 terms by the renormalization group. This result is in excellent agreement with the present world average.
Average-Atom Thomson Scattering
Johnson, Walter R.
[Slide residue: W. R. Johnson (Notre Dame), J. Nilsen and K. T. Cheng (LLNL). The average-atom model for the Thomson scattering cross section divides the plasma into Wigner-Seitz cells, each containing a nucleus and Z electrons; a self-consistent Kohn-Sham potential V(r) = V_KS(n(r), r) determines the electron density n(r) = n_b(r) + n_c(r), with bound-state occupations given by Fermi-Dirac factors 2(2l+1)/(1 + exp[(eps_nl - mu)/k_B T]).]
Achronal averaged null energy condition
Graham, Noah; Olum, Ken D. [Department of Physics, Middlebury College, Middlebury, Vermont 05753 (United States) and Center for Theoretical Physics, Laboratory for Nuclear Science, and Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts 02155 (United States)
2007-09-15
The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.
Error Dynamics: The Dynamic Emergence of Error Avoidance and
Bickhard, Mark H.
[Garbled abstract: standard notions of error are argued to be limited, resting on untenable models of learning about error and of handling error knowledge; the central theme is a progressive elaboration of the kinds of dynamics that manage error avoidance.]
Shared dosimetry error in epidemiological dose-response analyses
Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo
2015-03-23
Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. We derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model, which allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were the true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e., the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when errors in dose estimates are ignored, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
A New World Average Value for the Neutron Lifetime
A. P. Serebrov; A. K. Fomin
2010-05-27
The analysis of the data on measurements of the neutron lifetime is presented. A new, most accurate result of the measurement of the neutron lifetime [Phys. Lett. B 605 (2005) 72], 878.5 +/- 0.8 s, differs from the world average value [Phys. Lett. B 667 (2008) 1], 885.7 +/- 0.8 s, by 6.5 standard deviations. In this connection, an analysis and Monte Carlo simulation of the experiments [Phys. Lett. B 483 (2000) 15] and [Phys. Rev. Lett. 63 (1989) 593] are carried out. Systematic errors of about -6 s are found in each of these experiments. A summary table for the neutron lifetime measurements after corrections and additions is given. A new world average value for the neutron lifetime, 879.9 +/- 0.9 s, is presented.
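The mechanics behind a "world average" of this kind is an inverse-variance weighted mean of the individual measurements. A sketch (illustrative inputs below, not the paper's full data table; function names are ours):

```python
import numpy as np

def weighted_average(values, errors):
    """Inverse-variance weighted mean and its standard error,
    the standard recipe for combining independent measurements."""
    w = 1.0 / np.asarray(errors, float) ** 2
    return np.sum(w * values) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

def discrepancy_sigma(a, ea, b, eb):
    """Difference between two measurements in units of the
    quadrature-combined standard deviation."""
    return abs(a - b) / np.hypot(ea, eb)
```

With the two values quoted in the abstract, `discrepancy_sigma(878.5, 0.8, 885.7, 0.8)` gives about 6.4 by this simple quadrature combination, of the order of the 6.5 standard deviations the paper quotes from its own error treatment.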
Oliver, Todd A., 1980-
2008-01-01
This thesis presents high-order, discontinuous Galerkin (DG) discretizations of the Reynolds-Averaged Navier-Stokes (RANS) equations and an output-based error estimation and mesh adaptation algorithm for these discretizations. ...
Neutron multiplication error in TRU waste measurements
Veilleux, John [Los Alamos National Laboratory; Stanfield, Sean B [CCP; Wachter, Joe [CCP; Ceo, Bob [CCP
2009-01-01
Total Measurement Uncertainty (TMU) in neutron assays of transuranic (TRU) waste comprises several components, including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons-grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors, measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers.
This report will attempt to better define the error term due to neutron multiplication and arrive at values that are more realistic and accurate. To do so, measurements of standards and waste drums were performed with High Efficiency Neutron Counters (HENC) located at Los Alamos National Laboratory (LANL). The data were analyzed for multiplication effects and new estimates of the multiplication error were computed. A concluding section will present alternatives for reducing the number of rejections of TRU waste containers due to neutron multiplication error.
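The acceptance criterion stated above (measured FGE plus twice the TMU must stay below the container limit) is a one-line check; a sketch, with a function name and interface of our choosing:

```python
def tru_waste_accepted(fge, tmu, container="drum"):
    """Acceptance test described in the report: measured Fissile
    Gram Equivalent plus twice the TMU error must be less than
    200 FGE for 55-gal drums and 325 FGE for boxed waste.
    (Function name and interface are illustrative.)"""
    limit = {"drum": 200.0, "box": 325.0}[container]
    return fge + 2.0 * tmu < limit
```

This makes the report's point concrete: shrinking an overestimated multiplication error term directly shrinks `tmu` and so reduces spurious rejections near the limit.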
Optimal error estimates for corrected trapezoidal rules
Talvila, Erik
2012-01-01
Corrected trapezoidal rules are proved for $\\int_a^b f(x)\\,dx$ under the assumption that $f''\\in L^p([a,b])$ for some $1\\leq p\\leq\\infty$. Such quadrature rules involve the trapezoidal rule modified by the addition of a term $k[f'(a)-f'(b)]$. The coefficient $k$ in the quadrature formula is found that minimizes the error estimates. It is shown that when $f'$ is merely assumed to be continuous then the optimal rule is the trapezoidal rule itself. In this case error estimates are in terms of the Alexiewicz norm. This includes the case when $f''$ is integrable in the Henstock--Kurzweil sense or as a distribution. All error estimates are shown to be sharp for the given assumptions on $f''$. It is shown how to make these formulas exact for all cubic polynomials $f$. Composite formulas are computed for uniform partitions.
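A one-panel sketch of such a corrected rule, taking $k = (b-a)^2/12$, the Euler-Maclaurin choice that makes the rule exact for cubics (the paper's optimal $k$ depends on the assumptions on $f''$, so this is an illustration, not the paper's general result):

```python
def corrected_trapezoid(f, df, a, b):
    """One-panel corrected trapezoidal rule:
        (b-a)/2 * (f(a)+f(b)) + k * (df(a) - df(b)),
    with k = (b-a)^2 / 12; exact for all cubic polynomials."""
    h = b - a
    return h * (f(a) + f(b)) / 2.0 + h**2 / 12.0 * (df(a) - df(b))
```

For f(x) = x^3 on [0, 1] the plain trapezoidal value 0.5 is corrected to the exact integral 1/4.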
Spectral averaging techniques for Jacobi matrices
Rafael del Rio; Carmen Martinez; Hermann Schulz-Baldes
2008-02-20
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
Error-Tolerant Multi-Modal Sensor Fusion
Koushanfar, Farinaz; Slijepcevic, Sasha; Potkonjak, Miodrag
[Garbled abstract: addresses multi-modal sensor fusion, where data from sensors of different modalities are combined; a key goal for such applications is to ensure that all of the techniques and tools involved are error-tolerant.]
Prevosto, L.; Mancinelli, B.; Kelly, H. (Instituto de Física del Plasma, Departamento de Física, Facultad de Ciencias Exactas y Naturales, Ciudad Universitaria Pab. I, 1428 Buenos Aires)
2013-12-15
This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron-retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced into time-averaged probe data by small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, defining two quite different averaged electron temperature values. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average includes not only a time average over the fluctuations but also a spatial average along the probe collecting length. Fitting the high-current region of the characteristic using this electron temperature value, together with the corrections given by the fluctuation analysis, showed a significant departure from local thermal equilibrium in the arc core.
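The standard exponential analysis mentioned in the abstract amounts to reading T_e from the slope of ln I_e versus probe voltage in the electron-retarding region. A sketch of that textbook step (not the paper's fluctuation-correction procedure; the function name is ours):

```python
import numpy as np

def electron_temperature_eV(V, I_e):
    """Estimate T_e from the electron-retarding part of a Langmuir
    probe characteristic, assuming the collisionless exponential
    I_e = I_0 * exp(V / T_e), with V in volts and T_e in eV:
    the slope of ln(I_e) versus V is 1 / T_e."""
    slope, _ = np.polyfit(V, np.log(I_e), 1)
    return 1.0 / slope
```
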
Pump apparatus including deconsolidator
Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew
2014-10-07
A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.
A complete Randomized Benchmarking Protocol accounting for Leakage Errors
T. Chasseur; F. K. Wilhelm
2015-07-09
Randomized Benchmarking makes it possible to characterize, efficiently and scalably, the average error of a unitary 2-design such as the Clifford group $\mathcal{C}$ on a physical candidate for quantum computation, as long as there are no non-computational leakage levels in the system. We investigate the effect of leakage errors on Randomized Benchmarking induced by an additional level per physical qubit, and provide a modified protocol that allows reliable estimates of the error per gate to be derived in their presence. We assess the variance of the sequence fidelity, which determines the number of random sequences needed for valid fidelity estimation. Our protocol allows for gate-dependent error channels without being restricted to perturbations. We show that our protocol is compatible with Interleaved Randomized Benchmarking and extend it to the benchmarking of arbitrary gates. This setting is relevant for superconducting transmon qubits, among other systems.
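For context, the standard leakage-free Randomized Benchmarking analysis that the modified protocol builds on fits the average sequence fidelity to a single exponential decay. The sketch below uses synthetic data and assumes the ideal fit parameters A = B = 1/2, which keeps the fit linear in log space; it is not the authors' leakage-aware protocol:

```python
import numpy as np

# Standard RB analysis sketch: the average sequence fidelity decays as
# F(m) = A * p**m + B with sequence length m, and the average error per
# gate is r = (1 - p) * (d - 1) / d with d = 2 for a single qubit.
# A and B are assumed known here (ideal values 1/2).
def error_per_gate(lengths, fidelities, A=0.5, B=0.5):
    slope = np.polyfit(lengths, np.log((fidelities - B) / A), 1)[0]
    p = np.exp(slope)
    return (1.0 - p) / 2.0

m = np.arange(1, 100)
F = 0.5 * 0.99 ** m + 0.5            # synthetic decay with p = 0.99
print(round(error_per_gate(m, F), 4))  # -> 0.005
```

In practice A and B are fitted rather than assumed, and the paper's modified protocol adds terms accounting for the leakage level.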
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. 
The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical engineering applications.
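The two averaging operations at the heart of DMA, a running time average accumulated during the fine-scale simulation followed by a volume average onto a coarser grid, can be illustrated in one dimension. This toy sketch assumes a uniform grid and a coarsening factor of 4, and stands in for the full 3-D procedure (the coupling correlations are not computed here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D illustration of the two DMA averaging operations (assumed
# uniform grid, coarsening factor 4).
def running_time_average(avg, new_sample, n):
    """Update a running time average with the n-th sample (n >= 1)."""
    return avg + (new_sample - avg) / n

def volume_average(field, factor):
    """Average consecutive blocks of `factor` cells onto a coarser grid."""
    return field.reshape(-1, factor).mean(axis=1)

u_avg = np.zeros(16)
x = np.linspace(0, 2 * np.pi, 16)
for n in range(1, 101):                                  # 100 fine "time steps"
    sample = np.sin(x) + 0.1 * rng.standard_normal(16)   # stand-in flow field
    u_avg = running_time_average(u_avg, sample, n)
coarse = volume_average(u_avg, 4)
print(coarse.shape)  # -> (4,)
```

In the actual method, the correlations generated by each averaging operation would be computed from the fine-scale field and carried to the coarser mesh as source terms.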
MESOSCALE AVERAGING OF NUCLEATION AND GROWTH MODELS
Ferguson, Thomas S.
Martin Burger, Vincenzo Capasso, and Livio ... Kolmogorov relations for the degree of crystallinity. By relating the computation of expected values to mesoscale averaging, we obtain a suitable description of the process at the mesoscale. We show how the variance ...
Optimal Average Cost Manufacturing Flow Controllers
Veatch, Michael H.
Optimal Average Cost Manufacturing Flow Controllers: Convexity and Differentiability (Michael H. Veatch). Convexity and differentiability of the differential cost function are investigated. It is proven that under an optimal policy the differential cost is C1 on attractive control switching boundaries. Index terms: average cost ...
Averages in vector spaces over finite fields
Wright J.; Carbery A.; Stones B.
2008-01-01
We study the analogues of the problems of averages and maximal averages over a surface in R-n when the euclidean structure is replaced by that of a vector space over a finite field, and obtain optimal results in a number ...
DATA COMPRESSION USING WAVELETS: ERROR ...
algorithms that introduce differences between the original and compressed data in ... to choose an error metric that parallels the human visual system, so that image .... signal data along a communications channel, one sends integer codes that ...
The Challenge of Quantum Error Correction.
Fominov, Yakov
... in the design of physical bits. Hardware requirements: 1. Many (10^3 to 10^4 / R) individual bits. Error types: (a) classical bit-flip error; (b) phase error, in which the phase $\exp(-i \int E(t)\,dt)$ fluctuates, so hardware error correction is needed. Classical error correction is performed by software and hardware; hardware error correction: Ising ...
Time-averaged quantum dynamics and the validity of the effective...
Office of Scientific and Technical Information (OSTI)
... develop a technique for finding the dynamical evolution in time of an averaged density matrix. The result is an equation of evolution that includes an effective Hamiltonian, as...
STAFF FORECAST: AVERAGE RETAIL ELECTRICITY PRICES
CALIFORNIA ENERGY COMMISSION STAFF FORECAST: AVERAGE RETAIL ELECTRICITY PRICES 2005 TO 2018. Mignon Marks, Principal Author and Project Manager; David Ashuckian, Manager, Electricity Analysis Office; Sylvia Bender, Acting Deputy Director, Electricity Supply Division; B.B. Blevins, Executive Director.
Distributed Averaging Via Lifted Markov Chains
Jung, Kyomin
Motivated by applications of distributed linear estimation, distributed control, and distributed optimization, we consider the question of designing linear iterative algorithms for computing the average of numbers in a ...
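A minimal example of such a linear iterative algorithm (plain nearest-neighbor averaging on a ring, not the lifted-Markov-chain construction the paper develops) shows the basic mechanism:

```python
import numpy as np

# Linear iterative averaging on a ring of 5 nodes: each node repeatedly
# replaces its value with an equal-weight combination of its own value and
# its two neighbors' values. With a doubly stochastic weight matrix, all
# node values converge to the average of the initial values.
n = 5
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, (i + 1) % n):
        W[i, j] = 1.0 / 3.0          # self + two ring neighbors

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
for _ in range(200):
    x = W @ x
print(np.round(x, 6))  # every entry -> 3.0, the average
```

The point of the lifted-chain construction is to accelerate exactly this kind of iteration; the plain ring converges only geometrically, at a rate set by the second eigenvalue of W.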
Thermal ghost imaging with averaged speckle patterns
Shapiro, Jeffrey H.
We present theoretical and experimental results showing that a thermal ghost imaging system can produce images of high quality even when it uses detectors so slow that they respond only to intensity-averaged (that is, ...
Selling Geothermal Systems The "Average" Contractor
Selling Geothermal Systems #12;The "Average" Contractor · History of sales procedures · Manufacturer Driven Procedures · What makes geothermal technology any harder to sell? #12;"It's difficult to sell a geothermal system." · It should
Spacetime Average Density (SAD) cosmological measures
Page, Don N.
2014-11-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
Average transmission probability of a random stack
Yin Lu; Christian Miniatura; Berthold-Georg Englert
2009-07-31
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower bounds. The upper bound, when used as an approximation for the transmission probability, is unreasonably good and we conjecture that it is asymptotically exact.
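The contrast between averaging the transmission probability itself and averaging its logarithm can be seen in a small Monte Carlo sketch. The slab parameters below are hypothetical, and a standard 2x2 transfer-matrix product stands in for the paper's recurrence relation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Transmission through a stack of identical lossless slabs separated by
# gaps of random width. A random gap width contributes a random propagation
# phase; we compare <T> with exp(<ln T>) (the log-averaged estimate).
r, t = 0.5, np.sqrt(1 - 0.5 ** 2)     # hypothetical slab amplitudes, r^2 + t^2 = 1
M_slab = np.array([[1 / t, r / t], [r / t, 1 / t]], dtype=complex)

def stack_transmission(n_slabs):
    M = np.eye(2, dtype=complex)
    for _ in range(n_slabs):
        phi = rng.uniform(0, 2 * np.pi)                      # random gap phase
        M = M @ np.diag([np.exp(1j * phi), np.exp(-1j * phi)]) @ M_slab
    return 1 / abs(M[0, 0]) ** 2      # transmission probability of the stack

samples = [stack_transmission(10) for _ in range(2000)]
mean_T = np.mean(samples)
log_avg_T = np.exp(np.mean(np.log(samples)))
print(mean_T > log_avg_T)  # -> True
```

That the direct average exceeds the log-averaged value is guaranteed by Jensen's inequality, which is why the two treatments discussed in the abstract genuinely differ.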
Using CO2 spatial variability to quantify representation errors of satellite CO2 retrievals
Michalak, Anna M.
Satellite measurements of column-averaged CO2 dry-air mole fraction (XCO2) provide global data at high spatial resolutions. These data ... (published 29 August 2008).
Unequal error protection of subband coded bits
Devalla, Badarinath
1994-01-01
Source coded data can be separated into different classes based on their susceptibility to channel errors. Errors in the Important bits cause greater distortion in the reconstructed signal. This thesis presents an Unequal Error Protection scheme...
Non-Gaussian numerical errors versus mass hierarchy
Y. Meurice; M. B. Oktay
2000-05-12
We probe the numerical errors made in renormalization group calculations by varying slightly the rescaling factor of the fields and rescaling back in order to get the same (if there were no round-off errors) zero-momentum 2-point function (magnetic susceptibility). The actual calculations were performed with Dyson's hierarchical model and a simplified version of it. We compare the distributions of numerical values obtained from a large sample of rescaling factors with the (Gaussian by design) distribution of a random number generator and find significant departures from Gaussian behavior. In addition, the average value differs (robustly) from the exact answer by a quantity of the same order as the standard deviation. We provide a simple model in which the errors made at shorter distances have a larger weight than those made at larger distances. This model explains in part the non-Gaussian features and why the central limit theorem does not apply.
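A generic way to quantify such departures from Gaussian behavior (not the authors' specific analysis) is to compare sample skewness and excess kurtosis against the Gaussian values of zero:

```python
import numpy as np

# Sample skewness and excess kurtosis; both are 0 for a Gaussian, so large
# values flag non-Gaussian error distributions.
def skew_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean(), (z ** 4).mean() - 3.0

rng = np.random.default_rng(1)
gaussian = rng.normal(size=100_000)
skewed = rng.exponential(size=100_000)    # stand-in for non-Gaussian errors
print(np.round(skew_kurtosis(gaussian), 2))  # close to (0, 0)
print(np.round(skew_kurtosis(skewed), 1))    # exponential: near (2, 6)
```

For the exponential distribution the population values are skewness 2 and excess kurtosis 6, so even this crude test separates it cleanly from Gaussian samples.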
Communication error detection using facial expressions
Wang, Sy Bor, 1976-
2008-01-01
Automatic detection of communication errors in conversational systems typically rely only on acoustic cues. However, perceptual studies have indicated that speakers do exhibit visual communication error cues passively ...
(Approximate) Low-Mode Averaging with a new Multigrid Eigensolver
Gunnar Bali; Sara Collins; Andreas Frommer; Karsten Kahl; Issaku Kanamori; Benjamin Müller; Matthias Rottmann; Jakob Simeth
2015-09-23
We present a multigrid-based eigensolver for computing low modes of the Hermitian Wilson Dirac operator. For the non-Hermitian case, multigrid methods have already replaced conventional Krylov subspace solvers in many lattice QCD computations. Since the $\gamma_5$-preserving aggregation-based interpolation used in our multigrid method is valid for both the Hermitian and the non-Hermitian case, inversions of very ill-conditioned shifted systems with the Hermitian operator become feasible. This enables the use of multigrid within shift-and-invert type eigensolvers. We show numerical results from our MPI-C implementation of a Rayleigh quotient iteration with multigrid. For state-of-the-art lattice sizes and moderate numbers of desired low modes we achieve speed-ups of an order of magnitude and more over PARPACK. We show results and develop strategies for using our eigensolver to calculate disconnected contributions to hadronic quantities, which are noisy and still computationally challenging. Here, we explore the possible benefits of using our eigensolver for low-mode averaging and related methods with high- and low-accuracy eigenvectors. We develop a low-mode-averaging type method using only a few of the smallest eigenvectors with low accuracy. This allows us to avoid expensive exact eigensolves while still benefiting from reduced statistical errors.
Polarized electron beams at milliampere average current
Poelker, Matthew
2013-11-01
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 uA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and to strategies for constructing a photogun that operates reliably at bias voltages > 350 kV.
Table 1. Real Average Transportation and Delivered Costs of Coal...
U.S. Energy Information Administration (EIA) Indexed Site
Real Average Transportation and Delivered Costs of Coal, by Year and Primary Transport Mode. Columns: "Year", "Average Transportation Cost of Coal (Dollars per Ton)", "Average Delivered Cost...
Laser Fusion Energy The High Average Power
Laser Fusion Energy and The High Average Power Program. John Sethian, Naval Research Laboratory. ... for Inertial Fusion Energy with lasers, direct-drive targets and solid-wall chambers. Lasers: DPSSL (LLNL), Kr ... Laser goals: 1. Develop technologies that can meet the fusion energy ...
Extracting gluon condensate from the average plaquette
Taekoon Lee
2015-03-27
The perturbative contribution in the average plaquette is subtracted using Borel summation and the remnant of the plaquette is shown to scale as a dim-4 condensate. A critical review is presented of the renormalon subtraction scheme that claimed a dim-2 condensate. The extracted gluon condensate is compared with the latest result employing high order (35-loop) calculation in the stochastic perturbation theory.
Structure of minimum-error quantum state discrimination
Joonwoo Bae
2013-07-19
Distinguishing different quantum states is a fundamental task with practical applications in information processing. Despite the efforts devoted so far, however, strategies for optimal discrimination are known only for specific examples. We here consider the problem of minimum-error quantum state discrimination, in which the average error is minimized. We show the general structure of minimum-error state discrimination as well as useful properties for deriving analytic solutions. Based on the general structure, we present a geometric formulation of the problem, which can be applied to cases where the quantum state geometry is clear. We also introduce equivalence classes of sets of quantum states in terms of minimum-error discrimination: sets of quantum states in an equivalence class share the same guessing probability. In particular, for qubit states, where the state geometry is given by the Bloch sphere, we illustrate that for an arbitrary set of qubit states the minimum-error state discrimination with equal prior probabilities can be solved analytically, that is, the optimal measurement and the guessing probability are obtained explicitly.
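For the two-state case the analytic solution is the classic Helstrom bound, a special case of the general structure above. A small sketch for a pair of pure qubit states with equal priors:

```python
import numpy as np

# Two-state minimum-error discrimination (Helstrom bound): the optimal
# guessing probability for density matrices rho0, rho1 with priors q0, q1 is
#   P_guess = (1 + || q0*rho0 - q1*rho1 ||_1) / 2,
# where ||.||_1 is the trace norm (sum of absolute eigenvalues here,
# since the operator is Hermitian).
def guessing_probability(rho0, rho1, q0=0.5, q1=0.5):
    gamma = q0 * rho0 - q1 * rho1
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return (1 + trace_norm) / 2

ket0 = np.array([[1.0], [0.0]])                  # |0>
ketp = np.array([[1.0], [1.0]]) / np.sqrt(2)     # |+>
rho0, rho1 = ket0 @ ket0.T, ketp @ ketp.T
print(round(guessing_probability(rho0, rho1), 4))  # -> 0.8536
```

For these two states the overlap is 1/2, so the bound gives (1 + sqrt(1/2)) / 2 ≈ 0.8536, matching the printed value.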
Averaged equilibrium and stability in low-aspect-ratio stellarators
Garcia, L.; Carreras, B.A.; Dominguez, N.
1989-01-01
The MHD equilibrium and stability calculations for stellarators are complex because of the intrinsic three-dimensional (3-D) character of these configurations. The stellarator expansion simplifies the equilibrium calculation by reducing it to a two-dimensional (2-D) problem. The classical stellarator expansion includes terms up to order epsilon^2, and the vacuum magnetic field is also included up to this order. For large-aspect-ratio configurations, the results of the stellarator expansion agree well with 3-D numerical equilibrium results. But for low-aspect-ratio configurations, there are significant discrepancies with 3-D equilibrium calculations. The main reason for these discrepancies is the approximation in the vacuum field contributions. This problem can be avoided by applying the average method in a vacuum flux coordinate system. In this way, the exact vacuum magnetic field contribution is included, and the results agree well with 3-D equilibrium calculations even for low-aspect-ratio configurations. Using the average method in a vacuum flux coordinate system also permits the accurate calculation of local stability properties with the Mercier criterion; the main improvement is in the accurate calculation of the geodesic curvature term. In this paper, we discuss the application of the average method in flux coordinates to the calculation of the Mercier criterion for low-aspect-ratio stellarator configurations. 12 refs., 3 figs.
ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.
Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.
2004-07-26
We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
McInerney, Peter; Adams, Paul; Hadi, Masood Z.
2014-01-01
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high-fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high-fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
Impact Ionization Model Using Average Energy and Average Square Energy of Distribution Function
Dunham, Scott
For devices shorter than the energy relaxation length (approximately 0.05 µm), the energy distribution function is not well described ... calculation of the impact ionization coefficient requires the use of a high-energy distribution function because ...
Time-dependent angularly averaged inverse transport
Guillaume Bal; Alexandre Jollivet
2009-05-07
This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain.
Reynolds-Averaged Navier-Stokes
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Kernel Regression in the Presence of Correlated Errors. ... in nonparametric regression is difficult in the presence of correlated errors. There exists a wide variety ... support vector machines for regression. Keywords: nonparametric regression, correlated errors, bandwidth choice.
Long-term average performance benefits of parabolic trough improvements
Gee, R.; Gaul, H.W.; Kearney, D.; Rabl, A.
1980-03-01
Improved parabolic trough concentrating collectors will result from better design, improved fabrication techniques, and the development and utilization of improved materials. The difficulty of achieving these improvements varies, as does their potential for increasing parabolic trough performance. The purpose of this analysis is to quantify the relative merit of various technology advancements in improving the long-term average performance of parabolic trough concentrating collectors. The performance benefits of improvements are determined as a function of operating temperature for north-south, east-west, and polar mounted parabolic troughs. The results are presented graphically to allow a quick determination of the performance merits of particular improvements. Substantial annual energy gains are shown to be attainable. Of the improvements evaluated, the development of stable back-silvered glass reflective surfaces offers the largest performance gain for operating temperatures below 150 °C. Above 150 °C, the development of trough receivers that can maintain a vacuum is the most significant potential improvement. The reduction of concentrator slope errors also has a substantial performance benefit at high operating temperatures.
Giannakis, Georgios
Average-Rate Optimal ... (G. B. Giannakis, Fellow, IEEE; IEEE Transactions on Wireless Communications, vol. 1, no. 4, October 2002, p. 712). Abstract: Enabling linear minimum mean-square error (LMMSE)-based estimation ... the insertion of pilot symbols reduces the information rate and, thus, the utilization of the channel.
Energy efficiency of error correction for wireless communication
Havinga, Paul J.M.
... control is an important issue for mobile computing systems. This includes energy spent in the physical radio transmission ... and Networking Conference 1999 [7]. ... the energy of transmission and the energy of redundancy computation. We will show that the computational cost ...
Fact #744: September 10, 2012 Average New Light Vehicle Price...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Fact #744 (September 10, 2012): Average New Light Vehicle Price Grows Faster than Average Used Light Vehicle Price.
Fact #849: December 1, 2014 Midsize Hybrid Cars Averaged 51%...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Fuel economy averages for cars are for gasoline cars only. The fuel economy average is the production-weighted harmonic mean. 2014 data are preliminary.
Method and apparatus for detecting timing errors in a system oscillator
Gliebe, Ronald J. (Library, PA); Kramer, William R. (Bethel Park, PA)
1993-01-01
A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
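The comparison step of the method can be sketched in software; the patent describes a hardware circuit, so the sample stream, delay, and glitch below are purely illustrative:

```python
# Software sketch of the comparison step: a timing error is flagged
# wherever the oscillator sample stream disagrees with its own copy
# delayed by one nominal period (here, 2 samples).
def detect_timing_errors(samples, delay):
    """Return indices where the signal differs from its delayed copy."""
    return [i for i in range(delay, len(samples))
            if samples[i] != samples[i - delay]]

good = [0, 1] * 8                          # stable oscillator, period = 2
glitched = good[:10] + [1, 1] + good[12:]  # a stuck sample mid-stream
print(detect_timing_errors(good, 2))       # -> [] (no error)
print(detect_timing_errors(glitched, 2))   # flags the samples around the glitch
```

In the hardware version, a mismatch at any index would drive the LED indicator rather than return a list of indices.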
Office of Energy Efficiency and Renewable Energy (EERE)
In 2011 the average used light vehicle price was 36% higher than in 1990, while the average new light vehicle price was 67% higher than it was in 1990. The average price of a used vehicle had been...
The Average Mass Profile of Galaxy Clusters
R. G. Carlberg; H. K. C. Yee; E. Ellingson; S. L. Morris; R. Abraham; P. Gravel; C. J. Pritchet; T. Smecker-Hane; F. D. A. Hartwick; J. E. Hesser; J. B. Hutchings; J. B. Oke
1997-05-23
The average mass density profile measured in the CNOC cluster survey is well described by the analytic form rho(r)=A/[r(r+a_rho)^2], as advocated on the basis of n-body simulations by Navarro, Frenk & White. The predicted core radii are a_rho=0.20 (in units of the radius where the mean interior density is 200 times the critical density) for an Omega=0.2 open CDM model, or a_rho=0.26 for a flat Omega=0.2 model, with little dependence on other cosmological parameters for simulations normalized to the observed cluster abundance. The dynamically derived local mass-to-light ratio, which has little radial variation, converts the observed light profile to a mass profile. We find that the scale radius of the mass distribution, 0.20 <= a_rho <= 0.30 (depending on modeling details, with a 95% confidence range of 0.12-0.50), is completely consistent with the predicted values. Moreover, the profiles and total masses of the clusters as individuals can be acceptably predicted from the cluster RMS line-of-sight velocity dispersion alone. This is strong support of the hierarchical clustering theory for the formation of galaxy clusters in a cool, collisionless, dark matter dominated universe.
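The quoted density profile integrates in closed form, which is convenient when converting it to an enclosed-mass profile. The sketch below checks the closed form against direct numerical integration, using illustrative (not fitted) parameters A = a_rho = 1:

```python
import numpy as np

# Enclosed mass implied by the quoted profile rho(r) = A / [r (r + a)^2]
# (the NFW form): integrating 4*pi*r'^2*rho(r') from 0 to r gives
#   M(r) = 4*pi*A * [ln((r + a)/a) - r/(r + a)].
def nfw_enclosed_mass(r, A, a):
    return 4 * np.pi * A * (np.log((r + a) / a) - r / (r + a))

# Cross-check the closed form against trapezoidal integration (A = a = 1).
r = np.linspace(1e-6, 2.0, 200_000)
f = 4 * np.pi * r ** 2 / (r * (r + 1.0) ** 2)        # 4 pi r^2 rho
numeric = np.sum((f[1:] + f[:-1]) / 2) * (r[1] - r[0])
print(np.isclose(numeric, nfw_enclosed_mass(2.0, 1.0, 1.0), rtol=1e-3))  # -> True
```

Note the logarithmic divergence of the total mass as r grows, which is why NFW fits are always quoted with a finite reference radius such as r_200.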
... the average weight of Connecticut River fish was considerably less (Table 1). The difference ... in the Connecticut River basin. Fisheries (Bethesda) 7(6): 2-11. Potter, I. C.; F. W. H. Beamish; and B. G. H. ... Freshwater fishes of Connecticut. State Geol. Nat. Hist. Surv. Conn., Dep. Environ. Prot., Bull. 101, 134 p.
Countries Gasoline Prices Including Taxes
Gasoline and Diesel Fuel Update (EIA)
Crude oil, gasoline, heating oil, diesel, propane, and other liquids including biofuels and natural gas liquids. Natural Gas Exploration and reserves, storage, imports and...
POWER SPECTRAL PARAMETERIZATIONS OF ERROR AS A FUNCTION OF RESOLUTION IN GRIDDED ALTIMETRY MAPS
Kaplan, Alexey
... be expressed in terms of the averages over model grid box areas. In reality, however, observations are ... differently by the model grid and by the observational system. This difference turns out to be a major ...
Group representations, error bases and quantum codes
Knill, E
1996-01-01
This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.
On a fatal error in tachyonic physics
Edward Kapuścik
2013-08-10
A fatal error in the famous paper on tachyons by Gerald Feinberg is pointed out. The correct expressions for energy and momentum of tachyons are derived.
Adjoint Error Estimation for Elastohydrodynamic Lubrication
Jimack, Peter
Adjoint Error Estimation for Elastohydrodynamic Lubrication, by Daniel Edward Hart. Submitted ... elastohydrodynamic lubrication (EHL) problems. A functional is introduced, namely the friction ...
Measure of Diffusion Model Error for Thermal Radiation Transport
Kumar, Akansha
2013-04-19
... and computational time. However, this approximation often has significant error. Error due to the inherent nature of a physics model is called model error. Information about the model error associated with the diffusion approximation is clearly desirable...
WIPP Weatherization: Common Errors and Innovative Solutions Presentati...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
WIPP Weatherization: Common Errors and Innovative Solutions Presentation. This presentation contains...
Inference for Model Error Allan Seheult
Oakley, Jeremy
Keywords: Reservoirs, Model Error, Reification, Thermohaline Circulation. Mathematical models of complex ... the uncertainties associated with both calibrating a mathematical model to observations on a physical system ... a specification exercise of model error with the cosmologists, linked to an extensive analysis of model ...
Nonparametric Regression with Correlated Errors Jean Opsomer
Wang, Yuedong
Jean Opsomer (Iowa State University) and Yuedong Wang. Nonparametric regression techniques are often sensitive to the presence of correlation in the errors ... splines and wavelet regression under correlation, both for short-range and long-range dependence.
Remarks on statistical errors in equivalent widths
Klaus Vollmann; Thomas Eversberg
2006-07-03
Equivalent width measurements for rapid line variability in atomic spectral lines are degraded by increasing error bars at shorter exposure times. We derive an expression for the error of the line equivalent width $\sigma(W_\lambda)$ with respect to pure photon noise statistics and provide a correction value for previous calculations.
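For reference, the equivalent width itself is the integral of the normalized line depth over wavelength. This sketch computes it numerically for a synthetic Gaussian absorption line; the error formula derived in the paper is not reproduced here:

```python
import numpy as np

# Numerical equivalent width: W = integral over the line of (1 - F/F_c),
# with F the observed flux and F_c the continuum. The line below is a
# synthetic Gaussian absorption feature (depth 0.8, sigma 1.5 Angstrom).
def equivalent_width(wavelength, flux, continuum):
    depth = 1.0 - flux / continuum
    return np.sum((depth[1:] + depth[:-1]) / 2 * np.diff(wavelength))

wl = np.linspace(6550.0, 6575.0, 500)
flux = 1.0 - 0.8 * np.exp(-0.5 * ((wl - 6562.8) / 1.5) ** 2)
W = equivalent_width(wl, flux, np.ones_like(wl))
print(round(W, 2))  # -> 3.01 (analytic: 0.8 * 1.5 * sqrt(2*pi) = 3.008)
```

Photon noise in the flux samples propagates into W, which is exactly the error $\sigma(W_\lambda)$ the paper quantifies.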
Characterizing Application Memory Error Vulnerability to
Mutlu, Onur
Heterogeneous-reliability memory (HRM): store error-tolerant data in less-reliable, lower-cost memory and error-vulnerable data in more reliable memory. Key observation: some data can be recovered by software. Evaluation on server workloads.
Fact #671: April 18, 2011 Average Truck Speeds | Department of...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
The Federal Highway Administration studies traffic volume and flow on major truck routes by tracking more than 500,000 trucks. The average speed of trucks...
Fact #889: September 7, 2015 Average Diesel Price Lower than...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Fact #889 (September 7, 2015): Average Diesel Price Lower than Gasoline for the First Time in Six Years.
Fact #614: March 15, 2010 Average Age of Household Vehicles
Broader source: Energy.gov [DOE]
The average age of household vehicles has increased from 6.6 years in 1977 to 9.2 years in 2009. Pickup trucks have the oldest average age in every year listed. Sport utility vehicles (SUVs), first...
Cross-task individual differences in error ... (Copyright 2007 Psychonomic Society, Inc.)
Curran, Tim
…, Arizona; and Christopher D'Lauro and Tim Curran, University of Colorado, Boulder, Colorado. The error work includes online detection and correction, and bias (positive learners; Frank, Woroch, & Curran, 2005).
On the evaluation of human error probabilities for post-initiating events
Presley, Mary R
2006-01-01
Quantification of human error probabilities (HEPs) for the purpose of human reliability assessment (HRA) is very complex. Because of this complexity, the state of the art includes a variety of HRA models, each with its own ...
Averaging top quark results in Run 2
Strovink, Mark
The pie chart shows the relative weights of the five input measurements in the world average.
Improving climate change detection through optimal seasonal averaging: the case of the North Atlantic jet
Wirosoetisno, Djoko
Improving climate change detection through optimal seasonal averaging: the case of the North Atlantic jet (2015).
Engineering Grads Earn The Most Major Average Salary
Shahabi, Cyrus
Engineering Grads Earn The Most. Table: Major / Average Salary Offer. Petroleum Engineering: $86…; Aeronautical/Astronautical Engineering: $57,231; Information Sciences & Systems: $54,038. Source: Winter 2010 Salary Survey, National … was the fourth most lucrative degree, with graduates starting at $61,205 on average. The average salary …
A technique for human error analysis (ATHEANA)
Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others
1996-05-01
Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge- base was developed which describes the links between performance shaping factors and resulting unsafe actions.
Gillespie, Dirk
2013-10-01
An algorithm to approximately calculate the partition function (and subsequently ensemble averages) and density of states of lattice spin systems through non-Monte-Carlo random sampling is developed. This algorithm (called the sampling-the-mean algorithm) can be applied to models where the up or down spins at lattice nodes interact to change the spin states of other lattice nodes, especially non-Ising-like models with long-range interactions such as the biological model considered here. Because it is based on the Central Limit Theorem of probability, the sampling-the-mean algorithm also gives estimates of the error in the partition function, ensemble averages, and density of states. Easily implemented parallelization strategies and error minimizing sampling strategies are discussed. The sampling-the-mean method works especially well for relatively small systems, systems with a density of energy states that contains sharp spikes or oscillations, or systems with little a priori knowledge of the density of states.
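The sampling-the-mean idea can be sketched for a toy spin system: estimate the partition function as the number of states times the sample mean of the Boltzmann factor over uniformly drawn states, with a Central Limit Theorem error bar. The nearest-neighbour chain energy below is a hypothetical illustration, not the biological model of the paper:

```python
import math
import random

def sample_mean_partition(energy, n_spins, beta, n_samples, seed=0):
    """Estimate Z = sum over all 2^n states of exp(-beta*E) by uniform
    random sampling: Z ~ 2^n * mean(exp(-beta*E)), with a CLT error bar."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n_samples):
        state = [rng.choice((-1, 1)) for _ in range(n_spins)]
        vals.append(math.exp(-beta * energy(state)))
    mean = sum(vals) / n_samples
    var = sum((v - mean) ** 2 for v in vals) / (n_samples - 1)
    n_states = 2 ** n_spins
    z_hat = n_states * mean
    z_err = n_states * math.sqrt(var / n_samples)  # one-sigma CLT estimate
    return z_hat, z_err

def chain_energy(s):
    # Toy nearest-neighbour chain: E = -sum_i s_i * s_{i+1}
    return -sum(s[i] * s[i + 1] for i in range(len(s) - 1))
```

For small systems the estimate can be checked against exact enumeration, which is exactly the regime the abstract says the method handles well.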
Averages of B-Hadron, C-Hadron, and tau-lepton properties as of early 2012
Amhis, Y.; et al.
2012-07-01
This article reports world averages of measurements of b-hadron, c-hadron, and tau-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through the end of 2011. In some cases results available in the early part of 2012 are included. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays and CKM matrix elements.
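The basic building block of such world averages, leaving out the correlation handling and input rescaling that HFAG actually performs, is the inverse-variance weighted mean. A minimal sketch for independent measurements:

```python
def weighted_average(measurements):
    """Inverse-variance weighted average of (value, sigma) pairs.
    Valid only for independent measurements (no correlations)."""
    weights = [1.0 / s ** 2 for _, s in measurements]
    wsum = sum(weights)
    avg = sum(w * x for w, (x, _) in zip(weights, measurements)) / wsum
    return avg, wsum ** -0.5  # combined value and its uncertainty
```

Precise measurements dominate the average, and combining two equal measurements shrinks the uncertainty by a factor of sqrt(2).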
Agility metric sensitivity using linear error theory
Smith, David Matthew
2000-01-01
Aircraft agility metrics have been proposed for use to measure the performance and capability of aircraft onboard while in-flight. The sensitivity of these metrics to various types of errors and uncertainties is not ...
Quantum Error Correction for Quantum Memories
Barbara M. Terhal
2015-04-10
Active quantum error correction using qubit stabilizer codes has emerged as a promising, but experimentally challenging, engineering program for building a universal quantum computer. In this review we consider the formalism of qubit stabilizer and subsystem stabilizer codes and their possible use in protecting quantum information in a quantum memory. We review the theory of fault-tolerance and quantum error-correction, discuss examples of various codes and code constructions, the general quantum error correction conditions, the noise threshold, the special role played by Clifford gates and the route towards fault-tolerant universal quantum computation. The second part of the review is focused on providing an overview of quantum error correction using two-dimensional (topological) codes, in particular the surface code architecture. We discuss the complexity of decoding and the notion of passive or self-correcting quantum memories. The review does not focus on a particular technology but discusses topics that will be relevant for various quantum technologies.
Simulating Bosonic Baths with Error Bars
Mischa P. Woods; M. Cramer; M. B. Plenio
2015-04-07
We derive rigorous truncation-error bounds for the spin-boson model and its generalizations to arbitrary quantum systems interacting with bosonic baths. For the numerical simulation of such baths the truncation of both, the number of modes and the local Hilbert-space dimensions is necessary. We derive super-exponential Lieb--Robinson-type bounds on the error when restricting the bath to finitely-many modes and show how the error introduced by truncating the local Hilbert spaces may be efficiently monitored numerically. In this way we give error bounds for approximating the infinite system by a finite-dimensional one. As a consequence, numerical simulations such as the time-evolving density with orthogonal polynomials algorithm (TEDOPA) now allow for the fully certified treatment of the system-environment interaction.
Errors and paradoxes in quantum mechanics
D. Rohrlich
2007-08-28
Errors and paradoxes in quantum mechanics, entry in the Compendium of Quantum Physics: Concepts, Experiments, History and Philosophy, ed. F. Weinert, K. Hentschel, D. Greenberger and B. Falkenburg (Springer), to appear
Quantum error-correcting codes and devices
Gottesman, Daniel (Los Alamos, NM)
2000-10-03
A method of forming quantum error-correcting codes by first forming a stabilizer for a Hilbert space. A quantum information processing device can be formed to implement such quantum codes.
Organizational Errors: Directions for Future Research
Carroll, John Stephen
The goal of this chapter is to promote research about organizational errors—i.e., the actions of multiple organizational participants that deviate from organizationally specified rules and can potentially result in adverse ...
Quantifying truncation errors in effective field theory
R. J. Furnstahl; N. Klco; D. R. Phillips; S. Wesolowski
2015-06-03
Bayesian procedures designed to quantify truncation errors in perturbative calculations of quantum chromodynamics observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions ("priors") for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We first demonstrate the calculation of Bayesian probability distributions for the EFT truncation error in some representative examples, and then focus on the application of chiral EFT to neutron-proton scattering. Epelbaum, Krebs, and Meißner recently articulated explicit rules for estimating truncation errors in such EFT calculations of few-nucleon-system properties. We find that their basic procedure emerges generically from one class of naturalness priors considered, and that all such priors result in consistent quantitative predictions for 68% DOB intervals. We then explore several methods by which the convergence properties of the EFT for a set of observables may be used to check the statistical consistency of the EFT expansion parameter.
Evaluating operating system vulnerability to memory errors.
Ferreira, Kurt Brian; Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke; Mueller, Frank; Fiala, David; Brightwell, Ronald Brian
2012-05-01
Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.
On the Fourier Transform Approach to Quantum Error Control
Hari Dilip Kumar
2012-08-24
Quantum codes are subspaces of the state space of a quantum system that are used to protect quantum information. Some common classes of quantum codes are stabilizer (or additive) codes, non-stabilizer (or non-additive) codes obtained from stabilizer codes, and Clifford codes. These are analyzed in a framework using the Fourier transform on finite groups, the finite group in question being a subgroup of the quantum error group considered. All the classes of codes that can be obtained in this framework are explored, including codes more general than Clifford codes. The error detection properties of one of these more general classes ("direct sums of translates of Clifford codes") are characterized. Example codes are constructed, and computer code search results are presented and analysed.
The averaging process in permeability estimation from well-test data
Oliver, D.S. (Saudi Aramco (SA))
1990-09-01
Permeability estimates from the pressure derivative or the slope of the semilog plot usually are considered to be averages of some large ill-defined reservoir volume. This paper presents results of a study of the averaging process, including identification of the region of the reservoir that influences permeability estimates, and a specification of the relative contribution of the permeability of various regions to the estimate of average permeability. The diffusion equation for the pressure response of a well situated in an infinite reservoir where permeability is an arbitrary function of position was solved for the case of small variations from a mean value. Permeability estimates from the slope of the plot of pressure vs. the logarithm of drawdown time are shown to be weighted averages of the permeabilities within an inner and outer radius of investigation.
Orbit-averaged guiding-center Fokker-Planck operator
Brizard, A. J. [Department of Chemistry and Physics, Saint Michael's College, Colchester, Vermont 05439 (United States); Decker, J.; Peysson, Y.; Duthoit, F.-X. [CEA, IRFM, Saint-Paul-lez-Durance F-13108 (France)
2009-10-15
A general orbit-averaged guiding-center Fokker-Planck operator suitable for the numerical analysis of transport processes in axisymmetric magnetized plasmas is presented. The orbit-averaged guiding-center operator describes transport processes in a three-dimensional guiding-center invariant space: the orbit-averaged magnetic-flux invariant ψ, the minimum-B pitch-angle coordinate ξ₀, and the momentum magnitude p.
Table 2. Real Average Annual Coal Transportation Costs, by Primary…
U.S. Energy Information Administration (EIA) Indexed Site
Real Average Annual Coal Transportation Costs, by Primary Transport Mode and Supply Region (2013 dollars per ton). Columns: Coal Supply Region; 2008, 2009, 2010, 2011, 2012, 2013. Rows include: Railroad…
LOW-HIGH VALUES FOR PETROLEUM AVERAGE INVENTORY RANGES (MILLION...
Annual Energy Outlook [U.S. Energy Information Administration (EIA)]
Energy Information Administration: Low-High Values for Petroleum Average Inventory Ranges (million barrels). File updated April 2004. Columns: Line Number, Month, Low, High, Product Name, Geography…
Hamlen, Kevin W.
Investigating the SANS/CWE Top 25 Programming Errors List. Running title: Investigating SANS/CWE Top 25 Programming Errors.
Average balance equations, scale dependence, and energy cascade for granular materials
Riccardo Artoni; Patrick Richard
2015-03-09
A new averaging method linking discrete to continuum variables of granular materials is developed and used to derive average balance equations. Its novelty lies in the choice of the decomposition between mean values and fluctuations of properties which takes into account the effect of gradients. Thanks to a local homogeneity hypothesis, whose validity is discussed, simplified balance equations are obtained. This original approach solves the problem of dependence of some variables on the size of the averaging domain obtained in previous approaches which can lead to huge relative errors (several hundred percentages). It also clearly separates affine and nonaffine fields in the balance equations. The resulting energy cascade picture is discussed, with a particular focus on unidirectional steady and fully developed flows for which it appears that the contact terms are dissipated locally unlike the kinetic terms which contribute to a nonlocal balance. Application of the method is demonstrated in the determination of the macroscopic properties such as volume fraction, velocity, stress, and energy of a simple shear flow, where the discrete results are generated by means of discrete particle simulation.
Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results
Clark, E.L.
1994-07-01
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-steam Mach number, M{infinity}, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-steam test conditions to a reference condition.
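First-order Taylor-series propagation of this kind can be sketched numerically, with central-difference partial derivatives playing the role of the sensitivity coefficients. The `ratio` function below is a hypothetical stand-in for the aerodynamic ratios in the report:

```python
def propagate_error(f, x, sigmas, h=1e-6):
    """First-order Taylor-series error propagation:
    sigma_f^2 = sum_i (df/dx_i * sigma_i)^2, with the partial derivatives
    (sensitivity coefficients) evaluated by central differences."""
    var = 0.0
    for i, s in enumerate(sigmas):
        xp, xm = list(x), list(x)
        step = h * max(abs(x[i]), 1.0)
        xp[i] += step
        xm[i] -= step
        dfdx = (f(xp) - f(xm)) / (2 * step)  # sensitivity coefficient
        var += (dfdx * s) ** 2
    return var ** 0.5

# Hypothetical example: a simple pressure ratio r = p / p0
ratio = lambda v: v[0] / v[1]
```

For p = 50 ± 0.5 and p0 = 100 ± 1.0, both terms contribute 2.5e-5 to the variance, giving sigma_r ≈ 0.00707.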
The global warming signal is the average of
Jones, Peter JS
The global warming signal is the average of years 70-80 in the increasing-CO2 run minus the corresponding control average; the differences at high latitudes represent significant uncertainty in the global warming signal (Fig. 5). Uncertainty in the isopycnal diffusivity causes uncertainty of up to 50% in the global warming signal.
Morgantown Slightly Exceeds National Average for Cost of Living
Mohaghegh, Shahab
(an index value of 100 reflects the national average). The index expresses the cost of living, health care, and miscellaneous goods and services, relative to the national average by category. Figure 2 illustrates how the cost of living index has …
Error Analysis in Nuclear Density Functional Theory (Journal...
Office of Scientific and Technical Information (OSTI)
Error Analysis in Nuclear Density Functional Theory. Authors: Schunck, N.; McDonnell, …
Yang, H; Wang, W; Hu, W; Chen, X; Wang, X; Yu, C [Taizhou Hospital, Wenzhou Medical College, Taizhou, Zhejiang (China)
2014-06-01
Purpose: To quantify setup errors by pretreatment kilovolt cone-beam computed tomography (kV-CBCT) scans for middle or distal esophageal carcinoma patients. Methods: Fifty-two consecutive middle or distal esophageal carcinoma patients who underwent IMRT were included in this study. A planning CT scan using a big-bore CT simulator was performed in the treatment position and was used as the reference scan for image registration with CBCT. CBCT scans (On-Board Imaging v1.5 system, Varian Medical Systems) were acquired daily during the first treatment week. A total of 260 CBCT scans (nine CBCTs per patient) was assessed with a registration clip box defined around the PTV-thorax in the reference scan, based on bony anatomy, using Offline Review software v10.0 (Varian Medical Systems). The anterior-posterior (AP), left-right (LR), and superior-inferior (SI) corrections were recorded, and the systematic and random errors were calculated. The CTV-to-PTV margin for each CBCT frequency was based on the van Herk formula (2.5Σ + 0.7σ). Results: The SD of the systematic error (Σ) was 2.0 mm, 2.3 mm, and 3.8 mm in the AP, LR, and SI directions, respectively. The average random error (σ) was 1.6 mm, 2.4 mm, and 4.1 mm in the AP, LR, and SI directions, respectively. The CTV-to-PTV safety margin was 6.1 mm, 7.5 mm, and 12.3 mm in the AP, LR, and SI directions based on the van Herk formula. Conclusion: Our data recommend the use of 6 mm, 8 mm, and 12 mm margins for esophageal carcinoma patient setup in the AP, LR, and SI directions, respectively.
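The margin arithmetic in this abstract follows directly from the van Herk formula, using the Σ and σ values reported per direction (small differences, e.g. 7.43 mm versus the reported 7.5 mm in LR, are rounding):

```python
def van_herk_margin(sigma_sys, sigma_rand):
    """CTV-to-PTV margin from the van Herk formula: 2.5*Sigma + 0.7*sigma,
    where Sigma is the SD of systematic error and sigma the random error."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand

# (Sigma, sigma) in mm per direction, as reported in the abstract
margins = {d: van_herk_margin(S, s)
           for d, (S, s) in {"AP": (2.0, 1.6),
                             "LR": (2.3, 2.4),
                             "SI": (3.8, 4.1)}.items()}
```

This reproduces the reported AP (6.1 mm) and SI (12.3 mm) margins to within rounding.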
Kessler, Christoph
/* findloc(): find the largest index j in [0..n-1] with a[j] <= key (where a[n] = +infty). C's bsearch() can't be used: it requires a[j] == key. CombineCRCW BSP Quicksort variant by Gerbessiotis/Valiant, JPDC 22 (1994), implemented in NestStepC. */ int N = 10; // default value ... int findloc(void *key, ...
Medium term municipal solid waste generation prediction by autoregressive integrated moving average
Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan
2014-09-12
Generally, solid waste handling and management are performed by a municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems, insufficient data, and a lack of strategic planning. It is therefore important to develop a robust solid waste generation forecasting model: it helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern lifestyle. This paper aims to develop a monthly solid waste forecasting model using an Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even where data are scarce and will help the municipality properly establish the annual service plan. The results show that an ARIMA(6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
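The structure of an ARIMA(6,1,0) model can be sketched in a simplified way: difference the series once (d = 1), then fit an AR(6) model to the differences by ordinary least squares. This is only an illustration of the model form, not the paper's fitted model or data:

```python
import numpy as np

def fit_ar_on_diff(series, p=6):
    """Sketch of ARIMA(p,1,0): difference once, then fit AR(p)
    to the differenced series by ordinary least squares."""
    d = np.diff(np.asarray(series, dtype=float))  # d = 1: first difference
    rows = [d[i - p:i][::-1] for i in range(p, len(d))]  # p lagged values
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    y = d[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ coef
    rmse = float(np.sqrt(np.mean((y - fitted) ** 2)))
    return coef, rmse
```

A production fit would use a dedicated time-series library with proper model selection and diagnostics; this sketch only shows where the (6,1,0) orders enter.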
Lateral boundary errors in regional numerical weather
?umer, Slobodan
Lateral boundary errors in regional numerical weather prediction models. Author: Ana Car. Advisor: assoc. prof. dr. Nedjeljka Zagar. January 5, 2015. Abstract: Regional models are used in many national … Every NWP model solves the same system of equations (1) describing the evolution of the atmosphere, i.e., the weather forecast.
MEASUREMENT AND CORRECTION OF ULTRASONIC ANEMOMETER ERRORS
Heinemann, Detlev
Ultrasonic anemometers commonly show systematic errors depending on wind speed, due to inaccurate ultrasonic transducer mounting: the measured flow is distorted by the probe head. Corrected three-dimensional wind speed time series are obtained; results for the variance and power spectra are shown.
Chinese Remaindering with Errors Oded Goldreich
International Association for Cryptologic Research (IACR)
Chinese Remaindering with Errors. Oded Goldreich, Department of Computer Science, Weizmann Institute; … 02139, USA, madhu@mit.edu. Abstract: The Chinese Remainder Theorem states that a positive integer m is uniquely specified by its remainder modulo k …
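The reconstruction the theorem guarantees is easy to sketch for pairwise-coprime moduli (the moduli 3, 5, 7 below are an illustrative choice):

```python
def crt_reconstruct(residues, moduli):
    """Reconstruct the unique m modulo prod(moduli) from its residues
    modulo pairwise-coprime moduli (Chinese Remainder Theorem)."""
    M = 1
    for n in moduli:
        M *= n
    m = 0
    for r, n in zip(residues, moduli):
        Mi = M // n
        # pow(Mi, -1, n): modular inverse of Mi mod n (Python 3.8+)
        m += r * Mi * pow(Mi, -1, n)
    return m % M
```

For example, any positive integer below 3 * 5 * 7 = 105 is pinned down by its residues mod 3, 5, and 7.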
Reducing Biases in XBT Measurements by Including Discrete Information from Pressure Switches
Reducing Biases in XBT Measurements by Including Discrete Information from Pressure Switches. MARLOS … An effort is underway to improve XBT probes by including pressure switches. Information from these pressure measurements is used to estimate error parameters and to optimize the use of pressure switches in terms of number of switches, optimal …
Chow, R.; Doss, F.W.; Taylor, J.R.; Wong, J.N.
1999-07-02
Optical components needed for high-average-power lasers, such as those developed for Atomic Vapor Laser Isotope Separation (AVLIS), require high levels of performance and reliability. Over the past two decades, optical component requirements for this purpose have been optimized and performance and reliability have been demonstrated. Many of the optical components that are exposed to the high power laser light affect the quality of the beam as it is transported through the system. The specifications for these optics are described including a few parameters not previously reported and some component manufacturing and testing experience. Key words: High-average-power laser, coating efficiency, absorption, optical components
Fact #870: April 27, 2015 Corporate Average Fuel Economy Progress...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Progress, 1978-2014 The Corporate Average Fuel Economy (CAFE) is the sales-weighted harmonic mean fuel economy of a manufacturer's fleet of new cars or light trucks in a certain...
Fact #624: May 24, 2010 Corporate Average Fuel Economy Standards...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
by the fleet of each manufacturer will be determined by computing the sales-weighted harmonic average of the targets applicable to each of the manufacturer's passenger cars and...
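The sales-weighted harmonic average referred to here has a simple closed form: total vehicles divided by the sum of sales over mpg. A minimal sketch (the sales and mpg figures are illustrative, not actual CAFE targets):

```python
def sales_weighted_harmonic_mean(sales, mpgs):
    """Sales-weighted harmonic mean fuel economy (mpg):
    total vehicles / sum(sales_i / mpg_i). The harmonic form reflects
    that fuel consumed scales with 1/mpg, not mpg."""
    return sum(sales) / sum(s / m for s, m in zip(sales, mpgs))
```

Note that equal sales of a 20 mpg and a 30 mpg vehicle average to 24 mpg, not 25: the harmonic mean weights by fuel consumed.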
On the Choice of Average Solar Zenith Angle
Cronin, Timothy W.
Idealized climate modeling studies often choose to neglect spatiotemporal variations in solar radiation, but doing so comes with an important decision about how to average solar radiation in space and time. Since both ...
Does anyone have access to 2012 average residential rates by...
Does anyone have access to 2012 average residential rates by utility company? I'm seeing an inconsistency between the OpenEI website and the EIA 861 data set.
INDIVIDUAL REFORM ELEMENTS: .63 Average course exam score
Colorado at Boulder, University of
Individual reform elements (correlations): .63 average course exam score; .11 in-class clicker score; .02 lecture. Correlations with effort/curricular elements are positive but not high, indicating no individual course reform …
Fact #889: September 7, 2015 Average Diesel Price Lower than...
Broader source: Energy.gov (indexed) [DOE]
Average Diesel Price Lower than Gasoline for the First Time in Six Years (fotw889web.xlsx). Related: Fact #859, February 9, 2015: Excess Supply is the Most Recent …
Bounded Parameter Markov Decision Processes with Average Reward Criterion
Tewari, Ambuj
Bounded Parameter Markov Decision Processes with Average Reward Criterion. Ambuj Tewari and Peter L. Bartlett, pp. 263-277, 2007. (c) Springer-Verlag Berlin Heidelberg 2007.
Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL
2015-01-01
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
Distribution of Wind Power Forecasting Errors from Operational Systems (Presentation)
Hodge, B. M.; Ela, E.; Milligan, M.
2011-10-01
This presentation offers new data and statistical analysis of wind power forecasting errors in operational systems.
Analysis of Solar Two Heliostat Tracking Error Sources
Jones, S.A.; Stone, K.W.
1999-01-28
This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.
Averaged null energy condition violation in a conformally flat spacetime
Urban, Douglas; Olum, Ken D.
2010-01-15
We show that the averaged null energy condition can be violated by a conformally coupled scalar field in a conformally flat spacetime in 3+1 dimensions. The violation is dependent on the quantum state and can be made as large as desired. It does not arise from the presence of anomalies, although anomalous violations are also possible. Since all geodesics in conformally flat spacetimes are achronal, the achronal averaged null energy condition is likewise violated.
Flavor Physics Data from the Heavy Flavor Averaging Group (HFAG)
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
The Heavy Flavor Averaging Group (HFAG) was established at the May 2002 Flavor Physics and CP Violation Conference in Philadelphia, and continues the LEP Heavy Flavor Steering Group's tradition of providing regular updates to the world averages of heavy flavor quantities. Data are provided by six subgroups that each focus on a different set of heavy flavor measurements: B lifetimes and oscillation parameters, Semi-leptonic B decays, Rare B decays, Unitarity triangle parameters, B decays to charm final states, and Charm Physics.
Makarenkov, Vladimir
…mental data requires an efficient automatic routine for the selection of hits. Unfortunately, random and systematic errors can …
Method and system for modulation of gain suppression in high average power laser systems
Bayramian, Andrew James (Manteca, CA)
2012-07-31
A high average power laser system with modulated gain suppression includes an input aperture associated with a first laser beam extraction path and an output aperture associated with the first laser beam extraction path. The system also includes a pinhole creation laser having an optical output directed along a pinhole creation path and an absorbing material positioned along both the first laser beam extraction path and the pinhole creation path. The system further includes a mechanism operable to translate the absorbing material in a direction crossing the first laser beam extraction laser path and a controller operable to modulate the second laser beam.
Detecting Soft Errors in Stencil based Computations
Sharma, V.; Gopalkrishnan, G.; Bronevetsky, G.
2015-05-06
Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
Error field penetration and locking to the backward propagating wave
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
Gross error detection in process data
Singh, Gurmeet
1992-01-01
(…, 1991), with many optimum properties, seems to have been untapped by chemical engineers. We first review the background of Hotelling's T² test (a generalization of Student's t test) and present relevant properties of the test. [The remainder of this excerpt is thesis front matter: "Gross Error Detection in Process Data," a thesis by Gurmeet Singh, Department of Chemical Engineering, approved as to style and content by Ralph E. White (Chair of Committee), Michael Nikoloau (Member), Richard B. Griffin (Member), and R. W. Flummerfelt (Head…).]
Improving Memory Error Handling Using Linux
Carlton, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Blanchard, Sean P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Debardeleben, Nathan A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-07-25
As supercomputers continue to get faster and more powerful, they will also have more nodes. If nothing is done, the amount of memory in supercomputer clusters will soon grow so large that memory failures cannot be managed by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process-oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. Offlining memory pages simplifies error handling and reduces both the hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers: without memory error handling, it will not be feasible to manually replace the DIMMs that will fail daily on a machine containing 32-128 petabytes of memory. Testing reveals that the process of offlining memory pages works and is relatively simple to use. As more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
High average power scaleable thin-disk laser
Beach, Raymond J. (Livermore, CA); Honea, Eric C. (Sunol, CA); Bibeau, Camille (Dublin, CA); Payne, Stephen A. (Castro Valley, CA); Powell, Howard (Livermore, CA); Krupke, William F. (Pleasanton, CA); Sutton, Steven B. (Manteca, CA)
2002-01-01
Using a thin disk laser gain element with an undoped cap layer enables the scaling of lasers to extremely high average output power values. Ordinarily, the power scaling of such thin disk lasers is limited by the deleterious effects of amplified spontaneous emission. By using an undoped cap layer diffusion bonded to the thin disk, the onset of amplified spontaneous emission does not occur as readily as if no cap layer is used, and much larger transverse thin disks can be effectively used as laser gain elements. This invention can be used as a high average power laser for material processing applications as well as for weapon and air defense applications.
Scheme for precise correction of orbit variation caused by dipole error field of insertion device
Nakatani, T.; Agui, A.; Aoyagi, H.; Matsushita, T.; Takao, M.; Takeuchi, M.; Yoshigoe, A.; Tanaka, H.
2005-05-15
We developed a scheme for precisely correcting the orbit variation caused by a dipole error field of an insertion device (ID) in a storage ring and investigated its performance. The key point for achieving the precise correction is to extract the variation of the beam orbit caused by the change of the ID error field from the observed variation. We periodically change parameters such as the gap and phase of the specified ID with a mirror-symmetric pattern over the measurement period to modulate the variation. The orbit variation is measured using conventional wide-frequency-band detectors and then the induced variation is extracted precisely through averaging and filtering procedures. Furthermore, the mirror-symmetric pattern enables us to independently extract the orbit variations caused by a static error field and by a dynamic one, e.g., an error field induced by the dynamical change of the ID gap or phase parameter. We built a time synchronization measurement system with a sampling rate of 100 Hz and applied the scheme to the correction of the orbit variation caused by the error field of an APPLE-2-type undulator installed in the SPring-8 storage ring. The result shows that the developed scheme markedly improves the correction performance and suppresses the orbit variation caused by the ID error field down to the order of submicron. This scheme is applicable not only to the correction of the orbit variation caused by a special ID, the gap or phase of which is periodically changed during an experiment, but also to the correction of the orbit variation caused by a conventional ID which is used with a fixed gap and phase.
The High Average Power Laser Program 15th HAPL meeting
Aug 8 & 9, 2006, General Atomics. The HAPL team is developing the science, technology and architecture needed for a laser… [Agenda fragments: Scientific Inst…; 16. Optiswitch Technology; 17. ESLI; Electricity Generator; Reaction…]
FOCI RESEARCH BENEFITS FISHERIES MANAGEMENT 1993 Recruitment Forecast -Average
…Marine Fisheries Service (NMFS) advises the North Pacific Fisheries Management Council using a "stock…" …data but addresses the autocorrelation of recruitment. In addition, it directly predicts recruitment… …average 1991 year class, and a strong 1992 year class. In 1993 the transfer function model predicted…
Parity-violating anomalies and the stationarity of stochastic averages
Reuter, M.
1988-01-15
Within the framework of stochastic quantization the parity-violating anomalies in odd space-time dimensions are derived from the asymptotic stationarity of the stochastic average of a certain fermion bilinear. Contrary to earlier attempts, this method yields the correct anomalies for both massive and massless fermions.
Probabilistic Wind Vector Forecasting Using Ensembles and Bayesian Model Averaging
Raftery, Adrian
J. McLean… (2011, in final form 26 May 2012). Abstract: Probabilistic forecasts of wind vectors are becoming critical… …with univariate quantities, statistical approaches to wind vector forecasting must be based on bivariate…
Probabilistic Wind Speed Forecasting Using Ensembles and Bayesian Model Averaging
Raftery, Adrian
J. Mc… …in the context of wind power, where underforecasting and overforecasting carry different financial penalties, calibrated and sharp probabilistic forecasts can help to make wind power a more financially competitive alter…
Fact #693: September 19, 2011 Average Vehicle Footprint for Cars...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Supporting Information: Average Vehicle Footprint (square feet), 2008-2010

Model Year   Car    Light Truck   All Light Vehicles
2008         45.4   53.0          49.0
2009         45.2   52.7          48.2
2010         45.2   54.0          …
Prediction in moving average processes Anton Schick and Wolfgang Wefelmeyer
Wefelmeyer, Wolfgang
Abstract: …(y + …(x1, …, xr)) dF(y)… …can be estimated at the "parametric" root-n rate. (The research of A. Schick was partially supported by NSF Grant DMS-0405791.)
Optimal Control with Weighted Average Costs and Temporal Logic Specifications
Murray, Richard M.
Eric M. Wolff (Control and Dynamical Systems, California Institute of Technology, Pasadena, California 91125; ewolff@caltech.edu) and Ufuk Topcu (Control and Dynamical Systems, California Institute of Technology, Pasadena, California 91125).
Progress in Understanding Error-field Physics in NSTX Spherical Torus Plasmas
E. Menard, R.E. Bell, D.A. Gates, S.P. Gerhardt, J.-K. Park, S.A. Sabbagh, J.W. Berkery, A. Egan, J. Kallman, S.M. Kaye, B. LeBlanc, Y.Q. Liu, A. Sontag, D. Swanson, H. Yuh, W. Zhu and the NSTX Research Team
2010-05-19
The low aspect ratio, low magnetic field, and wide range of plasma beta of NSTX plasmas provide new insight into the origins and effects of magnetic field errors. An extensive array of magnetic sensors has been used to analyze error fields, to measure error field amplification, and to detect resistive wall modes in real time. The measured normalized error-field threshold for the onset of locked modes shows a linear scaling with plasma density, a weak to inverse dependence on toroidal field, and a positive scaling with magnetic shear. These results extrapolate to a favorable error field threshold for ITER. For these low-beta locked-mode plasmas, perturbed equilibrium calculations find that the plasma response must be included to explain the empirically determined optimal correction of NSTX error fields. In high-beta NSTX plasmas exceeding the n=1 no-wall stability limit where the RWM is stabilized by plasma rotation, active suppression of n=1 amplified error fields and the correction of recently discovered intrinsic n=3 error fields have led to sustained high rotation and record durations free of low-frequency core MHD activity. For sustained rotational stabilization of the n=1 RWM, both the rotation threshold and magnitude of the amplification are important. At fixed normalized dissipation, kinetic damping models predict rotation thresholds for RWM stabilization to scale nearly linearly with particle orbit frequency. Studies for NSTX find that orbit frequencies computed in general geometry can deviate significantly from those computed in the high aspect ratio and circular plasma cross-section limit, and these differences can strongly influence the predicted RWM stability. The measured and predicted RWM stability is found to be very sensitive to the E × B rotation profile near the plasma edge, and the measured critical rotation for the RWM is approximately a factor of two higher than predicted by the MARS-F code using the semi-kinetic damping model.
New insights on numerical error in symplectic integration
Hugo Jiménez-Pérez; Jean-Pierre Vilotte; Barbara Romanowicz
2015-08-13
We implement and investigate the numerical properties of a new family of integrators connecting both variants of the symplectic Euler schemes, and including an alternative to the classical symplectic mid-point scheme, with some additional terms. This family is derived from a new method, introduced in a previous study, for generating symplectic integrators based on the concept of special symplectic manifold. The use of symplectic rotations and a particular type of projection keeps the whole procedure within the symplectic framework. We show that it is possible to define a set of parameters that control the additional terms, providing a way of "tuning" these new symplectic schemes. We test the "tuned" symplectic integrators on the perturbed pendulum and compare their behavior with an explicit scheme for perturbed systems. Remarkably, for the given examples, the error in the energy integral can be reduced considerably. There is a natural geometrical explanation, sketched at the end of this paper; this is the subject of a parallel article where a finer analysis is performed. Numerical results obtained in this paper open a new point of view on symplectic integrators and Hamiltonian error.
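As a baseline for the energy behavior discussed above, here is a minimal sketch (the classical symplectic Euler scheme on the plain pendulum, not the paper's new family): the energy error stays bounded over long integrations instead of drifting.

```python
import numpy as np

# Minimal illustration (not the paper's integrators): symplectic Euler on the
# pendulum H(q, p) = p**2/2 - cos(q). The energy error remains bounded, in
# contrast with explicit Euler, whose energy drifts secularly.

def energy(q, p):
    return 0.5 * p**2 - np.cos(q)

h, n = 0.05, 20000
q, p = 1.0, 0.0
E0 = energy(q, p)
errs = []
for _ in range(n):
    p = p - h * np.sin(q)   # kick with the old q ...
    q = q + h * p           # ... then drift with the new p
    errs.append(abs(energy(q, p) - E0))

print(max(errs) < 0.05)   # energy error stays small over 20000 steps
```

The kick-then-drift ordering is what makes the map symplectic; swapping the updates gives the other symplectic Euler variant mentioned in the abstract.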
Decoherence and dephasing errors caused by the dc Stark effect...
Office of Scientific and Technical Information (OSTI)
Title: Decoherence and dephasing errors caused by the dc Stark effect in rapid ion transport.
Human error contribution to nuclear materials-handling events
Sutton, Bradley (Bradley Jordan)
2007-01-01
This thesis analyzes a sample of 15 fuel-handling events from the past ten years at commercial nuclear reactors with significant human error contributions in order to detail the contribution of human error to fuel-handling ...
Prices include compostable serviceware and linen tablecloths
California at Davis, University of
APPETIZERS: Prices include compostable serviceware and linen tablecloths for the food tables. (…ucdavis.edu) BUTTERNUT SQUASH & BLACK BEAN ENCHILADAS. BUFFETS: Prices include compostable serviceware and linen…
Error Reduction for Weigh-In-Motion
Hively, Lee M; Abercrombie, Robert K; Scudiere, Matthew B; Sheldon, Frederick T
2009-01-01
Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).
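The oscillation-removal idea can be illustrated with a toy signal (illustrative numbers only, not ORNL's actual filter): averaging over a whole number of bounce periods cancels the oscillation that biases a naive average.

```python
import numpy as np

# Illustrative sketch (not ORNL's filter): a bouncing vehicle makes the
# instantaneous axle load oscillate around the true static weight; averaging
# over an integer number of oscillation periods cancels the oscillation.

rng = np.random.default_rng(1)
W = 10000.0                      # true static axle weight, kg (hypothetical)
f, fs, T = 2.5, 1000.0, 2.0      # bounce freq (Hz), sample rate (Hz), record (s)
t = np.arange(0, T, 1 / fs)
signal = W * (1 + 0.08 * np.sin(2 * np.pi * f * t)) + rng.normal(0, 20, t.size)

# Naive estimate: mean over an arbitrary window -> biased by the partial cycle
naive = signal[: int(0.7 * fs)].mean()

# Filtered estimate: average over a whole number of bounce periods (5 cycles)
n = int(fs * 5 / f)              # samples in exactly 5 periods
filtered = signal[:n].mean()

print(abs(filtered - W) / W < 0.001)   # within the 0.1 % target above
```

In practice the bounce frequency is unknown and must itself be estimated from the record, which is where the real filtering effort goes.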
Forward Error Correction and Functional Programming
Bull, Tristan Michael
2011-04-25
[Contents fragment: 6.1 Annapolis Micro Wildstar 5 DDR2 DRAM Interface; 6.2 Dual-Port DRAM Wrapper; 6.3 Kansas Lava DRAM Interface; 7 Conclusion; 7.1 Future Work.] …codewords. We ran the simulation using input data with energy-per-bit to noise power spectral density ratios (Eb/N0) of 3 dB to 6 dB in 0.5 dB increments. For each Eb/N0 value, we ran the simulation until at least 25,000 bit errors were recorded. Results…
Unitary-process discrimination with error margin
T. Hashimoto; A. Hayashi; M. Hayashi; M. Horibe
2010-06-10
We investigate a discrimination scheme between unitary processes. By introducing a margin for the probability of erroneous guess, this scheme interpolates the two standard discrimination schemes: minimum-error and unambiguous discrimination. We present solutions for two cases. One is the case of two unitary processes with general prior probabilities. The other is the case with a group symmetry: the processes comprise a projective representation of a finite group. In the latter case, we found that unambiguous discrimination is a kind of "all or nothing": the maximum success probability is either 0 or 1. We also closely analyze how entanglement with an auxiliary system improves discrimination performance.
On the Error in QR Integration
Dieci, Luca; Van Vleck, Erik
2008-03-07
Society for Industrial and Applied Mathematics, Vol. 46, No. 3, pp. 1166–1189. "On the Error in QR Integration," Luca Dieci and Erik S. Van Vleck. Abstract: An important change of variables for a linear time-varying system x′ = A(t)x, t ≥ 0, is that induced… diag(X) is the matrix comprising the diagonal part of X, the rest being all 0's; upp(X) is the matrix comprising the upper triangular part of X, the rest being all 0's; and low(X) is the matrix comprising the strictly lower triangular part of X, the rest being all 0's…
Comaskey, Brian J. (Walnut Creek, CA); Ault, Earl R. (Livermore, CA); Kuklo, Thomas C. (Oakdale, CA)
2005-07-05
A high average power, low optical distortion laser gain media is based on a flowing liquid media. A diode laser pumping device with tailored irradiance excites the laser active atom, ion or molecule within the liquid media. A laser active component of the liquid media exhibits energy storage times longer than or comparable to the thermal optical response time of the liquid. A circulation system that provides a closed loop for mixing and circulating the lasing liquid into and out of the optical cavity includes a pump, a diffuser, and a heat exchanger. A liquid flow gain cell includes flow straighteners and flow channel compression.
An Analysis of Air Passenger Average Trip Lengths and Fare Levels in US Domestic Markets
Huang, Sheng-Chen Alex
2000-01-01
University of California at Berkeley. An Analysis of Air Passenger Average…
Bolstered Error Estimation Ulisses Braga-Neto a,c
Braga-Neto, Ulisses
…the bolstered error estimators proposed in this paper, as part of a larger library for classification and error… …of the data. It has a direct geometric interpretation and can be easily applied to any classification rule… …as smoothed error estimation. In some important cases, such as a linear classification rule with a Gaussian…
A Taxonomy of Number Entry Error Sarah Wiseman
Subramanian, Sriram
Sarah Wiseman (UCLIC, MPEB, Malet Place, London, WC1E 7JE; sarah…) …and the subsequent process of creating a taxonomy of errors from the information gathered. A total of 345 errors were… These codes are then organised into a taxonomy similar to that of Zhang et al. (2004). We show how…
Anomalous transport and observable average in the standard map
Lydia Bouchara; Ouerdia Ourrad; Sandro Vaienti; Xavier Leoncini
2015-09-02
The distribution of finite time observable averages and transport in low dimensional Hamiltonian systems is studied. Finite time observable average distributions are computed, from which an exponent $\\alpha$ characteristic of how the maximum of the distributions scales with time is extracted. To link this exponent to transport properties, the characteristic exponent $\\mu(q)$ of the time evolution of the different moments of order $q$ related to transport are computed. As a testbed for our study the standard map is used. The stochasticity parameter $K$ is chosen so that either phase space is mixed with a chaotic sea and islands of stability or with only a chaotic sea. Our observations lead to a proposition of a law relating the slope in $q=0$ of the function $\\mu(q)$ with the exponent $\\alpha$.
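The testbed described above is easy to reproduce in outline; a minimal sketch (observable and parameter values chosen for illustration, not the paper's exact settings) iterates the standard map and accumulates finite-time averages of cos(θ):

```python
import numpy as np

# Sketch of the paper's testbed: iterate the Chirikov standard map and build
# the distribution of finite-time averages of an observable (here cos(theta)).

def standard_map(p, theta, K):
    p = (p + K * np.sin(theta)) % (2 * np.pi)
    theta = (theta + p) % (2 * np.pi)
    return p, theta

rng = np.random.default_rng(2)
K, T, n_traj = 7.0, 1000, 500        # large K -> essentially one chaotic sea
p = rng.uniform(0, 2 * np.pi, n_traj)
theta = rng.uniform(0, 2 * np.pi, n_traj)

acc = np.zeros(n_traj)
for _ in range(T):
    p, theta = standard_map(p, theta, K)
    acc += np.cos(theta)
finite_time_avg = acc / T

# In the fully chaotic regime the time average of cos(theta) concentrates near 0
print(abs(finite_time_avg.mean()) < 0.1)
```

The mixed-phase-space regime studied in the paper corresponds to smaller K, where the distribution of `finite_time_avg` develops the anomalous scaling the abstract describes.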
Modeling an Application's Theoretical Minimum and Average Transactional Response Times
Paiz, Mary Rose
2015-04-01
The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
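A much-simplified stand-in for the report's two thresholds (the report fits a non-stationary GEV to daily minima and a seasonal ARIMA to averages; plain empirical quantiles on synthetic data are used here just to illustrate the mechanics):

```python
import numpy as np

# Simplified stand-in for the report's approach: empirical quantiles on
# synthetic response times illustrate a lower threshold on daily minima and an
# upper threshold on daily averages. All numbers are hypothetical.

rng = np.random.default_rng(3)
days, per_day = 90, 200
resp = rng.lognormal(mean=np.log(120), sigma=0.3, size=(days, per_day))  # ms

daily_min = resp.min(axis=1)
daily_avg = resp.mean(axis=1)

lower = np.quantile(daily_min, 0.01)    # alerts on "impossibly fast" transactions
upper = np.quantile(daily_avg, 0.99)    # alerts on unusually slow days

today_avg = 1.5 * daily_avg.mean()      # a synthetic slow day
print(today_avg > upper)                # flagged as anomalous
```

The GEV and seasonal-ARIMA machinery in the report exists precisely because these empirical quantiles ignore trend and seasonality in real workloads.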
Average Interpolating Wavelets on Point Clouds and Graphs
Rustamov, Raif M
2011-01-01
We introduce a new wavelet transform suitable for analyzing functions on point clouds and graphs. Our construction is based on a generalization of the average interpolating refinement scheme of Donoho. The most important ingredient of the original scheme that needs to be altered is the choice of the interpolant. Here, we define the interpolant as the minimizer of a smoothness functional, namely a generalization of the Laplacian energy, subject to the averaging constraints. In the continuous setting, we derive a formula for the optimal solution in terms of the poly-harmonic Green's function. The form of this solution is used to motivate our construction in the setting of graphs and point clouds. We highlight the empirical convergence of our refinement scheme and the potential applications of the resulting wavelet transform through experiments on a number of data sets.
ERROR-TOLERANT MULTI-MODAL SENSOR FUSION (SHORT PAPER) Farinaz Koushanfar*
Farinaz Koushanfar, Sasha Slijepcevic… …ESN tasks is multi-modal sensor fusion, where data from sensors of different modalities are combined… …ESN applications, including multi-modal sensor fusion, is to ensure that all of the techniques…
Average dynamics of a finite set of coupled phase oscillators
Dima, Germán C.; Mindlin, Gabriel B.
2014-06-15
We study the solutions of a dynamical system describing the average activity of an infinitely large set of driven coupled excitable units. We compared their topological organization with that reconstructed from the numerical integration of finite sets. In this way, we present a strategy to establish the pertinence of approximating the dynamics of finite sets of coupled nonlinear units by the dynamics of its infinitely large surrogate.
Averaging cross section data so we can fit it
Brown, D.
2014-10-23
The ^{56}Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE, a Hauser-Feshbach-theory-based nuclear reaction code, requires cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say, above 500 keV).
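The Lorentzian smoothing step can be sketched as a convolution (illustrative energies, widths, and cross-section shape, not the evaluated 56Fe data):

```python
import numpy as np

# Sketch of the smoothing step described above: convolve a fluctuating cross
# section with a normalized Lorentzian profile before comparing against the
# smooth Hauser-Feshbach (EMPIRE) calculation. All numbers are illustrative.

E = np.linspace(0.5, 2.0, 3000)                     # MeV, fast region
rng = np.random.default_rng(4)
sigma = 1.0 + 0.3 * np.sin(40 * E) + rng.normal(0, 0.05, E.size)  # barns

gamma = 0.05                                        # Lorentzian half-width, MeV

def lorentzian_smooth(E, sigma, gamma):
    dE = E[1] - E[0]
    smoothed = np.empty_like(sigma)
    for i, e0 in enumerate(E):
        w = (gamma / np.pi) / ((E - e0) ** 2 + gamma ** 2)
        w /= w.sum() * dE                           # renormalize on finite grid
        smoothed[i] = np.sum(w * sigma) * dE
    return smoothed

smooth = lorentzian_smooth(E, sigma, gamma)
# Fluctuations are strongly damped while the average level (~1 b) is preserved
print(smooth.std() < sigma.std())
```

Renormalizing the kernel on the finite grid keeps the average cross section unbiased near the interval edges, where the Lorentzian tails are truncated.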
Integrating human related errors with technical errors to determine causes behind offshore accidents
Aamodt, Agnar
…errors were embedded as an integral part of the oil-well drilling operation. To reduce the number… …assessment of the failure. The method is based on a knowledge model of the oil-well drilling process. …of non-productive time (NPT) during oil-well drilling. NPT exhibits a much lower declining trend than…
In Search of a Taxonomy for Classifying Qualitative Spreadsheet Errors
Przasnyski, Zbigniew; Seal, Kala Chand
2011-01-01
Most organizations use large and complex spreadsheets that are embedded in their mission-critical processes and are used for decision-making purposes. Identification of the various types of errors that can be present in these spreadsheets is, therefore, an important control that organizations can use to govern their spreadsheets. In this paper, we propose a taxonomy for categorizing qualitative errors in spreadsheet models that offers a framework for evaluating the readiness of a spreadsheet model before it is released for use by others in the organization. The classification was developed based on types of qualitative errors identified in the literature and errors committed by end-users in developing a spreadsheet model for Panko's (1996) "Wall problem". Closer inspection of the errors reveals four logical groupings of the errors creating four categories of qualitative errors. The usability and limitations of the proposed taxonomy and areas for future extension are discussed.
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Beckerman, M.; Jones, J.P.
1999-02-01
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.
Pressure Change Measurement Leak Testing Errors
Pryor, Jeff M; Walker, William C
2014-01-01
A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monoatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rate with formulas altered for those types of tests using the same methodology.
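A worked ideal-gas example of the compensation issue (illustrative numbers): the quantity that tracks leakage is the mole count n = PV/(RT), not pressure alone, so an uncorrected temperature change can mimic or mask a leak.

```python
# Worked example of the compensation discussed above: with an ideal gas, the
# mole count n = PV/(RT) tracks leakage. A temperature change between readings
# mimics a leak if left uncorrected. All numbers are illustrative.
R = 8.314          # J/(mol K)
V = 0.5            # vessel volume, m^3 (hypothetical)

P1, T1 = 500_000.0, 293.15     # start: Pa, K
P2, T2 = 495_000.0, 290.50     # 24 h later: pressure AND temperature dropped

naive_loss = (P1 - P2) * V / (R * T1)              # treats all dP as leakage
true_loss = P1 * V / (R * T1) - P2 * V / (R * T2)  # moles actually lost

print(round(naive_loss, 3), round(true_loss, 3))
```

Here the naive pressure-only estimate reports roughly ten times the moles actually lost, because most of the pressure drop is thermal.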
Quantum Error Correction with magnetic molecules
José J. Baldoví; Salvador Cardona-Serra; Juan M. Clemente-Juan; Luis Escalera-Moreno; Alejandro Gaita-Ariño; Guillermo Mínguez Espallargas
2014-08-22
Quantum algorithms often assume independent spin qubits to produce trivial $|\\uparrow\\rangle=|0\\rangle$, $|\\downarrow\\rangle=|1\\rangle$ mappings. This can be unrealistic in many solid-state implementations with sizeable magnetic interactions. Here we show that the lower part of the spectrum of a molecule containing three exchange-coupled metal ions with $S=1/2$ and $I=1/2$ is equivalent to nine electron-nuclear qubits. We derive the relation between spin states and qubit states in reasonable parameter ranges for the rare earth $^{159}$Tb$^{3+}$ and for the transition metal Cu$^{2+}$, and study the possibility to implement Shor's Quantum Error Correction code on such a molecule. We also discuss recently developed molecular systems that could be adequate from an experimental point of view.
A. Frommer; K. Kahl; Th. Lippert; H. Rittich
2012-12-03
The Lanczos process constructs a sequence of orthonormal vectors v_m spanning a nested sequence of Krylov subspaces generated by a hermitian matrix A and some starting vector b. In this paper we show how to cheaply recover a secondary Lanczos process starting at an arbitrary Lanczos vector v_m. This secondary process is then used to efficiently obtain computable error estimates and error bounds for the Lanczos approximations to the action of a rational matrix function on a vector. This includes, as a special case, the Lanczos approximation to the solution of a linear system Ax = b. Our approach uses the relation between the Lanczos process and quadrature as developed by Golub and Meurant. It is different from methods known so far because of its use of the secondary Lanczos process. With our approach, it is now in particular possible to efficiently obtain upper bounds for the error in the 2-norm, provided a lower bound on the smallest eigenvalue of $A$ is known. This holds in particular for a large class of rational matrix functions including best rational approximations to the inverse square root and the sign function. We compare our approach to other existing error estimates and bounds known from the literature and include results of several numerical experiments.
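The object being approximated can be sketched as follows (a plain Lanczos approximation to A⁻¹b with the naive successive-difference error estimate, not the paper's secondary-process bounds):

```python
import numpy as np

# Hedged sketch of the underlying object: the Lanczos approximation to
# f(A)b = A^{-1} b, with the simple successive-difference error estimate
# (NOT the paper's secondary-process bounds). Matrix and vector are synthetic.

rng = np.random.default_rng(7)
n = 200
Q = np.linalg.qr(rng.normal(size=(n, n)))[0]
A = Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T      # hermitian, positive definite
b = rng.normal(size=n)
x_exact = np.linalg.solve(A, b)

def lanczos_inv(A, b, m):
    """m-step Lanczos approximation to A^{-1} b."""
    V = np.zeros((len(b), m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    w = A @ V[:, 0]
    alpha[0] = V[:, 0] @ w
    w -= alpha[0] * V[:, 0]
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        V[:, j] = w / beta[j - 1]
        w = A @ V[:, j] - beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    e1 = np.zeros(m)
    e1[0] = np.linalg.norm(b)
    return V @ np.linalg.solve(T, e1)

x20, x25 = lanczos_inv(A, b, 20), lanczos_inv(A, b, 25)
est = np.linalg.norm(x25 - x20)          # cheap error estimate for x20
err = np.linalg.norm(x_exact - x20)      # true error
print(est <= 10 * err and err <= 10 * est)   # same order of magnitude
```

The successive-difference estimate is only heuristic; the point of the paper is to replace it with computable estimates and genuine upper bounds.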
Huang, Weidong
2011-01-01
Surface slope error of a concentrator is one of the main factors influencing the performance of solar concentrating collectors: it deviates the reflected ray and reduces the intercepted radiation. This paper presents a general equation, derived through geometrical optics, for the standard deviation of the reflected-ray error as a function of slope error, applies the equation to five kinds of solar concentrating reflectors, and provides typical results. The results indicate that the slope error is transferred to the reflected ray amplified by a factor of more than two when the incidence angle is greater than zero. The equation for the reflected-ray error is generally applicable to all reflection surfaces and can also be used to control the error when designing an abaxial (off-axis) optical system.
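A quick Monte Carlo check of the transfer factor at normal incidence, where the in-plane analysis gives exactly 2 (a minimal sketch, not the paper's general equation):

```python
import numpy as np

# Minimal Monte Carlo check of the geometry above at normal incidence: a small
# random tilt of the mirror normal (the slope error) rotates the reflected ray
# by twice that tilt, so the reflected-ray RMS error is 2x the slope RMS error.

rng = np.random.default_rng(5)
s = 1e-3                                    # slope error std, rad (illustrative)
n_rays = 200_000

eps = rng.normal(0, s, (n_rays, 2))         # normal tilts about x and y
nx, ny = eps[:, 0], eps[:, 1]
norm = np.sqrt(nx**2 + ny**2 + 1.0)
normals = np.column_stack([nx, ny, np.ones(n_rays)]) / norm[:, None]

d = np.array([0.0, 0.0, -1.0])              # normal incidence
r = d - 2 * (normals @ d)[:, None] * normals
ray_err = np.arccos(np.clip(r[:, 2], -1, 1))   # angle from nominal ray (+z)

slope_rms = np.sqrt((eps**2).sum(axis=1).mean())
ray_rms = np.sqrt((ray_err**2).mean())
print(round(ray_rms / slope_rms, 2))        # -> 2.0
```

At oblique incidence the paper's equation governs how the two tilt components are weighted; this check only verifies the factor-of-two baseline.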
Tartakovsky, Daniel M.
…National Laboratory, Los Alamos, New Mexico; Shlomo P. Neuman and Zhiming Lu, Department of Hydrology… …and the data are corrupted by experimental and interpretive errors. These errors and uncertainties render…
Averaged null energy condition and quantum inequalities in curved spacetime
Eleni-Alexandra Kontou
2015-07-22
The Averaged Null Energy Condition (ANEC) states that the integral along a complete null geodesic of the projection of the stress-energy tensor onto the tangent vector to the geodesic cannot be negative. ANEC can be used to rule out spacetimes with exotic phenomena, such as closed timelike curves, superluminal travel and wormholes. We prove that ANEC is obeyed by a minimally-coupled, free quantum scalar field on any achronal null geodesic (not two points can be connected with a timelike curve) surrounded by a tubular neighborhood whose curvature is produced by a classical source. To prove ANEC we use a null-projected quantum inequality, which provides constraints on how negative the weighted average of the renormalized stress-energy tensor of a quantum field can be. Starting with a general result of Fewster and Smith, we first derive a timelike projected quantum inequality for a minimally-coupled scalar field on flat spacetime with a background potential. Using that result we proceed to find the bound of a quantum inequality on a geodesic in a spacetime with small curvature, working to first order in the Ricci tensor and its derivatives. The last step is to derive a bound for the null-projected quantum inequality on a general timelike path. Finally we use that result to prove achronal ANEC in spacetimes with small curvature.
High average power laser using a transverse flowing liquid host
Ault, Earl R.; Comaskey, Brian J.; Kuklo, Thomas C.
2003-07-29
A laser includes an optical cavity. A diode laser pumping device is located within the optical cavity. An aprotic lasing liquid containing neodymium rare earth ions fills the optical cavity. A circulation system that provides a closed loop for circulating the aprotic lasing liquid into and out of the optical cavity includes a pump and a heat exchanger.
INSTRUMENTATION, INCLUDING NUCLEAR AND PARTICLE DETECTORS; RADIATION
Office of Scientific and Technical Information (OSTI)
…interval technical basis document (Chiaro, P.J. Jr.). 44 INSTRUMENTATION, INCLUDING NUCLEAR AND PARTICLE DETECTORS; RADIATION DETECTORS; RADIATION MONITORS; DOSEMETERS; …
State discrimination with error margin and its locality
A. Hayashi; T. Hashimoto; M. Horibe
2008-07-10
There are two common settings in a quantum-state discrimination problem. One is minimum-error discrimination, where a wrong guess (error) is allowed and the discrimination success probability is maximized. The other is unambiguous discrimination, where errors are not allowed but the inconclusive result "I don't know" is possible. We investigate the discrimination problem with a finite margin imposed on the error probability. The two common settings correspond to the error margins 1 and 0. For an arbitrary error margin, we determine the optimal discrimination probability for two pure states with equal occurrence probabilities. We also consider the case where the states to be discriminated are multipartite, and show that the optimal discrimination probability can be achieved by local operations and classical communication.
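The two limiting error margins mentioned above have closed forms for two equally likely pure states, which a few lines can verify numerically (Helstrom minimum-error probability (1/2)(1 + sqrt(1 - |<psi1|psi2>|^2)) and unambiguous success probability 1 - |<psi1|psi2>|):

```python
import numpy as np

# Numeric check of the two limiting cases (error margins 1 and 0) for two pure
# states with equal priors; states are chosen arbitrarily for illustration.

theta = 0.3
psi1 = np.array([1.0, 0.0])
psi2 = np.array([np.cos(theta), np.sin(theta)])
overlap = abs(psi1 @ psi2)

helstrom = 0.5 * (1 + np.sqrt(1 - overlap**2))

# Helstrom probability via the optimal measurement: 1/2 plus the sum of the
# positive eigenvalues of (rho1 - rho2)/2
M = 0.5 * (np.outer(psi1, psi1) - np.outer(psi2, psi2))
eig = np.linalg.eigvalsh(M)
helstrom_meas = 0.5 + eig[eig > 0].sum()

unambiguous = 1 - overlap
print(np.isclose(helstrom, helstrom_meas), helstrom > unambiguous)
```

The paper's intermediate-margin solutions interpolate between these two values as the allowed error probability varies.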
Error models in quantum computation: an application of model selection
Lucia Schwarz; Steven van Enk
2013-09-04
Threshold theorems for fault-tolerant quantum computing assume that errors are of certain types. But how would one detect whether errors of the "wrong" type occur in one's experiment, especially if one does not even know what type of error to look for? The problem is that for many qubits a full state description is infeasible to analyze, and a full process description even more so. As a result, one simply cannot detect all types of errors. Here we show through a quantum state estimation example (on up to 25 qubits) how to attack this problem using model selection. We use, in particular, the Akaike Information Criterion. The example indicates that the number of measurements that one has to perform before noticing errors of the wrong type scales polynomially both with the number of qubits and with the error size.
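The Akaike Information Criterion used above trades goodness of fit against parameter count, AIC = 2k - 2 ln L-hat, with lower values preferred. A toy sketch of that comparison (the binomial "error model" and all variable names are illustrative, not the paper's 25-qubit setup):

```python
import numpy as np

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2 ln(L). Lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Toy data: 100 binary measurement outcomes with an apparent bias.
rng = np.random.default_rng(0)
data = rng.random(100) < 0.6   # true rate 0.6, unknown to the models
k, n = data.sum(), data.size

# Model A ("expected" noise): fixed rate p = 0.5, zero free parameters.
llA = k * np.log(0.5) + (n - k) * np.log(0.5)

# Model B ("wrong-type" noise allowed): rate fitted by maximum likelihood.
p_hat = k / n
llB = k * np.log(p_hat) + (n - k) * np.log(1 - p_hat)

# The extra parameter must "pay for itself" through a better likelihood.
print(aic(llA, 0), aic(llB, 1))
```

With few measurements the two AIC values are close; as data accumulate, the model that allows the wrong-type error wins decisively, which is the detection mechanism the abstract describes.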
Average vertical and zonal F region plasma drifts over Jicamarca
Fejer, B.G.; Gonzalez, S.A. (Utah State Univ., Logan (United States)); de Paula, E.R. (Inst. de Pesquisas Espaciais-INPE, Sao Paulo (Brazil) Utah State Univ., Logan (United States)); Woodman, R.F. (Inst. Geofisico del Peru, Lima (Peru))
1991-08-01
The seasonal averages of the equatorial F region vertical and zonal plasma drifts are determined using extensive incoherent scatter radar observations from Jicamarca during 1968-1988. The late afternoon and nighttime vertical and zonal drifts are strongly dependent on the 10.7-cm solar flux. The authors show that the evening prereversal enhancement of vertical drifts increases linearly with solar flux during equinox but tends to saturate for large fluxes during southern hemisphere winter. They examine in detail, for the first time, the seasonal variation of the zonal plasma drifts and their dependence on solar flux and magnetic activity. The seasonal effects on the zonal drifts are most pronounced in the midnight-morning sector. The nighttime eastward drifts increase with solar flux for all seasons but decrease slightly with magnetic activity. The daytime westward drifts are essentially independent of season, solar cycle, and magnetic activity.
Average System Cost Methodology : Administrator's Record of Decision.
United States. Bonneville Power Administration.
1984-06-01
Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, where retail rate orders of regulatory agencies provide primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of separation procedures for subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and a requirement that each exchanging utility file under the new methodology within 20 days of implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)
RESEARCH ARTICLE Minimization of divergence error in volumetric velocity
Marusic, Ivan
RESEARCH ARTICLE Minimization of divergence error in volumetric velocity measurements Volumetric velocity measurements taken in incompressible fluids are typically hindered by a nonzero
Ramanujam, J. "Ram"
- and a is the average number of transitions per clock phase heuristic for peak and average power cycle at the gate
Course may include: Research in Education
Course may include: Research in Education Statistics in Education Theories of Educational Admin Policy Analysis Sociological Aspects of Education Approaches to Literacy Development Information and Communication Technologies Issues in Education Final Project Seminar Master of Education Educational
Gas storage materials, including hydrogen storage materials
Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji
2014-11-25
A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material, such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.
Gas storage materials, including hydrogen storage materials
Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji
2013-02-19
A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.
Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.; Riihimaki, Laura D.; Michalsky, Joseph; Hodges, G. B.
2014-08-22
We present here a simple retrieval of the areal-averaged and spectrally resolved surface albedo using only ground-based measurements of atmospheric transmission under fully overcast conditions. Our retrieval is based on a one-line equation and widely accepted assumptions regarding the weak spectral dependence of cloud optical properties in the visible and near-infrared spectral range. The feasibility of our approach for the routine determination of albedo is demonstrated for different landscapes with various degrees of heterogeneity using three sets of measurements: (1) spectrally resolved atmospheric transmission from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at wavelengths 415, 500, 615, 673, and 870 nm, (2) tower-based measurements of local surface albedo at the same wavelengths, and (3) areal-averaged surface albedo at four wavelengths (470, 560, 670 and 860 nm) from collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) observations. These integrated datasets cover both long (2008-2013) and short (April-May, 2010) periods at the ARM Southern Great Plains (SGP) site and the NOAA Table Mountain site, respectively. The calculated root mean square error (RMSE), which is defined here as the root mean squared difference between the MODIS-derived surface albedo and the retrieved areal-averaged albedo, is quite small (RMSE ≈ 0.01) and comparable with that obtained previously by other investigators for the shortwave broadband albedo. Good agreement between the tower-based daily averages of surface albedo for the completely overcast and non-overcast conditions is also demonstrated. This agreement suggests that our retrieval, originally developed for overcast conditions, will likely work for non-overcast conditions as well.
Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.; Riihimaki, Laura D.; Michalsky, Joseph; Hodges, G. B.
2014-10-25
We introduce and evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone at five wavelengths (415, 500, 615, 673 and 870 nm), under fully overcast conditions. Our retrieval is based on a one-line semi-analytical equation and widely accepted assumptions regarding the weak spectral dependence of cloud optical properties, such as cloud optical depth and asymmetry parameter, in the visible and near-infrared spectral range. To illustrate the performance of our retrieval, we use as input measurements of spectral atmospheric transmission from the Multi-Filter Rotating Shadowband Radiometer (MFRSR). These MFRSR data are collected at two well-established continental sites in the United States supported by the U.S. Department of Energy’s (DOE’s) Atmospheric Radiation Measurement (ARM) Program and the National Oceanic and Atmospheric Administration (NOAA). The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo. In particular, these comparisons are made at four MFRSR wavelengths (500, 615, 673 and 870 nm) and for four seasons (winter, spring, summer and fall) at the ARM site using multi-year (2008-2013) MFRSR and MODIS data. Good agreement, on average, for these wavelengths results in small values (≈0.01) of the corresponding root mean square errors (RMSEs) for these two sites. The obtained RMSEs are comparable with those obtained previously for the shortwave albedos (MODIS-derived versus tower-measured) for these sites during growing seasons. We also demonstrate good agreement between tower-based daily-averaged surface albedos measured for “nearby” overcast and non-overcast days. Thus, our retrieval, originally developed for overcast conditions, can likely be extended to non-overcast days by interpolating between overcast retrievals.
Dosimetry in Mammography: Average Glandular Dose Based on Homogeneous Phantom
Benevides, Luis A. [Naval Sea Systems Command,1333 Isaac Hull Avenue, Washington Navy Yard, DC 20376 (United States); Hintenlang, David E. [University of Florida, 202 Nuclear Sciences Center, P.O. Box 1183, Gainesville Florida 32611 (United States)
2011-05-05
The objective of this study was to demonstrate that a clinical dosimetry protocol that utilizes a dosimetric breast phantom series based on population anthropometric measurements can reliably predict the average glandular dose (AGD) imparted to the patient during a routine screening mammogram. AGD was calculated using entrance skin exposure and dose conversion factors based on fibroglandular content, compressed breast thickness, mammography unit parameters and modifying parameters for homogeneous phantom (phantom factor), compressed breast lateral dimensions (volume factor) and anatomical features (anatomical factor). The patient fibroglandular content was evaluated using a calibrated modified breast tissue equivalent homogeneous phantom series (BRTES-MOD) designed from anthropomorphic measurements of a screening mammography population and whose elemental composition was referenced to International Commission on Radiation Units and Measurements Report 44 and 46 tissues. The patient fibroglandular content, compressed breast thickness along with unit parameters and spectrum half-value layer were used to derive the currently used dose conversion factor (DgN). The study showed that the use of a homogeneous phantom, patient compressed breast lateral dimensions and patient anatomical features can affect AGD by as much as 12%, 3% and 1%, respectively. The protocol was found to be superior to existing methodologies. The clinical dosimetry protocol developed in this study can reliably predict the AGD imparted to an individual patient during a routine screening mammogram.
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
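The EM iteration for BMA with Gaussian member distributions alternates member responsibilities (E-step) with weight and variance updates (M-step). A minimal sketch under the simplifying assumption of a single shared variance for all members (Raftery et al. and this paper allow richer parameterizations; all names and the synthetic data are illustrative):

```python
import numpy as np

def bma_em(forecasts, obs, n_iter=200):
    """EM estimation of BMA weights and a shared Gaussian spread.

    forecasts: (n_times, n_models) ensemble forecasts
    obs:       (n_times,) verifying observations
    Returns (weights, sigma2)."""
    n, m = forecasts.shape
    w = np.full(m, 1.0 / m)
    sigma2 = np.var(obs - forecasts.mean(axis=1)) + 1e-12
    for _ in range(n_iter):
        # E-step: responsibility of each member for each observation.
        resid2 = (obs[:, None] - forecasts) ** 2
        dens = np.exp(-0.5 * resid2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        z = w * dens
        z /= z.sum(axis=1, keepdims=True)
        # M-step: update weights and the shared variance.
        w = z.mean(axis=0)
        sigma2 = (z * resid2).sum() / n
    return w, sigma2

# Synthetic check: member 0 tracks the obs, member 1 is pure noise.
rng = np.random.default_rng(1)
y = rng.normal(size=500)
f = np.column_stack([y + rng.normal(scale=0.3, size=500),
                     rng.normal(size=500)])
w, s2 = bma_em(f, y)
print(w, s2)   # weight on member 0 should dominate
```

The abstract's point is that this fixed-point iteration can stall in local optima, which is what motivates the DREAM/MCMC alternative.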
High average power magnetic modulator for copper lasers
Cook, E.G.; Ball, D.G.; Birx, D.L.; Branum, J.D.; Peluso, S.E.; Langford, M.D.; Speer, R.D.; Sullivan, J.R.; Woods, P.G.
1991-06-14
Magnetic compression circuits show the promise of long life for operation at high average powers and high repetition rates. When the Atomic Vapor Laser Isotope Separation (AVLIS) Program at Lawrence Livermore National Laboratory needed new modulators to drive their higher power copper lasers in the Laser Demonstration Facility (LDF), existing technology using thyratron switched capacitor inversion circuits did not meet the goal for long lifetimes at the required power levels. We have demonstrated that magnetic compression circuits can achieve this goal. Improving thyratron lifetime is achieved by increasing the thyratron conduction time, thereby reducing the effect of cathode depletion. This paper describes a three stage magnetic modulator designed to provide a 60 kV pulse to a copper laser at a 4.5 kHz repetition rate. This modulator operates at 34 kW input power and has exhibited an MTBF of ~1000 hours when using thyratrons, and even longer MTBFs with a series stack of SCRs for the main switch. Within this paper, the electrical and mechanical designs for the magnetic compression circuits are discussed, as are the important performance parameters of lifetime and jitter. Ancillary circuits such as the charge circuit and reset circuit are shown. 8 refs., 5 figs., 1 tab.
Mutual information, bit error rate and security in Wójcik's scheme
Zhanjun Zhang
2004-02-21
In this paper the correct calculations of the mutual information of the whole transmission and the quantum bit error rate (QBER) are presented. Mistakes in the general conclusions concerning the mutual information, the quantum bit error rate (QBER) and the security in Wójcik's paper [Phys. Rev. Lett. 90, 157901 (2003)] are pointed out.
Kernel Regression with Correlated Errors K. De Brabanter
Kernel Regression with Correlated Errors K. De Brabanter , J. De Brabanter , , J.A.K. Suykens B: It is a well-known problem that obtaining a correct bandwidth in nonparametric regression is difficult support vector machines for regression. Keywords: nonparametric regression, correlated errors, short
Ridge Regression Estimation Approach to Measurement Error Model
Shalabh
Ridge Regression Estimation Approach to Measurement Error Model A.K.Md. Ehsanes Saleh Carleton of the regression parameters is ill conditioned. We consider the Hoerl and Kennard type (1970) ridge regression (RR) modifications of the five quasi- empirical Bayes estimators of the regression parameters of a measurement error
Solving LWE problem with bounded errors in polynomial time
International Association for Cryptologic Research (IACR)
Solving LWE problem with bounded errors in polynomial time Jintai Ding1,2 Southern Chinese call the learning with bounded errors (LWBE) problems, we can solve it with complexity O(nD ). Keywords, this problem corresponds to the learning parity with noise (LPN) problem. There are several ways to solve
Fault-Tolerant Error Correction with the Gauge Color Code
Benjamin J. Brown; Naomi H. Nickerson; Dan E. Browne
2015-08-03
The gauge color code is a quantum error-correcting code with local syndrome measurements that, remarkably, admits a universal transversal gate set without the need for resource-intensive magic state distillation. A result of recent interest, proposed by Bombín, shows that the subsystem structure of the gauge color code admits an error-correction protocol that achieves tolerance to noisy measurements without the need for repeated measurements, so-called single-shot error correction. Here, we demonstrate the promise of single-shot error correction by designing a two-part decoder and investigate its performance. We simulate fault-tolerant error correction with the gauge color code by repeatedly applying our proposed error-correction protocol to deal with errors that occur continuously to the underlying physical qubits of the code over the duration that quantum information is stored. We estimate a sustainable error rate, i.e. the threshold for the long time limit, of $\sim 0.31\%$ for a phenomenological noise model using a simple decoding algorithm.
Error detection through consistency checking Peng Gong* Lan Mu#
Silver, Whendee
Error detection through consistency checking Peng Gong* Lan Mu# *Center for Assessment & Monitoring Hall, University of California, Berkeley, Berkeley, CA 94720-3110 gong@nature.berkeley.edu mulan, accessibility, and timeliness as recorded in the lineage data (Chen and Gong, 1998). Spatial error refers
Analysis of Probabilistic Error Checking Procedures on Storage Systems
Chen, Ing-Ray
Analysis of Probabilistic Error Checking Procedures on Storage Systems ING-RAY CHEN AND I.-LING YEN Email: irchen@iie.ncku.edu.tw Conventionally, error checking on storage systems is performed on-the-fly (with probability 1) as the storage system is being accessed in order to improve the reliability
ADJOINT AND DEFECT ERROR BOUNDING AND CORRECTION FOR FUNCTIONAL ESTIMATES
Pierce, Niles A.
decades. Integral functionals also arise in other aerospace areas such as the calculation of radar cross functional that results from residual errors in approximating the solution to the partial differential to handle flows with shocks; numerical experiments confirm 4th order error estimates for a pressure integral
Kinematic Error Correction for Minimally Invasive Surgical Robots
in two likely sources of kinematic error: port displacement and instrument shaft flexion. For a quasi. To reach the surgical site near the chest wall, the instrument shaft applies significant torque to the port, and the instrument shaft to bend. These kinematic errors impair positioning of the robot and cause deviations from
Biomarkers in disk-averaged near-UV to near-IR Earth spectra using Earthshine observations
Slim Hamdani; Luc Arnold; C. Foellmi; J. Berthier; M. Billeres; D. Briot; P. François; P. Riaud; J. Schneider
2006-09-07
We analyse the detectability of vegetation on a global scale on Earth's surface. Considering its specific reflectance spectrum showing a sharp edge around 700 nm, vegetation can be considered as a potential global biomarker. This work, based on observational data, aims to characterise and to quantify this signature in the disk-averaged Earth's spectrum. Earthshine spectra have been used to test the detectability of the "Vegetation Red Edge" (VRE) in the Earth spectrum. We obtained reflectance spectra from near UV (320 nm) to near IR (1020 nm) for different Earth phases (continents or oceans seen from the Moon) with EMMI on the NTT at ESO/La Silla, Chile. We accurately correct the sky background and take into account the phase-dependent colour of the Moon. VRE measurements require a correction of the ozone Chappuis absorption band and Rayleigh plus aerosol scattering. Results: The near-UV spectrum shows a dark Earth below 350 nm due to the ozone absorption. The Vegetation Red Edge is observed when forests are present (4.0% for Africa and Europe), and is lower when clouds and oceans are mainly visible (1.3% for the Pacific Ocean). Errors are typically $\pm0.5\%$, and $\pm1.5\%$ in the worst case. We discuss the different sources of errors and bias and suggest possible improvements. We showed that measuring the VRE or an analog on an Earth-like planet remains very difficult (photometric relative accuracy of 1% or better). It remains a small feature compared to atmospheric absorption lines. A direct monitoring from space of the global (disk-averaged) Earth's spectrum would provide the best VRE follow-up.
Grid-scale Fluctuations and Forecast Error in Wind Power
G. Bel; C. P. Connaughton; M. Toots; M. M. Bandi
2015-03-29
The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_{\tau}$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_{\zeta}$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no a priori knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_{\tau}$) and the scaling error ($e_{\zeta}$).
Grid-scale Fluctuations and Forecast Error in Wind Power
Bel, G; Toots, M; Bandi, M M
2015-01-01
The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_{\tau}$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_{\zeta}$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no a priori knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_{\tau}$) and the scaling error ($e_{\zeta}$).
Using error correction to determine the noise model
M. Laforest; D. Simon; J. -C. Boileau; J. Baugh; M. Ditty; R. Laflamme
2007-01-25
Quantum error correcting codes have been shown to have the ability of making quantum information resilient against noise. Here we show that we can use quantum error correcting codes as diagnostics to characterise noise. The experiment is based on a three-bit quantum error correcting code carried out on a three-qubit nuclear magnetic resonance (NMR) quantum information processor. Utilizing both engineered and natural noise, the degree of correlations present in the noise affecting a two-qubit subsystem was determined. We measured a correlation factor of c = 0.5 ± 0.2 using the error correction protocol, and c = 0.3 ± 0.2 using a standard NMR technique based on coherence pathway selection. Although the error correction method demands precise control, the results demonstrate that the required precision is achievable in the liquid-state NMR setting.
Error Control of Iterative Linear Solvers for Integrated Groundwater Models
Dixon, Matthew; Brush, Charles; Chung, Francis; Dogrul, Emin; Kadir, Tariq
2010-01-01
An open problem that arises when using modern iterative linear solvers, such as the preconditioned conjugate gradient (PCG) method or Generalized Minimum RESidual method (GMRES) is how to choose the residual tolerance in the linear solver to be consistent with the tolerance on the solution error. This problem is especially acute for integrated groundwater models which are implicitly coupled to another model, such as surface water models, and resolve both multiple scales of flow and temporal interaction terms, giving rise to linear systems with variable scaling. This article uses the theory of 'forward error bound estimation' to show how rescaling the linear system affects the correspondence between the residual error in the preconditioned linear system and the solution error. Using examples of linear systems from models developed using the USGS GSFLOW package and the California State Department of Water Resources' Integrated Water Flow Model (IWFM), we observe that this error bound guides the choice of a prac...
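The issue the abstract describes can be seen in the standard forward error bound, ||x - x̂|| / ||x|| ≤ κ(A) · ||r|| / ||b||: for a badly scaled system the residual tolerance says little about the solution error until the system is equilibrated. A small illustrative sketch (a toy 2x2 system, not the GSFLOW/IWFM systems themselves):

```python
import numpy as np

# A badly scaled 2x2 system, as arises when coupled equations mix scales.
A = np.array([[1e6, 0.0],
              [0.0, 1e-6]])
b = np.array([1e6, 1e-6])
x_true = np.array([1.0, 1.0])

x_hat = x_true + np.array([1e-4, 1e-4])   # stand-in for an iterative solution
r = b - A @ x_hat                          # residual

rel_resid = np.linalg.norm(r) / np.linalg.norm(b)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
kappa = np.linalg.cond(A)

# Forward error bound: rel_err <= kappa * rel_resid. With kappa ~ 1e12
# the bound is uselessly loose, so a residual tolerance guarantees little.
assert rel_err <= kappa * rel_resid
print(rel_resid, rel_err, kappa)

# Symmetric diagonal rescaling (equilibration) drives kappa toward 1,
# making the residual tolerance a trustworthy proxy for solution error.
d = 1.0 / np.sqrt(np.abs(np.diag(A)))
A2 = np.diag(d) @ A @ np.diag(d)
print(np.linalg.cond(A2))
```

This is the mechanism behind the article's observation that rescaling changes the correspondence between residual error and solution error.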
Scramjet including integrated inlet and combustor
Kutschenreuter, P.H. Jr.; Blanton, J.C.
1992-02-04
This patent describes a scramjet engine. It comprises: a first surface including an aft facing step; a cowl including: a leading edge and a trailing edge; an upper surface and a lower surface extending between the leading edge and the trailing edge; the cowl upper surface being spaced from and generally parallel to the first surface to define an integrated inlet-combustor therebetween having an inlet for receiving and channeling into the inlet-combustor supersonic inlet airflow; means for injecting fuel into the inlet-combustor at the step for mixing with the supersonic inlet airflow for generating supersonic combustion gases; and further including a spaced pair of sidewalls extending between the first surface and the cowl upper surface, wherein the integrated inlet-combustor is generally rectangular and defined by the sidewall pair, the first surface and the cowl upper surface.
Electric Power Monthly, August 1990. [Glossary included
Not Available
1990-11-29
The Electric Power Monthly (EPM) presents monthly summaries of electric utility statistics at the national, Census division, and State level. The purpose of this publication is to provide energy decisionmakers with accurate and timely information that may be used in forming various perspectives on electric issues that lie ahead. Data includes generation by energy source (coal, oil, gas, hydroelectric, and nuclear); generation by region; consumption of fossil fuels for power generation; sales of electric power, cost data; and unusual occurrences. A glossary is included.
MOTIVATION INCLUDED OR EXCLUDED FROM E-LEARNING
Cocea, Mihaela
MOTIVATION INCLUDED OR EXCLUDED FROM E-LEARNING Mihaela Cocea National College of Ireland, Dublin 1, Ireland sweibelzahl@ncirl.ie ABSTRACT The learners' motivation has an impact on the quality of learning. In e-Learning, motivation has been mainly considered in terms of instructional design. Research in this direction suggests
Energy Consumption of Personal Computing Including Portable
Namboodiri, Vinod
processing unit (CPU) processing power and capacity of mass storage devices doubles every 18 months. Such growth in both processing and storage capabilities fuels the production of ever more powerful portable devices. Energy Consumption of Personal Computing Including Portable Communication Devices, Pavel Somavat
Course may include: Research in Education
Development Information and Communication Technologies Issues in Education Final Project Seminar Master, the Final Project Seminar. This graduate program will allow you to develop your skills and knowledgeCourse may include: Research in Education Qualitative Methods in Educational Research Fundamentals
Communication in automation, including networking and wireless
Antsaklis, Panos
Communication in automation, including networking and wireless Nicholas Kottenstette and Panos J and networking in automation is given. Digital communication fundamentals are reviewed and networked control are presented. 1 Introduction 1.1 Why communication is necessary in automated systems Automated systems use
Discretization error estimation and exact solution generation using the method of nearby problems.
Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.
2011-10-01
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
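Richardson extrapolation, the comparison baseline above, estimates the fine-grid discretization error from two systematically refined grids as (f_h - f_2h) / (r^p - 1) for a scheme of formal order p and refinement ratio r. A minimal sketch using the trapezoid rule (p = 2) on a model integral, not one of the abstract's PDE cases:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on n intervals (2nd-order accurate)."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

f = np.sin
exact = 1.0 - np.cos(1.0)            # integral of sin(x) over [0, 1]

coarse = trapezoid(f, 0.0, 1.0, 32)  # grid spacing 2h
fine = trapezoid(f, 0.0, 1.0, 64)    # grid spacing h

# Richardson estimate of the fine-grid discretization error for a
# 2nd-order scheme with refinement ratio 2:
p, r = 2, 2
est_error = (fine - coarse) / (r**p - 1)
true_error = exact - fine
print(est_error, true_error)   # agree to leading order in h
```

This illustrates why Richardson extrapolation needs at least two grids, which is the overhead MNP/defect correction avoids by using a single additional solution on the same grid.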
A decision support system prototype including human factors based on the TOGA meta-theory approach
Cappelli, M.; Memmi, F.; Gadomski, A. M.; Sepielli, M.
2012-07-01
The human contribution to the risk of operation of complex technological systems is often not negligible and sometimes tends to become significant, as shown by many reports on incidents and accidents that occurred in the past inside Nuclear Power Plants (NPPs). An error of a human operator of an NPP can derive from both omission and commission. For instance, complex commission errors can also lead to significant catastrophic technological accidents, as in the case of the Three Mile Island accident. Typically, the problem is analyzed by focusing on the single event chain that has provoked the incident or accident. What is needed is a general framework able to include as many parameters as possible, i.e. both technological and human factors. Such a general model could allow one to envisage an omission or commission error before it can happen or, alternatively, suggest preferred actions in order to take countermeasures to neutralize the effect of the error before it becomes critical. In this paper, a preliminary Decision Support System (DSS) based on the so-called (-) TOGA meta-theory approach is presented. The application of such a theory to the management of nuclear power plants was presented at the previous ICAPP 2011. Here, a human factor simulator prototype is proposed in order to include the effect of human errors in the decision path. The DSS has been developed using a TRIGA research reactor as reference plant, and implemented using the LabVIEW programming environment and the Finite State Machine (FSM) model. The proposed DSS shows how to apply the Universal Reasoning Paradigm (URP) and the Universal Management Paradigm (UMP) to a real plant context. The DSS receives inputs from instrumentation data and gives as output a suggested decision. It is obtained as the result of an internal elaborating process based on a performance function.
The latter describes the degree of satisfaction and efficiency, which are dependent on the level of responsibility related to each professional role. As an application, we present the simulation of the discussed error, e.g. the unchecked extraction of the control rods during a power variation maneuver, and we show how the effect of human errors can affect the performance function, giving rise to different countermeasures which could call different operator figures into play, potentially not envisaged in the standard procedure. (authors)
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
Catastrophic photometric redshift errors: Weak-lensing survey requirements
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Bernstein, Gary; Huterer, Dragan
2010-01-11
We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number N_spec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below statistical errors of weak lensing tomography. While the straightforward estimate of N_spec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in N_spec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the z_s - z_p distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. In conclusion, the cross-correlation method is therefore unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.
Balancing aggregation and smoothing errors in inverse models
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Turner, A. J.; Jacob, D. J.
2015-01-13
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
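The dimension-selection idea can be illustrated with a toy calculation; the error curves below are assumed functional forms (aggregation error falling with state vector dimension, smoothing error rising), not the paper's actual error estimates:

```python
import numpy as np

dims = np.arange(10, 1001, 10)   # candidate state vector dimensions
aggregation = 50.0 / dims        # assumed ~1/n decay: fewer prior relationships imposed
smoothing = 0.002 * dims         # assumed linear growth: more unconstrained elements
combined = aggregation + smoothing

# Select the dimension that minimizes the combined error.
best = int(dims[np.argmin(combined)])
print(best)
```

With these assumed curves the minimum of the combined error falls between the two regimes, which is the trade-off the abstract describes.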
Measuring worst-case errors in a robot workcell
Simon, R.W.; Brost, R.C.; Kholwadwala, D.K. [Sandia National Labs., Albuquerque, NM (United States). Intelligent Systems and Robotics Center
1997-10-01
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.
Subterranean barriers including at least one weld
Nickelson, Reva A.; Sloan, Paul A.; Richardson, John G.; Walsh, Stephanie; Kostelnik, Kevin M.
2007-01-09
A subterranean barrier and method for forming same are disclosed, the barrier including a plurality of casing strings wherein at least one casing string of the plurality of casing strings may be affixed to at least another adjacent casing string of the plurality of casing strings through at least one weld, at least one adhesive joint, or both. A method and system for nondestructively inspecting a subterranean barrier is disclosed. For instance, a radiographic signal may be emitted from within a casing string toward an adjacent casing string and the radiographic signal may be detected from within the adjacent casing string. A method of repairing a barrier including removing at least a portion of a casing string and welding a repair element within the casing string is disclosed. A method of selectively heating at least one casing string forming at least a portion of a subterranean barrier is disclosed.
Photoactive devices including porphyrinoids with coordinating additives
Forrest, Stephen R; Zimmerman, Jeramy; Yu, Eric K; Thompson, Mark E; Trinh, Cong; Whited, Matthew; Diev, Vlacheslav
2015-05-12
Coordinating additives are included in porphyrinoid-based materials to promote intermolecular organization and improve one or more photoelectric characteristics of the materials. The coordinating additives are selected from fullerene compounds and organic compounds having free electron pairs. Combinations of different coordinating additives can be used to tailor the characteristic properties of such porphyrinoid-based materials, including porphyrin oligomers. Bidentate ligands are one type of coordinating additive that can form coordination bonds with a central metal ion of two different porphyrinoid compounds to promote porphyrinoid alignment and/or pi-stacking. The coordinating additives can shift the absorption spectrum of a photoactive material toward higher wavelengths, increase the external quantum efficiency of the material, or both.
Power generation method including membrane separation
Lokhandwala, Kaaeid A. (Union City, CA)
2000-01-01
A method for generating electric power, such as at, or close to, natural gas fields. The method includes conditioning natural gas containing C.sub.3+ hydrocarbons and/or acid gas by means of a membrane separation step. This step creates a leaner, sweeter, drier gas, which is then used as combustion fuel to run a turbine, which is in turn used for power generation.
Rotor assembly including superconducting magnetic coil
Snitchler, Gregory L. (Shrewsbury, MA); Gamble, Bruce B. (Wellesley, MA); Voccio, John P. (Somerville, MA)
2003-01-01
Superconducting coils and methods of manufacture include a superconductor tape wound concentrically about and disposed along an axis of the coil to define an opening having a dimension which gradually decreases, in the direction along the axis, from a first end to a second end of the coil. Each turn of the superconductor tape has a broad surface maintained substantially parallel to the axis of the coil.
Nuclear reactor shield including magnesium oxide
Rouse, Carl A. (Del Mar, CA); Simnad, Massoud T. (La Jolla, CA)
1981-01-01
An improvement in nuclear reactor shielding of a type used in reactor applications involving significant amounts of fast neutron flux, the reactor shielding including means providing structural support, neutron moderator material, neutron absorber material and other components as described below, wherein at least a portion of the neutron moderator material is magnesium in the form of magnesium oxide either alone or in combination with other moderator materials such as graphite and iron.
Electric power monthly, September 1990 (glossary included)
Not Available
1990-12-17
The purpose of this report is to provide energy decision makers with accurate and timely information that may be used in forming various perspectives on electric issues. The power plants considered include coal, petroleum, natural gas, hydroelectric, and nuclear power plants. Data are presented for power generation, fuel consumption, fuel receipts and cost, sales of electricity, and unusual occurrences at power plants. Data are compared at the national, Census division, and state levels. 4 figs., 52 tabs. (CK)
Doolan, P [University College London, London (United Kingdom); Massachusetts General Hospital, Boston, MA (United States); Dias, M [Massachusetts General Hospital, Boston, MA (United States); Dipartamento di Elettronica, Informazione e Bioingegneria - DEIB, Politecnico di Milano (Italy); Collins Fekete, C [Massachusetts General Hospital, Boston, MA (United States); Departement de physique, de genie physique et d'optique et Centre de recherche sur le cancer, Universite Laval, Quebec (Canada); Seco, J [Massachusetts General Hospital, Boston, MA (United States)
2014-06-01
Purpose: The procedure for proton treatment planning involves the conversion of the patient's X-ray CT from Hounsfield units into relative stopping powers (RSP), using a stoichiometric calibration curve (Schneider 1996). In clinical practice a 3.5% margin is added to account for the range uncertainty introduced by this process and other errors. RSPs for real tissues are calculated using composition data and the Bethe-Bloch formula (ICRU 1993). The purpose of this work is to investigate the impact that systematic errors in the stoichiometric calibration have on the proton range. Methods: Seven tissue inserts of the Gammex 467 phantom were imaged using our CT scanner. Their known chemical compositions (Watanabe 1999) were then used to calculate the theoretical RSPs, using the same formula as would be used for human tissues in the stoichiometric procedure. The actual RSPs of these inserts were measured using a Bragg peak shift measurement in the proton beam at our institution. Results: The theoretical calculation of the RSP was lower than the measured RSP values, by a mean/max error of -1.5%/-3.6%. For all seven inserts the theoretical approach underestimated the RSP, with errors variable across the range of Hounsfield units. Systematic errors for lung (average of two inserts), adipose and cortical bone were -3.0%/-2.1%/-0.5%, respectively. Conclusion: There is a systematic underestimation caused by the theoretical calculation of RSP, a crucial step in the stoichiometric calibration procedure. As such, we propose that proton calibration curves should be based on measured RSPs. Investigations will be made to see if the same systematic errors exist for biological tissues. The impact of these differences on the range of proton beams, for phantoms and patient scenarios, will be investigated. This project was funded equally by the Engineering and Physical Sciences Research Council (UK) and Ion Beam Applications (Louvain-La-Neuve, Belgium)
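The mean/max percent-error statistic reported above can be reproduced on illustrative numbers; the RSP values below are hypothetical stand-ins, not the Gammex 467 insert data:

```python
# Hypothetical relative stopping powers (RSPs), not the measured data.
theoretical = [0.28, 0.95, 1.60]   # assumed Bethe-Bloch calculations
measured    = [0.29, 0.97, 1.66]   # assumed Bragg-peak-shift measurements

# Percent error of the theoretical value relative to measurement;
# negative values mean the calculation underestimates the RSP.
errors = [100.0 * (t - m) / m for t, m in zip(theoretical, measured)]
mean_error = sum(errors) / len(errors)
max_error = min(errors)   # most negative value = largest underestimation

print(f"mean/max error: {mean_error:.1f}%/{max_error:.1f}%")
```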
Wind Power Forecasting Error Distributions: An International Comparison; Preprint
Hodge, B. M.; Lew, D.; Milligan, M.; Holttinen, H.; Sillanpaa, S.; Gomez-Lazaro, E.; Scharff, R.; Soder, L.; Larsen, X. G.; Giebel, G.; Flynn, D.; Dobschinski, J.
2012-09-01
Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.
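A common finding in such comparisons is that forecast error distributions have heavier tails than a normal distribution. The check below uses a synthetic Laplace sample as a stand-in for operational forecast errors (not the paper's real plant data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for wind power forecast errors: a Laplace
# distribution mimics the heavy tails often reported in practice.
errors = rng.laplace(loc=0.0, scale=1.0, size=200_000)

# Excess kurtosis: 0 for a normal distribution, 3 for a Laplace, so a
# clearly positive value signals heavier-than-normal tails.
z = (errors - errors.mean()) / errors.std()
excess_kurtosis = float(np.mean(z**4) - 3.0)
print(round(excess_kurtosis, 2))
```

The same statistic computed on real forecast errors would quantify how far each plant's distribution departs from the normal assumption used in reserve-setting.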
Pendulum Shifts, Context, Error, and Personal Accountability
Harold Blackman; Oren Hester
2011-09-01
This paper describes a series of tools that were developed to achieve a balance in understanding LOWs and the human component of events (including accountability) as the INL continues its shift to a learning culture where people report, are accountable, and are interested in making a positive difference - and want to report because information is handled correctly and the result benefits both the reporting individual and the organization. We present our model for understanding these interrelationships and the initiatives that were undertaken to improve overall performance.
Average Stumpage Prices Measured in Price per Ton for Forest Products (table columns: year; unweighted, weighted, and average of unweighted and weighted prices for Large Pine Sawtimber, Small Pine Sawtimber, and Hardwood Sawtimber)
Honest Confidence Intervals for the Error Variance in Stepwise Regression
Stine, Robert A.
Honest Confidence Intervals for the Error Variance in Stepwise Regression. Dean P. Foster and Robert A. Stine. … alternatives are used. These simpler algorithms (e.g., forward or backward stepwise regression) obtain …
Servo control booster system for minimizing following error
Wise, William L. (Mountain View, CA)
1985-01-01
A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
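The by-exception switching logic can be sketched as a mode selector; the threshold value and mode names below are illustrative assumptions, not the patent's implementation:

```python
DELTA_S_R = 0.01  # assumed position feedback resolution least increment

def control_mode(command, response):
    # Engage the precision loop only "by exception", i.e. when the
    # command-to-response error reaches the resolution least increment.
    error = abs(command - response)
    if error >= DELTA_S_R:
        return "precision_loop"      # second feedback loop engaged
    return "conventional_servo"      # second loop disconnected (transparent)
```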
Removing Systematic Errors from Rotating Shadowband Pyranometer Data Frank Vignola
Oregon, University of
… of the pyranometer to briefly shade the pyranometer once a minute. Direct horizontal irradiance is calculated … used in programs evaluating the performance of photovoltaic systems, and systematic errors in the data …
Error estimation and adaptive mesh refinement for aerodynamic flows
Hartmann, Ralf
Error estimation and adaptive mesh refinement for aerodynamic flows. Ralf Hartmann and Paul Houston. 38108 Braunschweig, Germany (Ralf.Hartmann@dlr.de); School of Mathematical Sciences, University …
MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS
Hartmann, Ralf
MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS. RALF HARTMANN, … of Scientific Computing, TU Braunschweig, Germany (Ralf.Hartmann@dlr.de).
Error estimation and adaptive mesh refinement for aerodynamic flows
Hartmann, Ralf
Error estimation and adaptive mesh refinement for aerodynamic flows. Ralf Hartmann, Joachim Held. … Lilienthalplatz 7, 38108 Braunschweig, Germany (e-mail: Ralf.Hartmann@dlr.de).
MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS
Hartmann, Ralf
MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS. RALF HARTMANN. Abstract …, Germany (Ralf.Hartmann@dlr.de). … quantity under consideration. However, in many …
Inflated applicants: Attribution errors in performance evaluation by professionals
Swift, Samuel; Moore, Don; Sharek, Zachariah; Gino, Francesca
2013-01-01
… performance among applicants from each "type" of school … and interview performance. Each school provided multi-year … PLOS ONE (www.plosone.org), July 2013, Volume 8, Issue 7, e69258. Attribution Errors in Performance …
Wind Power Forecasting Error Distributions over Multiple Timescales: Preprint
Hodge, B. M.; Milligan, M.
2011-03-01
In this paper, we examine the shape of the persistence model error distribution for ten different wind plants in the ERCOT system over multiple timescales. Comparisons are made between the experimental distribution shape and that of the normal distribution.
On Student's 1908 Article "The Probable Error of a Mean"
Kim, Jong-Min
…'s "attention" resulted in a report, "The Application of the 'Law of Error' to the Work of the Brewery," dated No… , and other records available in their Dublin brewery; see Pearson 1939, p. 213. Unable to find …
Performance optimizations for compiler-based error detection
Mitropoulou, Konstantina
2015-06-29
The trend towards smaller transistor technologies and lower operating voltages stresses the hardware and makes transistors more susceptible to transient errors. In future systems, performance and power gains will come ...
Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications
Garcia, Tanya
2012-10-19
Many statistical models, like measurement error models, a general class of survival models, and a mixture data model with random censoring, are semiparametric where interest lies in estimating finite-dimensional parameters ...
Error bars for linear and nonlinear neural network regression models
Penny, Will
Error bars for linear and nonlinear neural network regression models. William D. Penny and Stephen J. Roberts. … College of Science, Technology and Medicine, London SW7 2BT, U.K. (w.penny@ic.ac.uk, s…)
NOVELTY, CONFIDENCE & ERRORS IN CONNECTIONIST Stephen J. Roberts & William Penny
Roberts, Stephen
NOVELTY, CONFIDENCE & ERRORS IN CONNECTIONIST SYSTEMS. Stephen J. Roberts & William Penny. Neural …, Technology & Medicine, London, UK (s.j.roberts@ic.ac.uk, w.penny@ic.ac.uk). April 21, 1997.
Predicting Intentional Tax Error Using Open Source Literature and Data
… for each PUMS respondent (or agent), in certain line item/taxpayer categories, allowing us to construct dis… ; Likelihood; Results of Meta-Analysis; Intentional Error in Line Items/Taxpayer Categories.
Antonio Enea Romano
2007-01-27
We show that positive averaged acceleration obtained in LTB models through spatial averaging can require integration over a region beyond the event horizon of the central observer. We provide an example of a LTB model with positive averaged acceleration in which the luminosity distance does not contain information about the entire spatially averaged region, making the averaged acceleration unobservable. Since the cosmic acceleration is obtained from fitting the observed luminosity distance to a FRW model we conclude that in general a positive averaged acceleration in LTB models does not imply a positive FRW cosmic acceleration.
ARECIBO MULTI-FREQUENCY TIME-ALIGNED PULSAR AVERAGE-PROFILE AND POLARIZATION DATABASE
Hankins, Timothy H. [Physics Department, New Mexico Tech, Socorro, NM 87801 (United States); Rankin, Joanna M. [Physics Department, University of Vermont, Burlington, VT 05401 (United States)], E-mail: thankins@nrao.edu, E-mail: Joanna.Rankin@uvm.edu
2010-01-15
We present Arecibo time-aligned, total intensity profiles for 46 pulsars over an unusually wide range of radio frequencies and multi-frequency, polarization-angle density diagrams, and/or polarization profiles for 57 pulsars at some or all of the frequencies 50, 111/130, 430, and 1400 MHz. The frequency-dependent dispersion delay has been removed in order to align the profiles for study of their spectral evolution, and wherever possible the profiles of each pulsar are displayed on the same longitude scale. Most of the pulsars within Arecibo's declination range that are sufficiently bright for such spectral or single pulse analysis are included in this survey. The calibrated single pulse sequences and average profiles are available by web download for further study.
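The frequency-dependent dispersion delay removed to time-align the profiles follows the standard cold-plasma relation t = K_DM · DM / f², which can be sketched directly (the example DM value is illustrative):

```python
K_DM = 4.149e3  # dispersion constant, in s * MHz^2 * cm^3 / pc

def dispersion_delay(dm, f_mhz):
    # Cold-plasma dispersion delay (seconds) at frequency f_mhz (MHz)
    # for a dispersion measure dm (pc cm^-3).
    return K_DM * dm / f_mhz**2

def relative_delay(dm, f_low, f_high):
    # Extra delay of the lower frequency relative to the higher one;
    # removing this aligns profiles taken at, e.g., 430 and 1400 MHz
    # on a common longitude scale.
    return dispersion_delay(dm, f_low) - dispersion_delay(dm, f_high)

print(relative_delay(50.0, 430.0, 1400.0))  # illustrative DM of 50 pc cm^-3
```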
Angular Averaged Profiling of the Radial Electric Field in Compensated FTICR Cells
Tolmachev, Aleksey V.; Robinson, Errol W.; Wu, Si; Smith, Richard D.; Futrell, Jean H.; Pasa-Tolic, Ljiljana
2012-05-08
A recent publication from this laboratory (1) reported a theoretical analysis comparing approaches for creating harmonic ICR cells. We considered two examples of static segmented cells - namely, a seven segment cell developed in this laboratory (2) and one described by Rempel et al (3), along with a recently described dynamically harmonized cell (4). This conceptual design for a dynamically harmonized cell has now been reduced to practice and first experimental results obtained with this cell were recently reported in this journal (5). This publication reports details of cell construction and describes its performance in a 7 Tesla Fourier Transform mass spectrometer. Herein, we describe the extension of theoretical analysis presented in (1) to include angular-averaged radial electric field calculations and a discussion of the influence of trapping plates.
Suboptimal quantum-error-correcting procedure based on semidefinite programming
Naoki Yamamoto; Shinji Hara; Koji Tsumura
2006-06-13
In this paper, we consider a simplified error-correcting problem: for a fixed encoding process, to find a cascade-connected quantum channel such that the worst-case fidelity between the input and the output becomes maximum. Using the one-to-one parametrization of quantum channels, a procedure for finding a suboptimal error-correcting channel based on semidefinite programming is proposed. The effectiveness of our method is verified by an example of bit-flip channel decoding.
TESLA-FEL 2009-07 Errors in Reconstruction of Difference Orbit
Contents: 1 Introduction; 2 Standard Least Squares Solution; 3 Error Emittance and Error Twiss Parameters. … as the position of the reconstruction point changes, we will introduce error Twiss parameters and an invariant error … in the point of interest has to be achieved by matching error Twiss parameters in this point to the desired …
A Taxonomy to Enable Error Recovery and Correction in Software Vilas Sridharan
Kaeli, David R.
A Taxonomy to Enable Error Recovery and Correction in Software. Vilas Sridharan, ECE Department … years, reliability research has largely used the following taxonomy of errors: Undetected Errors …, … Errors (CE). While this taxonomy is suitable to characterize hardware error detection and correction …
A simple real-word error detection and correction using local word bigram and trigram
A simple real-word error detection and correction using local word bigram and trigram. Pratip … (bbcisical@gmail.com). Abstract: Spelling errors are broadly classified into two categories, namely non-word errors and real-word errors. In this paper a localized real-word error detection and correction method is proposed.
Compiler-Assisted Detection of Transient Memory Errors
Tavarageri, Sanket; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2014-06-09
The probability of bit flips in hardware memory systems is projected to increase significantly as memory systems continue to scale in size and complexity. Effective hardware-based error detection and correction requires that the complete data path, involving all parts of the memory system, be protected with sufficient redundancy. First, this may be costly to employ on commodity computing platforms and second, even on high-end systems, protection against multi-bit errors may be lacking. Therefore, augmenting hardware error detection schemes with software techniques is of considerable interest. In this paper, we consider software-level mechanisms to comprehensively detect transient memory faults. We develop novel compile-time algorithms to instrument application programs with checksum computation codes so as to detect memory errors. Unlike prior approaches that employ checksums on computational and architectural state, our scheme verifies every data access and works by tracking variables as they are produced and consumed. Experimental evaluation demonstrates that the proposed comprehensive error detection solution is viable as a completely software-only scheme. We also demonstrate that with limited hardware support, overheads of error detection can be further reduced.
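The produce/consume checksum idea can be sketched at a high level; this is an illustrative Python analogue, not the paper's compiler instrumentation:

```python
# Record a checksum when a variable is produced, verify it when the
# variable is consumed: a transient bit flip between the two accesses
# makes the checksums disagree. Names and storage model are hypothetical.
import zlib

shadow = {}  # variable name -> checksum recorded at production time

def produce(name, value, memory):
    memory[name] = value
    shadow[name] = zlib.crc32(repr(value).encode())

def consume(name, memory):
    value = memory[name]
    if zlib.crc32(repr(value).encode()) != shadow[name]:
        raise RuntimeError(f"transient memory error detected in {name}")
    return value

memory = {}
produce("x", 42, memory)
assert consume("x", memory) == 42

memory["x"] = 43          # simulate a bit flip between accesses
try:
    consume("x", memory)
    detected = False
except RuntimeError:
    detected = True
```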
Optical panel system including stackable waveguides
DeSanto, Leonard (Dunkirk, MD); Veligdan, James T. (Manorville, NY)
2007-11-20
An optical panel system including stackable waveguides is provided. The optical panel system displays a projected light image and comprises a plurality of planar optical waveguides in a stacked state. The optical panel system further comprises a support system that aligns and supports the waveguides in the stacked state. In one embodiment, the support system comprises at least one rod, wherein each waveguide contains at least one hole, and wherein each rod is positioned through a corresponding hole in each waveguide. In another embodiment, the support system comprises at least two opposing edge structures having the waveguides positioned therebetween, wherein each opposing edge structure contains a mating surface, wherein opposite edges of each waveguide contain mating surfaces which are complementary to the mating surfaces of the opposing edge structures, and wherein each mating surface of the opposing edge structures engages a corresponding complementary mating surface of the opposite edges of each waveguide.
Thermovoltaic semiconductor device including a plasma filter
Baldasaro, Paul F. (Clifton Park, NY)
1999-01-01
A thermovoltaic energy conversion device and related method for converting thermal energy into an electrical potential. An interference filter is provided on a semiconductor thermovoltaic cell to pre-filter black body radiation. The semiconductor thermovoltaic cell includes a P/N junction supported on a substrate which converts incident thermal energy below the semiconductor junction band gap into electrical potential. The semiconductor substrate is doped to provide a plasma filter which reflects back energy having a wavelength which is above the band gap and which is ineffectively filtered by the interference filter, through the P/N junction to the source of radiation thereby avoiding parasitic absorption of the unusable portion of the thermal radiation energy.
Simple Model of Membrane Proteins Including Solvent
D. L. Pagan; A. Shiryayev; T. P. Connor; J. D. Gunton
2006-03-04
We report a numerical simulation of the phase diagram of a simple two-dimensional model, similar to one proposed by Noro and Frenkel [J. Chem. Phys. 114, 2477 (2001)] for membrane proteins, but one that includes the role of the solvent. We first use Gibbs ensemble Monte Carlo simulations to determine the phase behavior of particles interacting via a square-well potential in two dimensions for various values of the interaction range. A phenomenological model for the solute-solvent interactions is then studied to understand how the fluid-fluid coexistence curve is modified by solute-solvent interactions. It is shown that such a model can yield systems with liquid-liquid phase separation curves that have both upper and lower critical points, as well as closed-loop phase diagrams, as is the case with the corresponding three-dimensional model.
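The square-well pair potential used in such simulations has a standard form: a hard core below the particle diameter, an attractive well out to the interaction range, and zero beyond. The parameter values below are illustrative:

```python
import math

def square_well(r, sigma=1.0, epsilon=1.0, lam=1.5):
    # Standard square-well pair potential: hard-core repulsion for
    # r < sigma, an attractive well of depth epsilon out to lam * sigma,
    # and no interaction beyond. lam controls the interaction range.
    if r < sigma:
        return math.inf
    if r < lam * sigma:
        return -epsilon
    return 0.0
```

Varying `lam` is how the abstract's "various values of the interaction range" would enter a Gibbs ensemble Monte Carlo run.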
Calculation of the Johann error for spherically bent x-ray imaging crystal spectrometers
Wang, E.; Beiersdorfer, P.; Gu, M.; Bitter, M.; Delgado-Aparicio, L.; Hill, K. W.; Reinke, M.; Rice, J. E.; Podpaly, Y.
2010-10-15
New x-ray imaging crystal spectrometers, currently operating on Alcator C-Mod, NSTX, EAST, and KSTAR, record spectral lines of highly charged ions, such as Ar^16+, from multiple sightlines to obtain profiles of ion temperature and of toroidal plasma rotation velocity from Doppler measurements. In the present work, we describe a new data analysis routine, which accounts for the specific geometry of the sightlines of a curved-crystal spectrometer and includes corrections for the Johann error to facilitate the tomographic inversion. Such corrections are important to distinguish velocity induced Doppler shifts from instrumental line shifts caused by the Johann error. The importance of this correction is demonstrated using data from Alcator C-Mod.
Fact #794: August 26, 2013 How Much Does an Average Vehicle Owner...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Fact #794: August 26, 2013: How Much Does an Average Vehicle Owner Pay in Fuel Taxes Each Year? According to the Federal Highway Administration, the average fuel economy...
Polikar, Robi
Model comparison for automatic characterization and classification of average ERPs using visual ... December 2008. Keywords: EEG, ERP, Attention, P300, N200, Oddball, Pattern recognition, Linear discriminant ... responses from averaged event-related potentials (ERPs) along with identifying appropriate features
Fact #638: August 30, 2010 Average Expenditure for a New Car...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Fact #638: August 30, 2010: Average Expenditure for a New Car Declines in Relation to Family Earnings.
Fact #715: February 20, 2012 The Average Age of Light Vehicles Continues to Rise
Broader source: Energy.gov [DOE]
The average age for cars and light trucks continues to rise as consumers hold onto their vehicles longer. Between 1995 and 2011, the average age for cars increased by 32% from 8.4 years to 11.1...
Engine lubrication circuit including two pumps
Lane, William H.
2006-10-03
A lubrication pump coupled to the engine is sized such that it can supply the engine with a predetermined flow volume as soon as the engine reaches a peak torque engine speed. In engines that operate predominately at speeds above the peak torque engine speed, the lubrication pump is often producing lubrication fluid in excess of the predetermined flow volume that is bypassed back to a lubrication fluid source. This arguably results in wasted power. In order to more efficiently lubricate an engine, a lubrication circuit includes a lubrication pump and a variable delivery pump. The lubrication pump is operably coupled to the engine, and the variable delivery pump is in communication with a pump output controller that is operable to vary a lubrication fluid output from the variable delivery pump as a function of at least one of engine speed, lubrication flow volume, or system pressure. Thus, the lubrication pump can be sized to produce the predetermined flow volume at a speed range at which the engine predominately operates, while the variable delivery pump can supplement lubrication fluid delivery from the lubrication pump at engine speeds below the predominant engine speed range.
TIME-AVERAGING IN THE MARINE FOSSIL RECORD: OVERVIEW OF STRATEGIES AND
Keywords: PALEOECOLOGY, BENTHIC, MARINE, TIME-AVERAGING. Résumé: The paleontological reasoning that led to the
A structural analysis of vehicle design responses to Corporate Average Fuel Economy policy
Michalek, Jeremy J.
A structural analysis of vehicle design responses to Corporate Average Fuel Economy policy ... Ching ... 2009; accepted 29 August 2009. Keywords: Corporate Average Fuel Economy; Energy policy; Oligopolistic market; Game theory; Vehicle design. Abstract: The US Corporate Average Fuel Economy (CAFE
Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors
Sarkar, Abhra
2014-06-24
Although the literature on measurement error problems is quite extensive, solutions to even the most fundamental measurement error problems like density deconvolution and regression with errors-in-covariates are available ...
Estimation of the error for small-sample optimal binary filter design using prior knowledge
Sabbagh, David L
1999-01-01
Optimal binary filters estimate an unobserved ideal quantity from observed quantities. Optimality is with respect to some error criterion, which is usually mean absolute error MAE (or equivalently mean square error) for the binary values. Both...
Fault tree analysis of commonly occurring medication errors and methods to reduce them
Cherian, Sandhya Mary
1994-01-01
-depth analysis of over two hundred actual medication error incidents. These errors were then classified according to type, in an attempt at deriving a generalized fault tree for the medication delivery system that contributed to errors. This generalized fault...
EFFECT OF MANUFACTURING ERRORS ON FIELD QUALITY OF DIPOLE MAGNETS FOR THE SSC
Meuser, R.B.
2010-01-01
... in Fig. 2. Table 2: Manufacturing Error Mode Groups. ... 13-16, 1985. Mag Note-27: Effect of Manufacturing Errors on Field Quality.
Economic penalties of problems and errors in solar energy systems
Raman, K.; Sparkes, H.R.
1983-01-01
Experience with a large number of installed solar energy systems in the HUD Solar Program has shown that a variety of problems and design/installation errors have occurred in many solar systems, sometimes resulting in substantial additional costs for repair and/or replacement. In this paper, the effect of problems and errors on the economics of solar energy systems is examined. A method is outlined for doing this in terms of selected economic indicators. The method is illustrated by a simple example of a residential solar DHW system. An example of an installed, instrumented solar energy system in the HUD Solar Program is then discussed. Detailed results are given for the effects of the problems and errors on the cash flow, cost of delivered heat, discounted payback period, and life-cycle cost of the solar energy system. Conclusions are drawn regarding the most suitable economic indicators for showing the effects of problems and errors in solar energy systems. A method is outlined for deciding on the maximum justifiable expenditure for maintenance on a solar energy system with problems or errors.
Goal-oriented local a posteriori error estimator for H(div)
2011-12-15
... error estimator measures the pollution effect from the outside region of D ... error estimators which account for and quantify the pollution effect.
V-228: RealPlayer Buffer Overflow and Memory Corruption Error...
Broader source: Energy.gov (indexed) [DOE]
a memory corruption error and execute arbitrary code on the target system. IMPACT: Access control error. SOLUTION: The vendor recommends upgrading to version 16.0.3.51.
Clark, E.L.
1993-08-01
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, calibration Mach number and Reynolds number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for nine fundamental aerodynamic ratios, most of which relate free-stream test conditions (pressure, temperature, density or velocity) to a reference condition. Tables of the ratios, R, absolute sensitivity coefficients, ∂R/∂M∞, and relative sensitivity coefficients, (M∞/R)(∂R/∂M∞), are provided as functions of M∞.
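The sensitivity coefficients tabulated in this report can be illustrated with a short Python sketch using one standard aerodynamic ratio, the isentropic static-to-total pressure ratio. The function names, the finite-difference step, and the choice of ratio are assumptions made here for illustration, not the report's code:

```python
import math

def pressure_ratio(M, gamma=1.4):
    """Isentropic static-to-total pressure ratio p/p0 as a function of Mach number."""
    return (1.0 + 0.5 * (gamma - 1.0) * M**2) ** (-gamma / (gamma - 1.0))

def sensitivity(R, M, h=1e-6):
    """Absolute sensitivity coefficient dR/dM via a central finite difference."""
    return (R(M + h) - R(M - h)) / (2.0 * h)

def relative_sensitivity(R, M):
    """Relative sensitivity coefficient (M/R) * dR/dM."""
    return M / R(M) * sensitivity(R, M)

M = 2.0
dRdM = sensitivity(pressure_ratio, M)
rel = relative_sensitivity(pressure_ratio, M)
# First-order (Taylor series) error propagation then reads dR/R ≈ rel * dM/M
```

For this ratio the relative sensitivity has a closed form, −γM²/(1 + ((γ−1)/2)M²), which the finite-difference value reproduces.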
Gering, Kevin L.; Harrup, Mason K.; Rollins, Harry W.
2015-12-08
An ionic liquid including a phosphazene compound that has a plurality of phosphorus-nitrogen units and at least one pendant group bonded to each phosphorus atom of the plurality of phosphorus-nitrogen units. One pendant group of the at least one pendant group comprises a positively charged pendant group. Additional embodiments of ionic liquids are disclosed, as are electrolyte solutions and energy storage devices including the embodiments of the ionic liquid.
Flowmeter for determining average rate of flow of liquid in a conduit
Kennerly, John M. (Knoxville, TN); Lindner, Gordon M. (Oak Ridge, TN); Rowe, John C. (Oak Ridge, TN)
1982-01-01
This invention is a compact, precise, and relatively simple device for use in determining the average rate of flow of a liquid through a conduit. The liquid may be turbulent and contain bubbles of gas. In a preferred embodiment, the flowmeter includes an electrical circuit and a flow vessel which is connected as a segment of the conduit conveying the liquid. The vessel is provided with a valved outlet and is partitioned by a vertical baffle into coaxial chambers whose upper regions are vented to permit the escape of gas. The inner chamber receives turbulent downflowing liquid from the conduit and is sized to operate at a lower pressure than the conduit, thus promoting evolution of gas from the liquid. Lower zones of the two chambers are interconnected so that the downflowing liquid establishes liquid levels in both chambers. The liquid level in the outer chamber is comparatively calm, being to a large extent isolated from the turbulence in the inner chamber once the liquid in the outer chamber has risen above the liquid-introduction zone for that chamber. Lower and upper probes are provided in the outer chamber for sensing the liquid level therein at points above its liquid-introduction zone. An electrical circuit is connected to the probes to display the time required for the liquid level in the outer chamber to successively contact the lower and upper probes. The average rate of flow through the conduit can be determined from the above-mentioned time and the vessel volume filled by the liquid during that time.
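The final computation this abstract describes (average flow from the vessel volume filled and the probe-contact times) reduces to a one-line formula; a hypothetical sketch with invented numbers:

```python
def average_flow_rate(fill_volume_litres, t_lower_s, t_upper_s):
    """Average volumetric flow rate from the time the liquid level takes to
    rise from the lower probe to the upper probe of the outer chamber.
    Argument names and units are illustrative assumptions."""
    if t_upper_s <= t_lower_s:
        raise ValueError("upper probe must be reached after lower probe")
    return fill_volume_litres / (t_upper_s - t_lower_s)

# e.g. 2.5 L between probes, contacts at t = 4.0 s and t = 9.0 s -> 0.5 L/s
```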
Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint
Florita, A.; Hodge, B. M.; Milligan, M.
2012-08-01
The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
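One common goodness-of-fit metric for comparing forecast-error samples against a candidate distribution is the Kolmogorov-Smirnov distance. The sketch below applies it to synthetic errors and is a generic illustration, not the study's actual metric, data, or code:

```python
import math, random

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(errors):
    """Kolmogorov-Smirnov distance between the empirical distribution of the
    errors and a normal distribution fitted by moments."""
    n = len(errors)
    mu = sum(errors) / n
    sigma = math.sqrt(sum((e - mu) ** 2 for e in errors) / n)
    d = 0.0
    for i, x in enumerate(sorted(errors)):
        F = normal_cdf(x, mu, sigma)
        d = max(d, abs((i + 1) / n - F), abs(i / n - F))
    return d

random.seed(0)
near_normal = [random.gauss(0.0, 1.0) for _ in range(2000)]
heavy_tailed = [random.gauss(0.0, 1.0) * random.choice([0.3, 3.0]) for _ in range(2000)]
d_normal = ks_statistic(near_normal)
d_heavy = ks_statistic(heavy_tailed)
# heavy-tailed errors fit a fitted normal far worse than near-normal errors do
```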
Reducing collective quantum state rotation errors with reversible dephasing
Cox, Kevin C.; Norcia, Matthew A.; Weiner, Joshua M.; Bohnet, Justin G.; Thompson, James K.
2014-12-29
We demonstrate that reversible dephasing via inhomogeneous broadening can greatly reduce collective quantum state rotation errors, and observe the suppression of rotation errors by more than 21 dB in the context of collective population measurements of the spin states of an ensemble of 2.1×10{sup 5} laser cooled and trapped {sup 87}Rb atoms. The large reduction in rotation noise enables direct resolution of spin state populations 13(1) dB below the fundamental quantum projection noise limit. Further, the spin state measurement projects the system into an entangled state with 9.5(5) dB of directly observed spectroscopic enhancement (squeezing) relative to the standard quantum limit, whereas no enhancement would have been obtained without the suppression of rotation errors.
Characterization of quantum dynamics using quantum error correction
S. Omkar; R. Srikanth; S. Banerjee
2015-01-27
Characterizing noisy quantum processes is important to quantum computation and communication (QCC), since quantum systems are generally open. To date, all methods of characterization of quantum dynamics (CQD), typically implemented by quantum process tomography, are off-line, i.e., QCC and CQD are not concurrent, as they require distinct state preparations. Here we introduce a method, "quantum error correction based characterization of dynamics", in which the initial state is any element from the code space of a quantum error correcting code that can protect the state from arbitrary errors acting on the subsystem subjected to the unknown dynamics. The statistics of stabilizer measurements, with possible unitary pre-processing operations, are used to characterize the noise, while the observed syndrome can be used to correct the noisy state. Our method requires at most $2(4^n-1)$ configurations to characterize arbitrary noise acting on $n$ qubits.
Factorization of correspondence and camera error for unconstrained dense correspondence applications
Knoblauch, D; Hess-Flores, M; Duchaineau, M; Kuester, F
2009-09-29
A correspondence and camera error analysis for dense correspondence applications such as structure from motion is introduced. This provides error introspection, opening up the possibility of adaptively and progressively applying more expensive correspondence and camera parameter estimation methods to reduce these errors. The presented algorithm evaluates the given correspondences and camera parameters based on an error generated through simple triangulation. This triangulation is based on the given dense correspondences, which need not satisfy epipolar constraints, and the estimated camera parameters. This provides an error map without requiring any information about the perfect solution or making assumptions about the scene. The resulting error is a combination of correspondence and camera parameter errors. A simple, fast low/high pass filter error factorization is introduced, allowing for the separation of correspondence error and camera error. Further analysis of the resulting error maps is applied to allow efficient iterative improvement of correspondences and cameras.
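A low/high-pass factorization of an error signal can be sketched generically with a moving-average filter; the 1-D strip, window size, and attribution of the two components below are illustrative assumptions, not the paper's algorithm:

```python
def low_pass(signal, window=5):
    """Simple moving-average low-pass filter (edges handled by clamping)."""
    n = len(signal)
    half = window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def factorize_error(error_map, window=5):
    """Split a 1-D strip of a triangulation-error map into a smooth component
    (attributed here to camera-parameter error) and a high-frequency residual
    (attributed here to per-pixel correspondence error)."""
    camera = low_pass(error_map, window)
    correspondence = [e - c for e, c in zip(error_map, camera)]
    return camera, correspondence

errors = [0.50, 0.52, 0.49, 0.51, 1.40, 0.50, 0.53, 0.48]  # one outlier match
cam, corr = factorize_error(errors)
# the outlier stands out in `corr`, while `cam` stays near the smooth baseline
```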
Full protection of superconducting qubit systems from coupling errors
M. J. Storcz; J. Vala; K. R. Brown; J. Kempe; F. K. Wilhelm; K. B. Whaley
2005-08-09
Solid state qubits realized in superconducting circuits are potentially extremely scalable. However, strong decoherence may be transferred to the qubits by various elements of the circuits that couple individual qubits, particularly when coupling is implemented over long distances. We propose here an encoding that provides full protection against errors originating from these coupling elements, for a chain of superconducting qubits with a nearest neighbor anisotropic XY-interaction. The encoding is also seen to provide partial protection against errors deriving from general electronic noise.
When soft controls get slippery: User interfaces and human error
Stubler, W.F.; O`Hara, J.M.
1998-12-01
Many types of products and systems that have traditionally featured physical control devices are now being designed with soft controls--input formats appearing on computer-based display devices and operated by a variety of input devices. A review of complex human-machine systems found that soft controls are particularly prone to some types of errors and may affect overall system performance and safety. This paper discusses the application of design approaches for reducing the likelihood of these errors and for enhancing usability, user satisfaction, and system performance and safety.
Comment on "Optimum Quantum Error Recovery using Semidefinite Programming"
M. Reimpell; R. F. Werner; K. Audenaert
2006-06-07
In a recent paper ([1]=quant-ph/0606035) it is shown how the optimal recovery operation in an error correction scheme can be considered as a semidefinite program. As a possible future improvement it is noted that still better error correction might be obtained by optimizing the encoding as well. In this note we present the result of such an improvement, specifically for the four-bit correction of an amplitude damping channel considered in [1]. We get a strict improvement for almost all values of the damping parameter. The method (and the computer code) is taken from our earlier study of such correction schemes (quant-ph/0307138).
Error-prevention scheme with two pairs of qubits
Chu, Shih-I; Yang, Chui-Ping; Han, Siyuan
2002-09-04
The scheme uses two pairs of qubits and prevents errors through a decoherence-free subspace for collective phase errors in pairs; leakage out of the encoding space due to amplitude damping is also addressed. In addition, how to construct decoherence-free states for n qubits is discussed. DOI: 10.1103/Phys...
Estimating market power in homogeneous product markets using a composed error model
Orea, Luis; Steinbuks, Jevgenijs
2012-04-25
for assisting us with computation of residual demand elasticities based on PX bidding data. We also thank David Newbery, Jacob LaRiviere, Mar Reguant, the anonymous reviewer, and the participants of the 3rd International Workshop on Empirical Methods in Energy... that variation in the error term is an exponential function of an intercept term, the day-ahead forecast of total demand and its square (i.e., FQ, FQ2), that are included in the model in order to capture possible demand-size effects, and a vector of days...
Contributions to Human Errors and Breaches in National Security Applications.
Pond, D. J.; Houghton, F. K.; Gilmore, W. E.
2002-01-01
Los Alamos National Laboratory has recognized that security infractions are often the consequence of various types of human errors (e.g., mistakes, lapses, slips) and/or breaches (i.e., deliberate deviations from policies or required procedures with no intention to bring about an adverse security consequence) and therefore has established an error reduction program based in part on the techniques used to mitigate hazard and accident potentials. One cornerstone of this program, definition of the situational and personal factors that increase the likelihood of employee errors and breaches, is detailed here. This information can be used retrospectively (as in accident investigations) to support and guide inquiries into security incidents or prospectively (as in hazard assessments) to guide efforts to reduce the likelihood of error/incident occurrence. Both approaches provide the foundation for targeted interventions to reduce the influence of these factors and for the formation of subsequent 'lessons learned.' Overall security is enhanced not only by reducing the inadvertent releases of classified information but also by reducing the security and safeguards resources devoted to them, thereby allowing these resources to be concentrated on acts of malevolence.
Backward Error and Condition of Polynomial Eigenvalue Problems \\Lambda
Higham, Nicholas J.
1999. Abstract: We develop normwise backward errors and condition numbers for the polynomial eigenvalue problem, where A_l ∈ C^{n×n}, l = 0:m, and we refer to P as a λ-matrix. Few direct numerical methods are available for solving the polynomial eigenvalue problem (PEP). When m ... (Research Council grant GR/L76532.)
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR
Sambridge, Malcolm
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR ANALYSIS USING ... for the different solutions didn't even overlap. Introduction: A discrimination and classification strategy ... ambiguity and possible remanent magnetization; the recovered dipole moment is compared to a library
Rate Regions for Coherent and Noncoherent Multisource Network Error Correction
Ho, Tracey
Joerg Kliewer, New Mexico State University; Elona Erez, Yale University. A single error on a network link may lead to a corruption of many received packets at the destination nodes
Optimal Estimation from Relative Measurements: Error Scaling (Extended Abstract)
Hespanha, João Pedro
A "relative" measurement between x_u and x_v is available: z_{u,v} = x_u − x_v + ε_{u,v} ∈ R^k, (u, v) ∈ V × V. (1) Prabir Barooah, João P. Hespanha. I. ESTIMATION FROM RELATIVE MEASUREMENTS: We consider the problem of estimating a number
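Estimation from relative measurements of this form reduces, in the least-squares setting, to solving a graph Laplacian system with one node pinned as a reference; a small self-contained Python sketch (the function name, the pinning convention, and the scalar k = 1 case are assumptions made here):

```python
def estimate_from_relative(n_nodes, measurements, ref_value=0.0):
    """Least-squares estimate of scalar node variables x_0..x_{n-1} from noisy
    relative measurements z ≈ x_u - x_v, with node 0 fixed as the reference.
    Builds the normal equations (a graph Laplacian system) and solves them
    by naive Gaussian elimination (fine for a small illustrative system)."""
    n = n_nodes
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for (u, v, z) in measurements:
        A[u][u] += 1; A[v][v] += 1
        A[u][v] -= 1; A[v][u] -= 1
        b[u] += z; b[v] -= z
    # pin node 0 to the reference value to remove the translation ambiguity
    A[0] = [1.0] + [0.0] * (n - 1)
    b[0] = ref_value
    for i in range(n):
        p = A[i][i]
        A[i] = [a / p for a in A[i]]; b[i] /= p
        for j in range(n):
            if j != i and A[j][i]:
                f = A[j][i]
                A[j] = [a - f * ai for a, ai in zip(A[j], A[i])]
                b[j] -= f * b[i]
    return b

# triangle of nodes 0, 1, 2 with exact measurements x_u - x_v
meas = [(1, 0, 1.0), (2, 1, 1.0), (2, 0, 2.0)]
x = estimate_from_relative(3, meas)
# → approximately [0.0, 1.0, 2.0]
```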
Low Degree Test with Polynomially Small Error Dana Moshkovitz
Moshkovitz, Dana
Low Degree Test with Polynomially Small Error Dana Moshkovitz October 19, 2014 Abstract A long line of work in Theoretical Computer Science shows that a function is close to a low degree polynomial iff it is close to a low degree polynomial locally. This is known as low degree testing
Time reversal in thermoacoustic tomography - an error estimate
Hristova, Yulia
2008-01-01
The time reversal method in thermoacoustic tomography is used for approximating the initial pressure inside a biological object using measurements of the pressure wave made outside the object. This article presents error estimates for the time reversal method in the cases of variable, non-trapping sound speeds.
Error Control Based Model Reduction for Parameter Optimization of Elliptic Homogenization Problems
of technical devices that rely on multiscale processes, such as fuel cells or batteries. As the solution ... optimization of elliptic multiscale problems with macroscopic optimization functionals and microscopic material
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR
Oldenburg, Douglas W.
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR ANALYSIS USING ... for the different solutions didn't even overlap. Introduction: A discrimination and classification strategy ... UXOs dug per UXO). The discrimination and classification methodology depends on the magnitude of the recov
Improving STT-MRAM Density Through Multibit Error Correction
Sapatnekar, Sachin
Traditional methods enhance robustness at the cost of area/energy by using larger cell sizes to improve the thermal stability of the MTJ cells. This paper employs multibit error correction ... with DRAM ... to the read operation through TX. A key attribute of an MTJ is the notion of thermal stability (Fig. 2).
Designing Automation to Reduce Operator Errors Nancy G. Leveson
Leveson, Nancy
Designing Automation to Reduce Operator Errors. Nancy G. Leveson, Computer Science and Engineering, University of Washington; Everett Palmer, NASA Ames Research Center. Introduction: Advanced automation has been ... of mode-related problems [SW95]. After studying accidents and incidents in the new, highly automated
Verification of unfold error estimates in the unfold operator code
Fehl, D.L.; Biggs, F.
1997-01-01
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
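The comparison this abstract describes, a built-in first-order (error-matrix) uncertainty versus a Monte Carlo rerun with Gaussian deviates, can be mimicked on a toy 2x2 linear unfold. Everything below (the response matrix, the 5% imprecision, the sample count) is an invented illustration, not the UFO code:

```python
import math, random

def unfold(d):
    """Explicit inverse of the assumed 2x2 response matrix R = [[1.0, 0.5], [0.2, 1.0]]."""
    det = 1.0 * 1.0 - 0.5 * 0.2          # = 0.9
    return [(1.0 * d[0] - 0.5 * d[1]) / det,
            (-0.2 * d[0] + 1.0 * d[1]) / det]

true_spectrum = [3.0, 1.0]
data = [1.0 * 3.0 + 0.5 * 1.0, 0.2 * 3.0 + 1.0 * 1.0]   # R @ s = [3.5, 1.6]
sigma = [0.05 * x for x in data]                          # 5% imprecision

# "error matrix" style estimate for the first component s0 = (d0 - 0.5*d1)/0.9:
# propagation through the explicit inverse (exact here, since the unfold is linear)
analytic = math.sqrt((sigma[0]**2 + 0.25 * sigma[1]**2) / (0.9 ** 2))

# Monte Carlo estimate: rerun the unfold on Gaussian-perturbed data sets
random.seed(1)
samples = []
for _ in range(5000):
    noisy = [random.gauss(d, s) for d, s in zip(data, sigma)]
    samples.append(unfold(noisy)[0])
mean = sum(samples) / len(samples)
mc = math.sqrt(sum((x - mean) ** 2 for x in samples) / (len(samples) - 1))
# the two estimates agree to within Monte Carlo statistics
```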
ARTIFICIAL INTELLIGENCE 223 A Geometric Approach to Error
Richardson, David
A Geometric Approach to Error Detection and Recovery for Robot Motion ... and uncertainty in the geometric ... * This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's Artificial Intelligence research
Error Control for Quincunx Multiresolution a la
Amat, Sergio
Harten's nonlinear discrete multiresolution. In multiresolution algorithms one transforms a ... one obtains f̂_L, which should be close to f̄_L; therefore, the algorithms must not be unstable. In this study, we introduce error-control and stability algorithms. We will obtain
Error Bounds from Extra Precise Iterative Refinement James Demmel
Li, Xiaoye Sherry
now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way ... a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] ... Cooperative Agreement No. ACI-9619020; NSF Grant Nos. ACI-9813362 and CCF-0444486; DOE Grant Nos. DE-FG03
Error rate and power dissipation in nano-logic devices
Kim, Jong Un
2005-08-29
of an error-free condition on temperature in single electron logic processors is derived. The size of the quantum dot of a single electron transistor is predicted when a single electron logic processor with a billion single electron transistors works without...
Error rate and power dissipation in nano-logic devices
Kim, Jong Un
2004-01-01
-free condition on temperature in single electron logic processors is derived. The size of the quantum dot of a single electron transistor is predicted when a single electron logic processor with 10^9 single electron transistors works without error at room...
Urban Water Demand with Periodic Error Correction David R. Bell
Griffin, Ronald
them. Econometric estimates of residential demand for water abound (Dalhuisen et al. 2003). Urban Water Demand with Periodic Error Correction, by David R. Bell and Ronald C. Griffin. February, Department of Agricultural Economics, Texas A&M University. Abstract: Monthly demand for publicly supplied
Errors-in-variables problems in transient electromagnetic mineral exploration
Braslavsky, Julio H.
Errors-in-variables problems in transient electromagnetic mineral exploration. K. Lau, J. H... A specific sub-problem of interest in this area ... geological surveys, diamond drilling, and airborne mineral exploration. Our interest here is with ground
Error Control of Iterative Linear Solvers for Integrated Groundwater Models
California at Davis, University of
By Matthew F. Dixon. Error control of iterative linear solvers for integrated groundwater models, which are implicitly coupled to another model, such as surface water models ... in legacy groundwater modeling packages, resulting in overall simulation speedups as large as 7
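Error control for an iterative linear solver usually means a stopping test on the residual; below is a toy Jacobi iteration with a relative-residual tolerance (a generic sketch, not the groundwater code discussed in this record):

```python
import math

def jacobi(A, b, tol=1e-8, max_iter=10000):
    """Jacobi iteration with error control: stop when the relative residual
    ||b - A x|| / ||b|| drops below `tol`."""
    n = len(b)
    x = [0.0] * n
    norm_b = math.sqrt(sum(v * v for v in b))
    for k in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        r = [b[i] - sum(A[i][j] * x_new[j] for j in range(n)) for i in range(n)]
        x = x_new
        if math.sqrt(sum(v * v for v in r)) / norm_b < tol:
            return x, k + 1
    raise RuntimeError("did not converge")

# diagonally dominant test system with exact solution [1, 1, 1]
A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x, iters = jacobi(A, b)
# the residual test guarantees ||b - A x|| <= tol * ||b|| on return
```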
Estimating the error distribution function in nonparametric regression
Mueller, Uschi
Schick, Wolfgang Wefelmeyer. Summary: We construct an efficient estimator for the error distribution ... Keywords: estimator, influence function ... Müller, Schick and Wefelmeyer (2004a). We refer also to the introduction of Müller, Schick and Wefelmeyer (2004b). Our proof is complicated ...
Automatic Error Elimination by Horizontal Code Transfer across Multiple Applications
Polz, Martin
Stelios ..., CSAIL, Cambridge, MA, USA. Abstract: We present Code Phage (CP), a system for automatically transferring ... To the best of our knowledge, CP is the first system to automatically transfer code across multiple
Error field and magnetic diagnostic modeling for W7-X
Lazerson, Sam A.; Gates, David A.; NEILSON, GEORGE H.; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.
2014-07-01
The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady state (30 minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign, as bootstrap-current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.
Development of an Expert System for Classification of Medical Errors
Kopec, Danny
… in the United States. There has been considerable speculation that these figures are either overestimated … published by the Institute of Medicine (IOM) indicated that between 44,000 and 98,000 unnecessary deaths per year occur in hospitals … in the IOM report, what is of importance is that the number of deaths caused by such errors …
The contour method cutting assumption: error minimization and correction
Prime, Michael B; Kastengren, Alan L
2010-01-01
The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented. The important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to introduce a known profile of residual stresses.
Selected CRC Polynomials Can Correct Errors and Thus Reduce Retransmission
Mache, Jens
… sensor networks, minimizing communication is crucial to improve energy consumption and thus lifetime … Keywords: Error Correction, Reliability, Network Protocol, Low Power Consumption. I. INTRODUCTION: Error detection using Cyclic … of retransmitting the whole packet, improves energy consumption and thus lifetime of wireless sensor networks …
A Spline Algorithm for Modeling Cutting Errors Turning Centers
Gilsinn, David E.
… Bandy, Automated Production Technology Division, National Institute of Standards and Technology, 100 Bureau … are made up of features with profiles defined by arcs and lines. An error model for turned parts must take … In the case where there is a requirement of tangency between two features, such as a line tangent to an arc …
Twist-averaged boundary conditions for nuclear pasta Hartree-Fock calculations
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Schuetrumpf, B.; Nazarewicz, W.
2015-10-21
Nuclear pasta phases, present in the inner crust of neutron stars, are associated with nucleonic matter at subsaturation densities arranged in regular shapes. Those complex phases, residing in a layer which is approximately 100 m thick, impact many features of neutron stars. Theoretical quantum-mechanical simulations of nuclear pasta are usually carried out in finite three-dimensional boxes assuming periodic boundary conditions. The resulting solutions are affected by spurious finite-size effects. To remove spurious finite-size effects, it is convenient to employ twist-averaged boundary conditions (TABC) used in condensed matter, nuclear matter, and lattice quantum chromodynamics applications. In this work, we study the effectiveness of TABC in the context of pasta phase simulations within nuclear density functional theory. We demonstrate that by applying TABC reliable results can be obtained from calculations performed in relatively small volumes. By studying various contributions to the total energy, we gain insights into pasta phases in the mid-density range. Future applications will include the TABC extension of the adaptive multiresolution 3D Hartree-Fock solver and Hartree-Fock-Bogoliubov TABC applications to superfluid pasta phases and complex nucleonic topologies as in fission.
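The effect of twist averaging is easy to demonstrate on a toy problem far simpler than a Hartree-Fock pasta calculation: free fermions on a small 1D ring (this model and its parameters are my illustration, not the paper's system). Periodic boundary conditions suffer a shell effect at this filling; averaging the twist angle over the Brillouin zone removes it almost entirely.

```python
import numpy as np

# 1D tight-binding fermions on an L-site ring at half filling: a minimal
# sketch of twist-averaged boundary conditions (TABC).
L, N = 8, 4
exact = -2 / np.pi  # thermodynamic-limit energy per site for this band

def energy_per_site(theta):
    k = (2 * np.pi * np.arange(L) + theta) / L  # twisted allowed momenta
    eps = -2 * np.cos(k)                        # dispersion
    return np.sort(eps)[:N].sum() / L           # fill the N lowest levels

err_pbc = abs(energy_per_site(0.0) - exact)     # plain periodic boundaries
thetas = np.linspace(0, 2 * np.pi, 400, endpoint=False)
err_tabc = abs(np.mean([energy_per_site(t) for t in thetas]) - exact)
```

Even with only 8 sites, the twist-averaged energy lands orders of magnitude closer to the bulk limit than the single periodic calculation, which is the "reliable results from relatively small volumes" effect the abstract reports.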
Gregory Rudnick; Ivo Labbe; Natascha M. Foerster Schreiber; Stijn Wuyts; Marijn Franx; Kristian Finlator; Mariska Kriek; Alan Moorwood; Hans-Walter Rix; Huub Roettgering; Ignacio Trujillo; Arjen van der Wel; Paul van der Werf; Pieter G. van Dokkum
2006-06-21
(Abridged) We present the evolution of the volume-averaged properties of the rest-frame optically luminous galaxy population to z~3, determined from four disjoint deep fields with optical to near-infrared wavelength coverage. We select galaxies above a rest-frame V-band luminosity of 3x10^10 Lsol and characterize their rest-frame UV through optical properties via the mean spectral energy distribution (SED). To measure evolution we apply the same selection criteria to a sample of galaxies from the Sloan Digital Sky Survey and COMBO-17. The mean rest-frame 2200Ang through V-band SED becomes steadily bluer with increasing redshift, but at z … luminous galaxies has increased by a factor of 3.5-7.9 from z=3 to z=0.1, including field-to-field variance uncertainties. After correcting to total, the measured mass densities at z … 2.3) in our LV selected samples contribute 30% and 64% of the stellar mass budget at z~2 and z~2.8 respectively. These galaxies are largely absent from UV surveys, and this result highlights the need for mass selection of high redshift galaxies.
Comparison of Two Gas Selection Methodologies: An Application of Bayesian Model Averaging
Renholds, Andrea S.; Thompson, Sandra E.; Anderson, Kevin K.; Chilton, Lawrence K.
2006-03-31
One goal of hyperspectral imagery analysis is the detection and characterization of plumes. Characterization includes identifying the gases in the plumes, which is a model selection problem. Two gas selection methods compared in this report are Bayesian model averaging (BMA) and minimum Akaike information criterion (AIC) stepwise regression (SR). Simulated spectral data from a three-layer radiance transfer model were used to compare the two methods. Test gases were chosen to span the types of spectra observed, which exhibit peaks ranging from broad to sharp. The size and complexity of the search libraries were varied. Background materials were chosen to either replicate a remote area of eastern Washington or feature many common background materials. For many cases, BMA and SR performed the detection task comparably in terms of the receiver operating characteristic curves. For some gases, BMA performed better than SR when the size and complexity of the search library increased. This is encouraging because we expect improved BMA performance upon incorporation of prior information on background materials and gases.
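The AIC-based side of the comparison above can be sketched on synthetic data. Everything below is invented for illustration (the gas signatures, noise level, and grid are assumptions); Akaike weights are used as a cheap stand-in for the posterior model probabilities that full BMA would compute.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
wav = np.linspace(8.0, 12.0, n)                 # wavelength grid, microns (illustrative)

# Synthetic absorption signatures: gas A sharp, gas B broad (invented shapes).
gasA = np.exp(-0.5 * ((wav - 9.5) / 0.3) ** 2)
gasB = np.exp(-0.5 * ((wav - 10.8) / 0.8) ** 2)
y = 2.0 * gasA + 0.05 * rng.standard_normal(n)  # plume truly contains gas A only

def aic(X, y):
    """AIC for an ordinary least-squares fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * X.shape[1]

models = {"A": [gasA], "B": [gasB], "A+B": [gasA, gasB]}
scores = {m: aic(np.column_stack(cols), y) for m, cols in models.items()}

# Akaike weights: exp(-delta_AIC/2), normalized; a surrogate for BMA weights.
best = min(scores.values())
w = {m: np.exp(-(s - best) / 2) for m, s in scores.items()}
z = sum(w.values())
w = {m: v / z for m, v in w.items()}
```

Models containing the true gas absorb essentially all of the weight, while the wrong-gas model is rejected decisively; the interesting regime in the report is when the library grows and such margins shrink.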
DISTRIBUTED POSE AVERAGING IN CAMERA NETWORKS VIA CONSENSUS ON SE(3) Roberto Tron, Rene Vidal
Roberto Tron, René Vidal. … distributed algorithms for estimating the average pose of an object viewed by a localized network of cameras … Keywords: camera networks; pose estimation; consensus; optimization on manifolds. 1. INTRODUCTION: Recent hardware …
Pipeline for the Creation of Surface-based Averaged Brain Atlases
Menzel, Randolf - Institut für Biologie
Anja Kuß, Hans-Christian Hege … from different image modalities and experiments. In this paper we describe a standardized pipeline … of individuals. The pipeline consists of the major steps: imaging and preprocessing, segmentation, averaging …
Measuring second-order time-average pressure B. L. Smith and G. W. Swift
Smith, Barton L.
PACS numbers: … 43.25.Zx, 43.25.Gf. I. INTRODUCTION: In thermoacoustic engines and refrigerators, streaming can … generate both harmonics such as p2,2 and time-averaged phenomena such as streaming and the time-averaged … and p1. The nature and magnitude of p2,0 have generated activity and controversy in the acoustics …
A comparison of spatial averaging and Cadzow's method for array wavenumber estimation
Harris, D.B.; Clark, G.A.
1989-10-31
We are concerned with resolving superimposed, correlated seismic waves with small-aperture arrays. The limited time-bandwidth product of transient seismic signals complicates the task. We examine the use of MUSIC and Cadzow's ML estimator with and without subarray averaging for resolution potential. A case study with real data favors the MUSIC algorithm and a multiple event covariance averaging scheme.
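The subarray-averaging idea in the abstract, smoothing the covariance over shifted subarrays so that MUSIC can resolve fully correlated arrivals, can be sketched as follows. The array geometry, angles, and noise level are invented for illustration; this is plain forward spatial smoothing, not the covariance-averaging scheme of the case study.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, T = 8, 0.5, 200                 # sensors, spacing (wavelengths), snapshots
angles = np.deg2rad([-10.0, 10.0])    # two arrival directions (illustrative)

def steer(theta, m):
    """Steering vector of an m-element uniform linear array."""
    return np.exp(2j * np.pi * d * np.arange(m) * np.sin(theta))

A = np.column_stack([steer(t, M) for t in angles])
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
S = np.vstack([s, 0.8 * s])           # fully coherent waves, as in multipath
noise = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = A @ S + noise
R = X @ X.conj().T / T

# Forward spatial (subarray) averaging restores the signal-covariance rank
# that coherence destroys.
Msub = 6
K = M - Msub + 1
Rs = sum(R[k:k + Msub, k:k + Msub] for k in range(K)) / K

def music(Rc, m, nsrc):
    """MUSIC pseudospectrum over a grid of candidate angles (degrees)."""
    _, V = np.linalg.eigh(Rc)
    En = V[:, :m - nsrc]              # noise subspace (smallest eigenvalues)
    grid = np.deg2rad(np.linspace(-30, 30, 601))
    p = np.array([1 / np.linalg.norm(En.conj().T @ steer(g, m)) ** 2 for g in grid])
    return np.rad2deg(grid), p

g, p = music(Rs, Msub, 2)
peaks = [i for i in range(1, len(p) - 1) if p[i] >= p[i - 1] and p[i] > p[i + 1]]
est = sorted(g[i] for i in sorted(peaks, key=lambda i: p[i])[-2:])
```

Without the smoothing step, the coherent pair collapses into a rank-one signal subspace and MUSIC cannot separate them; with it, both bearings are recovered.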
Cao, Wenwu
Allowed mesoscopic point group symmetries in domain average engineering of perovskite ferroelectrics … domain average engineering in proper ferroelectric systems arising from the cubic Pm-3m symmetry perovskite … Both solid solution systems have a perovskite structure. Poling along one of the pseudocubic axes …
A Structural Analysis of Vehicle Design Responses to Corporate Average Fuel Economy Policy
Michalek, Jeremy J.
09-0588. ABSTRACT: The U.S. Corporate Average Fuel Economy (CAFE) regulations, which aim … Keywords: fuel economy; Energy policy; Oligopolistic market; Mixed logit
Surface-based display of volume-averaged cerebellar imaging data. Jörn Diedrichsen & Ewa Zotow
Diedrichsen, Jörn
… representation of the cerebellum as a visualization tool for volume-averaged cerebellar data. Volume-based … Data projected onto a surface-based representation based on a single anatomy [2] displays single …
ON THE SELF-AVERAGING OF WAVE ENERGY IN RANDOM MEDIA. GUILLAUME BAL
Bal, Guillaume
Abstract. We consider the stabilization (self-averaging) and destabilization of the energy of waves propagating in random media … transport equations for arbitrary statistical moments of the wave field is used to show that wave energy …
Statewide average major timber product prices started the year on a decline except
Statewide average major timber product prices started the year on a decline, except for a slight rise in hardwood pulpwood price. Pine sawlog price continued to fall during the January/February 2008 period. Statewide pine sawlog price averaged $35.20/ton, the lowest price since January 2006. This was a 5 …
Timber prices remained sluggish during May/June 2009. Statewide average stump-
Timber prices remained sluggish during May/June 2009. Statewide average stumpage prices of all … on housing starts and lumber prices nationally at the end of the period. Statewide pine sawlog prices … The average pine sawlog price was $20.41 per ton for Northeast Texas and $22.60 per ton for Southeast Texas.
Reaction-time binning: A simple method for increasing the resolving power of ERP averages
Poli, Riccardo
RICCARDO POLI … Stimulus-locked, response-locked, and ERP-locked averaging are effective methods for reducing artifacts in ERP analysis. However, they suffer from a magnifying-glass effect: they increase the resolution of specific ERPs …
Ordinary kriging for on-demand average wind interpolation of in-situ wind sensor data
Middleton, Stuart E.
Ordinary kriging for on-demand average wind interpolation of in-situ wind sensor data. Zlatko … comes from wind in-situ observation stations in an area approximately 200 km by 125 km. We provide on-demand average wind interpolation maps. These spatial estimates can then be compared with the results of other …
Tradeoffs and Average-Case Equilibria in Selfish Routing Martin Hoefer
Reiterer, Harald
2007. Abstract: We consider the price of selfish routing in terms of tradeoffs and from an average … the expected price of anarchy of the game for various social cost functions. For total latency social cost … in polynomial time. Furthermore, our analyses of the expected prices are average-case analyses …
A spatiotemporal auto-regressive moving average model for solar radiation
Glasbey, Chris
C. A. Glasbey and D. … Solar radiation, averaged over ten-minute intervals, was recorded at each site for two years … otherwise there are too many parameters to be estimated. As we wish to simulate solar radiation on a network …
MPI Runtime Error Detection with MUST: Advances in Deadlock Detection
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; de Supinski, Bronis R.; Müller, Matthias S.
2013-01-01
The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
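The core of graph-based deadlock detection can be sketched without any MPI machinery: build a wait-for graph over process ranks and search it for a cycle. The sketch below uses plain AND semantics (a process waits on all its successors); MUST's actual model is richer, covering AND plus OR semantics for wildcard receives.

```python
def find_deadlock(wait_for):
    """Return one cycle in a wait-for graph, or None if there is no deadlock.

    Nodes are process ranks; an edge a -> b means "a blocks until b acts"
    (e.g., a posted a blocking receive that only b can match). Cycle search
    is a depth-first traversal with three-color marking.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in wait_for}
    stack = []

    def dfs(n):
        color[n] = GRAY
        stack.append(n)
        for m in wait_for.get(n, []):
            if color.get(m, WHITE) == GRAY:
                return stack[stack.index(m):]   # back edge: report the cycle
            if color.get(m, WHITE) == WHITE:
                cycle = dfs(m)
                if cycle:
                    return cycle
        color[n] = BLACK
        stack.pop()
        return None

    for n in list(wait_for):
        if color[n] == WHITE:
            cycle = dfs(n)
            if cycle:
                return cycle
    return None
```

Two ranks that each post a blocking receive for the other form the classic two-node cycle; a chain of waits with a free endpoint is not a deadlock. The O(p)-per-operation cost the abstract mentions comes from maintaining and re-searching such a graph as operations arrive.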
Comparison of Wind Power and Load Forecasting Error Distributions: Preprint
Hodge, B. M.; Florita, A.; Orwig, K.; Lew, D.; Milligan, M.
2012-07-01
The introduction of large amounts of variable and uncertain power sources, such as wind power, into the electricity grid presents a number of challenges for system operations. One issue involves the uncertainty associated with scheduling power that wind will supply in future timeframes. However, this is not an entirely new challenge; load is also variable and uncertain, and is strongly influenced by weather patterns. In this work we make a comparison between the day-ahead forecasting errors encountered in wind power forecasting and load forecasting. The study examines the distribution of errors from operational forecasting systems in two different Independent System Operator (ISO) regions for both wind power and load forecasts at the day-ahead timeframe. The day-ahead timescale is critical in power system operations because it serves the unit commitment function for slow-starting conventional generators.
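A comparison of error distributions like the one described above usually reduces to comparing shape statistics of two error samples. The sketch below uses synthetic stand-ins (near-Gaussian "load" errors, heavier-tailed Laplace "wind" errors); the distributions and magnitudes are my assumptions, not the ISO data from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for a Gaussian, positive for heavy tails."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

# Illustrative day-ahead forecast errors, expressed per unit of capacity.
load_err = rng.normal(0.0, 0.02, 50000)    # load: near-Gaussian
wind_err = rng.laplace(0.0, 0.05, 50000)   # wind: heavier-tailed
```

Heavier tails in the wind errors mean large misses are disproportionately likely, which is what makes day-ahead wind scheduling harder than load scheduling even at comparable standard deviations.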
Method and system for reducing errors in vehicle weighing systems
Hively, Lee M. (Philadelphia, TN); Abercrombie, Robert K. (Knoxville, TN)
2010-08-24
A method and system (10, 23) for determining vehicle weight to a precision of <0.1%, uses a plurality of weight sensing elements (23), a computer (10) for reading in weighing data for a vehicle (25) and produces a dataset representing the total weight of a vehicle via programming (40-53) that is executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing, (b) by determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation from the weighing data; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% in the vehicle weight.
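The oscillation-removal idea in the claim, identify the dominant oscillatory mode in the weighing record and subtract it before averaging, can be sketched in a simplified form. All numbers below (true weight, mode frequency, noise level) are invented, and the single-sinusoid least-squares scan is my stand-in for the patent's mode-parameter estimation.

```python
import numpy as np

rng = np.random.default_rng(1)
W = 40000.0                                  # true vehicle weight, lbs (illustrative)
t = np.linspace(0.0, 1.0, 500)               # 1 s record from the weight sensors
# Raw signal: weight + an oscillatory mode from vehicle motion + sensor noise.
y = W + 50.0 * np.sin(2 * np.pi * 7.3 * t + 0.4) + 2.0 * rng.standard_normal(t.size)

naive = y.mean()                             # biased: record covers partial cycles

# Identify the oscillatory mode by scanning candidate frequencies and
# least-squares fitting (constant + sinusoid); keep the best-fitting model.
best_rss, est = np.inf, naive
for f in np.arange(5.0, 10.0, 0.05):
    X = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    if rss < best_rss:
        best_rss, est = rss, float(beta[0])  # constant term = weight estimate
```

Because the record spans a non-integer number of oscillation periods, the plain mean is biased by the mode; fitting and removing the mode recovers the weight to well within the patent's 0.1% target in this toy setting.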
Runtime Detection of C-Style Errors in UPC Code
Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D
2011-09-29
Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
On the efficiency of nondegenerate quantum error correction codes for Pauli channels
Gunnar Bjork; Jonas Almlof; Isabel Sainz
2009-05-19
We examine the efficiency of pure, nondegenerate quantum-error correction-codes for Pauli channels. Specifically, we investigate if correction of multiple errors in a block is more efficient than using a code that only corrects one error per block. Block coding with multiple-error correction cannot increase the efficiency when the qubit error-probability is below a certain value and the code size fixed. More surprisingly, existing multiple-error correction codes with a code length equal or less than 256 qubits have lower efficiency than the optimal single-error correcting codes for any value of the qubit error-probability. We also investigate how efficient various proposed nondegenerate single-error correcting codes are compared to the limit set by the code redundancy and by the necessary conditions for hypothetically existing nondegenerate codes. We find that existing codes are close to optimal.
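The break-even analysis behind efficiency comparisons like this one follows from a simple combinatorial formula: a nondegenerate distance-3 code fails exactly when two or more independent physical errors land in one block. The sketch below applies it to the five-qubit code; like the abstract's setting it considers only the channel, ignoring gate and measurement errors and code degeneracy.

```python
from math import comb

def logical_error_rate(p, n):
    """Prob. of two or more physical errors in an n-qubit block.

    A nondegenerate distance-3 code corrects any single error, so block
    failure requires >= 2 independent Pauli errors of probability p.
    """
    return 1.0 - (1 - p) ** n - n * p * (1 - p) ** (n - 1)

# Five-qubit code [[5,1,3]]: at small p the logical rate ~ C(5,2) p^2.
p = 1e-3
pl = logical_error_rate(p, 5)
approx = comb(5, 2) * p ** 2

# Break-even point: the physical error probability above which encoding
# stops helping, found by bisection on pl(p) = p.
lo, hi = 0.05, 0.45
for _ in range(60):
    mid = (lo + hi) / 2
    if logical_error_rate(mid, 5) > mid:
        hi = mid
    else:
        lo = mid
break_even = (lo + hi) / 2
```

Below the break-even point the encoded qubit is quadratically protected; above it the redundancy actively hurts, which is the kind of threshold the abstract's efficiency comparison turns on.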
SU-E-T-51: Bayesian Network Models for Radiotherapy Error Detection
Kalet, A; Phillips, M; Gennari, J [University of Washington, Seattle, WA (United States)]
2014-06-01
Purpose: To develop a probabilistic model of radiotherapy plans using Bayesian networks that will detect potential errors in radiation delivery. Methods: Semi-structured interviews with medical physicists and other domain experts were employed to generate a set of layered nodes and arcs forming a Bayesian Network (BN) which encapsulates relevant radiotherapy concepts and their associated interdependencies. Concepts in the final network were limited to those whose parameters are represented in the institutional database at a level significant enough to develop mathematical distributions. The concept-relation knowledge base was constructed using the Web Ontology Language (OWL) and translated into Hugin Expert Bayes Network files via the RHugin package in the R statistical programming language. A subset of de-identified data derived from a Mosaiq relational database representing 1937 unique prescription cases was processed and pre-screened for errors and then used by the Hugin implementation of the Expectation-Maximization (EM) algorithm to learn all parameter distributions. Individual networks were generated for each of several commonly treated anatomic regions identified by ICD-9 neoplasm categories including lung, brain, lymphoma, and female breast. Results: The resulting Bayesian networks represent a large part of the probabilistic knowledge inherent in treatment planning. By populating the networks entirely with data captured from a clinical oncology information management system over the course of several years of normal practice, we were able to create accurate probability tables with no additional time spent by experts or clinicians. These probabilistic descriptions of treatment planning allow one to check whether a treatment plan is within the normal scope of practice, given some initial set of clinical evidence, and thereby to flag potential outliers for further investigation.
Conclusion: The networks developed here support the use of probabilistic models into clinical chart checking for improved detection of potential errors in RT plans.
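The outlier-flagging logic described above reduces to evaluating a plan's joint probability under the learned network and comparing it against a threshold. The miniature network below is invented for illustration (three variables, made-up probability tables, hypothetical threshold); the real networks learn their tables from clinical data via EM.

```python
# A miniature Bayesian network over three plan variables, factored as a
# chain: site -> dose -> fractions. All probabilities are invented.
p_site = {"lung": 0.6, "brain": 0.4}
p_dose_given_site = {("lung", "60Gy"): 0.7, ("lung", "20Gy"): 0.3,
                     ("brain", "60Gy"): 0.2, ("brain", "20Gy"): 0.8}
p_fx_given_dose = {("60Gy", "30fx"): 0.9, ("60Gy", "1fx"): 0.1,
                   ("20Gy", "30fx"): 0.2, ("20Gy", "1fx"): 0.8}

def plan_probability(site, dose, fx):
    """Joint probability of a plan under the chain-factored network."""
    return (p_site[site]
            * p_dose_given_site[(site, dose)]
            * p_fx_given_dose[(dose, fx)])

def flag_for_review(site, dose, fx, threshold=0.01):
    """Flag plans whose joint probability falls below a hypothetical cutoff."""
    return plan_probability(site, dose, fx) < threshold
```

A common combination sails through while a rarely seen combination (a high dose delivered in a single fraction to an unusual site, here) drops below the cutoff and gets flagged, which is exactly the chart-checking behavior the conclusion describes.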
On the Theory of Average Case Complexity. Shai Ben-David …
Goldreich, Oded
Appeared in Journal of Computer and System Sciences, Vol. 44, No. 2, April 1992, pp. 193-219. I've corrected some errors which I found while scanning, but did not proofread this version. O.G., 1997. … Science Foundation (BSF), Jerusalem, Israel. Partially supported by a Natural Sciences and Engineering …
Ford, D.P.; Schwartz, B.S.; Powell, S.; Nelson, T.; Keller, L.; Sides, S.; Agnew, J.; Bolla, K.; Bleecker, M. )
1991-06-01
Previous reports have attributed a range of neurobehavioral effects to low-level, occupational solvent exposure. These studies have generally been limited in their exposure assessments and have specifically lacked good estimates of exposure intensity. In the present study, the authors describe the development of two exposure variables that quantitatively integrate industrial hygiene sampling data with estimates of exposure duration--a cumulative exposure (CE) estimate and a lifetime weighted average exposure (LWAE) estimate. Detailed occupational histories were obtained from 187 workers at two paint manufacturing plants. Historic industrial hygiene sampling data for total hydrocarbons (a composite variable of the major neurotoxic solvents present) were grouped according to 20 uniform, temporally stable exposure zones, which had been defined during plant walk-through surveys. Sampling at the time of the study was used to characterize the few zones for which historic data were limited or unavailable. For each participant, the geometric mean total hydrocarbon level for each exposure zone worked in was multiplied by the duration of employment in that zone; the resulting products were summed over the working lifetime to create the CE variable. The CE variable was divided by the total duration of employment in solvent-exposed jobs to create the LWAE variable. The explanatory value of each participant's LWAE estimate in the regression of simple visual reaction time (a neurobehavioral test previously shown to be affected by chronic solvent exposure) on exposure was compared with that of several other exposure variables, including exposure duration and an exposure variable based on an ordinal ranking of the exposure zones.
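The two exposure variables defined above are simple arithmetic over the work history: CE is the sum over exposure zones of (geometric-mean concentration x years worked in that zone), and LWAE is CE divided by total exposed years. The zone values below are invented for illustration.

```python
# CE   = sum over zones of (geometric-mean concentration x years worked)
# LWAE = CE / total years in solvent-exposed jobs
zones = [
    {"geomean_mg_m3": 10.0, "years": 5.0},   # e.g., a low-exposure zone
    {"geomean_mg_m3": 50.0, "years": 2.0},   # e.g., a high-exposure zone
]
ce = sum(z["geomean_mg_m3"] * z["years"] for z in zones)   # mg/m3-years
total_years = sum(z["years"] for z in zones)
lwae = ce / total_years                                    # mg/m3
```

CE grows with time on the job while LWAE normalizes it away, which is why the two variables can rank the same workers differently when exposure intensity varies across careers.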
Fact #870: April 27, 2015 Corporate Average Fuel Economy Progress, 1978-2014
Broader source: Energy.gov [DOE]
The Corporate Average Fuel Economy (CAFE) is the sales-weighted harmonic mean fuel economy of a manufacturer’s fleet of new cars or light trucks in a certain model year (MY). First enacted by...
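The sales-weighted harmonic mean in the definition above works out to total sales divided by the sum of sales/mpg. The fleet numbers below are invented for illustration, not actual manufacturer data.

```python
# A fleet's CAFE is total sales divided by the sum of sales/mpg.
fleet = [(300_000, 30.0), (100_000, 20.0)]   # (units sold, mpg), illustrative
total_sales = sum(n for n, _ in fleet)
cafe = total_sales / sum(n / mpg for n, mpg in fleet)

# For comparison: the sales-weighted arithmetic mean of the same fleet.
arith = sum(n * mpg for n, mpg in fleet) / total_sales
```

The harmonic mean lands below the arithmetic mean (about 26.7 vs 27.5 mpg here) because low-mpg vehicles burn disproportionately more fuel per mile, which is exactly what the harmonic weighting captures.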
Giant aeolian dune size determined by the average depth of the atmospheric boundary layer
… Tlemcen, Algeria; Nicholas School of the Environment and Earth Sciences, Center for Nonlinear … be related to statistically averaged quantities. The detailed modelling of the atmospheric processes is very …
Fact #693: September 19, 2011 Average Vehicle Footprint for Cars and Light Trucks
Broader source: Energy.gov [DOE]
A vehicle footprint is the area defined by the four points where the tires touch the ground. It is calculated as the product of the wheelbase and the average track width of the vehicle. The...
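The footprint calculation above is a single product; the vehicle dimensions below are invented for illustration.

```python
# Footprint = wheelbase x average track width.
wheelbase_in = 106.3   # inches (illustrative mid-size sedan)
track_in = 61.6        # average of front and rear track widths, inches
footprint_ft2 = wheelbase_in * track_in / 144.0   # convert sq in -> sq ft
```

Footprint matters because recent CAFE standards set each vehicle's fuel-economy target as a function of this area.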
System average rates of U.S. investor-owned electric utilities : a statistical benchmark study
Berndt, Ernst R.
1995-01-01
Using multiple regression methods, we have undertaken a statistical "benchmark" study comparing system average electricity rates charged by three California utilities with 96 other US utilities over the 1984-93 time period. ...
AVERAGES ALONG POLYNOMIAL SEQUENCES IN DISCRETE NILPOTENT GROUPS: SINGULAR RADON TRANSFORMS
Magyar, Akos
… can consider discrete maximal Radon transforms, which have applications to pointwise ergodic theorems, and discrete singular Radon transforms. In this paper we prove L2 boundedness of discrete …
Fact #624: May 24, 2010 Corporate Average Fuel Economy Standards, Model Years 2012-2016
Broader source: Energy.gov [DOE]
The final rule for the Corporate Average Fuel Economy (CAFE) Standards was published in March 2010. Under this rule, each light vehicle model produced for sale in the United States will have a fuel...
Fact #728: May 21, 2012 Average Trip Length is Less Than Ten Miles
Broader source: Energy.gov [DOE]
The average trip length (one-way) is 9.7 miles according to the 2009 Nationwide Personal Transportation Survey. Trip lengths vary by the purpose of the trip. Shopping and family/personal business...
Advancing the Theoretical Foundation of the Partially-averaged Navier-Stokes Approach
Reyes, Dasia Ann
2013-05-06
computational technologies. Low-fidelity approaches such as Reynolds-averaged Navier-Stokes (RANS), although widely used, are inherently inadequate for turbulent flows with complex flow features. VR bridging methods fill the gap between DNS and RANS by allowing...
Variation in the annual average radon concentration measured in homes in Mesa County, Colorado
Rood, A.S.; George, J.L.; Langner, G.H. Jr.
1990-04-01
The purpose of this study is to examine the variability in the annual average indoor radon concentration. The TMC has been collecting annual average radon data for the past 5 years in 33 residential structures in Mesa County, Colorado. This interim report presents the data collected to date; current plans are to continue the study. 62 refs., 3 figs., 12 tabs.
Experiments with a time-dependent, zonally averaged, seasonal, enery balance climatic model
Thompson, Starley Lee
1977-01-01
EXPERIMENTS WITH A TIME-DEPENDENT, ZONALLY AVERAGED, SEASONAL, ENERGY BALANCE CLIMATIC MODEL. A Thesis by STARLEY LEE THOMPSON, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of MASTER OF SCIENCE, December 1977. Major Subject: Meteorology. Approved as to style and content by: (Chairman of Committee …
HEALTH POLICY AND SYSTEMS Nurses' Practice Environments, Error Interception Practices,
Xie, Minge
… 7,000 inpatient deaths per year in the United States (US). On average, a U.S. hospital patient … College of Nursing, Rutgers, The State University of New Jersey, Newark, NJ …
Nuclear Arms Control R&D Consortium includes Los Alamos
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
A consortium led by the University of Michigan that includes LANL as …
Cappelli, M.; Gadomski, A. M.; Sepiellis, M.; Wronikowska, M. W.
2012-07-01
In the field of nuclear power plant (NPP) safety modeling, the perception of the role of socio-cognitive engineering (SCE) is continuously increasing. Today, the focus is especially on the identification of human and organization decisional errors caused by operators and managers under high-risk conditions, as is evident from analyzing reports on nuclear incidents that occurred in the past. At present, the engineering and social safety requirements need to enlarge their domain of interest in such a way as to include all possible loss-generating events that could be the consequences of an abnormal state of a NPP. Socio-cognitive modeling of Integrated Nuclear Safety Management (INSM) using the TOGA meta-theory has been discussed during the ICCAP 2011 Conference. In this paper, more detailed aspects of cognitive decision-making and its possible human errors and organizational vulnerability are presented. The formal TOGA-based network model for cognitive decision-making makes it possible to indicate and analyze nodes and arcs in which plant operator and manager errors may appear. The TOGA's multi-level IPK (Information, Preferences, Knowledge) model of abstract intelligent agents (AIAs) is applied. In the NPP context, a super-safety approach is also discussed, by taking into consideration unexpected events and managing them from a systemic perspective. As the nature of human errors depends on the specific properties of the decision-maker and the decisional context of operation, a classification of decision-making using IPK is suggested. Several types of initial situations of decision-making useful for the diagnosis of NPP operator and manager errors are considered. The developed models can be used as a basis for applications to NPP educational or engineering simulators to be used for training the NPP executive staff. (authors)
Newport News in Review, ch. 47, segment includes TEDF groundbreaking...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
https://www.jlab.org/news/articles/newport-news-review-ch-47-segment-includes-tedf-groundbreaking-event
Solar Energy Education. Reader, Part II. Sun story. [Includes...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Solar Energy Education. Reader, Part II. Sun story. Includes glossary.
Microfluidic devices and methods including porous polymer monoliths...
Office of Scientific and Technical Information (OSTI)
Microfluidic devices and methods including porous polymer monoliths.
A CHARACTERISTIC GALERKIN METHOD WITH ADAPTIVE ERROR CONTROL FOR THE CONTINUOUS CASTING PROBLEM
Nochetto, Ricardo H.
The continuous casting problem is a convection-dominated, nonlinearly degenerate diffusion problem. It is discretized … adaptive method. Keywords: a posteriori error estimates, continuous casting, method of characteristics …
Simulations of error in quantum adiabatic computations of random 2-SAT instances
Gill, Jay S. (Jay Singh)
2006-01-01
This thesis presents a series of simulations of quantum computations using the adiabatic algorithm. The goal is to explore the effect of error, using a perturbative approach that models 1-local errors to the Hamiltonian ...
Design techniques for graph-based error-correcting codes and their applications
Lan, Ching Fu
2006-04-12
… error-correcting (channel) coding. The main idea of error-correcting codes is to add redundancy to the information to be transmitted so that the receiver can exploit the correlation between transmitted information and redundancy and correct or detect errors caused …
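The add-redundancy-then-correct idea summarized above is concrete in the classic Hamming(7,4) code, shown here as a minimal non-graph-based example (the thesis itself concerns graph-based codes): four data bits gain three parity bits, and the recomputed parities at the receiver spell out the position of any single flipped bit.

```python
def encode(d):
    """Hamming(7,4): data bits [d1,d2,d3,d4] -> 7-bit codeword.

    Parity bits sit at positions 1, 2, and 4 (1-indexed); each covers the
    positions whose index has the corresponding binary digit set.
    """
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(r):
    """Recompute parities; the syndrome spells the error position in binary."""
    s = ((r[0] ^ r[2] ^ r[4] ^ r[6]) * 1
         + (r[1] ^ r[2] ^ r[5] ^ r[6]) * 2
         + (r[3] ^ r[4] ^ r[5] ^ r[6]) * 4)
    r = list(r)
    if s:
        r[s - 1] ^= 1   # flip the bit the syndrome points at
    return r
```

Any single-bit error in the seven transmitted bits is located and repaired from the three redundant bits alone, which is the "explore the correlation between information and redundancy" step in miniature.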
Revision of the Branch Technical Position on Concentration Averaging and Encapsulation - 12510
Heath, Maurice; Kennedy, James E.; Ridge, Christianne; Lowman, Donald [U.S. NRC, Washington, DC, 20555-0001 (United States); Cochran, John [Sandia National Laboratory (United States)
2012-07-01
The U.S. Nuclear Regulatory Commission (NRC) regulation governing low-level waste (LLW) disposal, 'Licensing Requirements for Land Disposal of Radioactive Waste', 10 CFR Part 61, establishes a waste classification system based on the concentration of specific radionuclides contained in the waste. The regulation also states, at 10 CFR 61.55(a)(8), that, 'the concentration of a radionuclide (in waste) may be averaged over the volume of the waste, or weight of the waste if the units are expressed as nanocuries per gram'. The NRC's Branch Technical Position on Concentration Averaging and Encapsulation provides guidance on averaging radionuclide concentrations in waste under 10 CFR 61.55(a)(8) when classifying waste for disposal. In 2007, the NRC staff proposed to revise the Branch Technical Position on Concentration Averaging and Encapsulation. The Branch Technical Position on Concentration Averaging and Encapsulation is an NRC guidance document for averaging and classifying wastes under 10 CFR 61. The Branch Technical Position on Concentration Averaging and Encapsulation is used by nuclear power plant (NPP) licensees and sealed source users, among others. In addition, three of the four U.S. LLW disposal facility operators are required to honor the Branch Technical Position on Concentration Averaging and Encapsulation as a licensing condition. In 2010, the Commission directed the staff to develop guidance regarding large scale blending of similar homogenous waste types, as described in SECY-10-0043, as part of its Branch Technical Position on Concentration Averaging and Encapsulation revision. The Commission is improving the regulatory approach used in the Branch Technical Position on Concentration Averaging and Encapsulation by making it more risk-informed and performance-based, which is more consistent with the agency's regulatory policies.
Among the improvements to the Branch Technical Position are more risk-informed limits on the sizes of sealed sources that can be safely disposed of. Using more realistic intruder exposure scenarios, the suggested limits for Class B and C disposal of sealed sources, particularly Cs-137 and Co-60, have been increased. These suggested changes, and others in the Branch Technical Position, if adopted by Agreement States, have the potential to eliminate numerous orphan sources (i.e., sources that currently have no disposal pathway) that are now being stored. Permanent disposal of these sources, rather than temporary storage, will help reduce safety and security risks. The revised Branch Technical Position includes an alternative-approaches section that provides flexibility to generators and processors while ensuring that intruder protection is maintained: alternative approaches allow consideration of the likelihood of intrusion, averaging over larger volumes, and disposal of large-activity sources. The revision has also improved the document's organization and clarity, better documented the bases for its positions, and made the positions more risk-informed while maintaining the intruder protection required by 10 CFR Part 61. (authors)
The Impact of Soil Sampling Errors on Variable Rate Fertilization
R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink
2004-07-01
Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. 
Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could amount to a $34 per acre difference in fertilization cost. Because of these differences, better laboratory analysis or better sampling methods may be needed, or more samples collected, to ensure that the soil measurements are truly representative of the field's spatial variability.
Error-field penetration in reversed magnetic shear configurations
Wang, H. H.; Wang, Z. X.; Wang, X. Q. [MOE Key Laboratory of Materials Modification by Beams of the Ministry of Education, School of Physics and Optoelectronic Engineering, Dalian University of Technology, Dalian 116024 (China)]; Wang, X. G. [School of Physics, Peking University, Beijing 100871 (China)]
2013-06-15
Error-field penetration in reversed magnetic shear (RMS) configurations is numerically investigated by using a two-dimensional resistive magnetohydrodynamic model in slab geometry. To explore different dynamic processes in locked modes, three equilibrium states are adopted. Stable, marginal, and unstable current profiles for double tearing modes are designed by varying the current intensity between two resonant surfaces separated by a certain distance. Further, the dynamic characteristics of locked modes in the three RMS states are identified, and the relevant physics mechanisms are elucidated. The scaling behavior of critical perturbation value with initial plasma velocity is numerically obtained, which obeys previously established relevant analytical theory in the viscoresistive regime.
Plasma parameter scaling of the error-field penetration threshold in tokamaks
Fitzpatrick, Richard
Response of a rotating tokamak plasma to a resonant error-field, Phys. Plasmas 21, 092513 (2014); 10.1063/1.4896244. A nonideal error-field response model for strongly shaped tokamak plasmas, Phys. Plasmas 17, 112502 (2010).
Implementing an object-oriented error sensitive GIS
Duckham, Matt
Advances in the handling of uncertainty within GIS include the production of what has been described as an error-sensitive GIS. There are opportunities, but also impediments, to the implementation of such an error-sensitive GIS. An important barrier …
Repeated quantum error correction on a continuously encoded qubit by real-time feedback
Julia Cramer; Norbert Kalb; M. Adriaan Rol; Bas Hensen; Machiel S. Blok; Matthew Markham; Daniel J. Twitchen; Ronald Hanson; Tim H. Taminiau
2015-08-06
Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits, so that errors can be detected without affecting the encoded state. To be compatible with universal fault-tolerant computations, it is essential that the states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected qubit using a diamond quantum processor. We encode a logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements using an ancilla electron spin, and apply corrections on the encoded state by real-time feedback. The actively error-corrected qubit is robust against errors and multiple rounds of error correction prevent errors from accumulating. Moreover, by correcting phase errors naturally induced by the environment, we demonstrate that encoded quantum superposition states are preserved beyond the dephasing time of the best physical qubit used in the encoding. These results establish a powerful platform for the fundamental investigation of error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing.
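The logical-qubit encoding above can be illustrated classically: a three-qubit repetition code corrects any single error per round by majority vote, so the logical error rate is suppressed to order p² per round. A minimal Monte Carlo sketch (illustrative only, not the diamond-processor protocol; the error probability `p` and round counts are arbitrary):

```python
import random

def run_round(qubits, p, rng):
    # Each physical qubit suffers an independent flip with probability p.
    return [q ^ (rng.random() < p) for q in qubits]

def correct(qubits):
    # Majority vote restores the encoded value if at most one qubit flipped.
    majority = int(sum(qubits) >= 2)
    return [majority] * 3

def logical_error_rate(p, rounds, trials, seed=0):
    # Fraction of trials in which the logical value is wrong after
    # repeated rounds of noise followed by active correction.
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        qubits = [0, 0, 0]               # encoded logical 0
        for _ in range(rounds):
            qubits = correct(run_round(qubits, p, rng))
        failures += qubits[0] != 0       # logical flip survived correction
    return failures / trials

if __name__ == "__main__":
    # For p << 1 the per-round logical failure rate is ~3p^2, well below
    # the per-round rate p of a single unprotected qubit.
    print(logical_error_rate(0.05, rounds=10, trials=20000))
```

The phase errors in the experiment play the role of the flips here; the ancilla-based non-destructive parity measurements are what make the majority vote possible without destroying the encoded superposition.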
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
Robert Granat, Kiri … We incorporate algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect and recover from radiation-induced errors. A common hardware technique for achieving radiation protection …
Error Correction on a Tree: An Instanton Approach. V. Chernyak
Stepanov, Misha
A generic method is presented for analytical or semianalytical estimation of the post-error-correction bit error rate (BER) when forward error correction is utilized for transmitting information through a noisy channel. The method applies to a variety …
Exposure Measurement Error in Time-Series Studies of Air Pollution: Concepts and Consequences
Dominici, Francesca
Keywords: measurement error, air pollution, time series, exposure. Measurement error may have substantial implications for interpreting time-series studies of air pollution and health. …
Verification of unfold error estimates in the UFO code
Fehl, D.L.; Biggs, F.
1996-07-01
Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error), obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation), and 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
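The Monte Carlo error estimate described above is generic: perturb the measurements with Gaussian deviates, re-run the unfold on each replica, and take the spread of the results. A toy two-channel sketch (the fixed 2x2 response matrix, the direct matrix inversion, and the 5% imprecision are illustrative stand-ins, not the UFO algorithm):

```python
import random
import statistics

def unfold(data):
    # Toy stand-in for an unfold: invert a fixed 2x2 response matrix R,
    # so the "spectrum" s solves R @ s = data.
    r11, r12, r21, r22 = 1.0, 0.5, 0.3, 1.0
    det = r11 * r22 - r12 * r21
    d1, d2 = data
    return [(r22 * d1 - r12 * d2) / det, (r11 * d2 - r21 * d1) / det]

def mc_unfold_error(data, rel_sigma=0.05, n_sets=100, seed=1):
    # Generate random data sets with prescribed Gaussian imprecision and
    # report the standard deviation of each unfolded component.
    rng = random.Random(seed)
    replicas = []
    for _ in range(n_sets):
        perturbed = [d * (1.0 + rel_sigma * rng.gauss(0.0, 1.0)) for d in data]
        replicas.append(unfold(perturbed))
    return [statistics.stdev(col) for col in zip(*replicas)]

if __name__ == "__main__":
    print(mc_unfold_error([10.0, 7.0]))
```

As the abstract notes, this sampling approach needs no error matrix, which is why it remains usable for underdetermined problems.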
Aperiodic dynamical decoupling sequences in presence of pulse errors
Zhi-Hui Wang; V. V. Dobrovitski
2011-01-12
Dynamical decoupling (DD) is a promising tool for preserving the quantum states of qubits. However, small imperfections in the control pulses can seriously affect the fidelity of decoupling, and qualitatively change the evolution of the controlled system at long times. Using both analytical and numerical tools, we theoretically investigate the effect of pulse error accumulation for two aperiodic DD sequences: Uhrig's DD (UDD) protocol [G. S. Uhrig, Phys. Rev. Lett. {\bf 98}, 100504 (2007)] and the Quadratic DD (QDD) protocol [J. R. West, B. H. Fong and D. A. Lidar, Phys. Rev. Lett. {\bf 104}, 130501 (2010)]. We consider the implementation of these sequences using the electron spins of phosphorus donors in silicon, where DD sequences are applied to suppress dephasing of the donor spins. The dependence of the decoupling fidelity on different initial states of the spins is the focus of our study. We investigate in detail the initial drop in the DD fidelity and its long-term saturation. We also demonstrate that by applying the control pulses along different directions, the performance of QDD protocols can be noticeably improved, and we explain the reason for this improvement. Our results can be useful for future implementations of aperiodic decoupling protocols, and for better understanding of the impact of errors on quantum control of spins.
In-Line-Test of Variability and Bit-Error-Rate of HfOx-Based Resistive Memory
Ji, B L; Ye, Q; Gausepohl, S; Deora, S; Veksler, D; Vivekanand, S; Chong, H; Stamper, H; Burroughs, T; Johnson, C; Smalley, M; Bennett, S; Kaushik, V; Piccirillo, J; Rodgers, M; Passaro, M; Liehr, M
2015-01-01
The spatial and temporal variability of HfOx-based resistive random access memory (RRAM) is investigated for manufacturing and product designs. Manufacturing variability is characterized at different levels including lots, wafers, and chips. Bit-error-rate (BER) is proposed as a holistic parameter for the write cycle resistance statistics. Using the electrical in-line-test cycle data, a method is developed to derive BERs as functions of the design margin, to provide guidance for technology evaluation and product design. The proposed BER calculation can also be used in off-line bench tests and built-in self-test (BIST) for adaptive error correction, and for other types of random access memories.
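The margin-dependent BER can be sketched directly from cycle data: for a read threshold placed between the low-resistance (LRS) and high-resistance (HRS) distributions, a bit error is any cycle whose resistance falls on the wrong side of the threshold. A minimal sketch with synthetic resistance values (the data and the 5 kOhm threshold are illustrative, not measurements from the paper):

```python
def bit_error_rate(lrs_ohms, hrs_ohms, threshold_ohms):
    # A set (LRS) cycle read at or above threshold, or a reset (HRS) cycle
    # read below it, counts as a bit error.
    errors = sum(r >= threshold_ohms for r in lrs_ohms)
    errors += sum(r < threshold_ohms for r in hrs_ohms)
    return errors / (len(lrs_ohms) + len(hrs_ohms))

if __name__ == "__main__":
    lrs = [900, 1100, 1000, 1300, 950]           # low-resistance (set) cycles
    hrs = [90000, 120000, 2000, 150000, 80000]   # high-resistance (reset) cycles
    # One tail cycle (2 kOhm in the HRS list) falls below a 5 kOhm threshold.
    print(bit_error_rate(lrs, hrs, 5000))  # → 0.1
```

Sweeping `threshold_ohms` across the gap between the two distributions yields BER as a function of design margin, which is the curve the abstract proposes for technology evaluation.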
Gupta, Tejpal; Jalali, Rakesh; Goswami, Savita; Nair, Vimoj; Moiyadi, Aliasgar; Epari, Sridhar; Sarin, Rajiv
2012-08-01
Purpose: To report on acute toxicity, longitudinal cognitive function, and early clinical outcomes in children with average-risk medulloblastoma. Methods and Materials: Twenty children ≥5 years of age classified as having average-risk medulloblastoma were accrued on a prospective protocol of hyperfractionated radiation therapy (HFRT) alone. Radiotherapy was delivered with two daily fractions (1 Gy/fraction, 6 to 8 hours apart, 5 days/week), initially to the neuraxis (36 Gy/36 fractions), followed by conformal tumor bed boost (32 Gy/32 fractions) for a total tumor bed dose of 68 Gy/68 fractions over 6 to 7 weeks. Cognitive function was prospectively assessed longitudinally (pretreatment and at specified posttreatment follow-up visits) with the Wechsler Intelligence Scale for Children to give verbal quotient, performance quotient, and full-scale intelligence quotient (FSIQ). Results: The median age of the study cohort was 8 years (range, 5-14 years), representing a slightly older cohort. Acute hematologic toxicity was mild and self-limiting. Eight (40%) children had subnormal intelligence (FSIQ <85), including 3 (15%) with mild mental retardation (FSIQ 56-70) even before radiotherapy. Cognitive functioning for all tested domains was preserved in children evaluable at 3 months, 1 year, and 2 years after completion of HFRT, with no significant decline over time. Age at diagnosis or baseline FSIQ did not have a significant impact on longitudinal cognitive function. At a median follow-up time of 33 months (range, 16-58 months), 3 patients had died (2 of relapse and 1 of accidental burns), resulting in 3-year relapse-free survival and overall survival of 83.5% and 83.2%, respectively. Conclusion: HFRT without upfront chemotherapy has an acceptable acute toxicity profile, without an unduly increased risk of relapse, with preserved cognitive functioning in children with average-risk medulloblastoma.
GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology
Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; Fenech Conti, Ian; Gavazzi, Raphael; Gentile, Marc; Gill, Mandeep S. S.; Hogg, David W.; Huff, Eric M.; Jee, M. James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C.; Marshall, Philip J.; Meyers, Joshua E.; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Ngole Mboula, Fred Maurice; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stephane; Rhodes, Jason; Schneider, Michael D.; Shan, Huanyuan; Sheldon, Erin S.; Simet, Melanie; Starck, Jean -Luc; Sureau, Florent; Tewes, Malte; Zarb Adami, Kristian; Zhang, Jun; Zuntz, Joe
2015-05-11
We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects, such as truncation biases due to finite galaxy postage stamps and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
Average M shell fluorescence yields for elements with 70 ≤ Z ≤ 92
Kahoul, A.; Deghfel, B.; Aylikci, V.; Aylikci, N. K.; Nekkab, M.
2015-03-30
The theoretical, experimental and analytical methods for the calculation of the average M-shell fluorescence yield (ω̄_M) of different elements are very important because of the large number of their applications in various areas of physical chemistry and medical research. In this paper, the bulk of the average M-shell fluorescence yield measurements reported in the literature, covering the period 1955 to 2005, are interpolated by using an analytical function to deduce the empirical average M-shell fluorescence yield in the atomic range 70 ≤ Z ≤ 92. The results were compared with the theoretical and fitted values reported by other authors. Reasonable agreement was typically obtained between our results and other works.
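The empirical interpolation step can be illustrated with a simple least-squares fit of measured yields against atomic number Z. The sketch below uses synthetic data and a linear model, not the authors' analytical function or their compiled measurements:

```python
def linear_fit(zs, ys):
    # Closed-form least squares for y ≈ a + b*z.
    n = len(zs)
    mean_z = sum(zs) / n
    mean_y = sum(ys) / n
    b = sum((z - mean_z) * (y - mean_y) for z, y in zip(zs, ys)) / \
        sum((z - mean_z) ** 2 for z in zs)
    a = mean_y - b * mean_z
    return a, b

if __name__ == "__main__":
    # Synthetic M-shell yields rising with Z over 70 <= Z <= 92 (illustrative).
    zs = [70, 74, 79, 82, 90, 92]
    ys = [0.020, 0.024, 0.029, 0.032, 0.040, 0.042]
    a, b = linear_fit(zs, ys)
    print(a + b * 80)  # interpolated yield at Z = 80
```

Once fitted, the function supplies an empirical yield for any Z in the range, including elements with no direct measurement.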
Reconstruction of ionization probabilities from spatially averaged data in N dimensions
Strohaber, J.; Kolomenskii, A. A.; Schuessler, H. A.
2010-07-15
We present an analytical inversion technique, which can be used to recover ionization probabilities from spatially averaged data in an N-dimensional detection scheme. The solution is given as a power series in intensity. For this reason, we call this technique a multiphoton expansion (MPE). The MPE formalism was verified with an exactly solvable inversion problem in two dimensions, and probabilities in the postsaturation region, where the intensity-selective scanning approach breaks down, were recovered. In three dimensions, ionization probabilities of Xe were successfully recovered with MPE from simulated (using the Ammosov-Delone-Krainov tunneling theory) ion yields. Finally, we tested our approach with intensity-resolved benzene-ion yields, which show a resonant multiphoton ionization process. By applying MPE to this data (which were artificially averaged), the resonant structure was recovered, which suggests that the resonance in benzene may have been observed in spatially averaged data taken elsewhere.
Mathur, Anuj
1994-01-01
In this work we study the pollution-error in the h-version of the finite element method and its effect on the local quality of a-posteriori error estimators. We show that the pollution-effect in an interior subdomain depends on the relationship...
Fossen, Haakon
Errata, 3rd printing:
· Page 3, Fig. 1.2 has an error in the stratigraphic key: "Tertiary" should …
· … change "-amplitude" to "-wavelength".
· Page 231, 6th and 3rd last lines of the page: add "Figure" in front of "19.5a …"; 3rd line: "three principal axes" (not two).
Cropper, Clark; Perfect, Edmund; van den Berg, Dr. Elmer; Mayes, Melanie
2010-01-01
The capillary pressure-saturation function can be determined from centrifuge drainage experiments. In soil physics, the data resulting from such experiments are usually analyzed by the 'averaging method.' In this approach, average relative saturation, ⟨S⟩, is expressed as a function of average capillary pressure, ⟨ψ⟩, i.e., ⟨S⟩(⟨ψ⟩). In contrast, the capillary pressure-saturation function at a physical point, i.e., S(ψ), has been extracted from similar experiments in petrophysics using the 'integral method.' The purpose of this study was to introduce the integral method applied to centrifuge experiments to a soil physics audience and to compare S(ψ) and ⟨S⟩(⟨ψ⟩) functions, as parameterized by the Brooks-Corey and van Genuchten equations, for 18 samples drawn from a range of porous media (i.e., Berea sandstone, glass beads, and Hanford sediments). Steady-state centrifuge experiments were performed on preconsolidated samples with a URC-628 Ultra-Rock Core centrifuge. The angular velocity and outflow data sets were then analyzed using both the averaging and integral methods. The results show that the averaging method smoothes out the drainage process, yielding less steep capillary pressure-saturation functions relative to the corresponding point-based curves. Maximum deviations in saturation between the two methods ranged from 0.08 to 0.28 and generally occurred at low suctions. These discrepancies can lead to inaccurate predictions of other hydraulic properties such as the relative permeability function. Therefore, we strongly recommend use of the integral method instead of the averaging method when determining the capillary pressure-saturation function by steady-state centrifugation. This method can be successfully implemented using either the van Genuchten or Brooks-Corey functions, although the latter provides a more physically precise description of air entry at a physical point.
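For reference, the Brooks-Corey parameterization used for the point-based S(ψ) curve has a simple closed form; a sketch with illustrative parameters (the air-entry pressure `psi_b` and pore-size index `lam` are not values from the study):

```python
def brooks_corey(psi, psi_b=10.0, lam=2.0):
    # Effective saturation at capillary pressure psi:
    # S = 1 for psi <= psi_b (below air entry), else (psi_b / psi)**lam.
    if psi <= psi_b:
        return 1.0
    return (psi_b / psi) ** lam

if __name__ == "__main__":
    for psi in (5.0, 10.0, 20.0, 40.0):
        print(psi, brooks_corey(psi))
```

The sharp break at ψ_b is the "physically precise description of air entry" the abstract refers to, in contrast to the smooth van Genuchten form, which has no distinct air-entry value.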
Note on an integral expression for the average lifetime of the bound state in 2D
Thorsten Prustel; Martin Meier-Schellersheim
2012-10-04
Recently, an exact Green's function of the diffusion equation for a pair of spherical interacting particles in two dimensions subject to a backreaction boundary condition was used to derive an exact expression for the average lifetime of the bound state. Here, we show that the corresponding divergent integral may be considered as the formal limit of a Stieltjes transform. Upon analytically calculating the Stieltjes transform one can obtain an exact expression for the finite part of the divergent integral and hence for the average lifetime.
Title IX & Discrimination Complaint Form (including sexual harassment)
Walker, Lawrence R.
Office of Diversity. Although the university cannot commit to keeping a complaint of discrimination confidential, … the process for filing or investigating complaints of discrimination (including sexual harassment). Note …
Explosion at Louisa (including Morrison Old) Colliery, Durham
Yates, R.
MINISTRY OF FUEL AND POWER - EXPLOSION AT LOUISA (including MORRISON OLD) COLLIERY, DURHAM REPORT On the Causes of, and Circumstances attending, the Explosion which occurred at Louisa (including Morrison Old) Colliery, ...
Coordinated joint motion control system with position error correction
Danko, George (Reno, NV)
2011-11-22
Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.
Hou, Zhangshuan; Makarov, Yuri V.; Samaan, Nader A.; Etingov, Pavel V.
2013-03-19
Given the multi-scale variability and uncertainty of wind generation and forecast errors, it is a natural choice to use a time-frequency representation (TFR) as a view of the corresponding time series over both time and frequency. Here we use the wavelet transform (WT) to expand the signal in terms of wavelet functions which are localized in both time and frequency. Each WT component is more stationary and has a consistent auto-correlation pattern. We combined wavelet analyses with time series forecast approaches such as ARIMA, and tested the approach at three wind farms located far away from each other. The prediction capability is satisfactory: the day-ahead prediction of errors matches the original error values very well, including the patterns, and the observations are well located within the predictive intervals. Integrating our wavelet-ARIMA ('stochastic') model with the weather forecast model ('deterministic') will significantly improve our ability to predict wind power generation and reduce predictive uncertainty.
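The decomposition step can be sketched with a one-level Haar transform, the simplest wavelet: it splits a series into a smoother approximation and a detail component, each of which can then be modeled separately and recombined exactly. This is a generic sketch, not the authors' wavelet-ARIMA pipeline, and the toy error series is invented:

```python
import math

def haar_step(series):
    # One level of the Haar wavelet transform on an even-length series:
    # pairwise scaled averages (approximation) and differences (detail).
    assert len(series) % 2 == 0
    approx = [(a + b) / math.sqrt(2) for a, b in zip(series[::2], series[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(series[::2], series[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    # Exact reconstruction of the original series from the two components.
    out = []
    for s, d in zip(approx, detail):
        out.append((s + d) / math.sqrt(2))
        out.append((s - d) / math.sqrt(2))
    return out

if __name__ == "__main__":
    errors = [1.0, 3.0, 2.0, 6.0, 5.0, 5.0, 4.0, 0.0]  # toy forecast errors
    a, d = haar_step(errors)
    print(haar_inverse(a, d))
```

In the paper's scheme, each component would be forecast with its own ARIMA model before the inverse transform recombines the predictions.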
Assessing the U.S. Senate Vote on the Corporate Average Fuel Economy (CAFE) Standard
Preston, Scott
Automakers classify cars as light trucks to "bend" the restrictions set by the standard. … By having a vehicle reclassified as a light truck, Subaru was able to add weight to the vehicle without making expenditures … Kerry proposed raising the Corporate Average Fuel Economy (CAFE) standard for cars and trucks. On March …
Modeling tidal flow in the Great Bay Estuary, New Hampshire, using a depth averaged …
… University of New Hampshire, USA. 2 Numerical Methods Lab., Dartmouth College, USA. 3 Ocean Process Analysis Lab., University of New Hampshire, USA. Abstract: Current, sea level and bed load transport …
High Average Power Operation of a Scraper-Outcoupled Free-Electron Laser
Michelle D. Shinn; Chris Behre; Stephen Vincent Benson; Michael Bevins; Don Bullard; James Coleman; L. Dillon-Townes; Tom Elliott; Joe Gubeli; David Hardy; Kevin Jordan; Ronald Lassiter; George Neil; Shukui Zhang
2004-08-01
We describe the design, construction, and operation of a high average power free-electron laser using scraper outcoupling. Using the FEL in this all-reflective configuration, we achieved approximately 2 kW of stable output at 10 μm. Measurements of gain, loss, and output mode will be compared with our models.
Average-case analysis of perfect sorting by reversals Mathilde Bouvel
Boyer, Edmond
Perfect sorting by reversals, a problem from computational genomics, is the process of sorting a signed permutation to either the identity or to the reversed identity. We perform an average-case analysis of a sorting algorithm from computational genomics via generating-function analysis of a family of trees. Motivation: a computational genomics problem …
POLYMER END-GROUP ANALYSIS: THE DETERMINATION OF AVERAGE MOLECULAR WEIGHT
Weston, Ken
Background reading: Skoog, West, Holler and Crouch, 7th ed., Chap. 14. Introduction: Polymers are a special … in this experiment, or may be of different types. Polymers are very important in biological systems. For example …
Climate Projections Using Bayesian Model Averaging and Space-Time Dependence
Haran, Murali
Climate Projections Using Bayesian Model Averaging and Space-Time Dependence. K. Sham Bhat, Murali Haran, Adam Terando, and Klaus Keller. Abstract: Projections of future climatic changes are a key input to the design of climate change mitigation and adaptation strategies. Current climate change projections
C. K. Sinclair; P. A. Adderley; B. M. Dunham; J. C. Hansknecht; P. Hartmann; M. Poelker; J. S. Price; P. M. Rutt; W. J. Schneider; M. Steigerwald
2007-02-01
Substantially more than half of the electromagnetic nuclear physics experiments conducted at the Continuous Electron Beam Accelerator Facility of the Thomas Jefferson National Accelerator Facility (Jefferson Laboratory) require highly polarized electron beams, often at high average current. Spin-polarized electrons are produced by photoemission from various GaAs-based semiconductor photocathodes, using circularly polarized laser light with photon energy slightly larger than the semiconductor band gap. The photocathodes are prepared by activation of the clean semiconductor surface to negative electron affinity using cesium and oxidation. Historically, in many laboratories worldwide, these photocathodes have had short operational lifetimes at high average current, and have often deteriorated fairly quickly in ultrahigh vacuum even without electron beam delivery. At Jefferson Lab, we have developed a polarized electron source in which the photocathodes degrade exceptionally slowly without electron emission, and in which ion back bombardment is the predominant mechanism limiting the operational lifetime of the cathodes during electron emission. We have reproducibly obtained cathode 1/e dark lifetimes over two years, and 1/e charge density and charge lifetimes during electron beam delivery of over 2×10⁵ C/cm² and 200 C, respectively. This source is able to support uninterrupted high average current polarized beam delivery to three experimental halls simultaneously for many months at a time. Many of the techniques we report here are directly applicable to the development of GaAs photoemission electron guns to deliver high average current, high brightness unpolarized beams.
Averaging method for nonlinear laminar Ekman layers
Lautrup, Benny
Under consideration for publication in J. Fluid Mech. Copenhagen Ø, Denmark (received October 10, 2002). We study laminar Ekman boundary layers in rotating systems, using an averaging method to describe laminar and turbulent boundary layers in rotating fluids. They used a self...
Power dissipation and time-averaged pressure in oscillating flow through a sudden area change
Smith, Barton L.
Barton L. Smith, Mechanical and Aerospace Engineering Department, Utah State University, Logan, Utah 84322. Abrupt changes in geometry are ubiquitous in Stirling engines, thermoacoustics, and respiratory flows. A time-averaged pressure gradient has been used to counteract streaming flows in a thermoacoustic Stirling refrigerator [1].
Self-guided enhanced sampling methods for thermodynamic averages. Ioan Andricioaei
Dinner, Aaron
Such systems have energetic and entropic barriers that are higher than the thermal energy at the temperatures of interest. In the self-guided molecular dynamics (SGMD) simulation method... (Received 2002; accepted 22 October 2002.)
Averaging out Inhomogeneous Newtonian Cosmologies: I. Fluid Mechanics and the Navier-Stokes Equation
Roustam Zalaletdinov
2002-12-18
The basic concepts and equations of classical fluid mechanics are presented in the form necessary for the formulation of Newtonian cosmology and for derivation and analysis of a system of the averaged Navier-Stokes-Poisson equations. A special attention is paid to the analytic formulation of the definitions and equations of moving fluids and to their physical content.
Micro-engineered first wall tungsten armor for high average power laser fusion energy systems
Ghoniem, Nasr M.
Micro-engineered first wall tungsten armor for high average power laser fusion energy systems. The HAPL program is a coordinated effort to develop laser inertial fusion energy [1]; its first stage is developing an inertial fusion energy demonstration power reactor with a solid first wall chamber.
Bias Correction and Bayesian Model Averaging for Ensemble Forecasts of Surface Wind Direction
Raftery, Adrian
Bias Correction and Bayesian Model Averaging for Ensemble Forecasts of Surface Wind Direction. This work proposes an effective bias correction technique for wind direction forecasts from numerical weather prediction models. These techniques are applied to 48-h forecasts of surface wind direction over the Pacific...
Real-valued average consensus over noisy quantized channels Andrea Censi Richard M. Murray
Murray, Richard M.
The result relies on a mechanism which can be interpreted as a self-inhibitory action: the average of the nodes of the graph converges, and this can be proved by employing elementary techniques of LTI systems analysis. Yet we do not have, in our control-systems toolbox, design methods that can work on this computational model...
Averages along polynomial sequences in discrete nilpotent groups: singular Radon transforms
Ionescu, Alexandru D; Wainger, Stephen
2012-01-01
We consider a class of operators defined by taking averages along polynomial sequences in discrete nilpotent groups. In this paper we prove $L^2$ boundedness of discrete singular Radon transforms along general polynomial sequences in discrete nilpotent groups of step 2.
Seasonal Variation in Monthly Average Air Change Rates Using Passive Tracer Gas Measurements
Hansen, René Rydhof
Great efforts are made to make buildings energy efficient, while less attention has been paid to IAQ since the 1970s; insufficient venting of indoor air pollutants and indoor air pollution sources remains a concern.
Efficient computation of robust average in wireless sensor networks using compressive sensing
New South Wales, University of
Uses compressive sensing. Instead of sending a block of sensor readings to the data fusion centre, each sensor sends some of the projections (which we will refer to as the compressed data) to the data fusion centre. At the data fusion centre, an estimate of the robust average of the original sensor readings is computed. This means that the data fusion centre will only need...
Enhanced interleaved partitioning PTS for peak-to-average power ratio reduction in OFDM systems
G. Lu, P. Wu and C. Carlemalm-Logothetis
An enhanced IP-PTS is proposed that can be used to produce fully independent candidates, so that IP-PTS can achieve similar performance... The independence of the candidates generated in the existing...
Widen, Joakim; Waeckelgaard, Ewa; Paatero, Jukka; Lund, Peter
2010-03-15
The trend of increasing application of distributed generation with solar photovoltaics (PV-DG) suggests that a widespread integration in existing low-voltage (LV) grids is possible in the future. With massive integration in LV grids, a major concern is the possible negative impacts of excess power injection from on-site generation. For power-flow simulations of such grid impacts, an important consideration is the time resolution of demand and generation data. This paper investigates the impact of time averaging on high-resolution data series of domestic electricity demand and PV-DG output and on voltages in a simulated LV grid. Effects of 10-minutely and hourly averaging on descriptive statistics and duration curves were determined. Although time averaging has a considerable impact on statistical properties of the demand in individual households, the impact is smaller on aggregate demand, already smoothed from random coincidence, and on PV-DG output. Consequently, the statistical distribution of simulated grid voltages was also robust against time averaging. The overall judgement is that statistical investigation of voltage variations in the presence of PV-DG does not require higher resolution than hourly. (author)
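The effect of time averaging described above can be illustrated with a minimal sketch (not the paper's data or code; the demand series and spike model are invented for illustration): averaging a 1-minute series into 10-minute and hourly means preserves the mean but shrinks the variability, which is why hourly resolution can suffice for voltage statistics.

```python
# Minimal sketch: block-averaging a hypothetical 1-minute household
# demand series (kW) and comparing the spread at each resolution.
import random

random.seed(0)

# Hypothetical 1-minute demand: 0.5 kW base load plus occasional 2 kW spikes.
minute_data = [0.5 + (2.0 if random.random() < 0.05 else 0.0) for _ in range(1440)]

def block_average(series, block):
    """Average consecutive, non-overlapping blocks of `block` samples."""
    return [sum(series[i:i + block]) / block
            for i in range(0, len(series), block)]

ten_min = block_average(minute_data, 10)   # 144 ten-minute means
hourly = block_average(minute_data, 60)    # 24 hourly means

def std(xs):
    """Population standard deviation."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Averaging preserves the mean but reduces the variability.
print(round(std(minute_data), 3), round(std(ten_min), 3), round(std(hourly), 3))
```

The mean is identical at all three resolutions; only the spread (and hence the extremes seen by a power-flow simulation) shrinks.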
Wefelmeyer, Wolfgang
Moving average processes. By Anton Schick and Wolfgang Wefelmeyer, Binghamton University and University of Cologne. Supported in part by NSF Grant DMS 0072174. First-order moving averages of functions u1(X1) + · · · + um(Xm) at a point; Schick and Wefelmeyer (2004b) obtain functional central limit...
Estimation of Average Switching Activity in Combinational Logic Circuits Using Symbolic Simulation
Devadas, Srinivas
In hand-held mobile telephones, low-power dissipation may be the tightest constraint in the design. More generally, for a survey of power estimation methods the reader is referred to [?]. Our work on switching activity estimation...
Averaged dynamics of two-phase media in a vibration field Arthur V. Straubea
Straube, Arthur V.
...to astronomic scales. Vibration is a mechanical oscillatory process with an amplitude that is small compared to the size of the system; the characteristic time of the system is much larger than the period of the oscillation. Vibration mechanics has been studied for a long time. (Arthur V. Straube, Department of Physics.)
Seminario de Estadística e Investigación Operativa "Tree, web and average web value for
Tradacete, Pedro
Seminario de Estadística e Investigación Operativa, "Tree, web and average web value for cycle...". Solution concepts, called web values, are introduced axiomatically, each one with respect to some specific..., with recursive algorithms to calculate them. Additionally, the efficiency and stability of web values are studied.
Asymptotic scaling corrections in QCD with Wilson fermions from the 3-loop average plaquette
B. Alles; A. Feo; H. Panagopoulos
1998-01-23
We calculate the 3-loop perturbative expansion of the average plaquette in lattice QCD with N_f massive Wilson fermions and gauge group SU(N). The corrections to asymptotic scaling in the corresponding energy scheme are also evaluated. We have also improved the accuracy of the already known pure gluonic results at 2 and 3 loops.
Effects of nuclear structure on average angular momentum in subbarrier fusion
A. B. Balantekin; J. R. Bennett; S. Kuyucak
1994-07-21
We investigate the effects of nuclear quadrupole and hexadecapole couplings on the average angular momentum in sub-barrier fusion reactions. This quantity could provide a probe for nuclear shapes, distinguishing between prolate vs. oblate quadrupole and positive vs. negative hexadecapole couplings. We describe the data in the O + Sm system and discuss heavier systems where shape effects become more pronounced.
Dellamonica, D.; Luo, G.; Ding, G.
2014-06-01
Purpose: Setup errors on the order of millimeters may cause under-dosing of targets and significant changes in dose to critical structures, especially when planning with tight margins in stereotactic radiosurgery. This study evaluates the effects of these types of patient positioning uncertainties on planning target volume (PTV) coverage and cochlear dose for stereotactic treatments of acoustic neuromas. Methods: Twelve acoustic neuroma patient treatment plans were retrospectively evaluated in Brainlab iPlan RT Dose 4.1.3. All treatment beams were shaped by HDMLC from a Varian TX machine. Seven patients had planning margins of 2mm, five had 1–1.5mm. Six treatment plans were created for each patient simulating a 1mm setup error in six possible directions: anterior-posterior, lateral, and superior-inferior. The arcs and HDMLC shapes were kept the same for each plan. Change in PTV coverage and mean dose to the cochlea was evaluated for each plan. Results: The average change in PTV coverage for the 72 simulated plans was −1.7% (range: −5 to +1.1%). The largest average change in coverage was observed for shifts in the patient's superior direction (−2.9%). The change in mean cochlear dose was highly dependent upon the direction of the shift. Shifts in the anterior and superior direction resulted in an average increase in dose of 13.5 and 3.8%, respectively, while shifts in the posterior and inferior direction resulted in an average decrease in dose of 17.9 and 10.2%. The average change in dose to the cochlea was 13.9% (range: 1.4 to 48.6%). No difference was observed based on the size of the planning margin. Conclusion: This study indicates that if the positioning uncertainty is kept within 1mm the setup errors may not result in significant under-dosing of the acoustic neuroma target volumes. However, the change in mean cochlear dose is highly dependent upon the direction of the shift.
Libby, J.; Malde, S.; Powell, A.; Wilkinson, G.; Asner, David M.; Bonvicini, Giovanni; Briere, R. A.; Gershon, T.; Naik, P.; Pedlar, Todd K.; Rademacker, J.; Ricciardi, S.; Thomas, C.
2014-07-14
New determination of the D0 → K−π+π0 and D0 → K−π+π+π− coherence factors and average strong-phase differences
Time-averaged quantum dynamics and the validity of the effective Hamiltonian
Office of Scientific and Technical Information (OSTI)
Optical pattern recognition architecture implementing the mean-square error correlation algorithm
Molley, Perry A. (Albuquerque, NM)
1991-01-01
An optical architecture implementing the mean-square error correlation algorithm, MSE = Σ[I − R]², for discriminating the presence of a reference image R in an input image scene I by computing the mean-square error between a time-varying reference image signal s₁(t) and a time-varying input image signal s₂(t), includes a laser diode light source which is temporally modulated by a double-sideband suppressed-carrier source modulation signal I₁(t) having the form I₁(t) = A₁[1 + √2 m₁ s₁(t) cos(2π f₀ t)], and the modulated light output from the laser diode source is diffracted by an acousto-optic deflector. The resultant intensity of the +1 diffracted order from the acousto-optic device is given by I₂(t) = A₂[1 + 2m₂² s₂²(t) − 2√2 m₂ s₂(t) cos(2π f₀ t)]. The time integration of the two signals I₁(t) and I₂(t) on the CCD detector plane produces the mean-square-error result R(τ) having the form R(τ) = A₁A₂{[T] + [2m₂²·∫s₂²(t − τ)dt] − [2m₁m₂ cos(2π f₀ τ)·∫s₁(t)s₂(t − τ)dt]}, where s₁(t) is the signal input to the diode modulation source; s₂(t) is the signal input to the AOD modulation source; A₁ is the light intensity; A₂ is the diffraction efficiency; m₁ and m₂ are constants that determine the signal-to-bias ratio; f₀ is the frequency offset between the oscillator at f_c and the modulation at f_c + f₀; and a₀ and a₁ are constants chosen to bias the diode source and the acousto-optic deflector into their respective linear operating regions, so that the diode source exhibits a linear intensity characteristic and the AOD exhibits a linear amplitude characteristic.
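The optical correlator above evaluates the mean-square error between the reference and every shift of the input scene; the minimum of MSE(τ) marks where the reference appears. A minimal numerical sketch of that same computation (purely digital; the signals and function name are illustrative, not from the patent):

```python
# Numerical analogue of the MSE correlation MSE(tau) = sum[(s1(t) - s2(t - tau))^2]:
# slide the reference over the scene and record the squared error at each shift.
def mse_correlation(s1, s2):
    """Return the MSE between s1 and each aligned window of s2 (valid shifts only)."""
    n = len(s1)
    return [sum((a - b) ** 2 for a, b in zip(s1, s2[tau:tau + n]))
            for tau in range(len(s2) - n + 1)]

reference = [1, 2, 3]
scene = [0, 0, 1, 2, 3, 0]
errors = mse_correlation(reference, scene)

# The minimum of MSE(tau) marks where the reference appears in the scene.
best_tau = errors.index(min(errors))
print(best_tau)  # -> 2
```

Expanding the square gives exactly the three terms integrated on the CCD plane in R(τ): a constant energy term, the shifted-input energy, and the cross-correlation.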
The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors
Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter [Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2, Canada and Department of Physics and Astronomy, University of Calgary, 2500 University Drive North West, Calgary, Alberta T2N 1N4 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4 (Canada) and Department of Oncology, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada)
2010-07-15
Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
Kraan, Aafke C., E-mail: aafke.kraan@pi.infn.it [Erasmus MC Daniel den Hoed Cancer Center, Rotterdam (Netherlands); Water, Steven van de; Teguh, David N.; Al-Mamgani, Abrahim [Erasmus MC Daniel den Hoed Cancer Center, Rotterdam (Netherlands); Madden, Tom; Kooy, Hanne M. [Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States); Heijmen, Ben J.M.; Hoogeman, Mischa S. [Erasmus MC Daniel den Hoed Cancer Center, Rotterdam (Netherlands)
2013-12-01
Purpose: Setup, range, and anatomical uncertainties influence the dose delivered with intensity modulated proton therapy (IMPT), but clinical quantification of these errors for oropharyngeal cancer is lacking. We quantified these factors and investigated treatment fidelity, that is, robustness, as influenced by adaptive planning and by applying more beam directions. Methods and Materials: We used an in-house treatment planning system with multicriteria optimization of pencil beam energies, directions, and weights to create treatment plans for 3-, 5-, and 7-beam directions for 10 oropharyngeal cancer patients. The dose prescription was a simultaneously integrated boost scheme, prescribing 66 Gy to primary tumor and positive neck levels (clinical target volume-66 Gy; CTV-66 Gy) and 54 Gy to elective neck levels (CTV-54 Gy). Doses were recalculated in 3700 simulations of setup, range, and anatomical uncertainties. Repeat computed tomography (CT) scans were used to evaluate an adaptive planning strategy using nonrigid registration for dose accumulation. Results: For the recalculated 3-beam plans including all treatment uncertainty sources, only 69% (CTV-66 Gy) and 88% (CTV-54 Gy) of the simulations had a dose received by 98% of the target volume (D98%) >95% of the prescription dose. Doses to organs at risk (OARs) showed considerable spread around planned values. Causes for major deviations were mixed. Adaptive planning based on repeat imaging positively affected dose delivery accuracy: in the presence of the other errors, percentages of treatments with D98% >95% increased to 96% (CTV-66 Gy) and 100% (CTV-54 Gy). Plans with more beam directions were not more robust. Conclusions: For oropharyngeal cancer patients, treatment uncertainties can result in significant differences between planned and delivered IMPT doses. 
Given the mixed causes for major deviations, we advise repeat diagnostic CT scans during treatment, recalculation of the dose, and if required, adaptive planning to improve adequate IMPT dose delivery.
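The robustness criterion used above, D98% > 95% of prescription, can be sketched numerically (a hedged illustration under an assumed convention: D98% taken as the 2nd-percentile dose over target voxels; the voxel doses are invented, not patient data):

```python
# Minimal sketch of the D98% target-coverage metric: the dose received by
# at least 98% of target voxels, checked against 95% of the prescription.
def d98(doses):
    """Dose received by 98% of the voxels, i.e. the 2nd-percentile dose."""
    ordered = sorted(doses)
    idx = int(0.02 * len(ordered))
    return ordered[idx]

prescription = 66.0  # Gy, as for CTV-66 Gy above
simulated_doses = [66.0] * 95 + [60.0] * 5  # 5% of voxels underdosed

meets_criterion = d98(simulated_doses) > 0.95 * prescription
print(meets_criterion)  # -> False (this simulated plan fails the robustness check)
```

With 5% of voxels underdosed, D98% drops to the underdosed level and the plan fails the criterion, mirroring how setup and range errors push simulated plans below the 95% threshold.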
SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors
Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I
2014-06-01
Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6MV using a Millenium120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per image pair, along with 560 positive tests (with error) using randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) according to whether the percentage of passing pixels fell below a threshold; this was repeated for errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some combination of analysis thresholds. Results: 20 mm² errors with intensity altered by ±20% could be reliably detected, as could 10 mm² errors with intensity altered by ±50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though may be detected by visual inspection. This work was not funded by Varian Oncology Systems. Some authors have other work partly funded by Varian Oncology Systems.
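The 3%/3mm gamma analysis used above can be sketched in simplified 1-D form (clinical tools operate on 2-D/3-D dose grids and handle low-dose thresholds; this toy version, with invented profiles, only illustrates the pass/fail logic): a point passes if its gamma value is at most 1.

```python
# Simplified 1-D gamma analysis: each measured point searches the predicted
# profile for the best compromise between dose difference (dd, fraction of
# the maximum dose) and spatial distance-to-agreement (dta_mm).
def gamma_1d(predicted, measured, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """Return per-point gamma values for two equally sampled dose profiles."""
    gammas = []
    for i, d_meas in enumerate(measured):
        best = float("inf")
        for j, d_pred in enumerate(predicted):
            dist = (i - j) * spacing_mm
            dose_diff = (d_meas - d_pred) / (dd * max(predicted))
            best = min(best, (dist / dta_mm) ** 2 + dose_diff ** 2)
        gammas.append(best ** 0.5)
    return gammas

predicted = [1.0] * 10
measured = [1.0] * 10
measured[5] = 1.10  # simulate a +10% local delivery error

gammas = gamma_1d(predicted, measured)
pass_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
print(pass_rate)  # 9 of 10 points pass; only the errored point fails
```

Classifying the field as errored when the pass rate drops below a chosen threshold is the detection step whose reliability the ROC analysis quantifies.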
What To Include In The Whistleblower Complaint? | National Nuclear...
National Nuclear Security Administration (NNSA)
Introduction to Small-Scale Photovoltaic Systems (Including RETScreen Case Study) (Webinar)
Including Retro-Commissioning in Federal Energy Savings Performance...
Broader source: Energy.gov (indexed) [DOE]
Numerical simulations for low energy nuclear reactions including direct channels to validate statistical models
Office of Scientific and Technical Information (OSTI)
U-182: Microsoft Windows Includes Some Invalid Certificates
Broader source: Energy.gov [DOE]
The operating system includes some invalid intermediate certificates. The vulnerability is due to the certificate authorities and not the operating system itself.
Contagious error sources would need time travel to prevent quantum computation
Gil Kalai; Greg Kuperberg
2015-05-07
We consider an error model for quantum computing that consists of "contagious quantum germs" that can infect every output qubit when at least one input qubit is infected. Once a germ actively causes error, it continues to cause error indefinitely for every qubit it infects, with arbitrary quantum entanglement and correlation. Although this error model looks much worse than quasi-independent error, we show that it reduces to quasi-independent error with the technique of quantum teleportation. The construction, which was previously described by Knill, is that every quantum circuit can be converted to a mixed circuit with bounded quantum depth. We also consider the restriction of bounded quantum depth from the point of view of quantum complexity classes.
New Hampshire, University of
slice whole wheat toast with 1 Tbsp. peanut butter; 1/2 large banana; 2 slices whole wheat toast with 4 tsp...; 1 small fresh apple, sliced; 1 Tbsp. peanut butter; 2 graham crackers; 1 Tbsp. peanut butter. *Menu item
Renormalization, averaging, conservation laws and AdS (in)stability
Ben Craps; Oleg Evnin; Joris Vanhoof
2015-01-19
We continue our analytic investigations of non-linear spherically symmetric perturbations around the anti-de Sitter background in gravity-scalar field systems, and focus on conservation laws restricting the (perturbatively) slow drift of energy between the different normal modes due to non-linearities. We discover two conservation laws in addition to the energy conservation previously discussed in relation to AdS instability. A similar set of three conservation laws was previously noted for a self-interacting scalar field in a non-dynamical AdS background, and we highlight the similarities of this system to the fully dynamical case of gravitational instability. The nature of these conservation laws is best understood through an appeal to averaging methods which allow one to derive an effective Lagrangian or Hamiltonian description of the slow energy transfer between the normal modes. The conservation laws in question then follow from explicit symmetries of this averaged effective theory.
Gatling gun: high average polarized current injector for eRHIC
Litvinenko, V.N.
2010-01-01
This idea was originally developed in 2001 for, at that time, an ERL-based (and later recirculating-ring) electron-ion collider at JLab. Naturally, the same idea is applicable for any gun requiring current exceeding the capability of a single cathode. ERL-based eRHIC is one such case. This note, related to eRHIC, was prepared at Duke University in February 2003. In many cases photo-injectors can have a limited average current; this is especially true of polarized photo-guns. It is known that eRHIC requires an average polarized electron current well above that currently demonstrated by photo-injectors; hence combining currents from multiple guns can be a useful option for eRHIC.
Orbit-averaged guiding-center Fokker-Planck operator for numerical applications
Decker, J.; Peysson, Y.; Duthoit, F.-X. [IRFM, CEA, F-13108 Saint-Paul-lez-Durance (France); Brizard, A. J. [Department of Chemistry and Physics, Saint Michael's College, Colchester, Vermont 05439 (United States)
2010-11-15
A guiding-center Fokker-Planck operator is derived in a coordinate system that is well suited for the implementation in a numerical code. This differential operator is transformed such that it can commute with the orbit-averaging operation. Thus, in the low-collisionality approximation, a three-dimensional Fokker-Planck evolution equation for the orbit-averaged distribution function in a space of invariants is obtained. This transformation is applied to a collision operator with nonuniform isotropic field particles. Explicit neoclassical collisional transport diffusion and convection coefficients are derived, and analytical expressions are obtained in the thin orbit approximation. To illustrate this formalism and validate our results, the bootstrap current is analytically calculated in the Lorentz limit.
PSERC 97-12 "Thermal Unit Commitment Including Optimal AC Power Flow Constraints"
Carlos Murillo-Sánchez; Robert
A new algorithm for unit commitment that employs a Lagrange relaxation technique with a new augmentation...
Summer Conference Participant Registration Fee: $200 Includes the following
Tullos, Desiree
Summer Conference Participant Registration Fee: $200. Includes the following: lodging for Wednesday; ... on Wednesday, Thursday, and Friday; brunch on Saturday; Summer Conference T-shirt; class materials. Congress-only participants ... (although they are encouraged to attend the entire conference). This fee includes the following...
Solar Energy Education. Reader, Part II. Sun story. [Includes glossary
Not Available
1981-05-01
Magazine articles which focus on the subject of solar energy are presented. The booklet prepared is the second of a four part series of the Solar Energy Reader. Excerpts from the magazines include the history of solar energy, mythology and tales, and selected poetry on the sun. A glossary of energy related terms is included. (BCS)
Energy Transitions: A Systems Approach Including Marcellus Shale Gas Development
Chen, Tsuhan
Energy Transitions: A Systems Approach Including Marcellus Shale Gas Development A Report Transitions: A Systems Approach Including Marcellus Shale Gas Development Executive Summary In the 21st the Marcellus shale In addition to the specific questions identified for the case of Marcellus shale gas in New
Aperiodic dynamical decoupling sequences in presence of pulse errors
Wang, Zhi-Hui
2011-01-01
Dynamical decoupling (DD) is a promising tool for preserving the quantum states of qubits. However, small imperfections in the control pulses can seriously affect the fidelity of decoupling, and qualitatively change the evolution of the controlled system at long times. Using both analytical and numerical tools, we theoretically investigate the effect of pulse error accumulation for two aperiodic DD sequences: the Uhrig DD (UDD) protocol [G. S. Uhrig, Phys. Rev. Lett. 98, 100504 (2007)] and the Quadratic DD (QDD) protocol [J. R. West, B. H. Fong and D. A. Lidar, Phys. Rev. Lett. 104, 130501 (2010)]. We consider the implementation of these sequences using the electron spins of phosphorus donors in silicon, where DD sequences are applied to suppress dephasing of the donor spins. The dependence of the decoupling fidelity on different initial states of the spins is the focus of our study. We investigate in detail the initial drop in the DD fidelity, and its long-term saturation. We also demonstrate...
Articles which include chevron film cooling holes, and related processes
Bunker, Ronald Scott; Lacy, Benjamin Paul
2014-12-09
An article is described, including an inner surface which can be exposed to a first fluid; an inlet; and an outer surface spaced from the inner surface, which can be exposed to a hotter second fluid. The article further includes at least one row or other pattern of passage holes. Each passage hole includes an inlet bore extending through the substrate from the inlet at the inner surface to a passage hole-exit proximate to the outer surface, with the inlet bore terminating in a chevron outlet adjacent the hole-exit. The chevron outlet includes a pair of wing troughs having a common surface region between them. The common surface region includes a valley which is adjacent the hole-exit; and a plateau adjacent the valley. The article can be an airfoil. Related methods for preparing the passage holes are also described.
Ick! The average person sheds 1.5 lbs of skin
Cantlon, Jessica F.
Be nice to your head. Fight frizz: tame dry hair. Pale, whitish nails could be a sign of anemia (low iron level in the blood). Ingrown toenails may... Most people lose 50-100 hairs per day. Hair grows an average of 9 inches per year.
Coupling of an average-atom model with a collisional-radiative equilibrium model
Faussurier, G.; Blancard, C.; Cossé, P.
2014-11-15
We present a method to combine a collisional-radiative equilibrium model and an average-atom model to calculate bound and free electron wavefunctions in hot dense plasmas by taking into account screening. This approach allows us to calculate electrical resistivity and thermal conductivity as well as pressure in non local thermodynamic equilibrium plasmas. Illustrations of the method are presented for dilute titanium plasma.
Near-UV to near-IR disk-averaged Earth's reflectance spectra
S. Hamdani; L. Arnold; C. Foellmi; J. Berthier; D. Briot; P. Francois; P. Riaud; J. Schneider
2005-10-20
We report 320 to 1020nm disk-averaged Earth reflectance spectra obtained from Moon's Earthshine observations with the EMMI spectrograph on the NTT at ESO La Silla (Chile). The spectral signatures of Earth atmosphere and ground vegetation are observed. A vegetation red-edge of up to 9% is observed on Europe and Africa and ~2% upon Pacific Ocean. The spectra also show that Earth is a blue planet when Rayleigh scattering dominates, or totally white when the cloud cover is large.
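The vegetation red-edge percentage reported above can be quantified from a reflectance spectrum with a simple band-ratio sketch (the band choices and the toy spectrum are illustrative assumptions, not the paper's exact definition or data): compare mean reflectance just above ~700 nm with that just below it.

```python
# Minimal sketch of a red-edge strength estimate: the relative jump in
# reflectance across the ~700 nm vegetation red edge.
def red_edge_strength(wavelengths_nm, reflectance):
    """Relative reflectance increase from ~660-690 nm to ~740-770 nm."""
    below = [r for w, r in zip(wavelengths_nm, reflectance) if 660 <= w <= 690]
    above = [r for w, r in zip(wavelengths_nm, reflectance) if 740 <= w <= 770]
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(above) - mean(below)) / mean(below)

# Toy disk-averaged spectrum: flat 0.30 reflectance with a 9% step above 700 nm,
# mimicking the ~9% red edge reported over Europe and Africa.
wl = list(range(320, 1021, 10))
refl = [0.30 if w < 700 else 0.30 * 1.09 for w in wl]
print(round(red_edge_strength(wl, refl), 2))  # -> 0.09
```

On real Earthshine spectra the step is superimposed on Rayleigh scattering and cloud reflectance, so the band averages must be taken after removing those smooth components.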
The Importance of Run-time Error Detection. Glenn R. Luecke
Luecke, Glenn R.
Iowa State University's High Performance Computing Group, Iowa State University, Ames, Iowa 50011, USA; ... for evaluating run-time error detection capabilities.
A Key Recovery Attack on an Error Correcting Code Based Lightweight Security Protocol
International Association for Cryptologic Research (IACR)
RFID technology has become prevalent in various fields; manufacturing, supply chain management and inventory control are some examples. Keywords: authentication, error correcting coding, lightweight, privacy, RFID, security.
Ulidowski, Irek
Eccentricity Error Correction for Automated Estimation of Polyethylene Wear after Total Hip... Wire markers are typically attached to the polyethylene acetabular component of the prosthesis so...
Choose and choose again: appearance-reality errors, pragmatics and logical ability
Deák, Gedeon O; Enright, Brian
2006-01-01
Development, 62, 753–766. Speer, J.R. (1984). Two practical... ...older still make errors (e.g. Speer, 1984), some preschool...
Neutron Soft Errors in Xilinx FPGAs at Lawrence Berkeley National Laboratory
George, Jeffrey S.
2008-01-01
“Quasi-Monoenergetic Neutron Beam from Deuteron Breakup”... ...in experiments of atmospheric neutron effects on deep sub-...
Threshold analysis with fault-tolerant operations for nonbinary quantum error correcting codes
Kanungo, Aparna
2005-11-01
an expression to compute the gate error threshold for nonbinary quantum codes and test this result for different classes of codes, to get codes with best threshold results....
From the Lab to the real world: sources of error in UF{sub 6} gas enrichment monitoring
Lombardi, Marcie L.
2012-03-01
Safeguarding uranium enrichment facilities is a serious concern for the International Atomic Energy Agency (IAEA). Safeguards methods have changed over the years, most recently switching to an improved safeguards model that calls for new technologies to help keep up with the increasing size and complexity of today's gas centrifuge enrichment plants (GCEPs). One of the primary goals of the IAEA is to detect the production of uranium at levels greater than those an enrichment facility may have declared. In order to accomplish this goal, new enrichment monitors need to be as accurate as possible. This dissertation will look at the Advanced Enrichment Monitor (AEM), a new enrichment monitor designed at Los Alamos National Laboratory. Specifically explored are various factors that could potentially contribute to errors in a final enrichment determination delivered by the AEM. There are many factors that can cause errors in the determination of uranium hexafluoride (UF{sub 6}) gas enrichment, especially during the period when the enrichment is being measured in an operating GCEP. To measure enrichment using the AEM, a passive 186-keV (kiloelectronvolt) measurement is used to determine the {sup 235}U content in the gas, and a transmission measurement or a gas pressure reading is used to determine the total uranium content. A transmission spectrum is generated using an x-ray tube and a "notch" filter. In this dissertation, changes that could occur in the detection efficiency and the transmission errors that could result from variations in pipe-wall thickness will be explored. Additional factors that could contribute to errors in the enrichment measurement will also be examined, including changes in the gas pressure, ambient and UF{sub 6} temperature, instrumental errors, and the effects of uranium deposits on the inside of the pipe walls. The sensitivity of the enrichment calculation to these various parameters will then be evaluated.
Previously, UF{sub 6} gas enrichment monitors have required empty pipe measurements to accurately determine the pipe attenuation (the pipe attenuation is typically much larger than the attenuation in the gas). This dissertation reports on a method for determining the thickness of a pipe in a GCEP when obtaining an empty pipe measurement may not be feasible. This dissertation studies each of the components that may add to the final error in the enrichment measurement, and the factors that were taken into account to mitigate these issues are also detailed and tested. The use of an x-ray generator as a transmission source and the attending stability issues are addressed. Both analytical calculations and experimental measurements have been used. For completeness, some real-world analysis results from the URENCO Capenhurst enrichment plant have been included, where the final enrichment error has remained well below 1% for approximately two months.
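The ratio at the heart of the measurement described above (a passive 186-keV signal for the {sup 235}U content, a pressure reading for the total uranium) can be illustrated with a toy calculation. This is a hypothetical sketch, not the AEM's actual algorithm; the calibration constant and all input values are invented for illustration.

```python
# Illustrative enrichment ratio: the passive 186-keV gamma count rate is
# taken as proportional to the U-235 partial density, and the gas pressure
# gives the total uranium density via the ideal-gas law. All constants and
# names here are assumptions, not AEM values.
def u235_density_from_gamma(count_rate_cps, calib_cps_per_g_cm3):
    """Infer the U-235 partial density (g/cm^3) from the 186-keV rate."""
    return count_rate_cps / calib_cps_per_g_cm3

def total_u_density_from_pressure(pressure_pa, temperature_k):
    """Total uranium density (g/cm^3) in UF6 gas, ideal-gas approximation."""
    R = 8.314      # J/(mol K)
    M_U = 238.0    # grams of uranium per mole of UF6 (approximate)
    mol_per_m3 = pressure_pa / (R * temperature_k)
    return mol_per_m3 * M_U / 1e6  # convert g/m^3 -> g/cm^3

def enrichment_percent(count_rate_cps, calib, pressure_pa, temperature_k):
    """Enrichment = U-235 density / total uranium density, in percent."""
    n235 = u235_density_from_gamma(count_rate_cps, calib)
    ntot = total_u_density_from_pressure(pressure_pa, temperature_k)
    return 100.0 * n235 / ntot
```

The sketch makes the sensitivities explicit: an error in the pressure or temperature reading propagates directly into the inferred enrichment, which is exactly the kind of dependence the dissertation's error analysis quantifies.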
On the Asymptotic Analysis of Average Interference Power Generated by a Wireless Sensor Network
Yanikomeroglu, Halim
...was supported by Saudi Aramco, Dhahran, Saudi Arabia. ...GHz at six locations including the New York City [5]...
Including Retro-Commissioning in Federal Energy Savings Performance...
the cost of the survey. Developing a detailed scope of work and a fixed price for this work is important to eliminate risk to the Agency and the ESCo. Including a detailed scope...
T-603: Mac OS X Includes Some Invalid Comodo Certificates
Broader source: Energy.gov [DOE]
The operating system includes some invalid certificates. The vulnerability is due to the invalid certificates and not the operating system itself. Other browsers, applications, and operating systems are affected.
FINITE ELEMENT ANALYSIS OF STEEL WELDED COVERPLATE INCLUDING COMPOSITE DOUBLERS
Petri, Brad
2008-05-15
With the increasing focus on welded bridge members resulting in crack initiation and propagation, there is a large demand for creative solutions. One of these solutions includes the application of composite doublers over ...
Title 16 USC 818 Public Lands Included in Project - Reservation...
of Lands From Entry. OpenEI Reference Library, Legal Document - Statute: Title 16 USC 818 Public Lands Included in Project...
Including costs of supply chain risk in strategic sourcing decisions
Jain, Avani
2009-01-01
Cost evaluations do not always include the costs associated with risks when organizations make strategic sourcing decisions. This research was conducted to establish and quantify the impact of risks and risk-related costs ...
Hybrid powertrain system including smooth shifting automated transmission
Beaty, Kevin D.; Nellums, Richard A.
2006-10-24
A powertrain system is provided that includes a prime mover and a change-gear transmission having an input, at least two gear ratios, and an output. The powertrain system also includes a power shunt configured to route power applied to the transmission by one of the input and the output to the other one of the input and the output. A transmission system and a method for facilitating shifting of a transmission system are also provided.
Limited Personal Use of Government Office Equipment including Information Technology
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
2005-01-07
The Order establishes requirements and assigns responsibilities for employees' limited personal use of Government resources (office equipment and other resources including information technology) within DOE, including NNSA. The Order is required to provide guidance on appropriate and inappropriate uses of Government resources. This Order was certified 04/23/2009 as accurate and continues to be relevant and appropriate for use by the Department. Certified 4-23-09. No cancellation.
Wolthaus, J. W. H.; Sonke, J.-J.; Herk, M. van; Damen, E. M. F. [Department of Radiation Oncology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands)
2008-09-15
Purpose: lower lobe lung tumors move with amplitudes of up to 2 cm due to respiration. To reduce respiration imaging artifacts in planning CT scans, 4D imaging techniques are used. Currently, we use a single (midventilation) frame of the 4D data set for clinical delineation of structures and radiotherapy planning. A single frame, however, often contains artifacts due to breathing irregularities, and is noisier than a conventional CT scan since the exposure per frame is lower. Moreover, the tumor may be displaced from the mean tumor position due to hysteresis. The aim of this work is to develop a framework for the acquisition of a good quality scan representing all scanned anatomy in the mean position by averaging transformed (deformed) CT frames, i.e., canceling out motion. A nonrigid registration method is necessary since motion varies over the lung. Methods and Materials: 4D and inspiration breath-hold (BH) CT scans were acquired for 13 patients. An iterative multiscale motion estimation technique was applied to the 4D CT scan, similar to optical flow but using image phase (gray-value transitions from bright to dark and vice versa) instead. From the (4D) deformation vector field (DVF) derived, the local mean position in the respiratory cycle was computed and the 4D DVF was modified to deform all structures of the original 4D CT scan to this mean position. A 3D midposition (MidP) CT scan was then obtained by (arithmetic or median) averaging of the deformed 4D CT scan. Image registration accuracy, tumor shape deviation with respect to the BH CT scan, and noise were determined to evaluate the image fidelity of the MidP CT scan and the performance of the technique. Results: Accuracy of the used deformable image registration method was comparable to established automated locally rigid registration and to manual landmark registration (average difference to both methods <0.5 mm for all directions) for the tumor region. 
From visual assessment, the registration was good for the clearly visible features (e.g., tumor and diaphragm). The shape of the tumor, with respect to that of the BH CT scan, was better represented by the MidP reconstructions than any of the 4D CT frames (including MidV; reduction of 'shape differences' was 66%). The MidP scans contained about one-third the noise of individual 4D CT scan frames. Conclusions: We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method to create a midposition CT scan (time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced. Tumor shape and position in the MidP CT scan represents that of the BH CT scan better than MidV CT scan and, therefore, was found to be appropriate for treatment planning.
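The roughly one-third noise figure reported for the MidP scan is consistent with simple averaging statistics: the mean of N independently noisy, already-registered frames has about 1/sqrt(N) of the single-frame noise, and a 10-phase 4D scan gives 1/sqrt(10) ~ 0.32. A toy numerical check with synthetic noise (all values illustrative, not patient data):

```python
# Toy check: averaging 10 registered "frames" of one voxel region reduces
# the noise standard deviation by roughly 1/sqrt(10). Values are synthetic.
import random, statistics

random.seed(0)
N_FRAMES, N_VOXELS = 10, 2000
truth = 100.0  # underlying intensity of one voxel region

# Each frame is the truth plus independent Gaussian noise (sigma = 20).
frames = [[truth + random.gauss(0.0, 20.0) for _ in range(N_VOXELS)]
          for _ in range(N_FRAMES)]

# Arithmetic average across frames (the MidP step after deformation).
midp = [sum(f[v] for f in frames) / N_FRAMES for v in range(N_VOXELS)]

noise_single = statistics.stdev(frames[0])
noise_midp = statistics.stdev(midp)
ratio = noise_midp / noise_single  # expected near 1/sqrt(10) ~ 0.32
```

The real gain of course depends on the registration being accurate; misregistration would blur structure rather than just average noise.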
Nelson, C.; Mahoney, A.R.
1986-06-01
A significant drop in production efficiency has occurred over time at the Solar One facility at Barstow, California, primarily as a result of the degradation of the Pyromark Series 2500 black paint used as the absorptive coating on the receiver panels. As part of the investigation of the problem, the solar-averaged absorptance properties of the paint were determined as a function of vitrification temperature, since it is known that a significant amount of the panel surface area at Solar One was vitrified at temperatures below those recommended by the paint manufacturer (540/sup 0/C, 1000/sup 0/F). Painted samples initially vitrified at 230/sup 0/C (450/sup 0/F), 315/sup 0/C (600/sup 0/F), 371/sup 0/C (700/sup 0/F), and 480/sup 0/C (900/sup 0/F) exhibited significantly lower solar-averaged absorptance values (0.02 absorptance units) compared to samples vitrified at 540/sup 0/C (1000/sup 0/F). Thus, Solar One began its service life below optimal levels. After 140 h of thermal aging at 370/sup 0/C (700/sup 0/F) and 540/sup 0/C (1000/sup 0/F), all samples, regardless of their initial vitrification temperature, attained the same solar-averaged absorptance value (alpha/sub s/ = 0.973). Therefore, both long-term low-temperature vitrification and short-term high-temperature vitrification can be used to obtain optimal or near-optimal absorptance of solar flux. Further thermal aging of vitrified samples did not result in paint degradation, clearly indicating that high solar flux is required to produce this phenomenon. The panels at Solar One never achieved optimal absorptance because their exposure to high solar flux negated the effect of long-term low-temperature vitrification during operation. On future central receiver projects, every effort should be made to properly vitrify the Pyromark coating before its exposure to high flux conditions.
Evaluating specific error characteristics of microwave-derived cloud liquid water products
Christopher, Sundar A.
...of cloud LWP products globally using concurrent data from visible/infrared satellite sensors. The approach... ...microwave satellite measurements. Using coincident visible/infrared satellite data, errors are isolated...
A nonideal error-field response model for strongly shaped tokamak plasmas R. Fitzpatrick
Fitzpatrick, Richard
A nonideal error-field response model for strongly shaped tokamak plasmas, R. Fitzpatrick. Citation: ...of a rotating tokamak plasma to a resonant error-field, Phys. Plasmas 21, 092513 (2014); 10.1063/1.4896244. Kinetic description of rotating tokamak plasmas with anisotropic temperatures in the collisionless regime...
Upper Bounds on Error-Correcting Runlength-Limited Block Codes
Ytrehus, Øyvind
...Inf. Th., May 1991, pp. 941-945. Abstract: Upper bounds are derived on the number of codewords... ...-limited codes, error-correction. This work was supported by the Norwegian Research Council for Science... ...on the size of (d; k)-constrained, simple-error-correcting block codes. There are two directions in which one...
Finite Element Approximation of the Acoustic Wave Equation: Error Control and Mesh
Bangerth, Wolfgang
Finite Element Approximation of the Acoustic Wave Equation: Error Control and Mesh Adaptation. Wolfgang Bangerth and Rolf Rannacher... ...@iwr.uni-heidelberg.de. Abstract: We present an approach to solving the acoustic wave equation by adaptive finite element methods...
Potential Hydraulic Modelling Errors Associated with Rheological Data Extrapolation in Laminar Flow
Shadday, Martin A., Jr.
1997-03-20
The potential errors associated with the modelling of flows of non-Newtonian slurries through pipes, due to inadequate rheological models and extrapolation outside of the ranges of data bases, are demonstrated. The behaviors of both dilatant and pseudoplastic fluids with yield stresses, and the errors associated with treating them as Bingham plastics, are investigated.
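The extrapolation hazard described above can be made concrete with a toy example: a pseudoplastic fluid with a yield stress (Herschel-Bulkley, tau = tau_y + K*gdot^n with n < 1) fitted as a Bingham plastic (tau = tau_y + mu*gdot) over a narrow shear-rate window. All parameter values below are invented for illustration, not taken from the report.

```python
# Illustrative sketch: a Herschel-Bulkley fluid "fitted" as a Bingham
# plastic over a limited shear-rate range; the two models agree inside the
# fitted range but diverge badly when extrapolated. Parameters are made up.
def herschel_bulkley(gdot, tau_y=5.0, K=2.0, n=0.5):
    """Shear stress of a pseudoplastic fluid with a yield stress."""
    return tau_y + K * gdot**n

def bingham(gdot, tau_y, mu):
    """Shear stress of a Bingham plastic."""
    return tau_y + mu * gdot

# Crude two-point "fit" of the Bingham model at the ends of a narrow
# measured shear-rate range (a least-squares fit behaves similarly).
g_lo, g_hi = 10.0, 100.0
mu_fit = (herschel_bulkley(g_hi) - herschel_bulkley(g_lo)) / (g_hi - g_lo)
tau_y_fit = herschel_bulkley(g_lo) - mu_fit * g_lo

# Inside the fitted range the models agree reasonably; extrapolated to a
# much higher shear rate, the Bingham prediction overshoots badly.
err_inside = abs(bingham(50.0, tau_y_fit, mu_fit) - herschel_bulkley(50.0))
err_outside = abs(bingham(1000.0, tau_y_fit, mu_fit) - herschel_bulkley(1000.0))
```

With these made-up numbers the extrapolated error is more than an order of magnitude larger than the in-range error, which is the qualitative failure mode the report warns about.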
Low-voltage, low-power, low switching error, class-AB switched current memory cell
Serdijn, Wouter A.
Low-voltage, low-power, low switching error, class-AB switched current memory cell. C. Sawigun and W... ...into two components by a low-voltage class-AB current splitter and subsequently processes the individual signals by two low switching error class-A memory cells. As a consequence, the output current obtained...
Using system simulation to model the impact of human error in a maritime system
van Dorp, Johan René
...the modeling of human error related accident event sequences in a risk assessment of maritime oil... A framework was developed for the Prince William Sound Risk Assessment based on interviews with maritime... Keywords: ...William Sound; human error; maritime accidents; expert judgement; risk assessment; risk management.
Convergence Analysis of the LMS Algorithm with a General Error Nonlinearity and an IID Input
Al-Naffouri, Tareq Y.
Convergence Analysis of the LMS Algorithm with a General Error Nonlinearity and an IID Input. Tareq... ...of Electrical Eng. Abstract: The class of least mean square (LMS) algorithms employing a general error... ...are entirely consistent with those of the LMS algorithm and several of its variants. The results also...
Al-Naffouri, Tareq Y.
The Optimum Error Nonlinearity in LMS Adaptation with an Independent and Identically Distributed... CA 94305; Dhahran 31261, Saudi Arabia. Abstract: The class of LMS algorithms employing a general... ...view of error nonlinearities in LMS adaptation. In particular, it subsumes two recently developed...
Outage Probability for Free-Space Optical Systems Over Slow Fading Channels With Pointing Errors
Hranilovic, Steve
Outage Probability for Free-Space Optical Systems Over Slow Fading Channels With Pointing Errors... Canada. Email: farid@grads.ece.mcmaster.ca, hranilovic@mcmaster.ca. Abstract: We investigate the outage... ...errors. An expression for the outage probability is derived and we show that optimizing the transmitted...
Object calculus and the object-oriented analysis and design of an error-sensitive GIS
Duckham, Matt
Object calculus and the object-oriented analysis and design of an error-sensitive GIS. MATT DUCKHAM... Abstract: The use of object-oriented analysis and design (OOAD) in GIS research... ...of the key contemporary issues in GIS. This paper examines the application of one particular OO formalism...
State preservation by repetitive error detection in a superconducting quantum circuit, J. Kelly...
Martinis, John M.
State preservation by repetitive error detection in a superconducting quantum circuit, J. Kelly... ...and superconducting circuits [11-13] have demonstrated multi-qubit states that are first-order tolerant to one type of error. Recently, experiments with ion traps and superconducting circuits have shown the simultaneous de...
Mitigating FPGA Interconnect Soft Errors by In-Place LUT Inversion
He, Lei
...power and performance. Recent logic re-synthesis techniques, such as ROSE [2], IPR [3], IPD [4] and R2... Naifeng Jing, Ju-Yueh Lee... ...the Soft Error Rate (SER) at chip level, and reveal a locality and NP-Hardness of the IPV problem. We...
An Energy-Aware Fault Tolerant Scheduling Framework for Soft Error Resilient Cloud Computing Systems
Pedram, Massoud
An Energy-Aware Fault Tolerant Scheduling Framework for Soft Error Resilient Cloud Computing... ...has drastically increased their susceptibility to soft errors. ...outputs or system crash. At the grand scale of cloud computing, this problem can only worsen [2, 3, 4, 5]...
A test for systematic errors in 40...
Min, Kyoungwon
...dating arise from uncertainties in the 40K decay constants and K/Ar isotopic data for neutron fluence monitors (standards). The activity data underlying the decay constants used in geochronology since 1977... These studies have shown that systematic errors outweigh typical analytical errors by at least one order...
TYPOGRAPHICAL AND ORTHOGRAPHICAL SPELLING ERROR... Kyongho Min, William H. Wilson, Yoo-Jin Moon
Wilson, Bill
School of Computer Science and Engineering, The University of New South Wales, Sydney NSW 2052... ...of spelling errors such as typographical (Damerau, 1964; Pollock and Zamora, 1983), orthographical (Sterling)... ...and orthographical errors in spontaneous writings of children (Sterling, 1983; Mitton, 1987). 1.2. Approaches...
A Case for Soft Error Detection and Correction in Computational Chemistry
van Dam, Hubertus JJ; Vishnu, Abhinav; De Jong, Wibe A.
2013-09-10
High performance computing platforms are expected to deliver 10{sup 18} floating point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
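The detection-and-correction idea for data structures can be sketched generically: guard each block of floating-point data with a checksum and a redundant copy, then scrub periodically to catch silent corruption. This is an illustrative stand-in under those assumptions, not the authors' Hartree-Fock implementation.

```python
# Generic soft-error guard for a block of floats: a CRC32 checksum detects
# silent corruption, and a redundant copy allows repair. Illustrative only.
import struct
import zlib

def checksum(block):
    """CRC32 of the block's raw double-precision bytes."""
    return zlib.crc32(struct.pack(f"{len(block)}d", *block))

def protect(block):
    """Wrap a block with its checksum and a redundant copy."""
    return {"data": list(block), "crc": checksum(block), "copy": list(block)}

def scrub(p):
    """Detect corruption via the checksum; repair from the copy.

    Returns True if a corruption was detected (and repaired), else False.
    """
    if checksum(p["data"]) != p["crc"]:
        p["data"] = list(p["copy"])
        return True
    return False
```

In an iterative solver, a scrub pass between iterations would turn a large silent error, the kind the paper finds the solver cannot absorb, into a recoverable event at the cost of extra memory and checksum time.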
Measurement and Analysis of the Error Characteristics of an In-Building Wireless Network
Steenkiste, Peter
...on fiber or electrical connections have excellent error characteristics but that wireless networks... David... {davide,prs}@cs.cmu.edu. Abstract: There is general belief that networks based on wireless technologies...
A Non-Stationary Errors-in-Variables Method with Application to Mineral Exploration
Braslavsky, Julio H.
A Non-Stationary Errors-in-Variables Method with Application to Mineral Exploration. K. Lau, J. H... ...-cancellation in transient electromagnetic mineral exploration. Alternative methods for noise cancellation in these systems... ...for this class of systems is proposed and applied to a problem arising in mineral exploration. An errors...
Presenting JECA: A Java Error Correcting Algorithm for the Java Intelligent Tutoring System
Franek, Frantisek
Presenting JECA: A Java Error Correcting Algorithm for the Java Intelligent Tutoring System. Edward... ...context involving small Java programs. Furthermore, this paper presents JECA (Java Error Correction... ...is to provide a foundation for the Java Intelligent Tutoring System (JITS) currently being field-tested. Key...
A POSTERIORI ERROR ANALYSIS OF THE LINKED INTERPOLATION TECHNIQUE FOR PLATE BENDING PROBLEMS
Lovadina, Carlo
A POSTERIORI ERROR ANALYSIS OF THE LINKED INTERPOLATION TECHNIQUE FOR PLATE BENDING PROBLEMS. CARLO... ...'Linked Interpolation Technique' to approximate the solution of plate bending problems. We show that the proposed... 1. Introduction. In this paper we present an a posteriori error analysis for the so-called 'Linked...
Integrated Control-Path Design and Error Recovery in the Synthesis of Digital
Chakrabarty, Krishnendu
Integrated Control-Path Design and Error Recovery in the Synthesis of Digital Microfluidic Lab... ...that incorporates control paths and an error-recovery mechanism in the design of a digital microfluidic lab... ...compared to a baseline chip design, the biochip with a control path can reduce the completion time by 30...
Impact of Turbulence Closures and Numerical Errors for the Optimization of Flow Control Devices
Paris-Sud XI, Université de
Impact of Turbulence Closures and Numerical Errors for the Optimization of Flow Control Devices. J... ...the use of a Kriging-based global optimization method to determine optimal control parameters... ...conduct an optimization process and measure the impact of numerical and modeling errors on the optimal...
ERROR BOUNDS FOR MONOTONE APPROXIMATION SCHEMES FOR HAMILTON-JACOBI-BELLMAN EQUATIONS
Guy Barles; Espen R. Jakobsen
Abstract. We obtain error bounds for monotone approximation schemes of Hamilton-Jacobi... ...(almost) smooth supersolutions for the Hamilton-Jacobi-Bellman equation. 1. Introduction. This paper...
AN ADAPTIVE METHOD WITH RIGOROUS ERROR CONTROL FOR THE HAMILTON-JACOBI EQUATIONS. PART II: THE TWO...
...adaptive method with rigorous error control for the Hamilton-Jacobi equations. Part II: The two... ...and study an adaptive method for finding approximations to the viscosity solution of Hamilton-Jacobi...
PROBABILITY OF ERROR FOR TRAINED UNITARY SPACE-TIME MODULATION OVER A
Swindlehurst, A. Lee
PROBABILITY OF ERROR FOR TRAINED UNITARY SPACE-TIME MODULATION OVER A GAUSS-INNOVATIONS RICIAN... ...probability of error for trained unitary space-time modulation over channels with a constant specular... ...trained modulation, assuming that the channel is constant between training periods. All of the above...
Characterization of the Impact of Indoor Doppler Errors on Pedestrian Dead Reckoning
Calgary, University of
Characterization of the Impact of Indoor Doppler Errors on Pedestrian Dead Reckoning. Valérie..., University of Calgary, 2500 University Drive NW, Calgary, Alberta, Canada, T2N 1N4. Abstract: Indoor pedestrian... ...on a Pedestrian Dead Reckoning (PDR) navigation filter is investigated. Doppler errors are simulated using...
IEEE SENSORS JOURNAL, VOL. 3, NO. 5, OCTOBER 2003 595 Active Structural Error Suppression in MEMS
Chen, Zhongping
...-run perturbations are presented. Index Terms: error suppression, microelectromechanical systems (MEMS), rate integrating gyroscopes, smart MEMS. I. INTRODUCTION. As microelectromechanical systems (MEMS) inertial sensors...
Average Price (Cents/kilowatthour) by State by Provider, 1990-2014
U.S. Energy Information Administration (EIA) Indexed Site
Compact formulas for bounce/transit averaging in axisymmetric tokamak geometry
Duthoit, F -X; Hahm, T S
2014-01-01
Compact formulas for bounce and transit orbit averaging of the fluctuation-amplitude eikonal factor in axisymmetric tokamak geometry, which is frequently encountered in the bounce-gyrokinetic description of microturbulence, are given in terms of the Jacobi elliptic functions and elliptic integrals. These formulas are readily applicable to the calculation of the neoclassical susceptibility in the framework of modern bounce-gyrokinetic theory. In the long-wavelength limit, we recover the expression for the Rosenbluth-Hinton residual zonal flow [Rosenbluth and Hinton, Phys. Rev. Lett. 80, 724 (1998)] accurately.
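As a small illustration of the special functions these formulas rely on (a generic numerical recipe, not the paper's actual expressions): the complete elliptic integral of the first kind K(k) can be evaluated with the arithmetic-geometric mean, since K(k) = pi / (2 * AGM(1, sqrt(1 - k^2))).

```python
# Evaluate the complete elliptic integral of the first kind K(k) via the
# arithmetic-geometric mean (AGM); the iteration converges quadratically.
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b (a, b > 0)."""
    while abs(a - b) > tol * abs(a):
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return a

def ellipk(k):
    """Complete elliptic integral of the first kind, modulus k, 0 <= k < 1."""
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))
```

For k = 0 this reduces to pi/2, and for k = 1/sqrt(2) it reproduces the classical value K = Gamma(1/4)^2 / (4 sqrt(pi)) ≈ 1.85407.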
Properties of a new average power Nd-doped phosphate laser glass
Payne, S.A.; Marshall, C.D.; Bayramian, A.J.; Wilke, G.D.; Hayden, J.S.
1995-03-09
The Nd-doped phosphate laser glass described herein can withstand 2.3 times greater thermal loading without fracture, compared to APG-1 (commercially-available average-power glass from Schott Glass Technologies). The enhanced thermal loading capability is established on the basis of the intrinsic thermomechanical properties and by direct thermally-induced fracture experiments using Ar-ion laser heating of the samples. This Nd-doped phosphate glass (referred to as APG-t) is found to be characterized by a 29% lower gain cross section and a 25% longer low-concentration emission lifetime.
Laser properties of an improved average-power Nd-doped phosphate glass
Payne, S.A.; Marshall, C.D.; Bayramian, A.J.
1995-03-15
The Nd-doped phosphate laser glass described herein can withstand 2.3 times greater thermal loading without fracture, compared to APG-1 (commercially-available average-power glass from Schott Glass Technologies). The enhanced thermal loading capability is established on the basis of the intrinsic thermomechanical properties (expansion, conduction, fracture toughness, and Young's modulus), and by direct thermally-induced fracture experiments using Ar-ion laser heating of the samples. This Nd-doped phosphate glass (referred to as APG-t) is found to be characterized by a 29% lower gain cross section and a 25% longer low-concentration emission lifetime.
Average and recommended half-life values for two neutrino double beta decay: upgrade'05
A. S. Barabash
2006-02-17
All existing ``positive'' results on two neutrino double beta decay in different nuclei were analyzed. Using the procedure recommended by the Particle Data Group, weighted average values for half-lives of $^{48}$Ca, $^{76}$Ge, $^{82}$Se, $^{96}$Zr, $^{100}$Mo, $^{100}$Mo - $^{100}$Ru ($0^+_1$), $^{116}$Cd, $^{150}$Nd, $^{150}$Nd - $^{150}$Sm ($0^+_1$) and $^{238}$U were obtained. Existing geochemical data were analyzed and recommended values for half-lives of $^{128}$Te, $^{130}$Te and $^{130}$Ba are proposed. We recommend the use of these results as presently the most precise and reliable values for half-lives.
Average and recommended half-life values for two neutrino double beta decay: upgrade-09
A. S. Barabash
2009-08-28
All existing ``positive'' results on two neutrino double beta decay in different nuclei were analyzed. Using the procedure recommended by the Particle Data Group, weighted average values for half-lives of $^{48}$Ca, $^{76}$Ge, $^{82}$Se, $^{96}$Zr, $^{100}$Mo, $^{100}$Mo - $^{100}$Ru ($0^+_1$), $^{116}$Cd, $^{130}$Te, $^{150}$Nd, $^{150}$Nd - $^{150}$Sm ($0^+_1$) and $^{238}$U were obtained. Existing geochemical data were analyzed and recommended values for half-lives of $^{128}$Te, $^{130}$Te and $^{130}$Ba are proposed. We recommend the use of these results as presently the most precise and reliable values for half-lives.
Average (RECOMMENDED) Half-Life Values for Two Neutrino Double Beta Decay
A. S. Barabash
2002-03-01
All existing "positive" results on two neutrino double beta decay in different nuclei were analyzed. Using the procedure recommended by the Particle Data Group, weighted average values for half-lives of $^{48}$Ca, $^{76}$Ge, $^{82}$Se, $^{96}$Zr, $^{100}$Mo, $^{100}$Mo - $^{100}$Ru ($0^+_1$), $^{116}$Cd, $^{150}$Nd and $^{238}$U were obtained. Existing geochemical data were analyzed and recommended values for half-lives of $^{128}$Te and $^{130}$Te are proposed. We recommend the use of these results as presently the most precise and reliable values for half-lives.
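The Particle Data Group averaging procedure invoked in these abstracts is, in outline, an inverse-variance weighted mean whose uncertainty is inflated by a scale factor S = sqrt(chi2/(N-1)) when the inputs are mutually inconsistent (S > 1). A minimal sketch with invented input values:

```python
# Inverse-variance weighted mean with the PDG-style scale factor: when the
# measurements scatter more than their quoted errors allow (S > 1), the
# uncertainty of the average is inflated by S. Input values are illustrative.
import math

def pdg_average(values, errors):
    """Return (mean, sigma, scale) for independent measurements +- errors."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * x for w, x in zip(weights, values)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    chi2 = sum(w * (x - mean) ** 2 for w, x in zip(weights, values))
    scale = math.sqrt(chi2 / (len(values) - 1)) if len(values) > 1 else 1.0
    if scale > 1.0:
        sigma *= scale
    return mean, sigma, scale
```

For two equal-error measurements of 9 and 11 (errors 1), the mean is 10, chi2 = 2 gives S = sqrt(2) > 1, and the naive uncertainty 1/sqrt(2) is inflated back to 1.0.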
R. J. van den Hoogen
2009-09-01
A formalism for analyzing the complete set of field equations describing Macroscopic Gravity is presented. Using this formalism, a cosmological solution to the Macroscopic Gravity equations is determined. It is found that if a particular segment of the connection correlation tensor is zero and if the macroscopic geometry is described by a flat Robertson-Walker metric, then the effective correction to the averaged Einstein Field equations of General Relativity i.e., the backreaction, is equivalent to a positive spatial curvature term. This investigation completes the analysis of [Phys. Rev. Lett., vol. 95, 151102, (2005)] and the formalism developed provides a possible basis for future studies.
Specification of optical components for a high average-power laser environment
Taylor, J.R.; Chow, R.; Rinmdahl, K.A.; Willis, J.B.; Wong, J.N.
1997-06-25
Optical component specifications for the high-average-power lasers and transport system used in the Atomic Vapor Laser Isotope Separation (AVLIS) plant must address demanding system performance requirements. The need for high performance optics has to be balanced against the practical desire to reduce the supply risks of cost and schedule. This is addressed in optical system design, careful planning with the optical industry, demonstration of plant quality parts, qualification of optical suppliers and processes, comprehensive procedures for evaluation and test, and a plan for corrective action.
Near-UV to near-IR disk-averaged Earth's spectra from Moon's Earthshine observations
S. Hamdani; L. Arnold; C. Foellmi; J. Berthier; D. Briot; P. Francois; P. Riaud; J. Schneider
2005-10-13
We discuss a series of Earthshine spectra obtained with the NTT/EMMI instrument between 320nm and 1020nm with a resolution of R~450 in the blue and R~250 in the red. These ascending and descending Moon's Earthshine spectra taken from Chile give disk-averaged spectra for two different Earth phases. The spectra show the ozone (Huggins and Chappuis bands), oxygen and water vapour absorption bands, and also the stronger Rayleigh scattering in the blue. Removing the known telluric absorptions reveals a spectral feature around 700nm which is attributed to the stronger reflectivity of vegetation in the near-IR, the so-called vegetation red-edge.