Variable Average Absolute Percent Differences
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Absolute Percent Error Based Fitness Functions for Evolving Forecast Models
Novobilski, Andy; Fernandez, Thomas
A strength of evolutionary computing as a method of data mining is its intrinsic ability to drive model selection according to a mixed set of criteria. Based on natural selection, evolutionary computing utilizes evaluation of candidate solutions...
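The fitness criterion named in the title can be illustrated with a plain mean absolute percent error (MAPE); a minimal sketch, not the paper's actual fitness function:

```python
def mape(actual, forecast):
    """Mean absolute percent error between two equal-length sequences."""
    n = len(actual)
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / n

# Three forecast points with 10%, 5%, and 5% absolute errors.
print(mape([100.0, 200.0, 400.0], [110.0, 190.0, 380.0]))  # (10 + 5 + 5) / 3 percent
```

A genetic program would minimize this value over candidate forecast models.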
Plasma dynamics and a significant error of macroscopic averaging
Marek A. Szalek
2005-05-22T23:59:59.000Z
The methods of macroscopic averaging used to derive the macroscopic Maxwell equations from electron theory are methodologically incorrect and lead in some cases to a substantial error. For instance, these methods do not take into account the existence of a macroscopic electromagnetic field EB, HB generated by carriers of electric charge moving in a thin layer adjacent to the boundary of the physical region containing these carriers. If this boundary is impenetrable for charged particles, then in its immediate vicinity all carriers are accelerated towards the inside of the region. The existence of the privileged direction of acceleration results in the generation of the macroscopic field EB, HB. The contributions to this field from individual accelerated particles are described with a sufficient accuracy by the Lienard-Wiechert formulas. In some cases the intensity of the field EB, HB is significant not only for deuteron plasma prepared for a controlled thermonuclear fusion reaction but also for electron plasma in conductors at room temperatures. The corrected procedures of macroscopic averaging will induce some changes in the present form of plasma dynamics equations. The modified equations will help to design improved systems of plasma confinement.
ASC Report No. 45/2012 A Numerical Study of Averaging Error
Melenk, Jens Markus
Averaging with polynomials of the same polynomial degree as the finite element solution leads to reliability and efficiency; averaging is a widely used method for gauging errors in finite element methods and steering adaptive mesh refinement.
A review of stability and error theory for collocation methods applied to linear boundary...
Tutz, M.
Wei, Shuangqing
...for power control and dynamic channel allocation in wireless communication systems. The design of power control algorithms that minimize the average transmitted power required to achieve a desired outage probability for the link is considered. A number of novel power control algorithms based...
Computing Solar Absolute Fluxes
Carlos Allende Prieto
2007-09-14T23:59:59.000Z
Computed color indices and spectral shapes for individual stars are routinely compared with observations for essentially all spectral types, but absolute fluxes are rarely tested. We can confront observed irradiances with the predictions from model atmospheres for a few stars with accurate angular diameter measurements, notably the Sun. Previous calculations have been hampered by inconsistencies and the use of outdated atomic data and abundances. I provide here a progress report on our current efforts to compute absolute fluxes for solar model photospheres. Uncertainties in the solar composition constitute a significant source of error in computing solar radiative fluxes.
Absolute calibration of optical flats
Sommargren, Gary E.
2005-04-05T23:59:59.000Z
The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.
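The two-measurement subtraction described above can be sketched numerically; a toy illustration with made-up phase maps (the real instrument extracts these phases from CCD interferograms, not from arrays like these):

```python
import numpy as np

# Hypothetical 2D phase-error maps (radians) over the detector grid.
rng = np.random.default_rng(0)
phase_flat_true = rng.normal(0.0, 0.05, (4, 4))  # error of the optic under test
phase_aux = rng.normal(0.0, 0.02, (4, 4))        # error of the auxiliary optic

phase_combined = phase_flat_true + phase_aux     # first measurement: flat + auxiliary
phase_aux_only = phase_aux                       # second measurement: flat removed

# Subtracting the second measurement from the first isolates the
# absolute phase error of the optical flat.
recovered = phase_combined - phase_aux_only
print(np.allclose(recovered, phase_flat_true))  # True
```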
Quantum Error Correction Workshop on
Grassl, Markus
Avoiding errors: a mathematical model via decomposition of the interaction algebra. Designed Hamiltonians, main idea: "perturb the system to make it more stable"; fast (local) control operations yield an average Hamiltonian with more symmetry (cf. techniques from NMR)...
Absolute nuclear material assay
Prasad, Manoj K. (Pleasanton, CA); Snyderman, Neal J. (Berkeley, CA); Rowland, Mark S. (Alamo, CA)
2012-05-15T23:59:59.000Z
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute nuclear material assay
Prasad, Manoj K. (Pleasanton, CA); Snyderman, Neal J. (Berkeley, CA); Rowland, Mark S. (Alamo, CA)
2010-07-13T23:59:59.000Z
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Julien M. E. Fraïsse; Daniel Braun
2015-04-13T23:59:59.000Z
We investigate in detail a recently introduced "coherent averaging scheme" in terms of its usefulness for achieving Heisenberg limited sensitivity in the measurement of different parameters. In the scheme, $N$ quantum probes in a product state interact with a quantum bus. Instead of measuring the probes directly and then averaging as in classical averaging, one measures the quantum bus or the entire system and tries to estimate the parameters from these measurement results. Combining analytical results from perturbation theory and an exactly solvable dephasing model with numerical simulations, we draw a detailed picture of the scaling of the best achievable sensitivity with $N$, the dependence on the initial state, the interaction strength, the part of the system measured, and the parameter under investigation.
Alcaraz Aunion, Jose Luis (IFAE, Barcelona)
2010-07-01T23:59:59.000Z
This thesis presents the measurement of the charged current quasi-elastic (CCQE) neutrino-nucleon cross section at neutrino energies around 1 GeV. This measurement has two main physical motivations. On one hand, neutrino-nucleon interactions at a few GeV lie in a region where existing old data are sparse and of low statistics; the current measurement populates low energy regions with higher statistics and precision than previous experiments. On the other hand, the CCQE interaction is the most useful interaction in neutrino oscillation experiments: the CCQE channel is used to measure the initial and final neutrino fluxes in order to determine the neutrino fraction that disappeared. Neutrino oscillation experiments work at low neutrino energies, so precise measurements of CCQE interactions are essential for flux measurements. The main goal of this thesis is to measure the absolute CCQE neutrino cross section from the SciBooNE data. The SciBar Booster Neutrino Experiment (SciBooNE) is a neutrino and anti-neutrino scattering experiment with a neutrino energy spectrum around 1 GeV. SciBooNE ran from June 8th 2007 to August 18th 2008, collecting a total of 2.65 x 10^20 protons on target (POT); this thesis uses the full neutrino-mode data set of 0.99 x 10^20 POT. A CCQE selection cut has been performed, achieving an approximately 70% pure CCQE sample. A fit method has been developed specifically to determine the absolute CCQE cross section, with results presented in a neutrino energy range from 0.2 to 2 GeV. The results are compatible with the NEUT predictions. The SciBooNE measurement has been compared with both carbon (MiniBooNE) and deuterium (ANL and BNL) target experiments, showing good agreement in both cases.
HERA TRANSVERSE POLARIMETER ABSOLUTE SCALE AND ERROR BY RISETIME CALIBRATION
Gharibyan, V. (Deutsches Elektronen-Synchrotron, Hamburg, Germany; Yerevan Physics Institute, Yerevan, Armenia); Schüler, K. P.
...the spin rotators at HERA... polarization evolves naturally through the spin flip driven by synchrotron radiation (the Sokolov-Ternov effect [8])...
Absolute neutrino mass measurements
Wolf, Joachim [Karlsruhe Institute of Technology (KIT), IEKP, Postfach 3640, 76021 Karlsruhe (Germany)
2011-10-06T23:59:59.000Z
The neutrino mass plays an important role in particle physics, astrophysics and cosmology. In recent years the detection of neutrino flavour oscillations proved that neutrinos carry mass. However, oscillation experiments are only sensitive to the mass-squared difference of the mass eigenvalues. In contrast to cosmological observations and neutrino-less double beta decay (0ν2β) searches, single β-decay experiments provide a direct, model-independent way to determine the absolute neutrino mass by measuring the energy spectrum of decay electrons at the endpoint region with high accuracy. Currently the best kinematic upper limits on the neutrino mass of 2.2 eV have been set by two experiments in Mainz and Troitsk, using tritium as beta emitter. The next-generation tritium β-decay experiment KATRIN is currently under construction in Karlsruhe, Germany by an international collaboration. KATRIN intends to improve the sensitivity by one order of magnitude to 0.2 eV. The investigation of a second isotope (187Re) is being pursued by the international MARE collaboration using micro-calorimeters to measure the beta spectrum. The technology needed to reach 0.2 eV sensitivity is still in the R&D phase. This paper reviews the present status of neutrino-mass measurements with cosmological data, 0ν2β decay and single β-decay.
Absolute Motion and Gravitational Effects
Reginald T Cahill
2003-06-29T23:59:59.000Z
The new Process Physics provides a new explanation of space as a quantum foam system in which gravity is an inhomogeneous flow of the quantum foam into matter. An analysis of various experiments demonstrates that absolute motion relative to space has been observed experimentally by Michelson and Morley, Miller, Illingworth, Torr and Kolen, and by DeWitte. The Dayton Miller and Roland DeWitte data also reveal the in-flow of space into matter which manifests as gravity. The in-flow also manifests turbulence and the experimental data confirms this as well, which amounts to the observation of a gravitational wave phenomenon. The Einstein assumptions leading to the Special and General Theory of Relativity are shown to be falsified by the extensive experimental data. Contrary to the Einstein assumptions absolute motion is consistent with relativistic effects, which are caused by actual dynamical effects of absolute motion through the quantum foam, so that it is Lorentzian relativity that is seen to be essentially correct.
Compressor performance, absolutely! M. R. Titchener
Titchener, Mark R.
...the absolute performance of existing string compressors may be measured. Kolmogorov (1958) recognised... The corpus at tcode.auckland.ac.nz/~corpus has been used to evaluate the `absolute' performance of a series of popular compressors. The results...
An absolute Johnson noise thermometer
Luca Callegaro; Vincenzo D'Elia; Marco Pisani; Alessio Pollarolo
2009-01-30T23:59:59.000Z
We developed an absolute Johnson noise thermometer (JNT), an instrument to measure the thermodynamic temperature of a sensing resistor, with traceability to voltage, resistance and frequency quantities. The temperature is measured in energy units, and can be converted to SI units (kelvin) with the accepted value of the Boltzmann constant kb; or, conversely, it can be employed to perform measurements at the triple point of water, and obtain a determination of kb. The thermometer is composed of a correlation spectrum analyzer and a calibrated noise source, both constructed around commercial mixed-signal boards. The calibrator generates a pseudorandom noise, by digital synthesis and amplitude scaling with inductive voltage dividers; the signal spectrum is a frequency comb covering the measurement bandwidth. JNT measurements at room temperature are compatible with those of a standard platinum resistance thermometer within the combined uncertainty of 60 ppm. A path towards future improvements of JNT accuracy is also sketched.
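The physics underlying any Johnson noise thermometer is the Nyquist relation <V^2> = 4 kB T R df; a minimal sketch of recovering temperature from a mean-square noise voltage (the numbers are illustrative, not from this instrument):

```python
# Infer thermodynamic temperature from Johnson noise via the Nyquist
# relation <V^2> = 4 * kB * T * R * df (valid in the classical regime kT >> hf).
KB = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def johnson_temperature(v2_mean, resistance, bandwidth):
    """Temperature in kelvin from the mean-square noise voltage (V^2)."""
    return v2_mean / (4.0 * KB * resistance * bandwidth)

# Illustrative: a 10 kOhm resistor at 300 K in a 100 kHz bandwidth.
v2 = 4.0 * KB * 300.0 * 1.0e4 * 1.0e5
print(johnson_temperature(v2, 1.0e4, 1.0e5))  # approximately 300 K
```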
Quantum nonequilibrium equalities with absolute irreversibility
Ken Funo; Yūto Murashita; Masahito Ueda
2015-03-30T23:59:59.000Z
We derive quantum nonequilibrium equalities in absolutely irreversible processes. Here by absolute irreversibility we mean that in the backward process the density matrix does not return to the subspace spanned by those eigenvectors that have nonzero weight in the initial density matrix. Since the initial state of a memory and the postmeasurement state of the system are usually restricted to a subspace, absolute irreversibility occurs during the measurement and feedback processes. An additional entropy produced in absolutely irreversible processes needs to be taken into account to derive nonequilibrium equalities. We discuss a model of feedback control on a qubit system to illustrate the obtained equalities. By introducing $N$ heat baths each composed of a qubit and letting them interact with the system, we show how the entropy reduction via feedback control can be converted into work. An explicit form of extractable work in the presence of absolute irreversibility is given.
Precision Absolute Beam Current Measurement of Low Power Electron Beam
Ali, M. M.; Bevins, M. E.; Degtiarenko, P.; Freyberger, A.; Krafft, G. A.
2012-11-01T23:59:59.000Z
Precise measurements of low power CW electron beam current for the Jefferson Lab Nuclear Physics program have been performed using a Tungsten calorimeter. This paper describes the rationale for the choice of the calorimeter technique, as well as the design and calibration of the device. The calorimeter is in use presently to provide a 1% absolute current measurement of CW electron beam with 50 to 500 nA of average beam current and 1-3 GeV beam energy. Results from these recent measurements will also be presented.
Absolute vs. intensity-based emission caps
Ellerman, A. Denny.
Cap-and-trade systems limit emissions to some pre-specified absolute quantity. Intensity-based limits, that restrict emissions to some pre-specified rate relative to input or output, are much more widely used in environmental ...
Emission trading with absolute and intensity caps
Song, Jaemin
2005-01-01T23:59:59.000Z
The Kyoto Protocol introduced emission trading to help reduce the cost of compliance for the Annex B countries that have absolute caps. However, we need to expand emission trading to cover developing countries in order ...
Organic Solar Cells: Absolute Measurement of Domain Composition and Nanoscale Size Distribution Explains Performance in Solar Cells
Seasonal Average Temperature - Hanford Site
An Investigation of the Absolute Proper Motions of the SCUSS Catalog
Peng, Xiyan; Wu, Zhenyu; Ma, Jun; Du, Cuihua; Zhou, Xu; Yu, Yong; Tang, Zhenghong; Jiang, Zhaoji; Zou, Hu; Fan, Zhou; Fan, Xiaohui; Smith, Martin C; Jiang, Linhua; Jing, Yipeng; Lattanzi, Mario G; Mclean, Brian J; Lesser, Michael; Nie, Jundan; Shen, Shiyin; Wang, Jiali; Zhang, Tianmeng; Zhou, Zhimin; Wang, Songhu
2015-01-01T23:59:59.000Z
Absolute proper motions for $\sim$ 7.7 million objects were derived based on data from the South Galactic Cap u-band Sky Survey (SCUSS) and astrometric data derived from uncompressed Digitized Sky Surveys that the Space Telescope Science Institute (STScI) created from the Palomar and UK Schmidt survey plates. We put a great deal of effort into correcting the position-, magnitude-, and color-dependent systematic errors in the derived absolute proper motions. The spectroscopically confirmed quasars were used to test the internal systematic and random error of the proper motions. The systematic errors of the overall proper motions in the SCUSS catalog are estimated as -0.08 and -0.06 mas/yr for $\mu_\alpha\cos\delta$ and $\mu_\delta$, respectively. The random errors of the proper motions in the SCUSS catalog are estimated independently as 4.2 and 4.4 mas/yr for $\mu_\alpha\cos\delta$ and $\mu_\delta$. There are no obvious position-, magnitude-, and color-dependent systematic errors of the SCUSS proper ...
Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))
1990-01-01T23:59:59.000Z
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
A Model of Absolute Autonomy and Power: Toward Group Effects
Hexmoor, Henry
We present a model that approximates absolute autonomy and power in agent systems...
Averaging Hypotheses in Newtonian Cosmology
T. Buchert
1995-12-20T23:59:59.000Z
Average properties of general inhomogeneous cosmological models are discussed in the Newtonian framework. It is shown under which circumstances the average flow reduces to a member of the standard Friedmann-Lemaître cosmologies. Possible choices of global boundary conditions of inhomogeneous cosmologies as well as consequences for the interpretation of cosmological parameters are put into perspective.
1 Absolute Calibration 1.1 Overview
...of fit. The quality of the resulting response matrix, and hence the absolute quantum efficiency... the plane of the electrons and other storage ring parameters: electron energy, ring current and magnetic field... to control and read out the CCDs. The chamber was mounted to the PTB beamline via a ceramic electro-isolator.
Multiverse Set Theory and Absolutely Undecidable Propositions
Väänänen, Jouko
Multiverse Set Theory and Absolutely Undecidable Propositions. Jouko Väänänen, University of Helsinki and University of Amsterdam. Contents: 1 Introduction; 2 Background; 3 The multiverse of sets; 3.1 The one universe case; 3.2 The multiverse...
SU-E-T-152: Error Sensitivity and Superiority of a Protocol for 3D IMRT Quality Assurance
Gueorguiev, G [Massachusetts General Hospital, Boston, MA (United States); University of Massachusetts Lowell, Lowell, MA (United States); Cotter, C; Turcotte, J; Sharp, G; Crawford, B [Massachusetts General Hospital, Boston, MA (United States); Mah'D, M [University of Massachusetts Lowell, Lowell, MA (United States)
2014-06-01T23:59:59.000Z
Purpose: To test whether the parameters included in our 3D QA protocol with current tolerance levels are able to detect certain errors, and to show the superiority of the 3D QA method over single ion chamber measurements and the 2D gamma test by detecting most of the introduced errors. The 3D QA protocol parameters are: TPS and measured average dose difference, a 3D gamma test with 3 mm DTA/3% test parameters, and the structure volume for which the TPS-predicted and measured absolute dose difference is greater than 6%. Methods: Two prostate and two thoracic step-and-shoot IMRT patients were investigated. The following errors were introduced to each original treatment plan: energy switched from 6 MV to 10 MV, linac jaws retracted to 15 cm x 15 cm, 1, 2, or 3 central MLC leaf pairs retracted behind the jaws, a single central MLC leaf put in or out of the treatment field, Monitor Units (MU) increased and decreased by 1 and 3%, collimator off by 5 and 15 degrees, detector shifted by 5 mm to the left and right, and gantry treatment angle off by 5 and 15 degrees. QA was performed on each plan using a single ion chamber, a 2D ion chamber array for 2D gamma analysis, and IBA's COMPASS system for 3D QA. Results: Of the three tested QA methods, the single ion chamber performs worst, not detecting subtle errors. 3D QA proves to be the superior method, detecting all of the introduced errors except the 10 MV and 1% MU changes and the MLC rotation (those errors were not detected by any of the tested QA methods). Conclusion: As the way radiation is delivered evolves, so must the QA. We believe a diverse set of 3D statistical parameters applied both to OAR and target plan structures provides the highest level of QA.
Absolute-magnitude distributions of supernovae
Richardson, Dean; Wright, John [Department of Physics, Xavier University of Louisiana, New Orleans, LA 70125 (United States); Jenkins III, Robert L. [Applied Physics Department, Richard Stockton College, Galloway, NJ 08205 (United States); Maddox, Larry, E-mail: drichar7@xula.edu [Department of Chemistry and Physics, Southeastern Louisiana University, Hammond, LA 70402 (United States)
2014-05-01T23:59:59.000Z
The absolute-magnitude distributions of seven supernova (SN) types are presented. The data used here were primarily taken from the Asiago Supernova Catalogue, but were supplemented with additional data. We accounted for both foreground and host-galaxy extinction. A bootstrap method is used to correct the samples for Malmquist bias. Separately, we generate volume-limited samples, restricted to events within 100 Mpc. We find that the superluminous events (M_B < -21) make up only about 0.1% of all SNe in the bias-corrected sample. The subluminous events (M_B > -15) make up about 3%. The normal Ia distribution was the brightest with a mean absolute blue magnitude of -19.25. The IIP distribution was the dimmest at -16.75.
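The bootstrap correction mentioned above resamples the observed events with replacement; a generic percentile-bootstrap sketch (not the paper's specific Malmquist-bias procedure, and the magnitudes below are invented):

```python
import random

def bootstrap_mean_ci(sample, n_boot=10_000, seed=0):
    """Percentile bootstrap: resample with replacement, 95% CI on the mean."""
    rng = random.Random(seed)
    n = len(sample)
    means = sorted(
        sum(rng.choice(sample) for _ in range(n)) / n for _ in range(n_boot)
    )
    return means[int(0.025 * n_boot)], means[int(0.975 * n_boot)]

# Invented absolute blue magnitudes for a handful of events.
mags = [-19.4, -19.1, -19.3, -18.9, -19.6, -19.2, -19.0, -19.25]
lo, hi = bootstrap_mean_ci(mags)
print(lo, hi)  # interval bracketing the sample mean of about -19.22
```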
Monte Carlo errors with less errors
Ulli Wolff
2006-11-29T23:59:59.000Z
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable to benchmark efficiencies of simulation algorithms with regard to specific observables of interest. A Matlab code is offered for download that implements the method. It can also combine independent runs (replica), allowing one to judge their consistency.
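The method described, summing the autocorrelation function into an integrated autocorrelation time, can be sketched as follows (a simplified fixed-window variant; the paper's Matlab code chooses the summation window automatically):

```python
import numpy as np

def tau_int(series, w):
    """Integrated autocorrelation time with rho summed up to a fixed window w
    (a simplified variant; Wolff's method picks the window automatically)."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    n = len(x)
    c0 = np.dot(x, x) / n
    rho = [np.dot(x[: n - t], x[t:]) / ((n - t) * c0) for t in range(1, w + 1)]
    return 0.5 + sum(rho)

# AR(1) test chain: x_t = a x_{t-1} + noise has tau_int = 0.5 (1 + a)/(1 - a).
rng = np.random.default_rng(1)
a, n = 0.8, 100_000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal()
print(tau_int(x, 100))  # expected near 0.5 * 1.8 / 0.2 = 4.5
```

The naive standard error of the mean is then inflated by sqrt(2 * tau_int) to account for correlations.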
Olson, Eric J.
2013-06-11T23:59:59.000Z
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
The 2009 World Average of $\alpha_s$
Siegfried Bethke
2009-08-15T23:59:59.000Z
Measurements of $\alpha_s$, the coupling strength of the Strong Interaction between quarks and gluons, are summarised and an updated value of the world average of $\alpha_s (M_Z)$ is derived. Building up on previous reviews, special emphasis is laid on the most recent determinations of $\alpha_s$. These are obtained from $\tau$-decays, from global fits of electroweak precision data and from measurements of the proton structure function $F_2$, which are based on perturbative QCD calculations up to $O(\alpha_s^4)$; from hadronic event shapes and jet production in $e^+e^-$ annihilation, based on $O(\alpha_s^3)$ QCD; from jet production in deep inelastic scattering and from $\Upsilon$ decays, based on $O(\alpha_s^2)$ QCD; and from heavy quarkonia based on unquenched QCD lattice calculations. Applying pragmatic methods to deal with possibly underestimated errors and/or unknown correlations, the world average value of $\alpha_s (M_Z)$ results in $\alpha_s (M_Z) = 0.1184 \pm 0.0007$. The measured values of $\alpha_s (Q)$, covering energy scales from $Q \equiv m_\tau = 1.78$ GeV to 209 GeV, exactly follow the energy dependence predicted by QCD and therefore significantly test the concept of Asymptotic Freedom.
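At its core, a world average is an inverse-variance weighted mean; a minimal sketch (the review additionally treats correlated and possibly underestimated errors, which this ignores, and the input values below are illustrative, not the actual 2009 data set):

```python
import math

def weighted_average(values, errors):
    """Inverse-variance weighted mean and its uncertainty."""
    weights = [1.0 / e**2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, math.sqrt(1.0 / wsum)

# Illustrative alpha_s(M_Z) determinations with their uncertainties.
vals = [0.1192, 0.1183, 0.1198, 0.1174]
errs = [0.0027, 0.0008, 0.0019, 0.0020]
mean, err = weighted_average(vals, errs)
print(f"{mean:.4f} +/- {err:.4f}")  # 0.1184 +/- 0.0007
```

Note how the most precise input dominates the combination, which is why correlated or underestimated errors need the special handling the review describes.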
The Frame Potential, on Average
Ingemar Bengtsson; Helena Granstrom
2008-10-24T23:59:59.000Z
A SIC consists of N^2 equiangular unit vectors in an N dimensional Hilbert space. The frame potential is a function of N^2 unit vectors. It has a unique global minimum if the vectors form a SIC, and this property has been made use of in numerical searches for SICs. When the vectors form an orbit of the Heisenberg group the frame potential becomes a function of a single fiducial vector. We analytically compute the average of this function over Hilbert space. We also compute averages when the fiducial vector is placed in certain special subspaces defined by the Clifford group.
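The frame potential discussed above is the sum of |<psi_j|psi_k>|^4 over all ordered pairs of the N^2 unit vectors; a small check using the known N = 2 SIC (the tetrahedron on the Bloch sphere), where the global minimum is 2 N^3/(N + 1) = 16/3:

```python
import numpy as np

def frame_potential(vectors):
    """Sum of |<v_j|v_k>|^4 over all ordered pairs (rows are unit vectors)."""
    gram = np.abs(vectors.conj() @ vectors.T) ** 4
    return float(gram.sum())

# Four unit vectors in dimension N = 2 with pairwise |<v_j|v_k>|^2 = 1/(N+1) = 1/3.
a = np.sqrt((3 + np.sqrt(3)) / 6)
b = np.sqrt((3 - np.sqrt(3)) / 6)
sic = np.array([
    [a, b],
    [a, -b],
    [b, 1j * a],
    [b, -1j * a],
], dtype=complex)
print(frame_potential(sic))  # 16/3 = 5.333..., the SIC minimum
```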
ACPD 4, 2283-2300, 2004: Hemispheric average Cl atom concentration
U. Platt (Institut für Umweltphysik, University of Heidelberg, INF 229); W. Allen; D. Lowe
Received February 2004; accepted 9 March 2004; published 4 May 2004. Correspondence to: U. Platt (ulrich.platt...)
Liu, X.; Zhao, H. L.; Liu, Y., E-mail: liuyong@ipp.ac.cn; Li, E. Z.; Han, X.; Ti, A.; Hu, L. Q.; Zhang, X. D. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Domier, C. W.; Luhmann, N. C. [Department of Electrical and Computer Engineering, University of California at Davis, Davis, California 95616 (United States)
2014-09-15T23:59:59.000Z
This paper presents the results of the in situ absolute intensity calibration for the 32-channel heterodyne radiometer on the experimental advanced superconducting tokamak. The hot/cold load method is adopted, and the coherent averaging technique is employed to improve the signal to noise ratio. Measured spectra and electron temperature profiles are compared with those from an independent calibrated Michelson interferometer, and there is a relatively good agreement between the results from the two different systems.
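The coherent averaging technique mentioned above improves the signal-to-noise ratio by averaging many synchronized records; a toy demonstration of the sqrt(M) noise reduction (all parameters are illustrative):

```python
import numpy as np

# Coherent averaging: averaging M synchronized records of signal + noise
# reduces the noise RMS by sqrt(M) while leaving the signal intact.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t)

m = 400  # number of averaged records
records = signal + rng.normal(0.0, 1.0, size=(m, t.size))
averaged = records.mean(axis=0)

noise_before = (records[0] - signal).std()
noise_after = (averaged - signal).std()
print(noise_before / noise_after)  # close to sqrt(400) = 20
```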
Effects of confining pressure, pore pressure and temperature on absolute permeability. SUPRI TR-27
Gobran, B.D.; Ramey, H.J. Jr.; Brigham, W.E.
1981-10-01T23:59:59.000Z
This study investigates absolute permeability of consolidated sandstone and unconsolidated sand cores to distilled water as a function of the confining pressure on the core, the pore pressure of the flowing fluid and the temperature of the system. Since permeability measurements are usually made in the laboratory under conditions very different from those in the reservoir, it is important to know the effect of various parameters on the measured value of permeability. All studies on the effect of confining pressure on absolute permeability have found that when the confining pressure is increased, the permeability is reduced. The studies on the effect of temperature have shown much less consistency. This work contradicts the past Stanford studies by finding no effect of temperature on the absolute permeability of unconsolidated sand or sandstones to distilled water. The probable causes of the past errors are discussed. It has been found that inaccurate measurement of temperature at ambient conditions and non-equilibrium of temperature in the core can lead to a fictitious permeability reduction with temperature increase. The results of this study on the effect of confining pressure and pore pressure support the theory that as confining pressure is increased or pore pressure decreased, the permeability is reduced. The effects of confining pressure and pore pressure changes on absolute permeability are given explicitly so that measurements made under one set of confining pressure/pore pressure conditions in the laboratory can be extrapolated to conditions more representative of the reservoir.
Reversible (unitary) gates; ancillary qubits; controlled gates (cX, cZ); measurement; deterministic duplication. Decoding uses ancillary bits to determine what error occurred: an ancillary bit is set to 0 if the first two bits are equal, and to 1 if not.
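The decoding rule in the fragment above (an ancilla set to 0 if the first two bits are equal, 1 if not) is the classical syndrome extraction of the three-bit repetition code. A minimal illustrative sketch, not taken from the source (function names are mine):

```python
def syndrome(bits):
    """Syndrome bits: s0 = 0 iff the first two bits are equal,
    s1 = 0 iff the last two bits are equal."""
    b0, b1, b2 = bits
    return (b0 ^ b1, b1 ^ b2)

def correct(bits):
    """Flip the single bit (if any) that the syndrome identifies."""
    # Each syndrome pattern points at exactly one error location.
    flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(bits)]
    out = list(bits)
    if flip is not None:
        out[flip] ^= 1
    return tuple(out)
```

Any single bit flip is located and undone; for example `correct((0, 1, 0))` recovers `(0, 0, 0)`. The quantum version measures the same parities via the ancillary qubits without reading the data bits themselves.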
Absolute instruments and perfect imaging in geometrical optics
Tyc, Tomas
Tomáš Tyc, Lenka Herz… We study symmetric absolute instruments that provide perfect imaging in the sense of geometrical optics. We derive… and propose several new absolute instruments, in particular a lens providing a stigmatic image of an optically…
Targeted CT Screening for Lung Cancer using Absolute Risk Prediction
Brent, Roger
Stephanie A. Kovalchik (skovalch@rand.org), FHCRC 2014 Risk Prediction Symposium, June 11, 2014. Outline: Lung Cancer Epidemiology and Screening; Screening Benefit and Absolute Risk; Absolute Risk Model for Lung Cancer.
Absolute Calibration of the Auger Fluorescence Detectors
P. Bauleo; J. Brack; L. Garrard; J. Harton; R. Knapik; R. Meyhandan; A. C. Rovero; A. Tamashiro; D. Warner; for the Auger Collaboration
2005-07-14T23:59:59.000Z
Absolute calibration of the Pierre Auger Observatory fluorescence detectors uses a light source at the telescope aperture. The technique accounts for the combined effects of all detector components in a single measurement. The calibrated 2.5 m diameter light source fills the aperture, providing uniform illumination to each pixel. The known flux from the light source and the response of the acquisition system give the required calibration for each pixel. In the lab, light source uniformity is studied using CCD images, and the intensity is measured relative to NIST-calibrated photodiodes. Overall uncertainties are presently 12% and are dominated by systematics.
Absolute Energy Capital | Open Energy Information
Variable Selection for Modeling the Absolute Magnitude at Maximum of Type Ia Supernovae
Uemura, Makoto; Kawabata, S; Ikeda, Shiro; Maeda, Keiichi
2015-01-01T23:59:59.000Z
We discuss what constitutes an appropriate set of explanatory variables for predicting the absolute magnitude at maximum of Type Ia supernovae. For a good prediction, the error on future data, called the "generalization error," should be small. We use cross-validation to control the generalization error and a LASSO-type estimator to choose the set of variables. This approach can be used even when the number of samples is smaller than the number of candidate variables. We studied the Berkeley supernova database with our approach. Candidate explanatory variables include normalized spectral data, variables describing spectral lines, and previously proposed flux ratios, as well as the color and light-curve width. As a result, we confirmed past understanding of Type Ia supernovae: i) the absolute magnitude at maximum depends on the color and light-curve width; ii) the light-curve width depends on the strength of Si II. Recent studies have suggested adding more va...
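As a rough illustration of the LASSO-type selection described above (not the authors' code; the synthetic data and all names here are hypothetical), coordinate descent with soft-thresholding drives the coefficients of irrelevant candidate variables exactly to zero:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO: minimize 0.5*||y - Xw||^2 + n*lam*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed.
            r = y - X @ w + X[:, j] * w[j]
            w[j] = soft_threshold(X[:, j] @ r, n * lam) / col_sq[j]
    return w

# Synthetic example: only the first two of six candidate variables matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.standard_normal(200)
w = lasso_cd(X, y, lam=0.1)
```

In a full analysis, `lam` would be chosen by cross-validation so that the held-out (generalization) error is minimized, which is the role cross-validation plays in the abstract.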
Abdelhamid Awad Aly Ahmed, Sala
2008-10-10T23:59:59.000Z
QUANTUM ERROR CONTROL CODES. A Dissertation by SALAH ABDELHAMID AWAD ALY AHMED, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY, May 2008. Major Subject: Computer Science.
Thermodynamics of error correction
Pablo Sartori; Simone Pigolotti
2015-04-24T23:59:59.000Z
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and dissipated work of the process. Its derivation is based on the second law of thermodynamics, hence its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Absolute Maximal Entanglement and Quantum Secret Sharing
Helwig, Wolfram; Riera, Arnau; Latorre, José I; Lo, Hoi-Kwong
2012-01-01T23:59:59.000Z
We study the existence of absolutely maximally entangled (AME) states in quantum mechanics and its applications to quantum information. AME states are characterized by being maximally entangled for all bipartitions of the system and exhibit genuine multipartite entanglement. With such states, we present a novel parallel teleportation protocol which teleports multiple quantum states between groups of senders and receivers. The notable features of this protocol are that (i) the partition into senders and receivers can be chosen after the state has been distributed, and (ii) one group has to perform joint quantum operations while the parties of the other group only have to act locally on their system. We also prove the equivalence between pure state quantum secret sharing schemes and AME states with an even number of parties. This equivalence implies the existence of AME states for an arbitrary number of parties based on known results about the existence of quantum secret sharing schemes.
Experiments for the absolute neutrino mass measurement
Markus Steidl
2009-06-02T23:59:59.000Z
Experimental results and perspectives of different methods to measure the absolute mass scale of neutrinos are briefly reviewed. The mass sensitivities from cosmological observations, double beta decay searches, and single beta decay spectroscopy differ in sensitivity and model dependence. Next-generation experiments in the three fields reach a sensitivity for the lightest mass eigenstate of $m_1 < 0.2$ eV, which will finally answer the question of whether the neutrino mass eigenstates are degenerate. This sensitivity is also reached by the only model-independent approach, single beta decay (the KATRIN experiment). For higher sensitivities, at the cost of model dependence, neutrinoless double beta decay searches and cosmological observations have to be applied. Here, in the next decade, sensitivities are approached with the potential to test inverted hierarchy models.
Relativistic Spacetime Based on Absolute Background
ChiYi Chen
2015-09-19T23:59:59.000Z
Based on considerations of naturalness and physical facts in Einstein's theories of relativity, a nontrivial spacetime physical picture, slightly different from the standard one, is introduced by making a further distinction between the absolute background of spacetime and the relative length or duration of the base units of spacetime. In this picture, the coordinate base units in a gravity-induced spacetime metric are defined by the standard clock and ruler carried by the observer and duplicated onto every position of the whole universe. In contrast, the proper base units of spacetime in a gravitational field are defined by the length and duration of intervals between physical events in the same type of standard clock and ruler actually located at every position of the universe. In principle, the reading of the standard clock counts the number of unit intervals undergone, defined with respect to a certain kind of proper events. The size of the base units of spacetime is essentially depicted by the length of the line segment cut from the absolute background of spacetime by the proper events of unit interval. The effect of gravitation is just to change the length of this segment for the base spacetime units. On the basis of such a physical picture of spacetime, we re-derive, in a fairly natural way, a new classical dynamical equation that satisfies a more realistic and moderately general principle of relativity. To further examine this physical picture of gravitation and spacetime, we also reinterpret the gravitational redshifts in solar-gravity tests.
"Variable","Average Absolute Percent Differences","Percent of Projections Over- Estimated"
U.S. Energy Information Administration (EIA) Indexed Site
Paris-Sud XI, UniversitĂ© de
…the radiofrequency (RF) content of an optical radiation field E in a sensor bandwidth by mixing it with an LO field… camera (Andor iXon 885+, readout rate S/(2) = 20 Hz). The main optical radiation field is provided…
Achronal averaged null energy condition
Graham, Noah; Olum, Ken D. [Department of Physics, Middlebury College, Middlebury, Vermont 05753 (United States) and Center for Theoretical Physics, Laboratory for Nuclear Science, and Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts 02155 (United States)
2007-09-15T23:59:59.000Z
The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.
Measurement of the Absolute Branching Fraction of D0 to K- pi+
Aubert, B.; Bona, M.; Boutigny, D.; Karyotakis, Y.; Lees, J.P.; Poireau, V.; Prudent, X.; Tisserand, V.; Zghiche, A.; /Annecy, LAPP; Garra Tico, J.; Grauges, E.; /Barcelona U., ECM; Lopez, L.; Palano, A.; /Bari U.; Eigen, G.; Ofte, I.; Stugu, B.; Sun, L.; /Bergen U.; Abrams, G.S.; Battaglia, M.; Brown, D.N.; Button-Shafer, J.; /LBL, Berkeley
2007-04-25T23:59:59.000Z
The authors measure the absolute branching fraction for D⁰ → K⁻π⁺ using partial reconstruction of B̄⁰ → D*⁺ X ℓ⁻ ν̄_ℓ decays, in which only the charged lepton and the pion from the decay D*⁺ → D⁰π⁺ are used. Based on a data sample of 230 million BB̄ pairs collected at the Υ(4S) resonance with the BABAR detector at the PEP-II asymmetric-energy B Factory at SLAC, they obtain B(D⁰ → K⁻π⁺) = (4.007 ± 0.037 ± 0.070)%, where the first error is statistical and the second is systematic.
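When a result is quoted with separate statistical and systematic errors, as above, a common (though convention-dependent) way to form a single overall uncertainty is to add the two components in quadrature, assuming independence. A small sketch, with the function name mine:

```python
import math

def combine_errors(stat, syst):
    """Total uncertainty from independent statistical and systematic
    components, added in quadrature (a common convention)."""
    return math.hypot(stat, syst)

# The two errors quoted above, in percent.
total = combine_errors(0.037, 0.070)
```

This gives a total uncertainty of about 0.079%, dominated by the systematic component.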
Absolute nuclear material assay using count distribution (LAMBDA) space
Prasad, Manoj K. (Pleasanton, CA); Snyderman, Neal J. (Berkeley, CA); Rowland, Mark S. (Alamo, CA)
2012-06-05T23:59:59.000Z
A method of absolute nuclear material assay of an unknown source, comprising counting neutrons from the unknown source and providing an absolute nuclear material assay by using a model to optimally compare with the measured count distributions. In one embodiment, the assay uses random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event counts by spreading the fission chain distribution in time.
Absolute detector quantum-efficiency measurements using correlated photons
Migdall, Alan
Metrologia. A facility using correlated photons for radiometric purposes has been set up at the National Institute of Standards and Technology (NIST). We use pairs of correlated photons to produce spatial maps of the absolute efficiency…
Safety Logics I: Absolute Safety. Zhisheng Huang and John Bell
Huang, Zhisheng
Applied Logic Group (…@dcs.qmw.ac.uk). Abstract: In this paper we distinguish between absolute safety and normative safety, and develop a formal… extensions of it. We then give an example of reasoning about safety in nuclear power stations, and conclude…
Error Analysis of Heat Transfer for Finned-Tube Heat-Exchanger Text-Board
Chen, Y.; Zhang, J.
2006-01-01T23:59:59.000Z
…We substitute equation (13) into equation (10) to obtain the maximum absolute error of the air moisture content…
A New World Average Value for the Neutron Lifetime
A. P. Serebrov; A. K. Fomin
2010-05-27T23:59:59.000Z
The analysis of the data on measurements of the neutron lifetime is presented. A new most accurate result of the measurement of neutron lifetime [Phys. Lett. B 605 (2005) 72] 878.5 +/- 0.8 s differs from the world average value [Phys. Lett. B 667 (2008) 1] 885.7 +/- 0.8 s by 6.5 standard deviations. In this connection the analysis and Monte Carlo simulation of experiments [Phys. Lett. B 483 (2000) 15] and [Phys. Rev. Lett. 63 (1989) 593] is carried out. Systematic errors of about -6 s are found in each of the experiments. The summary table for the neutron lifetime measurements after corrections and additions is given. A new world average value for the neutron lifetime 879.9 +/- 0.9 s is presented.
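World-average values such as these are typically formed as inverse-variance weighted means of the individual measurements. A minimal sketch; the two input numbers below are the discrepant lifetimes quoted above, used purely for illustration, not the actual compilation inputs:

```python
import math

def weighted_average(values, errors):
    """Inverse-variance weighted mean and its standard error."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

# Illustrative only: two equally precise, discrepant lifetimes (seconds).
mean, err = weighted_average([878.5, 885.7], [0.8, 0.8])
```

Note that the two inputs differ by 7.2 s against a combined error of about 1.1 s, i.e. roughly the 6.5 standard deviations cited above; a discrepancy that large signals unaccounted systematics, which is why the abstract re-examines the individual experiments rather than simply averaging them.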
STATISTICAL MODEL OF SYSTEMATIC ERRORS: LINEAR ERROR MODEL
Rudnyi, Evgenii B.
E.B. Rudnyi, Department of Chemistry. …to apply. The algorithm to maximize a likelihood function in the case of a non-linear physico-… …the same variances of errors. 3.1 One-way classification; 3.2 Linear regression; 4. Real case (vaporization…)
Spectral averaging techniques for Jacobi matrices
Rafael del Rio; Carmen Martinez; Hermann Schulz-Baldes
2008-02-20T23:59:59.000Z
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
Annual Energy Outlook 2013 [U.S. Energy Information Administration (EIA)]
Uncertainty quantification and error analysis
Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL
2010-01-01T23:59:59.000Z
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Comparative vs. Absolute Performance Assessment with Environmental Sustainability Metrics
High, Karen
Xun Jin… Different goals and potential audiences determine two types of environmental performance assessment. Metrics can be partitioned into two camps: one suite of metrics aims to assess the environmental…
General Relativity and Spatial Flows: I. Absolute Relativistic Dynamics
Tom Martin
2000-06-08T23:59:59.000Z
Two complementary and equally important approaches to relativistic physics are explained. One is the standard approach, and the other is based on a study of the flows of an underlying physical substratum. Previous results concerning the substratum flow approach are reviewed, expanded, and more closely related to the formalism of General Relativity. An absolute relativistic dynamics is derived in which energy and momentum take on absolute significance with respect to the substratum. Possible new effects on satellites are described.
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15T23:59:59.000Z
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
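A toy software model of the mirrored-register-file scheme described above; this is purely illustrative (the patent describes hardware circuitry, and the class, method names, and parity-based detection here are my invention, standing in for the error detection circuitry):

```python
def parity(v):
    # Even/odd parity of a non-negative integer's bits.
    return bin(v).count("1") & 1

class MirroredRegisterFile:
    """Toy model: a primary register file with stored parity, a mirror
    copy, detection of corrupted reads, and recovery from the mirror."""

    def __init__(self, size):
        self.primary = [(0, 0)] * size   # (value, parity) pairs
        self.mirror = [0] * size

    def write(self, idx, value):
        self.primary[idx] = (value, parity(value))
        self.mirror[idx] = value

    def flip_bit(self, idx, bit):
        # Inject a soft error into the primary copy only (for testing).
        v, p = self.primary[idx]
        self.primary[idx] = (v ^ (1 << bit), p)

    def read(self, idx):
        v, p = self.primary[idx]
        if parity(v) != p:               # corruption detected on read:
            v = self.mirror[idx]         # recover the value from the mirror
            self.primary[idx] = (v, parity(v))
        return v
```

In the patented system the analogous recovery happens by inserting an error-recovery instruction into the arithmetic pipeline; here a corrupted read simply repairs itself from the mirror before returning.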
Optimization Online - Dual Averaging Methods for Regularized ...
Lin Xiao
2010-04-15T23:59:59.000Z
Apr 15, 2010 ... ... simple minimization problem that involves the running average of all past subgradients of the loss function and the whole regularization term, ...
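For the ℓ1-regularized case, the dual averaging step described in the snippet (a simple minimization over the running average of past subgradients plus the whole regularizer) has a closed-form solution. A sketch under the assumption of an auxiliary strongly convex term scaled like γ√t; the test problem and all names are mine, not from the paper:

```python
import math

def l1_rda(grad_fn, dim, lam, gamma, T):
    """l1-regularized dual averaging: at step t, set
    w = argmin <gbar, w> + lam*||w||_1 + (gamma/sqrt(t)) * ||w||^2 / 2,
    where gbar is the running average of all past subgradients."""
    w = [0.0] * dim
    gbar = [0.0] * dim
    for t in range(1, T + 1):
        g = grad_fn(w)
        for j in range(dim):
            gbar[j] += (g[j] - gbar[j]) / t          # running average
            shrink = max(abs(gbar[j]) - lam, 0.0)    # soft-threshold
            w[j] = -(math.sqrt(t) / gamma) * math.copysign(shrink, gbar[j])
    return w

# Toy problem: f(w) = 0.5*(w0 - 1)^2 + 0.5*(w1 + 0.2)^2 with lam = 0.3.
# The l1 term should keep w1 exactly at zero, since |df/dw1| <= 0.3 there.
w = l1_rda(lambda w: [w[0] - 1.0, w[1] + 0.2], dim=2, lam=0.3, gamma=5.0, T=10000)
```

The point of thresholding the averaged subgradient, rather than each noisy subgradient separately, is that coordinates whose average gradient stays below λ remain exactly zero, giving sparse iterates.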
Sandia National Laboratories: increasing average wind turbine...
Increasing average wind turbine power rating. Latest version of the Composite Materials Database available for download, December 3, 2014, in Energy, Materials Science, News…
The Absolute Magnitude of RR Lyrae Stars Derived from the Hipparcos Catalogue
Takuji Tsujimoto; Masanori Miyamoto; Yuzuru Yoshii
1997-11-04T23:59:59.000Z
The present determination of the absolute magnitude $M_V(RR)$ of RR Lyrae stars is twofold, relying upon Hipparcos proper motions and trigonometric parallaxes separately. First, applying the statistical parallax method to the proper motions, we find $\langle M_V(RR)\rangle = 0.69\pm0.10$ for 99 halo RR Lyraes with $\langle{\rm [Fe/H]}\rangle = -1.58$. Second, applying the Lutz-Kelker correction to the RR Lyrae HIP95497 with the most accurately measured parallax, we obtain $M_V(RR)$ = (0.58--0.68)$^{+0.28}_{-0.31}$ at [Fe/H] = --1.6. Furthermore, allowing full use of low-accuracy and negative parallaxes as well, for 125 RR Lyraes with $-2.49\leq{\rm [Fe/H]}\leq 0.07$, the maximum likelihood estimation yields the relation $M_V(RR) = (0.59\pm0.37)+(0.20\pm0.63)({\rm [Fe/H]}+1.60)$, which formally agrees with the recent preferred relation. The same estimation again yields $\langle M_V(RR)\rangle = 0.65\pm0.33$ for the 99 halo RR Lyraes. Although the formal errors in the latter three parallax estimates are rather large, all four results suggest the fainter absolute magnitude, $M_V(RR)\approx 0.6$--$0.7$ at [Fe/H] = --1.6. The present results still provide a lower limit on the age of the universe which is inconsistent with a flat, matter-dominated universe and current estimates of the Hubble constant.
Averages in vector spaces over finite fields
Wright J.; Carbery A.; Stones B.
2008-01-01T23:59:59.000Z
We study the analogues of the problems of averages and maximal averages over a surface in R^n when the Euclidean structure is replaced by that of a vector space over a finite field, and obtain optimal results in a number ...
MESOSCALE AVERAGING OF NUCLEATION AND GROWTH MODELS
Burger, Martin
Martin Burger, Vincenzo Capasso, and Livio… …Kolmogorov relations for the degree of crystallinity. By relating the computation of expected values to mesoscale averaging, we obtain a suitable description of the process at the mesoscale. We show how the variance…
Absolute Lineshifts - A new diagnostic for stellar hydrodynamics
Dainis Dravins
2003-02-28T23:59:59.000Z
For hydrodynamic model atmospheres, absolute lineshifts are becoming an observable diagnostic tool beyond the classical ones of line-strength, -width, -shape, and -asymmetry. This is the wavelength displacement of different types of spectral lines away from the positions naively expected from the Doppler shift caused by stellar radial motion. Caused mainly by correlated velocity and brightness patterns in granular convection, such absolute lineshifts could in the past be studied only for the Sun (since the relative Sun-Earth motion, and the ensuing Doppler shift is known). For other stars, this is now becoming possible thanks to three separate developments: (a) Astrometric determination of stellar radial motion; (b) High-resolution spectrometers with accurate wavelength calibration, and (c) Accurate laboratory wavelengths for several atomic species. Absolute lineshifts offer a tool to segregate various 2- and 3-dimensional models, and to identify non-LTE effects in line formation.
Syam Kumar, S.A., E-mail: skppm@rediffmail.com [Department of Medical Physics, Cancer Institute (WIA), Adyar, Chennai, Tamil Nadu (India); Sukumar, Prabakar; Sriram, Padmanaban; Rajasekaran, Dhanabalan; Aketi, Srinu; Vivekanandan, Nagarajan [Department of Medical Physics, Cancer Institute (WIA), Adyar, Chennai, Tamil Nadu (India)
2012-01-01T23:59:59.000Z
The recalculation of one fraction from a patient treatment plan on a phantom, with subsequent measurements, has become the norm for measurement-based verification, which combines the quality assurance recommendations for the treatment planning system and the beam delivery system. This type of evaluation has prompted attention to measurement equipment and techniques. Ionization chambers are considered the gold standard because of their precision, availability, and relative ease of use. This study evaluates and compares five different ionization chamber-phantom combinations for verification in routine patient-specific quality assurance of RapidArc treatments. Fifteen different RapidArc plans conforming to clinical standards were selected for the study. Verification plans were then created for each treatment plan with different chamber-phantom combinations scanned by computed tomography: a Medtec intensity-modulated radiation therapy (IMRT) phantom with a micro-ionization chamber (0.007 cm³) and a pinpoint chamber (0.015 cm³), a PTW Octavius phantom with a semiflex chamber (0.125 cm³) and a 2D array (0.125 cm³), and an indigenously made circular wax phantom with a 0.6 cm³ chamber. The measured isocenter absolute dose was compared with the treatment planning system (TPS) plan. The micro-ionization chamber shows larger deviations than the semiflex and 0.6 cm³ chambers, with maximum variations of -4.76%, -1.49%, and 2.23% for the micro-ionization, semiflex, and Farmer chambers, respectively. The positive variations indicate that the larger-volume chamber overestimates. The Farmer chamber shows higher deviation than the 0.125 cm³ chamber. In general the deviation was found to be <1% with the semiflex and Farmer chambers. A maximum variation of 2% was observed for the 0.007 cm³ ionization chamber, except in a few cases. The pinpoint chamber underestimates the calculated isocenter dose by a maximum of 4.8%.
Absolute dose measurements using the semiflex ionization chamber with intermediate volume (0.125 cm³) show good agreement with the TPS calculation among the detectors used in this study. Positioning is very important when using smaller-volume chambers because they are more sensitive to geometrical errors within the treatment fields. It is also suggested to average the dose over the sensitive volume for larger-volume chambers. The ionization chamber-phantom combinations used in this study can be used interchangeably for routine RapidArc patient-specific quality assurance with satisfactory accuracy for clinical practice.
Electron Cyclotron Emission Measurements on JET: Michelson Interferometer, New Absolute Calibration and Determination of Electron Temperature
Thermal ghost imaging with averaged speckle patterns
Shapiro, Jeffrey H.
We present theoretical and experimental results showing that a thermal ghost imaging system can produce images of high quality even when it uses detectors so slow that they respond only to intensity-averaged (that is, ...
Selling Geothermal Systems: The "Average" Contractor
History of sales procedures; manufacturer-driven procedures; what makes geothermal technology any harder to sell? "It's difficult to sell a geothermal system." It should…
Spacetime Average Density (SAD) cosmological measures
Page, Don N., E-mail: profdonpage@gmail.com [Department of Physics, 4-183 CCIS, University of Alberta, Edmonton, Alberta, T6G 2E1 Canada (Canada)
2014-11-01T23:59:59.000Z
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
STAFF FORECAST: AVERAGE RETAIL ELECTRICITY PRICES
CALIFORNIA ENERGY COMMISSION STAFF FORECAST: AVERAGE RETAIL ELECTRICITY PRICES 2005 TO 2018 Mignon Marks Principal Author Mignon Marks Project Manager David Ashuckian Manager ELECTRICITY ANALYSIS OFFICE Sylvia Bender Acting Deputy Director ELECTRICITY SUPPLY DIVISION B.B. Blevins Executive Director
Distributed Averaging Via Lifted Markov Chains
Jung, Kyomin
Motivated by applications of distributed linear estimation, distributed control, and distributed optimization, we consider the question of designing linear iterative algorithms for computing the average of numbers in a ...
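The baseline linear iterative algorithm referred to above can be sketched as repeated neighbor averaging on a ring (illustrative only; the paper's contribution is accelerating such chains via lifting, which this sketch does not implement):

```python
def consensus_average(values, rounds=500):
    """Linear-iteration consensus on a ring of n nodes: each node repeatedly
    replaces its value with a weighted average of itself and its neighbors.
    The weight matrix is doubly stochastic, so the global average is
    preserved and every node converges to it."""
    n = len(values)
    x = list(values)
    for _ in range(rounds):
        x = [0.5 * x[i] + 0.25 * x[(i - 1) % n] + 0.25 * x[(i + 1) % n]
             for i in range(n)]
    return x
```

On a ring this mixes slowly for large n (the spectral gap shrinks like 1/n²); lifted Markov chains are one way to speed up exactly this kind of iteration.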
Self-averaging characteristics of spectral fluctuations
Petr Braun; Fritz Haake
2014-10-20T23:59:59.000Z
The spectral form factor, as well as the two-point correlator of the density of (quasi-)energy levels of individual quantum dynamics, is not self-averaging. Only suitable smoothing turns these quantities into useful characteristics of spectra. We present numerical data for a fully chaotic kicked top, employing two types of smoothing: one involves primitives of the spectral correlator, the second a small imaginary part of the quasi-energy. Self-averaging universal (CUE-like) behavior is found for the smoothed correlator, apart from noise which shrinks like $1/\sqrt{N}$ as the dimension $N$ of the quantum Hilbert space grows. There are periodically repeated quasi-energy windows of correlation decay and revival wherein the smoothed correlation remains finite as $N\to\infty$, such that the noise is negligible. In between those windows (where the CUE-averaged correlator takes on values of order $1/N^2$) the noise becomes dominant and self-averaging is lost. We conclude that the noise forbids distinguishing CUE-type from GUE-type behavior. Surprisingly, the underlying smoothed generating function does not enjoy any self-averaging outside the range of its variables relevant for determining the two-point correlator (and certain higher-order ones). We corroborate our numerical findings for the noise by analytically determining the CUE variance of the smoothed single-matrix correlator.
Measuring absolute infrared spectral radiance with correlated photons: new arrangements
Migdall, Alan
Metrologia. New arrangements for measuring absolute infrared spectral radiance using correlated photons are presented. The method has the remarkable feature that it allows ... to be measured using correlated photons [1-4]. That work outlined some of the useful features of the method. One
absolute reaction rate theory 156; accelerated cooled steels 353–8
Cambridge, University of
dislocation density 26–9, 70–1; distribution of carbon 71–2; driving forces 202–4; dual phase steels 358; absolute reaction rate theory 156; accelerated cooled steels 353–8; acicular ferrites 237–76; forging steels 273–4; growth 240–3; inoculation 267–75; lattice matches 245; morphology 237–40; nucleation 243
Measurements of absolute single differential cross section (SDCS)
Zouros, Theo
, Office of Basic Energy Sciences, Office of Energy Research, U.S. Department of Energy, Human Capital and Mobility Program of the EU and the Greek Ministry of Industry, Energy and Technology. SIMION Tuning Energy. Measurements of absolute single differential cross section (SDCS) [Left] and percentage energy res
Absolute Calibration of a Large-diameter Light Source
Brack, J T; Dorofeev, A; Gookin, B; Harton, J L; Petrov, Y; Rovero, A C
2013-01-01T23:59:59.000Z
A method of absolute calibration for large aperture optical systems is presented, using the example of the Pierre Auger Observatory fluorescence detectors. A 2.5 m diameter light source illuminated by an ultraviolet light emitting diode is calibrated with an overall uncertainty of 2.1 % at a wavelength of 365 nm.
Double Beta Decay and the Absolute Neutrino Mass Scale
Carlo Giunti
2003-08-20T23:59:59.000Z
After a short review of the current status of three-neutrino mixing, the implications for the values of neutrino masses are discussed. The bounds on the absolute scale of neutrino masses from Tritium beta-decay and cosmological data are reviewed. Finally, we discuss the implications of three-neutrino mixing for neutrinoless double-beta decay.
see Type I decision error see Type II decision error
DATA COMPRESSION USING WAVELETS: ERROR ...
1910-90-11T23:59:59.000Z
algorithms that introduce differences between the original and compressed data in ... to choose an error metric that parallels the human visual system, so that image .... signal data along a communications channel, one sends integer codes that ...
The Challenge of Quantum Error Correction.
Fominov, Yakov
in the design of physical bits. What we need: hardware requirements: 1. Many (10^3-10^4 / R) individual bits ... a. classical (flip) error; b. phase error, a fluctuating phase factor exp(-i ∫ E(t) dt) ... Need hardware error correction ... Classical error correction by software and hardware. Hardware error correction: Ising
Absolute x-ray dosimetry on a synchrotron medical beam line with a graphite calorimeter
Harty, P. D., E-mail: Peter.Harty@arpansa.gov.au; Ramanathan, G.; Butler, D. J.; Johnston, P. N. [Australian Radiation Protection and Nuclear Safety Agency, Yallambie, Victoria 3085 (Australia)] [Australian Radiation Protection and Nuclear Safety Agency, Yallambie, Victoria 3085 (Australia); Lye, J. E. [Australian Radiation Protection and Nuclear Safety Agency, Yallambie, Victoria 3085, Australia and Australian Clinical Dosimetry Service, Yallambie, Victoria 3085 (Australia)] [Australian Radiation Protection and Nuclear Safety Agency, Yallambie, Victoria 3085, Australia and Australian Clinical Dosimetry Service, Yallambie, Victoria 3085 (Australia); Hall, C. J. [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168 (Australia)] [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168 (Australia); Stevenson, A. W. [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and CSIRO, Materials Science and Engineering, Clayton Sth Victoria 3169 (Australia)] [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and CSIRO, Materials Science and Engineering, Clayton Sth Victoria 3169 (Australia)
2014-05-15T23:59:59.000Z
Purpose: The absolute dose rate of the Imaging and Medical Beamline (IMBL) on the Australian Synchrotron was measured with a graphite calorimeter. The calorimetry results were compared to measurements from the existing free-air chamber, to provide a robust determination of the absolute dose in the synchrotron beam and provide confidence in the first implementation of a graphite calorimeter on a synchrotron medical beam line. Methods: The graphite calorimeter has a core which rises in temperature when irradiated by the beam. A collimated x-ray beam from the synchrotron with well-defined edges was used to partially irradiate the core. Two filtration sets were used, one corresponding to an average beam energy of about 80 keV, with dose rate about 50 Gy/s, and the second filtration set corresponding to average beam energy of 90 keV, with dose rate about 20 Gy/s. The temperature rise from this beam was measured by a calibrated thermistor embedded in the core which was then converted to absorbed dose to graphite by multiplying the rise in temperature by the specific heat capacity for graphite and the ratio of cross-sectional areas of the core and beam. Conversion of the measured absorbed dose to graphite to absorbed dose to water was achieved using Monte Carlo calculations with the EGSnrc code. The air kerma measurements from the free-air chamber were converted to absorbed dose to water using the AAPM TG-61 protocol. Results: Absolute measurements of the IMBL dose rate were made using the graphite calorimeter and compared to measurements with the free-air chamber. The measurements were at three different depths in graphite and two different filtrations. The calorimetry measurements at depths in graphite show agreement within 1% with free-air chamber measurements, when converted to absorbed dose to water. The calorimetry at the surface and free-air chamber results show agreement of order 3% when converted to absorbed dose to water.
The combined standard uncertainty is 3.9%. Conclusions: The good agreement of the graphite calorimeter and free-air chamber results indicates that both devices are performing as expected. Further investigations at higher dose rates than 50 Gy/s are planned. At higher dose rates, recombination effects for the free-air chamber are much higher and expected to lead to much larger uncertainties. Since the graphite calorimeter does not have problems associated with dose rate, it is an appropriate primary standard detector for the synchrotron IMBL x rays and is the more accurate dosimeter for the higher dose rates expected in radiotherapy applications.
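The conversion described in the Methods section above (temperature rise multiplied by specific heat, scaled by the core-to-beam area ratio for partial irradiation) can be sketched as follows. The numeric values, including the nominal specific heat of graphite, are illustrative assumptions, not figures taken from the paper.

```python
# Illustrative sketch of the calorimetric dose conversion: absorbed dose to
# graphite = specific heat * temperature rise, scaled by the ratio of core
# to beam cross-sectional areas when the beam only partially covers the core.
# All numeric values are assumptions for illustration.

C_GRAPHITE = 709.0  # nominal specific heat capacity of graphite, J/(kg.K)

def absorbed_dose_graphite(delta_t_kelvin, core_area_mm2, beam_area_mm2):
    """Absorbed dose to graphite (Gy) from a measured temperature rise."""
    return C_GRAPHITE * delta_t_kelvin * (core_area_mm2 / beam_area_mm2)

# Example: a 10 mK rise with the beam covering half of the core cross-section.
dose = absorbed_dose_graphite(0.010, core_area_mm2=100.0, beam_area_mm2=50.0)
print(round(dose, 2))  # dose in Gy
```

This is only the graphite-dose step; the paper then converts to absorbed dose to water with EGSnrc Monte Carlo calculations, which is beyond a short sketch.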
Using CO2 spatial variability to quantify representation errors of satellite CO2 retrievals
Michalak, Anna M.
global data of column-averaged CO2 dry-air mole fraction (XCO2) at high spatial resolutions. These data ... Using CO2 spatial variability to quantify representation errors of satellite CO2 retrievals ... 2008; published 29 August 2008. [1] Satellite measurements of column-averaged CO2 dry-air mole
Average Square Footage of Mobile Homes, by Housing Characteristics, 2009
U.S. Energy Information Administration (EIA) Indexed Site
Average Square Footage of Mobile Homes, by Housing Characteristics, 2009 (Final): housing units and average square footage per housing unit.
Stochastic Nash Equilibrium Problems: Sample Average ...
2010-01-22T23:59:59.000Z
convergence of stationary points of sample average optimization problems, see for .... (c) Finally we model the competition in the electricity spot market as a ...... out to be p(Q, ?), Ci(qi) denotes the total cost for producing qi amount of electricity
Absolute Source Activity Measurement with a Single Detector
Bikit, I.; Nemes, T.; Mrdja, D.; Forkapic, S. [Department of Physics, Faculty of Sciences, University of Novi Sad, Trg Dositeja Obradovica 4, 21 000 Novi Sad (Serbia)
2009-08-26T23:59:59.000Z
In the present paper the activity of a ⁶⁰Co source was measured using the full absorption, sum and random coincidence (pile-up) peaks and the total spectrum area in the gamma spectra. By the exact treatment of the chance coincidence and pile-up events, surprisingly good results were obtained. With the source on the detector end-cap (when the angular correlation effects are negligible), this simple method yields absolute activity values deviating from the reference activity by about 1 percent.
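The sum-peak idea referenced above can be illustrated with the classic Brinkman relation for a two-gamma cascade such as ⁶⁰Co, A ≈ N1·N2/N12 + T. This is the textbook simplification, sketched here for illustration; it is not necessarily the exact chance-coincidence and pile-up treatment used in the paper, and the rates below are invented.

```python
def sum_peak_activity(n1, n2, n12, total):
    """Brinkman-style sum-peak activity estimate for a two-gamma cascade.

    n1, n2 -- net count rates in the two full-energy peaks (counts/s)
    n12    -- net count rate in the sum peak (counts/s)
    total  -- total count rate of the whole spectrum (counts/s)
    Returns the estimated source activity in decays per second (Bq).
    """
    return n1 * n2 / n12 + total

# Illustrative rates (not from the paper) for a source on the end-cap:
activity = sum_peak_activity(n1=1200.0, n2=1100.0, n12=150.0, total=4000.0)
print(round(activity))  # estimated activity in Bq
```

A notable feature of this estimator is that the detector efficiencies cancel, which is why the method can be "absolute" with a single detector.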
Unequal error protection of subband coded bits
Devalla, Badarinath
1994-01-01T23:59:59.000Z
Source coded data can be separated into different classes based on their susceptibility to channel errors. Errors in the important bits cause greater distortion in the reconstructed signal. This thesis presents an Unequal Error Protection scheme...
Absolute photoionization cross-section of the propargyl radical
Savee, John D.; Welz, Oliver; Taatjes, Craig A.; Osborn, David L. [Sandia National Laboratories, Combustion Research Facility, Livermore, California 94551 (United States); Soorkia, Satchin [Institut des Sciences Moleculaires d'Orsay, Universite Paris-Sud 11, Orsay (France); Selby, Talitha M. [Department of Chemistry, University of Wisconsin, Washington County Campus, West Bend, Wisconsin 53095 (United States)
2012-04-07T23:59:59.000Z
Using synchrotron-generated vacuum-ultraviolet radiation and multiplexed time-resolved photoionization mass spectrometry we have measured the absolute photoionization cross-section for the propargyl (C3H3) radical, σ_propargyl^ion(E), relative to the known absolute cross-section of the methyl (CH3) radical. We generated a stoichiometric 1:1 ratio of C3H3 : CH3 from 193 nm photolysis of two different C4H6 isomers (1-butyne and 1,3-butadiene). Photolysis of 1-butyne yielded values of σ_propargyl^ion(10.213 eV) = (26.1 ± 4.2) Mb and σ_propargyl^ion(10.413 eV) = (23.4 ± 3.2) Mb, whereas photolysis of 1,3-butadiene yielded values of σ_propargyl^ion(10.213 eV) = (23.6 ± 3.6) Mb and σ_propargyl^ion(10.413 eV) = (25.1 ± 3.5) Mb. These measurements place our relative photoionization cross-section spectrum for propargyl on an absolute scale between 8.6 and 10.5 eV. The cross-section derived from our results is approximately a factor of three larger than previous determinations.
Non-Gaussian numerical errors versus mass hierarchy
Y. Meurice; M. B. Oktay
2000-05-12T23:59:59.000Z
We probe the numerical errors made in renormalization group calculations by varying slightly the rescaling factor of the fields and rescaling back in order to get the same (if there were no round-off errors) zero momentum 2-point function (magnetic susceptibility). The actual calculations were performed with Dyson's hierarchical model and a simplified version of it. We compare the distributions of numerical values obtained from a large sample of rescaling factors with the (Gaussian by design) distribution of a random number generator and find significant departures from the Gaussian behavior. In addition, the average value differs (robustly) from the exact answer by a quantity which is of the same order as the standard deviation. We provide a simple model in which the errors made at shorter distances have a larger weight than those made at larger distances. This model explains in part the non-Gaussian features and why the central-limit theorem does not apply.
Communication error detection using facial expressions
Wang, Sy Bor, 1976-
2008-01-01T23:59:59.000Z
Automatic detection of communication errors in conversational systems typically relies only on acoustic cues. However, perceptual studies have indicated that speakers do exhibit visual communication error cues passively ...
Harmonic Analysis Errors in Calculating Dipole,
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
to reduce the harmonic field calculation errors. A conformal transformation of a multipole magnet into a dipole reduces these errors. Dipole Magnet Calculations: A triangular...
Impact Ionization Model Using Average Energy and Average Square Energy of Distribution Function
Dunham, Scott
Impact Ionization Model Using Average Energy and Average Square Energy of Distribution Function ... the energy relaxation length (≈ 0.05 µm), the energy distribution function is not well described ... calculation of the impact ionization coefficient requires the use of a high-energy distribution function because
Coordinated joint motion control system with position error correction
Danko, George (Reno, NV)
2011-11-22T23:59:59.000Z
Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.
Absolute Efficiency Calibration of a Beta-Gamma Detector
Cooper, Matthew W.; Ely, James H.; Haas, Derek A.; Hayes, James C.; McIntyre, Justin I.; Lidey, Lance S.; Schrom, Brian T.
2013-04-10T23:59:59.000Z
Abstract- Identification and quantification of nuclear events such as the Fukushima reactor failure and nuclear explosions rely heavily on the accurate measurement of radioxenon releases. One radioxenon detection method depends on detecting beta-gamma coincident events paired with a stable xenon measurement to determine the concentration of a plume. Like all measurements, the beta-gamma method relies on knowing the detection efficiency for each isotope measured. Several methods are commonly used to characterize the detection efficiency for a beta-gamma detector. The most common method is using a NIST-certified sealed source to determine the efficiency. A second method determines the detection efficiencies relative to an already characterized detector. Finally, a potentially more accurate method is to use the expected sample to perform an absolute efficiency calibration; in the case of a beta-gamma detector, this relies on radioxenon gas samples. The complication of the first method is that it focuses only on the gamma detectors and does not offer a solution for determining the beta efficiency. The second method listed is not similarly constrained; however, it relies on another detector having a well-known efficiency calibration. The final method, using actual radioxenon samples to make an absolute efficiency determination, is the most desirable, but until recently it was not possible to produce all four radioxenon isotopes in isotopically pure form. The production, by the University of Texas (UT), of isotopically pure radioxenon has allowed the beta-gamma detectors to be calibrated using the absolute efficiency method. The first calibration with all four radioxenon isotopes is discussed in this paper.
Absolute beam emittance measurements at RHIC using ionization profile monitors
Minty, M. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Connolly, R [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Liu, C. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Summers, T. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.; Tepikian, S. [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.
2014-08-15T23:59:59.000Z
In the past, comparisons between emittance measurements obtained using ionization profile monitors, Vernier scans (using as input the measured rates from the zero degree counters, or ZDCs), the polarimeters and the Schottky detectors evidenced significant variations of up to 100%. In this report we present studies of the RHIC ionization profile monitors (IPMs). After identifying and correcting for two systematic instrumental errors in the beam size measurements, we present experimental results showing that the remaining dominant error in beam emittance measurements at RHIC using the IPMs was imprecise knowledge of the local beta functions. After removal of the systematic errors and implementation of measured beta functions, precise emittance measurements result. Also, consistency between the emittances measured by the IPMs and those derived from the ZDCs was demonstrated.
Time-dependent angularly averaged inverse transport
Guillaume Bal; Alexandre Jollivet
2009-05-07T23:59:59.000Z
This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain.
Quantum bath refrigeration towards absolute zero: unattainability principle challenged
Michal Kolář; David Gelbwaser-Klimovsky; Robert Alicki; Gershon Kurizki
2012-08-05T23:59:59.000Z
A minimal model of a quantum refrigerator (QR), i.e. a periodically phase-flipped two-level system permanently coupled to a finite-capacity bath (cold bath) and an infinite heat dump (hot bath), is introduced and used to investigate the cooling of the cold bath towards the absolute zero (T=0). Remarkably, the temperature scaling of the cold-bath cooling rate reveals that it does not vanish as T->0 for certain realistic quantized baths, e.g. phonons in strongly disordered media (fractons) or quantized spin-waves in ferromagnets (magnons). This result challenges Nernst's third-law formulation known as the unattainability principle.
Method of differential-phase/absolute-amplitude QAM
Dimsdle, Jeffrey William (Overland Park, KS)
2008-10-21T23:59:59.000Z
A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.
Method of differential-phase/absolute-amplitude QAM
Dimsdle, Jeffrey William (Overland Park, KS)
2007-10-02T23:59:59.000Z
A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.
Method of differential-phase/absolute-amplitude QAM
Dimsdle, Jeffrey William (Overland Park, KS)
2009-09-01T23:59:59.000Z
A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.
Method of differential-phase/absolute-amplitude QAM
Dimsdle, Jeffrey William (Overland Park, KS)
2007-07-17T23:59:59.000Z
A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.
Method of differential-phase/absolute-amplitude QAM
Dimsdle, Jeffrey William (Overland Park, KS)
2007-07-03T23:59:59.000Z
A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.
ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.
Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.
2004-07-26T23:59:59.000Z
We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.
Absolute absorption on the potassium D lines:theory and experiment
Hanley, Ryan K; Hughes, Ifan G; Cornish, Simon L
2015-01-01T23:59:59.000Z
We present a detailed study of the absolute Doppler-broadened absorption of a probe beam scanned across the potassium D lines in a thermal vapour. Spectra using a weak probe were measured on the 4S $\rightarrow$ 4P transition and compared to the theoretical model of the electric susceptibility detailed by Zentile et al. (2015) in the code named ElecSus. Comparisons were also made on the 4S $\rightarrow$ 5P transition with an adapted version of ElecSus. This is the first experimental test of ElecSus on an atom with a ground-state hyperfine splitting smaller than the Doppler width. An excellent agreement was found between ElecSus and experimental measurements at a variety of temperatures, with rms errors of $\sim 10^{-3}$. We have also demonstrated the use of ElecSus as an atomic vapour thermometry tool, and present a possible new measurement technique for transition decay rates which we predict to have a precision of $\sim$ 3 kHz.
Long-term average performance benefits of parabolic trough improvements
Gee, R.; Gaul, H.W.; Kearney, D.; Rabl, A.
1980-03-01T23:59:59.000Z
Improved parabolic trough concentrating collectors will result from better design, improved fabrication techniques, and the development and utilization of improved materials. The difficulty of achieving these improvements varies as does their potential for increasing parabolic trough performance. The purpose of this analysis is to quantify the relative merit of various technology advancements in improving the long-term average performance of parabolic trough concentrating collectors. The performance benefits of improvements are determined as a function of operating temperature for north-south, east-west, and polar mounted parabolic troughs. The results are presented graphically to allow a quick determination of the performance merits of particular improvements. Substantial annual energy gains are shown to be attainable. Of the improvements evaluated, the development of stable back-silvered glass reflective surfaces offers the largest performance gain for operating temperatures below 150°C. Above 150°C, the development of trough receivers that can maintain a vacuum is the most significant potential improvement. The reduction of concentrator slope errors also has a substantial performance benefit at high operating temperatures.
Absolute Values of Neutrino Masses: Status and Prospects
S. M. Bilenky; C. Giunti; J. A. Grifols; E. Masso
2003-03-27T23:59:59.000Z
Compelling evidence in favor of neutrino masses and mixing, obtained in recent years in Super-Kamiokande, SNO, KamLAND and other neutrino experiments, has made the physics of massive and mixed neutrinos a frontier field of research in particle physics and astrophysics. There are many open problems in this new field. In this review we consider the problem of the absolute values of neutrino masses, which apparently is the most difficult one from the experimental point of view. We discuss the present limits and the future prospects of beta-decay neutrino mass measurements and neutrinoless double-beta decay. We consider the important problem of the calculation of nuclear matrix elements of neutrinoless double-beta decay and discuss the possibility to check the results of different model calculations of the nuclear matrix elements through their comparison with the experimental data. We discuss the upper bound of the total mass of neutrinos that was obtained recently from the data of the 2dF Galaxy Redshift Survey and other cosmological data and we discuss future prospects of the cosmological measurements of the total mass of neutrinos. We discuss also the possibility to obtain information on neutrino masses from the observation of the ultra high-energy cosmic rays (beyond the GZK cutoff). Finally, we review the main aspects of the physics of core-collapse supernovae, the limits on the absolute values of neutrino masses from the observation of SN1987A neutrinos and the future prospects of supernova neutrino detection.
Fact #870: April 27, 2015 Corporate Average Fuel Economy Progress...
Office of Environmental Management (EM)
Fact #870: April 27, 2015: Corporate Average Fuel Economy Progress, 1978-2014. The Corporate Average Fuel...
Monache, L D; Grell, G A; McKeen, S; Wilczak, J; Pagowski, M O; Peckham, S; Stull, R; McHenry, J; McQueen, J
2006-03-20T23:59:59.000Z
Kalman filtering (KF) is used to postprocess numerical-model output to estimate systematic errors in surface ozone forecasts. It is implemented with a recursive algorithm that updates its estimate of future ozone-concentration bias by using past forecasts and observations. KF performance is tested for three types of ozone forecasts: deterministic, ensemble-averaged, and probabilistic forecasts. Eight photochemical models were run for 56 days during summer 2004 over northeastern USA and southern Canada as part of the International Consortium for Atmospheric Research on Transport and Transformation New England Air Quality (AQ) Study. The raw and KF-corrected predictions are compared with ozone measurements from the Aerometric Information Retrieval Now data set, which includes roughly 360 surface stations. The completeness of the data set allowed a thorough sensitivity test of key KF parameters. It is found that the KF improves forecasts of ozone-concentration magnitude and the ability to predict rare events, both for deterministic and ensemble-averaged forecasts. It also improves the ability to predict the daily maximum ozone concentration, and reduces the time lag between the forecast and observed maxima. For this case study, KF considerably improves the predictive skill of probabilistic forecasts of ozone concentration greater than thresholds of 10 to 50 ppbv, but it degrades it for thresholds of 70 to 90 ppbv. Moreover, KF considerably reduces probabilistic forecast bias. The significance of KF postprocessing and ensemble-averaging is that they are both effective for real-time AQ forecasting. KF reduces systematic errors, whereas ensemble-averaging reduces random errors. When combined they produce the best overall forecast.
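A minimal sketch of the recursive bias estimator described above: a scalar Kalman filter whose state is the systematic forecast error, updated from each forecast/observation pair. The noise variances q and r are illustrative tuning choices, not the values used in the study, and the ozone numbers are invented.

```python
class BiasKalman:
    """Scalar Kalman filter tracking the systematic error (bias) of a
    forecast, in the spirit of the postprocessing described above.
    q, r are illustrative process/observation noise variances."""

    def __init__(self, q=0.01, r=1.0):
        self.bias, self.p = 0.0, 1.0  # bias estimate and its variance
        self.q, self.r = q, r

    def update(self, forecast, observation):
        err = forecast - observation        # today's forecast error
        self.p += self.q                    # predict: variance grows
        k = self.p / (self.p + self.r)      # Kalman gain
        self.bias += k * (err - self.bias)  # correct bias estimate
        self.p *= (1.0 - k)                 # shrink variance
        return self.bias

    def correct(self, forecast):
        return forecast - self.bias         # bias-corrected forecast

kf = BiasKalman()
# Invented (forecast, observed) ozone pairs in ppbv; forecasts run high.
for f, o in [(60.0, 50.0), (58.0, 49.0), (62.0, 51.0)]:
    kf.update(f, o)
print(kf.correct(61.0) < 61.0)  # bias is positive, so correction lowers it
```

The design choice mirrors the text: KF removes the slowly varying systematic error, while averaging over an ensemble of models would target the random component.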
Kernel Regression in the Presence of Correlated Errors. ... in nonparametric regression is difficult in the presence of correlated errors. There exist a wide variety ... vector machines for regression. Keywords: nonparametric regression, correlated errors, bandwidth choice
Absolute Measurement Of Laminar Shear Rate Using Photon Correlation Spectroscopy
Elliot Jenner; Brian D'Urso
2015-05-11T23:59:59.000Z
An absolute measurement of the components of the shear rate tensor $\mathcal{S}$ in a fluid can be found by measuring the photon correlation function of light scattered from particles in the fluid. Previous methods of measuring $\mathcal{S}$ involve reading the velocity at various points and extrapolating the shear, which can be time consuming and is limited in its ability to examine small spatial scale or short time events. Previous work in Photon Correlation Spectroscopy has involved only approximate solutions, requiring free parameters to be scaled by a known case, or different cases, such as 2-D flows, but here we present a treatment that provides quantitative results directly and without calibration for full 3-D flow. We demonstrate this treatment experimentally with a cone and plate rheometer.
Absolute Values of Neutrino Masses implied by the Seesaw Mechanism
Tsujimoto, H
2005-01-01T23:59:59.000Z
It is found that the seesaw mechanism not only explains the smallness of neutrino masses but also accounts for the large mixing angles, once the unification of the neutrino Dirac mass matrix with that of the up-quark sector is realized. We show that, provided the Majorana masses have a hierarchical structure as is seen in the up-quark sector, we can deduce the absolute values of the neutrino masses from the data set of neutrino experiments. The results for the light neutrino masses are $m_1:m_2:m_3\approx 1:3:17$ $(m_1\simeq m_2:m_3\approx 1.2:1)$ in the case of the normal mass spectrum (inverted mass spectrum), and the heaviest Majorana mass turns out to be $m_3^R=1\times 10^{15}$ GeV, which corresponds to the GUT scale.
Method and apparatus for making absolute range measurements
Earl, Dennis D. (Knoxville, TN); Allison, Stephen W. (Knoxville, TN); Cates, Michael R. (Oak Ridge, TN); Sanders, Alvin J. (Knoxville, TN)
2002-09-24T23:59:59.000Z
This invention relates to a method and apparatus for making absolute distance or ranging measurements using Fresnel diffraction. The invention employs a source of electromagnetic radiation having a known wavelength or wavelength distribution, which sends a beam of electromagnetic radiation through a screen at least partially opaque at the wavelength. The screen has an aperture sized so as to produce a Fresnel diffraction pattern. A portion of the beam travels through the aperture to a detector spaced some distance from the screen. The detector detects the central intensity of the beam as well as a set of intensities displaced from a center of the aperture. The distance from the source to the target can then be calculated based upon the known wavelength, aperture radius, and beam intensity.
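For intuition, the on-axis (central) intensity of a Fresnel diffraction pattern behind a circular aperture has a closed form that can be inverted for distance. The sketch below uses the textbook plane-wave result I(z) = 4 I0 sin^2(pi a^2 / (2 lambda z)), which is an illustrative stand-in rather than the patent's exact algorithm; the wavelength and aperture radius are assumed values.

```python
import numpy as np

lam = 632.8e-9   # assumed wavelength in meters (He-Ne laser, illustrative)
a = 0.5e-3       # assumed aperture radius in meters

def central_intensity(z, I0=1.0):
    # Textbook on-axis Fresnel intensity behind a circular aperture
    # under plane-wave illumination: I = 4*I0*sin^2(pi*a^2 / (2*lam*z)).
    return 4.0 * I0 * np.sin(np.pi * a**2 / (2.0 * lam * z)) ** 2

def range_from_intensity(I, I0=1.0):
    # Invert the relation on the outermost branch (z > a^2/lam), where the
    # sin^2 argument is below pi/2 and the inverse sine is single-valued.
    phi = np.arcsin(np.sqrt(I / (4.0 * I0)))
    return np.pi * a**2 / (2.0 * lam * phi)

z_true = 1.0                          # meters (test distance)
I_meas = central_intensity(z_true)    # "measured" central intensity
z_est = range_from_intensity(I_meas)  # recovered absolute range
```

In a real instrument the off-axis intensities would disambiguate which Fresnel-zone branch applies; the sketch restricts itself to the single-valued far branch.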
Absolute properties of the eclipsing binary star IM Persei
Lacy, Claud H. Sandberg [Physics Department, University of Arkansas, Fayetteville, AR 72701 (United States); Torres, Guillermo [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Fekel, Francis C.; Muterspaugh, Matthew W. [Center of Excellence in Information Systems, Tennessee State University, Nashville, TN 37209 (United States); Southworth, John, E-mail: clacy@uark.edu, E-mail: gtorres@cfa.harvard.edu, E-mail: fekel@evans.tsuniv.edu, E-mail: matthew1@coe.tsuniv.edu, E-mail: astro.js@keele.ac.uk [Astrophysics Group, Keele University, Staffordshire, ST5 5BG (United Kingdom)
2015-01-01T23:59:59.000Z
IM Per is a detached A7 eccentric eclipsing binary star. We have obtained extensive measurements of the light curve (28,225 differential magnitude observations) and radial velocity curve (81 spectroscopic observations) which allow us to fit orbits and determine the absolute properties of the components very accurately: masses of 1.7831 ± 0.0094 and 1.7741 ± 0.0097 solar masses, and radii of 2.409 ± 0.018 and 2.366 ± 0.017 solar radii. The orbital period is 2.25422694(15) days and the eccentricity is 0.0473(26). A faint third component was detected in the analysis of the light curves, and also directly observed in the spectra. The observed rate of apsidal motion is consistent with theory (U = 151.4 ± 8.4 year). We determine a distance to the system of 566 ± 46 pc.
Upgrade of absolute extreme ultraviolet diagnostic on J-TEXT
Zhang, X. L.; Cheng, Z. F., E-mail: chengfe@hust.edu.cn; Hou, S. Y.; Zhuang, G.; Luo, J. [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China)
2014-11-15T23:59:59.000Z
The absolute extreme ultraviolet (AXUV) diagnostic system is used for radiation observation on the J-TEXT tokamak [J. Zhang, G. Zhuang, Z. J. Wang, Y. H. Ding, X. Q. Zhang, and Y. J. Tang, Rev. Sci. Instrum. 81, 073509 (2010)]. The upgrade of the AXUV system is aimed at improving the spatial resolution and providing a three-dimensional image on J-TEXT. The new system consists of 12 AXUV arrays (4 AXUV16ELG arrays, 8 AXUV20ELG arrays). The spatial resolution in the cross-section is 21 mm for the AXUV16ELG arrays and 17 mm for the AXUV20ELG arrays. The pre-amplifier is also upgraded for a higher signal-to-noise ratio. With the upgraded AXUV imaging system, a more accurate observation of the radiation is obtained.
THE ABSOLUTE CALIBRATION OF THE EUV IMAGING SPECTROMETER ON HINODE
Warren, Harry P. [Space Science Division, Naval Research Laboratory, Washington, DC 20375 (United States); Ugarte-Urra, Ignacio [College of Science, George Mason University, 4400 University Drive, Fairfax, VA 22030 (United States); Landi, Enrico [Department of Atmospheric, Oceanic and Space Sciences, University of Michigan, Ann Arbor, MI 48109 (United States)
2014-07-01T23:59:59.000Z
We investigate the absolute calibration of the EUV Imaging Spectrometer (EIS) on Hinode by comparing EIS full-disk mosaics with irradiance observations from the EUV Variability Experiment on the Solar Dynamics Observatory. We also use extended observations of the quiet corona above the limb combined with a simple differential emission measure model to establish new effective area curves that incorporate information from the most recent atomic physics calculations. We find that changes to the EIS instrument sensitivity are a complex function of both time and wavelength. We find that the sensitivity is decaying exponentially with time and that the decay constants vary with wavelength. The EIS short wavelength channel shows significantly longer decay times than the long wavelength channel.
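The wavelength-dependent exponential sensitivity decay described above can be modelled as s(t) = s0 exp(-t/tau) with a per-wavelength decay constant tau. A minimal sketch on synthetic data (the wavelengths and tau values are invented, not EIS results), recovering tau by log-linear least squares:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 8.0, 25)            # years since launch (synthetic epochs)
tau_true = {195.1: 6.0, 284.2: 3.5}      # invented decay constants per wavelength (angstroms)

def fit_decay_constant(t, sens):
    # For sens = s0 * exp(-t/tau), ln(sens) is linear in t with slope -1/tau,
    # so a degree-1 polynomial fit to the log recovers the decay constant.
    slope, _ = np.polyfit(t, np.log(sens), 1)
    return -1.0 / slope

fits = {}
for wl, tau in tau_true.items():
    # Synthetic sensitivity curve with 1% multiplicative noise.
    sens = np.exp(-t / tau) * np.exp(rng.normal(0.0, 0.01, t.size))
    fits[wl] = fit_decay_constant(t, sens)
```

Fitting each wavelength channel separately is what allows the decay constants to vary with wavelength, as the abstract reports.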
Conductance and absolutely continuous spectrum of 1D samples
Laurent Bruneau; Vojkan Jakšić; Yoram Last; Claude-Alain Pillet
2015-04-27T23:59:59.000Z
We characterize the absolutely continuous spectrum of the one-dimensional Schr\\"odinger operators $h=-\\Delta+v$ acting on $\\ell^2(\\mathbb{Z}_+)$ in terms of the limiting behavior of the Landauer-B\\"uttiker and Thouless conductances of the associated finite samples. The finite sample is defined by restricting $h$ to a finite interval $[1,L]\\cap\\mathbb{Z}_+$ and the conductance refers to the charge current across the sample in the open quantum system obtained by attaching independent electronic reservoirs to the sample ends. Our main result is that the conductances associated to an energy interval $I$ are non-vanishing in the limit $L\\to\\infty$ iff ${\\rm sp}_{\\rm ac}(h)\\cap I\\neq\\emptyset$. We also discuss the relationship between this result and the Schr\\"odinger Conjecture.
The Average Mass Profile of Galaxy Clusters
R. G. Carlberg; H. K. C. Yee; E. Ellingson; S. L. Morris; R. Abraham; P. Gravel; C. J. Pritchet; T. Smecker-Hane; F. D. A. Hartwick; J. E. Hesser; J. B. Hutchings; J. B. Oke
1997-05-23T23:59:59.000Z
The average mass density profile measured in the CNOC cluster survey is well described by the analytic form rho(r)=A/[r(r+a_rho)^2], as advocated on the basis of n-body simulations by Navarro, Frenk & White. The predicted core radii are a_rho=0.20 (in units of the radius where the mean interior density is 200 times the critical density) for an Omega=0.2 open CDM model, or a_rho=0.26 for a flat Omega=0.2 model, with little dependence on other cosmological parameters for simulations normalized to the observed cluster abundance. The dynamically derived local mass-to-light ratio, which has little radial variation, converts the observed light profile to a mass profile. We find that the scale radius of the mass distribution, 0.20 <= a_rho <= 0.30 (depending on modeling details, with a 95% confidence range of 0.12-0.50), is completely consistent with the predicted values. Moreover, the profiles and total masses of the clusters as individuals can be acceptably predicted from the cluster RMS line-of-sight velocity dispersion alone. This is strong support for the hierarchical clustering theory for the formation of galaxy clusters in a cool, collisionless, dark-matter-dominated universe.
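The quoted density law can be written down directly. A small sketch using the profile rho(r) = A/[r (r + a_rho)^2] from the abstract, with r in units of the radius where the mean interior density is 200 times critical, and an illustrative a_rho = 0.25 inside the quoted range (A is an arbitrary normalization here):

```python
import numpy as np

def rho_nfw(r, a_rho=0.25, A=1.0):
    # NFW-form density profile from the abstract: rho(r) = A / [r * (r + a_rho)^2],
    # with r in units of r_200 and a_rho the core (scale) radius.
    r = np.asarray(r, dtype=float)
    return A / (r * (r + a_rho) ** 2)

def log_slope(r, eps=1e-6):
    # Logarithmic slope d(ln rho)/d(ln r) via a scale-relative central difference.
    h = eps * r
    return (np.log(rho_nfw(r + h)) - np.log(rho_nfw(r - h))) / (
        np.log(r + h) - np.log(r - h))
```

The slope rolls from -1 well inside the core radius to -3 far outside it, which is the characteristic signature of this profile family.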
APPENDIX A: MONTHLY AVERAGED DATA In many instances monthly averaged data are
Oregon, University of
In many instances monthly averaged data are used for solar energy and climatic applications and for preliminary estimates of solar system performance. This section provides a summary of monthly averaged data for all sites in watt hours/meter2 per hour or day. For each site and each solar measurement the data
Van Peursem, David J.
1991-01-01T23:59:59.000Z
The errors considered in this work are i) random errors, ii) fixed absolute systematic errors, and iii) fixed fractional systematic errors. As a result of this work, a model consistency test (MCT) was developed which allows the experimentalist to test...
McIntyre, Justin I.; Cooper, Matthew W.; Ely, James H.; Haas, Derek A.; Schrom, Brian T.; Warren, Glen A.
2013-05-01T23:59:59.000Z
This paper, from the proceedings of the MARC conference, discusses research conducted into an alternative method of detector calibration and absolute activity measurement.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21T23:59:59.000Z
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
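The idea in the abstract, evaluating the error on a subset of rays while minimizing along conjugate-gradient directions, can be sketched on a toy tomography-like least-squares problem. This is a hedged illustration of the general technique, not the patented method; the matrix sizes, subset size, and Fletcher-Reeves update are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_vox = 200, 20
A = rng.normal(size=(n_rays, n_vox))   # toy "ray" projection matrix
x_true = rng.normal(size=n_vox)
b = A @ x_true                         # consistent synthetic measurements

def cg_subset(A, b, n_sub=50, iters=40, seed=1):
    # Conjugate-gradient-style descent on 0.5*||Ax - b||^2 where each line
    # minimization uses the approximate error built from a random ray subset.
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    g_prev, d = None, None
    for _ in range(iters):
        idx = rng.choice(A.shape[0], size=n_sub, replace=False)
        As, bs = A[idx], b[idx]
        g = As.T @ (As @ x - bs)             # subset (approximate) gradient
        if g @ g < 1e-30:
            break                            # already at the subset minimum
        if d is None:
            d = -g
        else:
            beta = (g @ g) / (g_prev @ g_prev)   # Fletcher-Reeves coefficient
            d = -g + beta * d
        denom = d @ (As.T @ (As @ d))        # curvature of subset error along d
        alpha = -(g @ d) / denom             # exact minimizer of subset error on the line
        x = x + alpha * d
        g_prev = g
    return x

x_hat = cg_subset(A, b)
```

Because each subset problem shares the same minimizer as the full consistent system, the exact line search on the cheap approximate error still drives the full residual down.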
Absolute Values of Neutrino Masses implied by the Seesaw Mechanism
H. Tsujimoto
2005-12-12T23:59:59.000Z
It is found that the seesaw mechanism not only explains the smallness of neutrino masses but also accounts for the large mixing angles simultaneously, even if the unification of the neutrino Dirac mass matrix with that of the up-type quark sector is realized. We show that, provided the Majorana masses have a hierarchical structure as is seen in the up-type quark sector and all mass matrices are real, we can extract information about the absolute values of the neutrino masses from the data set of neutrino experiments. Especially for $\\theta_{13}=0$, we find that the neutrino masses are determined as $m_1:m_2:m_3\\approx 1:3:17$ or $1:50:250$ ($m_1\\simeq m_2:m_3\\approx 3:1$ or $12:1$) in the case of a normal mass spectrum (inverted mass spectrum), and the greatest Majorana mass turns out to be $m_3^R=1\\times 10^{15}$ GeV, which corresponds to the GUT scale. Including the decoupling effects caused by three singlet neutrinos, we also perform a renormalization group analysis to fix the neutrino Yukawa coupling matrix at low energy.
Method and apparatus for making absolute range measurements
Allison, Stephen W. (Knoxville, TN); Cates, Michael R. (Oak Ridge, TN); Key, William S. (Knoxville, TN); Sanders, Alvin J. (Knoxville, TN); Earl, Dennis D. (Knoxville, TN)
1999-01-01T23:59:59.000Z
This invention relates to a method and apparatus for making absolute distance or ranging measurements using Fresnel diffraction. The invention employs a source of electromagnetic radiation having a known wavelength or wavelength distribution, which sends a beam of electromagnetic radiation through an object which causes it to be split (hereinafter referred to as a "beamsplitter"), and then to a target. The beam is reflected from the target onto a screen containing an aperture spaced a known distance from the beamsplitter. The aperture is sized so as to produce a Fresnel diffraction pattern. A portion of the beam travels through the aperture to a detector, spaced a known distance from the screen. The detector detects the central intensity of the beam. The distance from the object which causes the beam to be split to the target can then be calculated based upon the known wavelength, aperture radius, beam intensity, and distance from the detector to the screen. Several apparatus embodiments are disclosed for practicing the method embodiments of the present invention.
Method and apparatus for making absolute range measurements
Allison, S.W.; Cates, M.R.; Key, W.S.; Sanders, A.J.; Earl, D.D.
1999-06-22T23:59:59.000Z
This invention relates to a method and apparatus for making absolute distance or ranging measurements using Fresnel diffraction. The invention employs a source of electromagnetic radiation having a known wavelength or wavelength distribution, which sends a beam of electromagnetic radiation through an object which causes it to be split (hereinafter referred to as a "beam splitter"), and then to a target. The beam is reflected from the target onto a screen containing an aperture spaced a known distance from the beam splitter. The aperture is sized so as to produce a Fresnel diffraction pattern. A portion of the beam travels through the aperture to a detector, spaced a known distance from the screen. The detector detects the central intensity of the beam. The distance from the object which causes the beam to be split to the target can then be calculated based upon the known wavelength, aperture radius, beam intensity, and distance from the detector to the screen. Several apparatus embodiments are disclosed for practicing the method embodiments of the present invention. 9 figs.
Precision absolute-value amplifier for a precision voltmeter
Hearn, W.E.; Rondeau, D.J.
1982-10-19T23:59:59.000Z
Bipolar inputs are afforded by the plus inputs of first and second differential input amplifiers. A first gain determining resistor is connected between the minus inputs of the differential amplifiers. First and second diodes are connected between the respective minus inputs and the respective outputs of the differential amplifiers. First and second FETs have their gates connected to the outputs of the amplifiers, while their respective source and drain circuits are connected between the respective minus inputs and an output lead extending to a load resistor. The output current through the load resistor is proportional to the absolute value of the input voltage difference between the bipolar input terminals. A third differential amplifier has its plus input terminal connected to the load resistor. A second gain determining resistor is connected between the minus input of the third differential amplifier and a voltage source. A third FET has its gate connected to the output of the third amplifier. The source and drain circuit of the third transistor is connected between the minus input of the third amplifier and a voltage-frequency converter, constituting an output device. A polarity detector is also provided, comprising a pair of transistors having their inputs connected to the outputs of the first and second differential amplifiers. The outputs of the polarity detector are connected to gates which switch the output of the voltage-frequency converter between up and down counting outputs.
Precision absolute value amplifier for a precision voltmeter
Hearn, William E. (Berkeley, CA); Rondeau, Donald J. (El Sobrante, CA)
1985-01-01T23:59:59.000Z
Bipolar inputs are afforded by the plus inputs of first and second differential input amplifiers. A first gain determining resistor is connected between the minus inputs of the differential amplifiers. First and second diodes are connected between the respective minus inputs and the respective outputs of the differential amplifiers. First and second FETs have their gates connected to the outputs of the amplifiers, while their respective source and drain circuits are connected between the respective minus inputs and an output lead extending to a load resistor. The output current through the load resistor is proportional to the absolute value of the input voltage difference between the bipolar input terminals. A third differential amplifier has its plus input terminal connected to the load resistor. A second gain determining resistor is connected between the minus input of the third differential amplifier and a voltage source. A third FET has its gate connected to the output of the third amplifier. The source and drain circuit of the third transistor is connected between the minus input of the third amplifier and a voltage-frequency converter, constituting an output device. A polarity detector is also provided, comprising a pair of transistors having their inputs connected to the outputs of the first and second differential amplifiers. The outputs of the polarity detector are connected to gates which switch the output of the voltage-frequency converter between up and down counting outputs.
Absolute hydrogen determination in coal-derived heavy distillate samples
Kottenstette, R.J.; Schneider, D.A.; Loy, D.A.
1994-06-01T23:59:59.000Z
Organic elemental hydrogen analysis is routinely performed with an automated analyzer having a high temperature combustion zone that is connected to a detector which measures the response of the product water. With the advent of instrumental electronics, automated microanalysis gradually replaced gravimetric techniques, mainly because of increased analysis speed. Modern automated organic elemental analysis consists of combusting the sample in the presence of a solid oxidant and sweeping the products into a thermal conductivity or infrared detector [4,5]. An alternative technique for the detection of hydrogen is to react the product water with carbonyldiimidazole to generate a quantitative amount of carbon dioxide, which is measured by a coulometric titration [6]. The development of Proton Nuclear Magnetic Resonance Spectroscopy has led to the description and qualitative classification of hydrogen in organic compounds. These techniques have been especially helpful in classifying hydrogen into aliphatic, aromatic and hydroaromatic groupings [1,2,3]. In addition, low resolution {sup 1}H-NMR has been successfully used to determine absolute amounts of hydrogen in a variety of petroleum fractions [7,8]. Our technique involves simple integration of high resolution {sup 1}H-NMR spectra with careful attention given to sample preparation and spectral integration.
Revised absolute amplitude calibration of the LOPES experiment
Link, K; Apel, W D; Arteaga-Velázquez, J C; Bähren, L; Bekk, K; Bertaina, M; Biermann, P L; Blümer, J; Bozdog, H; Brancus, I M; Cantoni, E; Chiavassa, A; Daumiller, K; de Souza, V; Di Pierro, F; Doll, P; Engel, R; Falcke, H; Fuchs, B; Gemmeke, H; Grupen, C; Haungs, A; Heck, D; Hiller, R; Hörandel, J R; Horneffer, A; Huber, D; Isar, P G; Kampert, K-H; Kang, D; Krömer, O; Kuijpers, J; Łuczak, P; Ludwig, M; Mathes, H J; Melissas, M; Morello, C; Oehlschläger, J; Palmieri, N; Pierog, T; Rautenberg, J; Rebel, H; Roth, M; Rühle, C; Saftoiu, A; Schieler, H; Schmidt, A; Schoo, S; Schröder, F G; Sima, O; Toma, G; Trinchero, G C; Weindl, A; Wochele, J; Zabierowski, J; Zensus, J A
2015-01-01T23:59:59.000Z
One of the main aims of the LOPES experiment was the evaluation of the absolute amplitude of the radio signal of air showers. This is of special interest since the radio technique offers the possibility of an independent and highly precise determination of the energy scale of cosmic rays on the basis of signal predictions from Monte Carlo simulations. For the calibration of the amplitude measured by LOPES we used an external source. Previous comparisons of LOPES measurements and simulations of the radio signal amplitude predicted by CoREAS revealed a discrepancy of the order of a factor of two. A re-measurement of the reference calibration source, now for the free field, was recently performed by the manufacturer. The updated calibration values lead to a lowering of the reconstructed electric field measured by LOPES by a factor of $2.6 \\pm 0.2$ and therefore to a significantly better agreement with CoREAS simulations. We discuss the updated calibration and its impact on the LOPES analysis results.
Revised absolute amplitude calibration of the LOPES experiment
K. Link; T. Huege; W. D. Apel; J. C. Arteaga-Velázquez; L. Bähren; K. Bekk; M. Bertaina; P. L. Biermann; J. Blümer; H. Bozdog; I. M. Brancus; E. Cantoni; A. Chiavassa; K. Daumiller; V. de Souza; F. Di Pierro; P. Doll; R. Engel; H. Falcke; B. Fuchs; H. Gemmeke; C. Grupen; A. Haungs; D. Heck; R. Hiller; J. R. Hörandel; A. Horneffer; D. Huber; P. G. Isar; K-H. Kampert; D. Kang; O. Krömer; J. Kuijpers; P. Łuczak; M. Ludwig; H. J. Mathes; M. Melissas; C. Morello; J. Oehlschläger; N. Palmieri; T. Pierog; J. Rautenberg; H. Rebel; M. Roth; C. Rühle; A. Saftoiu; H. Schieler; A. Schmidt; S. Schoo; F. G. Schröder; O. Sima; G. Toma; G. C. Trinchero; A. Weindl; J. Wochele; J. Zabierowski; J. A. Zensus
2015-08-14T23:59:59.000Z
One of the main aims of the LOPES experiment was the evaluation of the absolute amplitude of the radio signal of air showers. This is of special interest since the radio technique offers the possibility of an independent and highly precise determination of the energy scale of cosmic rays on the basis of signal predictions from Monte Carlo simulations. For the calibration of the amplitude measured by LOPES we used an external source. Previous comparisons of LOPES measurements and simulations of the radio signal amplitude predicted by CoREAS revealed a discrepancy of the order of a factor of two. A re-measurement of the reference calibration source, now for the free field, was recently performed by the manufacturer. The updated calibration values lead to a lowering of the reconstructed electric field measured by LOPES by a factor of $2.6 \\pm 0.2$ and therefore to a significantly better agreement with CoREAS simulations. We discuss the updated calibration and its impact on the LOPES analysis results.
POWER SPECTRAL PARAMETERIZATIONS OF ERROR AS A FUNCTION OF RESOLUTION IN GRIDDED ALTIMETRY MAPS
Kaplan, Alexey
Errors can be expressed in terms of the averages over model grid box areas. In reality, however, observations are sampled differently by the model grid and by the observational system. This difference turns out to be a major
Keeling, V; Jin, H; Ali, I; Ahmad, S [Oklahoma Univ. Health Science Ctr., Oklahoma City, OK (United States)
2014-06-01T23:59:59.000Z
Purpose: To determine dosimetric impact of positioning errors in the stereotactic hypo-fractionated treatment of intracranial lesions using 3Dtransaltional and 3D-rotational corrections (6D) frameless BrainLAB ExacTrac X-Ray system. Methods: 20 cranial lesions, treated in 3 or 5 fractions, were selected. An infrared (IR) optical positioning system was employed for initial patient setup followed by stereoscopic kV X-ray radiographs for position verification. 6D-translational and rotational shifts were determined to correct patient position. If these shifts were above tolerance (0.7 mm translational and 1° rotational), corrections were applied and another set of X-rays was taken to verify patient position. Dosimetric impact (D95, Dmin, Dmax, and Dmean of planning target volume (PTV) compared to original plans) of positioning errors for initial IR setup (XC: Xray Correction) and post-correction (XV: X-ray Verification) was determined in a treatment planning system using a method proposed by Yue et al. (Med. Phys. 33, 21-31 (2006)) with 3D-translational errors only and 6D-translational and rotational errors. Results: Absolute mean translational errors (±standard deviation) for total 92 fractions (XC/XV) were 0.79±0.88/0.19±0.15 mm (lateral), 1.66±1.71/0.18 ±0.16 mm (longitudinal), 1.95±1.18/0.15±0.14 mm (vertical) and rotational errors were 0.61±0.47/0.17±0.15° (pitch), 0.55±0.49/0.16±0.24° (roll), and 0.68±0.73/0.16±0.15° (yaw). The average changes (loss of coverage) in D95, Dmin, Dmax, and Dmean were 4.5±7.3/0.1±0.2%, 17.8±22.5/1.1±2.5%, 0.4±1.4/0.1±0.3%, and 0.9±1.7/0.0±0.1% using 6Dshifts and 3.1±5.5/0.0±0.1%, 14.2±20.3/0.8±1.7%, 0.0±1.2/0.1±0.3%, and 0.7±1.4/0.0±0.1% using 3D-translational shifts only. The setup corrections (XC-XV) improved the PTV coverage by 4.4±7.3% (D95) and 16.7±23.5% (Dmin) using 6D adjustment. Strong correlations were observed between translation errors and deviations in dose coverage for XC. 
Conclusion: The initial BrainLAB IR setup, based on the rigidity of the mask-frame system, is not sufficient for accurate stereotactic positioning; however, with X-ray image guidance, sub-millimeter accuracy is achieved with negligible deviations in dose coverage. The angular corrections (mean angle summation = 1.84°) are important, as uncorrected rotations cause considerable deviations in dose coverage.
A simulation method for calculating the absolute entropy and free energy of fluids: Application to
Meirovitch, Hagai
A simulation method for calculating the absolute entropy and free energy of fluids. The method is a general approach for calculating the absolute entropy and free energy by analyzing Boltzmann samples. It is applied to the TIP3P model of water, and very good results for the free energy are obtained, as compared with results
Paris-Sud XI, Université de
Bayesian modelling of an absolute chronology for Egypt's 18th Dynasty by astrophysical... In Egyptology, the establishment of an absolute chronology for Ancient Egypt has been a long-standing ambition. Sources containing lists of the kings who reigned in Egypt include the Palermo Stone, the Abydos reliefs and the Turin Canon
Free volume hypothetical scanning molecular dynamics method for the absolute free energy of liquids
Meirovitch, Hagai
Free volume hypothetical scanning molecular dynamics method for the absolute free energy of liquids. A method for calculating the absolute entropy, S, and free energy, F, by analyzing Boltzmann samples obtained by Monte Carlo or molecular dynamics simulation. Free energy evaluation is a central issue in atomistic modeling. When the free energy is known, equilibrium
Fact #849: December 1, 2014 Midsize Hybrid Cars Averaged 51%...
Broader source: Energy.gov (indexed) [DOE]
For the 2014 model year, midsize hybrid cars averaged 43.4 miles per gallon (mpg) while midsize non-hybrid cars averaged 28.7 mpg; the difference between the two has narrowed due...
Fact #870: April 27, 2015 Corporate Average Fuel Economy Progress...
Fact #870: April 27, 2015 Corporate Average Fuel Economy Progress, 1978-2014 - Dataset. Excel file...
Fact #889: September 7, 2015 Average Diesel Price Lower than...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Fact #889: September 7, 2015 Average Diesel Price Lower than Gasoline for the First Time in Six Years.
Averaging top quark results in Run 2 M. Strovink
Strovink, Mark
The pie chart shows the relative weights of the five input measurements in the world average.
Strong thermal leptogenesis and the absolute neutrino mass scale
Bari, Pasquale Di; King, Sophie E.; Fiorentin, Michele Re, E-mail: pdb1d08@soton.ac.uk, E-mail: sk1806@soton.ac.uk, E-mail: m.re-fiorentin@soton.ac.uk [School of Physics and Astronomy, University of Southampton, Southampton, SO17 1BJ (United Kingdom)
2014-03-01T23:59:59.000Z
We show that successful strong thermal leptogenesis, where the final asymmetry is independent of the initial conditions and in particular a large pre-existing asymmetry is efficiently washed out, favours values of the lightest neutrino mass m{sub 1} ≳ 10 meV for normal ordering (NO) and m{sub 1} ≳ 3 meV for inverted ordering (IO) for models with orthogonal matrix entries respecting |Ω{sub ij}{sup 2}| ≲ 2. We show analytically why lower values of m{sub 1} require a higher level of fine tuning in the seesaw formula and/or in the flavoured decay parameters (in the electronic for NO, in the muonic for IO). We also show how this constraint exists thanks to the measured values of the neutrino mixing angles and could be tightened by a future determination of the Dirac phase. Our analysis also allows us to place a more stringent constraint for a specific model or class of models, such as SO(10)-inspired models, and shows that some models cannot realise strong thermal leptogenesis for any value of m{sub 1}. A scatter plot analysis fully supports the analytical results. We also briefly discuss the interplay with absolute neutrino mass scale experiments, concluding that they will be able in the coming years to either corner strong thermal leptogenesis or find positive signals pointing to a non-vanishing m{sub 1}. Since the constraint is much stronger for NO than for IO, it is very important that new data from planned neutrino oscillation experiments will be able to solve the ambiguity.
Improving climate change detection through optimal seasonal averaging: the
Wirosoetisno, Djoko
Improving climate change detection through optimal seasonal averaging: the case of the North Atlantic jet (2015).
Error handling strategies in multiphase inverse modeling
Finsterle, S.; Zhang, Y.
2010-12-01T23:59:59.000Z
Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
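One standard way to mitigate the non-normal error structures described here is to replace plain least squares with a robust estimator. The sketch below is generic iteratively reweighted least squares with Huber weights on synthetic data, not iTOUGH2 code; the model, noise levels, and outlier pattern are invented. It shows gross outliers biasing an ordinary least-squares fit far more than the robust one:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 60)
y = 2.0 * t + 1.0 + rng.normal(0.0, 0.2, t.size)   # true model: slope 2, intercept 1
y[::10] += 8.0                                     # systematic gross outliers (non-normal errors)
X = np.column_stack([t, np.ones_like(t)])

def irls_huber(X, y, delta=1.0, iters=25):
    # Iteratively reweighted least squares with Huber weights: residuals larger
    # than delta get weight delta/|r|, limiting the influence of outliers.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # ordinary LS starting point
    for _ in range(iters):
        r = y - X @ beta
        w = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]    # standard maximum-likelihood LS
beta_rob = irls_huber(X, y)                        # robust alternative
```

The design choice mirrors the point in the text: when residuals deviate from the assumed Gaussian structure, downweighting large residuals recovers less biased parameter estimates than the standard least-squares objective.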
Estimating IMU heading error from SAR images.
Doerry, Armin Walter
2009-03-01T23:59:59.000Z
Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.
On a fatal error in tachyonic physics
Edward Kapuścik
2013-08-10T23:59:59.000Z
A fatal error in the famous paper on tachyons by Gerald Feinberg is pointed out. The correct expressions for energy and momentum of tachyons are derived.
Original Article Error Bounds and Metric Subregularity
2014-06-18T23:59:59.000Z
theory of error bounds of extended real-valued functions. Another objective is to ... Another observation is that neighbourhood V in the original definition of metric.
Wind Power Forecasting Error Distributions over Multiple Timescales (Presentation)
Hodge, B. M.; Milligan, M.
2011-07-01T23:59:59.000Z
This presentation provides a statistical analysis of wind power forecast errors and error distributions, with examples using ERCOT data.
Clark, E.L.
1993-08-01T23:59:59.000Z
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, calibration Mach number and Reynolds number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M{infinity}, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for nine fundamental aerodynamic ratios, most of which relate free-stream test conditions (pressure, temperature, density or velocity) to a reference condition. Tables of the ratios, R, absolute sensitivity coefficients, {partial_derivative}R/{partial_derivative}M{infinity}, and relative sensitivity coefficients, (M{infinity}/R) ({partial_derivative}R/{partial_derivative}M{infinity}), are provided as functions of M{infinity}.
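As an illustration of the sensitivity-coefficient idea, take the isentropic free-stream pressure ratio p/p0 = (1 + 0.2 M^2)^(-3.5), the standard gamma = 1.4 relation; the report's actual ratio definitions may differ. The relative sensitivity coefficient (M/R)(dR/dM) then has the closed form -1.4 M^2 / (1 + 0.2 M^2), which a central difference reproduces:

```python
import numpy as np

GAMMA = 1.4

def pressure_ratio(M):
    # Isentropic static-to-total pressure ratio for gamma = 1.4 (illustrative ratio R).
    return (1.0 + 0.5 * (GAMMA - 1.0) * M**2) ** (-GAMMA / (GAMMA - 1.0))

def relative_sensitivity(R, M, h=1e-6):
    dRdM = (R(M + h) - R(M - h)) / (2.0 * h)   # absolute sensitivity dR/dM
    return (M / R(M)) * dRdM                   # relative sensitivity (M/R) dR/dM

M = 2.0
S_numeric = relative_sensitivity(pressure_ratio, M)
S_exact = -1.4 * M**2 / (1.0 + 0.2 * M**2)     # analytic form for this particular ratio
```

A relative sensitivity near -3 at M = 2 means a 1% error in Mach number propagates to roughly a 3% error in the pressure ratio, which is exactly the role the tabulated coefficients play in the report.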
The absolute magnitude distribution of Kuiper Belt objects
Fraser, Wesley C. [Herzberg Institute of Astrophysics, 5071 West Saanich Road, Victoria, BC V9E 2E7 (Canada); Brown, Michael E. [Division of Geological and Planetary Sciences, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Morbidelli, Alessandro [Laboratoire Lagrange, UMR7293, Université de Nice Sophia-Antipolis, CNRS, Observatoire de la Côte d'Azur, BP 4229, F-06304 Nice (France); Parker, Alex [Department of Astronomy, University of California at Berkeley, Berkeley, CA 94720 (United States); Batygin, Konstantin, E-mail: wesley.fraser@nrc.ca [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS 51, Cambridge, MA 02138 (United States)
2014-02-20T23:59:59.000Z
Here we measure the absolute magnitude distributions (H-distribution) of the dynamically excited and quiescent (hot and cold) Kuiper Belt objects (KBOs), and test if they share the same H-distribution as the Jupiter Trojans. From a compilation of all useable ecliptic surveys, we find that the KBO H-distributions are well described by broken power laws. The cold population has a bright-end slope α₁ = 1.5 (+0.4, −0.2) and break magnitude H_B = 6.9 (+0.1, −0.2) (r'-band). The hot population has a shallower bright-end slope of α₁ = 0.87 (+0.07, −0.2) and break magnitude H_B = 7.7 (+1.0, −0.5). Both populations share similar faint-end slopes of α₂ ≈ 0.2. We estimate the masses of the hot and cold populations to be ≈0.01 and ≈3 × 10⁻⁴ M⊕. The broken power-law fit to the Trojan H-distribution has α₁ = 1.0 ± 0.2, α₂ = 0.36 ± 0.01, and H_B = 8.3. The Kolmogorov-Smirnov test reveals that the probability that the Trojans and cold KBOs share the same parent H-distribution is less than 1 in 1000. When the bimodal albedo distribution of the hot objects is accounted for, there is no evidence that the H-distributions of the Trojans and hot KBOs differ. Our findings are in agreement with the predictions of the Nice model in terms of both mass and H-distribution of the hot and Trojan populations. Wide-field survey data suggest that the brightest few hot objects, with H_r' ≲ 3, do not fall on the steep power-law slope of fainter hot objects. Under the standard hierarchical model of planetesimal formation, it is difficult to account for the similar break diameters of the hot and cold populations given the low mass of the cold belt.
Error Mining on Dependency Trees Claire Gardent
Paris-Sud XI, Université de
Error Mining on Dependency Trees. Claire Gardent, CNRS, LORIA, UMR 7503, Vandœuvre-lès-Nancy, F-54600, France. shashi.narayan@loria.fr. Abstract: In recent years, error mining approaches were ... propose an algorithm for mining trees and apply it to detect the most likely sources of generation ...
SEU induced errors observed in microprocessor systems
Asenek, V.; Underwood, C.; Oldfield, M. [Univ. of Surrey, Guildford (United Kingdom). Surrey Space Centre]; Velazco, R.; Rezgui, S.; Cheynet, P. [TIMA Lab., Grenoble (France)]; Ecoffet, R. [Centre National d`Etudes Spatiales, Toulouse (France)]
1998-12-01T23:59:59.000Z
In this paper, the authors present software tools for predicting the rate and nature of observable SEU induced errors in microprocessor systems. These tools are built around a commercial microprocessor simulator and are used to analyze real satellite application systems. Results obtained from simulating the nature of SEU induced errors are shown to correlate with ground-based radiation test data.
Remarks on statistical errors in equivalent widths
Klaus Vollmann; Thomas Eversberg
2006-07-03T23:59:59.000Z
Equivalent width measurements for rapid line variability in atomic spectral lines are degraded by increasing error bars with shorter exposure times. We derive an expression for the error of the line equivalent width $\sigma(W_\lambda)$ with respect to pure photon noise statistics and provide a correction value for previous calculations.
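For readers unfamiliar with the quantity in question: the equivalent width is conventionally defined as $W_\lambda = \int (1 - F_\lambda/F_c)\,d\lambda$. The abstract's photon-noise error expression is not reproduced here; the sketch below only evaluates the standard definition on discrete pixels (trapezoidal rule), with names chosen for this example.

```python
def equivalent_width(wavelengths, flux, continuum):
    """W_lambda = integral of (1 - F/F_c) d(lambda), trapezoidal rule over discrete pixels."""
    w = 0.0
    for i in range(len(wavelengths) - 1):
        depth0 = 1.0 - flux[i] / continuum[i]          # fractional absorption depth at pixel i
        depth1 = 1.0 - flux[i + 1] / continuum[i + 1]  # ... and at pixel i+1
        w += 0.5 * (depth0 + depth1) * (wavelengths[i + 1] - wavelengths[i])
    return w
```

A half-depth absorption feature spanning a few pixels on a flat continuum gives the expected positive W_λ; an emission line (flux above continuum) would give a negative value under this sign convention.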
Inference for Model Error Allan Seheult
Oakley, Jeremy
Keywords: Reservoirs, Model Error, Reification, Thermohaline Circulation. Mathematical models of complex ... that the uncertainties associated with both calibrating a mathematical model to observations on a physical system ... specification exercise of model error with the cosmologists, linked to an extensive analysis of model ...
Nonparametric Regression with Correlated Errors Jean Opsomer
Wang, Yuedong
Nonparametric Regression with Correlated Errors. Jean Opsomer, Iowa State University; Yuedong Wang ... Nonparametric regression techniques are often sensitive to the presence of correlation in the errors ... splines and wavelet regression under correlation, both for short-range and long-range dependence.
Stabilizer Formalism for Operator Quantum Error Correction
Poulin, D
2005-01-01T23:59:59.000Z
Operator quantum error correction is a recently developed theory that provides a generalized framework for active error correction and passive error avoiding schemes. In this paper, we describe these codes in the language of the stabilizer formalism of standard quantum error correction theory. This is achieved by adding a "gauge" group to the standard stabilizer definition of a code. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 3 of its 8 stabilizer generators, leading to a simpler decoding procedure without affecting its essential properties. This opens the path to possible improvement of the error threshold of fault tolerant quantum computing. We also derive a modified Hamming bound that applies to all stabilizer codes, including degenerate ones.
Stabilizer Formalism for Operator Quantum Error Correction
David Poulin
2006-06-14T23:59:59.000Z
Operator quantum error correction is a recently developed theory that provides a generalized framework for active error correction and passive error avoiding schemes. In this paper, we describe these codes in the stabilizer formalism of standard quantum error correction theory. This is achieved by adding a "gauge" group to the standard stabilizer definition of a code that defines an equivalence class between encoded states. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 4 of its 8 stabilizer generators, leading to a simpler decoding procedure and a wider class of logical operations without affecting its essential properties. This opens the path to possible improvements of the error threshold of fault-tolerant quantum computing.
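The two records above describe stabilizer codes, where errors are diagnosed by measuring a set of commuting check operators (the syndrome) rather than the data itself. As a deliberately simplified classical analogue (not the 9-qubit Shor code, and not a quantum simulation), the 3-bit repetition code illustrates the syndrome-lookup idea: the two parity checks play the role of the Z₁Z₂ and Z₂Z₃ stabilizer generators.

```python
def syndrome(bits):
    """Parities of neighbouring bits: the classical analogue of measuring Z1Z2 and Z2Z3."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Flip the single bit implicated by the syndrome; syndrome (0, 0) means no error detected."""
    lookup = {(1, 0): 0, (1, 1): 1, (0, 1): 2}  # syndrome -> position of the flipped bit
    out = list(bits)
    s = syndrome(bits)
    if s in lookup:
        out[lookup[s]] ^= 1
    return tuple(out)
```

Any single bit flip on a codeword (000 or 111) produces a distinct nonzero syndrome and is corrected; the genuinely quantum features discussed in the abstracts (gauge qubits, degeneracy) have no counterpart in this toy.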
Paris-Sud XI, Université de
Absolute frequency measurement of an SF6 two-photon line using a femtosecond optical comb and sum laser. The absolute frequency of a CO2 laser stabilized onto an SF6 two-photon line has been measured
Orbit-averaged guiding-center Fokker-Planck operator
Brizard, A. J. [Department of Chemistry and Physics, Saint Michael's College, Colchester, Vermont 05439 (United States); Decker, J.; Peysson, Y.; Duthoit, F.-X. [CEA, IRFM, Saint-Paul-lez-Durance F-13108 (France)
2009-10-15T23:59:59.000Z
A general orbit-averaged guiding-center Fokker-Planck operator suitable for the numerical analysis of transport processes in axisymmetric magnetized plasmas is presented. The orbit-averaged guiding-center operator describes transport processes in a three-dimensional guiding-center invariant space: the orbit-averaged magnetic-flux invariant ψ, the minimum-B pitch-angle coordinate ξ₀, and the momentum magnitude p.
Paris-Sud XI, Université de
Primary crossflow vortices, secondary absolute instabilities and their control in the rotating ... patterns of crossflow vortices are derived by employing asymptotic techniques. This approach accounts for ... three-dimensional velocity profiles, are subject to inviscid crossflow instabilities and rapidly ...
An inequality for the trace of matrix products, using absolute values
Bernhard Baumgartner
2011-09-01T23:59:59.000Z
The absolute value of matrices is used in order to give inequalities for the trace of products. An application gives a very short proof of the tracial matrix Hölder inequality.
Absolute Measure of Local Chirality and the Chiral Polarization Scale of the QCD Vacuum
Andrei Alexandru; Terrence Draper; Ivan Horváth; Thomas Streuer
2010-10-26T23:59:59.000Z
The use of the absolute measure of local chirality is championed since it has a uniform distribution for randomly reshuffled chiral components so that any deviations from uniformity in the associated "X-distribution" are directly attributable to QCD-induced dynamics. We observe a transition in the qualitative behavior of this absolute X-distribution of low-lying eigenmodes which, we propose, defines a chiral polarization scale of the QCD vacuum.
Irving A. Kelter
2003-01-01T23:59:59.000Z
. Westport, CT: Greenwood Press, 2002. xxxiv + 450 pp. $99.95. Review by IRVING A. KELTER, UNIVERSITY OF ST. THOMAS, HOUSTON. Absolutism and the Scientific Revolution 1600-1720: A Biographi- cal Dictionary, edited by Christopher Baker, is one... and wide- ranging assessment of the interplay between the state, society, and medical institutions throughout Britain during the past four cen- turies. Christopher Baker, ed. Absolutism and the Scientific Revolution 1600- 1720: A Biographical Dictionary...
Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling
Louisiana State University; Balman, Mehmet; Kosar, Tevfik
2010-10-27T23:59:59.000Z
Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures that are technically untraceable by users. Error messages are not logged efficiently, and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. Well-defined error detection and error reporting methods are necessary to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.
Average balance equations, scale dependence, and energy cascade for granular materials
Riccardo Artoni; Patrick Richard
2015-03-09T23:59:59.000Z
A new averaging method linking discrete to continuum variables of granular materials is developed and used to derive average balance equations. Its novelty lies in the choice of the decomposition between mean values and fluctuations of properties, which takes into account the effect of gradients. Thanks to a local homogeneity hypothesis, whose validity is discussed, simplified balance equations are obtained. This original approach solves the problem, present in previous approaches, of the dependence of some variables on the size of the averaging domain, which can lead to huge relative errors (several hundred percent). It also clearly separates affine and nonaffine fields in the balance equations. The resulting energy cascade picture is discussed, with a particular focus on unidirectional steady and fully developed flows, for which it appears that the contact terms are dissipated locally, unlike the kinetic terms, which contribute to a nonlocal balance. Application of the method is demonstrated in the determination of macroscopic properties such as volume fraction, velocity, stress, and energy of a simple shear flow, where the discrete results are generated by means of discrete particle simulation.
Distributed Average Consensus in Sensor Networks with Random Link Failures
Moura, José
Distributed Average Consensus in Sensor Networks with Random Link Failures. Soummya Kar ... soummyak@andrew.cmu.edu. Abstract: We study the impact of the topology of a sensor network on distributed average ... in terms of a moment of the distribution of the norm of a function of the network graph Laplacian matrix L.
THE AVERAGED CONTROL SYSTEM OF FAST OSCILLATING CONTROL SYSTEMS
Paris-Sud XI, Université de
... control systems, small control, optimal control, Finsler geometry. AMS subject classifications: 34C29, 34H ... used for design. The use of averaging in optimal control of oscillating systems [10, 13, 14, 7] ... Alex Bombrun and Jean ...
Quantum error-correcting codes and devices
Gottesman, Daniel (Los Alamos, NM)
2000-10-03T23:59:59.000Z
A method of forming quantum error-correcting codes by first forming a stabilizer for a Hilbert space. A quantum information processing device can be formed to implement such quantum codes.
Organizational Errors: Directions for Future Research
Carroll, John Stephen
The goal of this chapter is to promote research about organizational errors—i.e., the actions of multiple organizational participants that deviate from organizationally specified rules and can potentially result in adverse ...
Errors and paradoxes in quantum mechanics
D. Rohrlich
2007-08-28T23:59:59.000Z
Errors and paradoxes in quantum mechanics, entry in the Compendium of Quantum Physics: Concepts, Experiments, History and Philosophy, ed. F. Weinert, K. Hentschel, D. Greenberger and B. Falkenburg (Springer), to appear
Simulating Bosonic Baths with Error Bars
Mischa P. Woods; M. Cramer; M. B. Plenio
2015-04-07T23:59:59.000Z
We derive rigorous truncation-error bounds for the spin-boson model and its generalizations to arbitrary quantum systems interacting with bosonic baths. For the numerical simulation of such baths, the truncation of both the number of modes and the local Hilbert-space dimensions is necessary. We derive super-exponential Lieb-Robinson-type bounds on the error when restricting the bath to finitely many modes and show how the error introduced by truncating the local Hilbert spaces may be efficiently monitored numerically. In this way we give error bounds for approximating the infinite system by a finite-dimensional one. As a consequence, numerical simulations such as the time-evolving density with orthogonal polynomials algorithm (TEDOPA) now allow for the fully certified treatment of the system-environment interaction.
Agility metric sensitivity using linear error theory
Smith, David Matthew
2000-01-01T23:59:59.000Z
Aircraft agility metrics have been proposed for use to measure the performance and capability of aircraft onboard while in-flight. The sensitivity of these metrics to various types of errors and uncertainties is not ...
Parameters and error of a theoretical model
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01T23:59:59.000Z
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs.
Evaluating operating system vulnerability to memory errors.
Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Mueller, Frank (North Carolina State University); Fiala, David (North Carolina State University); Brightwell, Ronald Brian
2012-05-01T23:59:59.000Z
Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.
Quantifying truncation errors in effective field theory
R. J. Furnstahl; N. Klco; D. R. Phillips; S. Wesolowski
2015-06-03T23:59:59.000Z
Bayesian procedures designed to quantify truncation errors in perturbative calculations of quantum chromodynamics observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions ("priors") for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We first demonstrate the calculation of Bayesian probability distributions for the EFT truncation error in some representative examples, and then focus on the application of chiral EFT to neutron-proton scattering. Epelbaum, Krebs, and Meißner recently articulated explicit rules for estimating truncation errors in such EFT calculations of few-nucleon-system properties. We find that their basic procedure emerges generically from one class of naturalness priors considered, and that all such priors result in consistent quantitative predictions for 68% DOB intervals. We then explore several methods by which the convergence properties of the EFT for a set of observables may be used to check the statistical consistency of the EFT expansion parameter.
Hess-Flores, M
2011-11-10T23:59:59.000Z
Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. 
Another application is in reconstruction pre-processing, where an algorithm detects and discards frames that would lead to inaccurate feature matching, camera pose estimation degeneracies or mathematical instability in structure computation based on a residual error comparison between two different match motion models. The presented algorithms were designed for aerial video but have been proven to work across different scene types and camera motions, and for both real and synthetic scenes.
Shared dosimetry error in epidemiological dose-response analyses
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo
2015-03-23T23:59:59.000Z
Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it was true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e. the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
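The plug-in strategy analyzed in the abstract above (average the dose realizations, then treat the mean as the true dose) can be sketched in a few lines. This is an illustration only; the function names and numeric values are invented for the example, and the paper's score and information-matrix adjustments are not reproduced.

```python
def mean_dose(realizations):
    """Average over the Monte Carlo dose realizations supplied for one cohort member."""
    return sum(realizations) / len(realizations)

def err_rate(background, beta, dose):
    """Linear excess relative risk model: rate = background * (1 + beta * dose)."""
    return background * (1.0 + beta * dose)
```

With 100 realizations per person, as in the Hanford example, one would call `mean_dose` per individual and feed the result to the ERR likelihood; the paper's point is that this gives unbiased slope estimates but biased standard errors unless the information matrix is corrected.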
Hamlen, Kevin W.
Investigating the SANS/CWE Top 25 Programming Errors List
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Waugh, C. J. [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States).; Rosenberg, M. J. [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States).; Zylstra, A. B. [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States).; Frenje, J. A. [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States).; Seguin, F. H. [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States).; Petrasso, R. D. [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States).; Glebov, V. Yu. [Lab. for Laser Energetics, Rochester, NY (United States); Sangster, T. C. [Lab. for Laser Energetics, Rochester, NY (United States); Stoeckl, C. [Lab. for Laser Energetics, Rochester, NY (United States)
2015-05-01T23:59:59.000Z
Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation, as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and the Laser Megajoule.
Medium term municipal solid waste generation prediction by autoregressive integrated moving average
Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan [Department of Civil and Structural Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia)
2014-09-12T23:59:59.000Z
Generally, solid waste handling and management are performed by a municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems, insufficient data, and a lack of strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern lifestyle. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA) method; such a model is applicable even where data are scarce and will help the municipality properly establish the annual service plan. The results show that the ARIMA(6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
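To make the ARIMA idea concrete: the "I(1)" part differences the series to remove trend, and the "AR" part regresses each differenced value on its predecessors. The toy below fits an ARIMA(1,1,0), far simpler than the paper's ARIMA(6,1,0), with no intercept; it is a hand-rolled sketch, not a substitute for a proper library fit (e.g. statsmodels).

```python
def difference(series):
    """First-order differencing: the 'I(1)' step of ARIMA(p, 1, q)."""
    return [series[i + 1] - series[i] for i in range(len(series) - 1)]

def fit_ar1(diffs):
    """Least-squares AR(1) coefficient on the differenced series (no intercept term)."""
    num = sum(diffs[i] * diffs[i + 1] for i in range(len(diffs) - 1))
    den = sum(d * d for d in diffs[:-1])
    return num / den

def forecast_next(series):
    """One-step-ahead forecast: AR prediction on differences, then undo the differencing."""
    d = difference(series)
    phi = fit_ar1(d)
    return series[-1] + phi * d[-1]
```

On a series with a steady trend, the differenced values are constant, the AR(1) coefficient is 1, and the forecast simply extends the trend by one step.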
Error Detection and Recovery for Robot Motion Planning with Uncertainty
Donald, Bruce Randall
1987-07-01T23:59:59.000Z
Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and uncertainty in the geometry of the environment. The last, which is called model error, has ...
A systems approach to reducing utility billing errors
Ogura, Nori
2013-01-01T23:59:59.000Z
Many methods for analyzing the possibility of errors are practiced by organizations who are concerned about safety and error prevention. However, in situations where the error occurrence is random and difficult to track, ...
INDIVIDUAL REFORM ELEMENTS: correlations of .63 with average course exam score, .11 with in-class clicker score, .02 with lecture
Colorado at Boulder, University of
Correlations with effort/curricular elements are positive but not high, indicating no individual course reform ...
Does anyone have access to 2012 average residential rates by...
Does anyone have access to 2012 average residential rates by utility company? I'm seeing an inconsistency between the OpenEI website and EIA 861 data set. Home > Groups > Utility...
STATE OF CALIFORNIA AREA WEIGHTED AVERAGE CALCULATION WORKSHEET: RESIDENTIAL
... of a building feature, material, or construction assembly occur in a building, a weighted average is used when there is more than one level of floor, wall, or ceiling insulation in a building, or more than one type ...
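The area-weighted average named by the worksheet is a straightforward calculation: each segment's value (e.g. a U-factor for one insulation level) is weighted by the area over which it applies. The function name and sample numbers below are illustrative, not taken from the worksheet.

```python
def area_weighted_average(values, areas):
    """Average a building-feature property (e.g. U-factor) over segments of differing area."""
    total_area = sum(areas)
    return sum(v * a for v, a in zip(values, areas)) / total_area
```

For example, a wall with 3 units of area at value 2.0 and 1 unit at value 4.0 averages to (2·3 + 4·1)/4 = 2.5.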
Probabilistic Wind Speed Forecasting Using Ensembles and Bayesian Model Averaging
Raftery, Adrian
Probabilistic Wind Speed Forecasting Using Ensembles and Bayesian Model Averaging. J. Mc... A postprocessing method that creates calibrated predictive probability density functions (PDFs) ... extend BMA to wind speed, taking account of these challenges. This method provides calibrated and sharp ...
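The core of BMA postprocessing is that the predictive PDF is a weighted mixture of per-ensemble-member forecast densities. The sketch below uses Gaussian components for simplicity; the wind-speed work referenced above adapts the component distributions to the non-negative, skewed nature of wind speed, so treat this as a generic illustration with invented names.

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian density, used here as a stand-in component distribution."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bma_pdf(x, weights, means, sigmas):
    """BMA predictive density: weighted mixture of the ensemble members' forecast PDFs."""
    return sum(w * normal_pdf(x, m, s) for w, m, s in zip(weights, means, sigmas))
```

The weights (one per ensemble member, summing to 1) are fitted on training data in real BMA; here they would simply be supplied.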
Fact #835: August 25, 2014 Average Annual Gasoline Pump Price...
Broader source: Energy.gov (indexed) [DOE]
Excel file with dataset for Fact 835: Average Annual Gasoline Pump Price, 1929-2013 fotw835web.xlsx More Documents & Publications Offshore Wind Market and Economic Analysis...
On the Choice of Average Solar Zenith Angle
Cronin, Timothy W.
Idealized climate modeling studies often choose to neglect spatiotemporal variations in solar radiation, but doing so comes with an important decision about how to average solar radiation in space and time. Since both ...
Flavor Physics Data from the Heavy Flavor Averaging Group (HFAG)
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
The Heavy Flavor Averaging Group (HFAG) was established at the May 2002 Flavor Physics and CP Violation Conference in Philadelphia, and continues the LEP Heavy Flavor Steering Group's tradition of providing regular updates to the world averages of heavy flavor quantities. Data are provided by six subgroups that each focus on a different set of heavy flavor measurements: B lifetimes and oscillation parameters, Semi-leptonic B decays, Rare B decays, Unitarity triangle parameters, B decays to charm final states, and Charm Physics.
Global Error bounds for systems of convex polynomials over ...
2011-11-11T23:59:59.000Z
This paper is devoted to studying Lipschitzian/Hölderian type global error ... set is not necessarily compact, we obtain the Hölder global error bound result.
Running jobs error: "inet_arp_address_lookup"
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Resolved: Running jobs error: "inet_arp_address_lookup". September 22, 2013, by Helen He. Symptom: After the Hopper August 14...
Neutron multiplication error in TRU waste measurements
Veilleux, John [Los Alamos National Laboratory; Stanfield, Sean B [CCP; Wachter, Joe [CCP; Ceo, Bob [CCP
2009-01-01T23:59:59.000Z
Total Measurement Uncertainty (TMU) in neutron assays of transuranic (TRU) waste comprises several components, including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons-grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. The factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, and the concentration and geometry of the fissile sources; however, measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations, because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers.
This report will attempt to better define the error term due to neutron multiplication and arrive at values that are more realistic and accurate. To do so, measurements of standards and waste drums were performed with High Efficiency Neutron Counters (HENC) located at Los Alamos National Laboratory (LANL). The data were analyzed for multiplication effects and new estimates of the multiplication error were computed. A concluding section will present alternatives for reducing the number of rejections of TRU waste containers due to neutron multiplication error.
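The acceptance criterion quoted in this abstract (measured FGE plus twice the TMU must stay under the container limit) can be expressed directly; the limits are taken from the text, while the function name and sample numbers are ours:

```python
def fge_acceptable(fge, tmu, container="drum"):
    """Waste-acceptance test from the abstract: measured Fissile Gram
    Equivalent plus twice the total measurement uncertainty must be
    less than the limit (200 for 55-gal drums, 325 for boxed waste)."""
    limits = {"drum": 200.0, "box": 325.0}
    return fge + 2.0 * tmu < limits[container]

fge_acceptable(150.0, 20.0)   # 150 + 40 = 190 < 200, accepted
fge_acceptable(150.0, 30.0)   # 150 + 60 = 210, rejected
```

This makes the abstract's point concrete: halving the multiplication-driven TMU can move a borderline drum from rejected to accepted without any change in the measured FGE.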
STANDARDIZING TYPE Ia SUPERNOVA ABSOLUTE MAGNITUDES USING GAUSSIAN PROCESS DATA REGRESSION
Kim, A. G.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Nordin, J. [Physics Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States)]; Thomas, R. C. [Computational Cosmology Center, Computational Research Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road MS 50B-4206, Berkeley, CA 94720 (United States)]; Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J. [Laboratoire de Physique Nucleaire et des Hautes Energies, Universite Pierre et Marie Curie Paris 6, Universite Denis Diderot Paris 7, CNRS-IN2P3, 4 place Jussieu, F-75252 Paris Cedex 05 (France)]; Baltay, C. [Department of Physics, Yale University, New Haven, CT 06250-8121 (United States)]; Buton, C.; Kerschhaggl, M.; Kowalski, M. [Physikalisches Institut, Universitaet Bonn, Nussallee 12, D-53115 Bonn (Germany)]; Chotard, N. [Tsinghua Center for Astrophysics, Tsinghua University, Beijing 100084 (China)]; Copin, Y.; Gangler, E. [Universite de Lyon, F-69622 Lyon (France)]; and others
2013-04-01T23:59:59.000Z
We present a novel class of models for Type Ia supernova time-evolving spectral energy distributions (SEDs) and absolute magnitudes: they are each modeled as stochastic functions described by Gaussian processes. The values of the SED and absolute magnitudes are defined through well-defined regression prescriptions, so that data directly inform the models. As a proof of concept, we implement a model for synthetic photometry built from the spectrophotometric time series from the Nearby Supernova Factory. Absolute magnitudes at peak B brightness are calibrated to 0.13 mag in the g band and to as low as 0.09 mag in the z = 0.25 blueshifted i band, where the dispersion includes contributions from measurement uncertainties and peculiar velocities. The methodology can be applied to spectrophotometric time series of supernovae that span a range of redshifts to simultaneously standardize supernovae together with fitting cosmological parameters.
Optimal error estimates for corrected trapezoidal rules
Talvila, Erik
2012-01-01T23:59:59.000Z
Corrected trapezoidal rules are proved for $\int_a^b f(x)\,dx$ under the assumption that $f''\in L^p([a,b])$ for some $1\leq p\leq\infty$. Such quadrature rules involve the trapezoidal rule modified by the addition of a term $k[f'(a)-f'(b)]$. The coefficient $k$ in the quadrature formula is found that minimizes the error estimates. It is shown that when $f'$ is merely assumed to be continuous then the optimal rule is the trapezoidal rule itself. In this case error estimates are in terms of the Alexiewicz norm. This includes the case when $f''$ is integrable in the Henstock--Kurzweil sense or as a distribution. All error estimates are shown to be sharp for the given assumptions on $f''$. It is shown how to make these formulas exact for all cubic polynomials $f$. Composite formulas are computed for uniform partitions.
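A sketch of the corrected rule this abstract describes, using the classical Euler-Maclaurin coefficient k = h^2/12 on a uniform partition (the paper derives the optimal k under weaker assumptions); with this choice the composite rule is exact for cubic polynomials, matching the abstract's last claims:

```python
def corrected_trapezoid(f, df, a, b, n):
    """Composite trapezoidal rule plus the endpoint-derivative
    correction (h^2/12) * [f'(a) - f'(b)]; exact for cubics."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * total + (h * h / 12.0) * (df(a) - df(b))

# Integral of x^3 on [0, 2] is exactly 4; the corrected rule hits it
# even with a coarse partition, where the plain trapezoid rule gives 4.25.
val = corrected_trapezoid(lambda x: x**3, lambda x: 3 * x**2, 0.0, 2.0, 4)
```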
Mather, Mara
Running head: STEREOTYPE THREAT REDUCES MEMORY ERRORS. Stereotype threat can reduce older adults' memory errors. Abstract: Stereotype threat often incurs the cost of reducing the amount of information...
Exploring the Saturation Levels of Stimulated Raman Scattering in the Absolute Regime
Michel, D. T. [LULI, UMR 7605 CNRS-Ecole Polytechnique-CEA-Universite Paris VI, 91128 Palaiseau cedex (France); CEA DAM DIF, F- 91297 Arpajon (France); Depierreux, S.; Tassin, V. [CEA DAM DIF, F- 91297 Arpajon (France); Stenz, C. [CELIA, Universite Bordeaux 1, 351 cours de la Liberation, 33405 Talence cedex (France); Labaune, C. [LULI, UMR 7605 CNRS-Ecole Polytechnique-CEA-Universite Paris VI, 91128 Palaiseau cedex (France)
2010-06-25T23:59:59.000Z
This Letter reports new experimental results that evidence the transition between the absolute and convective growth of stimulated Raman scattering (SRS). Significant reflectivities were observed only when the instability grows in the absolute regime. In this case, saturation processes efficiently limit the SRS reflectivity, which is shown to scale linearly with the laser intensity, the electron density, and the temperature. Such a scaling agrees with the one established by T. Kolber et al. [Phys. Fluids B 5, 138 (1993)] and B. Bezzerides et al. [Phys. Rev. Lett. 70, 2569 (1993)] from numerical simulations where the Raman saturation is due to the coupling of electron plasma waves with ion wave dynamics.
Absolute continuity and singularity of Palm measures of the Ginibre point process
Hirofumi Osada; Tomoyuki Shirai
2015-04-05T23:59:59.000Z
We prove a dichotomy between absolute continuity and singularity of the Ginibre point process $\mathsf{G}$ and its reduced Palm measures $\{\mathsf{G}_{\mathbf{x}}, \mathbf{x} \in \mathbb{C}^{\ell}, \ell = 0,1,2,\dots\}$: namely, reduced Palm measures $\mathsf{G}_{\mathbf{x}}$ and $\mathsf{G}_{\mathbf{y}}$ for $\mathbf{x} \in \mathbb{C}^{\ell}$ and $\mathbf{y} \in \mathbb{C}^{n}$ are mutually absolutely continuous if and only if $\ell = n$, and mutually singular if and only if $\ell \neq n$.
On the Error in QR Integration
Dieci, Luca; Van Vleck, Erik
2008-03-07T23:59:59.000Z
$\dots [R(t_2, t_1) + E_2][R(t_1, t_0) + E_1]R(t_0)$, $k = 1, 2, \dots$, where $Q(t_k)$ is the exact Q-factor at $t_k$ and the triangular transitions $R(t_j, t_{j-1})$ are also the exact ones. Moreover, the factors $E_j$, $j = 1, \dots, k$, are bounded in norm by the local error committed during integration of the relevant differential equations; see Theorems 3.1 and 3.16. We will henceforth simply write (2.7) $\|E_j\| \leq \epsilon$, $j = 1, 2, \dots$, and stress that $\epsilon$ is computable, in fact controllable, in terms of local error tolerances...
Recent experiences with error estimation and adaptivity
Haque, Khalid Ansar
1991-01-01T23:59:59.000Z
RECENT EXPERIENCES WITH ERROR ESTIMATION AND ADAPTIVITY. A Thesis by KHALID ANSAR HAQUE. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, December 1991. Major Subject: Aerospace Engineering. Approved as to style and content by: T. Strouboulis (Chair of Committee)...
Laser Phase Errors in Seeded FELs
Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC
2012-03-28T23:59:59.000Z
Harmonic seeding of free electron lasers has attracted significant attention from the promise of transform-limited pulses in the soft X-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.
A VaR Black-Litterman Model for the Construction of Absolute ...
2009-06-02T23:59:59.000Z
The algorithmic technique is very efficient, outperforming, in terms of both speed and ... It can be seen that the error term vector does not directly enter the Black-Litterman model.
Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL
2015-01-01T23:59:59.000Z
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
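As a concrete illustration of the Hamming error-correction component this abstract applies to the encrypted portion, here is a textbook Hamming(7,4) encoder/decoder that corrects any single flipped bit. This is the generic construction, not the paper's implementation:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword
    (positions 1..7, parity bits at positions 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based error position, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

The 3-bit syndrome directly encodes the position of the flipped bit, which is why a single error anywhere in the 7-bit word is recoverable at the cost of only 3 parity bits per 4 data bits.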
High average power scaleable thin-disk laser
Beach, Raymond J. (Livermore, CA); Honea, Eric C. (Sunol, CA); Bibeau, Camille (Dublin, CA); Payne, Stephen A. (Castro Valley, CA); Powell, Howard (Livermore, CA); Krupke, William F. (Pleasanton, CA); Sutton, Steven B. (Manteca, CA)
2002-01-01T23:59:59.000Z
Using a thin disk laser gain element with an undoped cap layer enables the scaling of lasers to extremely high average output power values. Ordinarily, the power scaling of such thin disk lasers is limited by the deleterious effects of amplified spontaneous emission. By using an undoped cap layer diffusion bonded to the thin disk, the onset of amplified spontaneous emission does not occur as readily as if no cap layer is used, and much larger transverse thin disks can be effectively used as laser gain elements. This invention can be used as a high average power laser for material processing applications as well as for weapon and air defense applications.
Analysis of Solar Two Heliostat Tracking Error Sources
Jones, S.A.; Stone, K.W.
1999-01-28T23:59:59.000Z
This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.
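For context on the geometry behind these tracking errors: an ideal, error-free heliostat orients its mirror normal along the bisector of the unit sun direction and the unit mirror-to-receiver direction, so that reflected rays hit the receiver. A minimal sketch of that aim law (our own illustration, not the Solar Two control code):

```python
import math

def _normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def heliostat_normal(sun_dir, target_dir):
    """Ideal mirror normal: the bisector of the unit vector toward
    the sun and the unit vector from mirror to receiver."""
    s = _normalize(sun_dir)
    t = _normalize(target_dir)
    return _normalize([s[i] + t[i] for i in range(3)])
```

Geometric error sources of the kind the paper analyzes (pedestal tilt, encoder bias, non-orthogonal axes) perturb this computed normal, which is what degrades tracking accuracy.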
High Performance Dense Linear System Solver with Soft Error Resilience
Dongarra, Jack
High Performance Dense Linear System Solver with Soft Error Resilience. Peng Du, Piotr Luszczek. In some scientific applications, checkpoint/restart (C/R) is not applicable to soft errors at all. This work presents a high performance dense linear system solver with soft error resilience, by adopting a mathematical...
Distribution of Wind Power Forecasting Errors from Operational Systems (Presentation)
Hodge, B. M.; Ela, E.; Milligan, M.
2011-10-01T23:59:59.000Z
This presentation offers new data and statistical analysis of wind power forecasting errors in operational systems.
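Typical summary statistics for such forecast-error distributions (bias, MAE, RMSE) can be computed as below. This is a generic sketch of the standard metrics, not the presentation's analysis code, and the sample numbers are hypothetical:

```python
def error_stats(forecast, actual):
    """Bias (mean error), MAE, and RMSE of a forecast series
    against observed values."""
    errs = [f - a for f, a in zip(forecast, actual)]
    n = len(errs)
    bias = sum(errs) / n
    mae = sum(abs(e) for e in errs) / n
    rmse = (sum(e * e for e in errs) / n) ** 0.5
    return bias, mae, rmse

# Hypothetical wind power forecasts vs. observations (MW)
bias, mae, rmse = error_stats([1.0, 2.0, 3.0], [1.0, 1.0, 5.0])
```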
Lateral boundary errors in regional numerical weather
?umer, Slobodan
Lateral boundary errors in regional numerical weather prediction models. Author: Ana Car. Regional models are used by weather services for short-range forecasts; they cover smaller areas with higher resolution. Introduction: equations for numerical weather prediction (NWP) are mathematical representations of physical...
MEASUREMENT AND CORRECTION OF ULTRASONIC ANEMOMETER ERRORS
Heinemann, Detlev
Wind speeds measured with ultrasonic anemometers commonly show systematic errors depending on wind speed, due to inaccurate ultrasonic transducer mounting and distortion of the measured flow by the probe head. Corrections are applied to three-dimensional wind speed time series; results for the variance and power spectra are shown.
Aizenman, Michael [Departments of Physics and Mathematics, Princeton University, Princeton, New Jersey 08544 (United States); Warzel, Simone [Zentrum Mathematik, TU Munich, Boltzmannstr. 3, 85747 Garching (Germany)
2012-09-15T23:59:59.000Z
We discuss the dynamical implications of the recent proof that for a quantum particle in a random potential on a regular tree graph absolutely continuous (ac) spectrum occurs non-perturbatively through rare fluctuation-enabled resonances. The main result is spelled in the title.
Are ceramics and bricks reliable absolute geomagnetic intensity carriers? Juan Morales a,
Cattin, Rodolphe
Are ceramics and bricks reliable absolute geomagnetic intensity carriers? Juan Morales, Avto ... Experiments performed on the raw material (clay and paste) and on in situ prepared baked ceramics and bricks indicate a mixture of multi-domain and a significant amount of single-domain grains. Ceramic pieces...
Absolute H Emission Measurement System for the Maryland Centrifugal eXperiment
Anlage, Steven
Absolute H Emission Measurement System for the Maryland Centrifugal eXperiment. Ryan Clary, April 22. The system was developed and implemented at the Maryland Centrifugal eXperiment (MCX), a rotating-plasma mirror machine. The primary goal of this system...
Measurement of the absolute branching fraction for D(0) -> K- pi+
Ammar, Raymond G.; Ball, S.; Baringer, Philip S.; Coppage, Don; Copty, N.; Davis, Robin E. P.; Hancock, N.; Kelly, M.; Kwak, Nowhan; Lam, H.
1993-11-01T23:59:59.000Z
Using 1.79 fb-1 of data recorded by the CLEO II detector we have measured the absolute branching fraction for D0 --> K-pi+. The angular correlation between the pi+ emitted in the decay D*+ --> D0pi+, and the jet direction in e+e- --> ccBAR events...
Evans, J., E-mail: radiant@ferrodevices.com; Chapman, S., E-mail: radiant@ferrodevices.com [Radiant Technologies, Inc., 2835C Pan American Fwy NE, Albuquerque, New Mexico 87107 (United States)
2014-08-14T23:59:59.000Z
Piezoresponse Force Microscopy (PFM) is a popular tool for the study of ferroelectric and piezoelectric materials at the nanometer level. Progress in the development of piezoelectric MEMS fabrication is highlighting the need to characterize absolute displacement at the nanometer and Ångstrom scales, something Atomic Force Microscopy (AFM) might do but PFM cannot. Absolute displacement is measured by executing a polarization measurement of the ferroelectric or piezoelectric capacitor in question while monitoring the absolute vertical position of the sample surface with a stationary AFM cantilever. Two issues dominate the execution and precision of such a measurement: (1) the small amplitude of the electrical signal from the AFM at the Ångstrom level and (2) calibration of the AFM. The authors have developed a calibration routine and test technique for mitigating the two issues, making it possible to use an atomic force microscope to measure both the movement of a capacitor surface as well as the motion of a micro-machine structure actuated by that capacitor. The theory, procedures, pitfalls, and results of using an AFM for absolute piezoelectric measurement are provided.
Absolute Instability of a Liquid Jet in a Coflowing Stream Andrew S. Utada,1
Absolute Instability of a Liquid Jet in a Coflowing Stream. Andrew S. Utada, Alberto Fernandez... In this Letter we report the observation of jets in a coflowing stream that break up into drops due to an absolute instability (received 13 July 2007; published 11 January 2008). Cylindrical liquid jets are inherently unstable.
The absolute and relative de Rham-Witt complexes Lars Hesselholt
... from Z-schemes to Z_(p)-schemes. From this comparison, we derive a Gauss-Manin connection on the crystalline cohomology. There is a canonical surjective map $W_n\Omega^{\cdot}_X \to W_n\Omega^{\cdot}_{X/S}$ from the absolute de Rham-Witt complex ... of the canonical map $f^{-1}W_n\Omega^1_S \to W_n\Omega^1_X$. The graded pieces for the I-adic filtration are differential graded...
Ultra-wideband (UWB) radios have relative bandwidths larger than 20% or absolute
Giannakis, Georgios
The U.S. Federal Communications Commission (FCC) allowed the use of unlicensed UWB communications [8]. ... bandwidths of more than 500 MHz. Such wide bandwidths offer a wealth of advantages for both communications and ranging accuracy. For communications, both large relative and large absolute bandwidth alleviate small...
A rapid multiple-sample approach to the determination of absolute paleointensity
Utrecht, Universiteit
We present an alternative approach to absolute paleointensity determination, one which involves exactly five heatings and the simultaneous thermal treatment of several subspecimens sampled from different regions throughout the igneous rock unit under investigation. For inclusion of data in a given determination, self...
Mandelis, Andreas
Diagnostics Laboratories (PODL), University of Toronto, 5 King's College Road, Toronto, Ontario M5S 3G8. A boundary layer adjacent to the cavity thermal source (a metallic CrNi alloy strip) ... Varying cavity lengths allowed the measurement of the absolute infrared emissivity of the thin CrNi strip source.
End-to-end absolute energy calibration of atmospheric fluorescence telescopes by an electron linear accelerator. The calibration of fluorescence telescopes uses air showers induced by electron beams from a linear accelerator, which involves constructing a compact linear accelerator with a maximum electron energy of 40 MeV and an intensity of 6.4 m...
Paleosecular variation and the average geomagnetic field at 20 latitude
Johnson, Catherine Louise
... time-averaged field (TAF) for a two-parameter longitudinally symmetric (zonal) model. Values for our model parameters ... rocks, and oceanic sediments, but consistent with that from reversed polarity continental and igneous ... to paleosecular variation (PSV). We examine PSV at ±20° using virtual geomagnetic pole (VGP) dispersion.
Optimal Control with Weighted Average Costs and Temporal Logic Specifications
Murray, Richard M.
Optimal Control with Weighted Average Costs and Temporal Logic Specifications Eric M. Wolff Control and Dynamical Systems California Institute of Technology Pasadena, California 91125 Email: ewolff@caltech.edu Ufuk Topcu Control and Dynamical Systems California Institute of Technology Pasadena, California 91125
Navy Estimated Average Hourly Load Profile by Month (in MW)
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Navy Estimated Average Hourly Load Profile by Month (in MW): rows by MONTH, columns HE1 through HE24 (hour-ending periods)...
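A profile like this is just the per-hour average of hourly readings across the days of a month. A minimal sketch with hypothetical data (the function name and numbers are ours):

```python
def average_hourly_profile(hourly_loads):
    """Average load for each hour-ending period (HE1..HE24) across
    a list of daily 24-value records."""
    days = len(hourly_loads)
    return [sum(day[h] for day in hourly_loads) / days for h in range(24)]

# Two hypothetical days of hourly loads (MW)
day1 = [10.0] * 24
day2 = [12.0] * 24
profile = average_hourly_profile([day1, day2])   # 24 values, each 11.0
```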
Probabilistic Wind Vector Forecasting Using Ensembles and Bayesian Model Averaging
Raftery, Adrian
Probabilistic Wind Vector Forecasting Using Ensembles and Bayesian Model Averaging. J. McLean Sloughter et al. (received 2011, in final form 26 May 2012). Abstract: Probabilistic forecasts of wind vectors are becoming critical as interest grows in wind as a clean and renewable source of energy, in addition to a wide range of other...
The High Average Power Laser Program 15th HAPL meeting
The High Average Power Laser Program, 15th HAPL meeting, Aug 8 & 9, 2006, General Atomics. Agenda excerpts: Scientific Inst...; 16. Optiswitch Technology; 17. ESLI; Electricity Generator; Reaction (i.e. 5 Hz); "First Light" on Electra Pre-Amplifier (input to main amplifier); 23 J laser output.
Probabilistic Quantitative Precipitation Forecasting Using Bayesian Model Averaging
Washington at Seattle, University of
Probabilistic Quantitative Precipitation Forecasting Using Bayesian Model Averaging. J. McLean Sloughter, Adrian E. Raftery and Tilmann Gneiting, Department of Statistics, University of Washington. February 24, 2006. J. McLean Sloughter is Graduate Research Assistant; Adrian E. Raftery is Blumstein...
The Scientist : Surpassing the Law of Averages The Scientist
Heller, Eric
The Scientist: Surpassing the Law of Averages. "Single-cell genomics appears to be the most straightforward, and at the moment the only way we can assemble the genomes of the uncultured..." ... pushing technological limitations to bring their studies of genomics, genetics, RNA transcription...
Fact #693: September 19, 2011 Average Vehicle Footprint for Cars...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Supporting Information: Average Vehicle Footprint, 2008-2010
Model Year   Car    Light Truck   All Light Vehicles
2008         45.4   53.0          49.0
2009         45.2   52.7          48.2
2010         45.2   54.0          48.8...
Makarenkov, Vladimir
...mental data requires an efficient automatic routine for the selection of hits. Unfortunately, random and systematic errors can...
An Analysis of Air Passenger Average Trip Lengths and Fare Levels in US Domestic Markets
Huang, Sheng-Chen Alex
2000-01-01T23:59:59.000Z
University of California at Berkeley. An Analysis of Air Passenger Average...
Quantum Latin squares and unitary error bases
Benjamin Musto; Jamie Vicary
2015-04-10T23:59:59.000Z
In this paper we introduce quantum Latin squares, combinatorial quantum objects which generalize classical Latin squares, and investigate their applications in quantum computer science. Our main results are on applications to unitary error bases (UEBs), basic structures in quantum information which lie at the heart of procedures such as teleportation, dense coding and error correction. We present a new method for constructing a UEB from a quantum Latin square equipped with extra data. Developing construction techniques for UEBs has been a major activity in quantum computation, with three primary methods proposed: shift-and-multiply, Hadamard, and algebraic. We show that our new approach simultaneously generalizes the shift-and-multiply and Hadamard methods. Furthermore, we explicitly construct a UEB using our technique which we prove cannot be obtained from any of these existing methods.
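The shift-and-multiply construction that this abstract generalizes can be sketched directly: in dimension d, the d^2 unitaries X^a Z^b built from the cyclic shift X and the clock matrix Z (the generalized Paulis) form a unitary error basis, orthogonal under the trace inner product Tr(U†V) = d·δ. A minimal sketch with plain nested lists (our own illustration of the standard construction, not the paper's new method):

```python
import cmath

def shift_multiply_ueb(d):
    """Shift-and-multiply unitary error basis {X^a Z^b} in dimension d,
    where X|k> = |k+1 mod d> and Z|k> = w^k |k> with w = exp(2*pi*i/d).
    Entry (j, k) of X^a Z^b is w^(b*k) when j == (k + a) mod d, else 0."""
    w = cmath.exp(2j * cmath.pi / d)
    basis = []
    for a in range(d):
        for b in range(d):
            u = [[w ** (b * k) if j == (k + a) % d else 0
                  for k in range(d)]
                 for j in range(d)]
            basis.append(u)
    return basis
```

For d = 2 this reproduces the four Pauli operators I, Z, X, XZ, which is the smallest example of the structure the quantum-Latin-square construction generalizes.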
Duffy, Thomas S.
Absolute x-ray energy calibration over a wide energy range using a diffraction-based iterative method. Review of Scientific Instruments 83, 063901 (2012). Mineral Physics Institute, Stony Brook University, Stony Brook, New York 11794, USA; Department...
Meirovitch, Hagai
Absolute entropy and free energy of fluids using the hypothetical scanning method. I. Calculation of the absolute entropy and free energy from a Boltzmann sample generated by Monte Carlo or molecular dynamics ... for the free energy. We demonstrate that very good results for the entropy and the free energy can be obtained.
Gross error detection in process data
Singh, Gurmeet
1992-01-01T23:59:59.000Z
, 1991), with many optimum properties, seems to have been untapped by chemical engineers. We first review the background of the T^2 test and present relevant properties of the test. IV. A Hotelling's Generalization of Student's t Test. One of the most... Major Subject: Chemical Engineering. GROSS ERROR DETECTION IN PROCESS DATA. A Thesis by GURMEET SINGH. Approved as to style and content by: Ralph E. White (Chair of Committee), Michael Nikolaou (Member), Richard B. Griffin (Member), R. W. Flummerfelt (Head...
Average Interpolating Wavelets on Point Clouds and Graphs
Rustamov, Raif M
2011-01-01T23:59:59.000Z
We introduce a new wavelet transform suitable for analyzing functions on point clouds and graphs. Our construction is based on a generalization of the average interpolating refinement scheme of Donoho. The most important ingredient of the original scheme that needs to be altered is the choice of the interpolant. Here, we define the interpolant as the minimizer of a smoothness functional, namely a generalization of the Laplacian energy, subject to the averaging constraints. In the continuous setting, we derive a formula for the optimal solution in terms of the poly-harmonic Green's function. The form of this solution is used to motivate our construction in the setting of graphs and point clouds. We highlight the empirical convergence of our refinement scheme and the potential applications of the resulting wavelet transform through experiments on a number of data sets.
Improving Memory Error Handling Using Linux
Carlton, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Blanchard, Sean P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Debardeleben, Nathan A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-07-25T23:59:59.000Z
As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, then the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable to deal with by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and results in reducing both hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. It will not be feasible without memory error handling to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
Averaging cross section data so we can fit it
Brown, D. [Brookhaven National Lab. (BNL), Upton, NY (United States). NNDC
2014-10-23T23:59:59.000Z
The {sup 56}Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE, a Hauser-Feshbach theory based nuclear reaction code, requires cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say above 500 keV).
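The Lorentzian smoothing step mentioned here amounts to a Lorentzian-weighted moving average of the pointwise cross section. A minimal sketch with a hypothetical energy grid and width (the function name and numbers are ours, not EMPIRE's implementation):

```python
def lorentzian_smooth(energies, xs, e0, gamma):
    """Lorentzian-weighted average of cross-section values xs(E)
    around energy e0, with half-width gamma setting the smoothing scale."""
    weights = [gamma**2 / ((e - e0)**2 + gamma**2) for e in energies]
    return sum(w * s for w, s in zip(weights, xs)) / sum(weights)

# Hypothetical grid (MeV) and fluctuating cross sections (barns):
# points near e0 dominate, so a narrow resonance at e0 survives smoothing.
smoothed = lorentzian_smooth([1.0, 2.0, 3.0], [0.0, 10.0, 0.0], 2.0, 0.5)
```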
Better than Average? - Green Building Certification in International Projects
Baumann, O.
2008-01-01T23:59:59.000Z
of operational concerns that are covered by various Green Building Certification Systems, including the overall Energy Efficiency, Water Efficiency, Commissioning, Measurement & Verification, Training, and Long-Term Monitoring. It will be discussed how...
Determination of the uncertainty in assembly average burnup
Cacciapouti, R.J.; Lam, G.M.; Theriault, P.A.; Delmolino, P.M.
1998-12-31T23:59:59.000Z
Pressurized water reactors maintain records of the assembly average burnup for each fuel assembly at the plant. The reactor records are currently used by commercial reactor operators and vendors for (a) special nuclear accountability, (b) placement of spent fuel in storage pools, and (c) dry storage cask design and analysis. A burnup credit methodology has been submitted to the US Nuclear Regulatory Commission (NRC) by the US Department of Energy. In order to support this application, utilities are requested to provide burnup uncertainty as part of their reactor records. The collected burnup data are used for the development of a plant correction to the cask vendor supplied burnup credit loading curve. The objective of this work is to identify a feasible methodology for determining the 95/95 uncertainty in the assembly average burnup. Reactor records are based on the core neutronic analysis coupled with measured in-core detector data. The uncertainty of particular burnup records depends mainly on the uncertainty associated with the methods used to develop the records. The methodology adopted for this analysis utilizes current neutronic codes for the determination of the uncertainty in assembly average burnup.
High Average Power, High Energy Short Pulse Fiber Laser System
Messerly, M J
2007-11-13T23:59:59.000Z
Recently, continuous wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High-energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of efficient, compact, robust, turnkey systems. Applications such as cutting, drilling, and materials processing; front-end systems for high-energy pulsed lasers (such as petawatts); and laser-based sources of high-spatial-coherence, high-flux x-rays all require high-energy short pulses, and two of these three applications also require high average power. The challenge in creating a high-energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high-energy, high-average-power fiber laser system. This work included exploring designs of large-mode-area optical fiber amplifiers for high-energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.
Averaged equilibrium and stability in low-aspect-ratio stellarators
Garcia, L.; Carreras, B.A.; Dominguez, N.
1989-01-01T23:59:59.000Z
The MHD equilibrium and stability calculations for stellarators are complex because of the intrinsic three-dimensional (3-D) character of these configurations. The stellarator expansion simplifies the equilibrium calculation by reducing it to a two-dimensional (2-D) problem. The classical stellarator expansion includes terms up to order epsilon/sup 2/, and the vacuum magnetic field is also included up to this order. For large-aspect-ratio configurations, the results of the stellarator expansion agree well with 3-D numerical equilibrium results, but for low-aspect-ratio configurations there are significant discrepancies with 3-D equilibrium calculations. The main reason for these discrepancies is the approximation of the vacuum field contributions. This problem can be avoided by applying the average method in a vacuum flux coordinate system. In this way, the exact vacuum magnetic field contribution is included, and the results agree well with 3-D equilibrium calculations even for low-aspect-ratio configurations. Using the average method in a vacuum flux coordinate system also permits the accurate calculation of local stability properties with the Mercier criterion. The main improvement is in the accurate calculation of the geodesic curvature term. In this paper, we discuss the application of the average method in flux coordinates to the calculation of the Mercier criterion for low-aspect-ratio stellarator configurations. 12 refs., 3 figs.
Message passing in fault tolerant quantum error correction
Z. W. E. Evans; A. M. Stephens
2008-06-13T23:59:59.000Z
Inspired by Knill's scheme for message passing error detection, here we develop a scheme for message passing error correction for the nine-qubit Bacon-Shor code. We show that for two levels of concatenated error correction, where classical information obtained at the first level is used to help interpret the syndrome at the second level, our scheme will correct all cases with four physical errors. This results in a reduction of the logical failure rate relative to conventional error correction by a factor proportional to the reciprocal of the physical error rate.
The Roland De Witte 1991 Detection of Absolute Motion and Gravitational Waves
Cahill, R T
2006-01-01T23:59:59.000Z
In 1991 Roland De Witte carried out an experiment in Brussels in which variations in the one-way speed of RF waves through a coaxial cable were recorded over 178 days. The data from this experiment shows that De Witte had detected absolute motion of the earth through space, as had six earlier experiments, beginning with the Michelson-Morley experiment of 1887. His results are in excellent agreement with the extensive data from the Miller 1925/26 detection of absolute motion using a gas-mode Michelson interferometer atop Mt.Wilson, California. The De Witte data reveals turbulence in the flow which amounted to the detection of gravitational waves. Similar effects were also seen by Miller, and by Torr and Kolen in their coaxial cable experiment. Here we bring together what is known about the De Witte experiment.
Wolff, Wania, E-mail: wania@if.ufrj.br; Luna, Hugo; Sigaud, Lucas; Montenegro, Eduardo C. [Instituto de Física, Universidade Federal do Rio de Janeiro, PO 68528, 21941-972 Rio de Janeiro, RJ (Brazil)]; Tavares, Andre C. [Departamento de Física, Pontifícia Universidade Católica do Rio de Janeiro, PO 38071, Rua Marquês de São Vicente 225, 22453-900 Rio de Janeiro, RJ (Brazil)]
2014-02-14T23:59:59.000Z
Absolute total non-dissociative and partial dissociative cross sections of pyrimidine were measured for electron impact energies ranging from 70 to 400 eV and for proton impact energies from 125 up to 2500 keV. Ionization of molecular orbitals (MOs) induced by Coulomb interaction was studied by measuring both ionization and partial dissociative cross sections through time-of-flight mass spectrometry and by obtaining the branching ratios for fragment formation via a model calculation based on the Born approximation. The partial yields and the absolute cross sections measured as a function of the energy, combined with the model calculation, proved to be a useful tool to determine the vacancy population of the valence MOs from which several sets of fragment ions are produced. This was also a key point in distinguishing the dissociation regimes induced by the two particles. A comparison with previous experimental results is also presented.
Absolute calibration of a charge-coupled device camera with twin beams
Meda, A.; Ruo-Berchera, I., E-mail: i.ruoberchera@inrim.it; Degiovanni, I. P.; Brida, G.; Rastello, M. L.; Genovese, M. [Istituto Nazionale di Ricerca Metrologica, Strada delle Cacce 91, 10135 Torino (Italy)
2014-09-08T23:59:59.000Z
We report on the absolute calibration of a Charge-Coupled Device (CCD) camera by exploiting quantum correlation. This method exploits a certain number of spatial pairwise quantum correlated modes produced by spontaneous parametric-down-conversion. We develop a measurement model accounting for all the uncertainty contributions, and we reach the relative uncertainty of 0.3% in low photon flux regime. This represents a significant step forward for the characterization of (scientific) CCDs used in mesoscopic light regime.
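The principle behind twin-beam calibration can be illustrated with a toy simulation: for pairwise-correlated beams detected with equal efficiency eta, the noise reduction factor Var(N1−N2)/⟨N1+N2⟩ equals 1−eta, so measuring it yields the absolute efficiency. The parameters below are invented, and this omits the paper's full measurement model with its uncertainty contributions:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, eta, shots = 100.0, 0.6, 200_000     # mean photons, efficiency, frames

m = rng.poisson(mu, shots)               # correlated photon number per pair
n1 = rng.binomial(m, eta)                # counts detected in the signal arm
n2 = rng.binomial(m, eta)                # counts detected in the idler arm

# For equal arms, Var(n1 - n2) / <n1 + n2> = 1 - eta, so:
nrf = np.var(n1 - n2) / np.mean(n1 + n2)
eta_hat = 1.0 - nrf                      # estimated absolute efficiency
```

No independently calibrated reference is needed: the quantum correlation itself plays the role of the standard.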
Absolute calibration of photon-number-resolving detectors with an analog output using twin beams
Peřina, Jan, E-mail: jan.perina.jr@upol.cz [RCPTM, Joint Laboratory of Optics of Palacký University and Institute of Physics AS CR, 17. listopadu 12, 771 46 Olomouc (Czech Republic)]; Haderka, Ondřej [Joint Laboratory of Optics of Palacký University and Institute of Physics AS CR, 17. listopadu 12, 771 46 Olomouc (Czech Republic)]; Allevi, Alessia [Dipartimento di Scienza e Alta Tecnologia, Università degli Studi dell'Insubria, I-22100 Como (Italy)]; Bondani, Maria [Istituto di Fotonica e Nanotecnologie, CNR-IFN, I-22100 Como (Italy)]
2014-01-27T23:59:59.000Z
A method for absolute calibration of a photon-number resolving detector producing analog signals as the output is developed using a twin beam. The method gives both analog-to-digital conversion parameters and quantum detection efficiency for the photon fields. Characteristics of the used twin beam are also obtained. A simplified variant of the method applicable to fields with high signal to noise ratios and suitable for more intense twin beams is suggested.
Hornbeck, Amaury, E-mail: amauryhornbeck@gmail.com; Garcia, Tristan, E-mail: tristan.garcia@cea.fr [CEA, LIST, Laboratoire National Henri Becquerel, 91191 Gif-sur-Yvette Cedex (France)]; Cuttat, Marguerite; Jenny, Catherine [Radiotherapy Department, Medical Physics Unit, University Hospital Pitié-Salpêtrière, 75013 Paris (France)]
2014-06-15T23:59:59.000Z
Purpose: Elekta Leksell Gamma Knife{sup ®} (LGK) is a radiotherapy beam machine whose features are not compliant with the international calibration protocols for radiotherapy. In this scope, the Laboratoire National Henri Becquerel and the Pitié-Salpêtrière Hospital decided to conceive a new LGK dose calibration method and to compare it with the currently used one. Furthermore, the accuracy of the dose delivered by the LGK machine was checked using an "end-to-end" test. This study also aims to compare doses delivered by the two latest software versions of the Gammaplan treatment planning system (TPS). Methods: The dosimetric method chosen is the electron paramagnetic resonance (EPR) of alanine. Dose rate (calibration) verification was done without the TPS using a spherical phantom. Absolute calibration was done with factors calculated by Monte Carlo simulation (MCNP-X). For the "end-to-end" test, irradiations in an anthropomorphic head phantom, close to real treatment conditions, were done using the TPS in order to verify the delivered dose. Results: The comparison of the currently used calibration method with the new one revealed a deviation of +0.8% between the dose rates measured by ion chamber and by EPR/alanine. For simple field configurations (less than 16 mm diameter), the "end-to-end" tests showed average deviations of −1.7% and −0.9% between the measured dose and the dose calculated by Gammaplan v9 and v10, respectively. Conclusions: This paper shows there is good agreement between the new calibration method and the currently used one. There is also good agreement between the calculated and delivered doses, especially for Gammaplan v10.
Reda, I.; Hansen, L.; Zeng, J.
2012-08-01T23:59:59.000Z
Advancing climate change research requires accurate and traceable measurement of the atmospheric longwave irradiance. Current measurement capabilities are limited to an estimated uncertainty of larger than +/- 4 W/m2 using the interim World Infrared Standard Group (WISG). WISG is traceable to the Systeme international d'unites (SI) through blackbody calibrations. An Absolute Cavity Pyrgeometer (ACP) is being developed to measure absolute outdoor longwave irradiance with traceability to SI using the temperature scale (ITS-90) and the sky as the reference source, instead of a blackbody. The ACP was designed by NREL and optically characterized by the National Institute of Standards and Technology (NIST). Under clear-sky and stable conditions, the responsivity of the ACP is determined by lowering the temperature of the cavity and calculating the rate of change of the thermopile output voltage versus the changing net irradiance. The absolute atmospheric longwave irradiance is then calculated with an uncertainty of +/- 3.96 W/m2 with traceability to SI. The measured irradiance by the ACP was compared with the irradiance measured by two pyrgeometers calibrated by the World Radiation Center with traceability to the WISG. A total of 408 readings was collected over three different clear nights. The calculated irradiance measured by the ACP was 1.5 W/m2 lower than that measured by the two pyrgeometers that are traceable to WISG. Further development and characterization of the ACP might contribute to the effort of improving the uncertainty and traceability of WISG to SI.
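The responsivity step described above amounts to a least-squares slope of thermopile voltage against net irradiance. A schematic sketch with synthetic numbers follows; the responsivity value, noise level, cavity irradiance, and sign conventions are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cooling sweep: net irradiance seen by the thermopile (W/m^2)
# and its output voltage (uV); the true responsivity is invented.
net_irr = np.linspace(-40.0, 10.0, 26)
true_resp = 8.5                                  # uV per W/m^2 (assumed)
volts = true_resp * net_irr + rng.normal(0.0, 0.5, net_irr.size)

# Responsivity = least-squares slope of voltage vs net irradiance
resp, offset = np.polyfit(net_irr, volts, 1)

# The unknown atmospheric longwave irradiance then follows from a measured
# voltage and the cavity's own emitted irradiance (both values invented):
w_cavity = 400.0                                 # W/m^2
v_meas = -212.5                                  # uV
w_sky = w_cavity + v_meas / resp                 # ~375 W/m^2
```

Because the slope is determined against the sky itself during the cooling sweep, no blackbody reference enters the chain, which is the point of the ACP approach.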
Absolute pulse energy measurements of soft x-rays at the Linac Coherent Light Source
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Tiedtke, K.; Sorokin, A. A.; Jastrow, U.; Juranić, P.; Kreis, S.; Gerken, N.; Richter, M.; Arp, U.; Feng, Y.; Nordlund, D.; et al.
2014-01-01T23:59:59.000Z
This paper reports novel measurements of x-ray optical radiation on an absolute scale from the intense and ultra-short radiation generated in the soft x-ray regime of a free electron laser. We give a brief description of the detection principle for radiation measurements which was specifically adapted for this photon energy range. We present data characterizing the soft x-ray instrument at the Linac Coherent Light Source (LCLS) with respect to the radiant power output and transmission by using an absolute detector temporarily placed at the downstream end of the instrument. This provides an estimation of the reflectivity of all x-ray optical elements in the beamline and provides the absolute photon number per bandwidth per pulse. This parameter is important for many experiments that need to understand the trade-offs between high energy resolution and high flux, such as experiments focused on studying materials via resonant processes. Furthermore, the results are compared with the LCLS diagnostic gas detectors to test the limits of linearity, and observations are reported on radiation contamination from spontaneous undulator radiation and higher harmonic content.
Human error contribution to nuclear materials-handling events
Sutton, Bradley (Bradley Jordan)
2007-01-01T23:59:59.000Z
This thesis analyzes a sample of 15 fuel-handling events from the past ten years at commercial nuclear reactors with significant human error contributions in order to detail the contribution of human error to fuel-handling ...
Evolved Error Management Biases in the Attribution of Anger
Galperin, Andrew
2012-01-01T23:59:59.000Z
von Hippel, W., Poore, J. C., Buss, D. M., et al.; Haselton, M. G., & Buss, D. M. (2000). Error ..., 27, 733-763.
Table 14a. Average Electricity Prices, Projected vs. Actual
Gasoline and Diesel Fuel Update (EIA)
U.S. Reformulated, Average Refiner Gasoline Prices
U.S. Energy Information Administration (EIA) Indexed Site
"2013 Average Monthly Bill- Commercial"
U.S. Energy Information Administration (EIA) Indexed Site
"2013 Average Monthly Bill- Industrial"
U.S. Energy Information Administration (EIA) Indexed Site
"2013 Average Monthly Bill- Residential"
U.S. Energy Information Administration (EIA) Indexed Site
,"Selected National Average Natural Gas Prices"
U.S. Energy Information Administration (EIA) Indexed Site
Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications
Garcia, Tanya
2012-10-19T23:59:59.000Z
to the models considered in Tsiatis and Ma (2004), our model is less stringent because it allows an unspecified model error distribution and unspecified covariate distribution, not just the latter. With an unspecified model error distribution, the RMM... with measurement error is a very different problem compared to the model considered in Tsiatis and Ma (2004), where the model error distribution has a known parametric form. Consequently, the semiparametric treatment here is also drastically different. Our...
Franklin Trouble Shooting and Error Messages
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Error Analysis in Nuclear Density Functional Theory
Nicolas Schunck; Jordan D. McDonnell; Jason Sarich; Stefan M. Wild; Dave Higdon
2014-07-11T23:59:59.000Z
Nuclear density functional theory (DFT) is the only microscopic, global approach to the structure of atomic nuclei. It is used in numerous applications, from determining the limits of stability to gaining a deep understanding of the formation of elements in the universe or the mechanisms that power stars and reactors. The predictive power of the theory depends on the amount of physics embedded in the energy density functional as well as on efficient ways to determine a small number of free parameters and solve the DFT equations. In this article, we discuss the various sources of uncertainties and errors encountered in DFT and possible methods to quantify these uncertainties in a rigorous manner.
A Taxonomy of Number Entry Error Sarah Wiseman
Subramanian, Sriram
A Taxonomy of Number Entry Error. Sarah Wiseman, UCLIC, MPEB, Malet Place, London, WC1E 7JE. ...and the subsequent process of creating a taxonomy of errors from the information gathered. A total of 345 errors were collected. These codes are then organised into a taxonomy similar to that of Zhang et al. (2004). We show how...
Susceptibility of Commodity Systems and Software to Memory Soft Errors
Riska, Alma
Susceptibility of Commodity Systems and Software to Memory Soft Errors. Alan Messer, Member, IEEE. Abstract: It is widely understood that most system downtime is accounted for by programming errors; less attention has been given to transient errors in computer system hardware due to external factors, such as cosmic rays. This work...
Predictors of Threat and Error Management: Identification of Core
Predictors of Threat and Error Management: Identification of Core Nontechnical Skills. In normal flight operations, crews are faced with a variety of external threats and commit a range of errors. Management of these threats and errors therefore forms an essential element of enhancing performance and minimizing risk.
Bolstered Error Estimation Ulisses Braga-Neto a,c
Braga-Neto, Ulisses
The bolstered error estimators proposed in this paper are provided as part of a larger library for classification and error estimation. Bolstered error estimation, also known as smoothed error estimation, has a direct geometric interpretation and can be easily applied to any classification rule. In some important cases, such as a linear classification rule with a Gaussian...
Error rate and power dissipation in nano-logic devices
Kim, Jong Un
2004-01-01T23:59:59.000Z
Current-controlled logic and single-electron logic processors have been investigated with respect to thermally induced bit errors. A maximal error rate for both logic processors is taken to be one bit error per year per chip. A maximal clock frequency...
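The "one bit error per year per chip" criterion translates into a minimum energy barrier if the thermal error probability per operation is modeled as exp(−E_b/kT). A back-of-the-envelope sketch follows; the clock rate and device count are assumptions for illustration, not values from the thesis:

```python
from math import log

k_t = 0.0259                 # thermal energy at room temperature, eV
f_clock = 10e9               # assumed clock frequency, Hz
n_devices = 1e9              # assumed logic devices per chip
seconds_per_year = 3.154e7

# With a thermally activated bit-error probability p = exp(-E_b / kT)
# per device per cycle, one error/year/chip requires p * ops <= 1, i.e.:
ops_per_year = f_clock * n_devices * seconds_per_year
e_b_min = k_t * log(ops_per_year)    # minimum barrier in eV
```

Because the barrier enters only logarithmically, raising the clock frequency or device count by an order of magnitude adds only about 0.06 eV to the requirement.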
Averaged null energy condition and quantum inequalities in curved spacetime
Eleni-Alexandra Kontou
2015-07-22T23:59:59.000Z
The Averaged Null Energy Condition (ANEC) states that the integral along a complete null geodesic of the projection of the stress-energy tensor onto the tangent vector to the geodesic cannot be negative. ANEC can be used to rule out spacetimes with exotic phenomena, such as closed timelike curves, superluminal travel and wormholes. We prove that ANEC is obeyed by a minimally-coupled, free quantum scalar field on any achronal null geodesic (no two points of which can be connected by a timelike curve) surrounded by a tubular neighborhood whose curvature is produced by a classical source. To prove ANEC we use a null-projected quantum inequality, which provides constraints on how negative the weighted average of the renormalized stress-energy tensor of a quantum field can be. Starting with a general result of Fewster and Smith, we first derive a timelike projected quantum inequality for a minimally-coupled scalar field on flat spacetime with a background potential. Using that result we proceed to find the bound of a quantum inequality on a geodesic in a spacetime with small curvature, working to first order in the Ricci tensor and its derivatives. The last step is to derive a bound for the null-projected quantum inequality on a general timelike path. Finally we use that result to prove achronal ANEC in spacetimes with small curvature.
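In schematic form (not the authors' exact bound), with $k^{\mu}$ the tangent to the null geodesic $\gamma$, $\lambda$ its affine parameter, and $f$ a smooth compactly supported sampling function, ANEC and a null-projected quantum inequality read:

```latex
\int_{\gamma} \langle T_{\mu\nu} \rangle\, k^{\mu} k^{\nu}\, d\lambda \;\geq\; 0,
\qquad
\int_{\gamma} f(\lambda)^{2}\, \langle T_{\mu\nu}\, k^{\mu} k^{\nu} \rangle\, d\lambda \;\geq\; -B(f),
```

where the bound $B(f)$ depends on the sampling function and, in curved spacetime, on the local curvature; ANEC follows when $B(f)$ can be made to vanish as the sampling width is taken to infinity.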
Polian, Ilia
...of soft errors in modern microprocessors has been reported to never lead to a system failure. The techniques are enhanced by a methodology to handle soft errors on address bits. Furthermore, we demonstrate... Consequently, many state-of-the-art systems provide soft error detection and correction capabilities [Hass 89].
Technological Advancements and Error Rates in Radiation Therapy Delivery
Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)
2011-11-15T23:59:59.000Z
Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. 
There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
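The statistics described (Fisher's exact test on a 2x2 technique-by-error table, plus an odds ratio) can be sketched as follows. The fraction counts below are hypothetical stand-ins chosen to match the reported rates, not the paper's actual denominators:

```python
from fractions import Fraction
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    P(X <= a) under the hypergeometric null, X being the top-left cell."""
    row1, col1, n = a + b, a + c, a + b + c + d
    num = sum(comb(row1, k) * comb(n - row1, col1 - k)
              for k in range(a + 1))
    return float(Fraction(num, comb(n, col1)))

# Hypothetical denominators matching the reported rates
# (about 0.03% of IMRT fractions vs 0.07% of 3D/conventional fractions):
p = fisher_one_sided(19, 59_981, 136, 179_864)
odds_ratio = (19 / 59_981) / (136 / 179_864)   # (a*d)/(b*c), roughly 0.42
```

With rare events and large denominators, the exact test is essentially a comparison of two small Poisson rates, which is why the p-value is tiny despite the low absolute error counts.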
Locked modes and magnetic field errors in MST
Almagri, A.F.; Assadi, S.; Prager, S.C.; Sarff, J.S.; Kerst, D.W.
1992-06-01T23:59:59.000Z
In the MST reversed-field pinch, magnetic oscillations become stationary (locked) in the lab frame as a result of a process involving interactions between the modes, sawteeth, and field errors. Several helical modes become phase-locked to each other to form a rotating localized disturbance; the disturbance locks to an impulsive field error generated at a sawtooth crash; the error fields grow monotonically after locking (perhaps due to an unstable interaction between the modes and the field error); and over the tens of milliseconds of growth, confinement degrades and the discharge eventually terminates. Field error control has been partially successful in eliminating locking.
Measurement strategies for estimating long-term average wind speeds
Ramsdell, J.V.; Houston, S.; Wegley, H.L.
1980-10-01T23:59:59.000Z
The uncertainty and bias in estimates of long-term average wind speeds inherent in continuous and intermittent measurement strategies are examined by simulating the application of the strategies to 40 data sets. Continuous strategies have smaller uncertainties for fixed duration measurement programs, but intermittent strategies make more efficient use of instruments and have smaller uncertainties for a fixed amount of instrument use. Continuous strategies tend to give biased estimates of the long-term annual mean speed unless an integral number of years' data is collected or the measurement program exceeds 3 years in duration. Intermittent strategies with three or more month-long measurement periods per year do not show any tendency toward bias.
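The continuous-versus-intermittent comparison can be illustrated with a toy simulation of monthly mean wind speeds (the seasonal amplitude and noise level are invented): a continuous record covering a non-integral number of years picks up a seasonal bias, while month-long visits spread evenly across the seasons do not:

```python
import numpy as np

rng = np.random.default_rng(2)
trials, years = 4000, 4
months = 12 * years
t = np.arange(months)

# Synthetic monthly mean wind speeds: 6 m/s plus a seasonal cycle
seasonal = 6.0 + 1.5 * np.cos(2.0 * np.pi * t / 12.0)
speeds = seasonal + rng.normal(0.0, 0.8, (trials, months))

# Strategy A: 18 consecutive months (continuous, non-integral years)
continuous_18mo = speeds[:, :18].mean(axis=1)

# Strategy B: three month-long visits per year, spread across seasons
idx = np.concatenate([12 * y + np.array([0, 4, 8]) for y in range(years)])
intermittent = speeds[:, idx].mean(axis=1)
```

Averaged over many trials, strategy A overestimates the 6 m/s long-term mean because the extra half-year oversamples one phase of the seasonal cycle, while strategy B is unbiased with only 12 months of instrument use.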
Average vertical and zonal F region plasma drifts over Jicamarca
Fejer, B.G.; Gonzalez, S.A. (Utah State Univ., Logan (United States)); de Paula, E.R. (Inst. de Pesquisas Espaciais-INPE, Sao Paulo (Brazil) Utah State Univ., Logan (United States)); Woodman, R.F. (Inst. Geofisico del Peru, Lima (Peru))
1991-08-01T23:59:59.000Z
The seasonal averages of the equatorial F region vertical and zonal plasma drifts are determined using extensive incoherent scatter radar observations from Jicamarca during 1968-1988. The late afternoon and nighttime vertical and zonal drifts are strongly dependent on the 10.7-cm solar flux. The authors show that the evening prereversal enhancement of vertical drifts increases linearly with solar flux during equinox but tends to saturate for large fluxes during southern hemisphere winter. They examine in detail, for the first time, the seasonal variation of the zonal plasma drifts and their dependence on solar flux and magnetic activity. The seasonal effects on the zonal drifts are most pronounced in the midnight-morning sector. The nighttime eastward drifts increase with solar flux for all seasons but decrease slightly with magnetic activity. The daytime westward drifts are essentially independent of season, solar cycle, and magnetic activity.
Average System Cost Methodology : Administrator's Record of Decision.
United States. Bonneville Power Administration.
1984-06-01T23:59:59.000Z
Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, where retail rate orders of regulatory agencies provide primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of procedures for separating subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and the requirement that each exchanging utility file under the new methodology within 20 days of implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Beckerman, M.; Jones, J.P.
1999-02-01T23:59:59.000Z
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.
In Search of a Taxonomy for Classifying Qualitative Spreadsheet Errors
Przasnyski, Zbigniew; Seal, Kala Chand
2011-01-01T23:59:59.000Z
Most organizations use large and complex spreadsheets that are embedded in their mission-critical processes and are used for decision-making purposes. Identification of the various types of errors that can be present in these spreadsheets is, therefore, an important control that organizations can use to govern their spreadsheets. In this paper, we propose a taxonomy for categorizing qualitative errors in spreadsheet models that offers a framework for evaluating the readiness of a spreadsheet model before it is released for use by others in the organization. The classification was developed based on the types of qualitative errors identified in the literature and the errors committed by end-users in developing a spreadsheet model for Panko's (1996) "Wall problem". Closer inspection reveals four logical groupings of the errors, yielding four categories of qualitative errors. The usability and limitations of the proposed taxonomy and areas for future extension are discussed.
Integrating human related errors with technical errors to determine causes behind offshore accidents
Aamodt, Agnar
Human related errors were embedded as an integral part of the oil-well drilling operation. The method is based on a knowledge model of the oil-well drilling process and supports assessment of the failure, with the aim of reducing the amount of non-productive time (NPT) during oil-well drilling. NPT exhibits a much lower declining trend than
Quantum Error Correction with magnetic molecules
José J. Baldoví; Salvador Cardona-Serra; Juan M. Clemente-Juan; Luis Escalera-Moreno; Alejandro Gaita-Ariño; Guillermo Mínguez Espallargas
2014-08-22T23:59:59.000Z
Quantum algorithms often assume independent spin qubits to produce trivial $|\uparrow\rangle=|0\rangle$, $|\downarrow\rangle=|1\rangle$ mappings. This can be unrealistic in many solid-state implementations with sizeable magnetic interactions. Here we show that the lower part of the spectrum of a molecule containing three exchange-coupled metal ions with $S=1/2$ and $I=1/2$ is equivalent to nine electron-nuclear qubits. We derive the relation between spin states and qubit states in reasonable parameter ranges for the rare earth $^{159}$Tb$^{3+}$ and for the transition metal Cu$^{2+}$, and study the possibility to implement Shor's Quantum Error Correction code on such a molecule. We also discuss recently developed molecular systems that could be adequate from an experimental point of view.
Output error identification of hydrogenerator conduit dynamics
Vogt, M.A.; Wozniak, L. (Illinois Univ., Urbana, IL (USA)); Whittemore, T.R. (Bureau of Reclamation, Denver, CO (USA))
1989-09-01T23:59:59.000Z
Two output error model reference adaptive identifiers are considered for estimating the parameters in a reduced-order gate-position-to-pressure model for the hydrogenerator. This information may later be useful in an adaptive controller. Gradient and sensitivity-functions identifiers are discussed for the hydroelectric application, and connections are made between their structural differences and relative performance. Simulations are presented to support the conclusion that the latter algorithm is more robust, having better disturbance rejection and less sensitivity to plant-model mismatch. For identification from plant data recorded with step gate inputs, the gradient algorithm even fails to converge. A method for checking the estimated parameters is developed by relating the coefficients in the reduced-order model to head, an externally measurable parameter.
Pressure Change Measurement Leak Testing Errors
Pryor, Jeff M [ORNL]; Walker, William C [ORNL]
2014-01-01T23:59:59.000Z
A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monatomic or diatomic gases; the same principles can be applied to polyatomic gases or liquid flow rates with formulas altered for those types of tests using the same methodology.
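One compensation the abstract alludes to, temperature drift between readings, can be sketched with the ideal gas law; the volume, pressures, and temperatures below are invented for illustration and are not from the paper:

```python
# Ideal-gas sketch (illustrative numbers, not the paper's worked example):
# a naive pressure-change leak test compares raw gauge readings, but a
# temperature drift between readings mimics a leak. Comparing the gas
# inventory n = PV/(RT) compensates for the temperature change.
R = 8.314  # J/(mol*K), universal gas constant

def moles(p_pa, v_m3, t_k):
    """Moles of ideal gas at pressure p_pa, volume v_m3, temperature t_k."""
    return p_pa * v_m3 / (R * t_k)

V = 0.50                    # m^3, assumed test volume
p0, t0 = 500_000.0, 293.15  # initial reading: 500 kPa at 20.0 C
p1, t1 = 498_000.0, 291.95  # 24 h later: 498 kPa at 18.8 C

raw_drop = p0 - p1                            # 2 kPa drop: looks like a leak
n_lost = moles(p0, V, t0) - moles(p1, V, t1)  # temperature-compensated loss
# n_lost comes out near zero: the cool-down alone explains the pressure drop.
print(raw_drop, n_lost)
```

In this example the uncompensated reading would be flagged as a leak, while the mole balance shows the inventory is essentially unchanged.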
The concepts of leak before break and absolute reliability of NPP equipment and piping
Getman, A.F.; Komarov, O.V.; Sokov, L.M. [and others]
1997-04-01T23:59:59.000Z
This paper describes the absolute reliability (AR) concept for ensuring safe operation of nuclear plant equipment and piping. The AR of a pipeline or component is defined as the level of reliability when the probability of an instantaneous double-ended break is near zero. AR analysis has been applied to Russian RBMK and VVER type reactors. It is proposed that analyses required for application of the leak before break concept should be included in AR implementation. The basic principles, methods, and approaches that provide the basis for implementing the AR concept are described.
Absolute polarization standards at medium and high energies [200 to 900 MeV]
McNaughton, M.W.
1980-01-01T23:59:59.000Z
Although measurement of a polarization asymmetry is rather easy, the normalization of the measurement to obtain the analyzing power requires an absolute knowledge of the beam polarization or comparison with a known standard analyzing power. Such calibration standards can be hard to find. This paper concentrates on medium and higher energies, and divides the techniques into four categories: double scattering, polarized target methods, polarized source methods, and theoretical methods. Secondary standards are also discussed, and earlier data are assessed. 52 references, 6 figures. (RWR)
An absolute quantum energy inequality for the Dirac field in curved spacetime
Calvin J. Smith
2007-05-15T23:59:59.000Z
Quantum Weak Energy Inequalities (QWEIs) are results which limit the extent to which the smeared renormalised energy density of a quantum field can be negative. On globally hyperbolic spacetimes the massive quantum Dirac field is known to obey a QWEI in terms of a reference state chosen arbitrarily from the class of Hadamard states; however, there exist spacetimes of interest on which state-dependent bounds cannot be evaluated. In this paper we prove the first QWEI for the massive quantum Dirac field on four dimensional globally hyperbolic spacetime in which the bound depends only on the local geometry; such a QWEI is known as an absolute QWEI.
Huang, Weidong
2011-01-01T23:59:59.000Z
Surface slope error of the concentrator is one of the main factors influencing the performance of solar concentrating collectors: it deviates the reflected rays and reduces the intercepted radiation. This paper presents a general equation, derived through geometric optics, for calculating the standard deviation of the reflected-ray error from the slope error, applies it to five kinds of solar concentrating reflectors, and provides typical results. The results indicate that the slope error is transferred to the reflected ray amplified by a factor of more than 2 when the incidence angle is greater than 0. The equation for the reflected-ray error fits all reflective surfaces in general and can also be applied to control the error when designing an off-axis optical system.
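The factor-of-2 transfer at the lower limit follows directly from the law of reflection: tilting the surface normal by a slope error delta deviates the reflected ray by 2*delta. A minimal numerical check at normal incidence (where the factor is exactly 2), with an assumed slope-error magnitude:

```python
import math
import random

# Geometric-optics check (a sketch, not the paper's derivation): at normal
# incidence a slope error delta tilts the mirror normal by delta, and the
# law of reflection r = d - 2(d.n)n then deviates the reflected ray by
# 2*delta, so sigma(reflected-ray error) = 2 * sigma(slope error).
random.seed(0)
sigma_slope = 2e-3  # rad, assumed slope-error standard deviation

def reflected_deviation(delta):
    d = (0.0, -1.0)                         # incoming ray, normal incidence
    n = (math.sin(delta), math.cos(delta))  # surface normal tilted by delta
    dot = d[0] * n[0] + d[1] * n[1]
    r = (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])
    return math.atan2(r[0], r[1])           # angle from the ideal direction (0, 1)

devs = [reflected_deviation(random.gauss(0.0, sigma_slope)) for _ in range(100_000)]
sigma_ray = math.sqrt(sum(x * x for x in devs) / len(devs))
print(sigma_ray / sigma_slope)  # ratio close to 2.0
```

At oblique incidence the projection of the tilted normal changes, which is why the paper finds transfer factors above 2 for nonzero incidence angles.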
Ramanujam, J. "Ram"
... where a is the average number of transitions per clock phase; a heuristic for peak and average power per cycle at the gate level.
Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.; Riihimaki, Laura D.; Michalsky, Joseph; Hodges, G. B.
2014-10-25T23:59:59.000Z
We introduce and evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone at five wavelengths (415, 500, 615, 673 and 870 nm), under fully overcast conditions. Our retrieval is based on a one-line semi-analytical equation and widely accepted assumptions regarding the weak spectral dependence of cloud optical properties, such as cloud optical depth and asymmetry parameter, in the visible and near-infrared spectral range. To illustrate the performance of our retrieval, we use as input measurements of spectral atmospheric transmission from the Multi-Filter Rotating Shadowband Radiometer (MFRSR). These MFRSR data are collected at two well-established continental sites in the United States supported by the U.S. Department of Energy’s (DOE’s) Atmospheric Radiation Measurement (ARM) Program and the National Oceanic and Atmospheric Administration (NOAA). The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo. In particular, these comparisons are made at four MFRSR wavelengths (500, 615, 673 and 870 nm) and for four seasons (winter, spring, summer and fall) at the ARM site using multi-year (2008-2013) MFRSR and MODIS data. Good agreement, on average, at these wavelengths results in small values (≈0.01) of the corresponding root mean square errors (RMSEs) for these two sites. The obtained RMSEs are comparable with those obtained previously for the shortwave albedos (MODIS-derived versus tower-measured) for these sites during growing seasons. We also demonstrate good agreement between tower-based daily-averaged surface albedos measured for “nearby” overcast and non-overcast days. Thus, our retrieval, originally developed for overcast conditions, likely can be extended to non-overcast days by interpolating between overcast retrievals.
Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.; Riihimaki, Laura D.; Michalsky, Joseph; Hodges, G. B.
2014-08-22T23:59:59.000Z
We present here a simple retrieval of the areal-averaged and spectrally resolved surface albedo using only ground-based measurements of atmospheric transmission under fully overcast conditions. Our retrieval is based on a one-line equation and widely accepted assumptions regarding the weak spectral dependence of cloud optical properties in the visible and near-infrared spectral range. The feasibility of our approach for the routine determination of albedo is demonstrated for different landscapes with various degrees of heterogeneity using three sets of measurements: (1) spectrally resolved atmospheric transmission from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at wavelengths 415, 500, 615, 673, and 870 nm, (2) tower-based measurements of local surface albedo at the same wavelengths, and (3) areal-averaged surface albedo at four wavelengths (470, 560, 670 and 860 nm) from collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) observations. These integrated datasets cover both long (2008-2013) and short (April-May, 2010) periods at the ARM Southern Great Plains (SGP) site and the NOAA Table Mountain site, respectively. The calculated root mean square error (RMSE), which is defined here as the root mean squared difference between the MODIS-derived surface albedo and the retrieved areal-averaged albedo, is quite small (RMSE ≈ 0.01) and comparable with that obtained previously by other investigators for the shortwave broadband albedo. Good agreement between the tower-based daily averages of surface albedo for the completely overcast and non-overcast conditions is also demonstrated. This agreement suggests that our retrieval, originally developed for overcast conditions, likely will work for non-overcast conditions as well.
Dosimetry in Mammography: Average Glandular Dose Based on Homogeneous Phantom
Benevides, Luis A. [Naval Sea Systems Command, 1333 Isaac Hull Avenue, Washington Navy Yard, DC 20376 (United States)]; Hintenlang, David E. [University of Florida, 202 Nuclear Sciences Center, P.O. Box 1183, Gainesville, Florida 32611 (United States)]
2011-05-05T23:59:59.000Z
The objective of this study was to demonstrate that a clinical dosimetry protocol that utilizes a dosimetric breast phantom series based on population anthropometric measurements can reliably predict the average glandular dose (AGD) imparted to the patient during a routine screening mammogram. AGD was calculated using entrance skin exposure and dose conversion factors based on fibroglandular content, compressed breast thickness, mammography unit parameters and modifying parameters for homogeneous phantom (phantom factor), compressed breast lateral dimensions (volume factor) and anatomical features (anatomical factor). The patient fibroglandular content was evaluated using a calibrated modified breast tissue equivalent homogeneous phantom series (BRTES-MOD) designed from anthropomorphic measurements of a screening mammography population and whose elemental composition was referenced to International Commission on Radiation Units and Measurements Report 44 and 46 tissues. The patient fibroglandular content, compressed breast thickness along with unit parameters and spectrum half-value layer were used to derive the currently used dose conversion factor (DgN). The study showed that the use of a homogeneous phantom, patient compressed breast lateral dimensions and patient anatomical features can affect AGD by as much as 12%, 3% and 1%, respectively. The protocol was found to be superior to existing methodologies. The clinical dosimetry protocol developed in this study can reliably predict the AGD imparted to an individual patient during a routine screening mammogram.
High average power magnetic modulator for copper lasers
Cook, E.G.; Ball, D.G.; Birx, D.L.; Branum, J.D.; Peluso, S.E.; Langford, M.D.; Speer, R.D.; Sullivan, J.R.; Woods, P.G.
1991-06-14T23:59:59.000Z
Magnetic compression circuits show the promise of long life for operation at high average powers and high repetition rates. When the Atomic Vapor Laser Isotope Separation (AVLIS) Program at Lawrence Livermore National Laboratory needed new modulators to drive their higher power copper lasers in the Laser Demonstration Facility (LDF), existing technology using thyratron-switched capacitor inversion circuits did not meet the goal for long lifetimes at the required power levels. We have demonstrated that magnetic compression circuits can achieve this goal. Improved thyratron lifetime is achieved by increasing the thyratron conduction time, thereby reducing the effect of cathode depletion. This paper describes a three-stage magnetic modulator designed to provide a 60 kV pulse to a copper laser at a 4.5 kHz repetition rate. This modulator operates at 34 kW input power and has exhibited an MTBF of ≈1000 hours when using thyratrons, and even longer MTBFs with a series stack of SCRs for the main switch. Within this paper, the electrical and mechanical designs for the magnetic compression circuits are discussed, as are the important performance parameters of lifetime and jitter. Ancillary circuits such as the charge circuit and reset circuit are shown. 8 refs., 5 figs., 1 tab.
Frenje, J A; Casey, D T; Li, C K; Rygg, J R; Seguin, F H; Petrasso, R D; Glebov, V Y; Meyerhofer, D D; Sangster, T C; Hatchett, S; Haan, S; Cerjan, C; Landen, O; Moran, M; Song, P; Wilson, D C; Leeper, R J
2008-05-12T23:59:59.000Z
A new type of neutron spectrometer, called a Magnetic Recoil Spectrometer (MRS), has been built and implemented at the OMEGA laser facility [T. R. Boehly, D. L. Brown, R. S. Craxton et al., Opt. Commun. 133, 495 (1997)] for absolute measurements of the neutron spectrum in the range 6 to 30 MeV, from which fuel areal density (ρR), ion temperature (Ti) and yield (Yn) can be determined. The results from the first MRS measurements of the absolute neutron spectrum are presented. In addition, measuring ρR at the National Ignition Facility (NIF) [G. H. Miller, E. I. Moses and C. R. Wuest, Nucl. Fusion 44, S228 (2004)] will be essential for assessing implosion performance during all stages of development, from surrogate implosions to cryogenic fizzles and ignited implosions. To accomplish this, we are also developing an MRS for the NIF. As much of the R&D and instrument optimization of the MRS at OMEGA is directly applicable to the MRS at the NIF, a description of the MRS design for the NIF is discussed as well.
Absolute Calibration of the Radio Astronomy Flux Density Scale at 22 to 43 GHz Using Planck
Partridge, B; Perley, R A; Stevens, J; Butler, B J; Rocha, G; Walter, B; Zacchei, A
2015-01-01T23:59:59.000Z
The Planck mission detected thousands of extragalactic radio sources at frequencies from 28 to 857 GHz. Planck's calibration is absolute (in the sense that it is based on the satellite's annual motion around the Sun and the temperature of the cosmic microwave background), and its beams are well characterized at sub-percent levels. Thus Planck's flux density measurements of compact sources are absolute in the same sense. We have made coordinated VLA and ATCA observations of 65 strong, unresolved Planck sources in order to transfer Planck's calibration to ground-based instruments at 22, 28, and 43 GHz. The results are compared to microwave flux density scales currently based on planetary observations. Despite the scatter introduced by the variability of many of the sources, the flux density scales are determined to 1-2% accuracy. At 28 GHz, the flux density scale used by the VLA runs 3.6% ± 1.0% below Planck values; at 43 GHz, the discrepancy increases to 6.2% ± 1.4% for both ATCA and the VLA.
Deterministic treatment of model error in geophysical data assimilation
Carrassi, Alberto
2015-01-01T23:59:59.000Z
This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational cost.
Error models in quantum computation: an application of model selection
Lucia Schwarz; Steven van Enk
2013-09-04T23:59:59.000Z
Threshold theorems for fault-tolerant quantum computing assume that errors are of certain types. But how would one detect whether errors of the "wrong" type occur in one's experiment, especially if one does not even know what type of error to look for? The problem is that for many qubits a full state description is impossible to analyze, and a full process description is even less tractable. As a result, one simply cannot detect all types of errors. Here we show through a quantum state estimation example (on up to 25 qubits) how to attack this problem using model selection. We use, in particular, the Akaike Information Criterion. The example indicates that the number of measurements that one has to perform before noticing errors of the wrong type scales polynomially both with the number of qubits and with the error size.
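The Akaike Information Criterion invoked above can be illustrated with a toy (classical, non-quantum) model-selection example; the error counts and trial numbers below are invented for illustration:

```python
import math

# AIC = 2k - 2 ln(L): trades goodness of fit (log-likelihood L) against the
# number of fitted parameters k. Toy question: do two error sources share one
# error rate (k = 1) or do they need separate rates (k = 2)?
def log_lik(errs, trials, p):
    # binomial log-likelihood up to a model-independent constant
    return sum(e * math.log(p) + (t - e) * math.log(1 - p)
               for e, t in zip(errs, trials))

errs, trials = [12, 30], [1000, 1000]  # assumed observed error counts

# Model A: one shared rate (k = 1), at its maximum-likelihood estimate
p_shared = sum(errs) / sum(trials)
aic_a = 2 * 1 - 2 * log_lik(errs, trials, p_shared)

# Model B: separate rates (k = 2), each at its maximum-likelihood estimate
ll_b = sum(log_lik([e], [t], e / t) for e, t in zip(errs, trials))
aic_b = 2 * 2 - 2 * ll_b

# The lower-AIC model is preferred; here the data are different enough that
# the extra parameter pays for itself.
print(aic_a > aic_b)  # -> True
```

The paper applies the same criterion to far larger hypothesis spaces (error models over many qubits), where the penalty term guards against overfitting the tomographic data.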
A two reservoir model of quantum error correction
James P. Clemens; Julio Gea-Banacloche
2005-08-22T23:59:59.000Z
We consider a two reservoir model of quantum error correction with a hot bath causing errors in the qubits and a cold bath cooling the ancilla qubits to a fiducial state. We consider error correction protocols both with and without measurement of the ancilla state. The error correction acts as a kind of refrigeration process to maintain the data qubits in a low entropy state by periodically moving the entropy to the ancilla qubits and then to the cold reservoir. We quantify the performance of the error correction as a function of the reservoir temperatures and cooling rate by means of the fidelity and the residual entropy of the data qubits. We also make a comparison with the continuous quantum error correction model of Sarovar and Milburn [Phys. Rev. A 72 012306].
Trial application of a technique for human error analysis (ATHEANA)
Bley, D.C. [Buttonwood Consulting, Inc., Oakton, VA (United States)]; Cooper, S.E. [Science Applications International Corp., Reston, VA (United States)]; Parry, G.W. [NUS, Gaithersburg, MD (United States)] [and others]
1996-10-01T23:59:59.000Z
The new method for HRA, ATHEANA, has been developed based on a study of the operating history of serious accidents and an understanding of the reasons why people make errors. Previous publications associated with the project have dealt with the theoretical framework under which errors occur and the retrospective analysis of operational events. This is the first attempt to use ATHEANA in a prospective way, to select and evaluate human errors within the PSA context.
Nonlocal effective-average-action approach to crystalline phantom membranes
Hasselmann, N. [Max Planck Institute for Solid State Research, Heisenbergstrasse 1, D-70569 Stuttgart (Germany); International Institute of Physics, Universidade Federal do Rio Grande do Norte, 59072-970, Natal, RN (Brazil); Braghin, F. L. [International Institute of Physics, Universidade Federal do Rio Grande do Norte, 59072-970, Natal, RN (Brazil); Instituto de Fisica, Universidade Federal de Goias, P. B. 131, Campus II, 74001-970, Goiania, GO (Brazil)
2011-03-15T23:59:59.000Z
We investigate the properties of crystalline phantom membranes, at the crumpling transition and in the flat phase, using a nonperturbative renormalization group approach. We avoid a derivative expansion of the effective average action and instead analyze the full momentum dependence of the elastic coupling functions. This leads to a more accurate determination of the critical exponents and further yields the full momentum dependence of the correlation functions of the in-plane and out-of-plane fluctuations. The flow equations are solved numerically for D=2 dimensional membranes embedded in a d=3 dimensional space. Within our approach we find a crumpling transition of second order which is characterized by an anomalous exponent η_c ≈ 0.63(8) and the thermal exponent ν ≈ 0.69. Near the crumpling transition the order parameter of the flat phase vanishes with a critical exponent β ≈ 0.22. The flat-phase anomalous dimension is η_f ≈ 0.85 and the Poisson's ratio inside the flat phase is found to be σ_f ≈ -1/3. At the crumpling transition we find a much larger negative value of the Poisson's ratio, σ_c ≈ -0.71(5). We discuss further in detail the different regimes of the momentum-dependent fluctuations, both in the flat phase and in the vicinity of the crumpling transition, and extract the crossover momentum scales which separate them.
Cosmic Ray Spectral Deformation Caused by Energy Determination Errors
Per Carlson; Conny Wannemark
2005-05-10T23:59:59.000Z
Using simulation methods, distortion effects on energy spectra caused by errors in the energy determination have been investigated. For cosmic ray proton spectra, falling steeply with kinetic energy E as E^-2.7, significant effects appear. When magnetic spectrometers are used to determine the energy, the relative error increases linearly with the energy, and sinusoidal distortions appear starting at an energy that depends significantly on the error distribution but is lower than the energy corresponding to the Maximum Detectable Rigidity of the spectrometer. The effect should be taken into consideration when comparing data from different experiments, which often have different error distributions.
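The basic mechanism can be reproduced with a tiny Monte Carlo; the spectral range, event count, and resolution model below are assumptions for illustration, not the paper's simulation parameters:

```python
import random

# Toy Monte Carlo (assumed parameters): sample a steep power-law proton
# spectrum dN/dE ~ E^-2.7 on [1, 100] GeV, then smear each energy with a
# Gaussian whose relative width grows linearly with E, as for a magnetic
# spectrometer approaching its Maximum Detectable Rigidity. Smearing then
# reconstructs events at energies the true spectrum never reaches.
random.seed(1)
GAMMA, E_MIN, E_MAX = 2.7, 1.0, 100.0

def sample_power_law():
    # inverse-CDF sampling of dN/dE ~ E^-GAMMA on [E_MIN, E_MAX]
    u = random.random()
    a, b = E_MIN ** (1 - GAMMA), E_MAX ** (1 - GAMMA)
    return (a + u * (b - a)) ** (1 / (1 - GAMMA))

phantom = 0  # events reconstructed above the true spectral endpoint
for _ in range(500_000):
    e_true = sample_power_law()        # always below 100 GeV by construction
    sigma = 0.002 * e_true * e_true    # sigma/E grows linearly with E (assumed 0.2%/GeV)
    e_meas = random.gauss(e_true, sigma)
    phantom += e_meas > E_MAX

# Measurement error alone populates the region beyond the endpoint,
# distorting the apparent high-energy spectrum.
print(phantom)
```

Because the spectrum falls steeply, such migration is asymmetric in counts, which is what produces the spectral deformation the paper quantifies.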
Error estimates for the Euler discretization of an optimal control ...
Joseph FrĂ©dĂ©ric Bonnans
2014-12-10T23:59:59.000Z
Abstract: We study the error introduced in the solution of an optimal control problem with first order state constraints, for which the trajectories ...
Identification of toroidal field errors in a modified betatron accelerator
Loschialpo, P. (Beam Physics Branch, Plasma Physics Division, Naval Research Laboratory, Washington, DC 20375 (United States)); Marsh, S.J. (SFA Inc., Landover, Maryland 20785 (United States)); Len, L.K.; Smith, T. (FM Technologies Inc., 10529-B Braddock Road, Fairfax, Virginia 22032 (United States)); Kapetanakos, C.A. (Beam Physics Branch, Plasma Physics Division, Naval Research Laboratory, Washington, DC 20375 (United States))
1993-06-01T23:59:59.000Z
A newly developed probe, having a 0.05% resolution, has been used to detect errors in the toroidal magnetic field of the NRL modified betatron accelerator. Measurements indicate that the radial field components (errors) are 0.1%-1% of the applied toroidal field. Such errors, in the typically 5 kG toroidal field, can excite resonances which drive the beam to the wall. Two sources of detected field errors are discussed. The first is due to the discrete nature of the 12 single turn coils which generate the toroidal field. Both measurements and computer calculations indicate that its amplitude varies from 0% to 0.2% as a function of radius. Displacement of the outer leg of one of the toroidal field coils by a few millimeters has a significant effect on the amplitude of this field error. Because of the uniform toroidal periodicity of these coils, this error is a good suspect for causing the excitation of the damaging l = 12 resonance seen in our experiments. The other source of field error is due to the current feed gaps in the vertical magnetic field coils. A magnetic field is induced inside the vertical field coils' conductor in the opposite direction of the applied toroidal field. Fringe fields at the gaps lead to additional field errors which have been measured as large as 1.0%. This source of field error, which exists at five toroidal locations around the modified betatron, can excite several integer resonances, including the l = 12 mode.
On Error Estimates of the Penalty Method for Unsteady Navier ...
Nov 26, 2002 ... However, the best error estimates available to the author's knowledge ...
New Fractional Error Bounds for Polynomial Systems with ...
2014-07-27T23:59:59.000Z
... techniques are largely based on variational analysis and generalized differentiation ... Example 3.10 (failure of global error bounds for polynomial systems).
The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors
Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter [Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4 (Canada); Department of Oncology, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada)]
2010-07-15T23:59:59.000Z
Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
A technique for human error analysis (ATHEANA)
Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others
1996-05-01T23:59:59.000Z
Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Casey, D. T.; Frenje, J. A.; Gatu Johnson, M.; Seguin, F. H.; Li, C. K.; Petrasso, R. D.; Glebov, V. Yu.; Katz, J.; Magoon, J.; Meyerhofer, D. D.; et al
2013-01-01T23:59:59.000Z
The neutron spectrum produced by deuterium-tritium (DT) inertial confinement fusion implosions contains a wealth of information about implosion performance, including the DT yield, ion temperature, and areal density. The Magnetic Recoil Spectrometer (MRS) has been used at both the OMEGA laser facility and the National Ignition Facility (NIF) to measure the absolute neutron spectrum from 3 to 30 MeV at OMEGA and 3 to 36 MeV at the NIF. These measurements have been used to diagnose the performance of cryogenic target implosions to unprecedented accuracy. Interpretation of MRS data requires a detailed understanding of the MRS response and background. This paper describes ab initio characterization of the system involving Monte Carlo simulations of the MRS response, in addition to the commissioning experiments for in situ calibration of the systems on OMEGA and the NIF.
Rafael Brada; Mordehai Milgrom
1998-12-21T23:59:59.000Z
We have recently discovered that the modified dynamics (MOND) implies some universal upper bound on the acceleration that can be contributed by a 'dark halo', assumed in a Newtonian analysis to account for the effects of MOND. Not surprisingly, the limit is of the order of the acceleration constant of the theory. This can be contrasted directly with the results of structure-formation simulations. The new limit is substantial and different from earlier MOND acceleration limits (discussed in connection with the MOND explanation of the Freeman law for galaxy disks, and the Fish law for ellipticals): it pertains to the 'halo', and not to the observed galaxy; it is absolute, and independent of further physical assumptions on the nature of the galactic system; and it applies at all radii, whereas the other limits apply only to the mean acceleration in the system.
Diagnostics principle of microwave cut-off probe for measuring absolute electron density
Jun, Hyun-Su, E-mail: mtsconst@kaist.ac.kr [Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of)
2014-08-15T23:59:59.000Z
A generalized diagnostics principle of the microwave cut-off probe is presented with a full analytical solution. In previous studies on the microwave cut-off measurement of weakly ionized plasmas, the cut-off frequency ω{sub c} of a given electron density is assumed to be equal to the plasma frequency ω{sub p} and is predicted using electromagnetic simulation or electric circuit model analysis. However, for specific plasma conditions such as highly collisional plasma and a very narrow probe tip gap, it has been found that ω{sub c} and ω{sub p} are not equal. To resolve this problem, a generalized diagnostics principle is proposed by analytically solving the microwave cut-off condition Re[ε{sub r,eff}(ω = ω{sub c})] = 0. In addition, characteristics of the microwave cut-off condition are theoretically tested for correct measurement of the absolute electron density.
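As an illustrative sketch of why ω{sub c} and ω{sub p} differ for collisional plasmas: taking the real part of a simple Drude permittivity, Re[ε(ω)] = 1 - ω{sub p}²/(ω² + ν²) with collision frequency ν (an assumed textbook model, not the paper's full probe analysis), the cut-off condition Re[ε(ω{sub c})] = 0 gives ω{sub c} = sqrt(ω{sub p}² - ν²):

```python
import math

def cutoff_frequency(omega_p, nu):
    """Cut-off frequency from the real part of a Drude permittivity,
    Re[eps(w)] = 1 - wp**2 / (w**2 + nu**2).
    Solving Re[eps(w_c)] = 0 gives w_c = sqrt(wp**2 - nu**2), so w_c < w_p
    whenever the collision frequency nu is nonzero. This is an illustrative
    model only; the probe geometry adds further corrections.
    """
    if nu >= omega_p:
        raise ValueError("no real cut-off: collision rate exceeds omega_p")
    return math.sqrt(omega_p**2 - nu**2)
```

For a collisionless plasma (ν = 0) the function reduces to ω{sub c} = ω{sub p}, the assumption made in the earlier studies cited above.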
Kinematic Error Correction for Minimally Invasive Surgical Robots
... in two likely sources of kinematic error: port displacement and instrument shaft flexion. ... To reach the surgical site near the chest wall, the instrument shaft applies significant torque to the port, causing the instrument shaft to bend. These kinematic errors impair positioning of the robot and cause deviations from ...
A Geometric Approach to Error Detection and Recovery
Richardson, David
... may not even exist. For this reason we investigate error detection and recovery (EDR) strategies. ... and implementational questions remain. The second contribution is a formal, geometric approach to EDR. ...
Error Control of Iterative Linear Solvers for Integrated Groundwater Models
Bai, Zhaojun
... gradient method or Generalized Minimum RESidual (GMRES) method, is how to choose the residual tolerance ... for integrated groundwater models, which are implicitly coupled to another model, such as surface water models ... the correspondence between the residual error in the preconditioned linear system and the solution error. ...
Numerical Construction of Likelihood Distributions and the Propagation of Errors
J. Swain; L. Taylor
1997-12-12T23:59:59.000Z
The standard method for the propagation of errors, based on a Taylor series expansion, is approximate and frequently inadequate for realistic problems. A simple and generic technique is described in which the likelihood is constructed numerically, thereby greatly facilitating the propagation of errors.
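The idea of constructing the distribution numerically rather than linearizing can be sketched as follows. This is a hedged Monte Carlo illustration of the general approach, with independent Gaussian inputs assumed for simplicity (not the authors' exact likelihood construction):

```python
import numpy as np

def propagate_errors_mc(func, means, sigmas, n_samples=50_000, seed=0):
    """Numerically construct the distribution of func(inputs) instead of
    using a first-order Taylor expansion.

    Inputs are assumed independent and Gaussian (an assumption of this
    sketch). Returns the median and a 68% half-width of the output
    distribution.
    """
    rng = np.random.default_rng(seed)
    # Draw all input vectors at once; each row is one sampled input set.
    samples = rng.normal(means, sigmas, size=(n_samples, len(means)))
    values = np.array([func(*row) for row in samples])
    lo, med, hi = np.percentile(values, [15.87, 50.0, 84.13])
    return med, 0.5 * (hi - lo)
```

Unlike the Taylor-series method, this captures asymmetric and non-Gaussian output distributions, which is the regime the abstract calls "frequently inadequate" for the standard method.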
Mining API Error-Handling Specifications from Source Code
Xie, Tao
Mining API Error-Handling Specifications from Source Code. Mithun Acharya and Tao Xie, Department ... it difficult to mine error-handling specifications through manual inspection of source code. In this paper, we ... without any user input. In our framework, we adapt a trace generation technique to distinguish ...
Calibration and Error in Placental Molecular Clocks: A Conservative
Hadly, Elizabeth
Calibration and Error in Placental Molecular Clocks: A Conservative Approach Using ... for calibrating both mitogenomic and nucleogenomic placental timescales. We applied these reestimates to the most ... calibration error may inflate the power of the molecular clock when testing the time of ordinal ...
Error detection through consistency checking Peng Gong* Lan Mu#
Silver, Whendee
Error detection through consistency checking. Peng Gong and Lan Mu, Center for Assessment & Monitoring ..., University of California, Berkeley, Berkeley, CA 94720-3110. ... accessibility, and timeliness as recorded in the lineage data (Chen and Gong, 1998). Spatial error refers ...
ERROR-TOLERANT MULTI-MODAL SENSOR FUSION Farinaz Koushanfar*
Potkonjak, Miodrag
ERROR-TOLERANT MULTI-MODAL SENSOR FUSION. Farinaz Koushanfar, Sasha Slijepcevic, Miodrag ... is multi-modal sensor fusion, where data from sensors of different modalities are combined in order ... applications, including multi-modal sensor fusion, is to ensure that all of the techniques and tools are error ...
Mutual information, bit error rate and security in Wójcik's scheme
Zhanjun Zhang
2004-02-21T23:59:59.000Z
In this paper, the correct calculations of the mutual information of the whole transmission and the quantum bit error rate (QBER) are presented. Mistakes in the general conclusions regarding the mutual information, the QBER, and the security in Wójcik's paper [Phys. Rev. Lett. 90, 157901 (2003)] are pointed out.
Kernel Regression with Correlated Errors K. De Brabanter
Kernel Regression with Correlated Errors. K. De Brabanter, J. De Brabanter, J.A.K. Suykens ... It is a well-known problem that obtaining a correct bandwidth in nonparametric regression is difficult ... support vector machines for regression. Keywords: nonparametric regression, correlated errors, short ...
Ridge Regression Estimation Approach to Measurement Error Model
Shalabh
Ridge Regression Estimation Approach to Measurement Error Model. A.K.Md. Ehsanes Saleh, Carleton ... of the regression parameters is ill conditioned. We consider the Hoerl and Kennard (1970) type ridge regression (RR) modifications of the five quasi-empirical Bayes estimators of the regression parameters of a measurement error ...
Solving LWE problem with bounded errors in polynomial time
International Association for Cryptologic Research (IACR)
Solving LWE problem with bounded errors in polynomial time. Jintai Ding ... call the learning with bounded errors (LWBE) problems, we can solve it with complexity O(n^D). ... this problem corresponds to the learning parity with noise (LPN) problem. There are several ways to solve ...
Error Control of Iterative Linear Solvers for Integrated Groundwater Models
Dixon, Matthew; Brush, Charles; Chung, Francis; Dogrul, Emin; Kadir, Tariq
2010-01-01T23:59:59.000Z
An open problem that arises when using modern iterative linear solvers, such as the preconditioned conjugate gradient (PCG) method or Generalized Minimum RESidual method (GMRES) is how to choose the residual tolerance in the linear solver to be consistent with the tolerance on the solution error. This problem is especially acute for integrated groundwater models which are implicitly coupled to another model, such as surface water models, and resolve both multiple scales of flow and temporal interaction terms, giving rise to linear systems with variable scaling. This article uses the theory of 'forward error bound estimation' to show how rescaling the linear system affects the correspondence between the residual error in the preconditioned linear system and the solution error. Using examples of linear systems from models developed using the USGS GSFLOW package and the California State Department of Water Resources' Integrated Water Flow Model (IWFM), we observe that this error bound guides the choice of a prac...
Grid-scale Fluctuations and Forecast Error in Wind Power
Bel, G; Toots, M; Bandi, M M
2015-01-01T23:59:59.000Z
The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_{\tau}$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_{\zeta}$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no a priori knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_{\tau}$) and the scaling error ($e_{\zeta}$).
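The "memory kernel" idea can be sketched with a one-parameter exponential kernel that blends each forecast value with the smoothed history, suppressing high-frequency deviations. Both the exponential form and the alpha value here are illustrative assumptions, not the authors' kernel:

```python
import numpy as np

def exp_memory_kernel(forecast, alpha=0.3):
    """Apply a one-parameter exponential memory kernel to a forecast series.

    Illustrative stand-in for the memory kernel described in the abstract:
    smaller alpha gives the history more weight and damps high-frequency
    forecast fluctuations more strongly.
    """
    forecast = np.asarray(forecast, dtype=float)
    smoothed = np.empty_like(forecast)
    smoothed[0] = forecast[0]
    for i in range(1, len(forecast)):
        # Blend the new forecast value with the smoothed history.
        smoothed[i] = alpha * forecast[i] + (1.0 - alpha) * smoothed[i - 1]
    return smoothed
```

Applied to a rapidly alternating forecast, the kernel reduces the variance of the high-frequency component, which is the mechanism by which a timescale error of this kind can shrink.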
An Efficient Approach towards Mitigating Soft Errors Risks
Sadi, Muhammad Sheikh; Uddin, Md Nazim; Jürjens, Jan
2011-01-01T23:59:59.000Z
Smaller feature size, higher clock frequency and lower power consumption are of core concerns of today's nano-technology, which has been resulted by continuous downscaling of CMOS technologies. The resultant 'device shrinking' reduces the soft error tolerance of the VLSI circuits, as very little energy is needed to change their states. Safety critical systems are very sensitive to soft errors. A bit flip due to soft error can change the value of critical variable and consequently the system control flow can completely be changed which leads to system failure. To minimize soft error risks, a novel methodology is proposed to detect and recover from soft errors considering only 'critical code blocks' and 'critical variables' rather than considering all variables and/or blocks in the whole program. The proposed method shortens space and time overhead in comparison to existing dominant approaches.
Antonio Enea Romano
2007-01-27T23:59:59.000Z
We show that positive averaged acceleration obtained in LTB models through spatial averaging can require integration over a region beyond the event horizon of the central observer. We provide an example of a LTB model with positive averaged acceleration in which the luminosity distance does not contain information about the entire spatially averaged region, making the averaged acceleration unobservable. Since the cosmic acceleration is obtained from fitting the observed luminosity distance to a FRW model we conclude that in general a positive averaged acceleration in LTB models does not imply a positive FRW cosmic acceleration.
Fu, Weihua, E-mail: fuw@upmc.edu [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, PA (United States); Yang, Yong [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, PA (United States); Yue, Ning J. [Department of Radiation Oncology, UMDNJ-Robert Wood Johnson Medical School, The Cancer Institute of New Jersey, New Brunswick, NJ (United States); Heron, Dwight E.; Saiful Huq, M. [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, PA (United States)
2013-07-01T23:59:59.000Z
The purpose of this work is to investigate the dosimetric influence of the residual rotational setup errors on head and neck carcinoma (HNC) intensity-modulated radiation therapy (IMRT) with routine translational-only setup corrections, and the adequacy of this routine correction. A total of 66 kV cone beam computed tomography (CBCT) image sets were acquired on the first day of treatment and weekly thereafter for 10 patients with HNC and were registered with the corresponding planning CT images using two 3-dimensional (3D) rigid registration methods. Method 1 determines the translational setup errors only, and method 2 determines 6-degree-of-freedom (6D) setup errors, i.e., both rotational and translational setup errors. The 6D setup errors determined by method 2 were simulated in the treatment planning system and were then corrected using the corresponding translational data determined by method 1. For each patient, dose distributions for 6 to 7 fractions with various setup uncertainties were generated, and a plan sum was created to determine the total dose distribution through an entire course and was compared with the original treatment plan. The average rotational setup errors were 0.7° ± 1.0°, 0.1° ± 1.9°, and 0.3° ± 0.7° around the left-right (LR), anterior-posterior (AP), and superior-inferior (SI) axes, respectively. With translational corrections determined by method 1 alone, the dose deviation could be large from fraction to fraction. For a given fraction, the decrease in prescription dose coverage (V{sub p}) and the dose that covers 95% of target volume (D{sub 95}) could be up to 15.8% and 13.2% for the planning target volume (PTV), and the decrease in V{sub p} and the dose that covers 98% of target volume (D{sub 98}) could be up to 9.8% and 5.5% for the clinical target volume (CTV).
However, for the entire treatment course, for PTV, the plan sum showed that the average V{sub p} was decreased by 4.2% and D{sub 95} was decreased by 1.2 Gy for the first phase of IMRT with a prescription dose of 50 Gy. For CTV, the plan sum showed that the average V{sub p} was decreased by 0.8% and D{sub 98}, relative to prescription dose, was not decreased. Among these 10 patients, the plan sum showed that the dose to 1-cm{sup 3} spinal cord (D{sub 1cm{sup 3}}) increased no more than 1 Gy for 7 patients and more than 2 Gy for 2 patients. The average increase in D{sub 1cm{sup 3}} was 1.2 Gy. The study shows that, with translational setup error correction, the overall CTV V{sub p} has a minor decrease with a 5-mm margin from CTV to PTV. For the spinal cord, a noticeable dose increase was observed for some patients. So to decide whether the routine clinical translational setup error correction is adequate for this HNC IMRT technique, the dosimetric influence of rotational setup errors should be evaluated carefully from case to case when organs at risk are in close proximity to the target.
Doolan, P [University College London, London (United Kingdom); Massachusetts General Hospital, Boston, MA (United States); Dias, M [Massachusetts General Hospital, Boston, MA (United States); Dipartamento di Elettronica, Informazione e Bioingegneria - DEIB, Politecnico di Milano (Italy); Collins Fekete, C [Massachusetts General Hospital, Boston, MA (United States); Departement de physique, de genie physique et d'optique et Centre de recherche sur le cancer, Universite Laval, Quebec (Canada); Seco, J [Massachusetts General Hospital, Boston, MA (United States)
2014-06-01T23:59:59.000Z
Purpose: The procedure for proton treatment planning involves the conversion of the patient's X-ray CT from Hounsfield units into relative stopping powers (RSP), using a stoichiometric calibration curve (Schneider 1996). In clinical practice a 3.5% margin is added to account for the range uncertainty introduced by this process and other errors. RSPs for real tissues are calculated using composition data and the Bethe-Bloch formula (ICRU 1993). The purpose of this work is to investigate the impact that systematic errors in the stoichiometric calibration have on the proton range. Methods: Seven tissue inserts of the Gammex 467 phantom were imaged using our CT scanner. Their known chemical compositions (Watanabe 1999) were then used to calculate the theoretical RSPs, using the same formula as would be used for human tissues in the stoichiometric procedure. The actual RSPs of these inserts were measured using a Bragg peak shift measurement in the proton beam at our institution. Results: The theoretical calculation of the RSP was lower than the measured RSP values, by a mean/max error of -1.5%/-3.6%. For all seven inserts the theoretical approach underestimated the RSP, with errors variable across the range of Hounsfield units. Systematic errors for lung (average of two inserts), adipose, and cortical bone were -3.0%/-2.1%/-0.5%, respectively. Conclusion: There is a systematic underestimation caused by the theoretical calculation of RSP; a crucial step in the stoichiometric calibration procedure. As such, we propose that proton calibration curves should be based on measured RSPs. Investigations will be made to see if the same systematic errors exist for biological tissues. The impact of these differences on the range of proton beams, for phantoms and patient scenarios, will be investigated. This project was funded equally by the Engineering and Physical Sciences Research Council (UK) and Ion Beam Applications (Louvain-La-Neuve, Belgium)
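The comparison metric above is an average (and maximum) percent difference between theoretical and measured RSP. A minimal sketch of that computation, with illustrative numbers rather than the Gammex insert data:

```python
def mean_abs_percent_diff(measured, theoretical):
    """Average absolute percent difference between measured and theoretical
    relative stopping power (RSP) values, using the measured value as the
    reference. Returns (mean absolute percent difference, signed percent
    differences). Input values in examples are illustrative only.
    """
    diffs = [100.0 * (t - m) / m for m, t in zip(measured, theoretical)]
    mean_abs = sum(abs(d) for d in diffs) / len(diffs)
    return mean_abs, diffs
```

Negative signed differences correspond to the systematic underestimation reported in the Results: the theoretical RSP falls below the measured value for every insert.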
Polikar, Robi
Model comparison for automatic characterization and classification of average ERPs using visual ... December 2008. Keywords: EEG; ERP; Attention; P300; N200; Oddball; Pattern recognition; Linear discriminant ... responses from averaged event-related potentials (ERPs), along with identifying appropriate features ...
Fact #638: August 30, 2010 Average Expenditure for a New Car Declines in Relation to Family Earnings
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Measuring worst-case errors in a robot workcell
Simon, R.W.; Brost, R.C.; Kholwadwala, D.K. [Sandia National Labs., Albuquerque, NM (United States). Intelligent Systems and Robotics Center
1997-10-01T23:59:59.000Z
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.
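The tension the abstract describes, containing all deviations without being overly conservative, can be made concrete with a trivial empirical bound: take the largest observed absolute deviation and inflate it by a safety margin. Both the estimator and the margin value here are illustrative assumptions, not the paper's measurement procedure:

```python
import numpy as np

def worst_case_bound(deviations, margin=1.1):
    """Empirical worst-case error bound from repeated measurements.

    Returns the largest observed absolute deviation inflated by a safety
    margin, so the bound contains every observed deviation while staying
    close to the data. The margin value is an assumption for illustration;
    too large a margin gives an overly conservative bound.
    """
    return margin * float(np.max(np.abs(deviations)))
```

A larger margin trades tightness for confidence that unobserved deviations are still contained, which is exactly the trade-off the experiments above are designed to quantify.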
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
McInerney, Peter; Adams, Paul; Hadi, Masood Z.
2014-01-01T23:59:59.000Z
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study, Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase, we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
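PCR fidelity comparisons of this kind are conventionally normalized per base and per template doubling, so that enzymes tested at different amplification levels can be compared. A sketch of that standard normalization, with illustrative inputs (not the study's data):

```python
import math

def pcr_error_rate(n_mutations, bases_sequenced, fold_amplification):
    """Per-base, per-doubling PCR error rate, the conventional fidelity
    metric: mutations observed / (bases sequenced x template doublings),
    where doublings = log2(fold amplification). Inputs in any example are
    illustrative, not the data from the study above.
    """
    doublings = math.log2(fold_amplification)
    return n_mutations / (bases_sequenced * doublings)
```

The ">10x lower" comparison in the abstract is a ratio of two such normalized rates, which is why methodological differences (amplification level, target sequence) must be controlled before enzymes can be compared directly.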
Logical Error Rate Scaling of the Toric Code
Fern H. E. Watson; Sean D. Barrett
2014-09-26T23:59:59.000Z
To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds, meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behavior in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead: the total number of physical qubits required to perform error correction.
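The low-error-rate regime described above is often summarized by the scaling P_L ~ A p^(d/2) for code distance d, obtained by counting the minimal error configurations that defeat the matching decoder. A one-line sketch of that scaling law, with the prefactor left as a free (assumed) parameter:

```python
def logical_error_rate_low_p(p, d, prefactor=1.0):
    """Commonly quoted low-p scaling of a distance-d toric code's logical
    error rate under matching decoding: P_L ~ A * p**(d/2). The prefactor
    A, which counts minimal failing error configurations, is left as a
    free parameter here (assumed, not derived).
    """
    return prefactor * p ** (d / 2.0)
```

The overhead question in the abstract follows from inverting this relation: for a target logical error rate and a given physical rate p, the scaling fixes the required distance d and hence the number of physical qubits.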
Balancing aggregation and smoothing errors in inverse models
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Turner, A. J.; Jacob, D. J.
2015-01-13T23:59:59.000Z
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
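Method (1), grid coarsening, is the simplest of the three reduction schemes: merge runs of adjacent state-vector elements into single coarse elements. A minimal sketch with unweighted averaging (real inverse models would weight by grid-cell area or prior covariance, so the plain mean is an assumption of this illustration):

```python
import numpy as np

def coarsen_state_vector(x, factor):
    """Reduce state-vector dimension by merging `factor` adjacent elements
    into their mean (grid coarsening, method 1 in the abstract).

    Unweighted means are an assumption of this sketch; any trailing
    elements that do not fill a complete group are dropped for simplicity.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % factor  # drop any incomplete trailing group
    return x[:n].reshape(-1, factor).mean(axis=1)
```

Coarsening reduces smoothing error (fewer elements to constrain) at the cost of aggregation error, since the fixed grouping imposes prior relationships among the merged elements, which is the trade-off the paper quantifies.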
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01T23:59:59.000Z
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work finds the reflector slope errors from the reflection of the absorber together with an independent measurement of the absorber location; the accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
Wind Power Forecasting Error Distributions: An International Comparison; Preprint
Hodge, B. M.; Lew, D.; Milligan, M.; Holttinen, H.; Sillanpaa, S.; Gomez-Lazaro, E.; Scharff, R.; Soder, L.; Larsen, X. G.; Giebel, G.; Flynn, D.; Dobschinski, J.
2012-09-01T23:59:59.000Z
Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.
Average cost optimal threshold strategies for remote estimation with communication cost
Mahajan, Aditya
We study remote estimation with a communication cost: the estimator must estimate the Markov process using its past observations. We investigate the average cost problem and the optimal thresholds as a function of communication cost.
Direct and absolute temperature mapping and heat transfer measurements in diode-end-pumped Yb:YAG
Paris-Sud XI, Université de
Direct and absolute temperature mapping and heat transfer measurements in a diode-end-pumped Yb:YAG crystal, using a calibrated infrared camera with a 60-µm spatial resolution. The dynamics of thermal effects is also presented. PACS 42.55.Xi (diode-pumped lasers).
Gelb, Michael
Design and Synthesis of Visible Isotope-Coded Affinity Tags for Absolute Quantification. Mass spectrometry is most useful when quantitative data is also obtained. We recently introduced isotope tagging with selective enrichment of tag peptides; another cysteine peptide enrichment and isotope tagging scheme has also been described.
Massari, D.; Ferraro, F. R.; Dalessandro, E.; Lanzoni, B. [Dipartimento di Fisica e Astronomia, Universitŕ degli Studi di Bologna, v.le Berti Pichat 6/2, I-40127 Bologna (Italy); Bellini, A.; Van der Marel, R. P.; Anderson, J. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)
2013-12-10T23:59:59.000Z
We have measured absolute proper motions for the three populations intercepted in the direction of the Galactic globular cluster NGC 6681: the cluster itself, the Sagittarius dwarf spheroidal galaxy, and the field. For this, we used Hubble Space Telescope ACS/WFC and WFC3/UVIS optical imaging data separated by a temporal baseline of 5.464 yr. Five background galaxies were used to determine the zero point of the absolute-motion reference frame. The resulting absolute proper motion of NGC 6681 is (μ{sub α}cos δ, μ{sub δ}) = (1.58 ± 0.18, –4.57 ± 0.16) mas yr{sup –1}. This is the first estimate ever made for this cluster. For the Sgr dSph we obtain (μ{sub α}cos δ, μ{sub δ}) = (–2.54 ± 0.18, –1.19 ± 0.16) mas yr{sup –1}, consistent with previous measurements and with the values predicted by theoretical models. The absolute proper motion of the Galaxy population in our field of view is (μ{sub α}cos δ, μ{sub δ}) = (–1.21 ± 0.27, –4.39 ± 0.26) mas yr{sup –1}. In this study we also use background Sagittarius dwarf spheroidal stars to determine the rotation of the globular cluster in the plane of the sky and find that NGC 6681 is not rotating significantly: v{sub rot} = 0.82 ± 1.02 km s{sup –1} at a distance of 1' from the cluster center.
Khare, Sanjay V.
Absolute orientation-dependent anisotropic TiN(111) island step energies and stiffnesses. Island edges are formed by alternating 110 steps, which form 100 and 110 nanofacets with the terrace. Relative step energies are determined per unit TiN area of the island. We find that for alternating S1 and S2 110 steps, the step-energy ratio is approximately 0.72.
Libbrecht, Kenneth G.
A versatile thermoelectric temperature controller with 10 mK reproducibility and 100 mK absolute accuracy. December 2009. We describe a general-purpose thermoelectric temperature controller with 1 mK stability and 10 mK reproducibility, using thermoelectric modules to heat or cool in the −40 to 40 °C range. A schematic of our controller is included.
Meirovitch, Hagai
Absolute Free Energy and Entropy of a Mobile Loop of the Enzyme Acetylcholinesterase. Dissociation measurements suggest a free-energy (F) penalty for the loop displacement. The contribution of water to the total free energy is calculated for water densities close to the experimental value.
Absolute frequency measurement of the In$^{+}$ clock transition with a mode-locked laser
J. von Zanthier; Th. Becker; M. Eichenseer; A. Yu. Nevsky; Ch. Schwedes; E. Peik; H. Walther; R. Holzwarth; J. Reichert; Th. Udem; T. W. Hänsch; P. V. Pokasov; M. N. Skvortsov; S. N. Bagayev
2000-10-05T23:59:59.000Z
The absolute frequency of the In$^{+}$ $5s^{2 1}S_{0}$ - $5s5p^{3}P_{0}$ clock transition at 237 nm was measured with an accuracy of 1.8 parts in $10^{13}$. Using a phase-coherent frequency chain, we compared the $^{1}S_{0}$ - $^{3}P_{0}$ transition with a methane-stabilized He-Ne laser at 3.39 $\mu$m which was calibrated against an atomic cesium fountain clock. A frequency gap of 37 THz at the fourth harmonic of the He-Ne standard was bridged by a frequency comb generated by a mode-locked femtosecond laser. The frequency of the In$^{+}$ clock transition was found to be $1 267 402 452 899.92 (0.23)$ kHz, the accuracy being limited by the uncertainty of the He-Ne laser reference. This represents an improvement in accuracy of more than 2 orders of magnitude on previous measurements of the line and now stands as the most accurate measurement of an optical transition in a single ion.
Population effects on the red giant clump absolute magnitude: the K band
Salaris, Maurizio
2002-01-01T23:59:59.000Z
We present a detailed analysis of the behaviour of the Red Clump K-band absolute magnitude (M(K,RC)) in simple and composite stellar populations, in light of its use as standard candle for distance determinations. The advantage of using M(K,RC), following recent empirical calibrations of its value for the solar neighbourhood, arises from its very low sensitivity to the extinction by interstellar dust. We provide data and equations which allow the determination of the K-band population correction Delta(M(K,RC)) (difference between the Red Clump brightness in the solar neighbourhood and in the population under scrutiny) for any generic stellar population. These data complement the results presented in Girardi & Salaris(2001) for the V- and I-band. We show how data from galactic open clusters consistently support our predicted Delta(M(V,RC)), Delta(M(I,RC)) and Delta(M(K,RC)) values. Multiband VIK population corrections for various galaxy systems are provided. They can be used in conjunction with the method ...
A self-consistent, absolute isochronal age scale for young moving groups in the solar neighbourhood
Bell, Cameron P M; Naylor, Tim
2015-01-01T23:59:59.000Z
We present a self-consistent, absolute isochronal age scale for young moving groups in the solar neighbourhood based on homogeneous fitting of semi-empirical pre-main-sequence model isochrones using the tau^2 maximum-likelihood fitting statistic of Naylor & Jeffries in the M_V, V-J colour-magnitude diagram. The final adopted ages for the groups are: 149+51-19 Myr for the AB Dor moving group, 24+/-3 Myr for the β Pic moving group (BPMG), 45+11-7 Myr for the Carina association, 42+6-4 Myr for the Columba association, 11+/-3 Myr for the η Cha cluster, 45+/-4 Myr for the Tucana-Horologium moving group (Tuc-Hor), 10+/-3 Myr for the TW Hya association, and 22+4-3 Myr for the 32 Ori group. At this stage we are uncomfortable assigning a final, unambiguous age to the Argus association as our membership list for the association appears to suffer from a high level of contamination, and therefore it remains unclear whether these stars represent a single population of co...
In-Flight Measurement of the Absolute Energy Scale of the Fermi Large Area Telescope
Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W.B.; Axelsson, M.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Bloom, E.D.; Bonamente, E.; Borgland, A.W.; Bouvier, A.; Bregeon, J.; Brez, A.; Brigida, M.; Bruel, P.; Buehler, R.; Buson, S.; and more authors (Stanford U. HEPL / SLAC / KIPAC, UC Santa Cruz, INFN, and other institutions)
2012-09-20T23:59:59.000Z
The Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to survey the gamma-ray sky from 20 MeV to several hundreds of GeV. In this energy band there are no astronomical sources with sufficiently well known and sharp spectral features to allow an absolute calibration of the LAT energy scale. However, the geomagnetic cutoff in the cosmic ray electron-plus-positron (CRE) spectrum in low Earth orbit does provide such a spectral feature. The energy and spectral shape of this cutoff can be calculated with the aid of a numerical code tracing charged particles in the Earth's magnetic field. By comparing the cutoff value with that measured by the LAT in different geomagnetic positions, we have obtained several calibration points between ~6 and ~13 GeV with an estimated uncertainty of ~2%. An energy calibration with such high accuracy reduces the systematic uncertainty in LAT measurements of, for example, the spectral cutoff in the emission from gamma ray pulsars.
Servo control booster system for minimizing following error
Wise, William L. (Mountain View, CA)
1985-01-01T23:59:59.000Z
A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second position feedback control loop at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
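The by-exception logic in this abstract can be sketched as a toy control loop. The gains, dynamics, and resolution increment below are illustrative assumptions, not values from the patent:

```python
# Hedged sketch of a "by exception" two-loop servo: a conventional
# proportional loop runs continuously, and a second position loop engages
# only when the command-to-response error reaches the feedback resolution
# increment delta_s_r. Gains and plant dynamics are illustrative only.

def servo_step(position, command, delta_s_r, kp=0.5, k_corr=1.0):
    error = command - position
    if abs(error) >= delta_s_r:
        # Second position-feedback loop: precise position correction signal.
        return position + k_corr * error
    # Error below the resolution increment: conventional control only,
    # the second loop is transparent.
    return position + kp * error

position, command, delta_s_r = 0.0, 10.0, 0.01
for _ in range(50):
    position = servo_step(position, command, delta_s_r)
print(abs(command - position) < delta_s_r)  # error driven below resolution
```

With these toy gains the correction loop drives the error below ΔS_R, after which control reverts to the conventional loop, mirroring the hand-off the patent describes.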
Sensitivity of OFDM Systems to Synchronization Errors and Spatial Diversity
Zhou, Yi
2012-02-14T23:59:59.000Z
jitter cause inter-carrier interference. The overall system performance in terms of symbol error rate is limited by the inter-carrier interference. For a reliable information reception, compensatory measures must be taken. The second part...
Universally Valid Error-Disturbance Relations in Continuous Measurements
Atsushi Nishizawa; Yanbei Chen
2015-05-31T23:59:59.000Z
In quantum physics, measurement error and disturbance were first naively thought to be simply constrained by the Heisenberg uncertainty relation. Later, more rigorous analysis showed that the error and disturbance satisfy more subtle inequalities. Several versions of universally valid error-disturbance relations (EDR) have already been obtained and experimentally verified in the regimes where naive applications of the Heisenberg uncertainty relation failed. However, these EDRs were formulated for discrete measurements. In this paper, we consider continuous measurement processes and obtain new EDR inequalities in the Fourier space: in terms of the power spectra of the system and probe variables. By applying our EDRs to a linear optomechanical system, we confirm that a tradeoff relation between error and disturbance leads to the existence of an optimal strength of the disturbance in a joint measurement. Interestingly, even with this optimal case, the inequality of the new EDR is not saturated because of doubly existing standard quantum limits in the inequality.
Robust mixtures in the presence of measurement errors
Jianyong Sun; Ata Kaban; Somak Raychaudhury
2007-09-06T23:59:59.000Z
We develop a mixture-based approach to robust density modeling and outlier detection for experimental multivariate data that includes measurement error information. Our model is designed to infer atypical measurements that are not due to errors, aiming to retrieve potentially interesting peculiar objects. Since exact inference is not possible in this model, we develop a tree-structured variational EM solution. This compares favorably against a fully factorial approximation scheme, approaching the accuracy of a Markov-Chain-EM, while maintaining computational simplicity. We demonstrate the benefits of including measurement errors in the model, in terms of improved outlier detection rates in varying measurement uncertainty conditions. We then use this approach in detecting peculiar quasars from an astrophysical survey, given photometric measurements with errors.
Predicting Intentional Tax Error Using Open Source Literature and Data
Open-source data allow us to estimate intentional error for each PUMS respondent (or agent), in certain line item/taxpayer categories, allowing us to construct distributions. Contents include results of a meta-analysis and intentional error in line items/taxpayer categories.
Diagnosing multiplicative error by lensing magnification of type Ia supernovae
Zhang, Pengjie
2015-01-01T23:59:59.000Z
Weak lensing causes spatially coherent fluctuations in flux of type Ia supernovae (SNe Ia). This lensing magnification allows for weak lensing measurement independent of cosmic shear. It is free of shape measurement errors associated with cosmic shear and can therefore be used to diagnose and calibrate multiplicative error. Although this lensing magnification is difficult to measure accurately in auto correlation, its cross correlation with cosmic shear and galaxy distribution in overlapping area can be measured to significantly higher accuracy. Therefore these cross correlations can put useful constraint on multiplicative error, and the obtained constraint is free of cosmic variance in the weak lensing field. We present two methods implementing this idea and estimate their performances. We find that, with ~1 million SNe Ia that can be achieved by the proposed D2k survey with the LSST telescope (Zhan et al. 2008), multiplicative error of ~0.5% for source galaxies at z_s ~ 1 can be detected and la...
Inflated applicants: Attribution errors in performance evaluation by professionals
Swift, Samuel; Moore, Don; Sharek, Zachariah; Gino, Francesca
2013-01-01T23:59:59.000Z
performance among applicants from each "type" of school and interview performance. Each school provided multi-year data. PLOS ONE, July 2013, Volume 8, Issue 7, e69258: Attribution Errors in Performance Evaluation.
Removing Systematic Errors from Rotating Shadowband Pyranometer Data Frank Vignola
Oregon, University of
A rotating shadowband passes in front of the pyranometer to briefly shade it once a minute, and direct horizontal irradiance is calculated from the resulting readings. The data are used in programs evaluating the performance of photovoltaic systems, and systematic errors in the data
Honest Confidence Intervals for the Error Variance in Stepwise Regression
Stine, Robert A.
Honest Confidence Intervals for the Error Variance in Stepwise Regression. Dean P. Foster and Robert A. Stine. Simpler alternatives are often used; these simpler algorithms (e.g., forward or backward stepwise regression) obtain
Wind Power Forecasting Error Distributions over Multiple Timescales: Preprint
Hodge, B. M.; Milligan, M.
2011-03-01T23:59:59.000Z
In this paper, we examine the shape of the persistence model error distribution for ten different wind plants in the ERCOT system over multiple timescales. Comparisons are made between the experimental distribution shape and that of the normal distribution.
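As context for the comparison above, a persistence-model error distribution can be computed as sketched below; the synthetic autocorrelated series and all parameters are stand-in assumptions, not the ERCOT wind plant data:

```python
# Illustrative sketch of a persistence-model forecast error distribution.
# The synthetic "wind power" series below is made up; real studies use
# operational forecast and plant data.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic autocorrelated signal (AR(1), illustrative only).
power = np.empty(10_000)
power[0] = 0.0
for t in range(1, power.size):
    power[t] = 0.95 * power[t - 1] + rng.normal(scale=0.1)

horizon = 6  # persistence forecast: the current value is the forecast
errors = power[horizon:] - power[:-horizon]

mean, std = errors.mean(), errors.std()
# Excess kurtosis above 0 would indicate heavier tails than a normal
# distribution, a feature often examined in these comparisons.
excess_kurtosis = ((errors - mean) ** 4).mean() / std**4 - 3.0
print(f"mean={mean:.3f} std={std:.3f} excess_kurtosis={excess_kurtosis:.3f}")
```

Comparing the empirical shape (moments, tails) against a fitted normal is the kind of distribution-shape analysis the paper performs across timescales.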
A Taxonomy to Enable Error Recovery and Correction in Software Vilas Sridharan
Kaeli, David R.
A Taxonomy to Enable Error Recovery and Correction in Software. Vilas Sridharan, ECE Department. For years, reliability research has largely used a taxonomy distinguishing undetected errors from corrected errors (CE). While this taxonomy is suitable to characterize hardware error detection and correction
TESLA-FEL 2009-07 Errors in Reconstruction of Difference Orbit
Contents: 1. Introduction; 2. Standard Least Squares Solution; 3. Error Emittance and Error Twiss Parameters. As the position of the reconstruction point changes, we will introduce error Twiss parameters and an invariant error; the desired error at the point of interest has to be achieved by matching the error Twiss parameters at this point.
Suboptimal quantum-error-correcting procedure based on semidefinite programming
Naoki Yamamoto; Shinji Hara; Koji Tsumura
2006-06-13T23:59:59.000Z
In this paper, we consider a simplified error-correcting problem: for a fixed encoding process, to find a cascade connected quantum channel such that the worst fidelity between the input and the output becomes maximum. With the use of the one-to-one parametrization of quantum channels, a procedure finding a suboptimal error-correcting channel based on a semidefinite programming is proposed. The effectiveness of our method is verified by an example of the bit-flip channel decoding.
Mesoscale predictability and background error covariance estimation through ensemble forecasting
Ham, Joy L
2002-01-01T23:59:59.000Z
MESOSCALE PREDICTABILITY AND BACKGROUND ERROR COVARIANCE ESTIMATION THROUGH ENSEMBLE FORECASTING. A Thesis by JOY L. HAM, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, December 2002. Major Subject: Atmospheric Sciences.
Using doppler radar images to estimate aircraft navigational heading error
Doerry, Armin W. (Albuquerque, NM); Jordan, Jay D. (Albuquerque, NM); Kim, Theodore J. (Albuquerque, NM)
2012-07-03T23:59:59.000Z
A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.
Fault-Tolerant Thresholds for Encoded Ancillae with Homogeneous Errors
Bryan Eastin
2006-11-14T23:59:59.000Z
I describe a procedure for calculating thresholds for quantum computation as a function of error model given the availability of ancillae prepared in logical states with independent, identically distributed errors. The thresholds are determined via a simple counting argument performed on a single qubit of an infinitely large CSS code. I give concrete examples of thresholds thus achievable for both Steane and Knill style fault-tolerant implementations and investigate their relation to threshold estimates in the literature.
Coding Techniques for Error Correction and Rewriting in Flash Memories
Mohammed, Shoeb Ahmed
2010-10-12T23:59:59.000Z
CODING TECHNIQUES FOR ERROR CORRECTION AND REWRITING IN FLASH MEMORIES. A Thesis by SHOEB AHMED MOHAMMED, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, August 2010. Major Subject: Electrical Engineering.
Compiler-Assisted Detection of Transient Memory Errors
Tavarageri, Sanket; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2014-06-09T23:59:59.000Z
The probability of bit flips in hardware memory systems is projected to increase significantly as memory systems continue to scale in size and complexity. Effective hardware-based error detection and correction requires that the complete data path, involving all parts of the memory system, be protected with sufficient redundancy. First, this may be costly to employ on commodity computing platforms and second, even on high-end systems, protection against multi-bit errors may be lacking. Therefore, augmenting hardware error detection schemes with software techniques is of considerable interest. In this paper, we consider software-level mechanisms to comprehensively detect transient memory faults. We develop novel compile-time algorithms to instrument application programs with checksum computation codes so as to detect memory errors. Unlike prior approaches that employ checksums on computational and architectural state, our scheme verifies every data access and works by tracking variables as they are produced and consumed. Experimental evaluation demonstrates that the proposed comprehensive error detection solution is viable as a completely software-only scheme. We also demonstrate that with limited hardware support, overheads of error detection can be further reduced.
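A toy version of the produce/consume checksum-tracking idea might look like the sketch below. This is not the paper's compiler instrumentation; the class name, API, and the CRC-32 choice are assumptions made for illustration:

```python
# Hedged sketch: record a checksum for each variable when it is produced,
# and verify it when it is consumed, so a bit flip that occurs in memory
# between production and consumption is detected purely in software.
import zlib

class CheckedStore:
    def __init__(self):
        self._data = {}
        self._sums = {}

    def produce(self, name, value: bytes):
        self._data[name] = value
        self._sums[name] = zlib.crc32(value)

    def consume(self, name) -> bytes:
        value = self._data[name]
        if zlib.crc32(value) != self._sums[name]:
            raise RuntimeError(f"transient memory error detected in {name!r}")
        return value

store = CheckedStore()
store.produce("x", b"\x01\x02\x03\x04")
assert store.consume("x") == b"\x01\x02\x03\x04"

# Simulate a single-bit flip in memory between produce and consume.
store._data["x"] = b"\x01\x02\x03\x05"
try:
    store.consume("x")
except RuntimeError as e:
    print("detected:", e)
```

A compiler-based scheme would emit the equivalent of these produce/consume checks automatically at each definition and use of a tracked variable, rather than requiring an explicit store object.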
Meirovitch, Hagai
New Method for Calculating the Absolute Free Energy of Binding: The Effect of a Mobile Loop. HSMD is extended here for the first time to the calculation of the absolute free energy of binding, and the contribution of a mobile loop to the total free energy of binding is calculated here for the first time. Our result, A0 = -24
Meirovitch, Hagai
Lower and upper bounds for the absolute free energy by the hypothetical scanning Monte Carlo method. The hypothetical scanning (HS) method is a general approach for calculating the absolute entropy S and free energy F through the analysis of a single configuration. © 2004 American Institute of Physics.
"RSE Table E2.1. Relative Standard Errors for Table E2.1;"
U.S. Energy Information Administration (EIA) Indexed Site
EFFECT OF MANUFACTURING ERRORS ON FIELD QUALITY OF DIPOLE MAGNETS FOR THE SSC
Meuser, R.B.
2010-01-01T23:59:59.000Z
Mag Note-27 (1985): Effect of Manufacturing Errors on Field Quality of Dipole Magnets for the SSC. Manufacturing error mode groups are listed in Table 2; see also Fig. 2.
A new and efficient error resilient entropy code for image and video compression
Min, Jungki
1999-01-01T23:59:59.000Z
Image and video compression standards such as JPEG, MPEG, H.263 are severely sensitive to errors. Among typical error propagation mechanisms in video compression schemes, loss of block synchronization causes the worst result. Even one bit error...
Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors
Sarkar, Abhra
2014-06-24T23:59:59.000Z
Although the literature on measurement error problems is quite extensive, solutions to even the most fundamental measurement error problems like density deconvolution and regression with errors-in-covariates are available ...
Absolute diffuse calibration of IRAC through mid-infrared and radio study of HII regions
Martin Cohen; Anne J. Green; Marilyn R. Meade; Brian Babler; Remy Indebetouw; Barbara A. Whitney; Christer Watson; Mark Wolfire; Mike J. Wolff; John S. Mathis; Edward B. Churchwell; .
2006-10-19T23:59:59.000Z
We investigate the diffuse absolute calibration of the InfraRed Array Camera on the Spitzer Space Telescope at 8.0 microns using a sample of 43 HII regions with a wide range of morphologies near GLON=312deg. For each region we carefully measure sky-subtracted, point-source-subtracted, areally-integrated IRAC 8.0-micron fluxes and compare these with Midcourse Space eXperiment (MSX) 8.3-micron images at two different spatial resolutions, and with radio continuum maps. We determine an accurate median ratio of IRAC 8.0-micron/MSX 8.3-micron fluxes, of 1.55+/-0.15. From robust spectral energy distributions of these regions we conclude that the present 8.0-micron diffuse calibration of the SST is 36% too high compared with the MSX validated calibration, perhaps due to scattered light inside the camera. This is an independent confirmation of the result derived for the diffuse calibration of IRAC by the Spitzer Science Center (SSC). From regression analyses we find that 843-MHz radio fluxes of HII regions and mid-infrared (MIR) fluxes are linearly related for MSX at 8.3 microns and Spitzer at 8.0 microns, confirming the earlier MSX result by Cohen & Green. The median ratio of MIR/843-MHz diffuse continuum fluxes is 600 times smaller in nonthermal than thermal regions, making it a sharp discriminant. The ratios are largely independent of morphology up to a size of ~24 arcsec. We provide homogeneous radio and MIR morphologies for all sources. MIR morphology is not uniquely related to radio structure. Compact regions may have MIR filaments and/or diffuse haloes, perhaps infrared counterparts to weakly ionized radio haloes found around compact HII regions. We offer two IRAC colour-colour plots as quantitative diagnostics of diffuse HII regions.
A New Light-Speed Anisotropy Experiment: Absolute Motion and Gravitational Waves Detected
Reginald T Cahill
2006-10-11T23:59:59.000Z
Data from a new experiment measuring the anisotropy of the one-way speed of EM waves in a coaxial cable, gives the speed of light as 300,000+/-400+/-20km/s in a measured direction RA=5.5+/-2hrs, Dec=70+/-10deg S, is shown to be in excellent agreement with the results from seven previous anisotropy experiments, particularly those of Miller (1925/26), and even those of Michelson and Morley (1887). The Miller gas-mode interferometer results, and those from the RF coaxial cable experiments of Torr and Kolen (1983), De Witte (1991) and the new experiment all reveal the presence of gravitational waves, as indicated by the last +/- variations above, but of a kind different from those supposedly predicted by General Relativity. The understanding of the operation of the Michelson interferometer in gas-mode was only achieved in 2002 and involved a calibration for the interferometer that necessarily involved Special Relativity effects and the refractive index of the gas in the light paths. The results demonstrate the reality of the Fitzgerald-Lorentz contraction as an observer independent relativistic effect. A common misunderstanding is that the anisotropy of the speed of light is necessarily in conflict with Special Relativity and Lorentz symmetry - this is explained. All eight experiments and theory show that we have both anisotropy of the speed of light and relativistic effects, and that a dynamical 3-space exists - that absolute motion through that space has been repeatedly observed since 1887. These developments completely change fundamental physics and our understanding of reality.
G. L. Fogli; E. Lisi; A. Marrone; A. Melchiorri; A. Palazzo; P. Serra; J. Silk; A. Slosar
2006-08-04T23:59:59.000Z
In the light of recent neutrino oscillation and non-oscillation data, we revisit the phenomenological constraints applicable to three observables sensitive to absolute neutrino masses: The effective neutrino mass in single beta decay (m_beta); the effective Majorana neutrino mass in neutrinoless double beta decay (m_2beta); and the sum of neutrino masses in cosmology (Sigma). In particular, we include the constraints coming from the first Main Injector Neutrino Oscillation Search (MINOS) data and from the Wilkinson Microwave Anisotropy Probe (WMAP) three-year (3y) data, as well as other relevant cosmological data and priors. We find that the largest neutrino squared mass difference is determined with a 15% accuracy (at 2-sigma) after adding MINOS to world data. We also find upper bounds on the sum of neutrino masses Sigma ranging from ~2 eV (WMAP-3y data only) to ~0.2 eV (all cosmological data) at 2-sigma, in agreement with previous studies. In addition, we discuss the connection of such bounds with those placed on the matter power spectrum normalization parameter sigma_8. We show how the partial degeneracy between Sigma and sigma_8 in WMAP-3y data is broken by adding further cosmological data, and how the overall preference of such data for relatively high values of sigma_8 pushes the upper bound of Sigma in the sub-eV range. Finally, for various combination of data sets, we revisit the (in)compatibility between current Sigma and m_2beta constraints (and claims), and derive quantitative predictions for future single and double beta decay experiments.
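The interplay between the oscillation mass splittings and the cosmological sum Sigma can be made concrete with a quick back-of-envelope computation; the splitting values below are assumed round numbers of the magnitudes discussed in this literature, not the paper's fit results:

```python
# Back-of-envelope arithmetic for the sum of neutrino masses Sigma given the
# oscillation mass splittings (illustrative assumed values, in eV^2).
import math

dm21_sq = 7.9e-5   # solar splitting (assumed)
dm31_sq = 2.4e-3   # atmospheric splitting (assumed)

# Normal hierarchy with the lightest mass m1 -> 0:
m1 = 0.0
m2 = math.sqrt(m1**2 + dm21_sq)
m3 = math.sqrt(m1**2 + dm31_sq)
sigma_min = m1 + m2 + m3
print(f"minimal Sigma (normal hierarchy) ~ {sigma_min:.3f} eV")
```

The resulting floor of roughly 0.06 eV in the normal hierarchy sits well below the ~0.2-2 eV upper bounds on Sigma quoted above, which is why progressively tighter cosmological data remain informative.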
G. L. Fogli; E. Lisi; A. Marrone; A. Melchiorri; A. Palazzo; P. Serra; J. Silk
2004-11-17T23:59:59.000Z
In the context of three-flavor neutrino mixing, we present a thorough study of the phenomenological constraints applicable to three observables sensitive to absolute neutrino masses: The effective neutrino mass in Tritium beta decay (m_beta); the effective Majorana neutrino mass in neutrinoless double beta decay (m_2beta); and the sum of neutrino masses in cosmology (Sigma). We discuss the correlations among these variables which arise from the combination of all the available neutrino oscillation data, in both normal and inverse neutrino mass hierarchy. We set upper limits on m_beta by combining updated results from the Mainz and Troitsk experiments. We also consider the latest results on m_2beta from the Heidelberg-Moscow experiment, both with and without the lower bound claimed by such experiment. We derive upper limits on Sigma from an updated combination of data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite and the 2 degrees Fields (2dF) Galaxy Redshifts Survey, with and without Lyman-alpha forest data from the Sloan Digital Sky Survey (SDSS), in models with a non-zero running of the spectral index of primordial inflationary perturbations. The results are discussed in terms of two-dimensional projections of the globally allowed region in the (m_beta,m_2beta,Sigma) parameter space, which neatly show the relative impact of each data set. In particular, the (in)compatibility between Sigma and m_2beta constraints is highlighted for various combinations of data. We also briefly discuss how future neutrino data (both oscillatory and non-oscillatory) can further probe the currently allowed regions.
Absolute kinematics of radio source components in the complete S5 polar cap sample
M. A. Perez-Torres; J. M. Marcaide; J. C. Guirado; E. Ros
2004-08-31T23:59:59.000Z
We observed the thirteen extragalactic radio sources of the complete S5 polar cap sample at 15.4 GHz with the Very Long Baseline Array, on 27 July 1999 (1999.57) and 15 June 2000 (2000.46). We present the maps from those two epochs, along with maps obtained from observations of the 2 cm VLBA survey for some of the sources of the sample, making a total of 40 maps. We discuss the apparent morphological changes displayed by the radio sources between the observing epochs. Our VLBA observations correspond to the first two epochs at 15.4 GHz of a program to study the absolute kinematics of the radio source components of the members of the sample, by means of phase delay astrometry at 8.4 GHz, 15.4 GHz, and 43 GHz. Our 15.4 GHz VLBA imaging allowed us to disentangle the inner milliarcsecond structure of some of the sources, thus resolving components that appeared blended at 8.4 GHz. For most of the sources, we identified the brightest feature in each radio source with the core. These identifications are supported by the spectral index estimates for those brightest features, which are in general flat, or even inverted. Most of the sources display core-dominance in the overall emission. We find that three of the sources have their most inverted spectrum component shifted with respect to the origin in the map, which approximately coincides with the peak-of-brightness at both 15.4 GHz and 8.4 GHz.
V-228: RealPlayer Buffer Overflow and Memory Corruption Error...
Broader source: Energy.gov (indexed) [DOE]
A memory corruption error can be exploited to execute arbitrary code on the target system. IMPACT: access control error. SOLUTION: the vendor recommends upgrading to version 16.0.3.51.
High-average-power, diode-pumped solid state lasers for energy and industrial applications
Krupke, W.F.
1994-03-02T23:59:59.000Z
Progress at LLNL in the development of high-average-power diode-pumped solid-state lasers is summarized, including the development of enabling technologies.
Absolute frequency measurements of 85Rb nF7/2 Rydberg states using purely optical detection
L. A. M. Johnson; H. O. Majeed; B. Sanguinetti; Th. Becker; B. T. H. Varcoe
2010-02-16T23:59:59.000Z
A three-step laser excitation scheme is used to make absolute frequency measurements of highly excited nF7/2 Rydberg states in 85Rb for principal quantum numbers n = 33-100. This work demonstrates the first absolute frequency measurements of rubidium Rydberg levels using a purely optical detection scheme. The Rydberg states are excited in a heated Rb vapour cell and Doppler-free signals are detected via purely optical means. All of the frequency measurements are made using a wavemeter calibrated against a GPS-disciplined, self-referenced optical frequency comb. We find that the measured levels have a very high frequency stability and are especially robust to electric fields. The apparatus has allowed measurements of the states to an accuracy of 8.0 MHz. The new measurements are analysed by extracting the modified Rydberg-Ritz series parameters.
Justin Albert; William Burgett; Jason Rhodes
2006-05-19T23:59:59.000Z
We propose a tunable laser-based satellite-mounted spectrophotometric and absolute flux calibration system, to be utilized by ground- and space-based telescopes. As spectrophotometric calibration may play a significant role in the accuracy of photometric redshift measurement, and photometric redshift accuracy is important for measuring dark energy using SNIa, weak gravitational lensing, and baryon oscillations, a method for reducing such uncertainties is needed. We propose to improve spectrophotometric calibration, currently obtained using standard stars, by placing a tunable laser and a wide-angle light source on a satellite by early next decade (perhaps included in the upgrade to the GPS satellite network) to improve absolute flux calibration and relative spectrophotometric calibration across the visible and near-infrared spectrum. As well as fundamental astrophysical applications, the system proposed here potentially has broad utility for defense and national security applications such as ground target illumination and space communication.
Aleksandr Fridrikson; Marina Kasatochkina
2009-04-08T23:59:59.000Z
The direct problem of detecting the maximum value of the Earth's absolute gravitation potential (MGP) was solved. The inverse problem, finding the Earth's maximum gravitation (where the gravitation field intensity is maximal and the potential function has a 'bending point') with the help of the MGP, was solved as well. The obtained results show that the revealed Earth maximum gravitation coincides quite strictly with the seismic D" layer on the border of the inner and outer (liquid) core. The validity of the method of absolute gravitation potential detection by the equal-potential velocity, termed 'gravitation potential measurement' or the 'Vs-gravity method', was proved. The prospects of this method for detecting low-power or distant geological objects with abnormal density, and possible earthquakes with low density, were shown.
Keim, E.R.; Polak, M.L.; Owrutsky, J.C.; Coe, J.V.; Saykally, R.J. (Department of Chemistry, University of California, Berkeley, CA (USA))
1990-09-01T23:59:59.000Z
The technique of direct laser absorption spectroscopy in fast ion beams has been employed for the determination of absolute integrated band intensities (S^0_v) for the nu_3 fundamental bands of H3O+ and NH4+. In addition, the absolute band intensities for the nu_1 fundamental bands of HN2+ and HCO+ have been remeasured. The values obtained, in units of cm^-2 atm^-1 at STP, are 1880(290) and 580(90) for the nu_1 fundamentals of HN2+ and HCO+, respectively, and 4000(800) and 1220(190) for the nu_3 fundamentals of H3O+ and NH4+, respectively. Comparisons with ab initio results are presented.
H. Nunokawa; W. J. C. Teves; R. Zukanovich Funchal
2002-10-10T23:59:59.000Z
Assuming that neutrinos are Majorana particles, in a three generation framework, current and future neutrino oscillation experiments can determine six out of the nine parameters which fully describe the structure of the neutrino mass matrix. We try to clarify the interplay among the remaining parameters, the absolute neutrino mass scale and two CP violating Majorana phases, and how they can be accessed by future neutrinoless double beta ($0\nu\beta\beta$) decay experiments.
Cacho, Cephise M. [Sincrotrone Trieste, Strada Statale 14, km 163,5 in AREA Science Park, 34012 Basovizza, Trieste (Italy); Photon Science Department, Science and Technology Facilities Council, Daresbury WA4 4AD (United Kingdom); Vlaic, Sergio [Dipartimento di Fisica, Universita di Trieste, via Valerio 2, 34127 Trieste (Italy); Malvestuto, Marco; Ressel, Barbara [Sincrotrone Trieste, Strada Statale 14, km 163,5 in AREA Science Park, 34012 Basovizza, Trieste (Italy); Seddon, Elaine A. [Photon Science Department, Science and Technology Facilities Council, Daresbury WA4 4AD (United Kingdom); Parmigiani, Fulvio [Sincrotrone Trieste, Strada Statale 14, km 163,5 in AREA Science Park, 34012 Basovizza, Trieste (Italy); Dipartimento di Fisica, Universita di Trieste, via Valerio 2, 34127 Trieste (Italy)
2009-04-15T23:59:59.000Z
Here we report the absolute characterization of a spin polarimeter by measuring the Sherman function with high precision. These results have been obtained from the analysis of the spin and angle-resolved photoemission spectra of Au(111) surface states. The measurements have been performed with a 250 kHz repetition rate Ti:sapphire amplified laser system combined with a high energy-, angle-, and spin-resolving time-of-flight electron spectrometer.
Reducing Collective Quantum State Rotation Errors with Reversible Dephasing
Kevin C. Cox; Matthew A. Norcia; Joshua M. Weiner; Justin G. Bohnet; James K. Thompson
2014-07-16T23:59:59.000Z
We demonstrate that reversible dephasing via inhomogeneous broadening can greatly reduce collective quantum state rotation errors, and observe the suppression of rotation errors by more than 21 dB in the context of collective population measurements of the spin states of an ensemble of $2.1 \times 10^5$ laser cooled and trapped $^{87}$Rb atoms. The large reduction in rotation noise enables direct resolution of spin state populations 13(1) dB below the fundamental quantum projection noise limit. Further, the spin state measurement projects the system into an entangled state with 9.5(5) dB of directly observed spectroscopic enhancement (squeezing) relative to the standard quantum limit, whereas no enhancement would have been obtained without the suppression of rotation errors.
Representing cognitive activities and errors in HRA trees
Gertman, D.I.
1992-01-01T23:59:59.000Z
A graphic representation method is presented herein for adapting an existing technology--human reliability analysis (HRA) event trees, used to support event sequence logic structures and calculations--to include a representation of the underlying cognitive activity and corresponding errors associated with human performance. The analyst is presented with three potential means of representing human activity: the NUREG/CR-1278 HRA event tree approach; the skill-, rule- and knowledge-based paradigm; and the slips, lapses, and mistakes paradigm. The above approaches for representing human activity are integrated in order to produce an enriched HRA event tree -- the cognitive event tree system (COGENT)-- which, in turn, can be used to increase the analyst's understanding of the basic behavioral mechanisms underlying human error and the representation of that error in probabilistic risk assessment. Issues pertaining to the implementation of COGENT are also discussed.
Meta learning of bounds on the Bayes classifier error
Moon, Kevin R; Hero, Alfred O
2015-01-01T23:59:59.000Z
Meta learning uses information from base learners (e.g. classifiers or estimators) as well as information about the learning problem to improve upon the performance of a single base learner. For example, the Bayes error rate of a given feature space, if known, can be used to aid in choosing a classifier, as well as in feature selection and model selection for the base classifiers and the meta classifier. Recent work in the field of f-divergence functional estimation has led to the development of simple and rapidly converging estimators that can be used to estimate various bounds on the Bayes error. We estimate multiple bounds on the Bayes error using an estimator that applies meta learning to slowly converging plug-in estimators to obtain the parametric convergence rate. We compare the estimated bounds empirically on simulated data and then estimate the tighter bounds on features extracted from an image patch analysis of sunspot continuum and magnetogram images.
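As a toy illustration of the plug-in approach mentioned above, the sketch below estimates the Bhattacharyya coefficient from histograms and applies the standard equal-prior bounds on the two-class Bayes error. The data and binning are hypothetical, and this is not the paper's meta-learning estimator (which targets f-divergence functionals with better convergence rates):

```python
import math
import random

def bhattacharyya_bounds(x0, x1, bins=20):
    """Histogram plug-in estimate of the Bhattacharyya coefficient
    BC = sum_j sqrt(p_j q_j) and the induced equal-prior bounds on the
    Bayes error BE:  0.5*(1 - sqrt(1 - BC^2)) <= BE <= 0.5*BC."""
    lo = min(min(x0), min(x1))
    hi = max(max(x0), max(x1))
    width = (hi - lo) / bins or 1.0

    def hist(xs):
        h = [0] * bins
        for x in xs:
            h[min(int((x - lo) / width), bins - 1)] += 1
        return [c / len(xs) for c in h]

    p, q = hist(x0), hist(x1)
    bc = min(sum(math.sqrt(a * b) for a, b in zip(p, q)), 1.0)
    return 0.5 * (1.0 - math.sqrt(1.0 - bc * bc)), 0.5 * bc

# Two hypothetical one-dimensional classes, two standard deviations apart;
# the true Bayes error (~0.159) should fall between the two bounds.
random.seed(0)
x0 = [random.gauss(0.0, 1.0) for _ in range(5000)]
x1 = [random.gauss(2.0, 1.0) for _ in range(5000)]
lo_b, hi_b = bhattacharyya_bounds(x0, x1)
```

Tighter bounds (e.g. from other f-divergences) follow the same pattern: estimate the divergence, then map it through a known inequality on the Bayes error.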
Characterization of quantum dynamics using quantum error correction
S. Omkar; R. Srikanth; S. Banerjee
2015-01-27T23:59:59.000Z
Characterizing noisy quantum processes is important to quantum computation and communication (QCC), since quantum systems are generally open. To date, all methods of characterization of quantum dynamics (CQD), typically implemented by quantum process tomography, are off-line, i.e., QCC and CQD are not concurrent, as they require distinct state preparations. Here we introduce a method, "quantum error correction based characterization of dynamics", in which the initial state is any element from the code space of a quantum error correcting code that can protect the state from arbitrary errors acting on the subsystem subjected to the unknown dynamics. The statistics of stabilizer measurements, with possible unitary pre-processing operations, are used to characterize the noise, while the observed syndrome can be used to correct the noisy state. Our method requires at most $2(4^n-1)$ configurations to characterize arbitrary noise acting on $n$ qubits.
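The configuration count quoted in the abstract can be evaluated directly; the helper name below is ours, not the paper's:

```python
def cqd_configurations(n_qubits):
    """Upper bound quoted in the abstract: at most 2 * (4**n - 1)
    measurement configurations characterize arbitrary noise on n qubits."""
    return 2 * (4 ** n_qubits - 1)
```

For one, two, and three qubits this gives 6, 30, and 126 configurations, respectively.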
Factorization of correspondence and camera error for unconstrained dense correspondence applications
Knoblauch, D; Hess-Flores, M; Duchaineau, M; Kuester, F
2009-09-29T23:59:59.000Z
A correspondence and camera error analysis for dense correspondence applications such as structure from motion is introduced. This provides error introspection, opening up the possibility of adaptively and progressively applying more expensive correspondence and camera parameter estimation methods to reduce these errors. The presented algorithm evaluates the given correspondences and camera parameters based on an error generated through simple triangulation. This triangulation is based on the given dense, non-epipolar-constrained correspondences and estimated camera parameters. This provides an error map without requiring any information about the perfect solution or making assumptions about the scene. The resulting error is a combination of correspondence and camera parameter errors. A simple, fast low/high-pass filter error factorization is introduced, allowing for the separation of correspondence error and camera error. Further analysis of the resulting error maps is applied to allow efficient iterative improvement of correspondences and cameras.
Henry L. Haselgrove; Peter P. Rohde
2007-07-03T23:59:59.000Z
In a recent study [Rohde et al., quant-ph/0603130 (2006)] of several quantum error correcting protocols designed for tolerance against qubit loss, it was shown that these protocols have the undesirable effect of magnifying the effects of depolarization noise. This raises the question of which general properties of quantum error-correcting codes might explain such an apparent trade-off between tolerance to located and unlocated error types. We extend the counting argument behind the well-known quantum Hamming bound to derive a bound on the weights of combinations of located and unlocated errors which are correctable by nondegenerate quantum codes. Numerical results show that the bound gives an excellent prediction to which combinations of unlocated and located errors can be corrected with high probability by certain large degenerate codes. The numerical results are explained partly by showing that the generalized bound, like the original, is closely connected to the information-theoretic quantity the quantum coherent information. However, we also show that as a measure of the exact performance of quantum codes, our generalized Hamming bound is provably far from tight.
Peak, Derek
Are you getting an error message in UniFi Plus? (Suggestion: check the auto-hint line!) In most cases, UniFi Plus does not prominently display error messages; instead, the error message and processing messages appear in the auto-hint line.
Comment on "Optimum Quantum Error Recovery using Semidefinite Programming"
M. Reimpell; R. F. Werner; K. Audenaert
2006-06-07T23:59:59.000Z
In a recent paper ([1]=quant-ph/0606035) it is shown how the optimal recovery operation in an error correction scheme can be considered as a semidefinite program. As a possible future improvement it is noted that still better error correction might be obtained by optimizing the encoding as well. In this note we present the result of such an improvement, specifically for the four-bit correction of an amplitude damping channel considered in [1]. We get a strict improvement for almost all values of the damping parameter. The method (and the computer code) is taken from our earlier study of such correction schemes (quant-ph/0307138).
Error estimates and specification parameters for functional renormalization
Schnoerr, David [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]; Boettcher, Igor, E-mail: I.Boettcher@thphys.uni-heidelberg.de [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]; Pawlowski, Jan M. [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung mbH, D-64291 Darmstadt (Germany)]; Wetterich, Christof [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]
2013-07-15T23:59:59.000Z
We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximated solutions by means of truncations do not only depend on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency independent cutoff function.
Correctable noise of Quantum Error Correcting Codes under adaptive concatenation
Jesse Fern
2008-02-27T23:59:59.000Z
We examine the transformation of noise under a quantum error correcting code (QECC) concatenated repeatedly with itself, by analyzing the effects of a quantum channel after each level of concatenation using recovery operators that are optimally adapted to use error syndrome information from the previous levels of the code. We use the Shannon entropy of these channels to estimate the thresholds of correctable noise for QECCs and find considerable improvements under this adaptive concatenation. Similar methods could be used to increase quantum fault tolerant thresholds.
Error-prevention scheme with two pairs of qubits
Chu, Shih-I; Yang, Chui-Ping; Han, Siyuan
2002-09-04T23:59:59.000Z
The scheme uses two pairs of qubits: collective phase errors are prevented by encoding each pair in a decoherence-free subspace, and leakage out of the encoding space due to amplitude damping is handled by an error-prevention procedure. How to construct decoherence-free states for n qubit pairs is also discussed. DOI: 10.1103/Phys...
Laser Phase Errors in Seeded Free Electron Lasers
Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC
2012-04-17T23:59:59.000Z
Harmonic seeding of free electron lasers has attracted significant attention as a method for producing transform-limited pulses in the soft x-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality and impede production of transform-limited pulses. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.
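The amplification mechanism named above can be stated compactly; this is the generic phase-multiplication relation for an $n$-th harmonic scheme, a standard seeding result rather than a formula quoted from the paper:

```latex
E_{\rm seed}(t) \propto e^{\,i[\omega_0 t + \phi(t)]}
\quad\Longrightarrow\quad
E_{n}(t) \propto e^{\,i[n\omega_0 t + n\,\phi(t)]} ,
```

so a spectral phase ripple of rms size $\sigma_\phi$ on the seed appears as $n\,\sigma_\phi$ on the $n$-th harmonic, which is why multiplication to shorter wavelengths degrades pulse quality unless the seed phase is clean.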
Tradeoffs and Average-Case Equilibria in Selfish Routing Martin Hoefer
Reiterer, Harald
2007. We consider the price of selfish routing in terms of tradeoffs and from an average-case perspective, analyzing the expected price of anarchy of the game for various social cost functions. For total latency social cost, the expected price can be computed in polynomial time; furthermore, our analyses of the expected prices are average-case analyses.
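Pigou's classic two-link network (an illustration of the concept, not necessarily an instance studied in the paper) gives a concrete worst-case price of anarchy of 4/3 for linear latencies:

```python
def total_latency(x):
    """Total latency in Pigou's network: one unit of traffic, a fraction x
    on the link with latency x and the rest on the constant-latency-1 link."""
    return x * x + (1.0 - x) * 1.0

# At the Nash equilibrium every agent uses the variable link (its latency
# never exceeds 1), so x = 1; the social optimum splits the traffic evenly.
nash_cost = total_latency(1.0)
opt_cost = total_latency(0.5)
price_of_anarchy = nash_cost / opt_cost  # 4/3 for linear latencies
```

Average-case analyses ask how this ratio behaves for random instances rather than for the worst case.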
Reaction-time binning: A simple method for increasing the resolving power of ERP averages
Poli, Riccardo
Stimulus-locked, response-locked, and ERP-locked averaging are effective methods for reducing artifacts in ERP analysis. However, they suffer from a magnifying-glass effect: they increase the resolution of specific ERPs
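A minimal sketch of reaction-time binning as described in the title, assuming a hypothetical data layout of (reaction time, waveform) pairs; real ERP pipelines operate on epoched multi-channel recordings:

```python
def rt_binned_averages(trials, n_bins=3):
    """Sort single-trial recordings by reaction time, split them into
    equal-count bins, and average the waveform within each bin. The
    (reaction_time, samples) layout is hypothetical."""
    trials = sorted(trials, key=lambda t: t[0])
    size = len(trials) // n_bins
    averages = []
    for b in range(n_bins):
        chunk = trials[b * size:] if b == n_bins - 1 else trials[b * size:(b + 1) * size]
        avg = [sum(tr[1][i] for tr in chunk) / len(chunk)
               for i in range(len(chunk[0][1]))]
        averages.append(avg)
    return averages

# Hypothetical demo: three trials, two samples each.
binned = rt_binned_averages([(300, [1.0, 1.0]), (100, [3.0, 3.0]), (200, [2.0, 2.0])])
```

Averaging within bins of similar reaction time reduces the latency jitter that smears components in a grand average.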
Boyer, Edmond
One approach is to study singularly perturbed control systems. Firstly, we provide a linearized formulation and sufficient conditions in order to identify the optimal trajectory of the averaged system. Key words: optimal control, singular perturbations, occupational measures, linear programming.
Surface-based display of volume-averaged cerebellar imaging data Jrn Diedrichsen & Ewa Zotow
Diedrichsen, Jörn
We present a surface-based representation of the cerebellum as a visualization tool for volume-averaged cerebellar data. Data projected onto a surface-based representation based on a single anatomy [2] displays single
Pipeline for the Creation of Surface-based Averaged Brain Atlases
Menzel, Randolf - Institut für Biologie
Anja Kuß, Hans-Christian Hege. In this paper we describe a standardized pipeline for creating surface-based averaged brain atlases from different image modalities and experiments across individuals. The pipeline consists of the major steps imaging and preprocessing, segmentation, and averaging
Cao, Wenwu
Allowed mesoscopic point group symmetries in domain average engineering of perovskite ferroelectrics. We analyze domain average engineering in proper ferroelectric systems arising from the cubic Pm-3m symmetry perovskite phase. Both solid solution systems have a perovskite structure. Poling along one of the pseudocubic axes
DISTRIBUTED POSE AVERAGING IN CAMERA NETWORKS VIA CONSENSUS ON SE(3) Roberto Tron, Rene Vidal
We present distributed algorithms for estimating the average pose of an object viewed by a localized network of cameras. Keywords: camera networks; pose estimation; consensus; optimization on manifolds. 1. INTRODUCTION Recent hardware
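The consensus idea can be illustrated on scalars (the paper works on SE(3), where the update must respect the manifold structure): each node repeatedly moves toward its neighbors' values, and all nodes converge to the global average using only local communication.

```python
def consensus_average(values, neighbors, steps=200, eps=0.2):
    """Plain average-consensus iteration x_i <- x_i + eps * sum_j (x_j - x_i)
    over an undirected graph; a scalar stand-in for the SE(3) version."""
    x = list(values)
    for _ in range(steps):
        x = [xi + eps * sum(x[j] - xi for j in neighbors[i])
             for i, xi in enumerate(x)]
    return x

# Three cameras on a line graph 0-1-2: every node converges to the mean (3.0).
vals = consensus_average([0.0, 3.0, 6.0], {0: [1], 1: [0, 2], 2: [1]})
```

Convergence requires the step size eps to be small relative to the node degrees (here eps < 1/2 suffices for the line graph).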
Ordinary kriging for on-demand average wind interpolation of in-situ wind sensor data
Middleton, Stuart E.
Zlatko. The data comes from wind in-situ observation stations in an area approximately 200 km by 125 km. We provide on-demand average wind interpolation maps. These spatial estimates can then be compared with the results of other
Volume-averaged macroscopic equation for fluid flow in moving porous media
Wang, Liang; Guo, Zhaoli; Mi, Jianchun
2014-01-01T23:59:59.000Z
Darcy's law and the Brinkman equation are two main models used for creeping fluid flows inside moving permeable particles. For these two models, the time derivative and the nonlinear convective terms of fluid velocity are neglected in the momentum equation. In this paper, a new momentum equation including these two terms is rigorously derived from the pore-scale microscopic equations by the volume-averaging method; it reduces to Darcy's law and the Brinkman equation under creeping-flow conditions. Using the lattice Boltzmann equation method, the macroscopic equations are solved for the problem of a porous circular cylinder moving along the centerline of a channel. Galilean invariance of the equations is investigated both with the intrinsic phase-averaged velocity and the phase-averaged velocity. The results demonstrate that the commonly used phase-averaged velocity cannot serve as the superficial velocity, while the intrinsic phase-averaged velocity should be chosen for porous particulate systems.
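For reference, the two classical creeping-flow models named in the abstract, in their standard forms ($\mu$: fluid viscosity, $K$: permeability, $\mu_e$: effective viscosity; the notation is ours, not necessarily the paper's):

```latex
\nabla p \;=\; -\,\frac{\mu}{K}\,\mathbf{u}
\qquad \text{(Darcy)},
\qquad\qquad
\nabla p \;=\; -\,\frac{\mu}{K}\,\mathbf{u} \;+\; \mu_e\,\nabla^{2}\mathbf{u}
\qquad \text{(Brinkman)}.
```

The equation derived in the paper additionally retains the time derivative $\rho\,\partial_t\mathbf{u}$ and the convective term $\rho\,(\mathbf{u}\cdot\nabla)\mathbf{u}$, and reduces to the forms above in the creeping-flow limit.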
Absolute measurement of thermal noise in a resonant short-range force experiment
H. Yan; E. A. Housworth; H. O. Meyer; G. Visser; E. Weisman; J. C. Long
2014-10-23T23:59:59.000Z
Planar, double-torsional oscillators are especially suitable for short-range macroscopic force search experiments, since they can be operated at the limit of instrumental thermal noise. As a study of this limit, we report a measurement of the noise kinetic energy of a polycrystalline tungsten oscillator in thermal equilibrium at room temperature. The fluctuations of the oscillator in a high-Q torsional mode with a resonance frequency near 1 kHz are detected with capacitive transducers coupled to a sensitive differential amplifier. The electronic processing is calibrated by means of a known electrostatic force and input from a finite element model. The measured average kinetic energy is in agreement with the expected value of 1/2 kT.
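The expected value cited above follows from equipartition; a one-line check (the temperature is an illustrative room-temperature value, not the experiment's):

```python
# Equipartition check of the (1/2) k T figure quoted in the abstract.
K_BOLTZMANN = 1.380649e-23  # J/K, exact by the 2019 SI definition

def mean_mode_kinetic_energy(temperature_kelvin):
    """Expected kinetic energy of one quadratic degree of freedom."""
    return 0.5 * K_BOLTZMANN * temperature_kelvin

e_room = mean_mode_kinetic_energy(295.0)  # about 2.0e-21 J
```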
Soft Error Modeling and Protection for Sequential Elements Hossein Asadi and Mehdi B. Tahoori
on the system-level soft error rate. The number of clock cycles required for an error in a bistable to be propagated to system outputs is used to measure the vulnerability of bistables to soft errors. 1 Introduction. Soft errors are becoming the main reliability concern during lifetime operation of digital systems. Soft
Low-Cost Hardening of Image Processing Applications Against Soft Errors Ilia Polian1,2
Polian, Ilia
, and their hardening against soft errors becomes an issue. We propose a methodology to identify soft errors as uncritical based on their impact on the system's functionality. We call a soft error uncritical if its impact is imperceivable to the human user of the system. We focus on soft errors in the motion estimation subsystem
Distinguishing congestion and error losses: an ECN/ELN based scheme
Kamakshisundaram, Raguram
2001-01-01T23:59:59.000Z
On links with high error rates, like wireless links, packets are lost more due to error than due to congestion. But TCP does not differentiate between error and congestion losses and hence reduces the sending rate for losses due to error as well, which unnecessarily reduces...
Error Exponent for Discrete Memoryless Multiple-Access Channels
Anastasopoulos, Achilleas
A dissertation by Ali Nazari, 2011.
Optimal Estimation from Relative Measurements: Error Scaling (Extended Abstract)
Hespanha, João Pedro
Prabir Barooah, João P. Hespanha. I. ESTIMATION FROM RELATIVE MEASUREMENTS. We consider the problem of estimating a number of variables x_u from noisy relative measurements: for each edge (u, v) ∈ E ⊆ V × V, a "relative" measurement between x_u and x_v is available: ζ_uv = x_u − x_v + ε_{u,v} ∈ R^k. (1)
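A minimal scalar sketch of estimation from relative measurements of this kind, with node 0 anchored to remove the unobservable global offset; this is an illustration of the setup, not the authors' code:

```python
def estimate_from_relative(n, measurements, anchor_value=0.0):
    """Least-squares estimate of scalar node variables from noisy relative
    measurements z_uv ~ x_u - x_v over a graph; node 0 is anchored. Forms
    the normal equations and solves them with naive Gaussian elimination."""
    m = n - 1  # unknowns x_1 .. x_{n-1}; x_0 = anchor_value
    A = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for u, v, z in measurements:
        signs = {u: 1.0, v: -1.0}
        rhs = z - signs.get(0, 0.0) * anchor_value  # move known x_0 to rhs
        for i, s_i in signs.items():
            if i == 0:
                continue
            b[i - 1] += s_i * rhs
            for j, s_j in signs.items():
                if j != 0:
                    A[i - 1][j - 1] += s_i * s_j
    # naive Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return [anchor_value] + x

# Noise-free demo on a 3-node graph with true values (0, 1, 3).
est = estimate_from_relative(3, [(1, 0, 1.0), (2, 1, 2.0), (2, 0, 3.0)])
```

How the estimation error at a node scales with its graph distance from the anchor is exactly the question the extended abstract studies.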
On the error estimates for the rotational pressure-correction ...
2004-06-11T23:59:59.000Z
Dec 19, 2003 ... that may be viewed as a predictor-corrector strategy aiming at .... Since for projection methods the treatment of the nonlinear term does not ... In practice, the nonlin- .... One derives immediately from the standard PDE theory that .... Let us first write the equations that control the time increments of the errors.
Automatic Error Elimination by Horizontal Code Transfer across Multiple Applications
Polz, Martin
Stelios …, CSAIL, Cambridge, MA, USA. Abstract: We present Code Phage (CP), a system for automatically transferring code across applications. To the best of our knowledge, CP is the first system to automatically transfer code across multiple
Error Bounds from Extra Precise Iterative Refinement James Demmel
Li, Xiaoye Sherry
now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to obtain a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5]
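A small sketch of the underlying idea of extra-precise iterative refinement, using exact rational arithmetic as a stand-in for the extended-precision residual (LAPACK's actual routines, e.g. the xGESVXX family that grew out of this work, are far more elaborate):

```python
from fractions import Fraction

def solve2(A, b):
    """Working-precision 2x2 solve via Cramer's rule (stands in for the
    factorization a library would reuse at each refinement step)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def refine(A, b, x, steps=2):
    """Iterative refinement with an extra-precise residual: r = b - A x is
    accumulated exactly with Fractions (a stand-in for extended precision),
    the correction is solved in working precision, and x is updated."""
    n = len(x)
    for _ in range(steps):
        r = [Fraction(b[i]) - sum(Fraction(A[i][j]) * Fraction(x[j])
                                  for j in range(n))
             for i in range(n)]
        d = solve2(A, [float(ri) for ri in r])
        x = [x[i] + d[i] for i in range(n)]
    return x

# A mildly ill-conditioned system with exact solution (1, 1);
# start from a perturbed iterate and refine.
A = [[1.0, 1.0], [1.0, 1.0001]]
b = [2.0, 2.0001]
x = refine(A, b, [1.1, 0.9])
```

The key point of the paper's approach is that the extra-precise residual makes the refined answer accurate enough that a cheap, reliable error bound can be reported alongside it.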
Error Control for Quincunx Multiresolution à la
Amat, Sergio
Harten's nonlinear discrete multiresolution. In multiresolution algorithms one transforms a … and obtains f̂_L, which should be close to f̄_L. Therefore, the algorithms must not be unstable. In this study, we introduce error-control and stability algorithms. One obtains
Urban Water Demand with Periodic Error Correction David R. Bell
Griffin, Ronald
Econometric estimates of residential demand for water abound (Dalhuisen et al. 2003). Urban Water Demand with Periodic Error Correction, by David R. Bell and Ronald C. Griffin, February. Department of Agricultural Economics, Texas A&M University. Abstract: Monthly demand for publicly supplied
Error Control Based Model Reduction for Parameter Optimization of Elliptic Homogenization Problems
Motivated by technical devices that rely on multiscale processes, such as fuel cells or batteries, we consider parameter optimization of elliptic multiscale problems with macroscopic optimization functionals and microscopic material properties. As the solution
ADJOINT AND DEFECT ERROR BOUNDING AND CORRECTION FOR FUNCTIONAL ESTIMATES
Pierce, Niles A.
and Michael B. Giles. Applied & Computational Mathematics, California Institute of Technology; Computing Laboratory, Oxford University. The approach is extended to handle flows with shocks; numerical experiments confirm 4th-order error estimates for a pressure integral of shocked quasi-1D Euler flow. Numerical results also demonstrate 4th-order accuracy for the drag
RESIDUAL TYPE A POSTERIORI ERROR ESTIMATES FOR ELLIPTIC OBSTACLE PROBLEMS
Nochetto, Ricardo H.
Extensions to double obstacle problems are briefly discussed. Key words: a posteriori error estimates, residual. The obstacle ψ satisfies ψ ≤ 0 on ∂Ω, and K is the convex set of admissible displacements K := {v ∈ H¹₀(Ω) : v
Energy efficiency of error correction for wireless communication
Havinga, Paul J.M.
Energy control is an important issue for mobile computing systems. This includes energy spent in the physical radio transmission … Networking Conference 1999 [7] … on the energy of transmission and the energy of redundancy computation. We will show that the computational cost
Selected CRC Polynomials Can Correct Errors and Thus Reduce Retransmission
Mache, Jens
In wireless sensor networks, minimizing communication is crucial to improve energy consumption and thus lifetime. Index terms: Error Correction, Reliability, Network Protocol, Low Power Consumption. I. INTRODUCTION. Error detection using Cyclic Redundancy Checks (CRCs) … Correcting errors instead of retransmitting the whole packet improves energy consumption and thus lifetime of wireless sensor networks
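A sketch of the idea in the title: with a linear CRC, the syndrome of a single-bit error depends only on the error position, so a precomputed syndrome table can correct the bit instead of triggering retransmission. The CRC-8 polynomial below is an illustrative choice, not necessarily one of the paper's selected polynomials:

```python
POLY = 0x07  # CRC-8, x^8 + x^2 + x + 1 -- an illustrative polynomial choice

def crc8(data):
    """Bitwise CRC-8 with zero init and no final xor (so the map is linear
    over GF(2), which is what makes syndrome-based correction work)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ POLY) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

def syndrome_table(n_bytes):
    """Map CRC syndrome -> position of a single flipped message bit, for a
    fixed message length. Correction is reliable only while all single-bit
    syndromes are distinct (true here for short messages)."""
    table = {}
    for pos in range(n_bytes * 8):
        e = bytearray(n_bytes)
        e[pos // 8] ^= 0x80 >> (pos % 8)
        table[crc8(bytes(e))] = pos
    return table

def correct_single_bit(msg, received_crc, table):
    """Return msg with at most one bit corrected, or None if the syndrome
    is unknown. Errors hitting the CRC byte itself are not handled."""
    s = crc8(msg) ^ received_crc
    if s == 0:
        return msg
    if s not in table:
        return None
    fixed = bytearray(msg)
    fixed[table[s] // 8] ^= 0x80 >> (table[s] % 8)
    return bytes(fixed)

# Demo: flip one bit of a 2-byte message in transit and repair it.
table = syndrome_table(2)
msg = b"hi"
bad = bytearray(msg)
bad[0] ^= 0x04
fixed = correct_single_bit(bytes(bad), crc8(msg), table)
```

Selecting polynomials whose single-bit (or burst) syndromes stay distinct over the packet lengths of interest is exactly what makes this correction, rather than mere detection, possible.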
A Spline Algorithm for Modeling Cutting Errors Turning Centers
Gilsinn, David E.
… Bandy, Automated Production Technology Division, National Institute of Standards and Technology, 100 Bureau … are made up of features with profiles defined by arcs and lines. An error model for turned parts must take … the case where there is a requirement of tangency between two features, such as a line tangent to an arc …
Time reversal in thermoacoustic tomography - an error estimate
Hristova, Yulia
2008-01-01T23:59:59.000Z
The time reversal method in thermoacoustic tomography is used for approximating the initial pressure inside a biological object using measurements of the pressure wave made outside the object. This article presents error estimates for the time reversal method in the cases of variable, non-trapping sound speeds.
IPASS: Error Tolerant NMR Backbone Resonance Assignment by Linear Programming
Waterloo, University of
Babak Alipanahi … automatically picked peaks. IPASS is proposed as a novel integer linear programming (ILP) based assignment method. Although a variety of assignment approaches have been developed, none works well on noisy …
Research Article Preschool Speech Error Patterns Predict Articulation
… school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological … Outcomes in Children With Histories of Speech Sound Disorders. Jonathan L. Preston, Margaret Hull … disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method …
Edinburgh Research Explorer Prevalence and Causes of Prescribing Errors
Hall, Christopher
… of Prescribing Errors: The PRescribing Outcomes for Trainee Doctors Engaged in Clinical Training (PROTECT) Study. Cristín Ryan, Sarah … Health Psychology, University of Aberdeen, Aberdeen, United Kingdom … Clinical Pharmacology …
Development of an Expert System for Classification of Medical Errors
Kopec, Danny
… in the United States. There has been considerable speculation that these figures are either overestimated … A report published by the Institute of Medicine (IOM) indicated that between 44,000 and 98,000 unnecessary deaths per year occur in hospitals … in the IOM report, what is of importance is that the number of deaths caused by such errors …
Error field and magnetic diagnostic modeling for W7-X
Lazerson, Sam A. [PPPL; Gates, David A. [PPPL; NEILSON, GEORGE H. [PPPL; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.
2014-07-01T23:59:59.000Z
The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady state (30 minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign as bootstrap current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.
Errors-in-variables problems in transient electromagnetic mineral exploration
Braslavsky, Julio H.
K. Lau, J. H. … in transient electromagnetic mineral exploration. A specific sub-problem of interest in this area … geological surveys, diamond drilling, and airborne mineral exploration. Our interest here is with ground …
Improving STT-MRAM Density Through Multibit Error Correction
Sapatnekar, Sachin
… Traditional methods enhance robustness at the cost of area/energy by using larger cell sizes to improve the thermal stability of the MTJ cells. This paper employs multibit error correction … A key attribute of an MTJ is the notion of thermal stability. …
Error Minimization Methods in Biproportional Apportionment Federica Ricca Andrea Scozzari
Serafini, Paolo
… a class of methods for Biproportional Apportionment characterized by an "error minimization" approach, as an alternative to the classical axiomatic approach introduced by Balinski and Demange in 1989. … in the statistical literature. A milestone theoretical setting was given by Balinski and Demange in 1989 [5, 6] …
DISCRIMINATION AND CLASSIFICATION OF UXO USING MAGNETOMETRY: INVERSION AND ERROR
Sambridge, Malcolm
… ANALYSIS USING … for the different solutions did not even overlap. … A discrimination and classification strategy … ambiguity and possible remanent magnetization; the recovered dipole moment is compared to a library …
Flexible Error Protection for Energy Efficient Reliable Architectures Timothy Miller
Xuan, Dong
Timothy Miller, Nagarjuna … and Computer Engineering, The Ohio State University, {millerti,teodores}@cse.ohio-state.edu, nagarjun… To deal with these competing trends, energy-efficient solutions are needed to deal with reliability …
Designing Automation to Reduce Operator Errors Nancy G. Leveson
Leveson, Nancy
Nancy G. Leveson, Computer Science and Engineering, University of Washington; Everett Palmer, NASA Ames Research Center. Advanced automation has been … mode-related problems [SW95]. After studying accidents and incidents in the new, highly automated …
Fast Error Estimates For Indirect Measurements: Applications To Pavement Engineering
Kreinovich, Vladik
Carlos … that is difficult to measure directly (e.g., lifetime of a pavement, efficiency of an engine, etc.). To estimate y … computation time. As an example of this methodology, we give pavement lifetime estimates. This work …
Data aware, Low cost Error correction for Wireless Sensor Networks
California at San Diego, University of
Shoubhik Mukhopadhyay, Debashis … challenges in adoption and deployment of wireless networked sensing applications is ensuring reliable sensor … of such applications. A wireless sensor network is inherently vulnerable to different sources of unreliability …
Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.
1994-01-01T23:59:59.000Z
The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSLs). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSLs which are appropriate for material processing applications, low and intermediate average power DPSSLs are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications.
Averaged Energy Inequalities for Non-Minimally Coupled Classical Scalar Fields
Lutz W. Osterbrink
2006-12-11T23:59:59.000Z
The stress-energy tensor for the non-minimally coupled scalar field is known not to satisfy the pointwise energy conditions, even on the classical level. We show, however, that local averages of the classical stress-energy tensor satisfy certain inequalities, and give bounds for averages along causal geodesics. It is shown that in vacuum background spacetimes, ANEC and AWEC are satisfied. Furthermore, we use our result to show that in the classical situation we have an analogue of the so-called quantum interest conjecture. These results lay the foundations for averaged energy inequalities for the quantised non-minimally coupled fields.
Kemp, Charles C. (Charles Clark), 1972-
2005-01-01T23:59:59.000Z
This thesis presents Duo, the first wearable system to autonomously learn a kinematic model of the wearer via body-mounted absolute orientation sensors and a head-mounted camera. With Duo, we demonstrate the significant ...
Leistikow, Bruce N.
Would you like an absolutely free prescription for reduced risk of numerous diseases and increased energy, happiness and life expectancy that requires no trips to the store or special equipment? What
On the Theory of Average Case Complexity Shai Ben-Davidy
Goldreich, Oded
Appeared in Journal of Computer and System Sciences, Vol. 44, No. 2, April 1992, pp. 193–219. I've corrected some errors which I found while scanning, but did not proofread this version. O.G., 1997. … Science Foundation (BSF), Jerusalem, Israel. Partially supported by a Natural Sciences and Engineering …
A near-IR line of Mn I as a diagnostic tool of the average magnetic energy in the solar photosphere
A. Asensio Ramos; M. J. Martinez Gonzalez; A. Lopez Ariste; J. Trujillo Bueno; M. Collados
2006-12-14T23:59:59.000Z
We report on spectropolarimetric observations of a near-IR line of Mn I located at 15262.702 Å whose intensity and polarization profiles are very sensitive to the presence of hyperfine structure. A theoretical investigation of the magnetic sensitivity of this line to the magnetic field uncovers several interesting properties. The most important one is that the presence of strong Paschen-Back perturbations due to the hyperfine structure produces an intensity line profile whose shape changes according to the absolute value of the magnetic field strength. A line ratio technique is developed from the intrinsic variations of the line profile. This line ratio technique is applied to spectropolarimetric observations of the quiet solar photosphere in order to explore the probability distribution function of the magnetic field strength. Particular attention is given to the quietest area of the observed field of view, which was encircled by an enhanced network region. A detailed theoretical investigation shows that the inferred distribution yields information on the average magnetic field strength and the spatial scale at which the magnetic field is organized. A first estimation gives ~250 G for the mean field strength and a tentative value of ~0.45" for the spatial scale at which the observed magnetic field is horizontally organized.
The averaging process in permeability estimation from well-test data
Oliver, D.S. (Saudi Aramco (SA))
1990-09-01T23:59:59.000Z
Permeability estimates from the pressure derivative or the slope of the semilog plot usually are considered to be averages of some large ill-defined reservoir volume. This paper presents results of a study of the averaging process, including identification of the region of the reservoir that influences permeability estimates, and a specification of the relative contribution of the permeability of various regions to the estimate of average permeability. The diffusion equation for the pressure response of a well situated in an infinite reservoir where permeability is an arbitrary function of position was solved for the case of small variations from a mean value. Permeability estimates from the slope of the plot of pressure vs. the logarithm of drawdown time are shown to be weighted averages of the permeabilities within an inner and outer radius of investigation.
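The semilog-slope estimate this entry builds on can be illustrated with the standard radial-flow drawdown relation; the oilfield-unit constant 162.6 and every input value below are textbook assumptions for illustration, not numbers from the paper.

```python
import numpy as np

# Sketch: permeability from the semilog slope of drawdown data, using the
# standard radial-flow relation in oilfield units (q in STB/d, B in RB/STB,
# mu in cP, h in ft, k in mD):
#     k = 162.6 * q * B * mu / (m * h)
# where m is the slope of pressure vs log10(time) in psi per log cycle.
# All numbers below are illustrative, not from the paper.

def permeability_from_drawdown(t_hr, pwf_psi, q=250.0, B=1.2, mu=0.8, h=30.0):
    # Slope of the semilog plot (psi per log cycle), sign dropped.
    m = abs(np.polyfit(np.log10(t_hr), pwf_psi, 1)[0])
    return 162.6 * q * B * mu / (m * h)

# Synthetic drawdown with an exact slope of 40 psi/cycle:
t = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
pwf = 3000.0 - 40.0 * np.log10(t)
k = permeability_from_drawdown(t, pwf)
print(round(k, 2))  # 32.52 (mD), i.e. 162.6*250*1.2*0.8 / (40*30)
```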
Reconstruction of ionization probabilities from spatially averaged data in N dimensions
Stroahaber, James; Kolomenskii, A; Schuessler, Hans
2010-07-06T23:59:59.000Z
We present an analytical inversion technique, which can be used to recover ionization probabilities from spatially averaged data in an N-dimensional detection scheme. The solution is given as a power series in intensity. For this reason, we call...
Dealing with uncertainty in estimating average annual flood damage for ungaged watersheds
Toneatti, Silvana Victoria
1996-01-01T23:59:59.000Z
Average annual damage (AAD) is a key central component of the hydrologic, hydraulic, and economic information developed in the evaluation of flood damage reduction plans. AAD or the expected value of annual damage, in dollars, is a...
AVERAGES ALONG POLYNOMIAL SEQUENCES IN DISCRETE NILPOTENT GROUPS: SINGULAR RADON TRANSFORMS
Magyar, Akos
… can consider discrete maximal Radon transforms, which have applications to pointwise ergodic theorems, and discrete singular Radon transforms. In this paper we prove L2 boundedness of discrete …
System average rates of U.S. investor-owned electric utilities : a statistical benchmark study
Berndt, Ernst R.
1995-01-01T23:59:59.000Z
Using multiple regression methods, we have undertaken a statistical "benchmark" study comparing system average electricity rates charged by three California utilities with 96 other US utilities over the 1984-93 time period. ...
Experiments with a time-dependent, zonally averaged, seasonal, energy balance climatic model
Thompson, Starley Lee
1977-01-01T23:59:59.000Z
EXPERIMENTS WITH A TIME-DEPENDENT, ZONALLY AVERAGED, SEASONAL, ENERGY BALANCE CLIMATIC MODEL. A Thesis by STARLEY LEE THOMPSON, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of MASTER OF SCIENCE, December 1977. Major Subject: Meteorology. Approved as to style and content by: (Chairman of Committee) …
Variation in the annual average radon concentration measured in homes in Mesa County, Colorado
Rood, A.S.; George, J.L.; Langner, G.H. Jr.
1990-04-01T23:59:59.000Z
The purpose of this study is to examine the variability in the annual average indoor radon concentration. The TMC has been collecting annual average radon data for the past 5 years in 33 residential structures in Mesa County, Colorado. This interim report presents the data collected to date; plans are to continue the study. 62 refs., 3 figs., 12 tabs.
Hoppers, Kevin Paul
2000-01-01T23:59:59.000Z
OPTIMIZING DETECTOR PLACEMENT FOR ISOLATED INTERSECTIONS BASED ON MINIMIZING AVERAGE DELAY AND NUMBER OF STOPS. A Thesis by KEVIN PAUL HOPPERS, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, May 2000. Major Subject: Civil Engineering.
Average over energy effect of parity nonconservation in neutron scattering on heavy nuclei
O. P. Sushkov
1996-03-05T23:59:59.000Z
Using a semiclassical approximation, we consider parity nonconservation (PNC) averaged over compound resonances. We demonstrate that the result of the averaging crucially depends on the properties of the residual strong nucleon-nucleon interaction. A natural way to elucidate this problem is to investigate experimentally PNC spin rotation with a nonmonochromatic neutron beam: $E \\sim \\Delta E \\sim 1$ MeV. The value of the effect can reach $\\psi \\sim 10^{-5}-10^{-4}$ per mean free path.
Ambedkar Dukkipati; M. Narsimha Murty; Shalabh Bhatnagar
2005-05-30T23:59:59.000Z
As additivity is a characteristic property of the classical information measure, Shannon entropy, pseudo-additivity is a characteristic property of Tsallis entropy. Renyi generalized Shannon entropy by means of Kolmogorov-Nagumo averages, imposing additivity as a constraint. In this paper we show that there exists no generalization of Tsallis entropy, by means of Kolmogorov-Nagumo averages, which preserves pseudo-additivity.
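For context, the Kolmogorov-Nagumo construction mentioned above can be written out; this is the standard definition, not notation taken from the paper itself.

```latex
% Kolmogorov--Nagumo (quasi-linear) average of the elementary
% information gains, for an invertible function \varphi:
\langle I \rangle_{\varphi}
  = \varphi^{-1}\!\Big(\sum_i p_i\,\varphi(I_i)\Big),
\qquad I_i = -\ln p_i .
% \varphi(x) = x recovers Shannon entropy, while
% \varphi(x) = e^{(1-\alpha)x} yields the Renyi entropy
S_\alpha = \frac{1}{1-\alpha}\,\ln\sum_i p_i^{\alpha},
% the choice singled out when additivity is imposed as a constraint.
```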
CLEO Collaboration; G. Bonvicini; D. Cinabro M. J. Smith; P. Zhou; P. Naik; J. Rademacker; K. W. Edwards; R. A. Briere; H. Vogel; J. L. Rosner; J. P. Alexander; D. G. Cassel; R. Ehrlich; L. Gibbons; S. W. Gray; D. L. Hartill; B. K. Heltsley; D. L. Kreinick; V. E. Kuznetsov; J. R. Patterson; D. Peterson; D. Riley; A. Ryd; A. J. Sadoff; X. Shi; W. M. Sun; S. Das; J. Yelton; P. Rubin; N. Lowrey; S. Mehrabyan; M. Selen; J. Wiss; J. Libby; M. Kornicer; R. E. Mitchell; D. Besson; T. K. Pedlar; D. Cronin-Hennessy; J. Hietala; S. Dobbs; Z. Metreveli; K. K. Seth; A. Tomaradze; T. Xiao; A. Powell; C. Thomas; G. Wilkinson; D. M. Asner; G. Tatishvili; J. Y. Ge; D. H. Miller; I. P. J. Shipsey; B. Xin; G. S. Adams; J. Napolitano; K. M. Ecklund; J. Insler; H. Muramatsu; L. J. Pearson; E. H. Thorndike; M. Artuso; S. Blusk; R. Mountain; T. Skwarnicki; S. Stone; J. C. Wang; L. M. Zhang; P. U. E. Onyisi
2014-08-20T23:59:59.000Z
Utilizing the full CLEO-c data sample of 818 pb$^{-1}$ of $e^+e^-$ data taken at the $\\psi(3770)$ resonance, we update our measurements of absolute hadronic branching fractions of charged and neutral $D$ mesons. We previously reported results from subsets of these data. Using a double tag technique we obtain branching fractions for three $D^0$ and six $D^+$ modes, including the reference branching fractions $\\mathcal{B} (D^0\\to K^-\\pi^+)=(3.934 \\pm 0.021 \\pm 0.061)\\%$ and $\\mathcal{B} (D^+ \\to K^- \\pi^+\\pi^+)=(9.224 \\pm 0.059 \\pm 0.157)\\%$. The uncertainties are statistical and systematic, respectively. In these measurements we include the effects of final-state radiation by allowing for additional unobserved photons in the final state, and the systematic errors include our estimates of the uncertainties of these effects. Furthermore, using an independent measurement of the luminosity, we obtain the cross sections $\\sigma(e^+e^-\\to D^0\\overline{D}{}^0)=(3.607\\pm 0.017 \\pm 0.056) \\ \\mathrm{nb}$ and $\\sigma(e^+e^-\\to D^+D^-)=(2.882\\pm 0.018 \\pm 0.042) \\ \\mathrm{nb}$ at a center of mass energy, $E_\\mathrm{cm} = 3774 \\pm 1$ MeV.
Mohanty, Saraju P.
… power, power fluctuation, average power and total energy are equally important design constraints. In this work … by the average power (energy). The increase in energy and average power consumption increases the energy bill. As the energy (average power) consumption increases, it necessitates an increase in generation, which in turn …
Revision of the Branch Technical Position on Concentration Averaging and Encapsulation - 12510
Heath, Maurice; Kennedy, James E.; Ridge, Christianne; Lowman, Donald [U.S. NRC, Washington, DC, 20555-0001 (United States); Cochran, John [Sandia National Laboratory (United States)
2012-07-01T23:59:59.000Z
The U.S. Nuclear Regulatory Commission (NRC) regulation governing low-level waste (LLW) disposal, 'Licensing Requirements for Land Disposal of Radioactive Waste', 10 CFR Part 61, establishes a waste classification system based on the concentration of specific radionuclides contained in the waste. The regulation also states, at 10 CFR 61.55(a)(8), that 'the concentration of a radionuclide (in waste) may be averaged over the volume of the waste, or weight of the waste if the units are expressed as nanocuries per gram'. The NRC's Branch Technical Position on Concentration Averaging and Encapsulation provides guidance on averaging radionuclide concentrations in waste under 10 CFR 61.55(a)(8) when classifying waste for disposal. In 2007, the NRC staff proposed to revise the Branch Technical Position on Concentration Averaging and Encapsulation. The Branch Technical Position on Concentration Averaging and Encapsulation is an NRC guidance document for averaging and classifying wastes under 10 CFR 61. The Branch Technical Position on Concentration Averaging and Encapsulation is used by nuclear power plant (NPP) licensees and sealed source users, among others. In addition, three of the four U.S. LLW disposal facility operators are required to honor the Branch Technical Position on Concentration Averaging and Encapsulation as a licensing condition. In 2010, the Commission directed the staff to develop guidance regarding large scale blending of similar homogenous waste types, as described in SECY-10-0043, as part of its Branch Technical Position on Concentration Averaging and Encapsulation revision. The Commission is improving the regulatory approach used in the Branch Technical Position on Concentration Averaging and Encapsulation by making it more risk-informed and performance-based, which is more consistent with the agency's regulatory policies.
Among the improvements to the Branch Technical Position on Concentration Averaging and Encapsulation are more risk-informed limits for the sizes of sealed sources for safe disposal. Using more realistic intruder exposure scenarios, the suggested limits for Class B and C waste disposal of sealed sources, particularly Cs-137 and Co-60, have been increased. These suggested changes, and others in the Branch Technical Position on Concentration Averaging and Encapsulation, if adopted by Agreement States, have the potential to eliminate numerous orphan sources (i.e., sources that currently have no disposal pathway) that are now being stored. Permanent disposal of these sources, rather than temporary storage, will help reduce safety and security risks. The revised Branch Technical Position on Concentration Averaging and Encapsulation has an alternative approach section which provides flexibility to generators and processors, while also ensuring that intruder protection will be maintained. Alternative approaches provide flexibility by allowing for consideration of likelihood of intrusion, the possibility of averaging over larger volumes, and disposal of large activity sources. The revision has improved the organization of the Branch Technical Position on Concentration Averaging and Encapsulation, improved its clarity, better documented the bases for positions, and made the positions more risk-informed while also maintaining protection for intruders as required by 10 CFR Part 61. (authors)
On the Fourier Transform Approach to Quantum Error Control
Hari Dilip Kumar
2012-08-24T23:59:59.000Z
Quantum codes are subspaces of the state space of a quantum system that are used to protect quantum information. Some common classes of quantum codes are stabilizer (or additive) codes, non-stabilizer (or non-additive) codes obtained from stabilizer codes, and Clifford codes. These are analyzed in a framework using the Fourier transform on finite groups, the finite group in question being a subgroup of the quantum error group considered. All the classes of codes that can be obtained in this framework are explored, including codes more general than Clifford codes. The error detection properties of one of these more general classes ("direct sums of translates of Clifford codes") are characterized. Example codes are constructed, and computer code-search results are presented and analysed.
Method and system for reducing errors in vehicle weighing systems
Hively, Lee M. (Philadelphia, TN); Abercrombie, Robert K. (Knoxville, TN)
2010-08-24T23:59:59.000Z
A method and system (10, 23) for determining vehicle weight to a precision of <0.1% uses a plurality of weight sensing elements (23) and a computer (10) for reading in weighing data for a vehicle (25), and produces a dataset representing the total weight of a vehicle via programming (40-53) that is executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing; (b) determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation from the weighing data; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% in the vehicle weight.
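Steps (a)-(d) of the claim can be sketched as an iterative fit-and-subtract loop; the least-squares sinusoid fit, candidate frequencies, and synthetic data below are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

# Sketch of steps (a)-(d): iteratively fit and subtract the dominant
# oscillatory mode from weigh-in-motion data until the weight estimate
# is stable to <0.1%. All parameters here are illustrative assumptions.

def remove_dominant_mode(t, w, freqs):
    best = None
    for f in freqs:
        # (a) mode parameters: fit w ~ mean + a*sin(2πft) + b*cos(2πft)
        A = np.column_stack([np.ones_like(t),
                             np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t)])
        coef = np.linalg.lstsq(A, w, rcond=None)[0]
        resid = np.sum((w - A @ coef) ** 2)
        # (b) keep the mode with minimum residual error
        if best is None or resid < best[0]:
            best = (resid, A, coef)
    _, A, coef = best
    # (c) subtract the fitted oscillation, keeping the constant term
    return w - A[:, 1:] @ coef[1:]

t = np.linspace(0.0, 2.0, 400)
w = 40_000.0 + 300.0 * np.sin(2 * np.pi * 3.0 * t + 0.4)  # lb, synthetic
for _ in range(5):                      # (d) repeat until change < 0.1%
    prev = w.mean()
    w = remove_dominant_mode(t, w, freqs=[1.0, 2.0, 3.0, 5.0])
    if abs(w.mean() - prev) / prev < 1e-3:
        break
print(round(w.mean()))                  # 40000
```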
MPI Runtime Error Detection with MUST: Advances in Deadlock Detection
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; de Supinski, Bronis R.; Müller, Matthias S.
2013-01-01T23:59:59.000Z
The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real-world applications.
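The graph-based deadlock detection the abstract refers to reduces, in its simplest form, to finding a cycle in a wait-for graph; MUST's actual analysis (AND/OR wait semantics for operations like wait-any) is far richer. A minimal sketch:

```python
# Sketch: deadlock detection as cycle search in a wait-for graph
# (process -> processes it blocks on), via DFS back-edge detection.
# This is only the core idea, not MUST's full MPI-aware model.

def find_deadlock_cycle(wait_for):
    """Return one cycle in the wait-for graph as a list, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}
    stack = []

    def dfs(p):
        color[p] = GRAY
        stack.append(p)
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:      # back edge: cycle found
                return stack[stack.index(q):] + [q]
            if color.get(q, WHITE) == WHITE:
                found = dfs(q)
                if found:
                    return found
        stack.pop()
        color[p] = BLACK
        return None

    for p in list(wait_for):
        if color[p] == WHITE:
            cycle = dfs(p)
            if cycle:
                return cycle
    return None

# Rank 0 waits on 1, 1 on 2, 2 on 0: a classic circular MPI deadlock.
print(find_deadlock_cycle({0: [1], 1: [2], 2: [0]}))  # [0, 1, 2, 0]
print(find_deadlock_cycle({0: [1], 1: [2], 2: []}))   # None
```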
Probabilistic growth of large entangled states with low error accumulation
Yuichiro Matsuzaki; Simon C Benjamin; Joseph Fitzsimons
2009-08-03T23:59:59.000Z
The creation of complex entangled states, resources that enable quantum computation, can be achieved via simple 'probabilistic' operations which are individually likely to fail. However, typical proposals exploiting this idea carry a severe overhead in terms of the accumulation of errors. Here we describe a method that can rapidly generate large entangled states with an error accumulation that depends only logarithmically on the failure probability. We find that the approach may be practical for success rates in the sub-10% range, while ultimately becoming unfeasible at lower rates. The assumptions that we make, including parallelism and high connectivity, are appropriate for real systems including measurement-induced entanglement. This result therefore shows the feasibility of real devices based on such an approach.
Comparison of Wind Power and Load Forecasting Error Distributions: Preprint
Hodge, B. M.; Florita, A.; Orwig, K.; Lew, D.; Milligan, M.
2012-07-01T23:59:59.000Z
The introduction of large amounts of variable and uncertain power sources, such as wind power, into the electricity grid presents a number of challenges for system operations. One issue involves the uncertainty associated with scheduling power that wind will supply in future timeframes. However, this is not an entirely new challenge; load is also variable and uncertain, and is strongly influenced by weather patterns. In this work we make a comparison between the day-ahead forecasting errors encountered in wind power forecasting and load forecasting. The study examines the distribution of errors from operational forecasting systems in two different Independent System Operator (ISO) regions for both wind power and load forecasts at the day-ahead timeframe. The day-ahead timescale is critical in power system operations because it serves the unit commitment function for slow-starting conventional generators.
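A distribution comparison of the kind described can be sketched with summary statistics on synthetic errors; the Laplace/Gaussian choices below only mimic heavy-tailed wind versus near-normal load error shapes and are not the study's ISO data.

```python
import numpy as np

# Sketch: summarize day-ahead forecast error distributions the way a
# wind-vs-load comparison would. Samples are synthetic (Laplace for wind
# to mimic heavy tails, Gaussian for load), not operational ISO data.

def error_stats(errors):
    e = np.asarray(errors, dtype=float)
    z = (e - e.mean()) / e.std()
    return {
        "MAE": np.abs(e).mean(),
        "RMSE": np.sqrt((e ** 2).mean()),
        "excess_kurtosis": (z ** 4).mean() - 3.0,  # 0 for a normal dist.
    }

rng = np.random.default_rng(0)
wind_err = rng.laplace(0.0, 0.05, 100_000)   # fraction of capacity
load_err = rng.normal(0.0, 0.02, 100_000)

# Laplace errors show large positive excess kurtosis (~3); Gaussian ~0,
# which is the kind of tail difference such comparisons highlight.
print(error_stats(wind_err)["excess_kurtosis"])
print(error_stats(load_err)["excess_kurtosis"])
```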
On the efficiency of nondegenerate quantum error correction codes for Pauli channels
Gunnar Bjork; Jonas Almlof; Isabel Sainz
2009-05-19T23:59:59.000Z
We examine the efficiency of pure, nondegenerate quantum error-correction codes for Pauli channels. Specifically, we investigate whether correction of multiple errors in a block is more efficient than using a code that only corrects one error per block. Block coding with multiple-error correction cannot increase the efficiency when the qubit error probability is below a certain value and the code size is fixed. More surprisingly, existing multiple-error correction codes with a code length equal to or less than 256 qubits have lower efficiency than the optimal single-error correcting codes for any value of the qubit error probability. We also investigate how efficient various proposed nondegenerate single-error correcting codes are compared to the limit set by the code redundancy and by the necessary conditions for hypothetically existing nondegenerate codes. We find that existing codes are close to optimal.
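The comparison described above rests on the block failure probability of a t-error-correcting code under independent Pauli noise; a minimal sketch (the [[5,1,3]] and [[23,1,7]] codes are standard examples assumed here, not codes analyzed in the paper):

```python
from math import comb

# Sketch: failure probability of a nondegenerate block code correcting up
# to t errors among n qubits, with independent per-qubit Pauli error
# probability p. The failure event is "more than t qubits in error".

def block_failure(n: int, t: int, p: float) -> float:
    ok = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))
    return 1.0 - ok

p = 1e-3
print(block_failure(5, 1, p))    # ~1e-5 for the [[5,1,3]] code
print(block_failure(23, 3, p))   # ~9e-9 for the [[23,1,7]] Golay code
```

Comparing such failure probabilities per encoded qubit against the redundancy n/k is one simple way to frame the efficiency question the abstract raises.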
Scaling behavior of discretization errors in renormalization and improvement constants
Bhattacharya, T; Lee, W; Sharpe, S R; Bhattacharya, Tanmoy; Gupta, Rajan; Lee, Weonjong; Sharpe, Stephen R.
2006-01-01T23:59:59.000Z
Non-perturbative results for improvement and renormalization constants needed for on-shell and off-shell O(a) improvement of bilinear operators composed of Wilson fermions are presented. The calculations have been done in the quenched approximation at beta=6.0, 6.2 and 6.4. To quantify residual discretization errors we compare our data with results from other non-perturbative calculations and with one-loop perturbation theory.
Error message recording and reporting in the SLC control system
Spencer, N.; Bogart, J.; Phinney, N.; Thompson, K.
1985-04-01T23:59:59.000Z
Error or information messages that are signaled by control software either in the VAX host computer or the local microprocessor clusters are handled by a dedicated VAX process (PARANOIA). Messages are recorded on disk for further analysis and displayed at the appropriate console. Another VAX process (ERRLOG) can be used to sort, list and histogram various categories of messages. The functions performed by these processes and the algorithms used are discussed.
Runtime Detection of C-Style Errors in UPC Code
Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D
2011-09-29T23:59:59.000Z
Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
Borkovits, Tamás; Kiss, László L; Király, Amanda; Forgács-Dajka, Emese; Bíró, Imre Barna; Bedding, Timothy R; Bryson, Stephen T; Huber, Daniel; Szabó, Róbert
2012-01-01T23:59:59.000Z
HD 181068 is the brighter of the two known triply eclipsing hierarchical triple stars in the Kepler field. It has been continuously observed for more than 2 years with the Kepler space telescope. Of the nine quarters of data, three have been obtained in short-cadence mode, i.e., one data point every 58.9 s. Here we analyse this unique dataset to determine absolute physical parameters (most importantly the masses and radii) and the full orbital configuration using a sophisticated novel approach. We measure eclipse timing variations (ETVs), which are then combined with the single-lined radial velocity measurements to yield masses in a manner equivalent to double-lined spectroscopic binaries. We have also developed a new light curve synthesis code that is used to model the triple, mutual eclipses and the effects of the changing tidal field on the stellar surface and the relativistic Doppler-beaming. By combining the stellar masses from the ETV study with the simultaneous light curve analysis we determine the absolute...
McIntyre, Justin I.; Cooper, Matthew W.; Ely, James H.; Haas, Derek A.; Schrom, Brian T.
2012-09-21T23:59:59.000Z
Efforts to calibrate the absolute efficiency of gas cell radiation detectors have utilized a number of methodologies which allow adequate calibration but are time consuming and prone to a host of difficult-to-determine uncertainties. A method that extrapolates the total source strength from the measured beta and gamma-gated beta coincidence signal was developed in the 1960s and 1970s. It has become clear that it is possible to achieve more consistent results across a range of isotopes and a range of activities using this method. Even more compelling is the ease with which this process can be used on routine samples to determine the total activity present in the detector. Additionally, recent advances in the generation of isotopically pure radioxenon samples of Xe-131m, Xe-133, and Xe-135 have allowed these measurement techniques to achieve much better results than would have been possible before when using mixed isotopic radioxenon sources. This paper will discuss the beta/gamma absolute detection efficiency technique that utilizes several of the beta-gamma decay signatures to more precisely determine the beta and gamma efficiencies. It will then compare these results with other methods using pure sources of Xe-133, Xe-131m, and Xe-135 and a Xe-133/Xe-133m mix.
Veligdan, James T. (Manorville, NY)
1993-01-01T23:59:59.000Z
Atmospheric effects on sighting measurements are compensated for by adjusting any sighting measurements using a correction factor that does not depend on atmospheric state conditions such as temperature, pressure, density or turbulence. The correction factor is accurately determined using a precisely measured physical separation between two color components of a light beam (or beams) that has been generated using either a two-color laser or two lasers that project different colored beams. The physical separation is precisely measured by fixing the position of a short beam pulse and measuring the physical separation between the two fixed-in-position components of the beam. This precisely measured physical separation is then used in a relationship that includes the indexes of refraction for each of the two colors of the laser beam in the atmosphere through which the beam is projected, thereby determining the absolute displacement of one wavelength component of the laser beam from a straight line of sight for that projected component of the beam. This absolute displacement is useful to correct optical measurements, such as those developed in surveying measurements that are made in a test area that includes the same dispersion effects of the atmosphere on the optical measurements. The means and method of the invention are suitable for use with either single-ended or double-ended systems.
Monelli, M; Bono, G; Ferraro, I; Iannicola, G; Fiorentino, G; Arcidiacono, C; Massari, D; Boutsia, K; Briguglio, R; Busoni, L; Carini, R; Close, L; Cresci, G; Esposito, S; Fini, L; Fumana, M; Guerra, J C; Hill, J; Kulesa, C; Mannucci, F; McCarthy, D; Pinna, E; Puglisi, A; Quiros-Pacheco, F; Ragazzoni, R; Riccardi, A; Skemer, A; Xompero, M
2015-01-01T23:59:59.000Z
We present deep near-infrared (NIR) J, Ks photometry of the old, metal-poor Galactic globular cluster M15 obtained with images collected with the LUCI1 and PISCES cameras available at the Large Binocular Telescope (LBT). We show how the use of the First Light Adaptive Optics (FLAO) system coupled with the PISCES camera allows us to improve the limiting magnitude by ~2 mag in Ks. By analyzing archival HST data, we demonstrate that the quality of the LBT/PISCES color-magnitude diagram is fully comparable with analogous space-based data. The smaller field of view is balanced by the shorter exposure time required to reach a similar photometric limit. We investigated the absolute age of M15 by means of two methods: i) by determining the age from the position of the main-sequence turn-off (MSTO); and ii) from the magnitude difference between the MSTO and the well-defined knee detected along the faint portion of the MS. We derive consistent values for the absolute age of M15, that is 12.9+-2.6 Gyr and 13.3+-1.1 Gyr, respectively...
Chaotic motion at the emergence of the time averaged energy decay
Cesar Manchein; Jane Rosa; Marcus W. Beims
2009-05-29T23:59:59.000Z
A system-plus-environment conservative model is used to characterize the nonlinear dynamics when the time averaged energy of the system particle starts to decay. The system particle dynamics is regular for small numbers $N$ of environment oscillators and becomes chaotic in the interval $13 \le N \le 15$, where the system time averaged energy starts to decay. To characterize the nonlinear motion we estimate the Lyapunov exponent (LE), determine the power spectrum, and compute the Kaplan-Yorke dimension. For much larger values of $N$ the energy of the system particle is completely transferred to the environment and the corresponding LEs decrease. Numerical evidence shows the connection between variations of the amplitude of the particle's energy-time oscillation and the time averaged energy decay and trapped trajectories.
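The Lyapunov exponent named in this abstract is the standard diagnostic of chaos: the average exponential rate at which nearby orbits separate. A minimal sketch of the estimate for a one-dimensional map (the logistic map, not the paper's model, chosen because its exponent at r = 4 is exactly ln 2) averages log|f'(x)| along an orbit:

```python
import math

def lyapunov_logistic(r, x0=0.1, n_transient=100, n_iter=100_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1 - 2x)| along a single orbit."""
    x = x0
    for _ in range(n_transient):       # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n_iter
```

A positive estimate (here, about ln 2 ≈ 0.693 for r = 4) signals chaos; for the paper's system-plus-environment model the same averaging is done over the tangent dynamics of the full phase space.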
Reconstruction of ionization probabilities from spatially averaged data in N dimensions
Strohaber, J.; Kolomenskii, A. A.; Schuessler, H. A. [Department of Physics, Texas A and M University, College Station, Texas 77843-4242 (United States)
2010-07-15T23:59:59.000Z
We present an analytical inversion technique, which can be used to recover ionization probabilities from spatially averaged data in an N-dimensional detection scheme. The solution is given as a power series in intensity. For this reason, we call this technique a multiphoton expansion (MPE). The MPE formalism was verified with an exactly solvable inversion problem in two dimensions, and probabilities in the postsaturation region, where the intensity-selective scanning approach breaks down, were recovered. In three dimensions, ionization probabilities of Xe were successfully recovered with MPE from simulated (using the Ammosov-Delone-Krainov tunneling theory) ion yields. Finally, we tested our approach with intensity-resolved benzene-ion yields, which show a resonant multiphoton ionization process. By applying MPE to this data (which were artificially averaged), the resonant structure was recovered, which suggests that the resonance in benzene may have been observed in spatially averaged data taken elsewhere.
Cropper, Clark [University of Tennessee, Knoxville (UTK); Perfect, Edmund [ORNL; van den Berg, Dr. Elmer [University of Tennessee, Knoxville (UTK); Mayes, Melanie [ORNL
2010-01-01T23:59:59.000Z
The capillary pressure-saturation function can be determined from centrifuge drainage experiments. In soil physics, the data resulting from such experiments are usually analyzed by the 'averaging method.' In this approach, average relative saturation, <S>, is expressed as a function of average capillary pressure, <psi>, i.e., <S>(<psi>). In contrast, the capillary pressure-saturation function at a physical point, i.e., S(psi), has been extracted from similar experiments in petrophysics using the 'integral method.' The purpose of this study was to introduce the integral method applied to centrifuge experiments to a soil physics audience and to compare the S(psi) and <S>(<psi>) functions, as parameterized by the Brooks-Corey and van Genuchten equations, for 18 samples drawn from a range of porous media (i.e., Berea sandstone, glass beads, and Hanford sediments). Steady-state centrifuge experiments were performed on preconsolidated samples with a URC-628 Ultra-Rock Core centrifuge. The angular velocity and outflow data sets were then analyzed using both the averaging and integral methods. The results show that the averaging method smooths out the drainage process, yielding less steep capillary pressure-saturation functions relative to the corresponding point-based curves. Maximum deviations in saturation between the two methods ranged from 0.08 to 0.28 and generally occurred at low suctions. These discrepancies can lead to inaccurate predictions of other hydraulic properties such as the relative permeability function. Therefore, we strongly recommend use of the integral method instead of the averaging method when determining the capillary pressure-saturation function by steady-state centrifugation. This method can be successfully implemented using either the van Genuchten or Brooks-Corey functions, although the latter provides a more physically precise description of air entry at a physical point.
HEALTH POLICY AND SYSTEMS Nurses' Practice Environments, Error Interception Practices,
Xie, Minge
7,000 inpatient deaths per year in the United States (US). On average, a U.S. hospital patient ... (College of Nursing, Rutgers, the State University of New Jersey, Newark, NJ)
"2013 Total Electric Industry- Average Retail Price (cents/kWh)"
U.S. Energy Information Administration (EIA) Indexed Site
"Table A25 Average Prices of Selected Purchased Energy Sources by Census"
U.S. Energy Information Administration (EIA) Indexed Site
"Table A25. Average Prices of Selected Purchased Energy Sources by Census"
U.S. Energy Information Administration (EIA) Indexed Site
Note on an integral expression for the average lifetime of the bound state in 2D
Thorsten Prustel; Martin Meier-Schellersheim
2012-10-04T23:59:59.000Z
Recently, an exact Green's function of the diffusion equation for a pair of spherical interacting particles in two dimensions subject to a backreaction boundary condition was used to derive an exact expression for the average lifetime of the bound state. Here, we show that the corresponding divergent integral may be considered as the formal limit of a Stieltjes transform. Upon analytically calculating the Stieltjes transform one can obtain an exact expression for the finite part of the divergent integral and hence for the average lifetime.
V-109: Google Chrome WebKit Type Confusion Error Lets Remote...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
A type confusion error in WebKit in Google Chrome lets remote users execute arbitrary code.
T-545: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets...
A heap corruption error in 'vidplin.dll' in RealPlayer lets remote users execute arbitrary code.
Recompile if your codes run into MPICH error after the maintenance...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Recompile if your codes run into MPICH errors after the maintenance on 6/25/2014. June 27, 2014.
Design techniques for graph-based error-correcting codes and their applications
Lan, Ching Fu
2006-04-12T23:59:59.000Z
The main idea of error-correcting (channel) codes is to add redundancy to the information to be transmitted so that the receiver can exploit the correlation between transmitted information and redundancy and correct or detect errors caused...
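The add-redundancy idea can be shown with the simplest possible channel code, a rate-1/3 repetition code: each bit is sent three times and the receiver takes a majority vote, which corrects any single bit flip per block. This is a pedagogical sketch, not one of the graph-based codes the thesis designs.

```python
def encode(bits, n=3):
    """Repetition code: transmit each bit n times (the added redundancy)."""
    return [b for b in bits for _ in range(n)]

def decode(received, n=3):
    """Majority vote over each block of n copies; corrects up to
    (n - 1) // 2 flipped bits per block."""
    out = []
    for i in range(0, len(received), n):
        block = received[i:i + n]
        out.append(1 if sum(block) * 2 > n else 0)
    return out
```

Graph-based codes (LDPC and friends) achieve the same effect far more efficiently by spreading parity constraints over a sparse graph rather than brute-force repetition.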
Simulations of error in quantum adiabatic computations of random 2-SAT instances
Gill, Jay S. (Jay Singh)
2006-01-01T23:59:59.000Z
This thesis presents a series of simulations of quantum computations using the adiabatic algorithm. The goal is to explore the effect of error, using a perturbative approach that models 1-local errors to the Hamiltonian ...
T-719:Apache mod_proxy_ajp HTTP Processing Error Lets Remote Users Deny Service
Broader source: Energy.gov [DOE]
A remote user can cause the backend server to remain in an error state until the retry timeout expires.
McReynolds, W.L. (Bonneville Power Administration, Vancouver, WA (US)); Badley, D.E. (N.W. Power Pool, Coordinating Office, Portland, OR (US))
1991-08-01T23:59:59.000Z
This paper describes an automatic generation control (AGC) system that simultaneously reduces time error and accumulated inadvertent interchange energy in an interconnected power system. The method is automatic time error and accumulated inadvertent interchange reduction (AIIR). With this method, control areas help correct the system time error when doing so also tends to correct accumulated inadvertent interchange. Thus, in one step, accumulated inadvertent interchange and system time error are corrected.
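The decision rule described above reduces to a sign check: an area contributes a time-error correction only when that correction would also unwind its own accumulated inadvertent interchange. A hedged sketch of that logic follows; the function name, sign conventions, and gain are illustrative assumptions, not the BPA implementation.

```python
def aiir_bias(time_error_s, inadvertent_mwh, gain_mw_per_s=10.0):
    """Generation bias (MW) for one control area under an AIIR-style rule.

    time_error_s    : system time error (positive = clock running fast)
    inadvertent_mwh : this area's accumulated inadvertent interchange
                      (positive = net over-delivery)
    Correct time error only when the same adjustment also reduces the
    area's inadvertent accumulation, i.e., when the two share a sign.
    """
    if time_error_s * inadvertent_mwh > 0:
        # Fast time is corrected by under-generating, which simultaneously
        # unwinds a positive inadvertent balance (and vice versa).
        return -gain_mw_per_s * time_error_s
    return 0.0
```

Areas whose inadvertent balance opposes the needed time correction simply sit out, so the interconnection fixes both quantities without areas trading one error for the other.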
Optimum decoding of TCM in the presence of phase errors
Han, Jae Choong
1990-01-01T23:59:59.000Z
discussed. Our approach is to assume that intersymbol interference has been effectively removed by the equalizer while the phase tracking scheme has partially removed the phase jitter, in which case the output of the equalizer will have a slowly varying... The DAL [1] used the decision at the output of the Viterbi decoder to demodulate the local carrier. The performance degradation of coded 8-PSK when disturbed by recovered carrier phase error and jitter is investigated in [6], in which simulation...