U.S. Energy Information Administration (EIA) Indexed Site
Errors of Nonobservation Finally, several potential sources of nonsampling error and bias result from errors of nonobservation. The 1994 MECS represents, in terms of sampling...
Superconvergence of the derivative patch recovery technique and a posteriori error estimation
Zhang, Z.; Zhu, J.Z.
1995-12-31
The derivative patch recovery technique developed by Zienkiewicz and Zhu for the finite element method is analyzed. It is shown that, for one-dimensional problems and for two-dimensional problems using tensor-product elements, the patch recovery technique yields superconvergent recovery of the derivatives. Consequently, the error estimator based on the recovered derivative is asymptotically exact.
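As a rough, hypothetical illustration of the idea (not the authors' code), the sketch below recovers a nodal derivative for 1-D linear elements by linearly fitting the derivative samples at the two adjacent element midpoints, the superconvergent points; for u(x) = x^2 on a uniform mesh the recovered value is exact, while the raw one-sided finite element derivatives are only first-order accurate:

```python
def recover_derivative(xs, us, i):
    """Zienkiewicz-Zhu-style recovery at interior node i for 1-D linear
    elements: fit a straight line through the derivative samples taken at
    the two adjacent element midpoints (the superconvergent points) and
    evaluate it at the node."""
    dL = (us[i] - us[i - 1]) / (xs[i] - xs[i - 1])   # FE derivative, left element
    dR = (us[i + 1] - us[i]) / (xs[i + 1] - xs[i])   # FE derivative, right element
    mL = 0.5 * (xs[i - 1] + xs[i])                   # left element midpoint
    mR = 0.5 * (xs[i] + xs[i + 1])                   # right element midpoint
    t = (xs[i] - mL) / (mR - mL)
    return (1.0 - t) * dL + t * dR                   # linear fit evaluated at the node

# u(x) = x^2 interpolated on a uniform mesh: the recovered nodal
# derivative reproduces u'(x) = 2x exactly at interior nodes.
xs = [0.1 * j for j in range(11)]
us = [x * x for x in xs]
rec = recover_derivative(xs, us, 5)    # node x = 0.5, exact derivative 1.0
```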
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Error and fault abstractions Mattan Erez UT Austin *Who should care about faults and errors? *Ideally, only system cares about masked faults? - Assuming application bugs are not...
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware-based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to heat the hardware-based processor to a degree that increases the likelihood that hardware errors will manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run and that might otherwise go undetected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
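A minimal sketch of the comparison step only, assuming a deterministic compute-heavy kernel; the patent's heating and stressing machinery is not modeled, and the function names are hypothetical:

```python
import hashlib

def stress_algorithm(n):
    """Deterministic, compute-heavy kernel: repeated hashing. On real
    hardware a long run heats the processor, which (per the idea above)
    makes latent hardware faults more likely to manifest in the output."""
    h = hashlib.sha256(b"seed")
    for i in range(n):
        h = hashlib.sha256(h.digest() + i.to_bytes(4, "little"))
    return h.hexdigest()

def detect_hardware_error(n=100000):
    """Run the same deterministic kernel twice and compare the outputs;
    any mismatch indicates a hardware error during one of the runs."""
    return stress_algorithm(n) != stress_algorithm(n)

faulty = detect_hardware_error(10000)   # False on healthy hardware
```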
Trouble Shooting and Error Messages
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
... Check the error code of your application. error obtaining user credentials system Resubmit. Contact consultants for repeated problems. nemgnierrorhandler(): a transaction error ...
U.S. Energy Information Administration (EIA) Indexed Site
a complete enumeration has the same nonsampling errors as the sample survey. The sampling error, or standard error of the estimate, is a measure of the variability among the...
Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark
1999-01-01
A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
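A hedged sketch of the general idea of modular embedding (not the patented method itself): nudge each host value to the nearest value congruent to the auxiliary symbol modulo m, which carries log2(m) bits per sample with a smaller worst-case error than straight low-bit replacement:

```python
def embed(host, symbols, m=4):
    """Embed base-m symbols by nudging each host value to the nearest
    value congruent to the symbol mod m (error at most m // 2, versus
    up to m - 1 for straight low-bit replacement)."""
    out = []
    for v, s in zip(host, symbols):
        delta = (s - v) % m
        if delta > m // 2:
            delta -= m          # step down instead of up when closer
        out.append(v + delta)
    return out

def extract(stego, m=4):
    """Recover the embedded symbols from the low-order residues."""
    return [v % m for v in stego]

host = [100, 101, 102, 103]     # hypothetical pixel values
syms = [3, 0, 1, 2]             # auxiliary data, 2 bits per sample
stego = embed(host, syms)
assert extract(stego) == syms   # round trip recovers the auxiliary data
```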
Trouble Shooting and Error Messages
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Trouble Shooting and Error Messages. Error Messages (Message or Symptom / Fault / Recommendation):
- job hit wallclock time limit / user or system / Submit the job for a longer time, or start the job from the last checkpoint and resubmit. If your job hung and produced no output, contact consultants.
- received node failed or halted event for nid xxxx / system / Resubmit the job.
- error with width parameters to aprun / user / Make sure the #PBS -l mppwidth value matches the aprun -n value ...
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
Confidence limits and their errors
Rajendran Raja
2002-03-22
Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.
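As a sketch of the kind of limit under discussion, the stdlib-only function below computes a classical Poisson upper limit by bisection; the paper's point is that such a limit, being a statistic itself, carries its own statistical error that should be quoted alongside it. The function name, search bounds, and the one-event "probe" at the end are illustrative assumptions, not the paper's procedure:

```python
import math

def poisson_upper_limit(n_obs, cl=0.90, lam_hi=50.0, tol=1e-8):
    """Smallest Poisson mean lam with P(N <= n_obs | lam) = 1 - cl,
    i.e. the classical upper limit, found by bisection."""
    def p_le(lam):
        term = math.exp(-lam)
        total = term
        for k in range(1, n_obs + 1):
            term *= lam / k
            total += term
        return total
    lo, hi = 0.0, lam_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p_le(mid) > 1.0 - cl:
            lo = mid               # mean still too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 90% CL upper limit for zero observed events: -ln(0.10), about 2.30
ul0 = poisson_upper_limit(0)
# Crude probe of how strongly the limit itself fluctuates with the data:
# shift the observed count by one event either way.
spread = poisson_upper_limit(4) - poisson_upper_limit(2)
```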
Error studies for SNS Linac. Part 1: Transverse errors
Crandall, K.R.
1998-12-31
The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL), and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).
Trouble Shooting and Error Messages
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Trouble Shooting and Error Messages. Error Messages (Message or Symptom / Fault / Recommendation):
- job hit wallclock time limit / user or system / Submit the job for a longer time, or start the job from the last checkpoint and resubmit. If your job hung and produced no output, contact consultants.
- received node failed or halted event for nid xxxx / system / One of the compute nodes assigned to the job failed. Resubmit the job.
- PtlNIInit failed : PTL_NOT_REGISTERED / user / The executable is from ...
Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results
Clark, E.L.
1994-07-01
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M{infinity}, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
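A generic, hypothetical sketch of the first-order Taylor-series model underlying such error equations, with the sensitivity coefficients approximated by central differences rather than derived analytically (the pressure-ratio example and its numbers are illustrative, not taken from the report):

```python
def propagate_error(f, x, sx, h=1e-6):
    """First-order Taylor-series (root-sum-square) error propagation:
    sigma_f^2 = sum_i (df/dx_i * sigma_i)^2, with each sensitivity
    coefficient df/dx_i estimated by a central difference."""
    var = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2.0 * h)   # sensitivity coefficient
        var += (dfdx * sx[i]) ** 2
    return var ** 0.5

# Hypothetical example: uncertainty in a pressure ratio r = p / p_ref
# with p = 150 +/- 1 and p_ref = 100 +/- 1 (made-up values).
r_err = propagate_error(lambda v: v[0] / v[1], [150.0, 100.0], [1.0, 1.0])
```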
Error and uncertainty in Raman thermal conductivity measurements
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Thomas Edwin Beechem; Yates, Luke; Graham, Samuel
2015-04-22
We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule heating of a wire and laser heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature-dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman-based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.
Field errors in hybrid insertion devices
Schlueter, R.D.
1995-02-01
Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.
Clover: Compiler directed lightweight soft error resilience
Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh
2015-05-01
This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUEs (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error was detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
Approximate error conjugate gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
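A minimal sketch of the subset idea, assuming the error being minimized is a sum of squared per-ray residuals; the names, the fixed seed, and the inverse-fraction scaling are illustrative choices, not the patented method:

```python
import random

def approximate_error(residuals, sample_frac=0.1, rng=None):
    """Approximate the total squared error over all rays using only a
    random subset of rays, scaled by the inverse sampling fraction, so
    each conjugate-gradient step is cheaper than a full evaluation."""
    rng = rng or random.Random(0)          # fixed seed for reproducibility
    k = max(1, int(len(residuals) * sample_frac))
    subset = rng.sample(residuals, k)
    return sum(r * r for r in subset) * (len(residuals) / k)

rays = [0.1 * i for i in range(100)]       # hypothetical per-ray residuals
exact = sum(r * r for r in rays)
approx = approximate_error(rays, sample_frac=0.2)
# With sample_frac=1.0 the approximation reduces to the exact error.
```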
Impact of Measurement Error on Synchrophasor Applications
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
Error handling strategies in multiphase inverse modeling
Finsterle, S.; Zhang, Y.
2010-12-01
Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
Group representations, error bases and quantum codes
Knill, E
1996-01-01
This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.
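As a concrete elementary instance (chosen here for illustration, not drawn from the report), the single-qubit Pauli operators form a nice error basis; the sketch below checks their pairwise trace-orthogonality under the Hilbert-Schmidt inner product:

```python
# Single-qubit Pauli matrices as plain nested lists (stdlib only).
I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def hs_inner(A, B):
    """Hilbert-Schmidt inner product tr(A^dagger B) for 2x2 matrices."""
    return sum(A[i][j].conjugate() * B[i][j]
               for i in range(2) for j in range(2))

# Distinct Pauli operators are trace-orthogonal, and each has
# tr(P^dagger P) = 2, so any 2x2 error operator expands uniquely
# in this basis.
paulis = [I, X, Y, Z]
gram = [[hs_inner(A, B) for B in paulis] for A in paulis]
```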
Linux Kernel Error Detection and Correction
Energy Science and Technology Software Center (OSTI)
2007-04-11
EDAC-utils consists of a library and a set of utilities for retrieving statistics from the Linux Kernel Error Detection and Correction (EDAC) drivers.
runtime error message: "readControlMsg: System returned error Connection timed out on TCP socket fd"
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
June 30, 2015. Symptom: User jobs with single or multiple apruns in a batch script may get the runtime error "readControlMsg: System returned error Connection timed out on TCP socket fd". This problem is intermittent; sometimes a resubmit works.
Wind Power Forecasting Error Distributions over Multiple Timescales (Presentation)
Hodge, B. M.; Milligan, M.
2011-07-01
This presentation provides a statistical analysis of wind power forecast errors and error distributions over multiple timescales, with examples using ERCOT data.
Error recovery to enable error-free message transfer between nodes of a computer network
Blumrich, Matthias A.; Coteus, Paul W.; Chen, Dong; Gara, Alan; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd; Steinmacher-Burow, Burkhard; Vranas, Pavlos M.
2016-01-26
An error-recovery method to enable error-free message transfer between nodes of a computer network. A first node of the network sends a packet to a second node of the network over a link between the nodes, and the first node keeps a copy of the packet on a sending end of the link until the first node receives acknowledgment from the second node that the packet was received without error. The second node tests the packet to determine if the packet is error free. If the packet is not error free, the second node sets a flag to mark the packet as corrupt. The second node returns acknowledgement to the first node specifying whether the packet was received with or without error. When the packet is received with error, the link is returned to a known state and the packet is sent again to the second node.
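A toy sketch of the keep-a-copy-and-retransmit pattern described above, using a CRC32 as the error test and a deliberately noisy channel; all names and the choice of CRC are illustrative assumptions, not the patented protocol:

```python
import zlib

def make_packet(payload: bytes) -> bytes:
    """The sender keeps the logical copy of the payload; the wire format
    carries a CRC32 so the receiver can test the packet for errors."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive(packet: bytes):
    """Return (payload, ok); ok=False corresponds to flagging the
    packet as corrupt."""
    payload, crc = packet[:-4], packet[-4:]
    return payload, zlib.crc32(payload).to_bytes(4, "big") == crc

def send_reliable(payload: bytes, channel, max_tries=5):
    """Retransmit over the channel until the receiver acknowledges an
    error-free packet (the sender's retained copy enables each resend)."""
    for _ in range(max_tries):
        data, ok = receive(channel(make_packet(payload)))
        if ok:
            return data            # positive acknowledgement path
    raise RuntimeError("link did not recover")

# A channel that corrupts the first transmission, then behaves.
flips = iter([True, False, False])
def noisy(pkt: bytes) -> bytes:
    return pkt[:1] + b"\x00" + pkt[2:] if next(flips) else pkt

msg = send_reliable(b"hello", noisy)   # recovered on the second try
```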
Quantum error-correcting codes and devices
Gottesman, Daniel
2000-10-03
A method of forming quantum error-correcting codes by first forming a stabilizer for a Hilbert space. A quantum information processing device can be formed to implement such quantum codes.
Evaluating operating system vulnerability to memory errors.
Ferreira, Kurt Brian; Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke; Mueller, Frank; Fiala, David; Brightwell, Ronald Brian
2012-05-01
Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.
Neutron multiplication error in TRU waste measurements
Veilleux, John [Los Alamos National Laboratory; Stanfield, Sean B [CCP; Wachter, Joe [CCP; Ceo, Bob [CCP
2009-01-01
Total Measurement Uncertainty (TMU) in neutron assays of transuranic waste (TRU) are comprised of several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors; measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are
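The acceptance arithmetic described above can be sketched directly; the thresholds restate the report's 200 g (55-gal drums) and 325 g (boxed waste) criteria, while the function name is a hypothetical label, and this is an illustration rather than certified assay software:

```python
def tru_waste_accepts(fge, tmu_error, container="drum"):
    """Acceptance test from the report: measured Fissile Gram Equivalent
    (FGE) plus twice the TMU error must be below 200 for 55-gal drums
    and below 325 for boxed waste, so large neutron-multiplication
    errors drive rejections even at acceptable FGE."""
    limit = 200.0 if container == "drum" else 325.0
    return fge + 2.0 * tmu_error < limit

ok = tru_waste_accepts(150.0, 20.0)            # 150 + 40 = 190 < 200
rejected = not tru_waste_accepts(150.0, 30.0)  # 150 + 60 = 210, rejected
boxed_ok = tru_waste_accepts(300.0, 10.0, container="box")
```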
Superdense coding interleaved with forward error correction
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Humble, Travis S.; Sadlier, Ronald J.
2016-05-12
Superdense coding promises increased classical capacity and communication security, but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
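A toy sketch of FEC with block interleaving (a repetition-3 code stands in for whatever FEC the study used, and all names are illustrative): a burst that would overwhelm a single codeword is spread across several codewords, so majority voting still recovers the message:

```python
def fec_encode(bits):
    """Repetition-3 FEC: each bit becomes a 3-bit codeword."""
    return [b for b in bits for _ in range(3)]

def fec_decode(bits):
    """Majority vote over each 3-bit codeword."""
    return [1 if sum(bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(bits), 3)]

def interleave(bits, depth):
    """Block interleaver: write row-wise, read column-wise
    (length must be a multiple of depth)."""
    rows = [bits[i:i + depth] for i in range(0, len(bits), depth)]
    return [rows[r][c] for c in range(depth) for r in range(len(rows))]

def deinterleave(bits, depth):
    n_rows = len(bits) // depth
    cols = [bits[i:i + n_rows] for i in range(0, len(bits), n_rows)]
    return [cols[c][r] for r in range(n_rows) for c in range(depth)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = interleave(fec_encode(msg), depth=8)
coded[3:6] = [1 - b for b in coded[3:6]]           # 3-bit burst on the channel
decoded = fec_decode(deinterleave(coded, depth=8)) # burst spread out; recovered
```

Without the interleaver, the same 3-bit burst could land inside one repetition codeword and defeat the majority vote.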
Laser Phase Errors in Seeded FELs
Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC
2012-03-28
Harmonic seeding of free electron lasers has attracted significant attention from the promise of transform-limited pulses in the soft X-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.
Intel C++ compiler error: stl_iterator_base_types.h
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Intel C++ compiler error: stl_iterator_base_types.h. December 7, 2015, by Scott French. Because the system-supplied version of GCC is...
Error estimates for fission neutron outputs (Conference) | SciTech...
Office of Scientific and Technical Information (OSTI)
Error estimates for fission neutron outputs Citation Details In-Document Search Title: Error estimates for fission neutron outputs You are accessing a document from the...
Internal compiler error for function pointer with identically...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Internal compiler error for function pointer with identically named arguments Internal compiler error for function pointer with identically named arguments June 9, 2015 by Scott...
V-235: Cisco Mobility Services Engine Configuration Error Lets...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
V-235: Cisco Mobility Services Engine Configuration Error Lets Remote Users Login Anonymously
Error Estimation for Fault Tolerance in Numerical Integration...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Error Estimation for Fault Tolerance in Numerical Integration Solvers Event Sponsor: ... In numerical integration solvers, approximation error can be estimated at a low cost. We ...
A posteriori error analysis of parameterized linear systems using...
Office of Scientific and Technical Information (OSTI)
Journal Article: A posteriori error analysis of parameterized linear systems using spectral methods. Citation Details In-Document Search Title: A posteriori error analysis of ...
Table 1b. Relative Standard Errors for Effective, Occupied, and...
U.S. Energy Information Administration (EIA) Indexed Site
Table 1b. Relative Standard Errors for Effective, Occupied, and Vacant Square Footage, 1992. Building Characteristics: All Buildings (thousand), Total...
Accounting for Model Error in the Calibration of Physical Models...
Office of Scientific and Technical Information (OSTI)
Accounting for Model Error in the Calibration of Physical Models. Citation Details In-Document Search Title: Accounting for Model Error in the Calibration of Physical Models. ...
Table 2b. Relative Standard Errors for Electricity Consumption...
U.S. Energy Information Administration (EIA) Indexed Site
Table 2b. Relative Standard Errors for Electricity Consumption and Electricity Intensities, per Square Foot, Specific to Occupied and...
Error Analysis in Nuclear Density Functional Theory (Journal...
Office of Scientific and Technical Information (OSTI)
Error Analysis in Nuclear Density Functional Theory Citation Details In-Document Search Title: Error Analysis in Nuclear Density Functional Theory Authors: Schunck, N ; McDonnell,...
Error Analysis in Nuclear Density Functional Theory (Journal...
Office of Scientific and Technical Information (OSTI)
Error Analysis in Nuclear Density Functional Theory Citation Details In-Document Search Title: Error Analysis in Nuclear Density Functional Theory You are accessing a document...
Raman Thermometry: Comparing Methods to Minimize Error. (Conference...
Office of Scientific and Technical Information (OSTI)
Raman Thermometry: Comparing Methods to Minimize Error. Citation Details In-Document Search Title: Raman Thermometry: Comparing Methods to Minimize Error. Abstract not provided....
Shared dosimetry error in epidemiological dose-response analyses
Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo
2015-03-23
Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. We derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model, which allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were the true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e., the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when errors in dose estimates are ignored, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
Error field penetration and locking to the backward propagating wave
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that real frequencies occur for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature, and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
Analysis of Solar Two Heliostat Tracking Error Sources
Jones, S.A.; Stone, K.W.
1999-01-28
This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.
Distribution of Wind Power Forecasting Errors from Operational Systems (Presentation)
Hodge, B. M.; Ela, E.; Milligan, M.
2011-10-01
This presentation offers new data and statistical analysis of wind power forecasting errors in operational systems.
WIPP Weatherization: Common Errors and Innovative Solutions Presentation
Broader source: Energy.gov [DOE]
This presentation contains information on WIPP Weatherization: Common Errors and Innovative Solutions.
Errors in response calculations for beams
Wada, H.; Warburton, G.B.
1985-05-01
When the finite element method is used to idealize a structure, its dynamic response can be determined from the governing matrix equation by the normal mode method or by one of the many approximate direct integration methods. In either method the approximate data of the finite element idealization are used, but further assumptions are introduced by the direct integration scheme. It is the purpose of this paper to study these errors for a simple structure. The transient flexural vibrations of a uniform cantilever beam, which is subjected to a transverse force at the free end, are determined by the Laplace transform method. Comparable responses are obtained for a finite element idealization of the beam, using the normal mode and Newmark average acceleration methods; the errors associated with the approximate methods are studied. If accuracy has priority and the quantity of data is small, the normal mode method is recommended; however, if the quantity of data is large, the Newmark method is useful.
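The trade-off the abstract describes can be illustrated with a minimal sketch (not the paper's beam idealization; the function name and parameters are illustrative): Newmark average-acceleration integration of a single undamped oscillator, m·u'' + k·u = f(t), compared against the exact free-vibration solution u(t) = cos(t).

```python
import math

def newmark_average_accel(m, k, f, dt, n, u0=0.0, v0=0.0):
    """Newmark average-acceleration (beta=1/4, gamma=1/2) integration of
    m*u'' + k*u = f(t). Unconditionally stable, but it introduces a small
    period-elongation error relative to the exact solution."""
    u, v = u0, v0
    a = (f(0.0) - k * u) / m              # initial acceleration from equilibrium
    keff = k + 4.0 * m / dt**2            # effective stiffness of the scheme
    history = [u]
    for i in range(1, n + 1):
        t = i * dt
        rhs = f(t) + m * (4.0 * u / dt**2 + 4.0 * v / dt + a)
        u_new = rhs / keff
        a_new = 4.0 * (u_new - u) / dt**2 - 4.0 * v / dt - a
        v_new = v + 0.5 * dt * (a + a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u)
    return history

# Free vibration with m = k = 1, u(0) = 1, v(0) = 0; exact answer is cos(t).
u_hist = newmark_average_accel(1.0, 1.0, lambda t: 0.0, 0.01, 100, u0=1.0)
```

With dt = 0.01 the integration error at t = 1 is far below the discretization error of a typical finite element idealization, which is the comparison the paper draws.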
Detecting Soft Errors in Stencil based Computations
Sharma, V.; Gopalkrishnan, G.; Bronevetsky, G.
2015-05-06
Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
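As a sketch of the general idea (not SORREL's actual implementation; all names here are illustrative), one can fit a linear model on fault-free runs of a 1-D heat-equation stencil and then flag points whose observed update deviates from the model's prediction:

```python
import math

def heat_step(u, k=0.1):
    """One explicit step of a 1-D heat-equation stencil (the protected code)."""
    return [u[0]] + [u[i] + k * (u[i-1] - 2*u[i] + u[i+1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

def train_detector(states):
    """Linear regression (through the origin) of the observed update against
    the discrete Laplacian, over fault-free consecutive states; returns the
    learned coefficient and the worst training residual as a threshold."""
    num = den = 0.0
    for u0, u1 in zip(states, states[1:]):
        for i in range(1, len(u0) - 1):
            lap = u0[i-1] - 2*u0[i] + u0[i+1]
            num += lap * (u1[i] - u0[i])
            den += lap * lap
    c = num / den
    worst = max(abs((u1[i] - u0[i]) - c * (u0[i-1] - 2*u0[i] + u0[i+1]))
                for u0, u1 in zip(states, states[1:])
                for i in range(1, len(u0) - 1))
    return c, worst

def detect(u_old, u_new, c, thresh, margin=1e-9):
    """Indices where the update exceeds the learned model's worst residual."""
    return [i for i in range(1, len(u_old) - 1)
            if abs(u_new[i] - (u_old[i]
                   + c * (u_old[i-1] - 2*u_old[i] + u_old[i+1]))) > thresh + margin]

# Train on a few fault-free steps, then inject a corruption and detect it.
states = [[math.sin(i / 3.0) for i in range(16)]]
for _ in range(5):
    states.append(heat_step(states[-1]))
c, thresh = train_detector(states)
good = heat_step(states[-1])
bad = list(good)
bad[7] += 1.0                      # bit-flip-like corruption at one point
```

The learned model is cheap to evaluate, which is the property the paper exploits; a real detector would also be trained to tolerate round-off and truncation noise rather than an exactly linear stencil.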
Redundancy and Error Resilience in Boolean Networks
Peixoto, Tiago P.
2010-01-29
We consider the effect of noise in sparse Boolean networks with redundant functions. We show that they always exhibit a nonzero error level, and the dynamics undergoes a phase transition from nonergodicity to ergodicity, as a function of noise, after which the system is no longer capable of preserving a memory of its initial state. We obtain upper bounds on the critical value of noise for networks of different sparsity.
Systematic errors in long baseline oscillation experiments
Harris, Deborah A.; /Fermilab
2006-02-01
This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.
Improving Memory Error Handling Using Linux
Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.
2014-07-25
As supercomputers continue to get faster and more powerful, they will also have more nodes. If nothing is done, the amount of memory in supercomputer clusters will soon grow so large that manually replacing failed memory DIMMs becomes unmanageable. "Improving Memory Error Handling Using Linux" is a process-oriented method that addresses this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. Offlining memory pages simplifies error handling and reduces both the hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing, including the development of exascale computers: without memory error handling, it will not be feasible to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing shows that offlining memory pages works and is relatively simple to use. As more testing is conducted, the entire process will be automated within Zenoss, the high-performance computing (HPC) monitoring software used at LANL.
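A minimal sketch of the kernel interface involved, assuming a kernel built with CONFIG_MEMORY_FAILURE (the helper and its dry-run flag are illustrative, not LANL's tooling): writing a physical address to the `soft_offline_page` sysfs file asks the kernel to migrate data off the containing page and retire it.

```python
# Sysfs interface provided by kernels with CONFIG_MEMORY_FAILURE enabled.
SOFT_OFFLINE = "/sys/devices/system/memory/soft_offline_page"

def offline_page(phys_addr, sysfs_path=SOFT_OFFLINE, dry_run=True):
    """Request that the kernel retire the page containing phys_addr.
    The file takes a physical address in hex; an actual write
    (dry_run=False) requires root on a real system."""
    value = "0x%x" % phys_addr
    if not dry_run:
        with open(sysfs_path, "w") as f:   # privileged operation
            f.write(value)
    return sysfs_path, value

# Dry run: show what would be written for a page flagged by ECC logs.
path, value = offline_page(0x2f54000000)
```

In an automated pipeline, the physical addresses would come from corrected-error reports (e.g., EDAC or mcelog output) rather than being supplied by hand.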
Common Errors and Innovative Solutions Transcript | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Common Errors and Innovative Solutions Transcript An example of case studies, mainly showing photos of errors and good examples, then discussing the purpose of the home energy professional guidelines and certification. There may be more examples of what not to do, only because these were good learning opportunities. common_errors_innovative_solutions.doc (41.5 KB) More Documents & Publications WIPP Weatherization: Common Errors and
Uncertainty estimates for derivatives and intercepts
Clark, E.L.
1994-09-01
Straight line least squares fits of experimental data are widely used in the analysis of test results to provide derivatives and intercepts. A method for evaluating the uncertainty in these parameters is described. The method utilizes conventional least squares results and is applicable to experiments where the independent variable is controlled, but not necessarily free of error. A Monte Carlo verification of the method is given.
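The conventional least-squares results the report builds on can be sketched as follows (a generic ordinary-least-squares sketch with illustrative names, not the report's exact procedure, and assuming errors only in y):

```python
import math

def fit_line_with_errors(x, y):
    """Ordinary least-squares fit y = a + b*x, returning the intercept,
    slope (derivative), and their standard errors estimated from the
    residual variance via the conventional formulas."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx                              # slope
    a = ybar - b * xbar                        # intercept
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
    se_b = math.sqrt(s2 / sxx)
    se_a = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    return a, b, se_a, se_b

# Data lying exactly on y = 1 + 2x: parameters recovered with zero uncertainty.
a, b, se_a, se_b = fit_line_with_errors([0, 1, 2, 3], [1, 3, 5, 7])
```

A Monte Carlo verification in the spirit of the report would repeatedly perturb y with known noise, refit, and check that the scatter of the fitted parameters matches the reported standard errors.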
Stability and error analysis of nodal expansion method for convection-diffusion equation
Deng, Z.; Rizwan-Uddin; Li, F.; Sun, Y.
2012-07-01
The development, stability analysis, and error analysis of the nodal expansion method (NEM) for the one-dimensional steady-state convection-diffusion equation are presented. Following the traditional procedure for developing the NEM, the discrete formulation of the convection-diffusion equation, which is similar to the standard finite difference scheme, is derived. Discrete perturbation analysis is applied to this discrete form to study the stability of the NEM. The scheme based on the NEM is found to be stable for local Peclet numbers less than 4.644. A maximum principle is proved for the NEM scheme, followed by an error analysis carried out by applying the maximum principle together with a carefully constructed comparison function. The scheme for the convection-diffusion equation is second-order accurate. Numerical experiments are carried out, and the results agree with the conclusions of the stability and error analyses. (authors)
Biswas, Dipankar; Panda, Siddhartha
2014-04-07
Experimental capacitance–voltage (C-V) profiling of semiconductor heterojunctions and quantum wells has remained ever important and relevant. The apparent carrier distributions (ACDs) thus obtained reveal the carrier depletions, carrier peaks, and their positions, in and around the quantum structures. Inevitable errors encountered in such measurements are the deviations of the peak concentrations of the ACDs and their positions from the actual carrier peaks obtained from quantum mechanical computations with the fundamental parameters. In spite of the very wide use of the C-V method, comprehensive discussions of the qualitative and quantitative nature of these errors remain wanting. The errors depend on the fundamental parameters, the temperature of measurement, the Debye length, and the series resistance. In this paper, the errors have been studied as functions of doping concentration, band offset, and temperature, from which a rough estimate of the error may be drawn. It is seen that the error in the position of the ACD peak decreases at higher doping, higher band offset, and lower temperature, whereas the error in the peak concentration changes in a strange fashion. A completely new method is introduced for deriving carrier profiles from C-V measurements on quantum structures, to minimize errors that are inevitable in the conventional formulation.
Error Reduction for Weigh-In-Motion
Hively, Lee M; Abercrombie, Robert K; Scudiere, Matthew B; Sheldon, Frederick T
2009-01-01
Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).
Error Reduction in Weigh-In-Motion
Energy Science and Technology Software Center (OSTI)
2007-09-21
Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).
Resolved: "error while loading shared libraries: libalpslli.so...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
"error while loading shared libraries: libalpslli.so.0" with serial codes on login nodes Resolved: "error while loading shared libraries: libalpslli.so.0" with serial codes on...
MPI errors from cray-mpich/7.3.0
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
MPI errors from cray-mpich/7.3.0 January 6, 2016 by Ankit Bhagatwala A change in the MPICH2 library that now strictly enforces non-overlapping...
Pressure Change Measurement Leak Testing Errors
Pryor, Jeff M; Walker, William C
2014-01-01
A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or to liquid flow rates, with the formulas altered specifically for those types of tests using the same methodology.
Second derivatives for approximate spin projection methods
Thompson, Lee M.; Hratchian, Hrant P.
2015-02-07
The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.
Locked modes and magnetic field errors in MST
Almagri, A.F.; Assadi, S.; Prager, S.C.; Sarff, J.S.; Kerst, D.W.
1992-06-01
In the MST reversed field pinch, magnetic oscillations become stationary (locked) in the lab frame as a result of a process involving interactions between the modes, sawteeth, and field errors. Several helical modes become phase locked to each other to form a rotating localized disturbance; the disturbance locks to an impulsive field error generated at a sawtooth crash; the error fields grow monotonically after locking (perhaps due to an unstable interaction between the modes and the field error); and over the tens of milliseconds of growth, confinement degrades and the discharge eventually terminates. Field error control has been partially successful in eliminating locking.
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Beckerman, M.; Jones, J.P.
1999-02-01
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors, and the amplitudes of the radial and cross-track errors, increase.
A technique for human error analysis (ATHEANA)
Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.
1996-05-01
Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.
Hess-Flores, M
2011-11-10
Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in
Polaractivation for classical zero-error capacity of qudit channels
Gyongyosi, Laszlo; Imre, Sandor
2014-12-04
We introduce a new phenomenon for zero-error transmission of classical information over quantum channels that initially were not capable of zero-error classical communication. The effect is called polaractivation, and the result is similar to the superactivation effect. We use the Choi-Jamiolkowski isomorphism and the Schmidt theorem to prove the polaractivation of classical zero-error capacity and define the polaractivator channel coding scheme.
Internal compiler error for function pointer with identically named arguments
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
June 9, 2015 by Scott French, NERSC USG Status: Bug 21435 reported to PGI For pgcc versions after 12.x (up through 12.9 is fine, but 13.x and 14.x are not), you may observe an internal compiler error associated with function pointer prototypes when named arguments are used. Specifically, if a function pointer type is defined
WIPP Weatherization: Common Errors and Innovative Solutions Presentati...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
More Documents & Publications Common Errors and Innovative Solutions Transcript Building ... America Best Practices Series: Volume 12. Energy Renovations-Insulation: A Guide for ...
Output-Based Error Estimation and Adaptation for Uncertainty...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Output-Based Error Estimation and Adaptation for Uncertainty Quantification Isaac M. Asher and Krzysztof J. Fidkowski University of Michigan US National Congress on Computational...
Platform-Independent Method for Detecting Errors in Metagenomic...
Office of Scientific and Technical Information (OSTI)
Title: Platform-Independent Method for Detecting Errors in Metagenomic Sequencing Data: DRISEE Authors: Keegan, K. P. ; Trimble, W. L. ; Wilkening, J. ; Wilke, A. ; Harrison, T. ; ...
Detecting and correcting hard errors in a memory array
Kalamatianos, John; John, Johnsy Kanjirapallil; Gelinas, Robert; Sridharan, Vilas K.; Nevius, Phillip E.
2015-11-19
Hard errors in the memory array can be detected and corrected in real-time using reusable entries in an error status buffer. Data may be rewritten to a portion of a memory array and a register in response to a first error in data read from the portion of the memory array. The rewritten data may then be written from the register to an entry of an error status buffer in response to the rewritten data read from the register differing from the rewritten data read from the portion of the memory array.
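The rewrite-and-compare logic described above can be modeled in a toy software simulation (the class and function names are illustrative; the patented mechanism operates in hardware): data is rewritten to both the memory location and a register, and the address is recorded in the error status buffer only when the two subsequently disagree.

```python
class StuckBitMemory:
    """Toy memory in which one address has a stuck-at-0 bit (a hard error)."""
    def __init__(self, size, bad_addr=None, stuck_mask=0):
        self.cells = [0] * size
        self.bad_addr, self.stuck_mask = bad_addr, stuck_mask

    def write(self, addr, value):
        self.cells[addr] = value

    def read(self, addr):
        v = self.cells[addr]
        if addr == self.bad_addr:
            v &= ~self.stuck_mask      # the stuck bit always reads back 0
        return v

def classify_error(mem, addr, data, error_buffer):
    """On a detected error, rewrite the data to memory and to a register;
    if the two disagree on re-read, the fault persisted through the
    rewrite, so record the address in the error status buffer (hard)."""
    register = data                     # reference copy of the rewritten data
    mem.write(addr, data)
    if mem.read(addr) != register:      # error survives rewrite -> hard
        error_buffer.append(addr)
        return "hard"
    return "soft"                       # rewrite cleared it -> transient

# One stuck bit at address 5; address 3 is healthy.
mem = StuckBitMemory(16, bad_addr=5, stuck_mask=0b100)
buf = []
hard = classify_error(mem, 5, 0b111, buf)
soft = classify_error(mem, 3, 0b111, buf)
```

The key property the simulation shows is that transient (soft) errors never consume buffer entries, which is what lets a small buffer of reusable entries track genuinely hard faults in real time.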
Info-Gap Analysis of Truncation Errors in Numerical Simulations...
Office of Scientific and Technical Information (OSTI)
Title: Info-Gap Analysis of Truncation Errors in Numerical Simulations. Authors: Kamm, James R. ; Witkowski, Walter R. ; Rider, William J. ; Trucano, Timothy Guy ; Ben-Haim, Yakov. ...
Info-Gap Analysis of Numerical Truncation Errors. (Conference...
Office of Scientific and Technical Information (OSTI)
Title: Info-Gap Analysis of Numerical Truncation Errors. Authors: Kamm, James R. ; Witkowski, Walter R. ; Rider, William J. ; Trucano, Timothy Guy ; Ben-Haim, Yakov. Publication ...
Table 6b. Relative Standard Errors for Total Electricity Consumption...
U.S. Energy Information Administration (EIA) Indexed Site
Table 6b. Relative Standard Errors for Total Electricity Consumption per Effective Occupied Square Foot, 1992 Building Characteristics All Buildings Using Electricity (thousand) Total...
Accounting for Model Error in the Calibration of Physical Models
Office of Scientific and Technical Information (OSTI)
... model error term in locations where key modeling assumptions and approximations are made ... to represent the truth o In this context, the data has no noise o Discrepancy ...
Handling Model Error in the Calibration of Physical Models
Office of Scientific and Technical Information (OSTI)
... model error term in locations where key modeling assumptions and approximations are made ... to represent the truth o In this context, the data has no noise o Discrepancy ...
Confirmation of standard error analysis techniques applied to...
Office of Scientific and Technical Information (OSTI)
reported parameter errors are not reliable in many EXAFS studies in the literature. ... Country of Publication: United States Language: English Subject: 75; ABSORPTION; ACCURACY; ...
U-058: Apache Struts Conversion Error OGNL Expression Injection...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
in Apache Struts. A remote user can execute arbitrary commands on the target system. PLATFORM: Apache Struts 2.x ABSTRACT: Apache Struts Conversion Error OGNL Expression...
Goffin, Mark A.; Baker, Christopher M.J.; Buchan, Andrew G.; Pain, Christopher C.; Eaton, Matthew D.; Smith, Paul N.
2013-06-01
This article presents goal-based anisotropic adaptive methods for the finite element method applied to the Boltzmann transport equation. The neutron multiplication factor, k_eff, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for k_eff with directional dependence. General error estimators are derived for any given functional of the flux and applied to k_eff to acquire the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residual, respectively. The Hessian is used as an approximation of the interpolation error in the solution, which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit of representing the flux from each energy group on a specifically optimised mesh. The k_eff goal-based adaptive method was applied to three examples, which illustrate the superior accuracy that can be obtained in criticality problems.
Nonlocal reactive transport with physical and chemical heterogeneity: Localization errors
Cushman, J.H.; Hu, B.X.; Deng, F.W.
1995-09-01
The origin of nonlocality in "macroscale" models for subsurface chemical transport is illustrated. It is argued that media that are either nonperiodic (e.g., media with evolving heterogeneity) or periodic, viewed on a scale wherein a unit cell is discernible, must display some nonlocality in the mean. A metaphysical argument suggests that, owing to the scarcity of information on natural scales of heterogeneity and on scales of observation associated with an instrument window, constitutive theories for the mean concentration should at the outset of any modeling effort always be considered nonlocal. The intuitive appeal to nonlocality is reinforced with an analytical derivation of the constitutive theory for a conservative tracer without appeal to any mathematical approximations. Comparisons are made between the fully nonlocal (FNL), nonlocal in time (NLT), and fully localized (FL) theories. For conservative transport, there is little difference between the first-order FL and FNL models for spatial moments up to and including the third. However, for conservative transport the first-order NLT model differs significantly from the FNL model in the third spatial moments. For reactive transport, all spatial moments differ between the FNL and FL models. The second transverse-horizontal and third longitudinal-horizontal moments for the NLT model differ from the FNL model. These results suggest that localized first-order transport models for conservative tracers are reasonable if only lower-order moments are desired. However, when the chemical reacts with its environment, the localization approximation can lead to significant error in all moments, and a FNL model will in general be required for accurate simulation. 18 refs., 9 figs., 1 tab.
Error localization in RHIC by fitting difference orbits
Liu C.; Minty, M.; Ptitsyn, V.
2012-05-20
The presence of realistic errors in an accelerator, or in the model used to describe it, means that a measured beam trajectory may deviate from prediction. Comparison of measurements to the model can be used to detect such errors. To do so, the initial conditions (the phase space parameters at some point) must be determined, which can be achieved by fitting the difference orbit to the model prediction using only a few beam position monitors (BPMs). Using these initial conditions, the fitted orbit can be propagated along the beam line based on the optics model. Measurement and model will agree up to the point of an error. The error source can be better localized by additionally fitting the difference orbit using downstream BPMs and back-propagating the solution. If one dominating error source exists in the machine, the fitted orbit will deviate from the difference orbit at the same point.
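The fit-and-propagate procedure can be sketched with a toy linear optics model (a minimal illustration, not the RHIC model: the beamline of identical drifts, the kick location, and the tolerance below are all invented for the example). The initial conditions are fitted from a few upstream BPMs, the orbit is propagated through the model's transfer matrices, and the first BPM where model and measurement disagree brackets the error location:

```python
import numpy as np

def drift(L):
    """2x2 transfer matrix of a field-free drift of length L."""
    return np.array([[1.0, L], [0.0, 1.0]])

# Hypothetical beamline: 10 unit drifts with a BPM after each one
elements = [drift(1.0) for _ in range(10)]

# Cumulative model matrices: reading_i = (M_i ... M_1 @ x0)[0]
cum, M = [], np.eye(2)
for Mi in elements:
    M = Mi @ M
    cum.append(M.copy())
A = np.array([C[0] for C in cum])      # maps x0 = (x, x') to BPM readings

# "Measured" orbit with an unknown angle kick after the 5th element
true_x0 = np.array([1e-3, 0.2e-3])
state, measured = true_x0.copy(), []
for i, Mi in enumerate(elements):
    state = Mi @ state
    if i == 4:                          # the error source (index 4 = 5th element)
        state[1] += 0.5e-3
    measured.append(state[0])
measured = np.array(measured)

# Fit initial conditions from the first 3 BPMs, then propagate the model
x0_fit, *_ = np.linalg.lstsq(A[:3], measured[:3], rcond=None)
residual = np.abs(A @ x0_fit - measured)
error_index = int(np.argmax(residual > 1e-6))   # first BPM that disagrees
```

Here the fitted orbit matches the measured orbit at the first five BPMs and departs at the sixth, localizing the kick between them; back-propagating a second fit from downstream BPMs would tighten the bracket in the same way.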
U.S. Energy Information Administration (EIA) Indexed Site
74-1988 For Methodology Concerning the Derived Estimates Total Consumption of Offsite-Produced Energy for Heat and Power by Industry Group, 1974-1988 Total Energy *** Electricity...
Catastrophic photometric redshift errors: Weak-lensing survey requirements
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Bernstein, Gary; Huterer, Dragan
2010-01-11
We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number N_spec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below statistical errors of weak lensing tomography. While the straightforward estimate of N_spec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in N_spec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the z_s - z_p distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. The cross-correlation method is therefore unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work derived the reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location, so the accuracy of the slope error measurement depended on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors, so measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required make this technique ideal for field measurements.
Balancing aggregation and smoothing errors in inverse models
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Turner, A. J.; Jacob, D. J.
2015-01-13
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
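The trade-off can be illustrated with the simplest of the three reduction methods, grid coarsening (a toy sketch with invented dimensions, not the paper's atmospheric state vectors). Merging adjacent elements imposes a fixed within-cell relationship, and the information lost in the merge is exactly the aggregation error:

```python
import numpy as np

n, k = 8, 4                     # native and reduced state dimensions (toy sizes)
# Aggregation matrix W: each reduced element is the mean of two adjacent ones
W = np.zeros((k, n))
for j in range(k):
    W[j, 2*j:2*j + 2] = 0.5
# Prolongation P back to the native grid (piecewise constant within each cell)
P = np.zeros((n, k))
for j in range(k):
    P[2*j:2*j + 2, j] = 1.0

x_native = np.arange(n, dtype=float)          # native-resolution state
x_reduced = W @ x_native                      # merged ("coarsened") state
aggregation_error = x_native - P @ x_reduced  # within-cell structure lost
```

A larger k shrinks this aggregation error but weakens the observational constraint on each element, which is the smoothing error the paper balances against it.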
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Link error from craype/2.5.0 January 13, 2016 by Woo-Sun Yang If you build a code using a file called 'configure' with craype/2.5.0, the Cray build tools assume that you want to use the 'native' link mode (e.g., gcc defaults to dynamic linking) and add '-Wl,-rpath=/opt/intel/composer_xe_2015/compiler/lib/intel64 -lintlc'. This creates a link error: /usr/bin/ld: cannot find -lintlc A temporary workaround is to swap the default craype (2.5.0) with an older or newer
Wind Power Forecasting Error Distributions: An International Comparison; Preprint
Hodge, B. M.; Lew, D.; Milligan, M.; Holttinen, H.; Sillanpaa, S.; Gomez-Lazaro, E.; Scharff, R.; Soder, L.; Larsen, X. G.; Giebel, G.; Flynn, D.; Dobschinski, J.
2012-09-01
Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.
Intel C++ compiler error: stl_iterator_base_types.h
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Intel C++ compiler error: stl_iterator_base_types.h December 7, 2015 by Scott French Because the system-supplied version of GCC is relatively old (4.3.4), it is common practice to load the gcc module on our Cray systems when C++11 support is required under the Intel C++ compilers. While this works as expected under the GCC 4.8 and 4.9 series compilers, the 5.x series can cause Intel C++ compile-time errors similar to the following:
An Optimized Autoregressive Forecast Error Generator for Wind and Load Uncertainty Study
De Mello, Phillip; Lu, Ning; Makarov, Yuri V.
2011-01-17
This paper presents a first-order autoregressive algorithm to generate real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast errors. The methodology aims at producing random wind and load forecast time series reflecting the autocorrelation and cross-correlation of historical forecast data sets. Four statistical characteristics are considered: means, standard deviations, autocorrelations, and cross-correlations. A stochastic optimization routine is developed to minimize the differences between the statistical characteristics of the generated time series and the targeted ones. An optimal set of parameters is obtained and used to produce the RT, HA, and DA forecasts in due order of succession. This method, although implemented as a first-order autoregressive random forecast error generator, can be extended to higher orders. Results show that the methodology produces random series with the desired statistics derived from real data sets provided by the California Independent System Operator (CAISO). The wind and load forecast error generator is currently used in wind integration studies to generate wind and load inputs for stochastic planning processes. Our future studies will focus on reflecting the diurnal and seasonal differences of the wind and load statistics and implementing them in the random forecast generator.
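The core recursion can be sketched for a single error stream with invented target statistics (the paper's generator additionally matches cross-correlations between series and tunes its parameters by stochastic optimization): a first-order autoregressive series whose innovation variance is chosen so that the output reproduces a target standard deviation and lag-1 autocorrelation.

```python
import numpy as np

def ar1_series(n, sigma, rho, rng):
    """First-order autoregressive series with target standard deviation
    sigma and target lag-1 autocorrelation rho (stationary by construction)."""
    e = np.empty(n)
    e[0] = rng.normal(0.0, sigma)
    innov_sd = sigma * np.sqrt(1.0 - rho**2)   # keeps the variance stationary
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(0.0, innov_sd)
    return e

rng = np.random.default_rng(0)
# Invented targets standing in for statistics fitted to historical data
err = ar1_series(200_000, sigma=0.05, rho=0.9, rng=rng)
sample_rho = np.corrcoef(err[:-1], err[1:])[0, 1]
```

With a long enough series, the sample standard deviation and lag-1 autocorrelation converge to the targets, which is the property the paper's optimization routine enforces jointly across the RT, HA, and DA streams.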
Rongone, K.G.
1994-12-01
Various applications of the Fredholm integral equation appear in different fields of study. An application of particular interest to the Air Force arises in determination of target loading from nuclear effects simulations. Current techniques first unfold the incident spectrum and then determine target loading; the resulting spectrum and loading are assumed exact. This study investigates the feasibility of a new method, through-fold, for directly determining defensible error bounds on target loading. Through-fold uses a priori information to define input data and represents target response with a linear combination of instrument responses plus a remainder to derive a quadratic expression for exact target loading. This study uses a simplified, linear version of the quadratic expression. Through-fold feasibility is tested by comparing error bounds based on three target loading functions. The three test cases are an exact linear combination of instrument responses, the same combination plus a positive remainder, and the same combination plus a negative remainder. Total error bounds were reduced from 100% to 35% in cases 1 and 2. In case 3 the error bound was reduced to 48%. These results indicate that through-fold has promise as a predictor of error bounds on target loading.
Zanolin, M.; Vitale, S.; Makris, N.
2010-06-15
In this paper we apply to gravitational waves (GW) from the inspiral phase of binary systems a recently derived frequentist methodology to calculate analytically the error for a maximum likelihood estimate of physical parameters. We use expansions of the covariance and the bias of a maximum likelihood estimate in terms of inverse powers of the signal-to-noise ratio (SNR), where the square root of the first order in the covariance expansion is the Cramer-Rao lower bound (CRLB). We evaluate the expansions, for the first time, for GW signals in the noises of GW interferometers. The examples are limited to a single, optimally oriented interferometer. We also compare the error estimates using the first two orders of the expansions with existing numerical Monte Carlo simulations. The first two orders of the covariance allow us to get error predictions closer to what is observed in numerical simulations than the CRLB. The methodology also predicts the SNR necessary to approximate the error with the CRLB and provides new insight into the relationship between waveform properties, SNR, dimension of the parameter space, and estimation errors. For example, timing via matched filtering can achieve the CRLB only if the SNR is larger than the kurtosis of the gravitational wave spectrum, and the necessary SNR is much larger if other physical parameters are also unknown.
Servo control booster system for minimizing following error
Wise, William L.
1985-01-01
A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
Error and tolerance studies for the SSC Linac
Raparia, D.; Chang, Chu Rui; Guy, F.; Hurd, J.W.; Funk, W.; Crandall, K.R.
1993-05-01
This paper summarizes error and tolerance studies for the SSC Linac. These studies also include higher-order multipoles. The codes used in these simulations are PARMTEQ, PARMILA, CCLDYN, PARTRACE, and CCLTRACE.
Advisory on the reporting error in the combined propane stocks...
Annual Energy Outlook [U.S. Energy Information Administration (EIA)]
Advisory on the reporting error in the combined propane stocks for PADDs 4 and 5 Release Date: June 12, 2013 The U.S. Energy Information Administration issued the following...
Quantification of the effects of dependence on human error probabiliti...
Office of Scientific and Technical Information (OSTI)
In estimating the probabilities of human error in the performance of a series of tasks in a nuclear power plant, the situation-specific characteristics of the series must be ...
Wind Power Forecasting Error Distributions over Multiple Timescales: Preprint
Hodge, B. M.; Milligan, M.
2011-03-01
In this paper, we examine the shape of the persistence model error distribution for ten different wind plants in the ERCOT system over multiple timescales. Comparisons are made between the experimental distribution shape and that of the normal distribution.
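The persistence model used as the benchmark is simple to state: the forecast for any horizon is the last observed value, so the forecast error is just the change in plant output over that horizon. A toy sketch with a synthetic plant trace (invented numbers, not ERCOT data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical normalized wind-plant output: a bounded random walk in [0, 1]
power = np.clip(0.5 + np.cumsum(rng.normal(0.0, 0.003, 5000)), 0.0, 1.0)

def persistence_errors(series, horizon):
    """Persistence forecast = last observed value; error after `horizon` steps."""
    return series[horizon:] - series[:-horizon]

err_1h = persistence_errors(power, 1)   # short-timescale error sample
err_6h = persistence_errors(power, 6)   # longer-timescale error sample
```

The error spread grows with forecast horizon, so each timescale yields its own distribution; the paper's comparison against the normal distribution is then a question of the shape (e.g. tails) of these samples.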
Using doppler radar images to estimate aircraft navigational heading error
Doerry, Armin W.; Jordan, Jay D.; Kim, Theodore J.
2012-07-03
A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.
Visio-Error&OmissionNoClouds.vsd
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Error/Omission Process. Process Owner: Department Managers, Corporate Projects and Facilities Projects. February 7, 2008. Key: A/E - Architectural/Engineering Firm; SCR - Sandia Contracting Representative; SDR - Sandia Delegated Representative; E&OB - Errors & Omissions Board; PM - Project Manager; REQ - Requester. (The remainder of the drawing is a decision flowchart for Facilities Projects and Line Item Projects: review design findings and begin discovery, then assess whether there is a cost impact and whether it is less than 3% of ICAA.)
A Possible Calorimetric Error in Heavy Water Electrolysis on Platinum
Shanahan, K.L.
2001-03-16
A systematic error in mass flow calorimetry calibration procedures potentially capable of explaining most positive excess power measurements is described. Data recently interpreted as providing evidence of the Pons-Fleischmann effect with a platinum cathode are reinterpreted with the opposite conclusion. This indicates it is premature to conclude platinum displays a Pons and Fleischmann effect, and places the requirement to evaluate the error's magnitude on all mass flow calorimetric experiments.
WIPP Field Practices: Common Errors and Innovative Solutions | Department
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
of Energy WIPP Field Practices: Common Errors and Innovative Solutions What to do when approaching an unfamiliar house for weatherization, with hidden air leakage and a multitude of mysteries? This webinar focuses on the Dos and Don'ts of WIPP weatherization, and is guaranteed to be an hour well spent looking over photographs that show detail and perspective on air sealing, blocking, venting, and other weatherization measures. This
Rabl, A.; Leide, B.; Carvalho, M.J.; Collares-Pereira, M.; Bourges, B.
1991-01-01
The Collector and System Testing Group (CSTG) of the European Community has developed a procedure for testing the performance of solar water heaters. This procedure treats a solar water heater as a black box with input-output parameters that are determined by all-day tests. In the present study the authors carry out a systematic analysis of the accuracy of this procedure, in order to answer the question: what tolerances should one impose for the measurements, and how many days of testing should one demand under what meteorological conditions, in order to guarantee a specified maximum error for the long term performance? The methodology is applicable to other test procedures as well. The present paper (Part 1) examines the measurement tolerances of the current version of the procedure and derives a priori estimates of the errors of the parameters; these errors are then compared with the regression results of the Round Robin test series. The companion paper (Part 2) evaluates the consequences for the accuracy of the long term performance prediction. The authors conclude that the CSTG test procedure makes it possible to predict the long term performance with standard errors around 5% for sunny climates (10% for cloudy climates). The apparent precision of individual test sequences is deceptive because of large systematic discrepancies between different sequences. Better results could be obtained by imposing tighter control on the constancy of the cold water supply temperature and on the environment of the test, the latter by enforcing the recommendation for the ventilation of the collector.
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
McInerney, Peter; Adams, Paul; Hadi, Masood Z.
2014-01-01
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
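Polymerase fidelity figures like these are conventionally normalized per base per template doubling, since each doubling is an opportunity for the enzyme to introduce an error. A small arithmetic sketch with invented counts (not the paper's data):

```python
import math

# Hypothetical counts illustrating the standard fidelity metric:
# errors per base per template doubling
errors_observed = 18            # mutations found by sequencing cloned products
bases_sequenced = 450_000       # total target bases interrogated
fold_amplification = 1_048_576  # e.g. 2**20-fold amplification
doublings = math.log2(fold_amplification)   # effective template doublings

error_rate = errors_observed / (bases_sequenced * doublings)
```

With these invented numbers the rate is 2x10^-6 errors/base/doubling; normalizing this way is what makes rates comparable across experiments with different amplification levels.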
Compiler-Assisted Detection of Transient Memory Errors
Tavarageri, Sanket; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2014-06-09
The probability of bit flips in hardware memory systems is projected to increase significantly as memory systems continue to scale in size and complexity. Effective hardware-based error detection and correction requires that the complete data path, involving all parts of the memory system, be protected with sufficient redundancy. First, this may be costly to employ on commodity computing platforms and second, even on high-end systems, protection against multi-bit errors may be lacking. Therefore, augmenting hardware error detection schemes with software techniques is of considerable interest. In this paper, we consider software-level mechanisms to comprehensively detect transient memory faults. We develop novel compile-time algorithms to instrument application programs with checksum computation codes so as to detect memory errors. Unlike prior approaches that employ checksums on computational and architectural state, our scheme verifies every data access and works by tracking variables as they are produced and consumed. Experimental evaluation demonstrates that the proposed comprehensive error detection solution is viable as a completely software-only scheme. We also demonstrate that with limited hardware support, overheads of error detection can be further reduced.
Monte Carlo analysis of localization errors in magnetoencephalography
Medvick, P.A.; Lewis, P.S.; Aine, C.; Flynn, E.R.
1989-01-01
In magnetoencephalography (MEG), the magnetic fields created by electrical activity in the brain are measured on the surface of the skull. To determine the location of the activity, the measured field is fit to an assumed source generator model, such as a current dipole, by minimizing chi-square. For current dipoles and other nonlinear source models, the fit is performed by an iterative least squares procedure such as the Levenberg-Marquardt algorithm. Once the fit has been computed, analysis of the resulting value of chi-square can determine whether the assumed source model is adequate to account for the measurements. If the source model is adequate, then the effect of measurement error on the fitted model parameters must be analyzed. Although generic simulation studies can provide a rough idea of the effect that measurement error can be expected to have on source localization, they cannot provide detailed enough information to determine the effects that the errors in a particular measurement situation will produce. In this work, we introduce and describe the use of Monte Carlo-based techniques to analyze model fitting errors for real data. Given the details of the measurement setup and a statistical description of the measurement errors, these techniques determine the effects the errors have on the fitted model parameters. The effects can then be summarized in various ways such as parameter variances/covariances or multidimensional confidence regions. 8 refs., 3 figs.
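The Monte Carlo technique itself is generic: given the fitted model and a statistical description of the measurement errors, synthetic data sets are generated around the fit and refitted, and the spread of the refitted parameters estimates the parameter variances/covariances. A minimal sketch using a polynomial stand-in for the dipole model (all numbers invented; the paper's fits are nonlinear and iterative):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)             # measurement positions (toy)
p_true = np.array([0.5, -1.2, 2.0])       # hypothetical model parameters
sigma = 0.05                              # assumed measurement noise level
data = np.polyval(p_true, x) + rng.normal(0.0, sigma, x.size)

p_fit = np.polyfit(x, data, 2)            # best-fit model for the "real" data

# Monte Carlo: perturb the fitted model with the measurement-noise statistics
# and refit many times, mapping measurement error into parameter error
trials = np.array([
    np.polyfit(x, np.polyval(p_fit, x) + rng.normal(0.0, sigma, x.size), 2)
    for _ in range(2000)
])
param_cov = np.cov(trials.T)              # parameter variances/covariances
param_std = np.sqrt(np.diag(param_cov))
```

The same resampling scheme applies unchanged when the fit is a nonlinear Levenberg-Marquardt minimization: only the fitting routine inside the loop changes.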
Economic penalties of problems and errors in solar energy systems
Raman, K.; Sparkes, H.R.
1983-01-01
Experience with a large number of installed solar energy systems in the HUD Solar Program has shown that a variety of problems and design/installation errors have occurred in many solar systems, sometimes resulting in substantial additional costs for repair and/or replacement. In this paper, the effect of problems and errors on the economics of solar energy systems is examined. A method is outlined for doing this in terms of selected economic indicators. The method is illustrated by a simple example of a residential solar DHW system. An example of an installed, instrumented solar energy system in the HUD Solar Program is then discussed. Detailed results are given for the effects of the problems and errors on the cash flow, cost of delivered heat, discounted payback period, and life-cycle cost of the solar energy system. Conclusions are drawn regarding the most suitable economic indicators for showing the effects of problems and errors in solar energy systems. A method is outlined for deciding on the maximum justifiable expenditure for maintenance on a solar energy system with problems or errors.
Reducing collective quantum state rotation errors with reversible dephasing
Cox, Kevin C.; Norcia, Matthew A.; Weiner, Joshua M.; Bohnet, Justin G.; Thompson, James K.
2014-12-29
We demonstrate that reversible dephasing via inhomogeneous broadening can greatly reduce collective quantum state rotation errors, and observe the suppression of rotation errors by more than 21 dB in the context of collective population measurements of the spin states of an ensemble of 2.1x10^5 laser cooled and trapped 87Rb atoms. The large reduction in rotation noise enables direct resolution of spin state populations 13(1) dB below the fundamental quantum projection noise limit. Further, the spin state measurement projects the system into an entangled state with 9.5(5) dB of directly observed spectroscopic enhancement (squeezing) relative to the standard quantum limit, whereas no enhancement would have been obtained without the suppression of rotation errors.
Pushing schedule derivation method
Henriquez, B.
1996-12-31
The development of a Pushing Schedule Derivation Method has allowed the company to sustain the maximum production rate at CSH's Coke Oven Battery, in spite of having single set oven machinery with a high failure index as well as a heat top tendency. The stated method provides for scheduled downtime of up to two hours for machinery maintenance purposes, periods of empty ovens for decarbonization, and production loss recovery capability, while observing lower limits and uniformity of coking time.
When soft controls get slippery: User interfaces and human error
Stubler, W.F.; O'Hara, J.M.
1998-12-01
Many types of products and systems that have traditionally featured physical control devices are now being designed with soft controls--input formats appearing on computer-based display devices and operated by a variety of input devices. A review of complex human-machine systems found that soft controls are particularly prone to some types of errors and may affect overall system performance and safety. This paper discusses the application of design approaches for reducing the likelihood of these errors and for enhancing usability, user satisfaction, and system performance and safety.
Scalable error correction in distributed ion trap computers
Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.
2006-11-15
A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiment.
Laser Phase Errors in Seeded Free Electron Lasers
Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC
2012-04-17
Harmonic seeding of free electron lasers has attracted significant attention as a method for producing transform-limited pulses in the soft x-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality and impede production of transform-limited pulses. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.
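The amplification mechanism is simple to state: multiplying the frequency by the harmonic number n multiplies the seed's phase, and hence any phase error, by the same factor. A numeric sketch with invented values (the harmonic number and chirp coefficient below are arbitrary, not taken from the paper):

```python
import numpy as np

n_harmonic = 15                       # hypothetical harmonic multiplication factor
t = np.linspace(-1.0, 1.0, 101)       # normalized time across the seed pulse
b = 0.02                              # assumed quadratic phase (chirp) coefficient, rad
phase_seed = b * t**2                 # seed laser phase error (linear chirp)
phase_harmonic = n_harmonic * phase_seed   # harmonic generation scales phase n-fold

amplification = phase_harmonic.max() / phase_seed.max()
```

A phase excursion that is negligible at the seed wavelength can thus exceed the transform limit at the harmonic, which is why the tolerable seed phase error shrinks as the multiplication factor grows.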
MPI errors from cray-mpich/7.3.0
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
MPI errors from cray-mpich/7.3.0 January 6, 2016 by Ankit Bhagatwala A change in the MPICH2 library that now strictly enforces non-overlapping buffers in MPI collectives may cause some MPI applications that use overlapping buffers to fail at runtime. As an example, one of the routines affected is MPI_ALLGATHER. There are several possible fixes. The cleanest one is to specify MPI_IN_PLACE instead of the address of the send buffer for cases where sendbuf and
Finite Bandwidth Related Errors in Noise Parameter Determination of PHEMTs
Wiatr, Wojciech
2005-08-25
We analyze errors in the determination of the four noise parameters due to finite measurement bandwidth and the delay time in the source circuit. The errors are especially large when characterizing low-noise microwave transistors at low microwave frequencies. They result from the variation of spectral noise density across the measuring receiver band, due to resonant interaction of the highly mismatched transistor input with the source termination. We also show effects of virtual de-correlation of the transistor's noise waves due to finite delay time at the input.
JLab SRF Cavity Fabrication Errors, Consequences and Lessons Learned
Frank Marhauser
2011-09-01
Today, elliptical superconducting RF (SRF) cavities are preferably made from deep-drawn niobium sheets, as pursued at Jefferson Laboratory (JLab). The fabrication of a cavity incorporates various cavity cell machining, trimming, and electron beam welding (EBW) steps, as well as surface chemistry, that add to forming errors, creating geometrical deviations of the cavity shape from its design. An analysis of in-house built cavities over the last years revealed significant errors in cavity production. Past fabrication flaws are described and lessons learned applied successfully to the most recent in-house series production of multi-cell cavities.
V-172: ISC BIND RUNTIME_CHECK Error Lets Remote Users Deny Service...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
ISC BIND RUNTIME_CHECK Error Lets Remote Users Deny Service Against Recursive Resolvers
The contour method cutting assumption: error minimization and correction
Prime, Michael B; Kastengren, Alan L
2010-01-01
The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented, and the important parameters are quantified. Experimental procedures for minimizing these errors are presented, along with an iterative finite element procedure to correct for the errors. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to put in a known profile of residual stresses.
Contributions to Human Errors and Breaches in National Security Applications.
Pond, D. J.; Houghton, F. K.; Gilmore, W. E.
2002-01-01
Los Alamos National Laboratory has recognized that security infractions are often the consequence of various types of human errors (e.g., mistakes, lapses, slips) and/or breaches (i.e., deliberate deviations from policies or required procedures with no intention to bring about an adverse security consequence) and therefore has established an error reduction program based in part on the techniques used to mitigate hazard and accident potentials. One cornerstone of this program, definition of the situational and personal factors that increase the likelihood of employee errors and breaches, is detailed here. This information can be used retrospectively (as in accident investigations) to support and guide inquiries into security incidents or prospectively (as in hazard assessments) to guide efforts to reduce the likelihood of error/incident occurrence. Both approaches provide the foundation for targeted interventions to reduce the influence of these factors and for the formation of subsequent 'lessons learned.' Overall security is enhanced not only by reducing the inadvertent releases of classified information but also by reducing the security and safeguards resources devoted to them, thereby allowing these resources to be concentrated on acts of malevolence.
Error Detection and Correction LDMS Plugin Version 1.0
Shoga, Kathleen; Allan, Ben
2015-11-02
Sandia's Lightweight Distributed Metric Service (LDMS) is a data collection and transport system used at Livermore Computing to gather performance data across the center. While Sandia has a set of plugins available, they do not include all the data we need to capture. The EDAC plugin that we have developed enables collection of the Error Detection and Correction (EDAC) counters.
Shape error analysis for reflective nano focusing optics
Modi, Mohammed H.; Idir, Mourad
2010-06-23
Focusing performance of reflective x-ray optics is determined by surface figure accuracy. Any surface imperfection present on such optics introduces a phase error in the outgoing wave fields, so the converging beam at the focal spot will differ from the desired performance. The effect of these errors on focusing performance can be calculated by a wave-optical approach, considering coherent wave-field illumination of the optical elements. We have developed a wave optics simulator using the Fresnel-Kirchhoff diffraction integral to calculate the mirror pupil function. Both analytically calculated and measured surface topography data can be taken as an aberration source for the outgoing wave fields. Simulations are performed to study the effect of surface height fluctuations on focusing performance over a wide frequency range in the high, mid, and low frequency bands. The results using a real shape profile measured with a long trace profilometer (LTP) suggest that a shape error of λ/4 PV (peak to valley) is tolerable to achieve diffraction-limited performance. It is desirable to remove shape errors of very low frequency, around 0.1 mm⁻¹, which otherwise generate a beam waist or satellite peaks. All other frequencies above this limit will not affect the focused beam profile but only cause a loss in intensity.
The role of variation, error, and complexity in manufacturing defects
Hinckley, C.M.; Barkan, P.
1994-03-01
Variation in component properties and dimensions is a widely recognized factor in product defects which can be quantified and controlled by Statistical Process Control methodologies. Our studies have shown, however, that traditional statistical methods are ineffective in characterizing and controlling defects caused by error. The distinction between error and variation becomes increasingly important as the target defect rates approach extremely low values. Motorola data substantiates our thesis that defect rates in the range of several parts per million can only be achieved when traditional methods for controlling variation are combined with methods that specifically focus on eliminating defects due to error. Complexity in the product design, manufacturing processes, or assembly increases the likelihood of defects due to both variation and error. Thus complexity is also a root cause of defects. Until now, the absence of a sound correlation between defects and complexity has obscured the importance of this relationship. We have shown that assembly complexity can be quantified using Design for Assembly (DFA) analysis. High levels of correlation have been found between our complexity measures and defect data covering tens of millions of assembly operations in two widely different industries. The availability of an easily determined measure of complexity, combined with these correlations, permits rapid estimation of the relative defect rates for alternate design concepts. This should prove to be a powerful tool since it can guide design improvement at an early stage when concepts are most readily modified.
Servo control booster system for minimizing following error
Wise, W.L.
1979-07-26
A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position-feedback resolution least increment, ΔS_R, on a continuous real-time basis, for all operational times of consequence and for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis: when the command-to-response error is greater than or equal to ΔS_R, it produces precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second, unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position-feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
Verification of unfold error estimates in the unfold operator code
Fehl, D.L.; Biggs, F.
1997-01-01
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
Error field and magnetic diagnostic modeling for W7-X
Lazerson, Sam A.; Gates, David A.; NEILSON, GEORGE H.; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.
2014-07-01
The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high-beta (β = 5%), steady-state (30-minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign, as bootstrap-current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to the magnetic structure; the ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.
SU-E-T-51: Bayesian Network Models for Radiotherapy Error Detection
Kalet, A; Phillips, M; Gennari, J
2014-06-01
Purpose: To develop a probabilistic model of radiotherapy plans using Bayesian networks that will detect potential errors in radiation delivery. Methods: Semi-structured interviews with medical physicists and other domain experts were employed to generate a set of layered nodes and arcs forming a Bayesian Network (BN) which encapsulates relevant radiotherapy concepts and their associated interdependencies. Concepts in the final network were limited to those whose parameters are represented in the institutional database at a level significant enough to develop mathematical distributions. The concept-relation knowledge base was constructed using the Web Ontology Language (OWL) and translated into Hugin Expert Bayes Network files via the RHugin package in the R statistical programming language. A subset of de-identified data derived from a Mosaiq relational database representing 1937 unique prescription cases was processed, pre-screened for errors, and then used by the Hugin implementation of the Estimation-Maximization (EM) algorithm to machine-learn all parameter distributions. Individual networks were generated for each of several commonly treated anatomic regions identified by ICD-9 neoplasm categories, including lung, brain, lymphoma, and female breast. Results: The resulting Bayesian networks represent a large part of the probabilistic knowledge inherent in treatment planning. By populating the networks entirely with data captured from a clinical oncology information management system over the course of several years of normal practice, we were able to create accurate probability tables with no additional time spent by experts or clinicians. These probabilistic descriptions of treatment planning allow one to check whether a treatment plan is within the normal scope of practice, given some initial set of clinical evidence, and thereby detect potential outliers to be flagged for further investigation. Conclusion: The networks developed here support the
Bound on quantum computation time: Quantum error correction in a critical environment
Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.
2010-08-15
We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits, but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.
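Schematically, the running-maximum reformulation behind such results augments the state with an auxiliary variable y tracking the maximum. The sketch below follows the standard form of this reformulation; the notation is assumed, not quoted from the paper.

```latex
% v(t,x,y): value function of the augmented problem; H: HJB Hamiltonian;
% g: running cost whose maximum is tracked by the auxiliary variable y.
\begin{aligned}
 -\partial_t v + H\bigl(x, D_x v, D_x^2 v\bigr) &= 0
   && \text{in } \{(x,y) : y > g(x)\},\\
 -\partial_y v &= 0
   && \text{on } \{(x,y) : y = g(x)\} \quad \text{(oblique derivative condition)},\\
 v(T,x,y) &= \varphi(y)
   && \text{(terminal cost of the running maximum).}
\end{aligned}
```

The oblique derivative condition expresses that the running maximum y does not increase while y > g(x) and is "reflected" upward on the boundary y = g(x), which is what makes the boundary condition non-standard.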
Hugger, J.
1995-12-31
When the finite element solution of a variational problem possesses certain superconvergence properties, it is possible, very inexpensively, to obtain a correction term providing an additional order of approximation of the solution. The correction can be used for error estimation, locally or globally, in whatever norm is preferred; or, if no error estimation is wanted, it can be used to postprocess the solution and improve its quality. In this paper such a correction term is described for the general case of n-dimensional, linear or nonlinear problems. Computational evidence of the performance in one space dimension is given, with special attention to the effects of singularities and zeros of derivatives in the exact solution.
Jahan, Kauser
2015-03-31
One of the most promising fuel alternatives is algae biodiesel. Algae reproduce quickly, produce oils more efficiently than crop plants, and require relatively few nutrients for growth. These nutrients can potentially be derived from inexpensive waste sources such as flue gas and wastewater, providing the mutual benefit of helping to mitigate carbon dioxide waste. Algae can also be grown on land unsuitable for agricultural purposes, eliminating competition with food sources. This project focused on cultivating select algae species under various environmental conditions to optimize oil yield. Membrane studies were also conducted to transfer carbon dioxide more efficiently, and a life-cycle assessment (LCA) study was conducted to identify the energy-intensive steps in algae cultivation.
Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report
Reports and Publications (EIA)
2016-01-01
This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depends on the type and characteristics of the survey.
MPI Runtime Error Detection with MUST: Advances in Deadlock Detection
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; de Supinski, Bronis R.; Müller, Matthias S.
2013-01-01
The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST), which detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require 𝒪(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real-world applications.
Comparison of Wind Power and Load Forecasting Error Distributions: Preprint
Hodge, B. M.; Florita, A.; Orwig, K.; Lew, D.; Milligan, M.
2012-07-01
The introduction of large amounts of variable and uncertain power sources, such as wind power, into the electricity grid presents a number of challenges for system operations. One issue involves the uncertainty associated with scheduling power that wind will supply in future timeframes. However, this is not an entirely new challenge; load is also variable and uncertain, and is strongly influenced by weather patterns. In this work we make a comparison between the day-ahead forecasting errors encountered in wind power forecasting and load forecasting. The study examines the distribution of errors from operational forecasting systems in two different Independent System Operator (ISO) regions for both wind power and load forecasts at the day-ahead timeframe. The day-ahead timescale is critical in power system operations because it serves the unit commitment function for slow-starting conventional generators.
Method and system for reducing errors in vehicle weighing systems
Hively, Lee M.; Abercrombie, Robert K.
2010-08-24
A method and system (10, 23) for determining vehicle weight to a precision of <0.1% uses a plurality of weight-sensing elements (23) and a computer (10) that reads in weighing data for a vehicle (25) and produces a dataset representing the total weight of the vehicle via programming (40-53) executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing; (b) determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation from the weighing data; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% of the vehicle weight.
Some aspects of statistical modeling of human-error probability
Prairie, R. R.
1982-01-01
Human reliability analyses (HRA) are often performed as part of risk assessment and reliability projects. Recent events in nuclear power have shown the potential importance of the human element. There are several ongoing efforts in the US and elsewhere with the purpose of modeling human error such that the human contribution can be incorporated into an overall risk assessment associated with one or more aspects of nuclear power. The effort described here uses the HRA event tree to quantify and model the human contribution to risk. As an example, risk analyses are being prepared on several nuclear power plants as part of the Interim Reliability Assessment Program (IREP). In this process the risk analyst selects the elements of his fault tree to which human error could contribute and then solicits the human factors (HF) analyst to perform an HRA on each such element.
Posters The Impacts of Data Error and Model Resolution
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
7 Posters The Impacts of Data Error and Model Resolution on the Result of Variational Data Assimilation S. Yang and Q. Xu Cooperative Institute of Mesoscale Meteorological Studies University of Oklahoma Norman, Oklahoma Introduction The representativeness and accuracy of the measurements or estimates of the lateral boundary fluxes and surface fluxes are crucial for the single-column model and budget studies of climatic variables over Atmospheric Radiation Measurement (ARM) sites. Since the
Runtime Detection of C-Style Errors in UPC Code
Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D
2011-09-29
Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions for each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
Sawant, A
2015-06-15
Purpose: Respiratory-correlated 4DCT images are generated under the assumption of a regular breathing cycle. This study evaluates the error in 4DCT-based target position estimation in the presence of irregular respiratory motion. Methods: A custom-made, programmable, externally and internally deformable lung motion phantom was placed inside the CT bore. An abdominal pressure belt was placed around the phantom to mimic clinical 4DCT acquisition, and the motion platform was programmed with a sinusoidal (±10mm, 10 cycles per minute) motion trace and 7 motion traces recorded from lung cancer patients. The same setup and motion trajectories were repeated in the linac room and kV fluoroscopic images were acquired using the on-board imager. Positions of 4 internal markers segmented from the 4DCT volumes were overlaid upon the motion trajectories derived from the fluoroscopic time series to calculate the difference between estimated (4DCT) and “actual” (kV fluoro) positions. Results: With a sinusoidal trace, absolute errors of the 4DCT-estimated marker positions vary between 0.78mm and 5.4mm, and RMS errors are between 0.38mm and 1.7mm. With irregular patient traces, absolute errors of the 4DCT-estimated marker positions increased significantly, by 100 to 200 percent, while the corresponding RMS error values showed much smaller changes. Significant mismatches were frequently found at the peak-inhale or peak-exhale phase. Conclusion: As expected, under conditions of well-behaved, periodic sinusoidal motion, the 4DCT yielded much better estimation of marker positions. When an actual patient trace was used, 4DCT-derived positions showed significant mismatches with the fluoroscopic trajectories, indicating the potential for geometric, and therefore dosimetric, errors in the presence of cycle-to-cycle respiratory variations.
Unconventional Rotor Power Response to Yaw Error Variations
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Schreck, S. J.; Schepers, J. G.
2014-12-16
Continued inquiry into rotor and blade aerodynamics remains crucial for achieving accurate, reliable prediction of wind turbine power performance under yawed conditions. To exploit key advantages conferred by controlled inflow conditions, we used EU-JOULE DATA Project and UAE Phase VI experimental data to characterize rotor power production under yawed conditions. Anomalies in rotor power variation with yaw error were observed, and the underlying fluid dynamic interactions were isolated. Unlike currently recognized influences caused by angled inflow and skewed wake, which may be considered potential flow interactions, these anomalies were linked to pronounced viscous and unsteady effects.
The Impact of Soil Sampling Errors on Variable Rate Fertilization
R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink
2004-07-01
Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil's characteristics. Most often, spatial variability in the soil's fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on the existing soil fertility at the site, the predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil's fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre, and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted
Error-field penetration in reversed magnetic shear configurations
Wang, H. H.; Wang, Z. X.; Wang, X. Q. [MOE Key Laboratory of Materials Modification by Beams of the Ministry of Education, School of Physics and Optoelectronic Engineering, Dalian University of Technology, Dalian 116024 (China)]; Wang, X. G. [School of Physics, Peking University, Beijing 100871 (China)]
2013-06-15
Error-field penetration in reversed magnetic shear (RMS) configurations is numerically investigated by using a two-dimensional resistive magnetohydrodynamic model in slab geometry. To explore different dynamic processes in locked modes, three equilibrium states are adopted. Stable, marginal, and unstable current profiles for double tearing modes are designed by varying the current intensity between two resonant surfaces separated by a certain distance. Further, the dynamic characteristics of locked modes in the three RMS states are identified, and the relevant physics mechanisms are elucidated. The scaling behavior of critical perturbation value with initial plasma velocity is numerically obtained, which obeys previously established relevant analytical theory in the viscoresistive regime.
Derived Annual Estimates of Manufacturing Energy Consumption...
U.S. Energy Information Administration (EIA) Indexed Site
> Derived Annual Estimates - Executive Summary Derived Annual Estimates of Manufacturing Energy Consumption, 1974-1988 Figure showing Derived Estimates Executive Summary This...
Verification of unfold error estimates in the UFO code
Fehl, D.L.; Biggs, F.
1996-07-01
Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by random uncertainties in the data. This built-in estimate was compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
FlipSphere: A Software-based DRAM Error Detection and Correction...
Office of Scientific and Technical Information (OSTI)
FlipSphere: A Software-based DRAM Error Detection and Correction Library for HPC.
V-194: Citrix XenServer Memory Management Error Lets Local Administrat...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Citrix XenServer Memory Management Error Lets Local Administrative Users on the Guest Gain Access on the Host
Resolved: "error while loading shared libraries: libalpslli.so.0" with
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
December 13, 2013 by Helen He Symptom: Dynamic executables built with compiler wrappers running directly on the external login nodes get the following error message: % ftn -dynamic -o testf testf.f % ./testf ./testf: error while loading shared
T-719:Apache mod_proxy_ajp HTTP Processing Error Lets Remote Users Deny Service
Broader source: Energy.gov [DOE]
A remote user can cause the backend server to remain in an error state until the retry timeout expires.
Coordinated joint motion control system with position error correction
Danko, George L.
2016-04-05
Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.
Coordinated joint motion control system with position error correction
Danko, George
2011-11-22
Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.
A Bayesian Measurement Error Model for Misaligned Radiographic Data
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Lennox, Kristin P.; Glascoe, Lee G.
2013-09-06
An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.
Friese, Daniel H.
2015-09-07
This addendum shows the detailed derivation of the fundamental equations for two-photon circular dichroism, which are given in a very condensed form in the original publication [I. Tinoco, J. Chem. Phys. 62, 1006 (1975)]. In addition, some minor errors are corrected and some of the derivations in the original publication are commented on.
National Nuclear Security Administration (NNSA)
Robert C. Jones, Colleen M. Beck, and Barbara A. Holz Division of Earth and Ecosystem Sciences Cultural Resources Technical Report No.102 Desert Research Institute Las Vegas, ...
Adaptive error covariances estimation methods for ensemble Kalman filters
Zhen, Yicun; Harlim, John
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to the recently proposed method of Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and Berry–Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates than Berry and Sauer's method on the L-96 example.
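The identity underlying innovation-based covariance estimation methods such as Belanger's is that the one-step-ahead innovation covariance equals H P Hᵀ + R, so the observation noise covariance R can be recovered from sample innovations. A scalar sketch, with all system values invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar linear system: x_{k+1} = a*x_k + w (var Q),  y_k = x_k + v (var R).
a, Q, R = 0.9, 0.04, 0.25

# Steady-state one-step-ahead error variance P by fixed-point iteration.
P = 1.0
for _ in range(500):
    K = P / (P + R)
    P = a * a * P * (1.0 - K) + Q

# Simulate and filter, collecting innovations d_k = y_k - x_hat_{k|k-1}.
n = 60000
x, xp = 0.0, 0.0
innovations = np.empty(n)
for k in range(n):
    x = a * x + np.sqrt(Q) * rng.standard_normal()
    y = x + np.sqrt(R) * rng.standard_normal()
    innovations[k] = y - xp
    K = P / (P + R)
    xp = a * (xp + K * (y - xp))       # next one-step-ahead prediction

S_hat = innovations.var()              # sample innovation variance ~ P + R
R_hat = S_hat - P                      # recovered observation noise variance
```

Belanger-style methods extend this idea to products of innovations at multiple lags, which is exactly the flexibility the abstract highlights.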
Spectral characteristics of background error covariance and multiscale data assimilation
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; Vogelmann, Andrew M.
2016-05-17
Spatial resolutions of numerical atmospheric and oceanic circulation models have steadily increased over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
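A background error correlation length scale of the kind quoted above is typically estimated from an ensemble of error fields. The sketch below builds a synthetic 1-D ensemble with a known correlation structure and reads off the distance at which the correlation with a reference point drops below 1/e; the grid spacing, length scale, and ensemble size are all illustrative, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D periodic grid: 256 points at 2 km spacing; smoothing length 30 km.
n_grid, dx_km, L = 256, 2.0, 30.0

# Build ensemble members by smoothing white noise in Fourier space with a
# Gaussian transfer function. The resulting correlation function is Gaussian
# with a 1/e distance of 2*L = 60 km for this choice of filter.
k = np.fft.rfftfreq(n_grid, d=dx_km) * 2.0 * np.pi
filt = np.exp(-0.5 * (k * L) ** 2)
members = []
for _ in range(400):
    w = rng.standard_normal(n_grid)
    members.append(np.fft.irfft(np.fft.rfft(w) * filt, n=n_grid))
ens = np.array(members)
ens -= ens.mean(axis=0)

# Correlate every point with the domain centre, then take the first distance
# at which the correlation falls below 1/e as the length-scale estimate.
c = ens[:, n_grid // 2]
corr = (ens * c[:, None]).mean(axis=0)
corr /= np.sqrt(ens.var(axis=0) * c.var())
right = corr[n_grid // 2:]
L_est_km = np.argmax(right < np.exp(-1)) * dx_km
```

With real model output the same reduction is applied to forecast-minus-analysis or ensemble perturbation fields rather than synthetic noise.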
In vivo enzyme activity in inborn errors of metabolism
Thompson, G.N.; Walter, J.H.; Leonard, J.V.; Halliday, D.
1990-08-01
Low-dose continuous infusions of (2H5)phenylalanine, (1-13C)propionate, and (1-13C)leucine were used to quantitate phenylalanine hydroxylation in phenylketonuria (PKU, four subjects), propionate oxidation in methylmalonic acidaemia (MMA, four subjects) and propionic acidaemia (PA, four subjects), and leucine oxidation in maple syrup urine disease (MSUD, four subjects). In vivo enzyme activity in PKU, MMA, and PA subjects was similar to or in excess of that in adult controls (range of phenylalanine hydroxylation in PKU, 3.7 to 6.5 µmol/kg/h, control 3.2 to 7.9, n = 7; propionate oxidation in MMA, 15.2 to 64.8 µmol/kg/h, and in PA, 11.1 to 36.0, control 5.1 to 19.0, n = 5). By contrast, in vivo leucine oxidation was undetectable in three of the four MSUD subjects (less than 0.5 µmol/kg/h) and negligible in the remaining subject (2 µmol/kg/h, control 10.4 to 15.7, n = 6). These results suggest that significant substrate removal can be achieved in some inborn metabolic errors either through stimulation of residual enzyme activity in defective enzyme systems or by activation of alternate metabolic pathways. Both possibilities almost certainly depend on gross elevation of substrate concentrations. By contrast, only minimal in vivo oxidation of leucine appears possible in MSUD.
SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error
Ju, S; Hong, C; Kim, M; Chung, K; Kim, J; Han, Y; Ahn, S; Chung, S; Shin, E; Shin, J; Kim, H; Kim, D; Choi, D
2014-06-01
Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image was taken of the RM using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased toward the positive direction with gantry angle increments in the clockwise direction. The error reached a maximum value at a gantry angle of 90° and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful to evaluate the MLC leaf position error for various gantry angles without the EPID position error. Gantry angle dependency should be considered during MLC leaf position error analysis.
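The per-angle reduction described in the Methods section can be sketched as follows. This is a toy reduction, not the AutoMLCQA software: the sinusoidal gravity-sag model used to fake the measured leaf positions, the noise level, and the picket locations are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

angles = np.arange(0, 360, 45)                          # every 45 degrees
pickets = np.array([-100.0, -50.0, 0.0, 50.0, 100.0])   # picket positions, mm

# Simulated measurements: a gantry-angle-dependent systematic offset (peaking
# near 90 degrees, as in the Results) plus small random leaf noise, in mm.
sag = 0.5 * np.sin(np.deg2rad(angles))
measured = (pickets[None, :] + sag[:, None]
            + 0.05 * rng.standard_normal((len(angles), len(pickets))))

# Average leaf position error per gantry angle, checked against <1 mm.
errors = measured - pickets[None, :]
mean_err_per_angle = errors.mean(axis=1)
within_tolerance = np.all(np.abs(mean_err_per_angle) < 1.0)
```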
Sigman, Michael E.; Dindal, Amy B.
2003-11-11
Described is a method for producing copolymerized sol-gel derived sorbent particles for the production of copolymerized sol-gel derived sorbent material. The method comprises adding a basic solution to an aqueous metal alkoxide mixture, to a pH ≤ 8, to hydrolyze the metal alkoxides. The mixture is then allowed to react at room temperature for a precalculated period of time, during which it undergoes an increase in viscosity, to obtain a desired pore size and surface area. The copolymerized mixture is then added to an immiscible, nonpolar solvent that has been heated to a sufficient temperature, whereupon the copolymerized mixture forms a solid. The solid is recovered from the mixture, and is ready for use in an active sampling trap or activated for use in a passive sampling trap.
Ginting, Victor
2014-03-15
It was demonstrated that a posteriori analyses in general, and in particular those using adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities include: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.
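The core adjoint idea behind such goal-oriented error estimates is easiest to see in the linear algebraic setting: for A x = b and a QoI q(x) = gᵀx, the adjoint-weighted residual recovers the QoI error of an approximate solution exactly. A minimal sketch with an invented well-conditioned system:

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear problem A x = b with quantity of interest q(x) = g^T x.
# Identity used: q(x) - q(x_approx) = y^T (b - A x_approx), where A^T y = g.
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant
b = rng.standard_normal(n)
g = rng.standard_normal(n)

x_exact = np.linalg.solve(A, b)
x_approx = x_exact + 1e-3 * rng.standard_normal(n)  # stand-in for a coarse solve

y = np.linalg.solve(A.T, g)           # adjoint solve
est = y @ (b - A @ x_approx)          # adjoint-weighted residual estimate
true_err = g @ x_exact - g @ x_approx
```

For nonlinear or discretized PDE problems the identity becomes an estimate rather than an exact equality, which is where the a posteriori analysis in the abstract does its work.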
Magnetic cellulose-derivative structures
Walsh, M.A.; Morris, R.S.
1986-09-16
Structures to serve as selective magnetic sorbents are formed by dissolving a cellulose derivative such as cellulose triacetate in a solvent containing magnetic particles. The resulting solution is sprayed as a fine mist into a chamber containing a liquid coagulant such as n-hexane in which the cellulose derivative is insoluble but in which the coagulant is soluble or miscible. On contact with the coagulant, the mist forms free-flowing porous magnetic microspheric structures. These structures act as containers for the ion-selective or organic-selective sorption agent of choice. Some sorption agents can be incorporated during the manufacture of the structure. 3 figs.
Magnetic cellulose-derivative structures
Walsh, Myles A.; Morris, Robert S.
1986-09-16
Structures to serve as selective magnetic sorbents are formed by dissolving a cellulose derivative such as cellulose triacetate in a solvent containing magnetic particles. The resulting solution is sprayed as a fine mist into a chamber containing a liquid coagulant such as n-hexane in which the cellulose derivative is insoluble but in which the coagulant is soluble or miscible. On contact with the coagulant, the mist forms free-flowing porous magnetic microspheric structures. These structures act as containers for the ion-selective or organic-selective sorption agent of choice. Some sorption agents can be incorporated during the manufacture of the structure.
SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors
Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I
2014-06-01
Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6MV using a Millennium 120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per 56 image pairs, along with 560 positive tests (with error) using randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) if the percentage of pixels with γ<κ was < (or ≥) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some κ+τ combination. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though
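The structure of one detection test above can be sketched as follows. This is a toy version: a plain percent-of-maximum dose-difference criterion stands in for full 3%/3mm gamma analysis, and the image, noise level, error size, and error location are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def passing_fraction(predicted, measured, tol=0.03):
    # Fraction of pixels whose dose difference is under tol of the maximum
    # dose (simplified stand-in for the gamma passing rate).
    diff = np.abs(measured - predicted) / predicted.max()
    return (diff < tol).mean()

def insert_error(img, r, c, size=10, delta=0.2):
    # Rectangular region with intensity raised by delta, as in the abstract.
    out = img.copy()
    out[r:r + size, c:c + size] *= (1.0 + delta)
    return out

predicted = np.outer(np.hanning(64), np.hanning(64)) + 0.1    # smooth toy field
measured = predicted * (1.0 + 0.005 * rng.standard_normal(predicted.shape))

clean_rate = passing_fraction(predicted, measured)            # negative test
errored_rate = passing_fraction(predicted, insert_error(measured, 27, 27))

# Classification rule: an image is called "errored" when the passing fraction
# falls below a threshold tau; sweeping tau over many tests gives the ROC.
tau = 0.99
```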
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2015-12-21
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
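The simulation-study idea in this abstract can be sketched directly: add multiplicative response measurement error to a two-level comparison and watch the power of a standard t-test erode. All numeric values (effect size, sample size, error magnitude, the approximate critical value) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def power(me_sd, effect=1.0, n=8, reps=2000):
    """Empirical power of a two-sample t-test under multiplicative
    response measurement error of relative standard deviation me_sd."""
    hits = 0
    for _ in range(reps):
        lo = 10.0 + rng.standard_normal(n)              # process variation
        hi = 10.0 + effect + rng.standard_normal(n)
        lo *= 1.0 + me_sd * rng.standard_normal(n)      # response meas. error
        hi *= 1.0 + me_sd * rng.standard_normal(n)
        t = (hi.mean() - lo.mean()) / np.sqrt(
            hi.var(ddof=1) / n + lo.var(ddof=1) / n)
        hits += abs(t) > 2.145    # approx. two-sided 5% critical value, df~14
    return hits / reps

power_clean = power(0.0)      # no measurement error
power_noisy = power(0.10)     # 10% multiplicative measurement error
```

Repeat measurements, as the abstract notes, recover some of this lost power by averaging down the measurement-error component.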
Method and apparatus for detecting timing errors in a system oscillator
Gliebe, Ronald J.; Kramer, William R.
1993-01-01
A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W. (Albuquerque, NM); Heard, Freddie E. (Albuquerque, NM); Cordaro, J. Thomas (Albuquerque, NM)
2008-06-24
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
V-172: ISC BIND RUNTIME_CHECK Error Lets Remote Users Deny Service Against Recursive Resolvers
Broader source: Energy.gov [DOE]
A defect exists which allows an attacker to crash a BIND 9 recursive resolver with a RUNTIME_CHECK error in resolver.c
Table 4b. Relative Standard Errors for Total Fuel Oil Consumption...
Gasoline and Diesel Fuel Update (EIA)
4b. Relative Standard Errors for Total Fuel Oil Consumption per Effective Occupied Square Foot, 1992 Building Characteristics All Buildings Using Fuel Oil (thousand) Total Fuel Oil...
Redox Chemistry of Anthraquinone Derivatives Via Simulations...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
August 27, 2014, Research Highlights Redox Chemistry of Anthraquinone Derivatives Via ... S. Assary, Investigation of the Redox Chemistry of Anthraquinone Derivatives Using ...
Shirasaki, Masato; Yoshida, Naoki
2014-05-01
The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey with a sky coverage of ?1400 deg{sup 2} will constrain the dark energy equation of the state parameter with an error of ?w {sub 0} ? 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1? error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density ?{sub m0}=0.256{sub 0.046}{sup 0.054}.
Compilation error with cray-petsc/3.6.1.0
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Compilation error with cray-petsc3.6.1.0 Compilation error with cray-petsc3.6.1.0 January 5, 2016 The current default cray-petsc module, cray-petsc3.6.1.0, does not work with...
The cce/8.3.0 C++ compiler may run into a linking error on Edison
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
The cce8.3.0 C++ compiler may run into a linking error on Edison The cce8.3.0 C++ compiler may run into a linking error on Edison July 1, 2014 You may run into the following...
Potential Hydraulic Modelling Errors Associated with Rheological Data Extrapolation in Laminar Flow
Shadday, Martin A., Jr.
1997-03-20
The potential errors associated with the modelling of flows of non-Newtonian slurries through pipes, due to inadequate rheological models and extrapolation outside of the ranges of data bases, are demonstrated. The behaviors of both dilatant and pseudoplastic fluids with yield stresses, and the errors associated with treating them as Bingham plastics, are investigated.
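The extrapolation hazard described above can be made concrete: fit a Bingham-plastic law (τ = τ_y + μ·γ̇) to shear data generated by a pseudoplastic Herschel–Bulkley fluid (τ = τ_y + K·γ̇ⁿ), then compare the two outside the fitted range. All parameter values and the fitted range are illustrative assumptions.

```python
import numpy as np

# Herschel-Bulkley "truth": yield stress 5, consistency 2, index 0.6
# (pseudoplastic since n < 1). Units are arbitrary for the sketch.
tau_y, K, n = 5.0, 2.0, 0.6

gdot_fit = np.linspace(1.0, 50.0, 40)        # shear-rate range of the "data"
tau_true = tau_y + K * gdot_fit**n

# Least-squares Bingham-plastic fit (tau = tau_y_fit + mu_fit * gdot)
# over the measured range only.
A = np.vstack([np.ones_like(gdot_fit), gdot_fit]).T
(tau_y_fit, mu_fit), *_ = np.linalg.lstsq(A, tau_true, rcond=None)

def rel_err(gdot):
    # Relative error of the Bingham model against the true rheology.
    true = tau_y + K * gdot**n
    fit = tau_y_fit + mu_fit * gdot
    return abs(fit - true) / true

err_inside = rel_err(25.0)    # within the fitted data range
err_outside = rel_err(500.0)  # an order of magnitude beyond the data
```

The fit is adequate inside the data range but its error grows rapidly under extrapolation, which is exactly the hydraulic-modelling hazard the report demonstrates.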
A Case for Soft Error Detection and Correction in Computational Chemistry
van Dam, Hubertus JJ; Vishnu, Abhinav; De Jong, Wibe A.
2013-09-10
High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
Almasi, Gheorghe [Ardsley, NY; Blumrich, Matthias Augustin [Ridgefield, CT; Chen, Dong [Croton-On-Hudson, NY; Coteus, Paul [Yorktown, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E. [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk I. [Ossining, NY; Singh, Sarabjeet [Mississauga, CA; Steinmacher-Burow, Burkhard D. [Wernau, DE; Takken, Todd [Brewster, NY; Vranas, Pavlos [Bedford Hills, NY
2008-06-03
Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values (for example, checksums) to identify and isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieves commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
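The comparison scheme in this patent abstract can be sketched as follows. The node data, the run structure, and the use of a sum of per-message CRCs as the commutative value are all our illustrative stand-ins for the hardware mechanism.

```python
import zlib

def node_checksums(run_data):
    """Map node id -> commutative-style checksum of its injected messages."""
    out = {}
    for node, messages in run_data.items():
        # A sum of per-message CRCs is order-independent (commutative), so
        # nondeterministic message ordering does not cause false alarms.
        out[node] = sum(zlib.crc32(m) for m in messages) & 0xFFFFFFFF
    return out

# Two runs of the "same" reproducible program portion (invented data).
run1 = {0: [b"alpha", b"beta"], 1: [b"gamma"], 2: [b"delta", b"eps"]}
run2 = {0: [b"beta", b"alpha"],           # same data, different order: OK
        1: [b"gamma"],
        2: [b"delta", b"epX"]}            # node 2 corrupted a message

c1, c2 = node_checksums(run1), node_checksums(run2)
faulty = [node for node in c1 if c1[node] != c2[node]]
```

Commutativity is the key design choice: it makes the per-node value reproducible even when the network delivers a node's messages in a different order on each run.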
Binder enhanced refuse derived fuel
Daugherty, Kenneth E.; Venables, Barney J.; Ohlsson, Oscar O.
1996-01-01
A refuse derived fuel (RDF) pellet having about 11% or more particulate calcium hydroxide, for use in a combustible mixture. The pellets are used in a particulate fuel that is a mixture of 10% or more, on a heat equivalent basis, of the RDF pellets, which contain calcium hydroxide as a binder, with 50% or more, on a heat equivalent basis, of a sulphur-containing coal. Combustion of the mixture is effective to produce an effluent gas from the combustion zone having a reduced SO2 and polycyclic aromatic hydrocarbon content relative to effluent gas from similar combustion materials not containing the calcium hydroxide.
Modern Palliative Radiation Treatment: Do Complexity and Workload Contribute to Medical Errors?
D'Souza, Neil; Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario ; Holden, Lori; Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario ; Robson, Sheila; Mah, Kathy; Di Prospero, Lisa; Wong, C. Shun; Chow, Edward; Spayne, Jacqueline; Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario
2012-09-01
Purpose: To examine whether treatment workload and complexity associated with palliative radiation therapy contribute to medical errors. Methods and Materials: In the setting of a large academic health sciences center, patient scheduling and record and verification systems were used to identify patients starting radiation therapy. All records of radiation treatment courses delivered during a 3-month period were retrieved and divided into radical and palliative intent. 'Same day consultation, planning and treatment' was used as a proxy for workload and 'previous treatment' and 'multiple sites' as surrogates for complexity. In addition, all planning and treatment discrepancies (errors and 'near-misses') recorded during the same time frame were reviewed and analyzed. Results: There were 365 new patients treated with 485 courses of palliative radiation therapy. Of those patients, 128 (35%) were same-day consultation, simulation, and treatment patients; 166 (45%) patients had previous treatment; and 94 (26%) patients had treatment to multiple sites. Four near-misses and 4 errors occurred during the audit period, giving an error per course rate of 0.82%. In comparison, there were 10 near-misses and 5 errors associated with 1100 courses of radical treatment during the audit period. This translated into an error rate of 0.45% per course. An association was found between workload and complexity and increased palliative therapy error rates. Conclusions: Increased complexity and workload may have an impact on palliative radiation treatment discrepancies. This information may help guide the necessary recommendations for process improvement for patients who require palliative radiation therapy.
Fractional charge and spin errors in self-consistent Green's function theory
Phillips, Jordan J.; Kananenka, Alexei A.; Zgid, Dominika
2015-05-21
We examine fractional charge and spin errors in self-consistent Green's function theory within a second-order approximation (GF2). For GF2, it is known that the summation of diagrams resulting from the self-consistent solution of the Dyson equation removes the divergences pathological to second-order Møller-Plesset (MP2) theory for strong correlations. In the language often used in density functional theory contexts, this means GF2 has a greatly reduced fractional spin error relative to MP2. The natural question then is what effect, if any, the Dyson summation has on the fractional charge error in GF2. To this end, we generalize our previous implementation of GF2 to open-shell systems and analyze its fractional spin and charge errors. We find that, like MP2, GF2 possesses only a very small fractional charge error, and consequently minimal many-electron self-interaction error. This shows that GF2 improves on the critical failings of MP2 without altering the positive features that make it desirable. Furthermore, we find that GF2 has smaller fractional charge and fractional spin errors than typical hybrid density functionals, as well as the random phase approximation with exchange.
ADEPT, a dynamic next generation sequencing data error-detection program with trimming
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Feng, Shihai; Lo, Chien-Chi; Li, Po-E; Chain, Patrick S. G.
2016-02-29
Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, which it compares to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.
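A loose sketch of the position-specific idea (not ADEPT's actual algorithm; the function name, threshold, and data are invented for illustration): flag a base whose quality score sits far below the run-wide quality distribution at that read position. ADEPT additionally weighs the qualities of neighboring bases.

```python
import statistics

def flag_errors(read_quals, position_dists, z_cut=-2.0):
    """Flag read positions whose quality score falls far below the
    run-wide quality distribution at that position."""
    flagged = []
    for pos, q in enumerate(read_quals):
        mu = statistics.mean(position_dists[pos])
        sd = statistics.stdev(position_dists[pos])
        if sd > 0 and (q - mu) / sd < z_cut:
            flagged.append(pos)
    return flagged

# Synthetic run-wide per-position qualities, and one read with a bad base:
dists = [[28, 30, 32, 30, 31]] * 5
read = [30, 30, 2, 30, 30]
print(flag_errors(read, dists))
```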
Evans, Suzanne B.; Yu, James B.; Chagpar, Anees
2012-10-01
Purpose: To analyze error disclosure attitudes of radiation oncologists and to correlate error disclosure beliefs with survey-assessed disclosure behavior. Methods and Materials: With institutional review board exemption, an anonymous online survey was devised. An email invitation was sent to radiation oncologists (American Society for Radiation Oncology [ASTRO] gold medal winners, program directors and chair persons of academic institutions, and former ASTRO lecturers) and residents. A disclosure score was calculated based on the number of full, partial, or no disclosure responses chosen in response to the vignette-based questions, and correlation was attempted with attitudes toward error disclosure. Results: The survey received 176 responses: 94.8% of respondents considered themselves more likely to disclose in the setting of a serious medical error; 72.7% of respondents did not feel it mattered who was responsible for the error in deciding to disclose, and 3.9% felt more likely to disclose if someone else was responsible; 38.0% of respondents felt that disclosure increased the likelihood of a lawsuit, and 32.4% felt disclosure decreased the likelihood of lawsuit; 71.6% of respondents felt near misses should not be disclosed; 51.7% thought that minor errors should not be disclosed; 64.7% viewed disclosure as an opportunity for forgiveness from the patient; and 44.6% considered the patient's level of confidence in them to be a factor in disclosure. For a scenario that could be considered a non-harmful error, 78.9% of respondents would not contact the family. Respondents with high disclosure scores were more likely to feel that disclosure was an opportunity for forgiveness (P=.003) and to have never seen major medical errors (P=.004). Conclusions: The surveyed radiation oncologists chose to respond with full disclosure at a high rate, although ideal disclosure practices were not uniformly adhered to beyond the initial decision to disclose the occurrence of the error.
CORRELATED AND ZONAL ERRORS OF GLOBAL ASTROMETRIC MISSIONS: A SPHERICAL HARMONIC SOLUTION
Makarov, V. V.; Dorland, B. N.; Gaume, R. A.; Hennessy, G. S.; Berghea, C. T.; Dudik, R. P.; Schmitt, H. R.
2012-07-15
We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.
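A minimal numpy sketch of the idea (degree-0/1 real harmonics only; the coefficient values and noise level are invented): simulate spatially correlated positional errors on the sphere and solve by least squares for the harmonic coefficients rather than for individual catalog objects.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
ra = rng.uniform(0, 2 * np.pi, n)          # right ascension
dec = np.arcsin(rng.uniform(-1, 1, n))     # declination, uniform on the sphere

# Degree-0/1 real harmonic basis evaluated at the catalog positions.
A = np.column_stack([np.ones(n),
                     np.sin(dec),
                     np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra)])

true_c = np.array([0.10, -0.30, 0.20, 0.05])   # assumed zonal error pattern (mas)
err = A @ true_c + rng.normal(0.0, 0.01, n)    # systematic part plus noise

coef, *_ = np.linalg.lstsq(A, err, rcond=None) # solve for the coefficients
print(np.round(coef, 3))
```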
T-545: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets Remote Users
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Execute Arbitrary Code | Department of Energy T-545: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets Remote Users Execute Arbitrary Code January 28, 2011 - 7:21am PROBLEM: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets Remote Users Execute Arbitrary Code. PLATFORM: RealPlayer 14.0.1 and prior versions ABSTRACT: A vulnerability was reported in RealPlayer. A remote user can
V-228: RealPlayer Buffer Overflow and Memory Corruption Error...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
V-228: RealPlayer Buffer Overflow and Memory Corruption Error Let Remote Users Execute ... Lets Remote Users Execute Arbitrary Code V-049: RealPlayer Buffer Overflow and Invalid ...
"Show Preview" button is not working; gives error | OpenEI Community
"Show Preview" button is not working; gives error Home > Groups > Utility Rate Submitted by Ewilson on 3 January, 2013 - 09:52 1 answer Points: 0 Eric, thanks for reporting this. I...
A Compact Code for Simulations of Quantum Error Correction in Classical Computers
Nyman, Peter
2009-03-10
This study considers implementations of error correction in a simulation language on a classical computer. Error correction will be necessary in quantum computing and quantum information. We give some examples of implementations of error correction codes. These implementations are made in a more general quantum simulation language on a classical computer, in the language Mathematica. The intention of this research is to develop a programming language that is able to simulate all quantum algorithms and error corrections in the same framework. The program code implemented on a classical computer provides a connection between the mathematical formulation of quantum mechanics and computational methods. This gives us a clear, uncomplicated language for the implementation of algorithms.
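The paper's Mathematica code is not reproduced here, but the logic of the simplest quantum code such a simulator would handle, the 3-qubit bit-flip code, can be sketched with plain statevectors (the amplitudes a, b are arbitrary illustrative values):

```python
import numpy as np

def x_on(state, qubit):
    """Apply a bit-flip (Pauli X) to one qubit of a 3-qubit statevector."""
    new = np.zeros_like(state)
    for basis in range(8):
        new[basis ^ (1 << qubit)] = state[basis]
    return new

a, b = 0.6, 0.8                      # logical amplitudes, |a|^2 + |b|^2 = 1
state = np.zeros(8)
state[0b000], state[0b111] = a, b    # encoded logical state a|000> + b|111>

state = x_on(state, 1)               # inject a single bit-flip error

# Stabilizer syndromes Z0Z1 and Z1Z2 are deterministic for this error class;
# read them off any basis label in the state's support.
basis = int(np.argmax(np.abs(state)))
s1 = (basis ^ (basis >> 1)) & 1            # parity of qubits 0 and 1
s2 = ((basis >> 1) ^ (basis >> 2)) & 1     # parity of qubits 1 and 2
which = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]
if which is not None:
    state = x_on(state, which)       # apply the correcting X

print(state[0b000], state[0b111])
```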
V-109: Google Chrome WebKit Type Confusion Error Lets Remote...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Google Chrome WebKit Type Confusion Error Lets Remote Users Execute Arbitrary Code PLATFORM: Google Chrome prior to 25.0.1364.160 ABSTRACT: A vulnerability was reported in...
Progress in Understanding Error-field Physics in NSTX Spherical Torus Plasmas
E. Menard, R.E. Bell, D.A. Gates, S.P. Gerhardt, J.-K. Park, S.A. Sabbagh, J.W. Berkery, A. Egan, J. Kallman, S.M. Kaye, B. LeBlanc, Y.Q. Liu, A. Sontag, D. Swanson, H. Yuh, W. Zhu and the NSTX Research Team
2010-05-19
The low aspect ratio, low magnetic field, and wide range of plasma beta of NSTX plasmas provide new insight into the origins and effects of magnetic field errors. An extensive array of magnetic sensors has been used to analyze error fields, to measure error field amplification, and to detect resistive wall modes in real time. The measured normalized error-field threshold for the onset of locked modes shows a linear scaling with plasma density, a weak to inverse dependence on toroidal field, and a positive scaling with magnetic shear. These results extrapolate to a favorable error field threshold for ITER. For these low-beta locked-mode plasmas, perturbed equilibrium calculations find that the plasma response must be included to explain the empirically determined optimal correction of NSTX error fields. In high-beta NSTX plasmas exceeding the n=1 no-wall stability limit where the RWM is stabilized by plasma rotation, active suppression of n=1 amplified error fields and the correction of recently discovered intrinsic n=3 error fields have led to sustained high rotation and record durations free of low-frequency core MHD activity. For sustained rotational stabilization of the n=1 RWM, both the rotation threshold and magnitude of the amplification are important. At fixed normalized dissipation, kinetic damping models predict rotation thresholds for RWM stabilization to scale nearly linearly with particle orbit frequency. Studies for NSTX find that orbit frequencies computed in general geometry can deviate significantly from those computed in the high aspect ratio and circular plasma cross-section limit, and these differences can strongly influence the predicted RWM stability. The measured and predicted RWM stability is found to be very sensitive to the E × B rotation profile near the plasma edge, and the measured critical rotation for the RWM is approximately a factor of two higher than predicted by the MARS-F code using the semi-kinetic damping model.
Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report
Weekly Natural Gas Storage Report (EIA)
U.S. Energy Information Administration | Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report 1 February 2016 Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report The U.S. Energy Information Administration (EIA) collects and publishes natural gas storage information on a monthly and weekly basis. The Form EIA-191, Monthly Underground Natural Gas Storage Report, is a census survey that collects field-level
V-235: Cisco Mobility Services Engine Configuration Error Lets Remote Users
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Login Anonymously | Department of Energy V-235: Cisco Mobility Services Engine Configuration Error Lets Remote Users Login Anonymously September 5, 2013 - 12:33am PROBLEM: A vulnerability was reported in Cisco Mobility Services Engine. A remote user can login anonymously. PLATFORM: Cisco Mobility Services Engine ABSTRACT: A vulnerability in Cisco Mobility Services Engine could allow an
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Quantifying the Effect of Lidar Turbulence Error on Wind Power Prediction
Newman, Jennifer F.; Clifton, Andrew
2016-01-01
Currently, cup anemometers on meteorological towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability; however, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install meteorological towers at potential sites. As a result, remote-sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. Although lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence using these devices. Errors in lidar turbulence estimates are caused by a variety of factors, including instrument noise, volume averaging, and variance contamination, in which the magnitude of these factors is highly dependent on measurement height and atmospheric stability. As turbulence has a large impact on wind power production, errors in turbulence measurements will translate into errors in wind power prediction. The impact of using lidars rather than cup anemometers for wind power prediction must be understood if lidars are to be considered a viable alternative to cup anemometers. In this poster, the sensitivity of power prediction error to typical lidar turbulence measurement errors is assessed. Turbulence estimates from a vertically profiling WINDCUBE v2 lidar are compared to high-resolution sonic anemometer measurements at field sites in Oklahoma and Colorado to determine the degree of lidar turbulence error that can be expected under different atmospheric conditions. These errors are then incorporated into a power prediction model to estimate the sensitivity of power prediction error to turbulence measurement error. Power prediction models, including the standard binning method and a random forest method, were developed using data from the aeroelastic simulator FAST
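A toy illustration of how a turbulence intensity (TI) measurement error propagates into a power prediction bias (not the poster's FAST-based model; the turbine parameters and the small-TI mean-cube approximation <v³> ≈ v̄³(1 + 3·TI²) are assumptions for illustration):

```python
import numpy as np

rho, radius, cp = 1.225, 40.0, 0.45          # assumed air density, rotor, Cp
area = np.pi * radius**2

def mean_power(v_mean, ti):
    # <v^3> ~= v_mean^3 * (1 + 3*TI^2) for small turbulence intensity
    return 0.5 * rho * area * cp * v_mean**3 * (1.0 + 3.0 * ti**2)

v, ti_true = 8.0, 0.12
for ti_err in (0.01, 0.02, 0.05):            # representative lidar TI errors
    bias = mean_power(v, ti_true + ti_err) / mean_power(v, ti_true) - 1.0
    print(f"TI error +{ti_err:.2f} -> power prediction bias {100 * bias:+.2f}%")
```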
Total-derivative supersymmetry breaking
Haba, Naoyuki; Uekusa, Nobuhiro
2010-05-15
On an interval compactification in supersymmetric theory, boundary conditions for bulk fields must be treated carefully. If they are taken arbitrarily, following only the requirement that the theory be supersymmetric, the conditions could place redundant constraints on the theory. We construct a supersymmetric action integral on an interval by introducing brane interactions with which total-derivative terms under the supersymmetry transformation vanish due to a cancellation. The variational principle leads to the equations of motion and also to boundary conditions for bulk fields, which determine the boundary values of the bulk fields. By estimating the mass spectrum, spontaneous supersymmetry breaking in this simple setup can be realized in a new framework. This supersymmetry breaking does not induce a massless R axion, which is favorable for phenomenology. It is worth noting that the fermions in the hypermultiplet, the gauge bosons, and the fifth-dimensional component of the gauge bosons can have zero modes (while the other components are all massive Kaluza-Klein modes), which fits gauge-Higgs unification scenarios.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Medeiros, Stephen; Hagen, Scott; Weishampel, John; Angelo, James
2015-03-25
Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
Wang, S; Chao, C; Chang, J
2014-06-01
Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to the positioning error for flattening filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10MV conventional and FFF beams, with careful alignment and with 1cm positioning error during calibration, respectively. Open fields of 37cmx37cm were delivered to gauge the impact of resultant calibration errors. The local calibration error was modeled as a detector independent multiplication factor, with which propagation error was estimated with positioning error from 1mm to 1cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on the beam type. Results: The 1cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with respect to the positioning error. The difference of sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that the positioning error is not handled by the current commercial calibration algorithm of MapCheck. Particularly, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1mm positioning error might lead to up to 8% calibration error. Since the sensitivities are only slightly dependent of the beam type and the conventional beam is less affected by the positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect
Notes on power of normality tests of error terms in regression models
Střelec, Luboš
2015-03-10
Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to detect non-normality of the error terms may lead to incorrect results from the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary for inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
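A short sketch of the workflow the abstract describes, using two classical tests (not the paper's RT class): fit a regression, then apply normality tests to the residuals. The heavy-tailed disturbance distribution and sample size are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 200)
y = 2.0 + 0.5 * x + rng.standard_t(df=3, size=200)   # heavy-tailed errors

slope, intercept = np.polyfit(x, y, 1)               # OLS fit
resid = y - (intercept + slope * x)                  # regression residuals

# Classical normality tests applied to the residuals:
for name, res in [("Shapiro-Wilk", stats.shapiro(resid)),
                  ("Jarque-Bera", stats.jarque_bera(resid))]:
    print(f"{name}: statistic={res.statistic:.3f}, p-value={res.pvalue:.4f}")
```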
Gross error detection and stage efficiency estimation in a separation process
Serth, R.W.; Srikanth, B. (Dept. of Chemical and Natural Gas Engineering); Maronga, S.J. (Dept. of Chemical and Process Engineering)
1993-10-01
Accurate process models are required for optimization and control in chemical plants and petroleum refineries. These models involve various equipment parameters, such as stage efficiencies in distillation columns, the values of which must be determined by fitting the models to process data. Since the data contain random and systematic measurement errors, some of which may be large (gross errors), they must be reconciled to obtain reliable estimates of equipment parameters. The problem thus involves parameter estimation coupled with gross error detection and data reconciliation. MacDonald and Howat (1988) studied the above problem for a single-stage flash distillation process. Their analysis was based on the definition of stage efficiency due to Hausen, which has some significant disadvantages in this context, as discussed below. In addition, they considered only data sets which contained no gross errors. The purpose of this article is to extend the above work by considering alternative definitions of stage efficiency and efficiency estimation in the presence of gross errors.
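A minimal sketch of linear data reconciliation with a global gross-error test (a generic textbook formulation, not the article's method; the flows and uncertainties are invented): project the measurements onto the balance constraint and compare the constraint residual against its chi-square distribution.

```python
import numpy as np

# Single linear balance: feed1 + feed2 - product = 0 (illustrative numbers).
y = np.array([10.2, 5.8, 15.0])          # measured flows
sigma = np.array([0.1, 0.1, 0.1])        # measurement standard deviations
A = np.array([[1.0, 1.0, -1.0]])         # constraint matrix, A @ x = 0

V = np.diag(sigma**2)
r = A @ y                                # constraint residual of the raw data
W = A @ V @ A.T
x_hat = y - V @ A.T @ np.linalg.solve(W, r)   # reconciled estimates

# Global test: r' W^-1 r ~ chi-square(1); large values signal a gross error.
gt = float(r @ np.linalg.solve(W, r))
print("reconciled flows:", np.round(x_hat, 3), " test statistic:", round(gt, 1))
```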
Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko
2015-01-01
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
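As an illustration of the error-correction ingredient (a generic systematic Hamming(7,4) code, not the paper's exact construction), a single corrupted bit per 7-bit block is located via the syndrome and flipped back:

```python
import numpy as np

# Hamming(7,4) in systematic form: 4 data bits followed by 3 parity bits.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits4):
    return (np.array(bits4) @ G) % 2

def decode(word7):
    syndrome = (H @ word7) % 2
    if syndrome.any():                                  # locate the flipped bit:
        col = np.where((H.T == syndrome).all(axis=1))[0][0]
        word7 = word7.copy()
        word7[col] ^= 1                                 # correct it
    return word7[:4]                                    # systematic data bits

data = [1, 0, 1, 1]
cw = encode(data)
cw[5] ^= 1                                              # single channel bit error
print(list(decode(cw)))
```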
Jakeman, J.D. Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Pion production uncertainties We compare the existing pion production data from HARP to a Sanford-Wang parameterization function of the differential cross section: d...
Biomass Derivatives Competitive with Heating Oil Costs.
Biomass Derivatives Competitive with Heating Oil Costs Transportation fuel Heat or electricity * Data are from literature, except heating oil is adjusted from 2011 winter average * ...
DERIVATION OF STOCHASTIC ACCELERATION MODEL CHARACTERISTICS FOR...
Office of Scientific and Technical Information (OSTI)
FOR SOLAR FLARES FROM RHESSI HARD X-RAY OBSERVATIONS Citation Details In-Document Search Title: DERIVATION OF STOCHASTIC ACCELERATION MODEL CHARACTERISTICS FOR SOLAR FLARES ...
Tetrahydroquinoline Derivatives as Potent and Selective Factor...
Office of Scientific and Technical Information (OSTI)
as Potent and Selective Factor XIa Inhibitors Citation Details In-Document Search Title: Tetrahydroquinoline Derivatives as Potent and Selective Factor XIa Inhibitors Authors: ...
Proceedings of refuse-derived fuel (RDF)
Saltiel, C.
1991-01-01
This book contains the proceedings of Refuse-Derived Fuel (RDF): Quality, Standards and Processing. Topics covered include: An Overview of RDF Processing Systems: Current Status, Design Features, and Future Trends; The Impact of Recycling and Pre-Combustion Processing of Municipal Solid Waste on Fuel Properties and Steam Combustion; The Changing Role of Standards in the Marketing of RDF; Refuse Derived Fuel Quality Requirements for Firing in Utility, Industrial or Dedicated Boilers; Refuse-Derived Fuel Moisture Effects on Boiler Performance and Operability; and Refuse Derived Fuels: Technology, Processing, Quality and Combustion Experiences.
HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD
Harold S. Blackman; David I. Gertman; Ronald L. Boring
2008-09-01
This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward: it starts with pre-defined nominal error rates for cognitive vs. action-oriented tasks and applies performance shaping factor multipliers to those nominal error rates.
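The multiplier calculation can be sketched in a few lines. The nominal rate and PSF values below are illustrative assumptions, and the large-composite adjustment shown is the standard SPAR-H correction as commonly documented; consult the method itself for the authoritative form.

```python
def spar_h_hep(nominal, psfs):
    """HEP = nominal error rate x product of PSF multipliers, with the
    SPAR-H adjustment applied when three or more PSFs are worse than
    nominal (this keeps the result below 1.0)."""
    composite = 1.0
    for m in psfs:
        composite *= m
    if sum(1 for m in psfs if m > 1) >= 3:
        return nominal * composite / (nominal * (composite - 1.0) + 1.0)
    return min(nominal * composite, 1.0)

# Diagnosis task, assumed nominal HEP 1e-2, with "poor" stress (x2) and
# "poor" procedures (x5) as hypothetical PSF multipliers:
print(spar_h_hep(0.01, [2, 5]))
```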
Fade-resistant forward error correction method for free-space optical communications systems
Johnson, Gary W.; Dowla, Farid U.; Ruggiero, Anthony J.
2007-10-02
Free-space optical (FSO) laser communication systems offer exceptionally wide-bandwidth, secure connections between platforms that cannot otherwise be connected via physical means such as optical fiber or cable. However, FSO links are subject to strong channel fading due to atmospheric turbulence and beam pointing errors, limiting practical performance and reliability. We have developed a fade-tolerant architecture based on forward error correcting codes (FECs) combined with delayed, redundant sub-channels. This redundancy is made feasible through dense wavelength division multiplexing (WDM) and/or high-order M-ary modulation. Experiments and simulations show that error-free communication is feasible even when faced with fades that are tens of milliseconds long. We describe plans for practical implementation of a complete system operating at 2.5 Gbps.
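A toy model of the delayed-redundancy idea (the slot counts, fade length, and erasure model are invented; the real system also layers FEC on top): each bit travels on two sub-channels, one delayed, so a fade shorter than the delay never erases both copies.

```python
import random

def transmit(bits, delay, fade_start, fade_len):
    """Each bit is sent twice: at slot i and, on a delayed sub-channel,
    at slot i + delay. A fade erases every slot in
    [fade_start, fade_start + fade_len) on both sub-channels."""
    def faded(slot):
        return fade_start <= slot < fade_start + fade_len
    out = []
    for i, b in enumerate(bits):
        copy_a = None if faded(i) else b
        copy_b = None if faded(i + delay) else b
        out.append(copy_a if copy_a is not None else copy_b)  # None = lost
    return out

random.seed(0)
bits = [random.randint(0, 1) for _ in range(1000)]
# A 50-slot fade is survived because the 100-slot delay exceeds it.
received = transmit(bits, delay=100, fade_start=200, fade_len=50)
print(received == bits)
```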
Hodge, B. M.; Lew, D.; Milligan, M.
2013-01-01
Load forecasting in the day-ahead timescale is a critical aspect of power system operations that is used in the unit commitment process. It is also an important factor in renewable energy integration studies, where the combination of load and wind or solar forecasting techniques creates the net load uncertainty that must be managed by the economic dispatch process or with suitable reserves. An understanding of the load forecasting errors that may be expected in this process can lead to better decisions about the amount of reserves necessary to compensate for errors. In this work, we performed a statistical analysis of the day-ahead (and two-day-ahead) load forecasting errors observed in two independent system operators for a one-year period. Comparisons were made with the normal distribution commonly assumed in power system operation simulations used for renewable power integration studies. Further analysis identified time periods when the load is more likely to be under- or over-forecast.
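One way such a comparison can be quantified is through excess kurtosis, which is zero for the normal distribution. The synthetic Laplace-distributed sample below is only a stand-in for real forecast-error data (the paper's data are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Stand-in for day-ahead load forecast errors: Laplace-like heavy tails,
# versus the zero-excess-kurtosis normal often assumed in studies.
errors = rng.laplace(loc=0.0, scale=1.0, size=10000)

print("sample excess kurtosis:", round(stats.kurtosis(errors), 2))
print("a normal sample would give ~0; Laplace theory gives 3")
```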
Error correcting code with chip kill capability and power saving enhancement
Gara, Alan G.; Chen, Dong; Coteus, Paul W.; Flynn, William T.; Marcella, James A.; Takken, Todd; Trager, Barry M.; Winograd, Shmuel
2011-08-30
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
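A toy symbol-level analogue of the syndrome logic (not the patent's actual code, which works over a different field with chip-kill guarantees): two check symbols, S0 = Σ dᵢ and S1 = Σ i·dᵢ mod a prime, suffice to detect and correct any single corrupted data symbol.

```python
P = 257  # prime just above the byte range, so symbols 0..255 fit

def checks(data):
    """Two check symbols: S0 = sum(d_i), S1 = sum(i * d_i), both mod P."""
    s0 = sum(data) % P
    s1 = sum(i * d for i, d in enumerate(data, start=1)) % P
    return s0, s1

def correct(data, stored):
    """All-zero syndromes mean no error; otherwise the syndrome ratio
    locates, and the magnitude repairs, a single corrupted symbol."""
    s0, s1 = checks(data)
    c0 = (s0 - stored[0]) % P
    c1 = (s1 - stored[1]) % P
    if c0 == 0 and c1 == 0:
        return list(data)
    pos = (c1 * pow(c0, -1, P)) % P          # 1-based error position
    fixed = list(data)
    fixed[pos - 1] = (fixed[pos - 1] - c0) % P
    return fixed

data = [10, 20, 30, 40]
stored = checks(data)                        # check symbols written alongside
corrupted = [10, 20, 35, 40]                 # third symbol damaged
print(correct(corrupted, stored))
```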
Scheme for precise correction of orbit variation caused by dipole error field of insertion device
Nakatani, T.; Agui, A.; Aoyagi, H.; Matsushita, T.; Takao, M.; Takeuchi, M.; Yoshigoe, A.; Tanaka, H.
2005-05-15
We developed a scheme for precisely correcting the orbit variation caused by a dipole error field of an insertion device (ID) in a storage ring and investigated its performance. The key point for achieving the precise correction is to extract the variation of the beam orbit caused by the change of the ID error field from the observed variation. We periodically change parameters such as the gap and phase of the specified ID with a mirror-symmetric pattern over the measurement period to modulate the variation. The orbit variation is measured using conventional wide-frequency-band detectors and then the induced variation is extracted precisely through averaging and filtering procedures. Furthermore, the mirror-symmetric pattern enables us to independently extract the orbit variations caused by a static error field and by a dynamic one, e.g., an error field induced by the dynamical change of the ID gap or phase parameter. We built a time synchronization measurement system with a sampling rate of 100 Hz and applied the scheme to the correction of the orbit variation caused by the error field of an APPLE-2-type undulator installed in the SPring-8 storage ring. The result shows that the developed scheme markedly improves the correction performance and suppresses the orbit variation caused by the ID error field down to the order of submicron. This scheme is applicable not only to the correction of the orbit variation caused by a special ID, the gap or phase of which is periodically changed during an experiment, but also to the correction of the orbit variation caused by a conventional ID which is used with a fixed gap and phase.
Low delay and area efficient soft error correction in arbitration logic
Sugawara, Yutaka
2013-09-10
There is provided an arbitration logic device for controlling access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor from among the requestors based on the information received from the plurality of requestors, and does so without checking whether there is a soft error in the winner requestor's information.
Parallel pulse processing and data acquisition for high speed, low error flow cytometry
Engh, G.J. van den; Stokdijk, W.
1992-09-22
A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels, each with independent pulse digitization and a FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and a low error rate. 17 figs.
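The event-synchronization scheme above can be sketched in a few lines: a shared trigger tags each digitized pulse with an event ID, and the bus controller verifies that the oldest entry of every channel FIFO carries the same ID before assembling the event. Class and field names here are illustrative, not taken from the patent.

```python
from collections import deque

# Minimal sketch of trigger-synchronized multi-channel acquisition with
# ID-based error detection.

class PulseAcquisition:
    def __init__(self, n_channels):
        self.fifos = [deque() for _ in range(n_channels)]
        self.next_id = 0

    def trigger(self, pulses):
        # one digitized value per channel, all tagged with the same event ID
        for fifo, value in zip(self.fifos, pulses):
            fifo.append((self.next_id, value))
        self.next_id += 1

    def read_event(self):
        entries = [fifo.popleft() for fifo in self.fifos]
        ids = {eid for eid, _ in entries}
        if len(ids) != 1:                 # error detection circuit's check
            raise RuntimeError(f"channel desynchronization, IDs: {ids}")
        return [value for _, value in entries]
```

A dropped or duplicated entry on any one channel shows up as an ID mismatch at readout rather than silently mis-associating pulses across channels.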
V-228: RealPlayer Buffer Overflow and Memory Corruption Error Let Remote Users Execute Arbitrary Code
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
August 27, 2013. PROBLEM: Two vulnerabilities were reported in RealPlayer. PLATFORM: RealPlayer 16.0.2.32 and prior. ABSTRACT: A remote user can cause arbitrary code to be executed on the target user's system.
runtime error message: "apsched: request exceeds max nodes, alloc"
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
September 12, 2014. Symptom: User jobs with single or multiple apruns in a batch script may intermittently get the runtime error "apsched: request exceeds max nodes, alloc". The problem started in April, recurred in mid-July, and has recurred again since late August. Status: This problem has been identified as occurring when the Torque/Moab batch scheduler becomes out of sync with the
Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells
Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.
2014-03-01
This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.
Discretization error estimation and exact solution generation using the method of nearby problems.
Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.
2011-10-01
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
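For contrast with MNP's single-grid requirement, the Richardson-extrapolation estimator mentioned above fits in a few lines: given solutions on two systematically refined grids (refinement ratio r, formal order p), the fine-grid discretization error is estimated from the solution difference. The demonstration below uses the composite trapezoid rule, whose error for a quadratic integrand is exactly O(h²); this is a generic sketch, not the paper's MNP code.

```python
# Richardson-extrapolation discretization error estimate.

def richardson_error_estimate(f_coarse, f_fine, r=2.0, p=2):
    # estimate of (exact - f_fine)
    return (f_fine - f_coarse) / (r ** p - 1)

def trapezoid(f, a, b, n):
    # composite trapezoid rule on n uniform intervals
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)
```

The practical difference the abstract highlights is cost: this estimator needs a second solution on a refined grid, whereas MNP/defect correction reuses the same grid.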
Deciphering the genetic regulatory code using an inverse error control coding framework.
Rintoul, Mark Daniel; May, Elebeoba Eni; Brown, William Michael; Johnston, Anna Marie; Watson, Jean-Paul
2005-03-01
We have found that developing a computational framework for reconstructing error control codes for engineered data, and ultimately for deciphering genetic regulatory coding sequences, is a challenging and uncharted area that will require advances in computational technology for exact solutions. Although exact solutions are desired, computational approaches that yield plausible solutions would be considered sufficient as a proof of concept for the feasibility of reverse engineering error control codes and the possibility of developing a quantitative model for understanding and engineering genetic regulation. Such evidence would help move the idea of reconstructing error control codes for engineered and biological systems from the high-risk, high-payoff realm into the highly probable, high-payoff domain. Additionally, this work will impact biological sensor development and the ability to model, and ultimately develop defense mechanisms against, bioagents that can be engineered to cause catastrophic damage. Understanding how biological organisms are able to communicate their genetic message efficiently in the presence of noise can improve our current communication protocols, a continuing research interest. Towards this end, project goals include: (1) Develop parameter estimation methods for n for block codes and for n, k, and m for convolutional codes, and use these methods to determine error control (EC) code parameters for gene regulatory sequences. (2) Develop an evolutionary-computing computational framework for near-optimal solutions to the algebraic code reconstruction problem; the method will be tested on engineered and biological sequences.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
ADIFOR: Fortran source translation for efficient derivatives
Bischof, C.; Corliss, G.; Griewank, A.; Hovland, P.; Carle, A. (Center for Research on Parallel Computation)
1992-01-01
The numerical methods employed in the solution of many scientific computing problems require the computation of derivatives of a function f: R{sup n} {yields} R{sup m}. Both the accuracy and the computational requirements of the derivative computation are usually of critical importance for the robustness and speed of the numerical method. ADIFOR (Automatic Differentiation In FORtran) is a source translation tool implemented using the data abstractions and program analysis capabilities of the ParaScope Parallel Programming Environment. ADIFOR accepts arbitrary Fortran-77 code defining the computation of a function and writes portable Fortran-77 code for the computation of its derivatives. In contrast to previous approaches, ADIFOR views automatic differentiation as a process of source translation that exploits computational context to reduce the cost of derivative computations. Experimental results show that ADIFOR can handle real-life codes, providing exact derivatives with a running time that is competitive with the standard divided-difference approximations of derivatives and which may perform orders of magnitude faster than divided differences in some cases. The computational scientist using ADIFOR is freed from worrying about the accurate and efficient computation of derivatives, even for complicated functions, and hence is able to concentrate on the more important issues of algorithm design or system modeling. 35 refs.
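ADIFOR itself works by generating Fortran derivative source code; the underlying idea of propagating exact derivatives alongside values, in contrast to divided differences, can be sketched with forward-mode dual numbers. This is a simplified stand-in for illustration, not ADIFOR's source-translation mechanism.

```python
# Forward-mode automatic differentiation with dual numbers: each value
# carries its derivative, and arithmetic propagates both exactly.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    # seed dx/dx = 1 and read off the propagated derivative
    return f(Dual(x, 1.0)).dot
```

Unlike a divided difference, the result carries no truncation error: for f(x) = 3x² + 2x the computed derivative at any x is exactly 6x + 2.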
Growing attraction of refuse-derived fuels
Singh, R.
1981-09-08
A review of Dr. Andrew Porteous' book, Refuse Derived Fuels, is presented. The escalating price of fossil fuel, particularly oil, together with the high cost of handling and transporting refuse, makes the idea of refuse-derived fuel production an attractive and economic proposition. Refuse-derived fuel production is discussed and the various manufacturing processes in the UK and the USA are described, as are the pyrolysis of refuse for the production of gas, oil, or heat and the production of methane and ethyl alcohol, among other possibilities for refuse conversion.
Doerry, Armin W. (Albuquerque, NM); Heard, Freddie E. (Albuquerque, NM); Cordaro, J. Thomas (Albuquerque, NM)
2010-07-20
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
SCM Forcing Data Derived from NWP Analyses
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Jakob, Christian
2008-01-15
Forcing data, suitable for use with single column models (SCMs) and cloud resolving models (CRMs), have been derived from NWP analyses for the ARM (Atmospheric Radiation Measurement) Tropical Western Pacific (TWP) sites of Manus Island and Nauru.
Tax Credit for Forest Derived Biomass
Broader source: Energy.gov [DOE]
Forest-derived biomass includes tree tops, limbs, needles, leaves, and other woody debris leftover from activities such as timber harvesting, forest thinning, fire suppression, or forest health m...
Casimir Energy Associated With Fractional Derivative Field
Lim, S. C.
2007-04-28
The Casimir energy associated with a fractional derivative massless scalar field at zero and positive temperature can be obtained using regularization based on a generalized Riemann zeta function of Epstein-Hurwitz type.
Reduction of the pulse spike-cut error in Fourier-deconvolved lidar profiles
Stoyanov, D.V.; Gurdev, L.L.; Dreischuh, T.N.
1996-08-01
A simple approach is analyzed and applied to the National Oceanic and Atmospheric Administration (NOAA) Doppler lidar data to reduce the error in Fourier-deconvolved lidar profiles that is caused by spike-cut uncertainty in the laser pulse shape, i.e., uncertainty of the type of not well-recorded (cut, missed) pulse spikes. Such a type of uncertainty is intrinsic to the case of TE (TEA) CO{sub 2} laser transmitters. This approach requires only an estimate of the spike area to be known. The result from the analytical estimation of error reduction is in agreement with the results from the NOAA lidar data processing and from computer simulation. © 1996 Optical Society of America.
Birch, Gabriel Carisle; Griffin, John Clark
2015-07-23
Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
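The center-error effect can be illustrated with a synthetic star (hypothetical parameters; this is not the paper's closed-form solution): sample a sinusoidal Siemens star along a ring about an assumed center and recover the modulation amplitude at the spoke frequency. An offset center scrambles the phase around the ring, lowering the recovered amplitude and hence the measured SFR.

```python
import math

# Sinusoidal Siemens star sampled along a ring; the Fourier amplitude at
# the spoke frequency plays the role of the measured modulation.

N_SPOKES = 16

def star(x, y):
    # intensity depends only on angle about the true center at the origin
    return 0.5 + 0.5 * math.sin(N_SPOKES * math.atan2(y, x))

def ring_amplitude(cx, cy, r=50.0, samples=2048):
    # Fourier amplitude of the ring profile at the spoke frequency,
    # for a ring centered on the (possibly wrong) assumed center (cx, cy)
    re = im = 0.0
    for k in range(samples):
        t = 2 * math.pi * k / samples
        v = star(cx + r * math.cos(t), cy + r * math.sin(t))
        re += v * math.cos(N_SPOKES * t)
        im += v * math.sin(N_SPOKES * t)
    return 2 * math.hypot(re, im) / samples
```

With the correct center the recovered amplitude is the full 0.5 modulation; a few-pixel offset already reduces it noticeably, which is the SFR loss the paper quantifies and corrects.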
Effect of Field Errors in Muon Collider IR Magnets on Beam Dynamics
Alexahin, Y.; Gianfelice-Wendt, E.; Kapin, V.V.; /Fermilab
2012-05-01
In order to achieve peak luminosity of a Muon Collider (MC) in the 10{sup 35} cm{sup -2}s{sup -1} range, very small values of the beta-function at the interaction point (IP) are necessary ({beta}* {le} 1 cm), while the distance from the IP to the first quadrupole cannot be made shorter than {approx}6 m, as dictated by the necessity of detector protection from backgrounds. As a result, the beta-function at the final focus quadrupoles can reach 100 km, making beam dynamics very sensitive to all kinds of errors. In the present report we consider the effects on momentum acceptance and dynamic aperture of multipole field errors in the body of IR dipoles, as well as of fringe fields in both dipoles and quadrupoles, in the case of a 1.5 TeV (c.o.m.) MC. Analysis shows these effects to be strong but correctable with dedicated multipole correctors.
Calculation of the Johann error for spherically bent x-ray imaging crystal spectrometers
Wang, E.; Beiersdorfer, P.; Gu, M.; Bitter, M.; Delgado-Aparicio, L.; Hill, K. W.; Reinke, M.; Rice, J. E.; Podpaly, Y.
2010-10-15
New x-ray imaging crystal spectrometers, currently operating on Alcator C-Mod, NSTX, EAST, and KSTAR, record spectral lines of highly charged ions, such as Ar{sup 16+}, from multiple sightlines to obtain profiles of ion temperature and of toroidal plasma rotation velocity from Doppler measurements. In the present work, we describe a new data analysis routine, which accounts for the specific geometry of the sightlines of a curved-crystal spectrometer and includes corrections for the Johann error to facilitate the tomographic inversion. Such corrections are important to distinguish velocity induced Doppler shifts from instrumental line shifts caused by the Johann error. The importance of this correction is demonstrated using data from Alcator C-Mod.
Density-functional errors in ionization potential with increasing system size
Whittleton, Sarah R.; Sosa Vazquez, Xochitl A.; Isborn, Christine M.; Johnson, Erin R.
2015-05-14
This work investigates the effects of molecular size on the accuracy of density-functional ionization potentials for a set of 28 hydrocarbons, including series of alkanes, alkenes, and oligoacenes. As the system size increases, delocalization error introduces a systematic underestimation of the ionization potential, which is rationalized by considering the fractional-charge behavior of the electronic energies. The computation of the ionization potential with many density-functional approximations is not size-extensive due to excessive delocalization of the incipient positive charge. While inclusion of exact exchange reduces the observed errors, system-specific tuning of long-range corrected functionals does not generally improve accuracy. These results emphasize that good performance of a functional for small molecules is not necessarily transferable to larger systems.
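The fractional-charge rationale can be made concrete with toy energies (hypothetical numbers, not the paper's computed data): the exact functional is piecewise linear in electron number between integers, while approximate functionals bow below that line, over-stabilizing delocalized fractional charge.

```python
# Toy fractional-charge model: q is the fraction of an electron removed.

E_N, E_Nm1 = -100.0, -92.0            # total energies of neutral and cation

def e_exact(q):
    # exact functional: linear interpolation between integer-electron energies
    return (1.0 - q) * E_N + q * E_Nm1

def e_dfa(q, curvature=-1.5):
    # approximate functional: spurious convex deviation below the line
    return e_exact(q) + curvature * q * (1.0 - q)

ionization_potential = E_Nm1 - E_N    # = 8.0 for these exact energies
```

In a large system the ionized hole can spread over many units, so the approximate functional effectively samples these over-stabilized fractional-charge states, which is the size-dependent underestimation of the ionization potential described above.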
Numerical errors in the presence of steep topography: analysis and alternatives
Lundquist, K A; Chow, F K; Lundquist, J K
2010-04-15
It is well known in computational fluid dynamics that grid quality affects the accuracy of numerical solutions. When assessing grid quality, properties such as aspect ratio, orthogonality of coordinate surfaces, and cell volume are considered. Mesoscale atmospheric models generally use terrain-following coordinates with large aspect ratios near the surface. As high resolution numerical simulations are increasingly used to study topographically forced flows, a high degree of non-orthogonality is introduced, especially in the vicinity of steep terrain slopes. Numerical errors associated with the use of terrain-following coordinates can adversely affect the accuracy of the solution in steep terrain. Inaccuracies from the coordinate transformation are present in each spatially discretized term of the Navier-Stokes equations, as well as in the conservation equations for scalars. In particular, errors in the computation of horizontal pressure gradients, diffusion, and horizontal advection terms have been noted in the presence of sloping coordinate surfaces and steep topography. In this work we study the effects of these spatial discretization errors on the flow solution for three canonical cases: scalar advection over a mountain, an atmosphere at rest over a hill, and forced advection over a hill. This study is completed using the Weather Research and Forecasting (WRF) model. Simulations with terrain-following coordinates are compared to those using a flat coordinate, where terrain is represented with the immersed boundary method. The immersed boundary method is used as a tool which allows us to eliminate the terrain-following coordinate transformation and quantify numerical errors through a direct comparison of the two solutions. Additionally, the effects of related issues such as the steepness of terrain slope and grid aspect ratio are studied in an effort to gain an understanding of the numerical domains where terrain-following coordinates can successfully be used.
MULTI-MODE ERROR FIELD CORRECTION ON THE DIII-D TOKAMAK
SCOVILLE, JT; LAHAYE, RJ
2002-10-01
Error field optimization on DIII-D tokamak plasma discharges has routinely been done for the last ten years with the use of the external ''n = 1 coil'' or the ''C-coil''. The optimum level of correction coil current is determined by the ability to avoid the locked mode instability and access previously unstable parameter space at low densities. The locked mode typically has toroidal and poloidal mode numbers n = 1 and m = 2, respectively, and it is this component that initially determined the correction coil current and phase. Realization of the importance of the nearby n = 1 mode components m = 1 and m = 3 has led to a revision of the error field correction algorithm. Viscous and toroidal mode coupling effects suggested the need for additional terms in the expression for the radial ''penetration'' field B{sub pen} that can induce a locked mode. To incorporate these effects, the low density locked mode threshold database was expanded. A database of discharges at various toroidal fields, plasma currents, and safety factors was supplemented with data from an experiment in which the fields of the n = 1 coil and C-coil were combined, allowing the poloidal mode spectrum of the error field to be varied. A multivariate regression analysis of this new low density locked mode database was done to determine the low density locked mode threshold scaling relationship n{sub e} {proportional_to} B{sub T}{sup -0.01} q{sub 95}{sup -0.79} B{sub pen} and the coefficients of the poloidal mode components in the expression for B{sub pen}. Improved plasma performance is achieved by optimizing B{sub pen} by varying the applied correction coil currents.
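The fitting idea behind the scaling law can be sketched simply: a power-law dependence such as n{sub e} {proportional_to} q{sub 95}{sup a} becomes a straight line in log-log space, so each exponent is an ordinary least-squares slope. One variable is shown below with synthetic data; the study fit B{sub T}, q{sub 95}, and B{sub pen} simultaneously by multivariate regression.

```python
import math

# Log-log least-squares fit of a power-law exponent: y = C * x**a.

def fit_exponent(x, y):
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    return num / sum((a - mx) ** 2 for a in lx)
```

Fitting noiseless synthetic data generated with exponent -0.79 recovers that exponent exactly, which is the single-variable analogue of the regression quoted above.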
Voisin, Sophie; Tourassi, Georgia D.; Pinto, Frank; Morin-Ducote, Garnetta; Hudson, Kathleen B.
2013-10-15
Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four Radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists' gaze behavior and image content.
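The evaluation metric quoted above can be computed directly from predictions: the area under the ROC curve equals, by the Mann-Whitney formulation, the probability that a randomly chosen error case is scored higher than a randomly chosen non-error case. The scores and labels below are synthetic, not the study's data.

```python
# AUC via the Mann-Whitney U statistic: fraction of positive/negative
# pairs in which the positive case receives the higher score (ties
# counted as half).

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means the merged-feature model ranks every error case above every non-error case; 0.5 is chance, which is the baseline against which the reported 0.667-0.837 values should be read.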
Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report
Weekly Natural Gas Storage Report (EIA)
Release date: March 1, 2016. The U.S. Energy Information Administration (EIA) collects and publishes natural gas storage information on a monthly and weekly basis. The Form EIA-191, Monthly Underground Natural Gas Storage Report, is a census survey that collects field-level information from all underground natural gas storage operators in the United States known to EIA.
Techniques for reducing error in the calorimetric measurement of low wattage items
Sedlacek, W.A.; Hildner, S.S.; Camp, K.L.; Cremers, T.L.
1993-08-01
The increased need for the measurement of low wattage items with production calorimeters has required the development of techniques to maximize the precision and accuracy of the calorimeter measurements. An error model for calorimetry measurements is presented. This model is used as a basis for optimizing calorimetry measurements through baseline interpolation. The method was applied to the heat measurement of over 100 items and the results compared to chemistry assay and mass spectroscopy.
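One element of such an error model can be sketched concretely: baseline interpolation. This is a minimal illustration with hypothetical numbers and a simple linear drift model, not the report's full error model: bracket the item measurement with empty-chamber baseline readings and subtract the baseline interpolated to the measurement time, removing slow drift that would otherwise bias a low-wattage reading.

```python
# Baseline interpolation for a calorimeter measurement: subtract the
# linearly interpolated background heat reading at the measurement time.

def interpolated_baseline(t, t0, b0, t1, b1):
    # linear interpolation between baseline readings taken at times t0 and t1
    return b0 + (b1 - b0) * (t - t0) / (t1 - t0)

def item_power(raw, t, t0, b0, t1, b1):
    # net item wattage after removing the drifting baseline
    return raw - interpolated_baseline(t, t0, b0, t1, b1)
```

For a milliwatt-scale item, a baseline drift of a few milliwatts over a run is comparable to the signal itself, which is why interpolating the baseline rather than using a single pre-measurement reading matters most at low wattage.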
Mitigating the Effect of Latency Errors Between Remote HIL Systems
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
National Renewable Energy Laboratory. Several research institutions are pursuing virtually connected, large-scale energy systems integration testbeds through the use of remote hardware-in-the-loop (HIL) techniques. This is driven by the ability to share laboratory resources that are physically separated (often over large geographical distances)