Note: This page contains sample records for the topic "methodology sampling error" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


1

APPROPRIATE ZOOPLANKTON SAMPLING METHODOLOGY FOR OTEC SITES  

E-Print Network [OSTI]

Thermal Energy Conversion (OTEC) sites. Participants ... Zooplankton Sampling for OTEC Sites: Methodology, edited by ... who gave an overview of the OTEC program and the policies ...

Commins, M.L.

2014-01-01T23:59:59.000Z

2

E-Print Network 3.0 - analysis hra methodology Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

in Healthcare Systems. Monifa Vaughn-Cooke. Summary: ... Assessment (HRA) methodology provides health care professionals with an integrated three-stage process... error risk mitigation...

3

E-Print Network 3.0 - association dataset methodological Sample...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Sample search results for: association dataset methodological ... IOWA STATE...

4

Sample covariance based estimation of Capon algorithm error probabilities  

E-Print Network [OSTI]

The method of interval estimation (MIE) provides a strategy for mean squared error (MSE) prediction of algorithm performance at low signal-to-noise ratios (SNR) below estimation threshold where asymptotic predictions fail. ...

Richmond, Christ D.

5

Sample size in factor analysis: The role of model error  

E-Print Network [OSTI]

This article examines effects of sample size and other design features on correspondence between factors obtained from analysis of sample data and those present in the population from which the samples were drawn. We extend ...

MacCallum, R. C.; Widaman, K. F.; Preacher, K. J.; Hong, Sehee

2001-01-01T23:59:59.000Z

6

Methodology to quantify leaks in aerosol sampling system components  

E-Print Network [OSTI]

and that approach was used to measure the sealing integrity of a CAM and two kinds of filter holders. The methodology involves use of sulfur hexafluoride as a tracer gas with the device being tested operated under dynamic flow conditions. The leak rates...

Vijayaraghavan, Vishnu Karthik

2004-11-15T23:59:59.000Z

7

Quantifying Errors Associated with Satellite Sampling of Offshore Wind Speeds  

E-Print Network [OSTI]

Quantifying Errors Associated with Satellite Sampling of Offshore Wind Speeds. S.C. Pryor (1,2), R... Bloomington, IN 47405, USA. Tel: 1-812-855-5155. Fax: 1-812-855-1661. Email: spryor@indiana.edu. (2) Dept. of Wind ... an attractive proposition for measuring wind speeds over the oceans because in principle they also offer ...

8

Development of methodology to correct sampling error associated with FRM PM10 samplers  

E-Print Network [OSTI]

Currently, a lack of accurate emission data exists for particulate matter (PM) in agricultural air quality studies (USDA-AAQTF, 2000). PM samplers, however, tend to overestimate the concentration of most agricultural dusts because of the interaction...

Chen, Jing

2009-05-15T23:59:59.000Z

9

The U-tube sampling methodology and real-time analysis of geofluids  

SciTech Connect (OSTI)

The U-tube geochemical sampling methodology, an extension of the porous cup technique proposed by Wood [1973], provides minimally contaminated aliquots of multiphase fluids from deep reservoirs and allows for accurate determination of dissolved gas composition. The initial deployment of the U-tube during the Frio Brine Pilot CO₂ storage experiment, Liberty County, Texas, obtained representative samples of brine and supercritical CO₂ from a depth of 1.5 km. A quadrupole mass spectrometer provided real-time analysis of dissolved gas composition. Since the initial demonstration, the U-tube has been deployed for (1) sampling of fluids downgradient of the proposed Yucca Mountain High-Level Waste Repository, Amargosa Valley, Nevada; (2) acquiring fluid samples beneath permafrost in Nunavut Territory, Canada; and (3) a CO₂ storage demonstration project within a depleted gas reservoir, Otway Basin, Victoria, Australia. The addition of in-line high-pressure pH and EC sensors allows for continuous monitoring of fluid during sample collection. Difficulties have arisen during U-tube sampling, such as blockage of sample lines by naturally occurring waxes or by freezing conditions; however, workarounds such as solvent flushing or heating have been used to address these problems. The U-tube methodology has proven to be robust and, with careful consideration of the constraints and limitations, can provide high-quality geochemical samples.

Freifeld, Barry; Perkins, Ernie; Underschultz, James; Boreham, Chris

2009-03-01T23:59:59.000Z

10

Estimation of the error for small-sample optimal binary filter design using prior knowledge  

E-Print Network [OSTI]

Optimal binary filters estimate an unobserved ideal quantity from observed quantities. Optimality is with respect to some error criterion, which is usually the mean absolute error (MAE), or equivalently the mean square error, for the binary values. Both...
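For orientation, a minimal sketch of MAE-optimal binary filter design by thresholding conditional probabilities estimated from training pairs; the window size, data, and function names are hypothetical, and this is not the thesis's prior-knowledge method.

```python
# Sketch: empirical MAE-optimal binary filter design.
# For each observed binary pattern x, the MAE-optimal output is 1 iff
# the estimated conditional probability P(Y = 1 | X = x) exceeds 0.5.
from collections import defaultdict

def design_optimal_filter(samples):
    """samples: iterable of (pattern, ideal_value) pairs,
    where pattern is a tuple of 0/1 values and ideal_value is 0 or 1."""
    ones = defaultdict(int)
    total = defaultdict(int)
    for pattern, ideal in samples:
        total[pattern] += 1
        ones[pattern] += ideal
    # Threshold the conditional mean at 0.5; unseen patterns default to 0.
    return {p: int(ones[p] / total[p] > 0.5) for p in total}

# Toy usage: a 3-pixel window where the ideal value tracks the majority vote.
training = [((0, 0, 1), 0), ((0, 1, 1), 1), ((0, 1, 1), 1),
            ((1, 1, 0), 1), ((0, 0, 1), 0), ((1, 0, 0), 0)]
filt = design_optimal_filter(training)
print(filt.get((0, 1, 1), 0))  # -> 1
```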

Sabbagh, David L

1999-01-01T23:59:59.000Z

11

E-Print Network 3.0 - achieved classification error Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

classifier design, and error estimation, which together form a microarray classification pipeline... error rate, reflects how well the classification rule can approximate the...

12

E-Print Network 3.0 - analytic error estimates Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

To: Supersite Principal Investigators, Data Managers and Research... From: Dr. Paul Solomon, ORD; Dennis Mikel, OAQPS; Mike Jones, OAQPS. Summary: ... of random error (precision)...

13

E-Print Network 3.0 - analysis methodology based Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

of Idaho Collection: Mathematics 17 Identification of Successful Practices in Hydraulic Fracturing Using Summary: , Ameri & Wolhart 9 METHODOLOGY Step two: Fuzzy Combinatorial...

14

E-Print Network 3.0 - activation analysis methodology Sample...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Engineering, West Virginia University. Collection: Fossil Fuels. A Survey of Service Oriented Development Methodologies. Summary: projects. However, a survey on these...

15

E-Print Network 3.0 - attach packaging methodologies Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

2004 -Conference Packaging Data Products using Data Grid Middleware for Deep Space Summary: . (3) No clear methodology or standard exists to describe a process for data product...

16

E-Print Network 3.0 - analysis methodology development Sample...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Approach Through the Use of an Exemplar Summary: - University of Toronto - Canada yu@fis.utoronto.ca Abstract. Systems development methodologies continue... this exemplar to...

17

E-Print Network 3.0 - assessment methodology revision Sample...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Approach Through the Use of an Exemplar Luiz... - University of Toronto - Canada yu@fis.utoronto.ca Abstract. Systems development methodologies ... Source: Cysneiros, Luiz...

18

E-Print Network 3.0 - automated zoning methodology Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Sciences 63 Deformation mechanisms and hydraulic properties of fault zones in unconsolidated sediments; Summary: of the Feldbiss Fault Zone (Fig. 1). Methodology Image...

19

E-Print Network 3.0 - advances methodological issues Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Sample search results for: advances methodological issues ... Architecture. Summary: Architecture chapter lecture, J.P. Shen; Issuing Rate: N...

20

E-Print Network 3.0 - automated computerized methodology Sample...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

studies. Besides video and audio, it is Summary: methodology for a deep study of troubleshooting activities performed in a computerized environment... the weaknesses or...

21

E-Print Network 3.0 - amplifier design methodology Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Sample search results for: amplifier design methodology ... IEEE Journal of Solid-State Circuits, Vol. 38, No. 3, March 2003, p. 511: Active-Feedback Frequency-Compensation...

22

DEVELOPMENT OF METHODOLOGY AND FIELD DEPLOYABLE SAMPLING TOOLS FOR SPENT NUCLEAR FUEL INTERROGATION IN LIQUID STORAGE  

SciTech Connect (OSTI)

This project developed methodology and field-deployable tools (test kits) to analyze the chemical and microbiological condition of the fuel storage medium and determine the oxide thickness on the spent fuel basin materials. The overall objective of this project was to determine the amount of time fuel has spent in a storage basin, and thereby whether the operation of the reactor and storage basin is consistent with safeguards declarations or expectations. This project developed and validated forensic tools that can be used to predict the age and condition of spent nuclear fuels stored in liquid basins based on key physical, chemical and microbiological basin characteristics. Key parameters were identified based on a literature review, the parameters were used to design test cells for corrosion analyses, tools were purchased to analyze the key parameters, and these were used to characterize an active spent fuel basin, the Savannah River Site (SRS) L-Area basin. The key parameters identified in the literature review included chloride concentration, conductivity, and total organic carbon level. Focus was also placed on aluminum-based cladding because of its application to weapons production. The literature review was helpful in identifying important parameters, but relationships between these parameters and corrosion rates were not available. Bench-scale test systems were designed, operated, harvested, and analyzed to determine relationships between corrosion rates and water conditions, chemistry, and microbiological conditions. The data from the bench-scale system indicated that corrosion rates were dependent on total organic carbon levels and chloride concentrations. The highest corrosion rates were observed in test cells amended with sediment, a large microbial inoculum and an organic carbon source. A complete characterization test kit was field tested to characterize the SRS L-Area spent fuel basin. The sampling kit consisted of a TOC analyzer, a YSI multiprobe, and a thickness probe. The tools were field tested to determine their ease of use and reliability and the quality of data that each tool could provide. Characterization was done over a two-day period in June 2011, and confirmed that the L-Area basin is a well-operated facility with low corrosion potential.

Berry, T.; Milliken, C.; Martinez-Rodriguez, M.; Hathcock, D.; Heitkamp, M.

2012-06-04T23:59:59.000Z

23

Dynamic Planning and Control Methodology: understanding and managing iterative error and change cycles in large-scale concurrent design and construction projects  

E-Print Network [OSTI]

Construction projects are uncertain and complex in nature. One of the major driving forces that may account for these characteristics is iterative cycles caused by errors and changes. Errors and changes worsen project ...

Lee, Sang Hyun, 1973-

2006-01-01T23:59:59.000Z

24

Analysis of Statistical Sampling in Microarchitecture Simulation: Metric, Methodology and Program Characterization  

E-Print Network [OSTI]

..., is a promising technique for estimating the performance of the benchmark program without executing the complete ... of these three parameters and their interactions on the accuracy of the performance estimate and simulation cost ... of samples measured as cost parameters. Finally, we characterize 21 SPEC CPU2000 benchmarks based on our ...

Minnesota, University of

25

Supporting Methods Sampling error  

E-Print Network [OSTI]

of 0.79, 0.07, 0.07, 0.07. The chart below gives a sense of the magnitude of these values. ... each of the selections are shown in order from highest affinity for GTP to lowest. The top line in each ...

Heller, Eric

26

A comparison of sample preparation methodology in the evaluation of geosynthetic clay liner (GCL) hydraulic conductivity  

SciTech Connect (OSTI)

The method of preparing a single needle-punched GCL product for evaluation of hydraulic conductivity in a flexible wall permeameter was examined. The test protocol utilized for this evaluation was GRI Test Method GCL-2, Permeability of GCLs. The GCL product consisted of bentonite clay material supported by a woven and a non-woven geotextile on either side. The method of preparation focused on the procedure for separating the test specimen from the larger sample and whether these methods produced difficulty in generating reliable test data. The methods examined included cutting with a razor knife, scissors, and a circular die, with the perimeter of the test area under wet and dry conditions. In order to generate as much data as possible, tests were kept brief. Flow was monitored only long enough to determine whether or not preferential flow paths appeared to be present. The results appear to indicate that any of the methods involved will work. Difficulties arose not from the development of preferential flow paths around the edges of the specimens, but from the loss of bentonite from the edges during handling.

Siebken, J.R. [National Seal Co., Galesburg, IL (United States)]; Lucas, S. [Albarrie Naue Ltd., Barrie, Ontario (Canada)]

1997-11-01T23:59:59.000Z

27

Mapping Transmission Risk of Lassa Fever in West Africa: The Importance of Quality Control, Sampling Bias, and Error Weighting  

E-Print Network [OSTI]

–78. 21. Panning M, Emmerich P, Olschlager S, Bojenko S, Koivogui L, et al. (2010) Laboratory diagnosis of Lassa fever, Liberia. Emerg Infect Dis 16: 1041–1043. 22. Fielding AH, Bell JF (1997) A review of methods for the assessment of prediction errors...

Peterson, A. Townsend; Moses, Lina M.; Bausch, Daniel G.

2014-08-08T23:59:59.000Z

28

Error detection method  

DOE Patents [OSTI]

An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
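A minimal sketch of the run-and-compare idea summarized above, assuming a hypothetical stress workload and a reference output from a trusted run; this is not the patented implementation.

```python
# Sketch: detect a hardware fault by running a deterministic, compute-heavy
# workload and comparing its output against a known-good reference output.
import hashlib

def stress_workload(n_iters: int) -> str:
    """Deterministic, CPU-intensive loop intended to heat the processor
    (a hypothetical stand-in for the patent's stress algorithm)."""
    h = hashlib.sha256(b"seed")
    for _ in range(n_iters):
        h = hashlib.sha256(h.digest())
    return h.hexdigest()

def detect_hardware_error(n_iters: int, reference: str) -> bool:
    """True if this run's output differs from the reference, i.e. a hardware
    error manifested somewhere during the run."""
    return stress_workload(n_iters) != reference

if __name__ == "__main__":
    ref = stress_workload(100_000)               # golden output from a trusted machine
    print(detect_hardware_error(100_000, ref))   # False unless an error occurred
```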

Olson, Eric J.

2013-06-11T23:59:59.000Z

29

Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples  

SciTech Connect (OSTI)

Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and statisticians used carefully thought out designs that systematically and economically provided plans for data collection from the DWPF process. Key shared features of the sampling designs used at DWPF and the Gy sampling methodology were the specification of a standard for sample representativeness, an investigation that produced data from the process to study the sampling function, and a decision framework used to assess whether the specification was met based on the data. Without going into detail with regard to the seven errors identified by Pierre Gy, as excellent summaries are readily available such as Pitard [1989] and Smith [2001], SRS engineers understood, for example, that samplers can be biased (Gy's extraction error), and developed plans to mitigate those biases. Experiments that compared installed samplers with more representative samples obtained directly from the tank may not have resulted in systematically partitioning sampling errors into the now well-known error categories of Gy, but did provide overall information on the suitability of sampling systems. Most of the designs in this report are related to the DWPF vessels, not the large SRS Tank Farm tanks. Samples from the DWPF Slurry Mix Evaporator (SME), which contains the feed to the DWPF melter, are characterized using standardized analytical methods with known uncertainty. The analytical error is combined with the established error from sampling and processing in DWPF to determine the melter feed composition. 
This composition is used with the known uncertainty of the models in the Product Composition Control System (PCCS) to ensure that the wasteform that is produced is comfortably within the acceptable processing and product performance region. Having the advantage of many years of processing that meets the waste glass product acceptance criteria, the DWPF process has provided a considerable amount of data about itself in addition to the data from many special studies. Demonstrating representative sampling directly from the large Tank Farm tanks is a difficult, if not unsolvable enterprise due to li

Shine, E. P.; Poirier, M. R.

2013-10-29T23:59:59.000Z

30

Human error contribution to nuclear materials-handling events  

E-Print Network [OSTI]

This thesis analyzes a sample of 15 fuel-handling events from the past ten years at commercial nuclear reactors with significant human error contributions in order to detail the contribution of human error to fuel-handling ...

Sutton, Bradley (Bradley Jordan)

2007-01-01T23:59:59.000Z

31

Distributed Adaptive Sampling Using Bounded-Errors  

E-Print Network [OSTI]

. An example of such networks is fleets of underwater vehicles with embedded sensors, micro... methods able to handle rumor. I. INTRODUCTION: Over the last few years, sensor networks have been pro... Contrary to fixed sensors, this kind of network does not rely on preliminary computation for sensor...

Paris-Sud XI, Université de

32

Integrated fiducial sample mount and software for correlated microscopy  

SciTech Connect (OSTI)

A novel design sample mount with integrated fiducials and software for assisting operators in easily and efficiently locating points of interest established in previous analytical sessions is described. The sample holder and software were evaluated with experiments to demonstrate the utility and ease of finding the same points of interest in two different microscopy instruments. Also, numerical analysis of expected errors in determining the same position with errors unbiased by a human operator was performed. Based on the results, issues related to acquiring reproducibility and best practices for using the sample mount and software were identified. Overall, the sample mount methodology allows data to be efficiently and easily collected on different instruments for the same sample location.

Timothy R McJunkin; Jill R. Scott; Tammy L. Trowbridge; Karen E. Wright

2014-02-01T23:59:59.000Z

33

Bounds for Small-Error and Zero-Error Quantum Algorithms Harry Buhrman  

E-Print Network [OSTI]

Bounds for Small-Error and Zero-Error Quantum Algorithms. Harry Buhrman (CWI), Richard Cleve (University...) ... algorithm with an auxiliary input r, which is uniformly distributed over some underlying sample space. In this case, for any x ∈ {0,1}ⁿ, f(x) = 1 iff (∃r ∈ S)(A(x, r) = 1). Grover's quantum search algorithm [15...

de Wolf, Ronald

34

Analyzing sampling methodologies in semiconductor manufacturing  

E-Print Network [OSTI]

This thesis describes work completed during an internship assignment at Intel Corporation's process development and wafer fabrication manufacturing facility in Santa Clara, California. At the highest level, this work relates ...

Anthony, Richard M. (Richard Morgan), 1971-

2004-01-01T23:59:59.000Z

35

Methodology for characterizing modeling and discretization uncertainties in computational simulation  

SciTech Connect (OSTI)

This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

2000-03-01T23:59:59.000Z

36

Quantum error control codes  

E-Print Network [OSTI]

QUANTUM ERROR CONTROL CODES. A Dissertation by SALAH ABDELHAMID AWAD ALY AHMED. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY, May 2008. Major Subject: Computer Science.

Abdelhamid Awad Aly Ahmed, Salah

2008-10-10T23:59:59.000Z

37

Dynamic Prediction of Concurrency Errors  

E-Print Network [OSTI]

Contents excerpt: ... Relation; Must-Before Race Prediction; Implementation. Abstract: Dynamic Prediction of Concurrency Errors by ... SANTA CRUZ ... DYNAMIC PREDICTION OF CONCURRENCY ERRORS ...

Sadowski, Caitlin

2012-01-01T23:59:59.000Z

38

EIA - Sorry! Unexpected Error  

Annual Energy Outlook 2013 [U.S. Energy Information Administration (EIA)]


39

Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates  

SciTech Connect (OSTI)

We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^(-1) and the gradient of error is proportional to N_h^(-1/2), which are optimal asymptotics. The methodology is verified with numerical experiments.

Lipnikov, Konstantin [Los Alamos National Laboratory]; Agouzal, Abdellatif [UNIV DE LYON]; Vassilevski, Yuri [Los Alamos National Laboratory]

2009-01-01T23:59:59.000Z

40

Uncertainty and error in computational simulations  

SciTech Connect (OSTI)

The present paper addresses the question: "What are the general classes of uncertainty and error sources in complex, computational simulations?" This is the first step of a two-step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built upon combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general, but examples are given for the fields of computational fluid dynamics and heat transfer. They argue that a clear distinction should be made between uncertainty and error that can arise in each of these phases. The present definitions for uncertainty and error are inadequate and, therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, they discuss a coupled-physics example simulation.

Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.

1997-10-01T23:59:59.000Z

41

Modular error embedding  

DOE Patents [OSTI]

A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
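To make the general idea concrete, here is a toy sketch of a keyed permutation of sample order combined with low-order-bit replacement. It is ordinary bit replacement for illustration only, not the patented modular error embedding, and the key handling is hypothetical.

```python
# Toy sketch: embed auxiliary bits into the low-order bits of host samples,
# visiting the samples in a key-dependent permuted order. Plain bit
# replacement for illustration only; NOT the patented modular embedding.
import random

def embed(host, aux_bits, key):
    order = list(range(len(host)))
    random.Random(key).shuffle(order)          # keyed permutation of sample order
    out = list(host)
    for idx, bit in zip(order, aux_bits):
        out[idx] = (out[idx] & ~1) | bit       # replace the low-order bit
    return out

def extract(stego, n_bits, key):
    order = list(range(len(stego)))
    random.Random(key).shuffle(order)          # same key -> same visiting order
    return [stego[idx] & 1 for idx in order[:n_bits]]

host = [200, 13, 57, 142, 99, 7]               # e.g. 8-bit pixel values
bits = [1, 0, 1]
stego = embed(host, bits, key=42)
assert extract(stego, 3, key=42) == bits
```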

Sandford, II, Maxwell T. (Los Alamos, NM); Handel, Theodore G. (Los Alamos, NM); Ettinger, J. Mark (Los Alamos, NM)

1999-01-01T23:59:59.000Z

42

Validation of error estimators and superconvergence by a computer-based approach  

E-Print Network [OSTI]

Contents excerpt: ... Estimators for Patchwise Uniform Meshes: 5.1 The methodology for checking the estimators; 5.2 Numerical study of the estimators; 5.3 Major results. Chapter VI, Study of the Error Estimators for General Meshes: 6.1 Definition of the robustness index; 6.2 The computational methodology for general grids; 6.3 Numerical studies of robustness of various error estimators; 6.4 Major results. Chapter VII, Study of Superconvergence ...

Upadhyay, Chandra Shekhar

2012-06-07T23:59:59.000Z

43

Approaches to Quantum Error Correction  

E-Print Network [OSTI]

The purpose of this little survey is to give a simple description of the main approaches to quantum error correction and quantum fault-tolerance. Our goal is to convey the necessary intuitions both for the problems and their solutions in this area. After characterising quantum errors we present several error-correction schemes and outline the elements of a full fledged fault-tolerant computation, which works error-free even though all of its components can be faulty. We also mention alternative approaches to error-correction, so called error-avoiding or decoherence-free schemes. Technical details and generalisations are kept to a minimum.

Julia Kempe

2006-12-21T23:59:59.000Z

44

Quasi-sparse eigenvector diagonalization and stochastic error correction  

E-Print Network [OSTI]

We briefly review the diagonalization of quantum Hamiltonians using the quasi-sparse eigenvector (QSE) method. We also introduce the technique of stochastic error correction, which systematically removes the truncation error of the QSE result by stochastically sampling the contribution of the remaining basis states.

Dean Lee

2000-08-30T23:59:59.000Z

45

Unequal Error Protection Turbo Codes  

E-Print Network [OSTI]

Unequal Error Protection Turbo Codes. Diploma Thesis, Neele von Deetzen, Arbeitsbereich Nachrichtentechnik (Communications Engineering), School of Engineering and Science, Bremen, February 28th, 2005. ... Convolutional Codes / Turbo Codes; 3.1 Structure ...

Henkel, Werner

46

Errors of Nonobservation  

Gasoline and Diesel Fuel Update (EIA)


47

Statistical Error in Particle Simulations of Low Mach Number Flows  

E-Print Network [OSTI]

We present predictions for the statistical error due to finite sampling in the presence of thermal fluctuations in molecular simulation algorithms. Expressions for the fluid velocity, density and temperature are derived ...
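As a reminder of the finite-sampling behavior the abstract refers to, the statistical error of a sampled mean decreases as one over the square root of the number of independent samples; the sketch below demonstrates the scaling with synthetic fluctuations (illustrative numbers, not the paper's expressions).

```python
# Sketch: statistical error of a sampled mean due to thermal fluctuations
# scales as sigma / sqrt(N). Synthetic numbers, not the paper's derivation.
import numpy as np

rng = np.random.default_rng(0)
sigma_thermal = 340.0      # per-sample velocity fluctuation (illustrative, m/s)
true_mean = 1.0            # small mean flow velocity (low Mach number regime)

for n_samples in (10**2, 10**3, 10**4):
    runs = rng.normal(true_mean, sigma_thermal, size=(500, n_samples)).mean(axis=1)
    print(n_samples, runs.std(), sigma_thermal / np.sqrt(n_samples))
# The empirical spread of the estimated mean tracks sigma / sqrt(N).
```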

Hadjiconstantinou, Nicolas G.

48

Software Function Allocation Methodology  

E-Print Network [OSTI]

Contents excerpt: 2.1.2 The Jackson Methodology; 2.1.3 Higher Order Software; 2.1.4 Structured Analysis and Design Technique; 2.1.5 Software Requirements Engineering Methodology; 2.1.6 Software Development System; 2.2 Relation to Current Research; 3. SFAM Background; 3.1 SFAM Preconditions; 3.2 SFAM Concepts; 3.3 SFAM Environment; 3.3.1 SFAM Decision Types; 3.3.2 SFAM Goal; 3.3.3 SFAM Environment Summary; 3.4 SFAM Outline; 3.4.1 SFAM Step...

O'Neal, Michael Ralph

1988-01-01T23:59:59.000Z

49

Nested Quantum Error Correction Codes  

E-Print Network [OSTI]

The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available in constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching several short length quantum codes with certain properties. Our method works for all length and all distance codes, and is quite efficient to construct optimal or near optimal codes. Two main known methods in constructing new codes from old codes in quantum error-correction theory, the concatenating and pasting, can be understood in the framework of nested quantum error correction codes.

Zhuo Wang; Kai Sun; Heng Fan; Vlatko Vedral

2009-09-28T23:59:59.000Z

50

A Scalable Soft Spot Analysis Methodology for Compound Noise Effects in Nano-meter Circuits  

E-Print Network [OSTI]

A Scalable Soft Spot Analysis Methodology for Compound Noise Effects in Nano-meter Circuits. Chong ... @ece.ucsd.edu. Abstract: Circuits using nano-meter technologies are becoming increasingly vulnerable to signal interference ... methodology to study the vulnerability of digital ICs exposed to nano-meter noise and transient soft errors

California at San Diego, University of

51

Finding beam focus errors automatically  

SciTech Connect (OSTI)

An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The procedure for finding the correction factors with COMFORT-PLUS has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors. (LEW)

Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

1987-01-01T23:59:59.000Z

52

Data& Error Analysis 1 DATA and ERROR ANALYSIS  

E-Print Network [OSTI]

Data & Error Analysis ... DATA and ERROR ANALYSIS. Performing the experiment and collecting data ... learned, you might get a better grade.) Data analysis should NOT be delayed until all of the data ... This will help one avoid the problem of spending an entire class collecting bad data because of a mistake ...

Mukasyan, Alexander

53

Experimental Uncertainties (Errors) Sources of Experimental Uncertainties (Experimental Errors)  

E-Print Network [OSTI]

the preparation of the lab report. A calculator should ... 1. Bevington, P. R., Data Reduction and Error Analysis for the Physical Sciences, New York: McGraw-Hill, 1969. 2. Taylor, J. R., An Introduction to ... uncertainty analysis in the lab. In this laboratory, we keep to a very simple form of error analysis, our purpose being more ...

Mukasyan, Alexander

54

Pressure Change Measurement Leak Testing Errors  

SciTech Connect (OSTI)

A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper intends to discuss some of the more common errors made during the application of a pressure change test and give the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monoatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rate with altered formulas specific to those types of tests using the same methodology.
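For orientation only, a minimal ideal-gas sketch of the bookkeeping such a test involves: the leaked quantity follows from the change in n = PV/(RT), so uncompensated temperature drift shows up as apparent leakage. The volume, pressures, and temperatures below are hypothetical.

```python
# Sketch: temperature-compensated leak rate from a pressure change test,
# using the ideal gas law n = P*V/(R*T). Illustrative values only.
R = 8.314462618  # J/(mol*K)

def moles(p_pa: float, v_m3: float, t_k: float) -> float:
    return p_pa * v_m3 / (R * t_k)

def leak_rate(p1, t1, p2, t2, volume_m3, elapsed_s):
    """Average leak rate in mol/s over the hold period. Using the measured
    temperatures avoids reporting thermal expansion or contraction of the
    gas as a leak."""
    return (moles(p1, volume_m3, t1) - moles(p2, volume_m3, t2)) / elapsed_s

# Hypothetical test: 0.5 m^3 vessel held for 4 hours.
rate = leak_rate(p1=500_000.0, t1=295.0, p2=496_500.0, t2=293.5,
                 volume_m3=0.5, elapsed_s=4 * 3600)
print(f"{rate:.3e} mol/s")
```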

Pryor, Jeff M [ORNL]; Walker, William C [ORNL]

2014-01-01T23:59:59.000Z

55

BASF's Energy Survey Methodology  

E-Print Network [OSTI]

BASF's Energy Survey Methodology. Thomas R. Theising. BASF Corporation operates several dozen manufacturing Sites within NAFTA and periodically conducts Energy Surveys at each Site. Although these manufacturing sites represent a variety ... and cost breakdowns by utility types are identified to further analyze trends. Consideration is given to the review of the various energy supply contracts for alternative options that may exist. The consumption history is used to create a distribution ...

Theising, T. R.

2005-01-01T23:59:59.000Z

56

Electronic Survey Methodology Page 1 Electronic Survey Methodology  

E-Print Network [OSTI]

Electronic Survey Methodology: A Case Study in Reaching Hard... Maryland. preece@umbc.edu. 2002 © Andrews, Nonnecke and Preece. Conducting Research on the Internet: Electronic Survey Design, Development and Implementation Guidelines

Nonnecke, Blair

57

Cogeneration Assessment Methodology for Utilities  

E-Print Network [OSTI]

A methodology is presented that enables electric utilities to assess the cogeneration potential among industrial, commercial, and institutional customers within the utility's service area. The methodology includes a survey design, analytic...

Sedlik, B.

1983-01-01T23:59:59.000Z

58

Static Detection of Disassembly Errors  

SciTech Connect (OSTI)

Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.
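A minimal sketch of the kind of decision-tree classification described above, using scikit-learn; the per-instruction features and labels are invented for illustration and are not the paper's feature set.

```python
# Sketch: flag likely disassembly errors with a decision tree.
# Features and labels are made up for illustration; the paper's feature
# set and training data are not reproduced here.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-instruction features:
# [opcode_frequency_score, falls_in_padding_region, jump_target_mismatch]
X_train = [
    [0.92, 0, 0], [0.88, 0, 0], [0.95, 0, 0],   # likely correct disassembly
    [0.05, 1, 1], [0.10, 1, 0], [0.02, 0, 1],   # likely disassembly errors
]
y_train = [0, 0, 0, 1, 1, 1]                    # 1 = possible error

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(clf.predict([[0.07, 1, 1], [0.90, 0, 0]]))  # -> [1 0]
```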

Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K.

2009-10-13T23:59:59.000Z

59

Unequal error protection of subband coded bits  

E-Print Network [OSTI]

Source-coded data can be separated into different classes based on their susceptibility to channel errors. Errors in the important bits cause greater distortion in the reconstructed signal. This thesis presents an Unequal Error Protection scheme...

Devalla, Badarinath

2012-06-07T23:59:59.000Z

60

Two-Layer Error Control Codes Combining Rectangular and Hamming Product Codes for Cache Error  

E-Print Network [OSTI]

We propose a novel two-layer error control code, combining error detection capability of rectangular codes and error correction capability of Hamming product codes in an efficient way, in order to increase cache error ...

Zhang, Meilin

61

Error 404 - Document not found  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


62

Distributed Error Confinement Extended Abstract  

E-Print Network [OSTI]

. These algorithms can serve as building blocks in more general reactive systems. Previous results in exploring locality in reactive systems were not error confined, and relied on the assump- tion (not used in current, that seems inherent for voting in reactive networks; its analysis leads to an interesting combinatorial

Patt-Shamir, Boaz

63

Model Validation and Testing: The Methodological Foundation of ASHRAE Standard 140; Preprint  

SciTech Connect (OSTI)

Ideally, whole-building energy simulation programs model all aspects of a building that influence energy use and thermal and visual comfort for the occupants. An essential component of the development of such computer simulation models is a rigorous program of validation and testing. This paper describes a methodology to evaluate the accuracy of whole-building energy simulation programs. The methodology is also used to identify and diagnose differences in simulation predictions that may be caused by algorithmic differences, modeling limitations, coding errors, or input errors. The methodology has been adopted by ANSI/ASHRAE Standard 140 (ANSI/ASHRAE 2001, 2004), Method of Test for the Evaluation of Building Energy Analysis Computer Programs. A summary of the method is included in the ASHRAE Handbook of Fundamentals (ASHRAE 2005). This paper describes the ANSI/ASHRAE Standard 140 method of test and its methodological basis. Also discussed are possible future enhancements to Standard 140 and related research recommendations.

Judkoff, R.; Neymark, J.

2006-07-01T23:59:59.000Z

64

A Method for Treating Discretization Error in Nondeterministic Analysis  

SciTech Connect (OSTI)

A response surface methodology-based technique is presented for treating discretization error in non-deterministic analysis. The response surface, or metamodel, is estimated from computer experiments which vary both uncertain physical parameters and the fidelity of the computational mesh. The resultant metamodel is then used to propagate the variabilities in the continuous input parameters, while the mesh size is taken to zero, its asymptotic limit. With respect to mesh size, the metamodel is equivalent to Richardson extrapolation, in which solutions on coarser and finer meshes are used to estimate discretization error. The method is demonstrated on a one dimensional prismatic bar, in which uncertainty in the third vibration frequency is estimated by propagating variations in material modulus, density, and bar length. The results demonstrate the efficiency of the method for combining non-deterministic analysis with error estimation to obtain estimates of total simulation uncertainty. The results also show the relative sensitivity of failure estimates to solution bias errors in a reliability analysis, particularly when the physical variability of the system is low.
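To make the mesh-size limit concrete, Richardson extrapolation estimates the zero-mesh-size solution and the discretization error from solutions on two mesh levels; the sketch below assumes a known convergence order p and uses illustrative values.

```python
# Sketch: Richardson extrapolation from solutions on two mesh sizes.
# u(h) ~ u0 + C*h^p  =>  u0 ~ u_fine + (u_fine - u_coarse) / (r**p - 1),
# where r = h_coarse / h_fine is the refinement ratio and p the order.
def richardson(u_coarse: float, u_fine: float, r: float, p: float):
    u0 = u_fine + (u_fine - u_coarse) / (r**p - 1.0)
    discretization_error_fine = u0 - u_fine
    return u0, discretization_error_fine

# Illustrative values: a vibration frequency computed on two meshes.
u0, err = richardson(u_coarse=152.8, u_fine=150.9, r=2.0, p=2.0)
print(u0, err)   # extrapolated frequency and estimated error on the fine mesh
```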

Alvin, K.F.

1999-01-27T23:59:59.000Z

65

Methodology for computational fluid dynamics code verification/validation  

SciTech Connect (OSTI)

The issues of verification, calibration, and validation of computational fluid dynamics (CFD) codes have been receiving increasing levels of attention in the research literature and in engineering technology. Both CFD researchers and users of CFD codes are asking more critical and detailed questions concerning the accuracy, range of applicability, reliability and robustness of CFD codes and their predictions. This is a welcome trend because it demonstrates that CFD is maturing from a research tool to the world of impacting engineering hardware and system design. In this environment, the broad issue of code quality assurance becomes paramount. However, the philosophy and methodology of building confidence in CFD code predictions have proven to be more difficult than many expected. A wide variety of physical modeling errors and discretization errors are discussed. Here, discretization errors refer to all errors caused by conversion of the original partial differential equations to algebraic equations, and their solution. Boundary conditions for both the partial differential equations and the discretized equations will be discussed. Contrasts are drawn between the assumptions and actual use of numerical method consistency and stability. Comments are also made concerning the existence and uniqueness of solutions for both the partial differential equations and the discrete equations. Various techniques are suggested for the detection and estimation of errors caused by physical modeling and discretization of the partial differential equations.

Oberkampf, W.L.; Blottner, F.G.; Aeschliman, D.P.

1995-07-01T23:59:59.000Z

66

Methodology to Analyze the Sensitivity of Building Energy Consumption to HVAC System Sensor Error  

E-Print Network [OSTI]

parameters. There are a total of eight scenarios considered in this simulation. The simulation tool was developed based on Excel. The control parameters examined include room temperature, cold deck temperature, hot deck temperature, pump pressure, and fan...

Ma, Liang

2012-02-14T23:59:59.000Z

67

Phase Errors and the Capture Effect  

SciTech Connect (OSTI)

This slide-show presents analysis of spectrograms and the phase error of filtered noise in a signal. When the filtered noise is smaller than the signal amplitude, the phase error can never exceed 90°, so the average phase error over many cycles is zero: this is called the capture effect because the largest signal captures the phase and frequency determination.

Blair, J., and Machorro, E.

2011-11-01T23:59:59.000Z

68

Demonstration Integrated Knowledge-Based System for Estimating Human Error Probabilities  

SciTech Connect (OSTI)

Human Reliability Analysis (HRA) currently comprises at least 40 different methods that are used to analyze, predict, and evaluate human performance in probabilistic terms. Systematic HRAs allow analysts to examine human-machine relationships, identify error-likely situations, and provide estimates of relative frequencies for human errors on critical tasks, highlighting the most beneficial areas for system improvements. Unfortunately, each of HRA's methods has a different philosophical approach, thereby producing estimates of human error probabilities (HEPs) that are a better or worse match to the error-likely situation of interest. Poor selection of methodology or improper application of techniques can produce invalid HEP estimates, and such erroneous estimation of potential human failure could have severe consequences in terms of the estimated occurrence of injury, death, and/or property damage.

Auflick, Jack L.

1999-04-21T23:59:59.000Z

69

Experimental methodology for computational fluid dynamics code validation  

SciTech Connect (OSTI)

Validation of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. Typically, CFD code validation is accomplished through comparison of computed results to previously published experimental data that were obtained for some other purpose, unrelated to code validation. As a result, it is a near certainty that not all of the information required by the code, particularly the boundary conditions, will be available. The common approach is therefore unsatisfactory, and a different method is required. This paper describes a methodology developed specifically for experimental validation of CFD codes. The methodology requires teamwork and cooperation between code developers and experimentalists throughout the validation process, and takes advantage of certain synergisms between CFD and experiment. The methodology employs a novel uncertainty analysis technique which helps to define the experimental plan for code validation wind tunnel experiments, and to distinguish between and quantify various types of experimental error. The methodology is demonstrated with an example of surface pressure measurements over a model of varying geometrical complexity in laminar, hypersonic, near perfect gas, 3-dimensional flow.

Aeschliman, D.P.; Oberkampf, W.L.

1997-09-01T23:59:59.000Z

70

Approximate error conjugate gradient minimization methods  

DOE Patents [OSTI]

In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
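A rough sketch of the subsetting idea in the abstract: evaluate the error and its gradient on a random subset of rays at each conjugate-gradient iteration. The least-squares objective and synthetic data below are stand-ins, not the patented method.

```python
# Sketch: conjugate-gradient-style minimization of a least-squares tomography
# objective ||A x - b||^2, where each row of A is one "ray". The error and
# gradient are evaluated on a random subset of rays per iteration.
# Synthetic A and b; not the patented implementation.
import numpy as np

rng = np.random.default_rng(1)
n_rays, n_vox = 400, 50
A = rng.normal(size=(n_rays, n_vox))
x_true = rng.normal(size=n_vox)
b = A @ x_true

x = np.zeros(n_vox)
d = None
g_prev = None
for it in range(200):
    rows = rng.choice(n_rays, size=80, replace=False)   # subset of rays
    As, bs = A[rows], b[rows]
    g = 2.0 * As.T @ (As @ x - bs)                       # approximate gradient
    if d is None:
        d = -g
    else:
        beta = (g @ g) / (g_prev @ g_prev)               # Fletcher-Reeves update
        d = -g + beta * d
    Ad = As @ d
    alpha = -(g @ d) / (2.0 * Ad @ Ad + 1e-12)           # exact step on the subset
    x = x + alpha * d
    g_prev = g

print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))     # full-error check
```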

Kallman, Jeffrey S

2013-05-21T23:59:59.000Z

71

CHEMICAL LABORATORY SAFETY AND METHODOLOGY  

E-Print Network [OSTI]

CHEMICAL LABORATORY SAFETY AND METHODOLOGY MANUAL, August 2013. Emergency Numbers, UNBC Prince George Campus: Chemstores 6472, Chemical Safety 6472, Radiation Safety 6472, Biological ... the safe use, storage, handling, waste and emergency management of chemicals on the University of Northern ...

Northern British Columbia, University of

72

E-Print Network 3.0 - accelerated failure time Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Sample search results for: accelerated failure time ... A Reliability Evaluation Methodology for Memory Chips for Space Applications when Sample Size is...

73

Estimating IMU heading error from SAR images.  

SciTech Connect (OSTI)

Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.

Doerry, Armin Walter

2009-03-01T23:59:59.000Z

74

Flux recovery and a posteriori error estimators  

E-Print Network [OSTI]

bility and the local efficiency bounds for this estimator are established provided that the ... For simple model problems, the energy norm of the true error is equal.

2010-05-20T23:59:59.000Z

75

Verification of unfold error estimates in the unfold operator code  

SciTech Connect (OSTI)

Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
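The Monte Carlo check described above amounts to perturbing the data with Gaussian deviates, re-running the unfold, and comparing the spread of results with the built-in estimate. A generic sketch with a placeholder unfold routine (not the UFO code) follows:

```python
# Sketch: Monte Carlo check of an unfold code's built-in error estimate.
# `unfold` is a placeholder for the actual algorithm (e.g. the UFO code);
# here it is a simple ridge-regularized least-squares stand-in.
import numpy as np

rng = np.random.default_rng(3)

def unfold(data, response):
    lam = 1e-3
    R = response
    return np.linalg.solve(R.T @ R + lam * np.eye(R.shape[1]), R.T @ data)

n_chan, n_bins = 12, 8
response = rng.uniform(0.0, 1.0, size=(n_chan, n_bins))
true_spectrum = rng.uniform(1.0, 5.0, size=n_bins)
data0 = response @ true_spectrum

trials = np.empty((100, n_bins))
for i in range(100):
    noisy = data0 * (1.0 + 0.05 * rng.standard_normal(n_chan))  # 5% imprecision
    trials[i] = unfold(noisy, response)

mc_sigma = trials.std(axis=0)   # Monte Carlo unfold uncertainty per bin
print(mc_sigma)                 # compare against the code's error-matrix estimate
```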

Fehl, D.L.; Biggs, F. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)]

1997-01-01T23:59:59.000Z

76

NUREG-1150 risk assessment methodology  

SciTech Connect (OSTI)

This paper describes the methodology developed in support of the US Nuclear Regulatory Commission's (NRC's) evaluation of severe accident risks in NUREG-1150. After the accident at Three Mile Island, Unit 2, the NRC initiated a severe accident research program to develop an improved understanding of severe accidents and to provide a second technical basis to support regulatory decisions in this area. A key product of this program is NUREG-1150, which provides estimates of risk for several nuclear reactors of different design. The principal technical analyses for NUREG-1150 were performed at Sandia National Labs. under the Severe Accident Risk Reduction Program and the Accident Sequence Evaluation Program. A major aspect of the work was the development of a methodology that improved upon previous full-scale probabilistic risk assessments (PRA) in several areas which are described.

Benjamin, A.S.; Amos, C.N.; Cunningham, M.A.; Murphy, J.A.

1987-01-01T23:59:59.000Z

77

ISE System Development Methodology Manual  

SciTech Connect (OSTI)

The Information Systems Engineering (ISE) System Development Methodology Manual (SDM) is a framework of life cycle management guidelines that provide ISE personnel with direction, organization, consistency, and improved communication when developing and maintaining systems. These guidelines were designed to allow ISE to build and deliver Total Quality products, and to meet the goals and requirements of the US Department of Energy (DOE), Westinghouse Savannah River Company, and Westinghouse Electric Corporation.

Hayhoe, G.F.

1992-02-17T23:59:59.000Z

78

Implementation impacts of PRL methodology  

SciTech Connect (OSTI)

This report responds to a DOE-SR request to evaluate the impacts from implementation of the proposed Plutonium Recovery Limit (PRL) methodology. The PRL Methodology is based on cost minimization for decisions to discard or recover plutonium contained in scrap, residues, and other plutonium-bearing materials. Implementation of the PRL methodology may result in decisions to declare as waste certain plutonium-bearing materials originally considered to be a recoverable plutonium product. Such decisions may have regulatory impacts, because any material declared to be waste would immediately be subject to provisions of the Resource Conservation and Recovery Act (RCRA). The decision to discard these materials will have impacts on waste storage, treatment, and disposal facilities. Current plans for the de-inventory of plutonium processing facilities have identified certain materials as candidates for discard based upon the economic considerations associated with extending the operating schedules for recovery of the contained plutonium versus potential waste disposal costs. This report evaluates the impacts of discarding those materials as proposed by the F Area De-Inventory Plan and compares the De-Inventory Plan assessments with conclusions from application of the PRL. The impact analysis was performed for those materials proposed as potential candidates for discard by the De-Inventory Plan. The De-Inventory Plan identified 433 items, containing approximately 1% of the current SRS Pu-239 inventory, as not appropriate for recovery as the site moves to complete the mission of F-Canyon and FB-Line. The materials were entered into storage awaiting recovery as product under the Department's previous Economic Discard Limit (EDL) methodology, which valued plutonium at its incremental cost of production in reactors. An application of Departmental PRLs to the subject 433 items revealed that approximately 40% of them would continue to be potentially recoverable as product plutonium.

Caudill, J.A.; Krupa, J.F.; Meadors, R.E.; Odum, J.V.; Rodrigues, G.C.

1993-02-01T23:59:59.000Z

79

Energy Efficiency Indicators Methodology Booklet  

SciTech Connect (OSTI)

This Methodology Booklet provides a comprehensive review and methodology guiding principles for constructing energy efficiency indicators, with illustrative examples of application to individual countries. It reviews work done by international agencies and national governments in constructing meaningful energy efficiency indicators that help policy makers to assess changes in energy efficiency over time. Building on past OECD experience and best practices, and the knowledge of these countries' institutions, relevant sources of information to construct an energy indicator database are identified. A framework based on levels of hierarchy of indicators -- spanning from aggregate, macro level to disaggregated end-use level metrics -- is presented to help shape the understanding of assessing energy efficiency. In each sector of activity: industry, commercial, residential, agriculture and transport, indicators are presented and recommendations to distinguish the different factors affecting energy use are highlighted. The methodology booklet specifically addresses issues that are relevant to developing indicators where activity is a major factor driving energy demand. A companion spreadsheet tool is available upon request.

Sathaye, Jayant; Price, Lynn; McNeil, Michael; de la rue du Can, Stephane

2010-05-01T23:59:59.000Z

80

Wind Power Forecasting Error Distributions over Multiple Timescales (Presentation)  

SciTech Connect (OSTI)

This presentation presents some statistical analysis of wind power forecast errors and error distributions, with examples using ERCOT data.
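
As a rough illustration of the kind of summary such an analysis produces (not the authors' actual ERCOT workup), the Python sketch below characterizes a forecast-error distribution from paired forecast/actual series; the synthetic data and the particular summary statistics are assumptions chosen for illustration.

    import numpy as np

    def error_distribution_stats(forecast, actual):
        """Summarize the distribution of wind power forecast errors
        (forecast minus actual, normalized upstream if desired)."""
        err = np.asarray(forecast) - np.asarray(actual)
        return {
            "mean": err.mean(),
            "std": err.std(ddof=1),
            "p5": np.percentile(err, 5),
            "p95": np.percentile(err, 95),
            # excess kurtosis: forecast errors are often heavier-tailed than normal
            "excess_kurtosis": ((err - err.mean()) ** 4).mean() / err.var() ** 2 - 3.0,
        }

    # Synthetic stand-in for paired forecast/actual capacity factors.
    rng = np.random.default_rng(0)
    actual = rng.uniform(0.0, 1.0, 1000)
    forecast = np.clip(actual + rng.normal(0.0, 0.08, 1000), 0.0, 1.0)
    print(error_distribution_stats(forecast, actual))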

Hodge, B. M.; Milligan, M.

2011-07-01T23:59:59.000Z



81

Remarks on statistical errors in equivalent widths  

E-Print Network [OSTI]

Equivalent width measurements for rapid line variability in atomic spectral lines are degraded by increasing error bars with shorter exposure times. We derive an expression for the error of the line equivalent width $\sigma(W_\lambda)$ with respect to pure photon noise statistics and provide a correction value for previous calculations.

Klaus Vollmann; Thomas Eversberg

2006-07-03T23:59:59.000Z

82

Quantum Error Correction Beyond Completely Positive Maps  

E-Print Network [OSTI]

By introducing an operator sum representation for arbitrary linear maps, we develop a generalized theory of quantum error correction (QEC) that applies to any linear map, in particular maps that are not completely positive (CP). This theory of "linear quantum error correction" is applicable in cases where the standard and restrictive assumption of a factorized initial system-bath state does not apply.

A. Shabani; D. A. Lidar

2009-10-21T23:59:59.000Z

83

Meeting 12 February 25, 1999 Error Measure  

E-Print Network [OSTI]

... The value of λ is the corresponding eigenvalue. The eigenvalues are the roots of the characteristic ... distances is non-negative, so Q is positive semi-definite. The error of an edge contraction is obtained ... paraboloid as illustrated in Figure 3. In other words, the preimage of a constant error value ε, E^{-1} ...

California at Berkeley, University of

84

WRAP Module 1 sampling and analysis plan  

SciTech Connect (OSTI)

This document provides the methodology to sample, screen, and analyze waste generated, processed, or otherwise the responsibility of the Waste Receiving and Processing Module 1 facility. This includes Low-Level Waste, Transuranic Waste, Mixed Waste, and Dangerous Waste.

Mayancsik, B.A.

1995-03-24T23:59:59.000Z

85

Prediction Error and Event Boundaries  

E-Print Network [OSTI]

A computational model of event segmentation from perceptual prediction. Jeremy R. Reynolds, Jeffrey M. Zacks, and Todd S. Braver, Washington University. ... People tend

Zacks, Jeffrey M.

86

Scalable extraction of error models from the output of error detection circuits  

E-Print Network [OSTI]

Accurate methods of assessing the performance of quantum gates are extremely important. Quantum process tomography and randomized benchmarking are the current favored methods. Quantum process tomography gives detailed information, but significant approximations must be made to reduce this information to a form quantum error correction simulations can use. Randomized benchmarking typically outputs just a single number, the fidelity, giving no information on the structure of errors during the gate. Neither method is optimized to assess gate performance within an error detection circuit, where gates will be actually used in a large-scale quantum computer. Specifically, the important issues of error composition and error propagation lie outside the scope of both methods. We present a fast, simple, and scalable method of obtaining exactly the information required to perform effective quantum error correction from the output of continuously running error detection circuits, enabling accurate prediction of large-scale behavior.

Austin G. Fowler; D. Sank; J. Kelly; R. Barends; John M. Martinis

2014-05-06T23:59:59.000Z

87

Quantum Error Correction for Quantum Memories  

E-Print Network [OSTI]

Active quantum error correction using qubit stabilizer codes has emerged as a promising, but experimentally challenging, engineering program for building a universal quantum computer. In this review we consider the formalism of qubit stabilizer and subsystem stabilizer codes and their possible use in protecting quantum information in a quantum memory. We review the theory of fault-tolerance and quantum error-correction, discuss examples of various codes and code constructions, the general quantum error correction conditions, the noise threshold, the special role played by Clifford gates and the route towards fault-tolerant universal quantum computation. The second part of the review is focused on providing an overview of quantum error correction using two-dimensional (topological) codes, in particular the surface code architecture. We discuss the complexity of decoding and the notion of passive or self-correcting quantum memories. The review does not focus on a particular technology but discusses topics that will be relevant for various quantum technologies.

Barbara M. Terhal

2015-01-20T23:59:59.000Z

88

Facemail : preventing common errors when composing email  

E-Print Network [OSTI]

Facemail is a system designed to investigate and prevent common errors that users make while composing emails. Users often accidentally send email to incorrect recipients by mistyping an email address, accidentally clicking ...

Lieberman, Eric (Eric W.)

2006-01-01T23:59:59.000Z

89

Organizational Errors: Directions for Future Research  

E-Print Network [OSTI]

The goal of this chapter is to promote research about organizational errors—i.e., the actions of multiple organizational participants that deviate from organizationally specified rules and can potentially result in adverse ...

Carroll, John Stephen

90

Quantum error-correcting codes and devices  

DOE Patents [OSTI]

A method of forming quantum error-correcting codes by first forming a stabilizer for a Hilbert space. A quantum information processing device can be formed to implement such quantum codes.

Gottesman, Daniel (Los Alamos, NM)

2000-10-03T23:59:59.000Z

91

Simulation Enabled Safeguards Assessment Methodology  

SciTech Connect (OSTI)

It is expected that nuclear energy will be a significant component of future supplies. New facilities, operating under a strengthened international nonproliferation regime will be needed. There is good reason to believe virtual engineering applied to the facility design, as well as to the safeguards system design will reduce total project cost and improve efficiency in the design cycle. Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag and drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.

Robert Bean; Trond Bjornard; Thomas Larson

2007-09-01T23:59:59.000Z

92

Estimating the error in simulation prediction over the design space  

SciTech Connect (OSTI)

This study addresses the assessment of accuracy of simulation predictions. A procedure is developed to validate a simple non-linear model defined to capture the hardening behavior of a foam material subjected to a short-duration transient impact. Validation means that the predictive accuracy of the model must be established, not just in the vicinity of a single testing condition, but for all settings or configurations of the system. The notion of validation domain is introduced to designate the design region where the model's predictive accuracy is appropriate for the application of interest. Techniques brought to bear to assess the model's predictive accuracy include test-analysis correlation, calibration, bootstrapping and sampling for uncertainty propagation and metamodeling. The model's predictive accuracy is established by training a metamodel of prediction error. The prediction error is not assumed to be systematic. Instead, it depends on which configuration of the system is analyzed. Finally, the prediction error's confidence bounds are estimated by propagating the uncertainty associated with specific modeling assumptions.
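
As a hedged sketch of the central idea (a metamodel of prediction error trained over the design space, with bootstrapped confidence bounds), the Python fragment below uses a quadratic polynomial as the metamodel and resamples hypothetical test points; the study's foam model, metamodel form, and sampling scheme are not reproduced here.

    import numpy as np

    # Hypothetical illustration: a quadratic polynomial metamodel of prediction
    # error (|test - simulation|) over one design variable, with bootstrap
    # resampling of the test points to put confidence bounds on the predicted
    # error at new settings.
    rng = np.random.default_rng(1)
    x = rng.uniform(0.0, 1.0, 30)                # design settings that were tested
    err = np.abs(0.05 + 0.4 * (x - 0.7) ** 2 + rng.normal(0.0, 0.02, x.size))

    def fit_metamodel(xs, errs, deg=2):
        return np.polynomial.polynomial.polyfit(xs, errs, deg)

    def predict(coeffs, xq):
        return np.polynomial.polynomial.polyval(xq, coeffs)

    xq = np.linspace(0.0, 1.0, 5)                # new configurations of interest
    boot = np.empty((200, xq.size))
    for b in range(200):
        idx = rng.integers(0, x.size, x.size)    # resample the validation points
        boot[b] = predict(fit_metamodel(x[idx], err[idx]), xq)

    lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
    for xi, l, h in zip(xq, lo, hi):
        print(f"x = {xi:.2f}: predicted error in [{l:.3f}, {h:.3f}]")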

Shinn, R. (Rachel); Hemez, F. M. (François M.); Doebling, S. W. (Scott W.)

2003-01-01T23:59:59.000Z

93

E-Print Network 3.0 - ageing Sample Search Results  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Sample search results for: ageing. 1. Errors in fish ageing may result in biases in stock assessments and ... Summary: Errors in fish ageing...

94

E-Print Network 3.0 - age Sample Search Results  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Sample search results for: age. 1. Errors in fish ageing may result in biases in stock assessments and ... Summary: Errors in fish ageing...

95

E-Print Network 3.0 - aging Sample Search Results  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Sample search results for: aging. 1. Errors in fish ageing may result in biases in stock assessments and ... Summary: Errors in fish ageing...

96

E-Print Network 3.0 - aged Sample Search Results  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Sample search results for: aged. 1. Errors in fish ageing may result in biases in stock assessments and ... Summary: Errors in fish ageing...

97

Evaluating operating system vulnerability to memory errors.  

SciTech Connect (OSTI)

Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.

Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Mueller, Frank (North Carolina State University); Fiala, David (North Carolina State University); Brightwell, Ronald Brian

2012-05-01T23:59:59.000Z

98

The Error-Pattern-Correcting Turbo Equalizer  

E-Print Network [OSTI]

The error-pattern correcting code (EPCC) is incorporated in the design of a turbo equalizer (TE) with aim to correct dominant error events of the inter-symbol interference (ISI) channel at the output of its matching Viterbi detector. By targeting the low Hamming-weight interleaved errors of the outer convolutional code, which are responsible for low Euclidean-weight errors in the Viterbi trellis, the turbo equalizer with an error-pattern correcting code (TE-EPCC) exhibits a much lower bit-error rate (BER) floor compared to the conventional non-precoded TE, especially for high rate applications. A maximum-likelihood upper bound is developed on the BER floor of the TE-EPCC for a generalized two-tap ISI channel, in order to study TE-EPCC's signal-to-noise ratio (SNR) gain for various channel conditions and design parameters. In addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is compared to demonstrate the present TE's superiority for short interleaver lengths and high coding rates.

Alhussien, Hakim

2010-01-01T23:59:59.000Z

99

Methodology for Augmenting Existing Paths with Additional Parallel Transects  

SciTech Connect (OSTI)

Visual Sample Plan (VSP) is sample planning software that is used, among other purposes, to plan transect sampling paths to detect areas that were potentially used for munition training. This module was developed for application on a large site where existing roads and trails were to be used as primary sampling paths. Gap areas between these primary paths needed to be found and covered with parallel transect paths. These gap areas represent areas on the site that are more than a specified distance from a primary path. These added parallel paths needed to optionally be connected together into a single path—the shortest path possible. The paths also needed to optionally be attached to existing primary paths, again with the shortest possible path. Finally, the process must be repeatable and predictable so that the same inputs (primary paths, specified distance, and path options) will result in the same set of new paths every time. This methodology was developed to meet those specifications.
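
The gap-finding step can be illustrated with a short sketch. This is not the VSP implementation; it only shows the distance-threshold idea on a gridded site, with a hypothetical road standing in for the primary paths, and it measures distance to path vertices rather than to path segments.

    import numpy as np

    def gap_cells(primary_xy, grid_xy, max_dist):
        """Return grid points farther than max_dist from every vertex of the
        primary paths (distance to vertices approximates distance to the path
        when the vertices are densely spaced)."""
        d = np.linalg.norm(grid_xy[:, None, :] - primary_xy[None, :, :], axis=2)
        return grid_xy[d.min(axis=1) > max_dist]

    # Hypothetical site: one road running diagonally across a 1 km square.
    road = np.column_stack([np.linspace(0, 1000, 200), np.linspace(0, 1000, 200)])
    gx, gy = np.meshgrid(np.arange(0, 1000, 50), np.arange(0, 1000, 50))
    grid = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
    gaps = gap_cells(road, grid, max_dist=200.0)
    print(f"{len(gaps)} of {len(grid)} grid cells lie in gap areas")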

Wilson, John E.

2013-09-30T23:59:59.000Z

100

Methodology for Validating Building Energy Analysis Simulations  

SciTech Connect (OSTI)

The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

Judkoff, R.; Wortman, D.; O'Doherty, B.; Burch, J.

2008-04-01T23:59:59.000Z



101

Electric Utility Demand-Side Evaluation Methodologies  

E-Print Network [OSTI]

, in the case of electric utilities, society and the ratepayer. Commission Substantive Rules Sec. 23.22 stops short of specifying an evaluation methodology or requiring a benefit-cost analysis for each conservation program, but it does require that util... of view using a standard benefit-cost methodology. The methodology now in use by several electric utilities and the Public Utility Commission of Texas includes measures of efficiency and equity. The nonparticipant test as a measure of equity...

Treadway, N.

102

A systems approach to reducing utility billing errors  

E-Print Network [OSTI]

Many methods for analyzing the possibility of errors are practiced by organizations who are concerned about safety and error prevention. However, in situations where the error occurrence is random and difficult to track, ...

Ogura, Nori

2013-01-01T23:59:59.000Z

103

Error Detection and Recovery for Robot Motion Planning with Uncertainty  

E-Print Network [OSTI]

Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and uncertainty in the geometry of the environment. The last, which is called model error, has ...

Donald, Bruce Randall

1987-07-01T23:59:59.000Z

104

Small sample feature selection  

E-Print Network [OSTI]

I. Introduction. II. Classifier Error Estimation: A. Classical Error Estimation Problem; B. Classical Error Estimation; C. Bolstered Error Estimation (1. Choosing Bolstering Kernel; 2. Choosing the Amount of Bolstering; 3. Gaussian-Bolstered Error Estimation) ...

Sima, Chao

2007-09-17T23:59:59.000Z

105

Running jobs error: "inet_arp_address_lookup"  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

jobs error: "inetarpaddresslookup" Resolved: Running jobs error: "inetarpaddresslookup" September 22, 2013 by Helen He (0 Comments) Symptom: After the Hopper August 14...

106

T-598: Apache Tomcat HTTP BIO Connector Error Discloses Information...  

Broader source: Energy.gov (indexed) [DOE]

T-598: Apache Tomcat HTTP BIO Connector Error Discloses Information From Different Requests to Remote Users.

107

V-235: Cisco Mobility Services Engine Configuration Error Lets...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

V-235: Cisco Mobility Services Engine Configuration Error Lets Remote Users Login Anonymously.

108

EMCAS, an evaluation methodology for safeguards and security systems  

SciTech Connect (OSTI)

EMCAS is an evaluation methodology for safeguards and security systems. It provides a score card of projected or actual system performance for several areas of system operation. In one area, the performance of material control and accounting and security systems, which jointly defend against the insider threat to divert or steal special nuclear material (SNM) using stealth and deceit, is evaluated. Time-dependent and time-independent risk equations are used for both diversion and theft risk calculations. In the case of loss detection by material accounting, a detailed timeliness model is provided to determine the combined effects of loss detection sensitivity and timeliness on the overall effectiveness of the material accounting detection procedure. Calculated risks take into account the capabilities of process area containment/surveillance, material accounting mass balance tests, and physical protection barriers and procedures. In addition, EMCAS evaluates the Material Control and Accounting (MC and A) System in the following areas: (1) system capability to detect errors in the official book inventory of SNM, using mass balance accounting methods, (2) system capability to prevent errors from entering the nuclear material data base during periods of operation between mass balance tests, (3) time to conduct inventories and resolve alarms, and (4) time lost from production to carry out material control and accounting loss detection activities.
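
A minimal sketch of how detection sensitivity and timeliness might be combined is given below. It is not the EMCAS risk equations; the per-test detection probability, test interval, adversary completion time, and the independence assumptions are all hypothetical values chosen for illustration.

    # Minimal sketch (not the EMCAS risk equations): combine the detection
    # probability of periodic mass-balance tests with their timeliness against
    # an assumed adversary completion time.  All numbers are hypothetical.
    def detection_before_completion(p_detect_per_test, test_interval_days, completion_days):
        """P(at least one accounting test detects the loss before the adversary
        finishes), assuming independent tests every test_interval_days."""
        n_tests = int(completion_days // test_interval_days)
        return 1.0 - (1.0 - p_detect_per_test) ** n_tests

    p_accounting = detection_before_completion(0.5, test_interval_days=30, completion_days=180)
    p_surveillance = 0.7   # assumed containment/surveillance detection probability
    p_combined = 1.0 - (1.0 - p_accounting) * (1.0 - p_surveillance)   # independent layers
    print(f"accounting alone: {p_accounting:.2f}; combined with C/S: {p_combined:.2f}")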

Eggers, R.F.; Giese, E.W.; Bichl, F.J.

1987-07-01T23:59:59.000Z

109

EMCAS: An evaluation methodology for safeguards and security systems  

SciTech Connect (OSTI)

EMCAS is an evaluation methodology for safeguards and security systems. It provides a score card of projected or actual system performance for several areas of system operation. In one area, the performance of material control and accounting and security systems, which jointly defend against the insider threat to divert or steal special nuclear material (SNM) using stealth and deceit, is evaluated. Time-dependent and time-independent risk equations are used for both diversion and theft risk calculations. In the case of loss detection by material accounting, a detailed timeliness model is provided to determine the combined effects of loss detection sensitivity and timeliness on the overall effectiveness of the material accounting detection procedure. Calculated risks take into account the capabilities of process area containment/surveillance, material accounting mass balance tests, and physical protection barriers and procedures. In addition, EMCAS evaluates the Material Control and Accounting (MC and A) System in the following areas: (1) system capability to detect errors in the official book inventory of SNM, using mass balance accounting methods, (2) system capability to prevent errors from entering the nuclear material data base during periods of operation between mass balance tests, (3) time to conduct inventories and resolve alarms, and (4) time lost from production to carry out material control and accounting loss detection activities. 3 figs., 5 tabs.

Eggers, R.F.; Giese, E.W.; Bichl, F.J.

1987-01-01T23:59:59.000Z

110

Retiming for Soft Error Minimization Under Error-Latching Window Constraints  

E-Print Network [OSTI]

sensitivity to naturally-occurring radiation and the consequent soft error rates of CMOS circuits. Moreover, soft error, also known as single-event upsets (SEU), caused by radiation-induced charged particles ... circuits [3]: electrical masking occurs when SEUs are attenuated before being latched because

Zhou, Hai

111

Laser Phase Errors in Seeded FELs  

SciTech Connect (OSTI)

Harmonic seeding of free electron lasers has attracted significant attention from the promise of transform-limited pulses in the soft X-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.

Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC

2012-03-28T23:59:59.000Z

112

MEASUREMENT AND CORRECTION OF ULTRASONIC ANEMOMETER ERRORS  

E-Print Network [OSTI]

commonly show systematic errors depending on wind speed due to inaccurate ultrasonic transducer mounting ... three-dimensional wind speed time series. Results for the variance and power spectra are shown. ... wind speeds with ultrasonic anemometers: the measured flow is distorted by the probe head

Heinemann, Detlev

113

Hierarchical Classification of Documents with Error Control  

E-Print Network [OSTI]

Hierarchical Classification of Documents with Error Control Chun-hung Cheng1 , Jian Tang2 , Ada Wai is a function that matches a new object with one of the predefined classes. Document classification is characterized by the large number of attributes involved in the objects (documents). The traditional method

King, Kuo Chin Irwin

114

Hierarchical Classification of Documents with Error Control  

E-Print Network [OSTI]

Hierarchical Classification of Documents with Error Control. Chun-hung Cheng 1, Jian Tang 2, Ada. Classification is a function that matches a new object with one of the predefined classes. Document classification is characterized by the large number of attributes involved in the objects (documents)

Fu, Ada Waichee

115

Adjoint Error Correction for Integral Outputs  

E-Print Network [OSTI]

a combustor; the total heat flux into a high pressure turbine blade from the surrounding flow; average noise. As an example, consider the wake behind a wing. To adequately resolve the wake requires a fine grid locally ... in which the grid resolution is rather coarse. Grid adaptation based on error estimates that look

Pierce, Niles A.

116

Verifying Volume Rendering Using Discretization Error Analysis  

E-Print Network [OSTI]

Verifying Volume Rendering Using Discretization Error Analysis. Tiago Etiene, Daniel Jönsson, Timo ... -- We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization

Kirby, Mike

117

Distribution of Wind Power Forecasting Errors from Operational Systems (Presentation)  

SciTech Connect (OSTI)

This presentation offers new data and statistical analysis of wind power forecasting errors in operational systems.

Hodge, B. M.; Ela, E.; Milligan, M.

2011-10-01T23:59:59.000Z

118

Statistical Analysis of CCD Data: Error Analysis/Noise Theorem  

E-Print Network [OSTI]

Statistical Analysis of CCD Data: Error Analysis/Noise Theorem. Topics: why a statistical approach?; systematic errors; random errors (= statistical errors); accuracy and precision; best estimator (mean, median); distributions; statistical CCD data analysis. Why do we need statistical analysis? (= Why do we need to worry ...

Peletier, Reynier

119

Plasma dynamics and a significant error of macroscopic averaging  

E-Print Network [OSTI]

The methods of macroscopic averaging used to derive the macroscopic Maxwell equations from electron theory are methodologically incorrect and lead in some cases to a substantial error. For instance, these methods do not take into account the existence of a macroscopic electromagnetic field EB, HB generated by carriers of electric charge moving in a thin layer adjacent to the boundary of the physical region containing these carriers. If this boundary is impenetrable for charged particles, then in its immediate vicinity all carriers are accelerated towards the inside of the region. The existence of the privileged direction of acceleration results in the generation of the macroscopic field EB, HB. The contributions to this field from individual accelerated particles are described with a sufficient accuracy by the Lienard-Wiechert formulas. In some cases the intensity of the field EB, HB is significant not only for deuteron plasma prepared for a controlled thermonuclear fusion reaction but also for electron plasma in conductors at room temperatures. The corrected procedures of macroscopic averaging will induce some changes in the present form of plasma dynamics equations. The modified equations will help to design improved systems of plasma confinement.

Marek A. Szalek

2005-05-22T23:59:59.000Z

120

Discussion on common errors in analyzing sea level accelerations, solar trends and global warming  

E-Print Network [OSTI]

Errors in applying regression models and wavelet filters used to analyze geophysical signals are discussed: (1) multidecadal natural oscillations (e.g. the quasi 60-year Atlantic Multidecadal Oscillation (AMO), North Atlantic Oscillation (NAO) and Pacific Decadal Oscillation (PDO)) need to be taken into account for properly quantifying anomalous accelerations in tide gauge records such as in New York City; (2) uncertainties and multicollinearity among climate forcing functions prevent a proper evaluation of the solar contribution to the 20th century global surface temperature warming using overloaded linear regression models during the 1900-2000 period alone; (3) when periodic wavelet filters, which require that a record is pre-processed with a reflection methodology, are improperly applied to decompose non-stationary solar and climatic time series, Gibbs boundary artifacts emerge yielding misleading physical interpretations. By correcting these errors and using optimized regression models that reduce multico...

Scafetta, Nicola

2013-01-01T23:59:59.000Z



121

Adaptive Sampling in Hierarchical Simulation  

SciTech Connect (OSTI)

We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.
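
The following Python sketch shows the adaptive-sampling control flow under stated simplifications: inverse-distance weighting stands in for the moving kriging interpolant, a brute-force nearest-neighbor search stands in for the dynamic metric tree database, and the fine-scale model, acceptance radius, and neighbor count are hypothetical.

    import numpy as np

    class AdaptiveSampler:
        """Answer coarse-scale queries by interpolating cached fine-scale results
        when enough nearby samples exist, otherwise run the fine-scale model and
        cache the new point (a sketch, not the authors' implementation)."""

        def __init__(self, fine_model, radius, k=4):
            self.fine_model = fine_model
            self.radius = radius          # how close cached samples must be
            self.k = k                    # neighbors used by the interpolant
            self.points, self.values = [], []

        def query(self, x):
            x = np.asarray(x, dtype=float)
            if self.points:
                pts = np.array(self.points)
                d = np.linalg.norm(pts - x, axis=1)
                near = np.argsort(d)[: self.k]
                if len(near) == self.k and d[near].max() < self.radius:
                    w = 1.0 / (d[near] + 1e-12)        # inverse-distance weights
                    return float(np.dot(w, np.array(self.values)[near]) / w.sum())
            y = self.fine_model(x)                      # expensive fine-scale call
            self.points.append(x)
            self.values.append(y)
            return y

    # Hypothetical fine-scale response (e.g., a constitutive-law evaluation).
    fine = lambda x: float(np.sin(3 * x[0]) * np.cos(2 * x[1]))
    sampler = AdaptiveSampler(fine, radius=0.15)
    rng = np.random.default_rng(2)
    for q in rng.uniform(0.0, 1.0, size=(500, 2)):
        sampler.query(q)
    print(f"fine-scale evaluations: {len(sampler.points)} out of 500 queries")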

Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R

2007-07-09T23:59:59.000Z

122

Development of a statistically based access delay timeline methodology.  

SciTech Connect (OSTI)

The charter for adversarial delay is to hinder access to critical resources through the use of physical systems increasing an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating times required to complete each task with little regard to uncertainty, complexity, or decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methodologies, taking into account small sample size, expert judgment, human factors and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results in making informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness with lower cost.
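
As an illustration of the end product (a distribution of total delay along a path) rather than the Bayesian elicitation itself, the sketch below Monte Carlo samples a sum of uncertain task times; the lognormal form and the task medians and spreads are assumptions for illustration only.

    import numpy as np

    # Each task time along the path is treated as uncertain (lognormal here,
    # with hypothetical medians and spreads); the path total is sampled by
    # Monte Carlo and summarized as a distribution rather than a single number.
    rng = np.random.default_rng(3)
    tasks = [            # (median seconds, geometric std. dev.) -- illustrative only
        (30.0, 1.4),     # breach outer fence
        (90.0, 1.6),     # defeat door
        (120.0, 1.5),    # penetrate barrier
        (45.0, 1.3),     # acquire target
    ]
    n = 100_000
    total = np.zeros(n)
    for median, gsd in tasks:
        total += rng.lognormal(mean=np.log(median), sigma=np.log(gsd), size=n)

    p10, p50, p90 = np.percentile(total, [10, 50, 90])
    print(f"total delay: 10th {p10:.0f} s, median {p50:.0f} s, 90th {p90:.0f} s")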

Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane; Hendrickson, Stacey M. Langfitt

2013-02-01T23:59:59.000Z

123

Verification of unfold error estimates in the UFO code  

SciTech Connect (OSTI)

Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
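
A minimal sketch of the Monte Carlo error-estimation idea is shown below. It is not the UFO algorithm: a regularized least-squares inversion stands in for the unfold operator, and the response matrix, spectrum shape, and bin counts are invented for illustration; only the procedure (perturb the data with 5% Gaussian deviates, unfold each realization, take the spread) mirrors the description above.

    import numpy as np

    # A regularized least-squares inversion stands in for the real unfold
    # operator; the procedure mirrors the description above: perturb the data
    # with 5% Gaussian deviates, unfold each realization, take the spread.
    rng = np.random.default_rng(4)
    n_chan, n_bins = 12, 8
    energies = np.linspace(1.0, 40.0, n_bins)
    channels = np.linspace(1.0, 40.0, n_chan)
    R = np.exp(-0.5 * ((channels[:, None] - energies) / 6.0) ** 2)   # response matrix
    true_spec = energies ** 2 * np.exp(-energies / 10.0)             # blackbody-like shape
    data0 = R @ true_spec                                            # noise-free data

    def unfold(data):
        return np.linalg.lstsq(R, data, rcond=1e-3)[0]

    trials = np.array([unfold(data0 * (1.0 + rng.normal(0.0, 0.05, n_chan)))
                       for _ in range(100)])
    mc_sigma = trials.std(axis=0)            # Monte Carlo estimate of unfold error
    print("per-bin unfold uncertainty:", mc_sigma.round(2))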

Fehl, D.L.; Biggs, F.

1996-07-01T23:59:59.000Z

124

A generalized optimization methodology for isotope management  

E-Print Network [OSTI]

This research, funded by the Department of Energy's Advanced Fuel Cycle Initiative Fellowship, was focused on developing a new approach to studying the nuclear fuel cycle: instead of using the trial and error approach ...

Massie, Mark (Mark Edward)

2010-01-01T23:59:59.000Z

125

E-Print Network 3.0 - annotation assessment project Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Sample search results for: annotation assessment project. 1. A Methodology towards...

126

NASA Surface meteorology and Solar Energy: Methodology  

E-Print Network [OSTI]

NASA Surface meteorology and Solar Energy: Methodology. ... Renewable Energy Technology (RET) projects. These climatological profiles are used for designing systems that ... for implementing RETs, there are inherent problems in using them for resource assessment. Ground measurement

Firestone, Jeremy

127

Geologic selection methodology for transportation corridor routing  

E-Print Network [OSTI]

A lack of planning techniques and processes for long, linear, cut-and-cover tunneling route transportation systems has resulted from the advancement of transportation systems into underground corridors. The proposed methodology is tested...

Shultz, Karin Wilson

2002-01-01T23:59:59.000Z

128

Analysis Methodology for Industrial Load Profiles  

E-Print Network [OSTI]

ANALYSIS METHODOLOGY FOR INDUSTRIAL LOAD PROFILES. Thomas W. Reddoch, Executive Vice President, Electrotek Concepts, Inc., Knoxville, Tennessee. ABSTRACT: A methodology is provided for evaluating the impact of various demand-side management (DSM) options on industrial customers. The basic approach uses customer metered load profile data as a basis for the customer load shape. DSM technologies are represented as load shapes and are used as a basis for altering the customer's existing...

Reddoch, T. W.

129

Statistical Error analysis of Nucleon-Nucleon phenomenological potentials  

E-Print Network [OSTI]

Nucleon-Nucleon potentials are commonplace in nuclear physics and are determined from a finite number of experimental data with limited precision sampling the scattering process. We study the statistical assumptions implicit in the standard least squares fitting procedure and apply, along with more conventional tests, a tail sensitive quantile-quantile test as a simple and confident tool to verify the normality of residuals. We show that the fulfilment of normality tests is linked to a judicious and consistent selection of a nucleon-nucleon database. These considerations prove crucial to a proper statistical error analysis and uncertainty propagation. We illustrate these issues by analyzing about 8000 proton-proton and neutron-proton scattering published data. This enables the construction of potentials meeting all statistical requirements necessary for statistical uncertainty estimates in nuclear structure calculations.
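
The normality check on residuals can be sketched with conventional tools; the fragment below uses an Anderson-Darling test and ordinary normal probability-plot quantiles as stand-ins for the paper's tail-sensitive quantile-quantile test, with synthetic standardized residuals in place of real scattering data.

    import numpy as np
    from scipy import stats

    # Synthetic standardized residuals, (data - fit) / sigma, stand in for the
    # real scattering database; swap in actual residuals to run the same checks.
    rng = np.random.default_rng(5)
    residuals = rng.normal(0.0, 1.0, 500)

    ad = stats.anderson(residuals, dist="norm")      # tail-weighted normality test
    print("Anderson-Darling statistic:", round(float(ad.statistic), 3))
    print("5% critical value:", float(ad.critical_values[2]))

    # Ordered residuals against theoretical normal quantiles (the QQ pairs).
    osm, osr = stats.probplot(residuals, dist="norm", fit=False)
    print("largest |QQ deviation|:", round(float(np.max(np.abs(osr - osm))), 3))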

R. Navarro Perez; J. E. Amaro; E. Ruiz Arriola

2014-06-10T23:59:59.000Z

130

Review and evaluation of paleohydrologic methodologies  

SciTech Connect (OSTI)

A literature review was conducted to identify methodologies that could be used to interpret paleohydrologic environments. Paleohydrology is the study of past hydrologic systems or of the past behavior of an existing hydrologic system. The purpose of the review was to evaluate how well these methodologies could be applied to the siting of low-level radioactive waste facilities. The computer literature search queried five bibliographical data bases containing over five million citations of technical journals, books, conference papers, and reports. Two data-base searches (United States Geological Survey - USGS) and a manual search were also conducted. The methodologies were examined for data requirements and sensitivity limits. Paleohydrologic interpretations are uncertain because of the effects of time on hydrologic and geologic systems and because of the complexity of fluvial systems. Paleoflow determinations appear in many cases to be order-of-magnitude estimates. However, the methodologies identified in this report mitigate this uncertainty when used collectively as well as independently. That is, the data from individual methodologies can be compared or combined to corroborate hydrologic predictions. In this manner, paleohydrologic methodologies are viable tools to assist in evaluating the likely future hydrology of low-level radioactive waste sites.

Foley, M.G.; Zimmerman, D.A.; Doesburg, J.M.; Thorne, P.D.

1982-12-01T23:59:59.000Z

131

Systematic Errors in measurement of b1  

SciTech Connect (OSTI)

A class of spin observables can be obtained from the relative difference of or asymmetry between cross sections of different spin states of beam or target particles. Such observables have the advantage that the normalization factors needed to calculate absolute cross sections from yields often divide out or cancel to a large degree in constructing asymmetries. However, normalization factors can change with time, giving different normalization factors for different target or beam spin states, leading to systematic errors in asymmetries in addition to those determined from statistics. Rapidly flipping spin orientation, such as what is routinely done with polarized beams, can significantly reduce the impact of these normalization fluctuations and drifts. Target spin orientations typically require minutes to hours to change, versus fractions of a second for beams, making systematic errors for observables based on target spin flips more difficult to control. Such systematic errors from normalization drifts are discussed in the context of the proposed measurement of the deuteron b(1) structure function at Jefferson Lab.

Wood, S A

2014-10-27T23:59:59.000Z

132

Initial quantification of human error associated with specific instrumentation and control system components in licensed nuclear power plants  

SciTech Connect (OSTI)

This report provides a methodology for the initial quantification of specific categories of human errors made in conjunction with several instrumentation and control (I and C) system components operated, maintained, and tested in licensed nuclear power plants. The resultant human error rates (HER) provide the first real systems bases of comparison for the existing derived and/or best judgement equivalent set of such rates or probabilities. These calculated error rates also provide the first real indication of human performance as it relates directly to specific tasks in nuclear plants. This work of developing specific HERs is both an extension of and an outgrowth of the generic HERs developed for safety system pumps and valves as reported in NUREG/CR-1880.

Luckas, W.J. Jr.; Lettieri, V.; Hall, R.E.

1982-02-01T23:59:59.000Z

133

Initial quantification of human error associated with specific instrumentation and control system components in licensed nuclear power plants  

SciTech Connect (OSTI)

This report provides a methodology for the initial quantification of specific categories of human errors made in conjunction with several instrumentation and control (I and C) system components operated, maintained, and tested in licensed nuclear power plants. The resultant human error rates (HER) provide the first real systems bases of comparison for the existing derived and/or best judgement equivalent set of such rates or probabilities. These calculated error rates also provide the first real indication of human performance as it relates directly to specific tasks in nuclear plants. This work of developing specific HERs is both an extension of and an outgrowth of the generic HERs developed for safety system pumps and valves as reported in NUREG/CR-1880.

Luckas, W.J. Jr.; Lettieri, V.; Hall, R.E.

1982-02-01T23:59:59.000Z

134

Spent nuclear fuel sampling strategy  

SciTech Connect (OSTI)

This report proposes a strategy for sampling the spent nuclear fuel (SNF) stored in the 105-K Basins (105-K East and 105-K West). This strategy will support path-forward decisions concerning SNF disposition efforts in the following areas: (1) SNF isolation activities such as repackaging/overpacking to a newly constructed staging facility; (2) conditioning processes for fuel stabilization; and (3) interim storage options. This strategy was developed without following the Data Quality Objective (DQO) methodology. It is, however, intended to augment the SNF project DQOs. The SNF sampling is derived by evaluating the current storage condition of the SNF and the factors that affected SNF corrosion/degradation.

Bergmann, D.W.

1995-02-08T23:59:59.000Z

135

Methodology Guidelines on Life Cycle Assessment of Photovoltaic Electricity  

E-Print Network [OSTI]

IEA-PVPS Task 12: Methodology Guidelines on Life Cycle Assessment of Photovoltaic Electricity. International Energy Agency Photovoltaic Power Systems Programme.

136

Correlated errors can lead to better performance of quantum codes  

E-Print Network [OSTI]

A formulation for evaluating the performance of quantum error correcting codes for a general error model is presented. In this formulation, the correlation between errors is quantified by a Hamiltonian description of the noise process. We classify correlated errors using the system-bath interaction: local versus nonlocal and two-body versus many-body interactions. In particular, we consider Calderbank-Shor-Steane codes and observe a better performance in the presence of correlated errors depending on the timing of the error recovery. We also find this timing to be an important factor in the design of a coding system for achieving higher fidelities.

A. Shabani

2008-03-06T23:59:59.000Z

137

A Comparative Study into Architecture-Based Safety Evaluation Methodologies using AADL's Error Annex and Failure Propagation Models  

E-Print Network [OSTI]

Failure Mode and Effects Analysis (FMEA) [25] are used to create evidence that the system fulfils its safety requirements ... design phase) are used to automatically produce Fault Trees and FMEA tables based on an architecture

Han, Jun

138

Sampling box  

DOE Patents [OSTI]

An air sampling box that uses a slidable filter tray and a removable filter cartridge to allow for the easy replacement of a filter which catches radioactive particles is disclosed.

Phillips, Terrance D. (617 Chestnut Ct., Aiken, SC 29803); Johnson, Craig (100 Midland Rd., Oak Ridge, TN 37831-0895)

2000-01-01T23:59:59.000Z

139

Biopower Report Presents Methodology for Assessing the Value...  

Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Biopower Report Presents Methodology for Assessing the Value of Co-Firing Biomass in Pulverized Coal Plants.

140

aij projects methodology: Topics by E-print Network  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

1. A New Project Execution Methodology: Integrating Project Management Principles with Quality Project Execution Methodologies. University of Kansas - KU...



141

STEPS: A Grid Search Methodology for Optimized Peptide Identification...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

STEPS: A Grid Search Methodology for Optimized Peptide Identification Filtering of MS/MS Database Search Results.

142

Hydrogen Program Goal-Setting Methodologies Report to Congress...  

Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Hydrogen Program Goal-Setting Methodologies Report to Congress. This Report to Congress, published in August 2006,...

143

Modeling of Diesel Exhaust Systems: A methodology to better simulate...  

Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Modeling of Diesel Exhaust Systems: A methodology to better simulate soot reactivity. Discussed...

144

New Methodologies for Analysis of Premixed Charge Compression...  

Broader source: Energy.gov (indexed) [DOE]

New Methodologies for Analysis of Premixed Charge Compression Ignition Engines. Presentation given at...

145

Barr Engineering Statement of Methodology Rosemount Wind Turbine...  

Office of Environmental Management (EM)

Barr Engineering Statement of Methodology, Rosemount Wind Turbine Simulations by Truescape Visual Reality, DOE/EA-1791 (May 2010).

146

Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization  

SciTech Connect (OSTI)

Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed once for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
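
A minimal sketch of the frequency-table idea for byte data follows (an interpretation of the description above, not the authors' code): tabulate once how often each (original value, approximation value) pair occurs, then evaluate any per-pair error function, for example one derived from the current transfer function, against the 256x256 table instead of against every voxel.

    import numpy as np

    def pair_frequencies(original, approx):
        """Count occurrences of each (original, approximation) byte pair once."""
        original = np.asarray(original, dtype=np.uint8).ravel().astype(np.int64)
        approx = np.asarray(approx, dtype=np.uint8).ravel().astype(np.int64)
        return np.bincount(original * 256 + approx, minlength=256 * 256).reshape(256, 256)

    def total_error(freq, error_table):
        """error_table[i, j] = per-voxel error of approximating value i by j."""
        return float((freq * error_table).sum())

    rng = np.random.default_rng(6)
    orig = rng.integers(0, 256, size=64 ** 3, dtype=np.uint8)
    appr = np.clip(orig.astype(int) + rng.integers(-3, 4, orig.size), 0, 255)

    freq = pair_frequencies(orig, appr)               # computed once per resolution level
    i, j = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
    tf = np.linspace(0.0, 1.0, 256)                   # hypothetical transfer function
    err_table = np.abs(tf[i] - tf[j])                 # cheap to rebuild when tf changes
    print("total approximation error:", round(total_error(freq, err_table), 1))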

LaMar, E; Hamann, B; Joy, K I

2001-10-16T23:59:59.000Z

147

Critical infrastructure systems of systems assessment methodology.  

SciTech Connect (OSTI)

Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies that separately consider physical security and cyber security. This research has developed a risk assessment methodology that explicitly accounts for both physical and cyber security, while preserving the traditional security paradigm of detect, delay, and respond. This methodology also accounts for the condition that a facility may be able to recover from or mitigate the impact of a successful attack before serious consequences occur. The methodology uses evidence-based techniques (which are a generalization of probability theory) to evaluate the security posture of the cyber protection systems. Cyber threats are compared against cyber security posture using a category-based approach nested within a path-based analysis to determine the most vulnerable cyber attack path. The methodology summarizes the impact of a blended cyber/physical adversary attack in a conditional risk estimate where the consequence term is scaled by a ''willingness to pay'' avoidance approach.

Sholander, Peter E.; Darby, John L.; Phelan, James M.; Smith, Bryan; Wyss, Gregory Dane; Walter, Andrew; Varnado, G. Bruce; Depoy, Jennifer Mae

2006-10-01T23:59:59.000Z

148

Quantum Error Correcting Subsystem Codes From Two Classical Linear Codes  

E-Print Network [OSTI]

The essential insight of quantum error correction was that quantum information can be protected by suitably encoding this quantum information across multiple independently erred quantum systems. Recently it was realized that, since the most general method for encoding quantum information is to encode it into a subsystem, there exists a novel form of quantum error correction beyond the traditional quantum error correcting subspace codes. These new quantum error correcting subsystem codes differ from subspace codes in that their quantum correcting routines can be considerably simpler than related subspace codes. Here we present a class of quantum error correcting subsystem codes constructed from two classical linear codes. These codes are the subsystem versions of the quantum error correcting subspace codes which are generalizations of Shor's original quantum error correcting subspace codes. For every Shor-type code, the codes we present give a considerable savings in the number of stabilizer measurements needed in their error recovery routines.

Dave Bacon; Andrea Casaccino

2006-10-17T23:59:59.000Z

149

Reply To "Comment on 'Quantum Convolutional Error-Correcting Codes' "  

E-Print Network [OSTI]

In their comment, de Almeida and Palazzo \cite{comment} discovered an error in my earlier paper concerning the construction of quantum convolutional codes (quant-ph/9712029). This error can be repaired by modifying the method of code construction.

H. F. Chau

2005-06-02T23:59:59.000Z

150

Hardware-efficient autonomous quantum error correction  

E-Print Network [OSTI]

We propose a new method to autonomously correct for errors of a logical qubit induced by energy relaxation. This scheme encodes the logical qubit as a multi-component superposition of coherent states in a harmonic oscillator, more specifically a cavity mode. The sequences of encoding, decoding and correction operations employ the non-linearity provided by a single physical qubit coupled to the cavity. We lay out in detail how to implement these operations in a practical system. This proposal directly addresses the task of building a hardware-efficient and technically realizable quantum memory.

Zaki Leghtas; Gerhard Kirchmair; Brian Vlastakis; Robert Schoelkopf; Michel Devoret; Mazyar Mirrahimi

2013-01-16T23:59:59.000Z

151

Error Analysis in Nuclear Density Functional Theory  

E-Print Network [OSTI]

Nuclear density functional theory (DFT) is the only microscopic, global approach to the structure of atomic nuclei. It is used in numerous applications, from determining the limits of stability to gaining a deep understanding of the formation of elements in the universe or the mechanisms that power stars and reactors. The predictive power of the theory depends on the amount of physics embedded in the energy density functional as well as on efficient ways to determine a small number of free parameters and solve the DFT equations. In this article, we discuss the various sources of uncertainties and errors encountered in DFT and possible methods to quantify these uncertainties in a rigorous manner.

Nicolas Schunck; Jordan D. McDonnell; Jason Sarich; Stefan M. Wild; Dave Higdon

2014-07-11T23:59:59.000Z

152

Edison Trouble Shooting and Error Messages  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


153

Clustered Error Correction of Codeword-Stabilized Quantum Codes  

E-Print Network [OSTI]

Codeword stabilized (CWS) codes are a general class of quantum codes that includes stabilizer codes and many families of non-additive codes with good parameters. For such a non-additive code correcting all t-qubit errors, we propose an algorithm that employs a single measurement to test all errors located on a given set of t qubits. Compared with exhaustive error screening, this reduces the total number of measurements required for error recovery by a factor of about 3^t.

Yunfan Li; Ilya Dumer; Leonid P. Pryadko

2010-03-08T23:59:59.000Z

154

Standard errors of parameter estimates in the ETAS model  

E-Print Network [OSTI]

Point process models ... of seismic catalogs and in short-term earthquake forecasting. The standard errors of parameter estimates ... of conventional standard error estimates based on the Hessian matrix of the log-likelihood function of the ETAS
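
A generic sketch of Hessian-based standard errors (the conventional approach discussed above, not an ETAS implementation) is given below: evaluate the observed information numerically at the maximum-likelihood estimate and take the square root of the diagonal of its inverse; the toy exponential likelihood is an assumption for illustration.

    import numpy as np

    def numerical_hessian(f, theta, h=1e-4):
        """Central-difference Hessian of a scalar function f at theta."""
        theta = np.asarray(theta, dtype=float)
        p = theta.size
        H = np.zeros((p, p))
        for i in range(p):
            for j in range(p):
                ei, ej = np.eye(p)[i] * h, np.eye(p)[j] * h
                H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                           - f(theta - ei + ej) + f(theta - ei - ej)) / (4.0 * h * h)
        return H

    # Toy log-likelihood (exponential inter-event times with rate mu); the MLE
    # is 1/mean, and the Hessian-based standard error should be near mu/sqrt(n).
    rng = np.random.default_rng(7)
    times = rng.exponential(scale=2.0, size=400)
    loglik = lambda th: times.size * np.log(th[0]) - th[0] * times.sum()
    mle = np.array([1.0 / times.mean()])

    H = numerical_hessian(loglik, mle)
    se = np.sqrt(np.diag(np.linalg.inv(-H)))          # observed-information errors
    print(f"rate MLE {mle[0]:.3f}, Hessian-based standard error {se[0]:.3f}")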

Schoenberg, Frederic Paik (Rick)

155

Instruction Guide Paying Vendors: Checking Vouchers for Errors  

E-Print Network [OSTI]

Instruction Guide: Paying Vendors: Checking Vouchers for Errors and Match Exceptions (Updated). ... the Pay Terms on the voucher · Changing the Vendor or Vendor address · Vouchering a PO that has receipts ... link or the Error Summary tab.

Watson, Craig A.

156

Systematic Comparison of Operating Reserve Methodologies: Preprint  

SciTech Connect (OSTI)

Operating reserve requirements are a key component of modern power systems, and they contribute to maintaining reliable operations with minimum economic impact. No universal method exists for determining reserve requirements, thus there is a need for a thorough study and performance comparison of the different existing methodologies. Increasing penetrations of variable generation (VG) on electric power systems are posed to increase system uncertainty and variability, thus the need for additional reserve also increases. This paper presents background information on operating reserve and its relationship to VG. A consistent comparison of three methodologies to calculate regulating and flexibility reserve in systems with VG is performed.
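
One common way to size a flexibility reserve, used here only as an illustration and not necessarily one of the three methodologies compared in the paper, is to take high and low percentiles of the historical net-load ramp distribution; the synthetic load and wind series and the 97.5/2.5 percentile choice below are assumptions.

    import numpy as np

    # Synthetic hourly load and wind for one year; the reserve requirement is
    # read off the tails of the hour-to-hour net-load ramp distribution.
    rng = np.random.default_rng(8)
    hours = np.arange(24 * 365)
    load = 1000 + 300 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 30, hours.size)
    wind = np.clip(200 + rng.normal(0, 80, hours.size), 0, 400)
    net_load = load - wind

    ramps = np.diff(net_load)                    # hour-to-hour net-load changes (MW)
    up_req = np.percentile(ramps, 97.5)          # upward flexibility requirement
    down_req = np.percentile(ramps, 2.5)         # downward flexibility requirement
    print(f"flexibility reserve: +{up_req:.0f} MW / {down_req:.0f} MW")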

Ibanez, E.; Krad, I.; Ela, E.

2014-04-01T23:59:59.000Z

157

On-line Self Error Detection with Equal Protection Against All Errors Mark G. Karpovsky, Konrad J. Kulikowski, Zhen Wang  

E-Print Network [OSTI]

We present several design techniques for memories with self-error-detection based on the proposed robust codes, which provide equal protection against all errors in computation and storage. The proposed robust codes require slightly larger overhead than standard and widely used linear codes.

Karpovsky, Mark

158

E791 DATA ACQUISITION SYSTEM Error reports received ; no new errors reported  

E-Print Network [OSTI]

... of events written to tape. [System diagram: Error and Status Displays; Mailbox for Histogram Requests; VAX-online Event Display; VAX 11/780; Event Reconstruction; Event Display; Detector Monitoring; 3 VAX Workstations; 42 EXABYTE tape drives] ... of the entire E791 DA system. The VAX 11/780 was the user interface to the VME part of the system, via the DA ...

Fermi National Accelerator Laboratory

159

Case Study/ Ground Water Sustainability: Methodology and  

E-Print Network [OSTI]

Case Study / Ground Water Sustainability: Methodology and Application to the North China Plain. The study addresses the sustainability, or the lack thereof, of ground water flow systems driven by similar hydrogeologic and economic conditions, and applies the methodology to a ground water flow system in the North China Plain (NCP) subject to severe overexploitation and rapid ...

Zheng, Chunmiao

160

Methodology in Biological Game Simon M. Huttegger  

E-Print Network [OSTI]

Huttegger and Zollman, Methodology in Biological Game Theory (slides). The ESS method: describe a game; find all the evolutionarily stable strategies (ESS); if there is only one, conclude that it is evolutionarily significant. Examples include a pooling equilibrium and a hybrid equilibrium, neither of which is an ESS.

Zollman, Kevin

161

An International Journal for Epistemology, Methodology and  

E-Print Network [OSTI]

Synthese, An International Journal for Epistemology, Methodology and Philosophy of Science. "And so on ...: reasoning with infinite diagrams" by Solomon Feferman, pp. 371-386, DOI 10.1007/s11229-011-9985-6.

Feferman, Solomon

162

Optimization Material Distribution methodology: Some electromagnetic examples  

E-Print Network [OSTI]

Optimization Material Distribution methodology: Some electromagnetic examples (P. Boissoles, H. Ben Ahmed, M. Pierre, B. Multon). Abstract: In this paper, a new Optimization Material Distribution approach is presented, intended to be highly adaptive to various kinds of electromagnetic actuator optimization approaches. Several optimal ...

Paris-Sud XI, Université de

163

Supervised classification of microbiota mitigates mislabeling errors  

E-Print Network [OSTI]

Advances in DNA sequencing technologies and concomitant advances in bioinformatics methods are revolutionizing microbiota studies. Mislabeled samples might be expected to be useless, but what if only a few of the labels are wrong? After intentionally mislabeling samples ...

Kelley, Scott

164

Analysis of Errors in a Special Perturbations Satellite Orbit Propagator  

SciTech Connect (OSTI)

We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.

Beckerman, M.; Jones, J.P.

1999-02-01T23:59:59.000Z

165

Evaluating and Minimizing Distributed Cavity Phase Errors in Atomic Clocks  

E-Print Network [OSTI]

We perform 3D finite element calculations of the fields in microwave cavities and analyze the distributed cavity phase errors of atomic clocks that they produce. The fields of cylindrical cavities are treated as an azimuthal Fourier series. Each of the lowest components produces clock errors with unique characteristics that must be assessed to establish a clock's accuracy. We describe the errors and how to evaluate them. We prove that sharp structures in the cavity do not produce large frequency errors, even at moderately high powers, provided the atomic density varies slowly. We model the amplitude and phase imbalances of the feeds. For larger couplings, these can lead to increased phase errors. We show that phase imbalances produce a novel distributed cavity phase error that depends on the cavity detuning. We also design improved cavities by optimizing the geometry and tuning the mode spectrum so that there are negligible phase variations, allowing this source of systematic error to be dramatically reduced.

Li, Ruoxin

2010-01-01T23:59:59.000Z

166

Evaluating and Minimizing Distributed Cavity Phase Errors in Atomic Clocks  

E-Print Network [OSTI]

We perform 3D finite element calculations of the fields in microwave cavities and analyze the distributed cavity phase errors of atomic clocks that they produce. The fields of cylindrical cavities are treated as an azimuthal Fourier series. Each of the lowest components produces clock errors with unique characteristics that must be assessed to establish a clock's accuracy. We describe the errors and how to evaluate them. We prove that sharp structures in the cavity do not produce large frequency errors, even at moderately high powers, provided the atomic density varies slowly. We model the amplitude and phase imbalances of the feeds. For larger couplings, these can lead to increased phase errors. We show that phase imbalances produce a novel distributed cavity phase error that depends on the cavity detuning. We also design improved cavities by optimizing the geometry and tuning the mode spectrum so that there are negligible phase variations, allowing this source of systematic error to be dramatically reduced.

Ruoxin Li; Kurt Gibble

2010-08-09T23:59:59.000Z

167

Quantum Error Correction with magnetic molecules  

E-Print Network [OSTI]

Quantum algorithms often assume independent spin qubits to produce trivial $|\\uparrow\\rangle=|0\\rangle$, $|\\downarrow\\rangle=|1\\rangle$ mappings. This can be unrealistic in many solid-state implementations with sizeable magnetic interactions. Here we show that the lower part of the spectrum of a molecule containing three exchange-coupled metal ions with $S=1/2$ and $I=1/2$ is equivalent to nine electron-nuclear qubits. We derive the relation between spin states and qubit states in reasonable parameter ranges for the rare earth $^{159}$Tb$^{3+}$ and for the transition metal Cu$^{2+}$, and study the possibility to implement Shor's Quantum Error Correction code on such a molecule. We also discuss recently developed molecular systems that could be adequate from an experimental point of view.

José J. Baldoví; Salvador Cardona-Serra; Juan M. Clemente-Juan; Luis Escalera-Moreno; Alejandro Gaita-Arińo; Guillermo Mínguez Espallargas

2014-08-22T23:59:59.000Z

168

Graphical Quantum Error-Correcting Codes  

E-Print Network [OSTI]

We introduce a purely graph-theoretical object, namely the coding clique, to construct quantum error-correcting codes. Almost all quantum codes constructed so far are stabilizer (additive) codes, and the construction of nonadditive codes, which are potentially more efficient, is not as well understood as that of stabilizer codes. Our graphical approach provides a unified and classical way to construct both stabilizer and nonadditive codes. In particular, we have explicitly constructed the optimal ((10,24,3)) code and a family of 1-error-detecting nonadditive codes with the highest encoding rate so far. In the case of stabilizer codes, a thorough search becomes tangible, and we have classified all the extremal stabilizer codes up to 8 qubits.

Sixia Yu; Qing Chen; C. H. Oh

2007-09-12T23:59:59.000Z

169

Analytic Study of Performance of Error Estimators for Linear Discriminant Analysis with Applications in Genomics  

E-Print Network [OSTI]

Thesis front matter for "Analytic Study of Performance of Error Estimators for Linear Discriminant Analysis with Applications in Genomics" (Major Subject: Electrical Engineering; December 2010). List-of-tables excerpts: minimum sample size n (n0 = n1 = n) required for a desired error criterion in the univariate case; genes selected using the validity-goodness model selection ...

Zollanvari, Amin

2012-02-14T23:59:59.000Z

170

Methodology and Process for Condition Assessment at Existing Hydropower Plants  

SciTech Connect (OSTI)

The Hydropower Advancement Project (HAP) was initiated by the U.S. Department of Energy Office of Energy Efficiency and Renewable Energy to develop and implement a systematic process with a standard methodology to identify opportunities for performance improvement at existing hydropower facilities and to predict and trend the overall condition and improvement opportunity within the U.S. hydropower fleet. The concept of performance for the HAP focuses on water use efficiency: how well a plant or individual unit converts potential energy to electrical energy over a long-term averaging period of a year or more. Performance improvement involves not only optimization of plant dispatch and scheduling but also enhancement of efficiency and availability through advanced technology and asset upgrades, and thus requires inspection and condition assessment of equipment, control systems, and other generating assets. This paper discusses the standard methodology and process for condition assessment of approximately 50 facilities nationwide, including sampling techniques to ensure valid expansion of the 50 assessment results to the entire hydropower fleet. The application and refining process and the results from three demonstration assessments are also presented in this paper.

Zhang, Qin Fen [ORNL] [ORNL; Smith, Brennan T [ORNL] [ORNL; Cones, Marvin [Mesa Associates, Inc.] [Mesa Associates, Inc.; March, Patrick [Hydro Performance Processes, Inc.] [Hydro Performance Processes, Inc.; Dham, Rajesh [U.S. Department of Energy] [U.S. Department of Energy; Spray, Michael [New West Technologies, LLC.] [New West Technologies, LLC.

2012-01-01T23:59:59.000Z

171

Theoretical analysis of reflected ray error from surface slope error and their application to the solar concentrated collector  

E-Print Network [OSTI]

Surface slope error of the concentrator is one of the main factors influencing the performance of solar concentrating collectors: it deviates the reflected rays and reduces the intercepted radiation. This paper presents a general equation, derived from geometric optics, for calculating the standard deviation of the reflected-ray error from the slope error, applies the equation to five kinds of solar concentrating reflector, and provides typical results. The results indicate that the slope error is amplified by more than a factor of two in the reflected ray when the incidence angle is greater than zero. The equation for the reflected-ray error is generally applicable to all reflecting surfaces and can also be used to control the error when designing an off-axis (abaxial) optical system.

Huang, Weidong

2011-01-01T23:59:59.000Z
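
The transfer of slope error into reflected-ray error described above can be checked numerically with the vector reflection law r = d - 2(d·n)n. The minimal sketch below Monte-Carlo samples Gaussian in-plane slope errors at normal incidence, where the angular error of the reflected ray is twice the slope error; the sample size, slope-error magnitude, and Gaussian model are assumptions for illustration, and the oblique-incidence factors analyzed in the paper are not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
sigma_slope = 2e-3            # assumed RMS slope error, radians
n_samples = 50_000

def reflect(d, n):
    # vector reflection law: r = d - 2 (d . n) n
    return d - 2.0 * np.dot(d, n) * n

d_in = np.array([0.0, 0.0, -1.0])                    # normal incidence on a z-up mirror
r_nominal = reflect(d_in, np.array([0.0, 0.0, 1.0]))

deviations = np.empty(n_samples)
for i in range(n_samples):
    eps = rng.normal(0.0, sigma_slope)               # random in-plane surface tilt
    n = np.array([np.sin(eps), 0.0, np.cos(eps)])    # perturbed surface normal
    r = reflect(d_in, n)
    deviations[i] = np.arccos(np.clip(np.dot(r, r_nominal), -1.0, 1.0))

rms_dev = np.sqrt(np.mean(deviations**2))
print(f"RMS reflected-ray deviation / RMS slope error = {rms_dev / sigma_slope:.2f}")  # close to 2.0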

172

Error-eliminating rapid ultrasonic firing  

DOE Patents [OSTI]

A system for producing reliable navigation data for a mobile vehicle, such as a robot, combines multiple range samples to increase the "confidence" of the algorithm in the existence of an obstacle. At higher vehicle speed, it is crucial to sample each sensor quickly and repeatedly to gather multiple samples in time to avoid a collision. Erroneous data is rejected by delaying the issuance of an ultrasonic energy pulse by a predetermined wait-period, which may be different during alternate ultrasonic firing cycles. Consecutive readings are compared, and the corresponding data is rejected if the readings differ by more than a predetermined amount. The rejection rate for the data is monitored and the operating speed of the navigation system is reduced if the data rejection rate is increased. This is useful to distinguish and eliminate noise from the data which truly represents the existence of an article in the field of operation of the vehicle.

Borenstein, J.; Koren, Y.

1993-08-24T23:59:59.000Z
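
The comparison-and-rejection logic summarized in the abstract lends itself to a compact sketch. The function below is a hypothetical illustration only: it compares consecutive range readings, rejects pairs that disagree by more than a threshold, and flags a speed reduction when the rejection rate climbs. The thresholds, names, and the simple rate estimate are assumptions; the alternating firing-delay scheme of the patent is not modeled.

def filter_readings(readings, max_diff=0.10, high_rejection_rate=0.3):
    # Compare consecutive ultrasonic range readings and reject inconsistent pairs.
    accepted, rejected = [], 0
    for previous, current in zip(readings, readings[1:]):
        if abs(current - previous) > max_diff:
            rejected += 1              # likely noise or crosstalk, not a real obstacle
        else:
            accepted.append(current)   # consistent readings are trusted
    rejection_rate = rejected / max(len(readings) - 1, 1)
    slow_down = rejection_rate > high_rejection_rate   # throttle vehicle speed if noisy
    return accepted, rejection_rate, slow_down

readings = [2.00, 2.01, 3.50, 2.02, 2.03, 0.70, 2.04]   # metres, toy data
print(filter_readings(readings))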

173

Error-eliminating rapid ultrasonic firing  

DOE Patents [OSTI]

A system for producing reliable navigation data for a mobile vehicle, such as a robot, combines multiple range samples to increase the "confidence" of the algorithm in the existence of an obstacle. At higher vehicle speed, it is crucial to sample each sensor quickly and repeatedly to gather multiple samples in time to avoid a collision. Erroneous data is rejected by delaying the issuance of an ultrasonic energy pulse by a predetermined wait-period, which may be different during alternate ultrasonic firing cycles. Consecutive readings are compared, and the corresponding data is rejected if the readings differ by more than a predetermined amount. The rejection rate for the data is monitored and the operating speed of the navigation system is reduced if the data rejection rate is increased. This is useful to distinguish and eliminate noise from the data which truly represents the existence of an article in the field of operation of the vehicle.

Borenstein, Johann (Ann Arbor, MI); Koren, Yoram (Ann Arbor, MI)

1993-08-24T23:59:59.000Z

174

Investigating surety methodologies for cognitive systems.  

SciTech Connect (OSTI)

Advances in cognitive science provide a foundation for new tools that promise to advance human capabilities with significant positive impacts. As with any new technology breakthrough, associated technical and non-technical risks are involved. Sandia has mitigated both technical and non-technical risks by applying advanced surety methodologies in such areas as nuclear weapons, nuclear reactor safety, nuclear materials transport, and energy systems. In order to apply surety to the development of cognitive systems, we must understand the concepts and principles that characterize the certainty of a system's operation as well as the risk areas of cognitive sciences. This SAND report documents a preliminary spectrum of risks involved with cognitive sciences, and identifies some surety methodologies that can be applied to potentially mitigate such risks. Some potential areas for further study are recommended. In particular, a recommendation is made to develop a cognitive systems epistemology framework for more detailed study of these risk areas and applications of surety methods and techniques.

Caudell, Thomas P. (University of New Mexico, Albuquerque, NM); Peercy, David Eugene; Mills, Kristy (University of New Mexico, Albuquerque, NM); Caldera, Eva (University of New Mexico, Albuquerque, NM)

2006-11-01T23:59:59.000Z

175

A planning methodology for arterial streets  

E-Print Network [OSTI]

List-of-tables and list-of-figures excerpts: Level-of-Service Guidelines; Sensitivity of Characteristic Input Variables; Suggested Default Values for Use with the Florida Planning Methodology; Summary of Characteristic Variables and Operational Conditions; Comparison of Measured and Predicted Results for Incremental v/c Ratios; Transportation and Development Land Use Cycle; General Analytical Format of the Florida Planning Procedure; Tabular LOS Output of the ART TAB Arterial Planning Program; Frequency of HCM Classifications Among Arterials ...

Williams, Marc Daryl

1991-01-01T23:59:59.000Z

176

Deterministic treatment of model error in geophysical data assimilation  

E-Print Network [OSTI]

This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational c...

Carrassi, Alberto

2015-01-01T23:59:59.000Z

177

Quantum root-mean-square error and measurement uncertainty relations  

E-Print Network [OSTI]

Recent years have witnessed a controversy over Heisenberg's famous error-disturbance relation. Here we resolve the conflict by way of an analysis of the possible conceptualizations of measurement error and disturbance in quantum mechanics. We discuss two approaches to adapting the classic notion of root-mean-square error to quantum measurements. One is based on the concept of noise operator; its natural operational content is that of a mean deviation of the values of two observables measured jointly, and thus its applicability is limited to cases where such joint measurements are available. The second error measure quantifies the differences between two probability distributions obtained in separate runs of measurements and is of unrestricted applicability. We show that there are no nontrivial unconditional joint-measurement bounds for {\\em state-dependent} errors in the conceptual framework discussed here, while Heisenberg-type measurement uncertainty relations for {\\em state-independent} errors have been proven.

Paul Busch; Pekka Lahti; Reinhard F Werner

2014-10-10T23:59:59.000Z

178

Parallel Worldline Numerics: Implementation and Error Analysis  

E-Print Network [OSTI]

We give an overview of the worldline numerics technique, and discuss the parallel CUDA implementation of a worldline numerics algorithm. In the worldline numerics technique, we wish to generate an ensemble of representative closed-loop particle trajectories, and use these to compute an approximate average value for Wilson loops. We show how this can be done with a specific emphasis on cylindrically symmetric magnetic fields. The fine-grained, massive parallelism provided by the GPU architecture results in considerable speedup in computing Wilson loop averages. Furthermore, we give a brief overview of uncertainty analysis in the worldline numerics method. There are uncertainties from discretizing each loop, and from using a statistical ensemble of representative loops. The former can be minimized so that the latter dominates. However, determining the statistical uncertainties is complicated by two subtleties. Firstly, the distributions generated by the worldline ensembles are highly non-Gaussian, and so the standard error in the mean is not a good measure of the statistical uncertainty. Secondly, because the same ensemble of worldlines is used to compute the Wilson loops at different values of $T$ and $x_\\mathrm{ cm}$, the uncertainties associated with each computed value of the integrand are strongly correlated. We recommend a form of jackknife analysis which deals with both of these problems.

Dan Mazur; Jeremy S. Heyl

2014-07-28T23:59:59.000Z
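
The jackknife idea mentioned at the end of the abstract can be illustrated generically. The sketch below applies a delete-one jackknife to a nonlinear function of a sample mean, which is the situation where the naive standard error of the mean is unreliable; the toy statistic and data are assumptions, and this is not the specific estimator used in the paper.

import numpy as np

rng = np.random.default_rng(2)
samples = rng.lognormal(mean=0.0, sigma=1.0, size=400)   # toy, highly non-Gaussian data

def statistic(x):
    # a nonlinear function of the sample mean (placeholder for a Wilson-loop average)
    return np.log(np.mean(x))

n = len(samples)
estimate = statistic(samples)
replicates = np.array([statistic(np.delete(samples, i)) for i in range(n)])  # leave-one-out
jack_mean = replicates.mean()
jack_var = (n - 1) / n * np.sum((replicates - jack_mean) ** 2)

print(f"estimate = {estimate:.4f} +/- {np.sqrt(jack_var):.4f} (jackknife)")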

179

Homological Error Correction: Classical and Quantum Codes  

E-Print Network [OSTI]

We prove several theorems characterizing the existence of homological error correction codes both classically and quantumly. Not every classical code is homological, but we find a family of classical homological codes saturating the Hamming bound. In the quantum case, we show that for non-orientable surfaces it is impossible to construct homological codes based on qudits of dimension $D>2$, while for orientable surfaces with boundaries it is possible to construct them for arbitrary dimension $D$. We give a method to obtain planar homological codes based on the construction of quantum codes on compact surfaces without boundaries. We show how the original Shor's 9-qubit code can be visualized as a homological quantum code. We study the problem of constructing quantum codes with optimal encoding rate. In the particular case of toric codes we construct an optimal family and give an explicit proof of its optimality. For homological quantum codes on surfaces of arbitrary genus we also construct a family of codes asymptotically attaining the maximum possible encoding rate. We provide the tools of homology group theory for graphs embedded on surfaces in a self-contained manner.

H. Bombin; M. A. Martin-Delgado

2006-05-10T23:59:59.000Z

180

Temperature-dependent errors in nuclear lattice simulations  

E-Print Network [OSTI]

We study the temperature dependence of discretization errors in nuclear lattice simulations. We find that for systems with strong attractive interactions the predominant error arises from the breaking of Galilean invariance. We propose a local "well-tempered" lattice action which eliminates much of this error. The well-tempered action can be readily implemented in lattice simulations for nuclear systems as well as cold atomic Fermi systems.

Dean Lee; Richard Thomson

2007-01-17T23:59:59.000Z

181

Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments  

SciTech Connect (OSTI)

Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.

Pevey, Ronald E.

2005-09-15T23:59:59.000Z

182

Optimized Learning with Bounded Error for Feedforward Neural Networks  

E-Print Network [OSTI]

Optimized Learning with Bounded Error for Feedforward Neural Networks, by A. Alessandri and M. Sanguineti. A. Alessandri is with the Naval Automation ...

Maggiore, Manfredi

183

Nonlinear local error bounds via a change of metric  

E-Print Network [OSTI]

Oct 23, 2014 ... Abstract: In this work, we improve the approach of Corvellec-Motreanu to nonlinear error bounds for lower semicontinuous functions on ...

Dominique Azé

2014-10-23T23:59:59.000Z

184

New Fractional Error Bounds for Polynomial Systems with ...  

E-Print Network [OSTI]

Our major result extends the existing error bounds from the system involving only a ... linear complementarity systems with polynomial data as well as high-order ...

2014-07-27T23:59:59.000Z

185

E-Print Network 3.0 - avoid vocal errors Sample Search Results  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Summary: RESPONSES TO THE AUDIO BROADCASTS OF PREDATOR VOCALIZATIONS BY EIGHT SYMPATRIC PRIMATES IN SURINAME, SOUTH AMERICA ...

186

E-Print Network 3.0 - amino acid metabolism inborn errors Sample...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

95 4.4.3 Metabolic Objective under Different Amino Acid Supplementation 96 4... injury (Lee et al. 2000a) and to investigate insulin and ... Source: Rutgers University,...

187

E-Print Network 3.0 - automatic global error Sample Search Results  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Biology and Medicine 98 Application of genetic algorithms to infinite impulse response adaptive filters Stuart J Flockton and Michael S White Summary: response adaptive filters...

188

E-Print Network 3.0 - account positioning errors Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Collection: Engineering 26 I:AuthorizationAUTHORZN2008FormsDivisionDivFISB62008.doc 2 DIVISIONAL ACCESS: REQUEST FOR SETUP OR CHANGE Summary: to identify...

189

E-Print Network 3.0 - axonal pathfinding errors Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

University; Horvitz, H. Robert - Department of Biology, Massachusetts Institute of Technology (MIT) Collection: Biology and Medicine 3 Oxygen levels affect axon guidance and...

190

Analysis of Cloud Variability and Sampling Errors in Surface and Satellite Measurements

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


191

A proposed methodology for computational fluid dynamics code verification, calibration, and validation  

SciTech Connect (OSTI)

Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.

Aeschliman, D.P.; Oberkampf, W.L.; Blottner, F.G.

1995-07-01T23:59:59.000Z

192

Integrated Scenario-based Design Methodology for Collaborative Technology Innovation  

E-Print Network [OSTI]

Integrated Scenario-based Design Methodology for Collaborative Technology Innovation (Fabrice Forest). Technological innovation often requires large-scale collaborative partnership between many heterogeneous actors; this methodology supports information technology innovation with end-to-end assistance from the human and social sciences.

Paris-Sud XI, Université de

193

advanced diagnostic methodology: Topics by E-print Network  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Summary: Methodology Guidelines on Life Cycle Assessment of Photovoltaic Electricity, Report IEA-PVPS T12-03:2011, IEA-PVPS Task 12 Methodology ...

194

A methodology for forecasting carbon dioxide flooding performance  

E-Print Network [OSTI]

A methodology was developed for forecasting carbon dioxide (CO2) flooding performance quickly and reliably. The feasibility of carbon dioxide flooding in the Dollarhide Clearfork "AB" Unit was evaluated using the methodology. This technique is very...

Marroquin Cabrera, Juan Carlos

1998-01-01T23:59:59.000Z

195

Novel Optimization Methodology for Welding Process/Consumable Integration  

SciTech Connect (OSTI)

Advanced materials are being developed to improve the energy efficiency of many industries of the future, including steel, mining, and chemicals, as well as US infrastructure, including bridges, pipelines, and buildings. Effective deployment of these materials is highly dependent upon the development of arc welding technology. Traditional welding technology development is slow and often involves expensive and time-consuming trial-and-error experimentation. The reason for this is the lack of useful predictive tools that enable welding technology development to keep pace with the deployment of new materials in various industrial sectors. Literature reviews showed two kinds of modeling activities. Academic and national laboratory efforts focus on developing integrated weld process models by employing detailed scientific methodologies. However, these models are cumbersome and not easy to use. Therefore, these scientific models have limited application in real-world industrial conditions. On the other hand, industrial users have relied on simple predictive models based on analytical and empirical equations to drive their product development. The scope of these simple models is limited. In this research, attempts were made to bridge this gap and provide the industry with a computational tool that combines the advantages of both approaches. This research resulted in the development of predictive tools which facilitate the development of optimized welding processes and consumables. The work demonstrated that it is possible to develop hybrid integrated models for relating the weld metal composition and process parameters to the performance of welds. In addition, these tools can be deployed to industrial users through a user-friendly graphical interface. In principle, welding industry users can use these modular tools to guide their selection of welding process parameters and consumable composition. It is hypothesized that by expanding these tools throughout the welding industry, substantial energy savings can be made. Savings are expected to be even greater in the case of new steels, which will require extensive mapping over large experimental ranges of parameters such as voltage, current, speed, heat input, and pre-heat.

Quintana, Marie A; DebRoy, Tarasankar; Vitek, John; Babu, Suresh

2006-01-15T23:59:59.000Z

196

Waste Package Component Design Methodology Report  

SciTech Connect (OSTI)

This Executive Summary provides an overview of the methodology being used by the Yucca Mountain Project (YMP) to design waste packages and ancillary components. This summary information is intended for readers with general interest, but also provides technical readers a general framework surrounding a variety of technical details provided in the main body of the report. The purpose of this report is to document and ensure appropriate design methods are used in the design of waste packages and ancillary components (the drip shields and emplacement pallets). The methodology includes identification of necessary design inputs, justification of design assumptions, and use of appropriate analysis methods, and computational tools. This design work is subject to ''Quality Assurance Requirements and Description''. The document is primarily intended for internal use and technical guidance for a variety of design activities. It is recognized that a wide audience including project management, the U.S. Department of Energy (DOE), the U.S. Nuclear Regulatory Commission, and others are interested to various levels of detail in the design methods and therefore covers a wide range of topics at varying levels of detail. Due to the preliminary nature of the design, readers can expect to encounter varied levels of detail in the body of the report. It is expected that technical information used as input to design documents will be verified and taken from the latest versions of reference sources given herein. This revision of the methodology report has evolved with changes in the waste package, drip shield, and emplacement pallet designs over many years and may be further revised as the design is finalized. Different components and analyses are at different stages of development. Some parts of the report are detailed, while other less detailed parts are likely to undergo further refinement. The design methodology is intended to provide designs that satisfy the safety and operational requirements of the YMP. Four waste package configurations have been selected to illustrate the application of the methodology during the licensing process. These four configurations are the 21-pressurized water reactor absorber plate waste package (21-PWRAP), the 44-boiling water reactor waste package (44-BWR), the 5 defense high-level radioactive waste (HLW) DOE spent nuclear fuel (SNF) codisposal short waste package (5-DHLWDOE SNF Short), and the naval canistered SNF long waste package (Naval SNF Long). Design work for the other six waste packages will be completed at a later date using the same design methodology. These include the 24-boiling water reactor waste package (24-BWR), the 21-pressurized water reactor control rod waste package (21-PWRCR), the 12-pressurized water reactor waste package (12-PWR), the 5 defense HLW DOE SNF codisposal long waste package (5-DHLWDOE SNF Long), the 2 defense HLW DOE SNF codisposal waste package (2-MC012-DHLW), and the naval canistered SNF short waste package (Naval SNF Short). This report is only part of the complete design description. Other reports related to the design include the design reports, the waste package system description documents, manufacturing specifications, and numerous documents for the many detailed calculations. The relationships between this report and other design documents are shown in Figure 1.

D.C. Mecham

2004-07-12T23:59:59.000Z

197

Stewart and Khosla: The Chimera Methodology 1 FINAL DRAFT  

E-Print Network [OSTI]

Stewart and Khosla, The Chimera Methodology (final draft; contact pkk@ri.cmu.edu). Abstract: The Chimera Methodology is a software engineering paradigm that enables ... The objects have been developed and incorporated into the Chimera Real-Time Operating System. Techniques ...

198

Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination  

SciTech Connect (OSTI)

We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states $\rho_1, \ldots, \rho_r$. By splitting up the overall test into multiple binary tests in various ways, we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) $C(\rho_1, \ldots, \rho_r)$, as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, $\min_{j<k} C(\rho_j, \rho_k)$.

Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk [Department of Mathematics, Royal Holloway University of London, Egham TW20 0EX (United Kingdom); Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent (Belgium); Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com [Física Teňrica: Informació i Fenomens Quŕntics, Universitat Autňnoma de Barcelona, ES-08193 Bellaterra, Barcelona (Spain); Mathematical Institute, Budapest University of Technology and Economics, Egry József u 1., Budapest 1111 (Hungary)

2014-10-15T23:59:59.000Z
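
For reference, the binary quantum Chernoff divergence and the pairwise-minimum construction referred to in the abstract are conventionally written as follows (standard definitions from the state-discrimination literature, not quoted from this particular paper):

\[
  C(\rho,\sigma) = -\log \min_{0 \le s \le 1} \operatorname{Tr}\!\left(\rho^{s}\sigma^{1-s}\right),
  \qquad
  C(\rho_1,\ldots,\rho_r) = \min_{j<k} C(\rho_j,\rho_k).
\]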

199

Project Management Methodology | Department of Energy  

Office of Environmental Management (EM)


200

Update of Part 61 Impacts Analysis Methodology. Methodology report. Volume 1  

SciTech Connect (OSTI)

Under contract to the US Nuclear Regulatory Commission, the Envirosphere Company has expanded and updated the impacts analysis methodology used during the development of the 10 CFR Part 61 rule to allow improved consideration of the costs and impacts of treatment and disposal of low-level waste that is close to or exceeds Class C concentrations. The modifications described in this report principally include: (1) an update of the low-level radioactive waste source term, (2) consideration of additional alternative disposal technologies, (3) expansion of the methodology used to calculate disposal costs, (4) consideration of an additional exposure pathway involving direct human contact with disposed waste due to a hypothetical drilling scenario, and (5) use of updated health physics analysis procedures (ICRP-30). Volume 1 of this report describes the calculational algorithms of the updated analysis methodology.

Oztunali, O.I.; Roles, G.W.

1986-01-01T23:59:59.000Z

201

Uniform and optimal error estimates of an exponential wave ...  

E-Print Network [OSTI]

of the error propagation, cut-off of the nonlinearity, and the energy method. ... gives Lemma 3.4 for the local truncation error, which is of spectral order in ... estimates, we adopt a strategy similar to the finite difference method [4] (cf. diagram ...

2014-05-01T23:59:59.000Z

202

PROPAGATION OF ERRORS IN SPATIAL ANALYSIS Peter P. Siska  

E-Print Network [OSTI]

The conversion of data from analog to digital form used to be an extremely time-consuming process. At present, ... the resulting error is inflated up to 20 percent for each grid cell of the final map. The magnitude of errors naturally increases with the addition of every new layer entering the overlay process ...

Hung, I-Kuai

203

ERROR VISUALIZATION FOR TANDEM ACOUSTIC MODELING ON THE AURORA TASK  

E-Print Network [OSTI]

Error Visualization for Tandem Acoustic Modeling on the Aurora Task (Manuel J. Reyes). This structure reduces the error rate on the Aurora 2 noisy English digits task by more than 50% compared ... The development of tandem systems showed an improvement in the performance of these systems on the Aurora task [2].

Ellis, Dan

204

Entanglement and Quantum Error Correction with Superconducting Qubits  

E-Print Network [OSTI]

Entanglement and Quantum Error Correction with Superconducting Qubits, a dissertation presented by David Reed. ... is to use superconducting quantum bits in the circuit quantum electrodynamics (cQED) architecture. There ...

205

Grid-scale Fluctuations and Forecast Error in Wind Power  

E-Print Network [OSTI]

The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_{\\tau}$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_{\\zeta}$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no $a$ $priori$ knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_{\\tau}$) and the scaling error ($e_{\\zeta}$).

Bel, G; Toots, M; Bandi, M M

2015-01-01T23:59:59.000Z

206

Grid-scale Fluctuations and Forecast Error in Wind Power  

E-Print Network [OSTI]

The fluctuations in wind power entering an electrical grid (Irish grid) were analyzed and found to exhibit correlated fluctuations with a self-similar structure, a signature of large-scale correlations in atmospheric turbulence. The statistical structure of temporal correlations for fluctuations in generated and forecast time series was used to quantify two types of forecast error: a timescale error ($e_{\\tau}$) that quantifies the deviations between the high frequency components of the forecast and the generated time series, and a scaling error ($e_{\\zeta}$) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of the generated power. With no $a$ $priori$ knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error ($e_{\\tau}$) and the scaling error ($e_{\\zeta}$).

G. Bel; C. P. Connaughton; M. Toots; M. M. Bandi

2015-03-29T23:59:59.000Z

207

A Sensing Error Aware MAC Protocol for Cognitive Radio Networks  

E-Print Network [OSTI]

Cognitive radios (CR) are intelligent radio devices that can sense the radio environment and adapt to changes in the radio environment. Spectrum sensing and spectrum access are the two key CR functions. In this paper, we present a spectrum sensing error aware MAC protocol for a CR network collocated with multiple primary networks. We explicitly consider both types of sensing errors in the CR MAC design, since such errors are inevitable for practical spectrum sensors and more important, such errors could have significant impact on the performance of the CR MAC protocol. Two spectrum sensing polices are presented, with which secondary users collaboratively sense the licensed channels. The sensing policies are then incorporated into p-Persistent CSMA to coordinate opportunistic spectrum access for CR network users. We present an analysis of the interference and throughput performance of the proposed CR MAC, and find the analysis highly accurate in our simulation studies. The proposed sensing error aware CR MAC p...

Hu, Donglin

2011-01-01T23:59:59.000Z

208

Laboratory and field-scale test methodology for reliable characterization of solidified/stabilized hazardous wastes  

SciTech Connect (OSTI)

A methodology for flow-through leach testing is proposed and discussed, and preliminary testing using strontium-doped, cement-based S/S samples is presented. The complementary and necessary characterization of the S/S matrix before and after testing is discussed and placed in perspective within the total evaluation of laboratory- and field-scale leach testing for predicting long-term performance and for S/S technology design and improvement.

Gray, K.E.; Holder, J. [Univ. of Texas, Austin, TX (United States). Center for Earth Sciences and Engineering; Mollah, M.Y.A.; Hess, T.R.; Vempati, R.K.; Cocke, D.L. [Lamar Univ., Beaumont, TX (United States)

1995-12-31T23:59:59.000Z

209

DIGITAL TECHNOLOGY BUSINESS CASE METHODOLOGY GUIDE & WORKBOOK  

SciTech Connect (OSTI)

Performance advantages of the new digital technologies are widely acknowledged, but it has proven difficult for utilities to derive business cases for justifying investment in these new capabilities. Lack of a business case is often cited by utilities as a barrier to pursuing wide-scale application of digital technologies to nuclear plant work activities. The decision to move forward with funding usually hinges on demonstrating actual cost reductions that can be credited to budgets and thereby truly reduce O&M or capital costs. Technology enhancements, while enhancing work methods and making work more efficient, often fail to eliminate workload such that it changes overall staffing and material cost requirements. It is critical to demonstrate cost reductions or impacts on non-cost performance objectives in order for the business case to justify investment by nuclear operators. This Business Case Methodology approaches building a business case for a particular technology or suite of technologies by detailing how they impact an operator in one or more of the three following areas: Labor Costs, Non-Labor Costs, and Key Performance Indicators (KPIs). Key to those impacts will be identifying where the savings are “harvestable,” meaning they result in an actual reduction in headcount and/or cost. The report consists of a Digital Technology Business Case Methodology Guide and an accompanying spreadsheet workbook that will enable the user to develop a business case.

Thomas, Ken; Lawrie, Sean; Hart, Adam; Vlahoplus, Chris

2014-09-01T23:59:59.000Z

210

Methodology for Defining Gap Areas between Course-over-ground Locations  

SciTech Connect (OSTI)

Finding all areas that lie outside some distance d from a polyline is a problem with many potential applications. This application of the Visual Sample Plan (VSP) software required finding all areas that were more than distance d from a set of existing paths (roads and trails) represented by polylines. An outer container polygon (known in VSP as a “sample area”) defines the extents of the area of interest. The term “gap area” was adopted for this project, but another useful term might be “negative coverage area.” The project required a polygon solution rather than a raster solution. The search for a general solution provided no results, so this methodology was developed

Wilson, John E.

2013-09-30T23:59:59.000Z
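
The polygon operation described above can be prototyped with an off-the-shelf GIS geometry library: buffer each polyline by d, union the buffers, and subtract the union from the sample-area polygon. The sketch below uses Shapely for illustration; it is not the VSP implementation, and the coordinates, distance, and names are hypothetical.

from shapely.geometry import LineString, Polygon
from shapely.ops import unary_union

sample_area = Polygon([(0, 0), (100, 0), (100, 100), (0, 100)])   # container polygon
paths = [
    LineString([(10, 10), (90, 20)]),   # e.g., a road
    LineString([(20, 80), (80, 85)]),   # e.g., a trail
]
d = 15.0                                # coverage distance

covered = unary_union([p.buffer(d) for p in paths])   # area within d of any path
gap_areas = sample_area.difference(covered)           # everything farther than d

print(gap_areas.geom_type, round(gap_areas.area, 1))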

211

Logical Error Rate Scaling of the Toric Code  

E-Print Network [OSTI]

To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behavior in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead -- the total number of physical qubits required to perform error correction.

Fern H. E. Watson; Sean D. Barrett

2014-09-26T23:59:59.000Z

212

Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint  

SciTech Connect (OSTI)

The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

Stynes, J. K.; Ihas, B.

2012-04-01T23:59:59.000Z

213

Wind Power Forecasting Error Distributions: An International Comparison; Preprint  

SciTech Connect (OSTI)

Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.

Hodge, B. M.; Lew, D.; Milligan, M.; Holttinen, H.; Sillanpaa, S.; Gomez-Lazaro, E.; Scharff, R.; Soder, L.; Larsen, X. G.; Giebel, G.; Flynn, D.; Dobschinski, J.

2012-09-01T23:59:59.000Z

214

Universal Framework for Quantum Error-Correcting Codes  

E-Print Network [OSTI]

We present a universal framework for quantum error-correcting codes, i.e., the one that applies for the most general quantum error-correcting codes. This framework is established on the group algebra, an algebraic notation for the nice error bases of quantum systems. The nicest thing about this framework is that we can characterize the properties of quantum codes by the properties of the group algebra. We show how it characterizes the properties of quantum codes as well as generates some new results about quantum codes.

Zhuo Li; Li-Juan Xing

2009-01-04T23:59:59.000Z

215

E-Print Network 3.0 - accident analysis methodology Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

no systematic trends were found in the ... Source: George Mason University, Center for Air Transportation Systems Research Collection: Energy Storage, Conversion and Utilization...

216

E-Print Network 3.0 - assessment lca methodology Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

a tool to investigate the environmental... applications as transistors and electromagnetic interference (EMI) shielding). Life cycle assessment (LCA... Life Cycle ... Source:...

217

E-Print Network 3.0 - ari methodology modeling Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Dec 20, 2006. ARR: Overview of the Engineering Design and Analysis of the ARIES-CS Power Plant... Presented by A. René Raffray, University of California, San Diego, for the ...

218

E-Print Network 3.0 - attack methodology analysis Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Antonio Collection: Computer Technologies and Information Sciences 29 The Hierarchical Threat Model of Routing Security for wireless Ad hoc Networks College of Electrical and...

219

DIESEL AEROSOL SAMPLING METHODOLOGY -CRC E-43 TECHNICAL SUMMARY AND CONCLUSIONS  

E-Print Network [OSTI]

in diameter, PM2.5) in the atmosphere (Health Effects Institute 2001, 2002a). These concerns apply to Diesel exhaust and are discussed in depth elsewhere (Health Effects Institute 1995, 2002b). Particulate emissions ... the formation of the large number of nanoparticles. The Health Effects Institute and the investigators were ...

Minnesota, University of

220

E-Print Network 3.0 - alternative testing methodologies Sample...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Collection: Mathematics 86 Power-Aware Test Planning in the Early System-on-Chip Design Exploration Process Summary: that, by exploring different design and test alternatives,...

221

E-Print Network 3.0 - accident risks methodology Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Energy Storage, Conversion and Utilization ; Engineering 6 THE RELATIONSHIP BETWEEN TRAIN LENGTH AND ACCIDENT CAUSES AND RATES Summary: ABSTRACT Train accident rates are a...

222

E-Print Network 3.0 - atheana methodologies uma Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

of NPCs for the game Estrada Real Digital. Carlcio S. Cordeiro, Carlos Augusto P. de Sousa, Diogo S. Sampaio, Luiz Gustavo. Summary: techniques. This paper presents the...

223

Diagnosing multiplicative error by lensing magnification of type Ia supernovae  

E-Print Network [OSTI]

Weak lensing causes spatially coherent fluctuations in flux of type Ia supernovae (SNe Ia). This lensing magnification allows for weak lensing measurement independent of cosmic shear. It is free of shape measurement errors associated with cosmic shear and can therefore be used to diagnose and calibrate multiplicative error. Although this lensing magnification is difficult to measure accurately in auto correlation, its cross correlation with cosmic shear and galaxy distribution in overlapping area can be measured to significantly higher accuracy. Therefore these cross correlations can put useful constraint on multiplicative error, and the obtained constraint is free of cosmic variance in weak lensing field. We present two methods implementing this idea and estimate their performances. We find that, with $\\sim 1$ million SNe Ia that can be achieved by the proposed D2k survey with the LSST telescope (Zhan et al. 2008), multiplicative error of $\\sim 0.5\\%$ for source galaxies at $z_s\\sim 1$ can be detected and la...

Zhang, Pengjie

2015-01-01T23:59:59.000Z

224

YELLOW SEA ACOUSTIC UNCERTAINTY CAUSED BY HYDROGRAPHIC DATA ERROR  

E-Print Network [OSTI]

the littoral and blue waters. After a weapon platform has detected its targets, the sensors on torpedoes ... bathymetry, bottom type, and sound speed profiles. Here, the effect of sound speed errors (i.e., hydrographic ...

Chu, Peter C.

225

Strontium-90 Error Discovered in Subcontract Laboratory Spreadsheet  

SciTech Connect (OSTI)

West Valley Demonstration Project health physicists and environmental scientists discovered a series of errors in a subcontractor's spreadsheet being used to reduce data as part of their strontium-90 analytical process.

D. D. Brown; A. S. Nagel

1999-07-31T23:59:59.000Z

226

Kinetic energy error in the NIMROD spheromak simulations Carl Sovinec  

E-Print Network [OSTI]

Kinetic energy error in the NIMROD spheromak simulations Carl Sovinec 10/25/00 Dmitri Ryutov at the ends (as in the spheromak simulations), it may lead to compression in a boundary layer.] The maximum

Sovinec, Carl

227

Error estimation and adaptive mesh refinement for aerodynamic flows  

E-Print Network [OSTI]

Error estimation and adaptive mesh refinement for aerodynamic flows Ralf Hartmann1 and Paul Houston2 1 Institute of Aerodynamics and Flow Technology DLR (German Aerospace Center) Lilienthalplatz 7

Hartmann, Ralf

228

An Approximation Algorithm for Constructing Error Detecting Prefix ...  

E-Print Network [OSTI]

Sep 2, 2006 ... 2-bit Hamming prefix code problem. Our algorithm spends O(n log^3 n) time to calculate a 2-bit Hamming prefix code with an additive error of at ...

2006-09-02T23:59:59.000Z

229

A Priori Error Estimates for Some Discontinuous Galerkin Immersed ...  

E-Print Network [OSTI]

estimate in a mesh-dependent energy norm is derived, and this error ... 0(T_h), integrate both sides on each element K ∈ T_h, and apply Green's formula to ...

2015-01-12T23:59:59.000Z

230

Secured Pace Web Server with Collaboration and Error Logging Capabilities  

E-Print Network [OSTI]

: Secure Sockets Layer (SSL) using the Java Secure Socket Extension (JSSE) API, error logging ... Chapter 3 Secure Pace Web Server with SSL ... 3.1 Introduction to SSL

Tao, Lixin

231

Wind Power Forecasting Error Distributions over Multiple Timescales: Preprint  

SciTech Connect (OSTI)

In this paper, we examine the shape of the persistence model error distribution for ten different wind plants in the ERCOT system over multiple timescales. Comparisons are made between the experimental distribution shape and that of the normal distribution.

Hodge, B. M.; Milligan, M.

2011-03-01T23:59:59.000Z
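As a minimal sketch of the kind of comparison described above (a synthetic AR(1) series stands in for real plant output; none of this is the report's data or code), persistence errors can be formed at several horizons and checked against the normal shape, e.g. via excess kurtosis:

```python
# Persistence-forecast errors at several horizons for a synthetic wind-power series,
# summarized by standard deviation and excess kurtosis (zero for a normal distribution).
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = np.empty(n)
x[0] = 0.4
for t in range(1, n):                      # toy AR(1) stand-in for normalized plant output
    x[t] = np.clip(0.98 * x[t - 1] + 0.02 * 0.4 + rng.normal(0, 0.03), 0.0, 1.0)

for horizon in (1, 4, 24):                 # e.g. 1 h, 4 h, 24 h ahead
    err = x[horizon:] - x[:-horizon]       # persistence forecast: predict no change
    z = (err - err.mean()) / err.std()
    excess_kurtosis = np.mean(z**4) - 3.0
    print(f"horizon={horizon:3d}  std={err.std():.3f}  excess kurtosis={excess_kurtosis:+.2f}")
```

A non-zero excess kurtosis flags a departure from the normal reference shape that the abstract describes comparing against.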

232

Servo control booster system for minimizing following error  

DOE Patents [OSTI]

A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.

Wise, William L. (Mountain View, CA)

1985-01-01T23:59:59.000Z
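A minimal sketch of the "by exception" logic in this abstract; the plant model, gains, and numbers are illustrative assumptions, not taken from the patent.

```python
# Secondary position-feedback correction engages only when |command - response| reaches
# the feedback resolution increment delta_R; otherwise conventional control acts alone.
def servo_step(position, command, delta_R, k_primary=0.2, k_booster=0.6):
    error = command - position
    correction = k_primary * error          # conventional servo action
    if abs(error) >= delta_R:               # by exception: engage the second loop
        correction += k_booster * error
    return position + correction

position, delta_R = 0.0, 0.01
for _ in range(40):
    position = servo_step(position, command=1.0, delta_R=delta_R)
print(f"final following error = {1.0 - position:.4f} (target < {delta_R})")
```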

233

Model Error Correction for Linear Methods in PET Neuroreceptor Measurements  

E-Print Network [OSTI]

Model Error Correction for Linear Methods in PET Neuroreceptor Measurements Hongbin Guo (address: hguo1@asu.edu). Preprint submitted to NeuroImage, December 11, 2008. ... reached. A new

Renaut, Rosemary

234

A Posteriori Error Estimation for - Department of Mathematics ...  

E-Print Network [OSTI]

Oct 19, 2013 ... the “correct” Hilbert space the true flux $\mu^{-1}\nabla\times u$ lies in, to recover a ... The error heat map shows that the ZZ-patch recovery estimator leads.

Shuhao Cao, supervised by Professor Zhiqiang Cai

2013-10-31T23:59:59.000Z

235

Quantum error correcting codes based on privacy amplification  

E-Print Network [OSTI]

Calderbank-Shor-Steane (CSS) quantum error-correcting codes are based on pairs of classical codes which are mutually dual containing. Explicit constructions of such codes for large blocklengths and with good error correcting properties are not easy to find. In this paper we propose a construction of CSS codes which combines a classical code with a two-universal hash function. We show, using the results of Renner and Koenig, that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. While the bit-flip errors can be decoded as efficiently as the classical code used, the problem of efficiently decoding the phase-flip errors remains open.

Zhicheng Luo

2008-08-10T23:59:59.000Z

236

Rateless and rateless unequal error protection codes for Gaussian channels  

E-Print Network [OSTI]

In this thesis we examine two different rateless codes and create a rateless unequal error protection code, all for the additive white Gaussian noise (AWGN) channel. The two rateless codes are examined through both analysis ...

Boyle, Kevin P. (Kevin Patrick)

2007-01-01T23:59:59.000Z

237

Systematic errors in current quantum state tomography tools  

E-Print Network [OSTI]

Common tools for obtaining physical density matrices in experimental quantum state tomography are shown here to cause systematic errors. For example, using maximum likelihood or least squares optimization for state reconstruction, we observe a systematic underestimation of the fidelity and an overestimation of entanglement. A solution for this problem can be achieved by a linear evaluation of the data yielding reliable and computationally simple bounds including error bars.

Christian Schwemmer; Lukas Knips; Daniel Richart; Tobias Moroder; Matthias Kleinmann; Otfried Gühne; Harald Weinfurter

2014-07-22T23:59:59.000Z
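The reported bias is easy to reproduce in a toy setting. The sketch below is a single-qubit illustration of this editor's own construction (not the authors' multi-qubit analysis): linear inversion of simulated Pauli data estimates the fidelity without bias on average, while projecting each estimate onto the physical (positive semidefinite) set pulls the average down.

```python
# Compare fidelity estimates from linear inversion vs. projection to a physical state.
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
paulis = [sx, sy, sz]

psi = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])   # true (pure) state
rho_true = np.outer(psi, psi)
r_true = [np.real(np.trace(rho_true @ p)) for p in paulis]

def project_physical(rho):
    """Clip negative eigenvalues and renormalize (simple physicality enforcement)."""
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, 0, None)
    vals = vals / vals.sum()
    return (vecs * vals) @ vecs.conj().T

N = 100                                    # shots per Pauli setting
f_lin, f_phys = [], []
for _ in range(2000):
    # Finite-statistics estimates of <sigma_i> from +/-1 outcomes.
    r_hat = [2.0 * rng.binomial(N, (1 + r) / 2) / N - 1.0 for r in r_true]
    rho_lin = 0.5 * (I2 + sum(r * p for r, p in zip(r_hat, paulis)))
    f_lin.append(np.real(psi @ rho_lin @ psi))
    f_phys.append(np.real(psi @ project_physical(rho_lin) @ psi))

print(f"mean fidelity, linear inversion    : {np.mean(f_lin):.4f}")
print(f"mean fidelity, physical projection : {np.mean(f_phys):.4f}  (biased low)")
```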

238

An error correcting procedure for imperfect supervised, nonparametric classification  

E-Print Network [OSTI]

AN ERROR CORRECTING PROCEDURE FOR IMPERFECTLY SUPERVISED, NONPARAMETRIC CLASSIFICATION A Thesis by DENNIS RAY FERRELL Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree... (Head of Department) (Member) (Member) (Member) (Member) (Member) August 1973 ABSTRACT An Error Correcting Procedure For Imperfectly Supervised, Nonparametric Classification (August 1973) Dennis Ray Ferrell, B.S., Lamar University Directed by...

Ferrell, Dennis Ray

2012-06-07T23:59:59.000Z

239

Using doppler radar images to estimate aircraft navigational heading error  

DOE Patents [OSTI]

A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.

Doerry, Armin W. (Albuquerque, NM); Jordan, Jay D. (Albuquerque, NM); Kim, Theodore J. (Albuquerque, NM)

2012-07-03T23:59:59.000Z

240

Methodology for determining criteria for storing spent fuel in air  

SciTech Connect (OSTI)

Dry storage in an air atmosphere is a method being considered for spent light water reactor (LWR) fuel as an alternative to storage in an inert gas environment. However, methods to predict fuel integrity based on oxidation behavior of the fuel first must be evaluated. The linear cumulative damage method has been proposed as a technique for defining storage criteria. Analysis of limited nonconstant temperature data on nonirradiated fuel samples indicates that this approach yields conservative results for a strictly decreasing-temperature history. On the other hand, the description of damage accumulation in terms of remaining life concepts provides a more general framework for making predictions of failure. Accordingly, a methodology for adapting remaining life concepts to UO/sub 2/ oxidation has been developed at Pacific Northwest Laboratory. Both the linear cumulative damage and the remaining life methods were used to predict oxidation results for spent fuel in which the temperature was decreased with time to simulate the temperature history in a dry storage cask. The numerical input to the methods was based on oxidation data generated with nonirradiated UO/sub 2/ pellets. The calculated maximum allowable storage temperatures are strongly dependent on the temperature-time profile and emphasize the conservatism inherent in the linear cumulative damage model. Additional nonconstant temperature data for spent fuel are needed to both validate the proposed methods and to predict temperatures applicable to actual spent fuel storage.

Reid, C.R.; Gilbert, E.R.

1986-11-01T23:59:59.000Z
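To make the linear cumulative damage bookkeeping concrete, here is a sketch with an assumed Arrhenius time-to-failure law; the activation energy, prefactor, and temperature history are placeholders, not values from the report.

```python
# Miner-style linear cumulative damage over a decreasing-temperature storage history.
import math

def time_to_failure_hours(T_kelvin, A=1.0e-12, Q_over_R=20000.0):
    """Hypothetical Arrhenius law: t_f = A * exp(Q/(R*T)), in hours (placeholder parameters)."""
    return A * math.exp(Q_over_R / T_kelvin)

history = [(1000.0, 520.0), (2000.0, 480.0), (4000.0, 440.0), (8000.0, 400.0)]  # (hours, K)

damage = 0.0
for dt, T in history:
    damage += dt / time_to_failure_hours(T)        # fraction of "life" used in this step
    print(f"T = {T:5.1f} K  cumulative damage = {damage:.4f}")
print("allowable" if damage < 1.0 else "exceeds criterion (damage >= 1)")
```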



241

Small sample multiple testing with application to cDNA microarray data  

E-Print Network [OSTI]

Many tests have been developed for comparing means in a two-sample scenario. Microarray experiments lead to thousands of such comparisons in a single study. Several multiple testing procedures are available to control experiment-wise error...

Hintze, Eric Poole

2006-10-30T23:59:59.000Z
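For readers unfamiliar with experiment-wise (family-wise) error control, the Holm-Bonferroni step-down procedure is one standard option; the sketch below is generic and is not claimed to be the procedure developed in the thesis.

```python
# Holm-Bonferroni step-down control of the family-wise (experiment-wise) error rate.
def holm_bonferroni(p_values, alpha=0.05):
    """Return the indices of hypotheses rejected at family-wise error rate alpha."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    rejected = []
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (len(p_values) - rank):
            rejected.append(idx)
        else:
            break                      # step-down: stop at the first failure
    return rejected

p = [0.0002, 0.009, 0.012, 0.041, 0.20, 0.33]   # e.g. per-gene two-sample p-values
print(holm_bonferroni(p))              # -> [0, 1, 2] at alpha = 0.05
```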

242

E-Print Network 3.0 - age estimation Sample Search Results  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Sample search results for: age estimation. 1 Errors in fish ageing may result in biases in stock assessments and Summary: of fisheries. Estimates...

243

Exploratory factor analysis in behavior genetics research: Factor recovery with small sample sizes  

E-Print Network [OSTI]

Results of a Monte Carlo study of exploratory factor analysis demonstrate that in studies characterized by low sample sizes the population factor structure can be adequately recovered if communalities are high, model error ...

Preacher, K. J.; MacCallum, R. C.

2002-01-01T23:59:59.000Z

244

Investigations in the Monte-Carlo sampling for interconnected power systems reliability evaluation  

E-Print Network [OSTI]

. The new approach keeps track of the probability of the unclassified states to keep the error under a specified fraction of the loss of load probability. There are two methods to select a sample state from a set, uniform sampling and proportional sampling...

Feng, Chun

1993-01-01T23:59:59.000Z
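The flavor of the approach, stopping once the sampling error is a small fraction of the estimated loss-of-load probability, can be shown with a plain Monte Carlo sketch; the generator data, load, and 5% target are invented, and this is not the thesis's state-classification algorithm.

```python
# Sample generator outage states and estimate LOLP, stopping when the relative error is small.
import random
random.seed(0)

units = [(100.0, 0.05)] * 10 + [(50.0, 0.08)] * 6   # (capacity in MW, forced outage rate)
load = 1150.0                                       # MW, illustrative only

def loss_of_load():
    available = sum(cap for cap, q in units if random.random() > q)
    return 1 if available < load else 0

hits = trials = 0
while True:
    hits += loss_of_load()
    trials += 1
    if trials >= 1000 and hits >= 10:
        lolp = hits / trials
        rel_err = ((lolp * (1 - lolp)) / trials) ** 0.5 / lolp   # coefficient of variation
        if rel_err < 0.05:
            break
print(f"LOLP ~ {lolp:.4f} after {trials} samples (relative error < 5%)")
```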

245

Hard Data on Soft Errors: A Large-Scale Assessment of Real-World Error Rates in GPGPU  

E-Print Network [OSTI]

-GPGPU hardware in a controlled environment found no errors. However, our survey on Folding@home finds that carried out on over 50,000 GPUs on the Folding@home distributed computing network. MemtestG80

Pratt, Vaughan

246

UNFCCC-Consolidated baseline and monitoring methodology for landfill...  

Open Energy Info (EERE)

baseline and monitoring methodology for landfill gas project activities. Tool Summary. Name: UNFCCC-Consolidated baseline and monitoring...

247

Methodology for Assesment of Urban Water Planning Objectives  

E-Print Network [OSTI]

TR-51 1973 Methodology for Assessment of Urban Water Planning Objectives W.L. Meier B.M. Thornton Texas Water Resources Institute Texas A&M University ...

Meier, W. L.; Thornton, B. M.

248

Energy Efficiency Standards for Refrigerators in Brazil: A Methodology...  

Open Energy Info (EERE)

for Impact Evaluation. Tool Summary. Name: Energy Efficiency Standards for Refrigerators in Brazil: A Methodology for Impact Evaluation Focus...

249

Methodology for Estimating Reductions of GHG Emissions from Mosaic...  

Open Energy Info (EERE)

Methodology for Estimating Reductions of GHG Emissions from Mosaic Deforestation. Agency/Company/Organization: World Bank Sector: Land Focus Area: Forestry Topics: Co-benefits...

250

National Academies Criticality Methodology and Assessment Video (Text Version)  

Broader source: Energy.gov [DOE]

This is a text version of the "National Academies Criticality Methodology and Assessment" video presented at the Critical Materials Workshop, held on April 3, 2012 in Arlington, Virginia.

251

Survey of Transmission Cost Allocation Methodologies for Regional Transmission Organizations  

SciTech Connect (OSTI)

The report presents transmission cost allocation methodologies for reliability transmission projects, generation interconnection, and economic transmission projects for all Regional Transmission Organizations.

Fink, S.; Porter, K.; Mudd, C.; Rogers, J.

2011-02-01T23:59:59.000Z

252

assessment committee methodology: Topics by E-print Network  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Shinozuka, Masanobu 49 The Future of Natural Gas Supplementary Paper SP2.1 Natural Gas Resource Assessment Methodologies CiteSeer Summary: Techniques for estimation of...

253

Methodology for Carbon Accounting of Grouped Mosaic and Landscape...  

Open Energy Info (EERE)

Grouped Mosaic and Landscape-scale REDD Projects. Tool Summary. Name: Methodology for Carbon Accounting of Grouped Mosaic and Landscape-scale...

254

New Methodologies for Analysis of Premixed Charge Compression...  

Broader source: Energy.gov (indexed) [DOE]

New Methodologies for Analysis of Premixed Charge Compression Ignition Engines Salvador M. Aceves, Daniel L. Flowers, J. Ray Smith, Lee Davisson, Francisco Espinosa-Loza, Tim Ross,...

255

Microsphere estimates of blood flow: Methodological considerations  

SciTech Connect (OSTI)

The microsphere technique is a standard method for measuring blood flow in experimental animals. Sporadic reports have appeared outlining the limitations of this method. In this study the authors have systematically assessed the effect of blood withdrawals for reference sampling, microsphere numbers, and anesthesia on blood flow estimates using radioactive microspheres in dogs. Experiments were performed on 18 conscious and 12 anesthetized dogs. Four blood flow estimates were performed over 120 min using 1 × 10^6 microspheres each time. The effects of excessive numbers of microspheres, pentobarbital sodium anesthesia, and replacement of volume loss for reference samples with dextran 70 were assessed. In both conscious and anesthetized dogs a progressive decrease in gastric mucosal blood flow and cardiac output was observed over 120 min. This was also observed in the pancreas in conscious dogs. The major factor responsible for these changes was the volume loss due to the reference sample withdrawals. Replacement of the withdrawn blood with dextran 70 led to stable blood flows to all organs. The injection of excessive numbers of microspheres did not modify hemodynamics to a greater extent than did the injection of 4 million microspheres. Anesthesia exerted no influence on blood flow other than raising coronary flow. The authors conclude that although blood flow to the gastric mucosa and the pancreas is sensitive to the minor hemodynamic changes associated with the microsphere technique, replacement of volume loss for reference samples ensures stable blood flow to all organs over a 120-min period.

von Ritter, C.; Hinder, R.A.; Womack, W.; Bauerfeind, P.; Fimmel, C.J.; Kvietys, P.R.; Granger, D.N.; Blum, A.L. (Univ. of the Witwatersrand, Johannesburg (South Africa) Louisianna State Univ. Medical Center, Shreveport (USA) Universitaire Vaudois (Switzerland))

1988-02-01T23:59:59.000Z

256

E-Print Network 3.0 - abundance solid-state 33s Sample Search...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Sample search results for: abundance solid-state 33S. 1 Methodological developments of solid-state NMR and...

257

RAMS (Risk Analysis - Modular System) methodology  

SciTech Connect (OSTI)

The Risk Analysis - Modular System (RAMS) was developed to serve as a broad scope risk analysis tool for the Risk Assessment of the Hanford Mission (RAHM) studies. The RAHM element provides risk analysis support for Hanford Strategic Analysis and Mission Planning activities. The RAHM also provides risk analysis support for the Hanford 10-Year Plan development activities. The RAMS tool draws from a collection of specifically designed databases and modular risk analysis methodologies and models. RAMS is a flexible modular system that can be focused on targeted risk analysis needs. It is specifically designed to address risks associated with overall strategy, technical alternative, and "what if" questions regarding the Hanford cleanup mission. RAMS is set up to address both near-term and long-term risk issues. Consistency is very important for any comparative risk analysis, and RAMS is designed to efficiently and consistently compare risks and produce risk reduction estimates. There is a wide range of output information that can be generated by RAMS. These outputs can be detailed by individual contaminants, waste forms, transport pathways, exposure scenarios, individuals, populations, etc. However, they can also be in rolled-up form to support high-level strategy decisions.

Stenner, R.D.; Strenge, D.L.; Buck, J.W. [and others

1996-10-01T23:59:59.000Z

258

Error Monitoring: A Learning Strategy for Improving Academic Performance of LD Adolescents  

E-Print Network [OSTI]

Error monitoring, a learning strategy for detecting and correcting errors in written products, was taught to nine learning disabled adolescents. Students could detect and correct more errors after they received training ...

Schumaker, Jean B.; Deshler, Donald D.; Nolan, Susan; Clark, Frances L.; Alley, Gordon R.; Warner, Michael M.

1981-04-01T23:59:59.000Z

259

Assessing the Impact of Differential Genotyping Errors on Rare Variant Tests of Association  

E-Print Network [OSTI]

Genotyping errors are well-known to impact the power and type I error rate in single marker tests of association. Genotyping errors that happen according to the same process in cases and controls are known as non-differential ...

Fast, Shannon Marie

260

SHEAN (Simplified Human Error Analysis code) and automated THERP  

SciTech Connect (OSTI)

One of the most widely used human error analysis tools is THERP (Technique for Human Error Rate Prediction). Unfortunately, this tool has disadvantages. The Nuclear Regulatory Commission, realizing these drawbacks, commissioned Dr. Swain, the author of THERP, to create a simpler, more consistent tool for deriving human error rates. That effort produced the Accident Sequence Evaluation Program Human Reliability Analysis Procedure (ASEP), which is more conservative than THERP, but a valuable screening tool. ASEP involves answering simple questions about the scenario in question, and then looking up the appropriate human error rate in the indicated table (THERP also uses look-up tables, but four times as many). The advantages of ASEP are that human factors expertise is not required, and the training to use the method is minimal. Although not originally envisioned by Dr. Swain, the ASEP approach actually begs to be computerized. That WINCO did, calling the code SHEAN, for Simplified Human Error ANalysis. The code was done in TURBO Basic for IBM or IBM-compatible MS-DOS, for fast execution. WINCO is now in the process of comparing this code against THERP for various scenarios. This report provides a discussion of SHEAN.

Wilson, J.R.

1993-06-01T23:59:59.000Z
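The "answer a few simple questions, then look up a rate" flow maps naturally onto a table lookup. The sketch below shows only the mechanics; the questions and the numbers are placeholders, not the ASEP or THERP values.

```python
# Hypothetical screening table keyed on yes/no answers; values are illustrative only.
HEP_TABLE = {
    # (time pressure, written procedure used, independent check) -> screening HEP
    (True,  False, False): 0.1,
    (True,  True,  False): 0.03,
    (False, False, False): 0.01,
    (False, True,  False): 0.003,
    (False, True,  True):  0.001,
}

def lookup_hep(time_pressure, procedure_used, checked):
    """Return a screening human error probability; default to the most conservative entry."""
    return HEP_TABLE.get((time_pressure, procedure_used, checked), 0.1)

print(lookup_hep(time_pressure=False, procedure_used=True, checked=True))   # -> 0.001
```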



261

Design Methodology for Unmanned Aerial Vehicle (UAV) Team Coordination

E-Print Network [OSTI]

1 Design Methodology for Unmanned Aerial Vehicle (UAV) Team Coordination F.B. da Silva, S.D. Scott. E-mail: halab@mit.edu. 2 Design Methodology for Unmanned Aerial Vehicle (UAV) Team Coordination by F.B. da Silva, S.D. Scott, and M.L. Cummings. Executive Summary: Unmanned Aerial Vehicle (UAV) systems, despite

Cummings, Mary "Missy"

262

ORNL/TM-2008/105 Cost Methodology for Biomass  

E-Print Network [OSTI]

ORNL/TM-2008/105 Cost Methodology for Biomass Feedstocks: Herbaceous Crops and Agricultural Resource and Engineering Systems Environmental Sciences Division COST METHODOLOGY FOR BIOMASS FEEDSTOCKS ... 2.1.1 Integrated Biomass Supply Analysis and Logistics Model (IBSAL) ...

Pennycook, Steve

263

A Methodology for the Derivation of Parallel Programs  

E-Print Network [OSTI]

A Methodology for the Derivation of Parallel Programs Joy Goodman Department of Computer Science, University of Glasgow Abstract. I am currently developing a methodology for deriving parallel programs from equational reasoning, a more efficient parallel program in a variety of languages and styles can be derived

Goodman, Joy

264

A New Methodology for Aircraft HVDC Power Systems design  

E-Print Network [OSTI]

A New Methodology for Aircraft HVDC Power Systems design D. Hernández, M. Sautreuil, N. Retière, D. ... E-mail: olivier.sename@gipsa-lab.inpg.fr. Abstract - A new methodology for aircraft HVDC power systems design

Paris-Sud XI, Université de

265

Frontier efficiency methodologies to measure performance in the insurance industry  

E-Print Network [OSTI]

Frontier efficiency methodologies to measure performance in the insurance industry: Overview ... für Mathematik und Wirtschaftswissenschaften, UNIVERSITÄT ULM. Frontier efficiency methodologies to measure ... The purpose of this article is to provide an overview of frontier efficiency measurement in the insurance

Ulm, Universität

266

A Case Study applying Process and Project Alignment Methodology  

E-Print Network [OSTI]

A Case Study applying Process and Project Alignment Methodology Paula Ventura Martins & Alberto Rodrigues da Silva ... process and (2) analyze projects, starting an SPI effort. In order to evaluate ProPAM, a case study ...

da Silva, Alberto Rodrigues

267

Web Based Simulations for Virtual Scientific Experiment: Methodology and Tools  

E-Print Network [OSTI]

. These are the keywords: Web based simulation, Virtual Scientific Experiment, e-learning. 1. INTRODUCTION Until now ... Technology for Enhanced Learning ... Web Based Simulations for Virtual Scientific Experiment: Methodology and Tools Giovannina Albano

Paris-Sud XI, Université de

268

PDF Approach Hybrid Methodology Validation DEVELOPMENT OF A HYBRID  

E-Print Network [OSTI]

PDF Approach - Hybrid Methodology - Validation. DEVELOPMENT OF A HYBRID EULERIAN-LAGRANGIAN METHOD. CNRS / INPT / UPS PhD Defense, X. PIALAT. Hybrid Eulerian-Lagrangian Method (HELM). Introduction: Gas-Particle Flows, Applications, pollutant dispersion

Paris-Sud XI, Université de

269

Determining mutant spectra of three RNA viral samples using ultra-deep sequencing  

SciTech Connect (OSTI)

RNA viruses have extremely high mutation rates that enable the virus to adapt to new host environments and even jump from one species to another. As part of a viral transmission study, three viral samples collected from naturally infected animals were sequenced using Illumina paired-end technology at ultra-deep coverage. In order to determine the mutant spectra within the viral quasispecies, it is critical to understand the sequencing error rates and control for false positive calls of viral variants (point mutations). I will estimate the sequencing error rate from two control sequences and characterize the mutant spectra in the natural samples with this error rate.

Chen, H

2012-06-06T23:59:59.000Z
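A minimal version of the logic in the abstract, with invented counts: estimate the per-base error rate from a control sequence, then ask whether a candidate variant's read count is plausible under that rate (one-sided binomial test).

```python
# Accept a putative variant only if its read count is unlikely under the control error rate.
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), via the complement of the lower tail."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

error_rate = 2_150 / 1_000_000          # hypothetical mismatches per base in the control
depth, alt_reads = 5000, 18             # hypothetical candidate site: 18 variant reads at 5000x
p_value = binom_tail(alt_reads, depth, error_rate)
print(f"error rate = {error_rate:.2e}, P(>= {alt_reads} error reads) = {p_value:.2e}")
print("call variant" if p_value < 1e-6 else "consistent with sequencing error")
```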

270

Probabilistic fatigue methodology and wind turbine reliability  

SciTech Connect (OSTI)

Wind turbines subjected to highly irregular loadings due to wind, gravity, and gyroscopic effects are especially vulnerable to fatigue damage. The objective of this study is to develop and illustrate methods for the probabilistic analysis and design of fatigue-sensitive wind turbine components. A computer program (CYCLES) that estimates fatigue reliability of structural and mechanical components has been developed. A FORM/SORM analysis is used to compute failure probabilities and importance factors of the random variables. The limit state equation includes uncertainty in environmental loading, gross structural response, and local fatigue properties. Several techniques are shown to better study fatigue loads data. Common one-parameter models, such as the Rayleigh and exponential models are shown to produce dramatically different estimates of load distributions and fatigue damage. Improved fits may be achieved with the two-parameter Weibull model. High b values require better modeling of relatively large stress ranges; this is effectively done by matching at least two moments (Weibull) and better by matching still higher moments. For this purpose, a new, four-moment "generalized Weibull" model is introduced. Load and resistance factor design (LRFD) methodology for design against fatigue is proposed and demonstrated using data from two horizontal-axis wind turbines. To estimate fatigue damage, wind turbine blade loads have been represented by their first three statistical moments across a range of wind conditions. Based on the moments μ1...μ3, new "quadratic Weibull" load distribution models are introduced. The fatigue reliability is found to be notably affected by the choice of load distribution model.

Lange, C.H. [Stanford Univ., CA (United States)

1996-05-01T23:59:59.000Z
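To make the "two-parameter Weibull by moment matching" step concrete, here is a generic method-of-moments fit on synthetic stress-range data (a sketch, not the study's CYCLES code; the abstract's quadratic and generalized Weibull models match more moments than the two used here).

```python
# Method-of-moments fit of a two-parameter Weibull to synthetic load-range data.
import math
import random
random.seed(0)

data = [random.weibullvariate(10.0, 1.8) for _ in range(5000)]   # synthetic stress ranges
mean = sum(data) / len(data)
var = sum((x - mean) ** 2 for x in data) / len(data)
cv2 = var / mean**2                              # squared coefficient of variation

def cv2_of_shape(k):
    g1, g2 = math.gamma(1 + 1 / k), math.gamma(1 + 2 / k)
    return g2 / g1**2 - 1.0

# cv2_of_shape(k) decreases monotonically in k, so solve for the shape by bisection.
lo, hi = 0.1, 20.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if cv2_of_shape(mid) > cv2:
        lo = mid
    else:
        hi = mid
shape = 0.5 * (lo + hi)
scale = mean / math.gamma(1 + 1 / shape)
print(f"method-of-moments Weibull fit: shape ~ {shape:.2f}, scale ~ {scale:.2f}")
```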

271

Methodology for Scaling Fusion Power Plant Availability  

SciTech Connect (OSTI)

Normally in the U.S. fusion power plant conceptual design studies, the development of the plant availability and the plant capital and operating costs makes the implicit assumption that the plant is a 10th of a kind fusion power plant. This is in keeping with the DOE guidelines published in the 1970s, the PNL report [1], "Fusion Reactor Design Studies - Standard Accounts for Cost Estimates." This assumption specifically defines the level of the industry and technology maturity and eliminates the need to define the necessary research and development efforts and costs to construct a one of a kind or the first of a kind power plant. It also assumes all the "teething" problems have been solved and the plant can operate in the manner intended. The plant availability analysis assumes all maintenance actions have been refined and optimized by the operation of the prior nine or so plants. The actions are defined to be as quick and efficient as possible. This study will present a methodology to enable estimation of the availability of the one of a kind (one OAK) plant or first of a kind (1st OAK) plant. To clarify, one of the OAK facilities might be the pilot plant or the demo plant that is prototypical of the next generation power plant, but it is not a full-scale fusion power plant with all fully validated "mature" subsystems. The first OAK facility is truly the first commercial plant of a common design that represents the next generation plant design. However, its subsystems, maintenance equipment and procedures will continue to be refined to achieve the goals for the 10th OAK power plant.

Lester M. Waganer

2011-01-04T23:59:59.000Z

272

T-719:Apache mod_proxy_ajp HTTP Processing Error Lets Remote...  

Broader source: Energy.gov (indexed) [DOE]

T-719: Apache mod_proxy_ajp HTTP Processing Error Lets Remote Users Deny Service. September 16, 2011 -...

273

Heralded quantum gates with integrated error detection in optical cavities

E-Print Network [OSTI]

We propose and analyze heralded quantum gates between qubits in optical cavities. They employ an auxiliary qubit to report if a successful gate occurred. In this manner, the errors, which would have corrupted a deterministic gate, are converted into a non-unity probability of success: once successful, the gate has a much higher fidelity than a similar deterministic gate. Specifically, we describe a heralded, near-deterministic controlled phase gate (CZ-gate) with the conditional error arbitrarily close to zero and a success probability that approaches unity as the cooperativity of the system, C, becomes large. Furthermore, we describe an extension to a near-deterministic N-qubit Toffoli gate with a favorable error scaling. These gates can be directly employed in quantum repeater networks to facilitate near-ideal entanglement swapping, thus greatly speeding up the entanglement distribution.

J. Borregaard; P. Kómár; E. M. Kessler; A. S. Sørensen; M. D. Lukin

2015-01-05T23:59:59.000Z

274

Development of an integrated system for estimating human error probabilities  

SciTech Connect (OSTI)

This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project had as its main objective the development of a Human Reliability Analysis (HRA), knowledge-based expert system that would provide probabilistic estimates for potential human errors within various risk assessments, safety analysis reports, and hazard assessments. HRA identifies where human errors are most likely, estimates the error rate for individual tasks, and highlights the most beneficial areas for system improvements. This project accomplished three major tasks. First, several prominent HRA techniques and associated databases were collected and translated into an electronic format. Next, the project started a knowledge engineering phase where the expertise, i.e., the procedural rules and data, were extracted from those techniques and compiled into various modules. Finally, these modules, rules, and data were combined into a nearly complete HRA expert system.

Auflick, J.L.; Hahn, H.A.; Morzinski, J.A.

1998-12-01T23:59:59.000Z

275

Hard Data on Soft Errors: A Large-Scale Assessment of Real-World Error Rates in GPGPU  

E-Print Network [OSTI]

Graphics processing units (GPUs) are gaining widespread use in computational chemistry and other scientific simulation contexts because of their huge performance advantages relative to conventional CPUs. However, the reliability of GPUs in error-intolerant applications is largely unproven. In particular, a lack of error checking and correcting (ECC) capability in the memory subsystems of graphics cards has been cited as a hindrance to the acceptance of GPUs as high-performance coprocessors, but the impact of this design has not been previously quantified. In this article we present MemtestG80, our software for assessing memory error rates on NVIDIA G80 and GT200-architecture-based graphics cards. Furthermore, we present the results of a large-scale assessment of GPU error rate, conducted by running MemtestG80 on over 20,000 hosts on the Folding@home distributed computing network. Our control experiments on consumer-grade and dedicated-GPGPU hardware in a controlled environment found no errors. However, our su...

Haque, Imran S

2009-01-01T23:59:59.000Z

276

Full protection of superconducting qubit systems from coupling errors  

E-Print Network [OSTI]

Solid state qubits realized in superconducting circuits are potentially extremely scalable. However, strong decoherence may be transferred to the qubits by various elements of the circuits that couple individual qubits, particularly when coupling is implemented over long distances. We propose here an encoding that provides full protection against errors originating from these coupling elements, for a chain of superconducting qubits with a nearest neighbor anisotropic XY-interaction. The encoding is also seen to provide partial protection against errors deriving from general electronic noise.

M. J. Storcz; J. Vala; K. R. Brown; J. Kempe; F. K. Wilhelm; K. B. Whaley

2005-08-09T23:59:59.000Z

277

Error estimates and specification parameters for functional renormalization  

SciTech Connect (OSTI)

We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximated solutions by means of truncations do not only depend on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency independent cutoff function.

Schnoerr, David [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]; Boettcher, Igor, E-mail: I.Boettcher@thphys.uni-heidelberg.de [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]; Pawlowski, Jan M. [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung mbH, D-64291 Darmstadt (Germany)]; Wetterich, Christof [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]

2013-07-15T23:59:59.000Z

278

JLab SRF Cavity Fabrication Errors, Consequences and Lessons Learned  

SciTech Connect (OSTI)

Today, elliptical superconducting RF (SRF) cavities are preferably made from deep-drawn niobium sheets as pursued at Jefferson Laboratory (JLab). The fabrication of a cavity incorporates various cavity cell machining, trimming and electron beam welding (EBW) steps as well as surface chemistry that add to forming errors creating geometrical deviations of the cavity shape from its design. An analysis of in-house built cavities over the last years revealed significant errors in cavity production. Past fabrication flaws are described and lessons learned applied successfully to the most recent in-house series production of multi-cell cavities.

Frank Marhauser

2011-09-01T23:59:59.000Z

279

Fitting Pulsar Wind Tori. II. Error Analysis and Applications  

E-Print Network [OSTI]

We have applied the torus fitting procedure described in Ng & Romani (2004) to PWNe observations in the Chandra data archive. This study provides quantitative measurement of the PWN geometry, and we characterize the uncertainties in the fits, with statistical errors coming from the fit uncertainties and systematic errors estimated by varying the assumed fitting model. The symmetry axis $\Psi$ of the PWN is generally well determined, and highly model-independent. We often derive a robust value for the spin inclination $\zeta$. We briefly discuss the utility of these results in comparison with new radio and high energy pulse measurements.

C. -Y. Ng; Roger W. Romani

2007-10-23T23:59:59.000Z

280

Laser Phase Errors in Seeded Free Electron Lasers  

SciTech Connect (OSTI)

Harmonic seeding of free electron lasers has attracted significant attention as a method for producing transform-limited pulses in the soft x-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality and impede production of transform-limited pulses. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.

Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC

2012-04-17T23:59:59.000Z



281

Fitting Pulsar Wind Tori. II. Error Analysis and Applications  

E-Print Network [OSTI]

We have applied the torus fitting procedure described in Ng & Romani (2004) to PWNe observations in the Chandra data archive. This study provides quantitative measurement of the PWN geometry, and we characterize the uncertainties in the fits, with statistical errors coming from the fit uncertainties and systematic errors estimated by varying the assumed fitting model. The symmetry axis $\Psi$ of the PWN is generally well determined, and highly model-independent. We often derive a robust value for the spin inclination $\zeta$. We briefly discuss the utility of these results in comparison with new radio and high energy pulse measurements.

Ng, C -Y

2007-01-01T23:59:59.000Z

282

Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds  

SciTech Connect (OSTI)

Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n{sup ?}. Their rate is evaluated via Euler characteristic arguments and their distance using Z{sub 2}-systolic geometry. This construction answers a question of Zémor [“On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction,” in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259–273], who asked whether homological codes with such parameters could exist at all.

Guth, Larry, E-mail: lguth@math.mit.edu [Department of Mathematics, MIT, Cambridge, Massachusetts 02139 (United States); Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il [Institute of Mathematics, Hebrew University, Jerusalem 91904 (Israel)

2014-08-15T23:59:59.000Z

283

Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications  

SciTech Connect (OSTI)

The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A solution to the long-time integration problem of spectral chaos approaches; 4. A rigorous methodology to account for aleatory and epistemic uncertainties, to emphasize the most important variables via dimension reduction and dimension-adaptive refinement, and to support fusion with experimental data using Bayesian inference; 5. The application of novel methodologies to time-dependent reliability studies in wind turbine applications including a number of efforts relating to the uncertainty quantification in vertical-axis wind turbine applications. In this report, we summarize all accomplishments in the project (during the time period specified) focusing on advances in UQ algorithms and deployment efforts to the wind turbine application area. Detailed publications in each of these areas have also been completed and are available from the respective conference proceedings and journals as detailed in a later section.

Alonso, Juan J. [Stanford University; Iaccarino, Gianluca [Stanford University

2013-08-25T23:59:59.000Z
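One building block named above, the non-intrusive stochastic expansion, can be illustrated in one dimension with Gauss-Hermite collocation; the "model" below is an arbitrary placeholder, not anything from the project.

```python
# Propagate a Gaussian uncertain input through a black-box model with Gauss-Hermite quadrature.
import numpy as np

def model(x):
    """Black-box response to one uncertain input (placeholder function)."""
    return np.sin(x) + 0.1 * x**2

mu, sigma = 1.0, 0.3                                    # uncertain input X ~ N(mu, sigma^2)
nodes, weights = np.polynomial.hermite.hermgauss(12)    # quadrature for weight exp(-t^2)

# Change of variables x = mu + sqrt(2)*sigma*t maps the rule to the N(mu, sigma^2) measure.
x = mu + np.sqrt(2.0) * sigma * nodes
vals = model(x)
mean = np.sum(weights * vals) / np.sqrt(np.pi)
second = np.sum(weights * vals**2) / np.sqrt(np.pi)
print(f"E[Q] ~ {mean:.5f}, Var[Q] ~ {second - mean**2:.5f}  (12 model evaluations)")

# Cross-check with brute-force Monte Carlo (many more model evaluations).
rng = np.random.default_rng(0)
samples = model(rng.normal(mu, sigma, 200_000))
print(f"Monte Carlo: E[Q] ~ {samples.mean():.5f}, Var[Q] ~ {samples.var():.5f}")
```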

284

Considering Workload Input Variations in Error Coverage Estimation  

E-Print Network [OSTI]

different parts of the workload code to be executed different number of times. By using the results from in the workload input when estimating error detection coverage using fault injection are investigated. Results sequence based on results from fault injection experiments with another input sequence is presented

Karlsson, Johan

285

Stateful Testing: Finding More Errors in Code and Contracts  

E-Print Network [OSTI]

. The generated test cases are designed to violate the dynamically inferred contracts (invariants) characterizing the existing test suite. As a consequence, they are in a good position to detect new faults, and also ... Stateful Testing: Finding More Errors in Code and Contracts. Yi Wei · Hannes Roth · Carlo A. Furia

Meyer, Bertrand

286

Error magnitude in the conservation of energy in the  

E-Print Network [OSTI]

Appendix A Error magnitude in the conservation of energy in the approximate melt segregation scheme A.1 Conservation of energy The approximate melt segregation used in the thermochemical convection models of chapters 6 and 7 has an impact on the conservation of energy, because although 'segregated

van Thienen, Peter

287

The contour method cutting assumption: error minimization and correction  

SciTech Connect (OSTI)

The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented. The important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to put in a known profile of residual stresses.

Prime, Michael B [Los Alamos National Laboratory]; Kastengren, Alan L [ANL]

2010-01-01T23:59:59.000Z

288

MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS  

E-Print Network [OSTI]

MULTI-TARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS RALF HARTMANN Abstract. Important quantities in aerodynamic flow simulations are the aerodynamic force coefficients ... AMS subject classifications. 65N12, 65N15, 65N30 1. Introduction. In aerodynamic computations like compressible

Hartmann, Ralf

289

Error estimation and adaptive mesh refinement for aerodynamic flows  

E-Print Network [OSTI]

Error estimation and adaptive mesh refinement for aerodynamic flows Ralf Hartmann, Joachim Held ... goal-oriented mesh refinement for single and multiple aerodynamic force coefficients as well as residual-based mesh refinement applied to various three-dimensional laminar and turbulent aerodynamic test cases defined

Hartmann, Ralf

290

MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS  

E-Print Network [OSTI]

MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS RALF HARTMANN Abstract. Important quantities in aerodynamic flow simulations are the aerodynamic force coefficients including ... Navier-Stokes equations. AMS subject classifications. 65N12, 65N15, 65N30 1. Introduction. In aerodynamic

Hartmann, Ralf

291

Analysis of possible systematic errors in the Oslo method  

SciTech Connect (OSTI)

In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K. [Department of Physics, University of Oslo, N-0316 Oslo (Norway); Krticka, M. [Institute of Particle and Nuclear Physics, Charles University, Prague (Czech Republic); Betak, E. [Institute of Physics SAS, 84511 Bratislava (Slovakia); Faculty of Philosophy and Science, Silesian University, 74601 Opava (Czech Republic); Schiller, A.; Voinov, A. V. [Department of Physics and Astronomy, Ohio University, Athens, Ohio 45701 (United States)

2011-03-15T23:59:59.000Z

292

Multilayer Perceptron Error Surfaces: Visualization, Structure and Modelling  

E-Print Network [OSTI]

. This is commonly formulated as a multivariate non-linear optimization problem over a very high-dimensional space of analysis are not well-suited to this problem. Visualizing and describing the error surface are also three related methods. Firstly, Principal Component Analysis (PCA) is proposed as a method

Gallagher, Marcus

293

Multi-layer Perceptron Error Surfaces: Visualization, Structure and Modelling  

E-Print Network [OSTI]

. This is commonly formulated as a multivariate non-linear optimization problem over a very high-dimensional space of analysis are not well-suited to this problem. Visualizing and describing the error surface are also three related methods. Firstly, Principal Component Analysis (PCA) is proposed as a method

Gallagher, Marcus

294

Analysis of possible systematic errors in the Oslo method  

E-Print Network [OSTI]

In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of level density and gamma-ray transmission coefficient from a set of particle-gamma coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

A. C. Larsen; M. Guttormsen; M. Krticka; E. Betak; A. Bürger; A. Görgen; H. T. Nyhus; J. Rekstad; A. Schiller; S. Siem; H. K. Toft; G. M. Tveten; A. V. Voinov; K. Wikan

2012-11-27T23:59:59.000Z

295

Error Control Based Model Reduction for Parameter Optimization of Elliptic  

E-Print Network [OSTI]

of technical devices that rely on multiscale processes, such as fuel cells or batteries. As the solution ... Error Control Based Model Reduction for Parameter Optimization of Elliptic Homogenization Problems ... optimization of elliptic multiscale problems with macroscopic optimization functionals and microscopic material

296

Achievable Error Exponents for the Private Fingerprinting Game  

E-Print Network [OSTI]

Achievable Error Exponents for the Private Fingerprinting Game. Anelia Somekh-Baruch and Neri Merhav. ... a forgery of the data while aiming at erasing the fingerprints in order not to be detected. Their action ... have presented and analyzed a game-theoretic model of private fingerprinting systems in the presence

Merhav, Neri

297

Error suppression in Hamiltonian based quantum computation using energy penalties  

E-Print Network [OSTI]

We consider the use of quantum error detecting codes, together with energy penalties against leaving the codespace, as a method for suppressing environmentally induced errors in Hamiltonian based quantum computation. This method was introduced in [1] in the context of quantum adiabatic computation, but we consider it more generally. Specifically, we consider a computational Hamiltonian, which has been encoded using the logical qubits of a single-qubit error detecting code, coupled to an environment of qubits by interaction terms that act one-locally on the system. Energy penalty terms are added that penalize states outside of the codespace. We prove that in the limit of infinitely large penalties, one-local errors are completely suppressed, and we derive some bounds for the finite penalty case. Our proof technique involves exact integration of the Schrodinger equation, making no use of master equations or their assumptions. We perform long time numerical simulations on a small (one logical qubit) computational system coupled to an environment and the results suggest that the energy penalty method achieves even greater protection than our bounds indicate.

Adam D. Bookatz; Edward Farhi; Leo Zhou

2014-07-06T23:59:59.000Z
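Schematically (notation assumed here, not taken from the paper), the construction described in the abstract is a computational Hamiltonian acting on code qubits, an energy penalty against leaving the codespace, and a one-local system-bath coupling:

```latex
% Schematic form only; symbols are assumptions, not the paper's notation.
H \;=\; H_{\mathrm{comp}}
\;+\; E_P\,\bigl(I - P_{\mathcal{C}}\bigr)
\;+\; \lambda \sum_i V_i \otimes B_i ,
\qquad P_{\mathcal{C}} = \text{projector onto the codespace.}
```

Per the abstract, the one-local error terms are completely suppressed in the limit $E_P \to \infty$, with explicit bounds at finite penalty.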

298

RESOLVE Upgrades for on Line Lattice Error Analysis  

SciTech Connect (OSTI)

We have increased the speed and versatility of the orbit analysis process by adding a command file, or 'script' language, to RESOLVE. This command file feature enables us to automate data analysis procedures to detect lattice errors. We describe the RESOLVE command file and present examples of practical applications.

Lee, M.; Corbett, J.; White, G.; /SLAC; Zambre, Y.; /Unlisted

2011-08-25T23:59:59.000Z

299

Stereoscopic Light Stripe Scanning: Interference Rejection, Error Minimization and Calibration  

E-Print Network [OSTI]

This paper addresses the problem of rejecting interference due to secondary specular reflections, cross structure, acquisition delay, lack of error recovery, and incorrect modelling of measurement noise. We cause secondary reflections, edges and textures may have a stripe-like appearance, and cross-talk can

300

Effects of errors in the solar radius on helioseismic inferences  

E-Print Network [OSTI]

Frequencies of intermediate-degree f-modes of the Sun seem to indicate that the solar radius is smaller than what is normally used in constructing solar models. We investigate the possible consequences of an error in radius on results for solar structure obtained using helioseismic inversions. It is shown that solar sound speed will be overestimated if oscillation frequencies are inverted using reference models with a larger radius. Using solar models with radius of 695.78 Mm and new data sets, the base of the solar convection zone is estimated to be at radial distance of $0.7135\pm 0.0005$ of the solar radius. The helium abundance in the convection zone as determined using models with OPAL equation of state is $0.248\pm 0.001$, where the errors reflect the estimated systematic errors in the calculation, the statistical errors being much smaller. Assuming that the OPAL opacities used in the construction of the solar models are correct, the surface $Z/X$ is estimated to be $0.0245\pm 0.0006$.

Sarbani Basu

1997-12-09T23:59:59.000Z



301

Error field and magnetic diagnostic modeling for W7-X  

SciTech Connect (OSTI)

The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady state (30 minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign as bootstrap-current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.

Lazerson, Sam A. [PPPL]; Gates, David A. [PPPL]; NEILSON, GEORGE H. [PPPL]; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.

2014-07-01T23:59:59.000Z
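The SVD argument in this abstract can be mimicked on a toy response matrix; all numbers below are invented, and the point is only that nearly degenerate diagnostic signatures show up as small singular values, i.e. the profile variations that "go undetected".

```python
# Inspect singular values of a hypothetical parameter-to-diagnostic response matrix.
import numpy as np

rng = np.random.default_rng(2)
n_diagnostics, n_params = 24, 4   # params: e.g. toroidal current, stored energy, 2 profile shapes

R = np.zeros((n_diagnostics, n_params))
R[:, 0] = rng.normal(1.0, 0.2, n_diagnostics)              # strong response: toroidal current
R[:, 1] = rng.normal(0.7, 0.2, n_diagnostics)              # strong response: stored energy
profile = rng.normal(0.0, 0.05, n_diagnostics)
R[:, 2] = profile                                          # weak response: profile variation A
R[:, 3] = profile + rng.normal(0.0, 0.005, n_diagnostics)  # nearly the same signature

U, s, Vt = np.linalg.svd(R, full_matrices=False)
print("singular values:", np.round(s, 3))
print("well-constrained directions (s > 1% of max):", int(np.sum(s > 0.01 * s[0])))
```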

302

Designing Automation to Reduce Operator Errors Nancy G. Leveson  

E-Print Network [OSTI]

Designing Automation to Reduce Operator Errors Nancy G. Leveson Computer Science and Engineering University of Washington Everett Palmer NASA Ames Research Center Introduction Advanced automation has been ... of mode-related problems [SW95]. After studying accidents and incidents in the new, highly automated

Leveson, Nancy

303

MODELS FOR DIAGNOSING ROBOT ERROR SOURCES Louis J. Everett  

E-Print Network [OSTI]

of industrial robots and on some machine tools, calibration methods have significantly improved position ... MODELS FOR DIAGNOSING ROBOT ERROR SOURCES Louis J. Everett Mechanical Engineering Texas A&M ... that the somewhat ad-hoc modelling methods used for robot calibration, although satisfactory for improving accu...

Everett, Louis J.

304

Removing Systematic Errors from Rotating Shadowband Pyranometer Data Frank Vignola  

E-Print Network [OSTI]

Removing Systematic Errors from Rotating Shadowband Pyranometer Data. Frank Vignola, Solar Radiation … irradiance because they do not require manual adjustment of trackers. However, an RSP requires the use of solar cell based pyranometers which underestimate diffuse irradiance by 20–30% under clear sky

Oregon, University of

305

Two infinite families of nonadditive quantum error-correcting codes  

E-Print Network [OSTI]

We construct explicitly two infinite families of genuine nonadditive 1-error correcting quantum codes and prove that their coding subspaces are 50% larger than those of the optimal stabilizer codes of the same parameters via the linear programming bound. All these nonadditive codes can be characterized by a stabilizer-like structure and thus their encoding circuits can be designed in a straightforward manner.

Sixia Yu; Qing Chen; C. H. Oh

2009-01-14T23:59:59.000Z

306

Threshold error rates for the toric and surface codes  

E-Print Network [OSTI]

The surface code scheme for quantum computation features a 2d array of nearest-neighbor coupled qubits yet claims a threshold error rate approaching 1% (NJoP 9:199, 2007). This result was obtained for the toric code, from which the surface code is derived, and surpasses all other known codes restricted to 2d nearest-neighbor architectures by several orders of magnitude. We describe in detail an error correction procedure for the toric and surface codes, which is based on polynomial-time graph matching techniques and is efficiently implementable as the classical feed-forward processing step in a real quantum computer. By direct simulation of this error correction scheme, we determine the threshold error rates for the two codes (differing only in their boundary conditions) for both ideal and non-ideal syndrome extraction scenarios. We verify that the toric code has an asymptotic threshold of p = 15.5% under ideal syndrome extraction, and p = 7.8 × 10^-3 for the non-ideal case, in agreement with prior work. Simulations of the surface code indicate that the threshold is close to that of the toric code.
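To illustrate the matching-based decoding idea mentioned above, here is a hedged toy sketch (hypothetical code, not the authors' implementation): a 1D repetition code on a ring under independent bit flips, with syndrome defects paired by minimum-weight matching — the same principle the toric/surface-code decoder applies on a 2D lattice. The function name and the use of networkx are assumptions for illustration.

```python
import itertools
import random
import networkx as nx

def decode_repetition_ring(n, error_prob, rng=random):
    """Toy matching decoder: qubits sit on the n edges of a ring; a parity check
    on each vertex flags the endpoints (defects) of every chain of bit flips."""
    errors = [rng.random() < error_prob for _ in range(n)]
    syndrome = [errors[(i - 1) % n] ^ errors[i] for i in range(n)]
    defects = [i for i, s in enumerate(syndrome) if s]   # always an even number

    # Complete graph on defects, weighted by (negated) ring distance, so a
    # maximum-weight perfect matching is a minimum-total-distance pairing.
    g = nx.Graph()
    for a, b in itertools.combinations(defects, 2):
        d = abs(a - b)
        g.add_edge(a, b, weight=-min(d, n - d))
    matching = nx.max_weight_matching(g, maxcardinality=True)

    # Correction: flip the qubits along the shorter arc between each matched pair.
    correction = [False] * n
    for a, b in matching:
        lo, hi = min(a, b), max(a, b)
        arc = range(lo, hi) if (hi - lo) <= n - (hi - lo) \
            else list(range(hi, n)) + list(range(0, lo))
        for q in arc:
            correction[q] ^= True

    residual = [e ^ c for e, c in zip(errors, correction)]
    return any(residual)    # True = logical failure (residual wraps the ring)
```

Running this many times at several values of error_prob and locating the crossing point of the failure-rate curves is, in spirit, how thresholds like those quoted above are estimated by simulation.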

D. S. Wang; A. G. Fowler; A. M. Stephens; L. C. L. Hollenberg

2009-05-05T23:59:59.000Z

307

Quantum Error Correction of Continuous Variable States against Gaussian Noise  

E-Print Network [OSTI]

We describe a continuous variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.

T. C. Ralph

2011-05-22T23:59:59.000Z

308

Error Control of Iterative Linear Solvers for Integrated Groundwater Models  

E-Print Network [OSTI]

Error Control of Iterative Linear Solvers for Integrated Groundwater Models, by Matthew F. Dixon. … and presentation of GMRES performance benchmarking results. Introduction: As the groundwater model infrastructure … for integrated groundwater models, which are implicitly coupled to another model, such as surface water models

Bai, Zhaojun

309

Radiochemical Analysis Methodology for uranium Depletion Measurements  

SciTech Connect (OSTI)

This report provides sufficient material for a test sponsor with little or no radiochemistry background to understand and follow physics irradiation test program execution. Most irradiation test programs employ similar techniques and the general details provided here can be applied to the analysis of other irradiated sample types. Aspects of program management directly affecting analysis quality are also provided. This report is not an in-depth treatise on the vast field of radiochemical analysis techniques and related topics such as quality control. Instrumental technology is a very fast growing field and dramatic improvements are made each year, thus the instrumentation described in this report is no longer cutting edge technology. Much of the background material is still applicable and useful for the analysis of older experiments and also for subcontractors who still retain the older instrumentation.

Scatena-Wachel DE

2007-01-09T23:59:59.000Z

310

A-posteriori estimation and adaptive control of the error in the solution quantity of interest  

E-Print Network [OSTI]

A-posteriori estimation and adaptive control of the error in the quantity of interest. The major tool for the estimation of the error in the desired quantity is the splitting of the error into two components: the near-field or local error, and the far-field or pollution...

Datta, Dibyendu Kumar, Dd 1973-

1997-01-01T23:59:59.000Z

311

Quantum computing with nearest neighbor interactions and error rates over 1%  

E-Print Network [OSTI]

Large-scale quantum computation will only be achieved if experimentally implementable quantum error correction procedures are devised that can tolerate experimentally achievable error rates. We describe a quantum error correction procedure that requires only a 2-D square lattice of qubits that can interact with their nearest neighbors, yet can tolerate quantum gate error rates over 1%. The precise maximum tolerable error rate depends on the error model, and we calculate values in the range 1.1--1.4% for various physically reasonable models. Even the lowest value represents the highest threshold error rate calculated to date in a geometrically constrained setting, and a 50% improvement over the previous record.

David S. Wang; Austin G. Fowler; Lloyd C. L. Hollenberg

2010-09-20T23:59:59.000Z

312

A BASIS FOR MODIFYING THE TANK 12 COMPOSITE SAMPLING DESIGN  

SciTech Connect (OSTI)

The SRR sampling campaign to obtain residual solids material from the Savannah River Site (SRS) Tank Farm Tank 12 primary vessel resulted in obtaining appreciable material in all 6 planned source samples from the mound strata but only in 5 of the 6 planned source samples from the floor stratum. Consequently, the design of the compositing scheme presented in the Tank 12 Sampling and Analysis Plan, Pavletich (2014a), must be revised. Analytical Development of SRNL statistically evaluated the sampling uncertainty associated with using various compositing arrays and splitting one or more samples for compositing. The variance of the simple mean of composite sample concentrations is a reasonable standard for investigating the impact of the following sampling options. Composite Sample Design Option (a): Assign only 1 source sample from the floor stratum and 1 source sample from each of the mound strata to each of the composite samples. Each source sample contributes material to only 1 composite sample. Two source samples from the floor stratum would not be used. Composite Sample Design Option (b): Assign 2 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that one source sample from the floor must be used twice, with 2 composite samples sharing material from this particular source sample. All five source samples from the floor would be used. Composite Sample Design Option (c): Assign 3 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that several of the source samples from the floor stratum must be assigned to more than one composite sample. All 5 source samples from the floor would be used. Using fewer than 12 source samples will increase the sampling variability over that of the Basic Composite Sample Design, Pavletich (2013). Considering the impact on the variance of the simple mean of the composite sample concentrations, the recommendation is to construct each composite sample using four or five source samples. Although the variance using 5 source samples per composite sample (Composite Sample Design Option (c)) was slightly less than the variance using 4 source samples per composite sample (Composite Sample Design Option (b)), there is no practical difference between those variances. This does not consider that the measurement error variance, which is the same for all composite sample design options considered in this report, will further dilute any differences. Composite Sample Design Option (a) had the largest variance for the mean concentration in the three composite samples and should be avoided. These results are consistent with Pavletich (2014b), which utilizes a low elevation and a high elevation mound source sample and two floor source samples for each composite sample. Utilizing the four-source-samples-per-composite design, Pavletich (2014b) utilizes aliquots of Floor Sample 4 for two composite samples.
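As a hedged illustration of the variance comparison described above (a generic sketch, not SRNL's calculation; the source layout and assignment lists below are illustrative, not the actual Tank 12 assignments), the variance of the simple mean of equal-weight composites of independent source samples follows directly from the total weight each source receives:

```python
import numpy as np

def var_of_composite_mean(assignments, n_sources, sigma2=1.0):
    """Variance of the mean of composite concentrations when each composite is an
    equal-weight blend of its assigned source samples and the source-sample
    concentrations are independent with common variance sigma2."""
    n_comp = len(assignments)
    weights = np.zeros(n_sources)
    for sources in assignments:
        for j in sources:
            weights[j] += 1.0 / (len(sources) * n_comp)
    return sigma2 * np.sum(weights**2)

# Illustrative layout: sources 0-4 = floor, 5-7 = low mound, 8-10 = high mound.
no_sharing = [[0, 5, 8], [1, 6, 9], [2, 7, 10]]              # option (a)-like: 2 floor sources unused
shared_floor = [[0, 1, 5, 8], [2, 3, 6, 9], [4, 0, 7, 10]]   # option (b)-like: floor source 0 used twice
print(var_of_composite_mean(no_sharing, 11), var_of_composite_mean(shared_floor, 11))
```

In this toy example, reusing a floor source so that every available source contributes lowers the variance relative to leaving sources unused, which matches the qualitative conclusion reported above.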

Shine, G.

2014-11-25T23:59:59.000Z

313

Spent fuel management fee methodology and computer code user's manual.  

SciTech Connect (OSTI)

The methodology and computer model described here were developed to analyze the cash flows for the federal government taking title to and managing spent nuclear fuel. The methodology has been used by the US Department of Energy (DOE) to estimate the spent fuel disposal fee that will provide full cost recovery. Although the methodology was designed to analyze interim storage followed by spent fuel disposal, it could be used to calculate a fee for reprocessing spent fuel and disposing of the waste. The methodology consists of two phases. The first phase estimates government expenditures for spent fuel management. The second phase determines the fees that will result in revenues such that the government attains full cost recovery assuming various revenue collection philosophies. These two phases are discussed in detail in subsequent sections of this report. Each of the two phases constitute a computer module, called SPADE (SPent fuel Analysis and Disposal Economics) and FEAN (FEe ANalysis), respectively.
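As a minimal sketch of the full-cost-recovery idea described above (hypothetical code, not the SPADE or FEAN modules; the discount rate and cash-flow streams are placeholders), one revenue-collection philosophy is a constant fee per kilogram chosen so that the present value of fee revenues equals the present value of program expenditures:

```python
def full_cost_recovery_fee(costs, fuel_kg, discount_rate):
    """Constant $/kg fee such that discounted fee revenues cover discounted costs.

    costs[t]   -- program expenditures in year t (dollars)
    fuel_kg[t] -- spent fuel accepted in year t (kilograms)
    """
    pv = lambda stream: sum(x / (1.0 + discount_rate) ** t for t, x in enumerate(stream))
    return pv(costs) / pv(fuel_kg)

# Example with made-up numbers: rising costs recovered from a steady fuel stream.
fee = full_cost_recovery_fee([50e6, 80e6, 120e6], [2.0e6, 2.0e6, 2.0e6], 0.05)
```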

Engel, R.L.; White, M.K.

1982-01-01T23:59:59.000Z

314

A Quasi-Dynamic HVAC and Building Simulation Methodology  

E-Print Network [OSTI]

to their design and simulated in a computationally efficient manner. The methodology represents a system as interconnected, object-oriented sub-models known as components. Fluids and their local properties are modeled using discrete, incompressible objects known...

Davis, Clinton Paul

2012-07-16T23:59:59.000Z

315

A Methodology to Measure Retrofit Energy Savings in Commercial Buildings  

E-Print Network [OSTI]

This dissertation develops a methodology to measure retrofit energy savings and the uncertainty of the savings in commercial buildings. The functional forms of empirical models of cooling and heating energy use in commercial buildings are derived from an engineering...

Kissock, John Kelly

2008-01-16T23:59:59.000Z

316

Average System Cost Methodology : Administrator's Record of Decision.  

SciTech Connect (OSTI)

Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, where retail rate orders of regulatory agencies provide primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of separation procedures for subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and the requirement that each exchanging utility file under the new methodology within 20 days of implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)

United States. Bonneville Power Administration.

1984-06-01T23:59:59.000Z

317

AIAA 2001-1535 A SYMBOLIC METHODOLOGY FOR THE  

E-Print Network [OSTI]

, wind turbines, etc. Over the last decade the advent of composites and the pursuit to build lighter … is applied to a Horizontal-Axis Wind Turbine. The paper presents a new methodology for modeling

Patil, Mayuresh

318

A methodology to assess cost implications of automotive customization  

E-Print Network [OSTI]

This thesis focuses on determining the cost of customization for different components or groups of components of a car. It offers a methodology to estimate the manufacturing cost of a complex system such as a car. This ...

Fournier, Laëtitia

2005-01-01T23:59:59.000Z

319

Architectural Approaches, Concepts and Methodologies of Service Oriented Architecture  

E-Print Network [OSTI]

Architectural Approaches, Concepts and Methodologies of Service Oriented Architecture. Master Thesis. Contents excerpt: Introduction to Service Oriented Architecture; Evolution; Middleware (MOM); Definition of Service Oriented Architecture.

Moeller, Ralf

320

Good Experimental Methodologies and Simulation in Autonomous Mobile Robotics  

E-Print Network [OSTI]

Good Experimental Methodologies and Simulation in Autonomous Mobile Robotics. Francesco Amigoni and Viola Schiaffonati, Artificial Intelligence and Robotics Laboratory, Dipartimento di Elettronica e … to characterize analytically, as is often the case in autonomous mobile robotics. Although their importance

Amigoni, Francesco



321

Reservoir characterization using experimental design and response surface methodology  

E-Print Network [OSTI]

This research combines a statistical tool called experimental design/response surface methodology with reservoir modeling and flow simulation for the purpose of reservoir characterization. Very often, it requires a large number of reservoir simulation...

Parikh, Harshal

2004-09-30T23:59:59.000Z

322

Hanford Site baseline risk assessment methodology. Revision 2  

SciTech Connect (OSTI)

This methodology has been developed to prepare human health and environmental evaluations of risk as part of the Comprehensive Environmental Response, Compensation, and Liability Act remedial investigations (RIs) and the Resource Conservation and Recovery Act facility investigations (FIs) performed at the Hanford Site pursuant to the Hanford Federal Facility Agreement and Consent Order referred to as the Tri-Party Agreement. Development of the methodology has been undertaken so that Hanford Site risk assessments are consistent with current regulations and guidance, while providing direction on flexible, ambiguous, or undefined aspects of the guidance. The methodology identifies Site-specific risk assessment considerations and integrates them with approaches for evaluating human and environmental risk that can be factored into the risk assessment program supporting the Hanford Site cleanup mission. Consequently, the methodology will enhance the preparation and review of individual risk assessments at the Hanford Site.

Not Available

1993-03-01T23:59:59.000Z

323

DOE 2009 Geothermal Risk Analysis: Methodology and Results (Presentation)  

SciTech Connect (OSTI)

This presentation summarizes the methodology and results for a probabilistic risk analysis of research, development, and demonstration work, primarily for enhanced geothermal systems (EGS), sponsored by the U.S. Department of Energy Geothermal Technologies Program.

Young, K. R.; Augustine, C.; Anderson, A.

2010-02-01T23:59:59.000Z

324

Software Interoperability Tools: Standardized Capability-Profiling Methodology ISO16100  

E-Print Network [OSTI]

Software Interoperability Tools: Standardized Capability-Profiling Methodology ISO16100. Michiko …, qwang@seu.ac.jp. Abstract: The ISO 16100 series has been developed for manufacturing software … for developing general software applications including enterprise applications. In this paper, ISO 16100

Paris-Sud XI, Université de

325

Transmission Cost Allocation Methodologies for Regional Transmission Organizations  

SciTech Connect (OSTI)

This report describes transmission cost allocation methodologies for transmission projects developed to maintain or enhance reliability, to interconnect new generators, or to access new resources and enhance competitive bulk power markets, otherwise known as economic transmission projects.

Fink, S.; Rogers, J.; Porter, K.

2010-07-01T23:59:59.000Z

326

Hydrogen Goal-Setting Methodologies Report to Congress  

Fuel Cell Technologies Publication and Product Library (EERE)

DOE's Hydrogen Goal-Setting Methodologies Report to Congress summarizes the processes used to set Hydrogen Program goals and milestones. Published in August 2006, it fulfills the requirement under se

327

Protein MAS NMR methodology and structural analysis of protein assemblies  

E-Print Network [OSTI]

Methodological developments and applications of solid-state magic-angle spinning nuclear magnetic resonance (MAS NMR) spectroscopy, with particular emphasis on the analysis of protein structure, are described in this thesis. ...

Bayro, Marvin J

2010-01-01T23:59:59.000Z

328

Towards a Pan-European property index : methodological opportunities  

E-Print Network [OSTI]

This study examines the methodological opportunities of index construction for the Pan-European property index, whose release is planned by the company Investment Property Databank (IPD). To address the question of temporal ...

Helfer, Friederike, 1976-

2004-01-01T23:59:59.000Z

329

Proposed Methodology for LEED Baseline Refrigeration Modeling (Presentation)  

SciTech Connect (OSTI)

This PowerPoint presentation summarizes a proposed methodology for LEED baseline refrigeration modeling. The presentation discusses why refrigeration modeling is important, the inputs of energy models, resources, reference building model cases, baseline model highlights, example savings calculations and results.

Deru, M.

2011-02-01T23:59:59.000Z

330

Combining Modeling Methodologies for Improved Understanding of Smart Material Characteristics  

E-Print Network [OSTI]

Combining Modeling Methodologies for Improved Understanding of Smart Material Characteristics. … Material Systems and Structures, February 2, 1998. Abstract: Smart materials are complex materials … performance capabilities but the synergistic response of the smart material and companion structure. Behavior

Lindner, Douglas K.

331

A Methodology for Automated Verification of Rosetta Specification Transformations  

E-Print Network [OSTI]

particular semantic vocabulary and modeling style. The following dissertation proposes a framework, semantics and methodology for automated verification of safety preservation over specification transformations between domains. Utilizing the ideas of lattice...

Lohoefener, Jennifer Lee

2011-04-11T23:59:59.000Z

332

ESPC IDIQ Contract Sample  

Broader source: Energy.gov [DOE]

Document displays a sample indefinite delivery, indefinite quantity (IDIQ) energy savings performance contract (ESPC).

333

Economic Methodology for South Texas Irrigation Projects - RGIDECON  

E-Print Network [OSTI]

Economic Methodology for South Texas Irrigation Projects – RGIDECON©. TR-203, October 2002. M. Edward Rister, Ronald D. Lacewell, John R. C. Robinson, John R. Ellis, Allen W. Sturdivant, Department of Agricultural Economics, Texas Agricultural Experiment… … free component for time preference, a risk premium, and an inflation premium (Rister et al. 1999). The relationship between these three components is considered multiplicative (Leatham; Hamilton), i.e., the overall...
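One common way to write the multiplicative relationship described above (an illustrative form; the symbols are placeholders, not the report's notation) is

$$(1 + i) \;=\; (1 + r_{\text{time}})\,(1 + r_{\text{risk}})\,(1 + r_{\text{inflation}}),$$

so that, for example, a 2% time-preference component, a 1% risk premium, and a 3% inflation premium combine to an overall discount rate of $i = 1.02 \times 1.01 \times 1.03 - 1 \approx 6.1\%$.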

Ellis, John R.; Robinson, John R.C.; Sturdivant, Allen W.; Lacewell, Ronald D.; Rister, M. Edward

334

A methodological approach to the complexity measurement of software designs  

E-Print Network [OSTI]

A Methodological Approach to the Complexity Measurement of Software Designs. A thesis by Clay Edwin Williams, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, December 1990. Major subject: Computer Science.

Williams, Clay Edwin

1990-01-01T23:59:59.000Z

335

Economic and Financial Methodology for South Texas Irrigation Projects – RGIDECON©  

E-Print Network [OSTI]

Economic and Financial Methodology for South Texas Irrigation Projects – RGIDECON©, August 2009. … agencies; Debbie Helstrom, Jeff Walker, and Nick Palacios. These engineers with the Texas Water Development Board (TWDB) have provided valuable feedback on the methodology and data as well as insights on accommodating the requirements… References cited include: Helstrom, Debbie, Project Engineer, Texas Water Development Board, Austin, TX, personal communications, Spring–Summer 2002; Infoplease.com, "Conversion Factors," © 2002 Family Education Network.

Rister, M. Edward; Rogers, Callie S.; Lacewell, Ronald; Robinson, John; Ellis, John; Sturdivant, Allen

336

A methodology of mathematical models with an application  

E-Print Network [OSTI]

A Methodology of Mathematical Models with an Application. A thesis by Richard Brian Wood, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of Master of Science, December 1972. Major subject: Mathematics. Abstract: A...

Wood, Richard Brian

2012-06-07T23:59:59.000Z

337

Comparison of Wind Power and Load Forecasting Error Distributions: Preprint  

SciTech Connect (OSTI)

The introduction of large amounts of variable and uncertain power sources, such as wind power, into the electricity grid presents a number of challenges for system operations. One issue involves the uncertainty associated with scheduling power that wind will supply in future timeframes. However, this is not an entirely new challenge; load is also variable and uncertain, and is strongly influenced by weather patterns. In this work we make a comparison between the day-ahead forecasting errors encountered in wind power forecasting and load forecasting. The study examines the distribution of errors from operational forecasting systems in two different Independent System Operator (ISO) regions for both wind power and load forecasts at the day-ahead timeframe. The day-ahead timescale is critical in power system operations because it serves the unit commitment function for slow-starting conventional generators.
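A hedged sketch of the kind of comparison described above (illustrative code, not the paper's analysis; the variable names are placeholders): summarizing the day-ahead error distributions for wind power and load so their shapes can be compared directly.

```python
import numpy as np
from scipy import stats

def error_summary(forecast, actual):
    """Summary statistics of day-ahead forecast errors (actual minus forecast)."""
    err = np.asarray(actual, dtype=float) - np.asarray(forecast, dtype=float)
    return {
        "mean": err.mean(),
        "std": err.std(ddof=1),
        "skewness": stats.skew(err),
        "excess_kurtosis": stats.kurtosis(err),  # zero for a Gaussian
    }

# wind_fc/wind_obs and load_fc/load_obs would come from the ISOs' operational archives;
# comparing the two summaries shows, e.g., whether wind errors are heavier-tailed than load errors.
```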

Hodge, B. M.; Florita, A.; Orwig, K.; Lew, D.; Milligan, M.

2012-07-01T23:59:59.000Z

338

Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications  

E-Print Network [OSTI]

…) as $p_{W,Y,Z}(w, y, z; \theta, \eta_1, \eta_2, \eta_3)$, which equals $\int p_{W|X,Z}(w|x, z)\,\eta_1(x, z)\,\eta_2\{y - m(x, z; \theta), x, z\}\,\eta_3(z)\,dx$, (2.1) where $\theta$ is the finite p-dimensional parameter of interest, $\eta_1(x, z) \equiv p_{X|Z}(x|z)$, $\eta_2(\epsilon, x, z) \equiv p_{\epsilon|X,Z}(\epsilon|x, z)$, and $\eta_3(z) \equiv p_Z(z)$ are infinite-dimensional nuisance parameters. Doing so, we see that $p_{W,Y,Z}$, the RMM with measurement error, is tightly linked to the RMM without measurement error with probability density expressed as $p_{X,Y,Z} \propto \eta_1(x, z)\,\eta_2\{y - m(x, z; \theta), x, z\}\,\eta_3(z)$. The lack...

Garcia, Tanya

2012-10-19T23:59:59.000Z

339

Method and system for reducing errors in vehicle weighing systems  

DOE Patents [OSTI]

A method and system (10, 23) for determining vehicle weight to a precision of <0.1%, uses a plurality of weight sensing elements (23), a computer (10) for reading in weighing data for a vehicle (25) and produces a dataset representing the total weight of a vehicle via programming (40-53) that is executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing, (b) by determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation from the weighing data; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% in the vehicle weight.
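A minimal sketch of the mode-removal idea in steps (a)-(d) above (a hypothetical illustration, not the patented algorithm; it assumes a uniformly sampled numpy record and removes one dominant oscillatory mode per pass via an FFT peak and a least-squares sinusoid fit):

```python
import numpy as np

def remove_dominant_oscillation(w, dt):
    """Subtract the strongest non-DC sinusoid from a weighing record w."""
    n = len(w)
    spec = np.fft.rfft(w - w.mean())
    k = np.argmax(np.abs(spec[1:])) + 1            # dominant non-DC frequency bin
    f = k / (n * dt)
    t = np.arange(n) * dt
    design = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
    coeffs, *_ = np.linalg.lstsq(design, w - w.mean(), rcond=None)
    return w - design @ coeffs

def estimate_weight(w, dt, rel_tol=1e-3, max_modes=5):
    """Iteratively strip oscillatory modes until the scatter is below 0.1% of the mean."""
    for _ in range(max_modes):
        if w.std() / abs(w.mean()) < rel_tol:
            break
        w = remove_dominant_oscillation(w, dt)
    return w.mean()
```

Iterating until the residual scatter about the mean drops below 0.1% mirrors the stopping criterion described in the patent abstract.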

Hively, Lee M. (Philadelphia, TN); Abercrombie, Robert K. (Knoxville, TN)

2010-08-24T23:59:59.000Z

340

Error message recording and reporting in the SLC control system  

SciTech Connect (OSTI)

Error or information messages that are signaled by control software either in the VAX host computer or the local microprocessor clusters are handled by a dedicated VAX process (PARANOIA). Messages are recorded on disk for further analysis and displayed at the appropriate console. Another VAX process (ERRLOG) can be used to sort, list and histogram various categories of messages. The functions performed by these processes and the algorithms used are discussed.

Spencer, N.; Bogart, J.; Phinney, N.; Thompson, K.

1985-10-01T23:59:59.000Z



341

Error message recording and reporting in the SLC control system  

SciTech Connect (OSTI)

Error or information messages that are signaled by control software either in the VAX host computer or the local microprocessor clusters are handled by a dedicated VAX process (PARANOIA). Messages are recorded on disk for further analysis and displayed at the appropriate console. Another VAX process (ERRLOG) can be used to sort, list and histogram various categories of messages. The functions performed by these processes and the algorithms used are discussed.

Spencer, N.; Bogart, J.; Phinney, N.; Thompson, K.

1985-04-01T23:59:59.000Z

342

Topics in measurement error and missing data problems  

E-Print Network [OSTI]

reasons. In this research, the impact of missing genotypes is investigated for high resolution combined linkage and association mapping of quantitative trait loci (QTL). We assume that the genotype data are missing completely at random (MCAR). Two... and asymptotic properties. In the genetics study, a new method is proposed to account for the missing genotype in a combined linkage and association study. We have concluded that this method does not improve power but it will provide better type I error rates...

Liu, Lian

2009-05-15T23:59:59.000Z

343

Magnetic error analysis of recycler pbar injection transfer line  

SciTech Connect (OSTI)

Detailed study of Fermilab Recycler Ring anti-proton injection line became feasible with its BPM system upgrade, though the beamline has been in existence and operational since year 2000. Previous attempts were not fruitful due to limitations in the BPM system. Among the objectives are the assessment of beamline optics and the presence of error fields. In particular the field region of the permanent Lambertson magnets at both ends of R22 transfer line will be scrutinized.

Yang, M.J.; /Fermilab

2007-06-01T23:59:59.000Z

344

Error rate and power dissipation in nano-logic devices  

E-Print Network [OSTI]

Error Rate and Power Dissipation in Nano-Logic Devices. (May 2004) Jong Un Kim, B.S.; M.S.; Ph.D., Seoul National University. Chair of Advisory Committee: Dr. Laszlo B. Kish. Abstract: Current-controlled logic and single electron logic processors have been investigated with respect...

Kim, Jong Un

2004-01-01T23:59:59.000Z

345

A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods  

SciTech Connect (OSTI)

In the past several years, several international organizations have begun to collect data on human performance in nuclear power plant simulators. The data collected provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this paper, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
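A hedged sketch of the general Bayesian updating idea described above (not the paper's specific model or the SPAR-H values; the prior-strength parameter is an assumption for illustration): treat an existing HRA human error probability as the mean of a Beta prior and update it with simulator outcomes via beta-binomial conjugacy.

```python
def update_hep(prior_mean, prior_strength, errors, trials):
    """Posterior mean human error probability after observing `errors` failures
    in `trials` simulator runs, starting from a Beta prior whose mean is the
    HRA-assigned HEP and whose pseudo-sample size is `prior_strength`."""
    a0 = prior_mean * prior_strength
    b0 = (1.0 - prior_mean) * prior_strength
    a, b = a0 + errors, b0 + (trials - errors)
    return a / (a + b)

# Example: an HRA-assigned HEP of 1e-2 held with modest confidence, then 2 failures in 50 runs.
posterior_hep = update_hep(prior_mean=1e-2, prior_strength=20, errors=2, trials=50)
```

The sparser the simulator data, the more the posterior stays near the HRA prior, which is what makes this kind of update usable even with limited observations.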

Katrinia M. Groth; Curtis L. Smith; Laura P. Swiler

2014-08-01T23:59:59.000Z

346

Rain sampling device  

DOE Patents [OSTI]

The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of the precipitation from the chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device. 11 figures.

Nelson, D.A.; Tomich, S.D.; Glover, D.W.; Allen, E.V.; Hales, J.M.; Dana, M.T.

1991-05-14T23:59:59.000Z

347

Rain sampling device  

DOE Patents [OSTI]

The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of said precipitation from said chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device.

Nelson, Danny A. (Richland, WA); Tomich, Stanley D. (Richland, WA); Glover, Donald W. (Prosser, WA); Allen, Errol V. (Benton City, WA); Hales, Jeremy M. (Kennewick, WA); Dana, Marshall T. (Richland, WA)

1991-01-01T23:59:59.000Z

348

Runtime Detection of C-Style Errors in UPC Code  

SciTech Connect (OSTI)

Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.

Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D

2011-09-29T23:59:59.000Z

349

Transuranic waste characterization sampling and analysis methods manual  

SciTech Connect (OSTI)

The Transuranic Waste Characterization Sampling and Analysis Methods Manual (Methods Manual) provides a unified source of information on the sampling and analytical techniques that enable Department of Energy (DOE) facilities to comply with the requirements established in the current revision of the Transuranic Waste Characterization Quality Assurance Program Plan (QAPP) for the Waste Isolation Pilot Plant (WIPP) Transuranic (TRU) Waste Characterization Program (the Program). This Methods Manual includes all of the testing, sampling, and analytical methodologies accepted by DOE for use in implementing the Program requirements specified in the QAPP.

NONE

1995-05-01T23:59:59.000Z

350

On the efficiency of nondegenerate quantum error correction codes for Pauli channels  

E-Print Network [OSTI]

We examine the efficiency of pure, nondegenerate quantum-error correction-codes for Pauli channels. Specifically, we investigate if correction of multiple errors in a block is more efficient than using a code that only corrects one error per block. Block coding with multiple-error correction cannot increase the efficiency when the qubit error-probability is below a certain value and the code size fixed. More surprisingly, existing multiple-error correction codes with a code length equal or less than 256 qubits have lower efficiency than the optimal single-error correcting codes for any value of the qubit error-probability. We also investigate how efficient various proposed nondegenerate single-error correcting codes are compared to the limit set by the code redundancy and by the necessary conditions for hypothetically existing nondegenerate codes. We find that existing codes are close to optimal.
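The "limit set by the code redundancy" for hypothetically existing nondegenerate codes mentioned above is the quantum Hamming bound; a small check of it is sketched below (illustrative code, not the paper's calculation):

```python
from math import comb

def satisfies_quantum_hamming_bound(n, k, t):
    """Nondegenerate [[n, k]] codes correcting t errors must satisfy
       sum_{j=0..t} 3^j * C(n, j) * 2^k <= 2^n."""
    return sum(3**j * comb(n, j) for j in range(t + 1)) * 2**k <= 2**n

# The single-error-correcting [[5, 1]] code saturates the bound exactly,
assert satisfies_quantum_hamming_bound(5, 1, 1)
# whereas no nondegenerate [[4, 1]] single-error-correcting code can exist.
assert not satisfies_quantum_hamming_bound(4, 1, 1)
```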

Gunnar Bjork; Jonas Almlof; Isabel Sainz

2009-05-19T23:59:59.000Z

351

Clean Development Mechanism agricultural methodologies could help California to achieve AB 32 goals  

E-Print Network [OSTI]

… agricultural methodologies could help California to achieve … Its methodologies can help inform the implementation of … project methodologies could help California realize its …

Dinar, Ariel; Larson, Donald F; Frisbie, J. Aapris

2012-01-01T23:59:59.000Z

352

Shannon Entropy Based Time-Dependent Deterministic Sampling for Efficient "On-the-Fly" Quantum Dynamics  

E-Print Network [OSTI]

Shannon Entropy Based Time-Dependent Deterministic Sampling for Efficient "On-the-Fly" Quantum Dynamics. United States. Received October 14, 2010. … methodologies employed in gas-phase and condensed-phase chemical dynamics. When utilized, the Born… Abstract: A new set of time-dependent deterministic sampling

Iyengar, Srinivasan S.

353

Type I error and power of the mean and covariance structure confirmatory factor analysis for differential item functioning detection: Methodological issues and resolutions  

E-Print Network [OSTI]

… and latent constructs (i.e., traits, denoted by $\xi$). In the MACS model, the observed response $x_i$ to an item i (i = 1, …, p) is represented as a linear function of an intercept $\tau_i$, latent trait variables $\xi_j$ (j = 1, …, m), and a unique … is a p × 1 vector of observed responses (in group g), $\tau_g$ is a p × 1 vector of intercepts, $\xi_g$ is an m × 1 vector of latent trait variables, $\Lambda_g$ is a p × m matrix of factor loadings, and $\delta_g$ is a p × 1 vector of unique factor...
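Consistent with the vector definitions above, the group-g MACS measurement model can be written compactly as (standard notation, shown here for clarity rather than quoted from the dissertation)

$$\mathbf{x}_g \;=\; \boldsymbol{\tau}_g + \boldsymbol{\Lambda}_g\,\boldsymbol{\xi}_g + \boldsymbol{\delta}_g,$$

where differential item functioning corresponds to between-group differences in the intercepts $\boldsymbol{\tau}_g$ or the loadings $\boldsymbol{\Lambda}_g$.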

Lee, Jaehoon

2009-01-01T23:59:59.000Z

354

COMPUTER SCIENCE SAMPLE PROGRAM  

E-Print Network [OSTI]

COMPUTER SCIENCE SAMPLE PROGRAM (First Math Course: MATH 198). This sample program suggests one way … Courses listed include CS 180: Foundations of Computer Science I; CS 181: Foundations of Computer Science II; CS 191 …

Gering, Jon C.

355

Reliable random error estimation in the measurement of line-strength indices  

E-Print Network [OSTI]

We present a new set of accurate formulae for the computation of random errors in the measurement of atomic and molecular indices. The new expressions are in excellent agreement with numerical simulations. We have found that, in some cases, the use of approximated equations can give misleading line-strength index errors. It is important to note that accurate errors can only be achieved after a full control of the error propagation throughout the data reduction with a parallel processing of data and error frames. Finally, simple recipes for the estimation of the required signal-to-noise ratio to achieve a fixed index error are presented.
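As a hedged illustration of the kind of recipe the abstract refers to (a first-order scaling argument, not the paper's exact formulae): since the random error of a line-strength index scales roughly inversely with the signal-to-noise ratio per angstrom,

$$\sigma[I] \;\propto\; \frac{1}{SN}\,, \qquad SN_{\rm required} \;\simeq\; SN_0\,\frac{\sigma_0}{\sigma_{\rm target}}\,,$$

a measurement yielding error $\sigma_0$ at signal-to-noise $SN_0$ implies the $SN$ needed to reach a target error $\sigma_{\rm target}$ by simple rescaling.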

N. Cardiel; J. Gorgas; J. Cenarro; J. J. Gonzalez

1997-06-12T23:59:59.000Z

356

Transuranic waste characterization sampling and analysis plan  

SciTech Connect (OSTI)

Los Alamos National Laboratory (the Laboratory) is located approximately 25 miles northwest of Santa Fe, New Mexico, situated on the Pajarito Plateau. Technical Area 54 (TA-54), one of the Laboratory's many technical areas, is a radioactive and hazardous waste management and disposal area located within the Laboratory's boundaries. The purpose of this transuranic waste characterization, sampling, and analysis plan (CSAP) is to provide a methodology for identifying, characterizing, and sampling approximately 25,000 containers of transuranic waste stored at Pads 1, 2, and 4, Dome 48, and the Fiberglass Reinforced Plywood Box Dome at TA-54, Area G, of the Laboratory. Transuranic waste currently stored at Area G was generated primarily from research and development activities, processing and recovery operations, and decontamination and decommissioning projects. This document was created to facilitate compliance with several regulatory requirements and program drivers that are relevant to waste management at the Laboratory, including concerns of the New Mexico Environment Department.

NONE

1994-12-31T23:59:59.000Z

357

An Integrated Safety Assessment Methodology for Generation IV Nuclear Systems  

SciTech Connect (OSTI)

The Generation IV International Forum (GIF) Risk and Safety Working Group (RSWG) was created to develop an effective approach for the safety of Generation IV advanced nuclear energy systems. Early work of the RSWG focused on defining a safety philosophy founded on lessons learned from current and prior generations of nuclear technologies, and on identifying technology characteristics that may help achieve Generation IV safety goals. More recent RSWG work has focused on the definition of an integrated safety assessment methodology for evaluating the safety of Generation IV systems. The methodology, tentatively called ISAM, is an integrated “toolkit” consisting of analytical techniques that are available and matched to appropriate stages of Generation IV system concept development. The integrated methodology is intended to yield safety-related insights that help actively drive the evolving design throughout the technology development cycle, potentially resulting in enhanced safety, reduced costs, and shortened development time.

Timothy J. Leahy

2010-06-01T23:59:59.000Z

358

A Methodology for the Neutronics Design of Space Nuclear Reactors  

SciTech Connect (OSTI)

A methodology for the neutronics design of space power reactors is presented. This methodology involves balancing the competing requirements of having sufficient excess reactivity for the desired lifetime, keeping the reactor subcritical at launch and during submersion accidents, and providing sufficient control over the lifetime of the reactor. These requirements are addressed by three reactivity values for a given reactor design: the excess reactivity at beginning of mission, the negative reactivity at shutdown, and the negative reactivity margin in submersion accidents. These reactivity values define the control worth and the safety worth in submersion accidents, used for evaluating the merit of a proposed reactor type and design. The Heat Pipe-Segmented Thermoelectric Module Converters space reactor core design is evaluated and modified based on the proposed methodology. The final reactor core design has sufficient excess reactivity for 10 years of nominal operation at 1.82 MW of fission power and is subcritical at launch and in all water submersion accidents.

King, Jeffrey C.; El-Genk, Mohamed S. [Institute for Space and Nuclear Power Studies, University of New Mexico, Albuquerque, NM 87131 (United States); Chemical and Nuclear Engineering Department, University of New Mexico, Albuquerque, NM 87131 (United States)

2004-02-04T23:59:59.000Z

359

A methodology to identify material properties in layered visoelastic halfspaces  

E-Print Network [OSTI]

[Figure residue: block diagrams of (a) a forward model with output error, (b) an inverse model with input error, and (c) a generalized model, each driven by an unknown system with noise; and Figure 4, a schematic of sensors 0-6 over Layers 1-3 and a halfspace with complex moduli of the form E* = E'(1 + iδ), assuming displacements vary linearly within each sublayer.]

Torpunuri, Vikram Simha

1990-01-01T23:59:59.000Z

360

Expert opinion in risk analysis; The NUREG-1150 methodology  

SciTech Connect (OSTI)

Risk analysis of nuclear power generation often requires the use of expert opinion to provide probabilistic inputs where other sources of information are unavailable or are not cost effective. In the Reactor Risk Reference Document (NUREG-1150), a methodology for the collection of expert opinion was developed. The resulting methodology presented by the author involves a ten-step process: selection of experts, selection of issues, preparation of issue statements, elicitation training, preparation of expert analyses by panel members, discussion of analyses, elicitation, recomposition and aggregation, and review by the panel members. These steps were implemented in a multiple-meeting format that brought together experts from a variety of workplaces.

Hora, S.C.; Iman, R.L. (Sandia National Labs., Albuquerque, NM (USA))

1989-08-01T23:59:59.000Z



361

Numerical Methodology to Evaluate Fast Reactor Sodium Combustion  

SciTech Connect (OSTI)

In the present study, a numerical methodology for sodium combustion has been developed for the safety evaluation of a liquid-metal-cooled fast reactor. The methodology includes a fast-running zone model computer program for safety evaluation, a field model program for multidimensional thermal hydraulics, and a chemical reaction analysis program based on chemical equilibrium theory. Two recently performed experiments have been analyzed using the computer programs, and the numerical results are in good agreement with the experiments. Although sodium combustion is a complex phenomenon, use of these computer programs gives better understanding of the coupled thermal hydraulics and chemical reaction.

Yamaguchi, Akira; Takata, Takashi; Okano, Yasushi [Japan Nuclear Cycle Development Institute (Japan)

2001-12-15T23:59:59.000Z

362

Short-term energy outlook. Volume 2. Methodology  

SciTech Connect (OSTI)

This volume updates models and forecasting methodologies used and presents information on new developments since November 1981. Chapter 2 discusses the changes in forecasting methodology for motor gasoline demand, electricity sales, coking coal, and other petroleum products. Coefficient estimates, summary statistics, and data sources for many of the short-term energy models are provided. Chapter 3 evaluates previous short-term forecasts for the macroeconomic variables, total energy, petroleum supply and demand, coal consumption, natural gas, and electricity fuel shares. Chapter 4 reviews the relationship of total US energy consumption to economic activity between 1960 and 1981.

Not Available

1982-05-01T23:59:59.000Z

363

An optimally designed stack effluent sampling system with transpiration for active transmission enhancement  

E-Print Network [OSTI]

) standard number N13.1 for sampling methodology that is to be used at locations selected by the methodologies of EPA Method 1. ANSI N13.1 requires the use of sharp-edged isokinetic probes if particles larger than 5 μm are anticipated to be present..., there is minimal effect on transmission. Prototype Equipment Certification: Various tests were performed on the prototype CEM-SETS to ensure its field worthiness. One critical test was the leak test. The current methodology used in the EPA Methods 5 and 17...

Schroeder, Troy J.

1995-01-01T23:59:59.000Z

364

T-609: Adobe Acrobat/Reader Memory Corruption Error in CoolType...  

Broader source: Energy.gov (indexed) [DOE]

T-609: Adobe Acrobat/Reader Memory Corruption Error in CoolType Library Lets Remote Users Execute Arbitrary Code

365

Cognitive analysis of students' errors and misconceptions in variables, equations, and functions  

E-Print Network [OSTI]

… such issues, three basic algebra concepts – variable, equation, and function – are used to analyze students' errors, possible buggy algorithms, and the conceptual basis of these errors: misconceptions. Through the research on these three basic concepts...

Li, Xiaobao

2009-05-15T23:59:59.000Z

366

Recompile if your codes run into MPICH error after the maintenance...  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Recompile if your codes run into MPICH errors after the maintenance on 6/25/2014. June 27, 2014.

367

Design error diagnosis and correction in digital circuits  

E-Print Network [OSTI]

, each primary output would impose a constraint on the on-set and off-set. These constraints should be combined together to derive the final on-set and off-set of the new function. Proposition 2: [9, 18, 17] Let i be the index of the primary outputs... to this equation are deleted. The work in [17] is also based on Boolean comparisons and applies to multiple errors. Overall, their method does not guarantee a solution. Test-vector simulation methods proposed for the DEDC problem include [20, 22, 26]. In [20...

Nayak, Debashis

1998-01-01T23:59:59.000Z

368

Reducing Quantum Errors and Improving Large Scale Quantum Cryptography  

E-Print Network [OSTI]

Noise causes severe difficulties in implementing quantum computing and quantum cryptography. Several schemes have been suggested to reduce this problem, mainly focusing on quantum computation. Motivated by quantum cryptography, we suggest a coding which uses $N$ quantum bits ($N=n^2$) to encode one quantum bit, and reduces the error exponentially with $n$. Our result suggests the possibility of distributing a secure key over very long distances, and maintaining quantum states for very long times. It also provides a new quantum privacy amplification against a strong adversary.

T. Mor

1996-08-15T23:59:59.000Z

369

Topological Quantum Computation and Error Correction by Biological Cells  

E-Print Network [OSTI]

A Topological examination of phospholipid dynamics in the Far from Equilibrium state has demonstrated that metabolically active cells use waste heat to generate spatially patterned membrane flows by forced convection and shear. This paper explains the resemblance between this nonlinear membrane model and Witten Kitaev type Topological Quantum Computation systems, and demonstrates how this self-organising membrane enables biological cells to circumvent the decoherence problem, perform error correction procedures, and produce classical level output as shielded current flow through cytoskeletal protein conduit. Cellular outputs are shown to be Turing compatible as they are determined by computable in principle hydromagnetic fluid flows, and importantly, are Adaptive from an Evolutionary perspective.

J T Lofthouse

2005-02-02T23:59:59.000Z

370

Error 401 on upload? | OpenEI Community  

Open Energy Info (EERE)


371

Error Estimation for High Speed Flows Using Continuous and Discrete Adjoints  

E-Print Network [OSTI]

the fullest extent possible) strategy to control the error in multi-physics simulations of Scramjet propulsion

Alonso, Juan J.

372

Local Estimation of Modeling Error in Multi-Scale Modeling of Heterogeneous Elastic Solids  

E-Print Network [OSTI]

[Contents excerpt: Global Enhancement; The Adaptive Process; Chapter 4, Modeling Error Estimation (Residual-Based Error Estimation); average estimates of enhancement errors in the quantity of interest Qx(u) using three enhancements, with effectivity indices; List of Figures: examples of composite failure modes.]

Moody, Tristan

2008-03-19T23:59:59.000Z

373

Using Graphs for Fast Error Term Approximation of Time-varying Datasets  

SciTech Connect (OSTI)

We present a method for the efficient computation and storage of approximations of error tables used for error estimation of a region between different time steps in time-varying datasets. The error between two time steps is defined as the distance between the data of these time steps. Error tables are used to look up the error between different time steps of a time-varying dataset, especially when run-time error computation is expensive. However, even the generation of error tables itself can be expensive. For n time steps, the exact error look-up table (which stores the error values for all pairs of time steps in a matrix) has a memory complexity and pre-processing time complexity of O(n^2), and O(1) for error retrieval. Our approximate error look-up table approach uses trees, where the leaf nodes represent original time steps, and interior nodes contain an average (or best-representative) of the children nodes. The error computed on an edge of a tree describes the distance between the two nodes on that edge. Evaluating the error between two different time steps requires traversing a path between the two leaf nodes, and accumulating the errors on the traversed edges. For n time steps, this scheme has a memory complexity and pre-processing time complexity of O(n log n), a significant improvement over the exact scheme; the error retrieval complexity is O(log n). As we do not need to calculate all possible n^2 error terms, our approach is a fast way to generate the approximation.
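A simplified sketch of the approximate look-up described above (hypothetical code, not the authors'; the pairwise binary merging and the class/function names are assumptions): leaves are time steps, interior nodes hold averages of their children, each edge stores the error between child and parent, and the error between two time steps is approximated by summing edge errors along the tree path connecting their leaves.

```python
import numpy as np

class Node:
    def __init__(self, data, children=(), edge_errors=()):
        self.data = data
        self.children = list(children)
        self.edge_errors = list(edge_errors)   # error between each child and this node
        self.parent = None
        for c in self.children:
            c.parent = self

def build_tree(time_steps, distance):
    """Pairwise-merge time steps into a binary tree of representatives."""
    nodes = [Node(ts) for ts in time_steps]
    leaves = list(nodes)
    while len(nodes) > 1:
        merged = []
        for i in range(0, len(nodes), 2):
            group = nodes[i:i + 2]
            avg = np.mean([g.data for g in group], axis=0)
            merged.append(Node(avg, group, [distance(g.data, avg) for g in group]))
        nodes = merged
    return nodes[0], leaves

def approx_error(leaf_a, leaf_b):
    """Accumulate edge errors along the path leaf_a -> lowest common ancestor -> leaf_b."""
    errors_to_ancestor = {}
    node, err = leaf_a, 0.0
    while node is not None:
        errors_to_ancestor[id(node)] = err
        if node.parent is not None:
            err += node.parent.edge_errors[node.parent.children.index(node)]
        node = node.parent
    node, err = leaf_b, 0.0
    while id(node) not in errors_to_ancestor:
        err += node.parent.edge_errors[node.parent.children.index(node)]
        node = node.parent
    return err + errors_to_ancestor[id(node)]

# usage: root, leaves = build_tree(frames, lambda a, b: np.linalg.norm(a - b))
#        approx_error(leaves[3], leaves[17])
```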

Nuber, C; LaMar, E C; Pascucci, V; Hamann, B; Joy, K I

2003-02-27T23:59:59.000Z

374

Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors  

E-Print Network [OSTI]

Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors. A dissertation by Abhra Sarkar, submitted to the Office of Graduate and Professional Studies of Texas A&M University in partial fulfillment… Copyright 2014 Abhra Sarkar. Abstract: Although the literature on measurement error problems is quite extensive, solutions to even the most fundamental measurement error problems like density deconvolution and regression with errors...

Sarkar, Abhra

2014-06-24T23:59:59.000Z

375

Maintaining Standards: Differences between the Standard Deviation and Standard Error, and  

E-Print Network [OSTI]

Maintaining Standards: Differences between the Standard Deviation and Standard Error, and When to Use Each. David L. Streiner, PhD. Many people confuse the standard deviation (SD) and the standard error … of the findings. (Can J Psychiatry 1996;41:498–502) Key Words: statistics, standard deviation, standard error
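The key relationship behind the distinction (a standard result, stated here for clarity rather than quoted from the article): the SD describes the spread of individual observations, while the standard error of the mean describes the uncertainty of the sample mean,

$$SE \;=\; \frac{SD}{\sqrt{n}}\,,$$

so, for example, a sample of n = 25 observations with SD = 10 has SE = 10/√25 = 2.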

California at Santa Cruz, University of

376

TECHNICAL ADVANCES Dye shift: a neglected source of genotyping error in molecular  

E-Print Network [OSTI]

TECHNICAL ADVANCES. Dye shift: a neglected source of genotyping error in molecular ecology. JOLENE T… for genotyping error, yet potential errors stemming from dye-induced mobility shift (dye shift) may be frequently left uncorrected; dye shift can lead to mis-scoring alleles and even to falsely calling new alleles

Jamieson, Ian

377

Exposure Measurement Error in Time-Series Studies of Air Pollution: Concepts and Consequences  

E-Print Network [OSTI]

Exposure Measurement Error in Time-Series Studies of Air Pollution: Concepts and Consequences (11/11/99). Keywords: measurement error, air pollution, time series, exposure. … of air pollution and health. Because measurement error may have substantial implications for interpreting … in time-series studies …

Dominici, Francesca

378

Database Error Trapping and Prediction Mike West & Robert L. Winkler \\Lambda  

E-Print Network [OSTI]

Database Error Trapping and Prediction. By Mike West & Robert L. Winkler, Duke University. … of errors in databases. In particular, we study two error detection methods. In the duplicate performance method, all items in a database are processed by two individuals (or machines), and the resulting records

West, Mike

379

Error Analysis of Ia Supernova and Query on Cosmic Dark Energy  

E-Print Network [OSTI]

Some serious faults have been found in the error analysis of SNIa observations. Redoing the same error analysis of SNIa following our approach, we find that the average total observational error of SNIa is clearly greater than $0.55^m$, so we cannot decide whether the expansion of the universe is accelerating or not.

Qiuhe Peng; Yiming Hu; Kun Wang; Yu Liang

2012-01-16T23:59:59.000Z

380

Effects of imbalance and geometric error on precision grinding machines  

SciTech Connect (OSTI)

To study balancing in grinding, a simple mechanical system was examined. It was essential to study such a well-defined system, as opposed to a large, complex system such as a machining center. The use of a compact, well-defined system enabled easy quantification of the imbalance force input, its phase angle to any geometric decentering, and good understanding of the machine mode shapes. It is important to understand a simple system such as the one I examined given that imbalance is so intimately coupled to machine dynamics. It is possible to extend the results presented here to industrial machines, although that is not part of this work. In addition to the empirical testing, a simple mechanical system to look at how mode shapes, balance, and geometric error interplay to yield spindle error motion was modelled. The results of this model will be presented along with the results from a more global grinding model. The global model, presented at ASPE in November 1996, allows one to examine the effects of changing global machine parameters like stiffness and damping. This geometrically abstract, one-dimensional model will be presented to demonstrate the usefulness of an abstract approach for first-order understanding but it will not be the main focus of this thesis. 19 refs., 36 figs., 10 tables.

Bibler, J.E.

1997-06-01T23:59:59.000Z



381

Discrete Sampling Test Plan for the 200-BP-5 Operable Unit  

SciTech Connect (OSTI)

The Discrete Groundwater Sampling Project is conducted by the Pacific Northwest National Laboratory (PNNL) on behalf of CH2M HILL Plateau Remediation Company. The project is focused on delivering groundwater samples from prescribed horizons within select groundwater wells in the 200-BP-5 Operable Unit (200-BP-5 OU) on the Hanford Site. This document provides the scope, schedule, methodology, and other details of the PNNL discrete sampling effort.

Sweeney, Mark D.

2010-02-04T23:59:59.000Z

382

MOUNTAIN WEATHER PREDICTION: PHENOMENOLOGICAL CHALLENGES AND FORECAST METHODOLOGY  

E-Print Network [OSTI]

MOUNTAIN WEATHER PREDICTION: PHENOMENOLOGICAL CHALLENGES AND FORECAST METHODOLOGY. Michael P. Meyers. American Meteorological Society Mountain Weather and Forecasting Monograph, draft of Friday, May 21, 2010, on weather analysis and forecasting in complex terrain with special emphasis placed on the role of humans

Steenburgh, Jim

383

A Polygon-based Methodology for Mining Related Spatial Datasets  

E-Print Network [OSTI]

A Polygon-based Methodology for Mining Related Spatial Datasets. Sujing Wang, Chun-Sheng Chen. Polygons are useful in that they can describe spatial entities, such as countries, and in that they can be used for the modeling of spatial events, such as air pollution. This paper claims that polygon analysis is particularly useful for mining related, spatial clusters.

Eick, Christoph F.

384

Introduction Data Methodology Liquidity Hoarding in the Interbank Market  

E-Print Network [OSTI]

Liquidity Hoarding in the Interbank Market: Evidence from Mexican Interbank Overnight Loan and Repo Transactions. Marco J. van der Leij, Serafín Martínez-Jaramillo, Jos... (Slide headings: Introduction, Data, Methodology; alternative data sources noted: survey data (Armantier & Copeland, 2012), transactions from only a part of the market (eMid), secured lending.)

Wirosoetisno, Djoko

385

ORIGINAL PAPER Review of Methodologies for Offshore Wind Resource  

E-Print Network [OSTI]

ORIGINAL PAPER. Review of Methodologies for Offshore Wind Resource Assessment in European Seas. A. M... The wind resource offshore is generally larger than at geographically nearby onshore sites, which can offset the higher installation, operation and maintenance costs associated with offshore wind parks. Successful offshore wind

Pryor, Sara C.

386

PROJECT SELF-EVALUATION METHODOLOGY: THE HEALTHREATS PROJECT CASE STUDY  

E-Print Network [OSTI]

PROJECT SELF-EVALUATION METHODOLOGY: THE HEALTHREATS PROJECT CASE STUDY. Martin Znidarsic, Marko Bohanec. The paper presents an approach to self-evaluation in collaborative research projects. The approach is taken from a case study of the project Healthreats, where it is used in practice. Aims and focuses of self-evaluation

Bohanec, Marko

387

A methodology for simultaneous modeling and control of chemical processes  

E-Print Network [OSTI]

Table of contents excerpt: Feedback Control System; The Methodology; IV. Application to a Textbook Problem (IMC Controller Structure, RLS Algorithm, Design Method, Linear Model Description, Simulation with Different Initial System Output Values, Simulation with Different Disturbance Gains); VI. Case Study: Application of This Feedback System to a Tennessee Eastman Testbed Problem (Problem Description, Reactor Control and Process Identification); VII.

Zeng, Tong

1995-01-01T23:59:59.000Z

388

Navigocorpus: A Database for Shipping Information A Methodological and  

E-Print Network [OSTI]

Navigocorpus: A Database for Shipping Information. A Methodological and Technical Introduction. Jean-Pierre Dedieu, et al. International Journal of Maritime History XXIII, 2 (2011), 241-262. Shipping information has been extracted from historical sources and stored in databases which are generally organized according to the nature of the sources used; the database went on-line beginning

Paris-Sud XI, Université de

389

RETI Resource Valuation Methodology Cost of Generation Calculator  

E-Print Network [OSTI]

RETI Resource Valuation Methodology: Cost of Generation Calculator. The Cost of Generation Calculator determines the levelized cost of generating power over the life of the resource, and is an input. Cost components include: cost of equity investment in capital; cost of financing capital; taxes, including investment
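
The calculator itself is not reproduced here; as a hedged sketch of the kind of levelized-cost computation the description refers to, the snippet below annualizes a hypothetical capital cost with a fixed charge rate and spreads it, together with operating costs, over annual generation. All parameter names and values are illustrative assumptions, not RETI inputs.

```python
def levelized_cost(capital_cost, fixed_charge_rate, annual_om, fuel_cost_per_mwh,
                   capacity_mw, capacity_factor):
    """Very simplified levelized cost of energy ($/MWh), for illustration only."""
    annual_mwh = capacity_mw * capacity_factor * 8760.0
    # Annualized capital recovery (a real model would break out equity, debt, and taxes).
    annual_capital = capital_cost * fixed_charge_rate
    return (annual_capital + annual_om) / annual_mwh + fuel_cost_per_mwh

if __name__ == "__main__":
    # Hypothetical 100 MW plant, $2,000/kW overnight cost, 11% fixed charge rate.
    lcoe = levelized_cost(capital_cost=2000 * 1000 * 100,
                          fixed_charge_rate=0.11,
                          annual_om=6_000_000,
                          fuel_cost_per_mwh=0.0,
                          capacity_mw=100,
                          capacity_factor=0.35)
    print(f"Illustrative levelized cost: {lcoe:.1f} $/MWh")
```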

390

Sandia software guidelines: Volume 5, Tools, techniques, and methodologies  

SciTech Connect (OSTI)

This volume is one in a series of Sandia Software Guidelines intended for use in producing quality software within Sandia National Laboratories. This volume describes software tools and methodologies available to Sandia personnel for the development of software, and outlines techniques that have proven useful within the Laboratories and elsewhere. References and evaluations by Sandia personnel are included. 6 figs.

Not Available

1989-07-01T23:59:59.000Z

391

SOC Testing Methodology and Practice Cheng-Wen Wu  

E-Print Network [OSTI]

SOC Testing Methodology and Practice. Cheng-Wen Wu, Department of Electrical Engineering, National... On a controller chip we practice a novel SOC test integration platform, solving real problems in test scheduling, test IO reduction, timing of functional test, scan IO sharing, and embedded memory built-in self-test

Boyer, Edmond

392

Numerical Optimization Methodology for the Design of Power Equipment  

E-Print Network [OSTI]

Numerical Optimization Methodology for the Design of Power Equipment. Gaurav Tewari, My M. Hua... There is concern about possible health effects from overhead power line conductors [6,7]. Even though no conclusive evidence has been found, a number of utility companies have been involved in a redesign of their overhead transmission lines to avoid potential

Mamishev, Alexander

393

Does help help? Introducing the Bayesian Evaluation and Assessment methodology  

E-Print Network [OSTI]

Does help help? Introducing the Bayesian Evaluation and Assessment methodology. Joseph E. Beck. Whether, and how, help helps students has not been a well-studied problem in the ITS community. In this paper we present three approaches for evaluating the efficacy of the Reading Tutor's help: creating experimental

Mostow, Jack

394

A METHODOLOGY FOR IDENTIFICATION OF NARMAX MODELS APPLIED TO DIESEL  

E-Print Network [OSTI]

A METHODOLOGY FOR IDENTIFICATION OF NARMAX MODELS APPLIED TO DIESEL ENGINES. Gianluca Zito, Ioan... The methodology is illustrated by means of an automotive case study, namely a variable geometry turbocharged diesel engine. The identification procedure is illustrated, and in section 3 a diesel engine system used to test the procedure

Paris-Sud XI, Université de

395

Regional issue identification and assessment: study methodology. First annual report  

SciTech Connect (OSTI)

The overall assessment methodologies and models utilized for the first project under the Regional Issue Identification and Assessment (RIIA) program are described. Detailed descriptions are given of the methodologies used by lead laboratories for the quantification of the impacts of an energy scenario on one or more media (e.g., air, water, land, human and ecology), and by all laboratories to assess the regional impacts on all media. The research and assessments reflected in this document were performed by the following national laboratories: Argonne National Laboratory; Brookhaven National Laboratory; Lawrence Berkeley Laboratory; Los Alamos Scientific Laboratory; Oak Ridge National Laboratory; and Pacific Northwest Laboratory. This report contains five chapters. Chapter 1 briefly describes the overall study methodology and introduces the technical participants. Chapter 2 is a summary of the energy policy scenario selected for the RIIA I study and Chapter 3 describes how this scenario was translated into a county-level siting pattern of energy development. The fourth chapter is a detailed description of the individual methodologies used to quantify the environmental and socioeconomic impacts of the scenario while Chapter 5 describes how these impacts were translated into comprehensive regional assessments for each Federal Region.

Not Available

1980-01-01T23:59:59.000Z

396

Architecture Rationalization: A Methodology for Architecture Verifiability, Traceability and Completeness  

E-Print Network [OSTI]

Architecture Rationalization: A Methodology for Architecture Verifiability, Traceability and Completeness. E-mail: {atang, jhan}@it.swin.edu.au. Abstract: Architecture modeling is practiced extensively in the software industry to support the verifiability, traceability and completeness of architecture designs. Deficiencies in any of these three areas in an architecture model can be costly and risky

Han, Jun

397

Methodology for extracting local constants from petroleum cracking flows  

DOE Patents [OSTI]

A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetic computer code is used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
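
Step (4) above is essentially a fitting problem. As a generic, hypothetical sketch of that adjustment step only (not the patented code), the snippet below wraps a placeholder yield model in a least-squares fit; predicted_yields stands in for the coupled CFD/kinetics calculation, and all constants, conditions, and yields are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_yields(k, conditions):
    """Placeholder for the coupled CFD/kinetics calculation: toy first-order model."""
    temperature, residence_time = conditions.T
    return 1.0 - np.exp(-k[0] * np.exp(-k[1] / temperature) * residence_time)

def fit_local_constants(k_guess, conditions, measured_yields):
    """Adjust constants so calculated yields match the experimental test matrix."""
    residuals = lambda k: predicted_yields(k, conditions) - measured_yields
    result = least_squares(residuals, k_guess,
                           bounds=([1.0, 1.0e3], [1.0e6, 1.0e5]))
    return result.x

if __name__ == "__main__":
    conditions = np.array([[780.0, 2.0], [800.0, 2.5], [820.0, 3.0]])  # K, s
    measured = np.array([0.55, 0.68, 0.80])  # made-up conversions
    k_fit = fit_local_constants([1.0e3, 5.0e3], conditions, measured)
    print("fitted constants:", k_fit)
```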

Chang, Shen-Lin (Woodridge, IL); Lottes, Steven A. (Naperville, IL); Zhou, Chenn Q. (Munster, IN)

2000-01-01T23:59:59.000Z

398

Implementing an ICE: A methodology for the design, development  

E-Print Network [OSTI]

Implementing an ICE: A methodology for the design, development and installation of Interactive Collaborative Environments. A methodology for the design, development and implementation of ICEs (Interactive Collaborative Environments) in real-world settings, illustrated with the Edinburgh Napier ICE, a multi-user, multi-surface, multi-touch blended-interaction, digitally augmented space

Deussen, Oliver

399

Problems addressed in this course Teaching methodology, material, exams, contacts  

E-Print Network [OSTI]

Bioinformatics, Lecture 1: The first look at a genome (sequence analysis). Louis Wehenkel, Department of... Slide headings: Problems addressed in this course; Teaching methodology, material, exams, contacts; Chapter 1, The first look at a genome (sequence analysis); Introduction

Wehenkel, Louis

400

Sampled data lattice filters  

E-Print Network [OSTI]

SAMPLED DATA LATTICE FILTERS. A Thesis by William Terry Thrift III, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, December 1979. Major Subject: Electrical Engineering. Approved as to style and content by the Chairman of Committee, Head of Department, and committee members. ABSTRACT: Sampled Data

Thrift, William Terry

1980-01-01T23:59:59.000Z

401

National Certification Methodology for the Nuclear Weapons Stockpile  

SciTech Connect (OSTI)

Lawrence Livermore and Los Alamos National Laboratories have developed a common framework and key elements of a national certification methodology called Quantification of Margins and Uncertainties (QMU). A spectrum from senior managers to weapons designers has been engaged in this activity at the two laboratories for on the order of a year to codify this methodology in an overarching and integrated paper. Following is the certification paper that has evolved. In the process of writing this paper, an important outcome has been the realization that a joint Livermore/Los Alamos workshop on QMU, focusing on clearly identifying and quantifying differences in approach between the two labs plus developing an even stronger technical foundation on methodology, will be valuable. Later in FY03, such a joint laboratory workshop will be held. One of the outcomes of this workshop will be a new version of this certification paper. A comprehensive approach to certification must include specification of problem scope, development of system baseline models, formulation of standards of performance assessment, and effective procedures for peer review and documentation. This document concentrates on the assessment and peer review aspects of the problem. In addressing these points, a central role is played by a 'watch list' for weapons derived from credible failure modes and performance gate analyses. The watch list must reflect our best assessment of factors that are critical to weapons performance. High fidelity experiments and calculations as well as full exploitation of archival test data are essential to this process. Peer review, advisory groups and red teams play an important role in confirming the validity of the watch list. The framework for certification developed by the Laboratories has many basic features in common, but some significant differences in the detailed technical implementation of the overall methodology remain. Joint certification workshops held in June and December of 2001 and continued in 2002 have proven useful in developing the methodology, and future workshops should prove useful in further refining this framework. Each laboratory developed an approach to certification with some differences in detailed implementation. The general methodology introduces specific quantitative indicators for assessing confidence in our nuclear weapon stockpile. The quantitative indicators are based upon performance margins for key operating characteristics and components of the system, and these are compared to uncertainties in these factors. These criteria can be summarized in a quantitative metric (for each such characteristic) expressed as a Confidence Ratio, CR = margin/uncertainty (i.e., confidence in warhead performance depends upon CR significantly exceeding unity for all these characteristics). These Confidence Ratios are proposed as a basis for guiding technical and programmatic decisions on stockpile actions. This methodology already has been deployed in certifying weapons undergoing current life extension programs or component remanufacture. The overall approach is an adaptation of standard engineering practice and lends itself to rigorous, quantitative, and explicit criteria for judging the robustness of weapon system and component performance at a detailed level. There are, of course, a number of approaches for assessing these Confidence Ratios. The general certification methodology was publicly presented for the first time to a meeting of Strategic Command SAG in January 2002 and met with general approval.
At that meeting, the Laboratories committed to further refine and develop the methodology through the implementation process. This paper reflects the refinement and additional development to date. There will be even further refinement at a joint laboratory workshop later in FY03. A common certification methodology enables us to engage in peer reviews and evaluate nuclear weapon systems on the basis of explicit and objective metrics. The clarity provided by such metrics enables each laboratory and our common customers to understand the meaning and logic
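
To make the metric concrete, here is a minimal bookkeeping sketch of the confidence-ratio idea described above; the characteristic names, margins, and uncertainties are invented placeholders for illustration, not stockpile data. The ratio is simply margin divided by uncertainty, and the stated criterion is that it significantly exceed unity for every watch-list item.

```python
def confidence_ratio(margin, uncertainty):
    """CR = M / U for one performance characteristic on the watch list."""
    return margin / uncertainty

# Illustrative watch list: characteristic -> (margin M, uncertainty U),
# in arbitrary but consistent units.
watch_list = {
    "characteristic_A": (3.0, 1.0),
    "characteristic_B": (2.0, 1.5),
    "characteristic_C": (5.0, 1.2),
}

threshold = 1.0  # CR must significantly exceed unity for every characteristic
for name, (m, u) in watch_list.items():
    cr = confidence_ratio(m, u)
    flag = "OK" if cr > threshold else "REVIEW"
    print(f"{name}: CR = {cr:.2f} [{flag}]")
```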

Goodwin, B T; Juzaitis, R J

2006-08-07T23:59:59.000Z

402

ANNUAL SEDIMENT AND MERCURY LOADS WITH STANDARD ERROR AT MALLARD ISLAND, CA SAMPLING LOCATION FROM 1995 TO 2005.  

E-Print Network [OSTI]

mercury may vary drastically in years with larger floods, when the Yolo Bypass carries floodwater from several northern California waterways to the Sacramento River. The Yolo Bypass is a leveed, 59,000-acre flood bypass, with greater proportions of water passing through the Yolo Bypass during larger floods. The highest dayflow that was measured

403

Enhancing the Benefit of the Chemical Mixture Methodology: A Report on Methodology Testing and Potential Approaches for Improving Performance  

SciTech Connect (OSTI)

Extensive testing shows that the current version of the Chemical Mixture Methodology (CMM) is meeting its intended mission to provide conservative estimates of the health effects from exposure to airborne chemical mixtures. However, the current version of the CMM could benefit from several enhancements that are designed to improve its application of Health Code Numbers (HCNs) and employ weighting factors to reduce over conservatism.
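
The CMM's actual Health Code Number (HCN) tables and weighting factors are not reproduced in this abstract. As a hedged illustration of the general screening pattern used in mixture methodologies of this kind, the sketch below sums concentration-to-limit ratios grouped by health code, with optional per-code weights; the chemical names, limits, and codes are invented placeholders.

```python
from collections import defaultdict

# Illustrative-only data: (chemical, airborne concentration, exposure limit, health code).
# The real CMM works with Protective Action Criteria and HCNs; these are placeholders.
chemicals = [
    ("chem_A", 0.4, 2.0, "HCN_respiratory"),
    ("chem_B", 1.5, 10.0, "HCN_respiratory"),
    ("chem_C", 0.05, 0.5, "HCN_neurological"),
]

def hazard_indices(mixture, weights=None):
    """Sum concentration/limit ratios per health code, optionally weighted."""
    weights = weights or {}
    totals = defaultdict(float)
    for name, conc, limit, code in mixture:
        totals[code] += weights.get(code, 1.0) * (conc / limit)
    return dict(totals)

for code, hi in hazard_indices(chemicals).items():
    status = "exceeds" if hi > 1.0 else "below"
    print(f"{code}: hazard index = {hi:.2f} ({status} a screening level of 1.0)")
```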

Yu, Xiao-Ying; Yao, Juan; He, Hua; Glantz, Clifford S.; Booth, Alexander E.

2012-01-01T23:59:59.000Z

404

Rehabilitation Services Sample Occupations  

E-Print Network [OSTI]

Rehabilitation Services: Sample Occupations and Sample Work Settings. Work settings include child and day care centers, clinics, correction agencies/industries, and drug treatment centers; sample occupations include addiction counselor, advocacy occupations, and art therapist. Related guides include Careers in Counseling and Human Services and Careers in Health Care (index codes IIB 21-1010, IIB 29-1000).

Ronquist, Fredrik

405

Sampling system and method  

DOE Patents [OSTI]

The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.

Decker, David L.; Lyles, Brad F.; Purcell, Richard G.; Hershey, Ronald Lee

2013-04-16T23:59:59.000Z

406

Biological sample collector  

DOE Patents [OSTI]

A biological sample collector is adapted to a collect several biological samples in a plurality of filter wells. A biological sample collector may comprise a manifold plate for mounting a filter plate thereon, the filter plate having a plurality of filter wells therein; a hollow slider for engaging and positioning a tube that slides therethrough; and a slide case within which the hollow slider travels to allow the tube to be aligned with a selected filter well of the plurality of filter wells, wherein when the tube is aligned with the selected filter well, the tube is pushed through the hollow slider and into the selected filter well to sealingly engage the selected filter well and to allow the tube to deposit a biological sample onto a filter in the bottom of the selected filter well. The biological sample collector may be portable.

Murphy, Gloria A. (French Camp, CA)

2010-09-07T23:59:59.000Z

407

SAMPLE: Parity Violating Electron Scattering from Hydrogen and Deuterium  

E-Print Network [OSTI]

Recently, there has been considerable theoretical interest in determining strange quark contributions to hadronic matrix elements. Such matrix elements can be accessed through the nucleon's neutral weak form factors as determined in parity violating electron scattering. The SAMPLE experiment will measure the strange magnetic form factor $G_M^s$ at low momentum transfer. By combining measurements from hydrogen and deuterium the theoretical uncertainties in the measurement can be greatly reduced and the result will be limited by experimental errors only. A summary of recent progress on the SAMPLE experiment is presented.

E. J. Beise; J. Arrington; D. H. Beck; E. Candell; R. Carr; G. Dodson; K. Dow; F. Duncan; M. Farkhondeh; B. W. Filippone; T. Forest; H. Gao; W. Korsch; S. Kowalski; A. Lung; R. D. McKeown; R. Mohring; B. A. Mueller; J. Napolitano; M. Pitt; N. Simicevic; E. Tsentalovich; S. Wells

1996-02-06T23:59:59.000Z

408

Coordinated joint motion control system with position error correction  

DOE Patents [OSTI]

Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

Danko, George (Reno, NV)

2011-11-22T23:59:59.000Z

409

Significance of gauge line error in orifice measurement  

SciTech Connect (OSTI)

Pulsation induced gauge line amplification can cause errors in the recorded differential signal used to calculate flow. Its presence may be detected using dual transmitters (one connected at the orifice taps, the other at the end of the gauge lines) and comparing the relative peak to peak amplitudes. Its effect on recorded differential may be determined by averaging both signals with a PC based data acquisition and analysis system. Remedial action is recommended in all cases where amplification is detected. Use of close connect, full opening manifolds is suggested to decouple the gauge lines' resonant frequency from that of the excitation's, by positioning the recording device as close to the process signal's origin as possible.
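
As a hedged illustration of the dual-transmitter check described above (the data and threshold are hypothetical, not from the paper), the sketch compares peak-to-peak amplitudes of differential-pressure traces recorded at the orifice taps and at the gauge-line ends, and also compares their averages, which is what the recorded flow calculation ultimately uses.

```python
import numpy as np

def peak_to_peak(signal):
    return float(np.max(signal) - np.min(signal))

def gauge_line_check(dp_at_taps, dp_at_line_end, ratio_threshold=1.1):
    """Flag pulsation-induced gauge line amplification from two transmitters.

    dp_at_taps / dp_at_line_end: sampled differential-pressure traces (same units).
    The threshold is an arbitrary illustrative value, not an industry criterion.
    """
    amplification = peak_to_peak(dp_at_line_end) / peak_to_peak(dp_at_taps)
    bias = np.mean(dp_at_line_end) - np.mean(dp_at_taps)
    return amplification, bias, amplification > ratio_threshold

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 2000)
    taps = 50.0 + 2.0 * np.sin(2 * np.pi * 25 * t)             # signal at orifice taps
    line_end = 50.0 + 6.0 * np.sin(2 * np.pi * 25 * t + 0.3)   # amplified at line end
    amp, bias, flagged = gauge_line_check(taps, line_end)
    print(f"amplification ratio = {amp:.2f}, mean shift = {bias:.3f}, flagged = {flagged}")
```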

Bowen, J.W. [ANR Pipeline Co., Detroit, MI (United States)

1995-12-01T23:59:59.000Z

410

Analysis of a small sample geometry for concurrent identification and quantification of mixed-nuclide samples  

E-Print Network [OSTI]

, were found to be useful in certain situations. Small errors were associated with the gamma emitting radionuclides while larger errors were found with the other radionuclides. The large uncertainty of the Cs-137 activity probably led to a large error...

Krieger, Kenneth Vincent

1999-01-01T23:59:59.000Z

411

To the low-temperature technologies methodology: the clean superconductor free energy fluctuations calculation in the micro- and macrostructures descriptions of superconductor  

E-Print Network [OSTI]

The Ginzburg - Landau theory is used for the superconducting structures free energy fluctuations study. On its basis, we have defined the value of the heat capacity jump in the macroscopic zero-dimensional sample and in the zero-dimensional microstructures ensemble of the total volume equal to the macroscopic sample volume. The inference is made that in the Ginzburg - Landau methodology frameworks, it is essential to take into account the superconducting clean sample effective dimensionality only on the last stage of its thermodynamical characteristics calculation.

Tolbatov, Iogann

2009-01-01T23:59:59.000Z

412

To the low-temperature technologies methodology: the clean superconductor free energy fluctuations calculation in the micro- and macrostructures descriptions of superconductor  

E-Print Network [OSTI]

The Ginzburg - Landau theory is used for the superconducting structures free energy fluctuations study. On its basis, we have defined the value of the heat capacity jump in the macroscopic zero-dimensional sample and in the zero-dimensional microstructures ensemble of the total volume equal to the macroscopic sample volume. The inference is made that in the Ginzburg - Landau methodology frameworks, it is essential to take into account the superconducting clean sample effective dimensionality only on the last stage of its thermodynamical characteristics calculation.

Iogann Tolbatov

2009-10-24T23:59:59.000Z

413

Waste classification sampling plan  

SciTech Connect (OSTI)

The purpose of this sampling is to explain the method used to collect and analyze data necessary to verify and/or determine the radionuclide content of the B-Cell decontamination and decommissioning waste stream so that the correct waste classification for the waste stream can be made, and to collect samples for studies of decontamination methods that could be used to remove fixed contamination present on the waste. The scope of this plan is to establish the technical basis for collecting samples and compiling quantitative data on the radioactive constituents present in waste generated during deactivation activities in B-Cell. Sampling and radioisotopic analysis will be performed on the fixed layers of contamination present on structural material and internal surfaces of process piping and tanks. In addition, dose rate measurements on existing waste material will be performed to determine the fraction of dose rate attributable to both removable and fixed contamination. Samples will also be collected to support studies of decontamination methods that are effective in removing the fixed contamination present on the waste. Sampling performed under this plan will meet criteria established in BNF-2596, Data Quality Objectives for the B-Cell Waste Stream Classification Sampling, J. M. Barnett, May 1998.

Landsman, S.D.

1998-05-27T23:59:59.000Z

414

Cost Methodology for Biomass Feedstocks: Herbaceous Crops and Agricultural Residues  

SciTech Connect (OSTI)

This report describes a set of procedures and assumptions used to estimate production and logistics costs of bioenergy feedstocks from herbaceous crops and agricultural residues. The engineering-economic analysis discussed here is based on methodologies developed by the American Society of Agricultural and Biological Engineers (ASABE) and the American Agricultural Economics Association (AAEA). An engineering-economic analysis approach was chosen due to lack of historical cost data for bioenergy feedstocks. Instead, costs are calculated using assumptions for equipment performance, input prices, and yield data derived from equipment manufacturers, research literature, and/or standards. Cost estimates account for fixed and variable costs. Several examples of this costing methodology used to estimate feedstock logistics costs are included at the end of this report.
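
As a hedged illustration of the ASABE-style engineering-economic pattern the report describes, the sketch below splits a harvest operation into annualized fixed (ownership) costs and variable (operating) costs per unit of material; every number and machine name is an invented placeholder, not a value from the report.

```python
def annualized_ownership_cost(purchase_price, salvage_fraction, life_years, interest_rate):
    """Straight-line capital recovery plus interest on the average investment."""
    salvage = purchase_price * salvage_fraction
    depreciation = (purchase_price - salvage) / life_years
    interest = interest_rate * (purchase_price + salvage) / 2.0
    return depreciation + interest

def cost_per_dry_ton(purchase_price, salvage_fraction, life_years, interest_rate,
                     annual_hours, operating_cost_per_hour, tons_per_hour):
    fixed_per_hour = annualized_ownership_cost(purchase_price, salvage_fraction,
                                               life_years, interest_rate) / annual_hours
    return (fixed_per_hour + operating_cost_per_hour) / tons_per_hour

if __name__ == "__main__":
    # Hypothetical baler: $120k purchase, 20% salvage, 10-year life, 8% interest,
    # 400 h/yr use, $85/h fuel+labor+repair, 12 dry tons/h throughput.
    cost = cost_per_dry_ton(120_000, 0.20, 10, 0.08, 400, 85.0, 12.0)
    print(f"Illustrative harvest cost: {cost:.2f} $/dry ton")
```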

Turhollow Jr, Anthony F [ORNL; Webb, Erin [ORNL; Sokhansanj, Shahabaddine [ORNL

2009-12-01T23:59:59.000Z

415

Implementation impacts of PRL methodology. [PRL (Plutonium Recovery Limit)  

SciTech Connect (OSTI)

This report responds to a DOE-SR request to evaluate the impacts from implementation of the proposed Plutonium Recovery Limit (PRL) methodology. The PRL Methodology is based on cost minimization for decisions to discard or recover plutonium contained in scrap, residues, and other plutonium bearing materials. Implementation of the PRL methodology may result in decisions to declare as waste certain plutonium bearing materials originally considered to be a recoverable plutonium product. Such decisions may have regulatory impacts, because any material declared to be waste would immediately be subject to provisions of the Resource Conservation and Recovery Act (RCRA). The decision to discard these materials will have impacts on waste storage, treatment, and disposal facilities. Current plans for the de-inventory of plutonium processing facilities have identified certain materials as candidates for discard based upon the economic considerations associated with extending the operating schedules for recovery of the contained plutonium versus potential waste disposal costs. This report evaluates the impacts of discarding those materials as proposed by the F Area De-Inventory Plan and compares the De-Inventory Plan assessments with conclusions from application of the PRL. The impact analysis was performed for those materials proposed as potential candidates for discard by the De-Inventory Plan. The De-Inventory Plan identified 433 items, containing approximately 1% of the current SRS Pu-239 inventory, as not appropriate for recovery as the site moves to complete the mission of F-Canyon and FB-Line. The materials were entered into storage awaiting recovery as product under the Department's previous Economic Discard Limit (EDL) methodology which valued plutonium at its incremental cost of production in reactors. An application of Departmental PRLs to the subject 433 items revealed that approximately 40% of them would continue to be potentially recoverable as product plutonium.

Caudill, J.A.; Krupa, J.F.; Meadors, R.E.; Odum, J.V.; Rodrigues, G.C.

1993-02-01T23:59:59.000Z

416

Enzyme and methodology for the treatment of a biomass  

DOE Patents [OSTI]

An enzyme isolated from an extremophilic microbe, and a method for utilizing same is described, and wherein the enzyme displays optimum enzymatic activity at a temperature of greater than about 80.degree. C., and a pH of less than about 2, and further may be useful in methodology including pretreatment of a biomass so as to facilitate the production of an end product.

Thompson, Vicki S.; Thompson, David N.; Schaller, Kastli D.; Apel, William A.

2010-06-01T23:59:59.000Z

417

Communicating about feminism and implementing feminist practices in research methodology  

E-Print Network [OSTI]

illustrate the power that the negative stigma surrounding feminism has on individuals. Even many people who support feminist values feel the need to separate themselves from the feminist movement by refusing to call themselves "feminists." Although... researchers advocate other methodological changes that make the research process more consistent with feminist values. They often find the imbalance of power that exists between the researcher and research participants to be one of the most serious problems...

Barnard, Megan

2001-01-01T23:59:59.000Z

418

Northern Marshall Islands radiological survey: sampling and analysis summary  

SciTech Connect (OSTI)

A radiological survey was conducted in the Northern Marshall Islands to document remaining external gamma exposures from nuclear tests conducted at Enewetak and Bikini Atolls. An additional program was later included to obtain terrestrial and marine samples for radiological dose assessment for current or potential atoll inhabitants. This report is the first of a series summarizing the results from the terrestrial and marine surveys. The sample collection and processing procedures and the general survey methodology are discussed; a summary of the collected samples and radionuclide analyses is presented. Over 5400 samples were collected from the 12 atolls and 2 islands and prepared for analysis including 3093 soil, 961 vegetation, 153 animal, 965 fish composite samples (average of 30 fish per sample), 101 clam, 50 lagoon water, 15 cistern water, 17 groundwater, and 85 lagoon sediment samples. A complete breakdown by sample type, atoll, and island is given here. The total number of analyses by radionuclide is 8840 for /sup 241/Am, 6569 for /sup 137/Cs, 4535 for /sup 239 +240/Pu, 4431 for /sup 90/Sr, 1146 for /sup 238/Pu, 269 for /sup 241/Pu, and 114 each for /sup 239/Pu and /sup 240/Pu. A complete breakdown by sample category, atoll or island, and radionuclide is also included.

Robison, W.L.; Conrado, C.L.; Eagle, R.J.; Stuart, M.L.

1981-07-23T23:59:59.000Z

419

A General Methodology for Designing Self-Organizing Systems  

E-Print Network [OSTI]

Our technologies complexify our environments. Thus, new technologies need to deal with more and more complexity. Several efforts have been made to deal with this complexity using the concept of self-organization. However, in order to promote its use and understanding, we must first have a pragmatic understanding of complexity and self-organization. This paper presents a conceptual framework for speaking about self-organizing systems. The aim is to provide a methodology useful for designing and controlling systems developed to solve complex problems. First, practical notions of complexity and self-organization are given. Then, starting from the agent metaphor, a conceptual framework is presented. This provides formal ways of speaking about "satisfaction" of elements and systems. The main premise of the methodology claims that reducing the "friction" or "interference" of interactions between elements of a system will result in a higher "satisfaction" of the system, i.e. better performance. The methodology discusses different ways in which this can be achieved. A case study on self-organizing traffic lights illustrates the ideas presented in the paper.

Carlos Gershenson

2006-02-09T23:59:59.000Z

420

Methodology for assessing performance of waste management systems  

SciTech Connect (OSTI)

The purpose of the methodology provided in this report is to select the optimal way to manage particular sets of waste streams from generation to disposal in a safe and cost-effective manner. The methodology described is designed to review the entire waste management system, assess its performance, ensure that the performance objectives are met, compare different LLW management alternatives, and select the optimal alternative. The methodology is based on decision analysis approach, in which costs and risk are considered for various LLW management alternatives, a comparison of costs, risks, and benefits is made, and an optimal system is selected which minimizes costs and risks and maximizes benefits. A ''zoom-lens'' approach is suggested, i.e., one begins by looking at gross features and gradually proceeds to more and more detail. Performance assessment requires certain information about the characteristics of the waste streams and about the various components of the waste management system. Waste acceptance criteria must be known for each component of the waste management system. Performance assessment for each component requires data about properties of the waste streams and operational and design characteristics of the processing or disposal components. 34 refs., 2 figs., 1 tab.

Meshkov, N.K.; Herzenberg, C.L.; Camasta, S.F.

1988-01-01T23:59:59.000Z

421

Flammability Assessment Methodology Program Phase I: Final Report  

SciTech Connect (OSTI)

The Flammability Assessment Methodology Program (FAMP) was established to investigate the flammability of gas mixtures found in transuranic (TRU) waste containers. The FAMP results provide a basis for increasing the permissible concentrations of flammable volatile organic compounds (VOCs) in TRU waste containers. The FAMP results will be used to modify the ''Safety Analysis Report for the TRUPACT-II Shipping Package'' (TRUPACT-II SARP) upon acceptance of the methodology by the Nuclear Regulatory Commission. Implementation of the methodology would substantially increase the number of drums that can be shipped to the Waste Isolation Pilot Plant (WIPP) without repackaging or treatment. Central to the program was experimental testing and modeling to predict the gas mixture lower explosive limit (MLEL) of gases observed in TRU waste containers. The experimental data supported selection of an MLEL model that was used in constructing screening limits for flammable VOC and flammable gas concentrations. The MLEL values predicted by the model for individual drums will be utilized to assess flammability for drums that do not meet the screening criteria. Finally, the predicted MLEL values will be used to derive acceptable gas generation rates, decay heat limits, and aspiration time requirements for drums that do not pass the screening limits. The results of the program demonstrate that an increased number of waste containers can be shipped to WIPP within the flammability safety envelope established in the TRUPACT-II SARP.
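
The specific MLEL model selected by FAMP is not given in this abstract. One widely used mixture model of this general type is Le Chatelier's rule, sketched below purely for illustration; the gas names and lower-explosive-limit values are generic literature-style placeholders, and only the flammable components (renormalized to exclude diluents) enter the sum.

```python
def le_chatelier_mlel(flammable_fractions, lel_percent):
    """Mixture lower explosive limit (vol %) via Le Chatelier's rule.

    flammable_fractions: mole fractions of the flammable components only,
    renormalized internally to sum to 1; lel_percent: each component's LEL in vol %.
    """
    total = sum(flammable_fractions)
    normalized = [f / total for f in flammable_fractions]
    return 1.0 / sum(y / lel for y, lel in zip(normalized, lel_percent))

if __name__ == "__main__":
    # Illustrative placeholders: hydrogen-rich headspace gas with some methane.
    fractions = [0.7, 0.3]   # H2, CH4 (flammable components only)
    lels = [4.0, 5.0]        # approximate literature LELs in vol %
    print(f"Estimated mixture LEL ~ {le_chatelier_mlel(fractions, lels):.2f} vol %")
```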

C. A. Loehr; S. M. Djordjevic; K. J. Liekhus; M. J. Connolly

1997-09-01T23:59:59.000Z

422

Nuclear power plant simulation facility evaluation methodology: handbook. Volume 1  

SciTech Connect (OSTI)

This report is Volume 1 of a two-part document which describes a project conducted to develop a methodology to evaluate the acceptability of nuclear power plant (NPP) simulation facilities for use in the simulator-based portion of NRC's operator licensing examination. The proposed methodology is to be utilized during two phases of the simulation facility life-cycle, initial simulator acceptance and recurrent analysis. The first phase is aimed at ensuring that the simulator provides an accurate representation of the reference NPP. There are two components of initial simulator evaluation: fidelity assessment and a direct determination of the simulation facility's adequacy for operator testing. The second phase is aimed at ensuring that the simulation facility continues to accurately represent the reference plant throughout the life of the simulator. Recurrent evaluation is comprised of three components: monitoring reference plant changes, monitoring the simulator's hardware, and examining the data from actual plant transients as they occur. Volume 1 is a set of guidelines which details the steps involved in the two life-cycle phases, presents an overview of the methodology and data collection requirements, and addresses the formation of the evaluation team and the preparation of the evaluation plan. 29 figs.

Laughery, K.R. Jr.; Carter, R.J.; Haas, P.M.

1986-01-01T23:59:59.000Z

423

Water Sample Concentrator  

ScienceCinema (OSTI)

Automated portable device that concentrates and packages a sample of suspected contaminated water for safe, efficient transport to a qualified analytical laboratory. This technology will help safeguard against pathogen contamination or chemical and biological

Idaho National Laboratory

2010-01-08T23:59:59.000Z

424

Dissolution actuated sample container  

DOE Patents [OSTI]

A sample collection vial and process of using a vial is provided. The sample collection vial has an opening secured by a dissolvable plug. When dissolved, liquids may enter into the interior of the collection vial passing along one or more edges of a dissolvable blocking member. As the blocking member is dissolved, a spring actuated closure is directed towards the opening of the vial which, when engaged, secures the vial contents against loss or contamination.

Nance, Thomas A.; McCoy, Frank T.

2013-03-26T23:59:59.000Z

425

SAMPLING AND ANALYSIS PROTOCOLS  

SciTech Connect (OSTI)

Radiological sampling and analyses are performed to collect data for a variety of specific reasons covering a wide range of projects. These activities include: Effluent monitoring; Environmental surveillance; Emergency response; Routine ambient monitoring; Background assessments; Nuclear license termination; Remediation; Deactivation and decommissioning (D&D); and Waste management. In this chapter, effluent monitoring and environmental surveillance programs at nuclear operating facilities and radiological sampling and analysis plans for remediation and D&D activities will be discussed.

Jannik, T; P Fledderman, P

2007-02-09T23:59:59.000Z

426

Liquid sampling system  

DOE Patents [OSTI]

A conduit extends from a reservoir through a sampling station and back to the reservoir in a closed loop. A jet ejector in the conduit establishes suction for withdrawing liquid from the reservoir. The conduit has a self-healing septum therein upstream of the jet ejector for receiving one end of a double-ended cannula, the other end of which is received in a serum bottle for sample collection. Gas is introduced into the conduit at a gas bleed between the sample collection bottle and the reservoir. The jet ejector evacuates gas from the conduit and the bottle and aspirates a column of liquid from the reservoir at a high rate. When the withdrawn liquid reaches the jet ejector the rate of flow therethrough reduces substantially and the gas bleed increases the pressure in the conduit for driving liquid into the sample bottle, the gas bleed forming a column of gas behind the withdrawn liquid column and interrupting the withdrawal of liquid from the reservoir. In the case of hazardous and toxic liquids, the sample bottle and the jet ejector may be isolated from the reservoir and may be further isolated from a control station containing remote manipulation means for the sample bottle and control valves for the jet ejector and gas bleed. 5 figs.

Larson, L.L.

1984-09-17T23:59:59.000Z

427

The white dwarf luminosity function. I. Statistical errors and alternatives  

E-Print Network [OSTI]

Over the years, several methods have been proposed to compute galaxy luminosity functions, from the most simple ones -counting sample objects inside a given volume- to very sophisticated ones -like the C- method, the STY method or the Choloniewski method, among others. However, only the V/Vmax method is usually employed in computing the white dwarf luminosity function and other methods have not been applied so far to the observational sample of spectroscopically identified white dwarfs. Moreover, the statistical significance of the white dwarf luminosity function has also received little attention and a thorough study still remains to be done. In this paper we study, using a controlled synthetic sample of white dwarfs generated using a Monte Carlo simulator, which is the statistical significance of the white dwarf luminosity function and which are the expected biases. We also present a comparison between different estimators for computing the white dwarf luminosity function. We find that for sample sizes large enough the V/Vmax method provides a reliable characterization of the white dwarf luminosity function, provided that the input sample is selected carefully. Particularly, the V/Vmax method recovers well the position of the cut-off of the white dwarf luminosity function. However, this method turns out to be less robust than the Choloniewski method when the possible incompletenesses of the sample are taken into account. We also find that the Choloniewski method performs better than the V/Vmax method in estimating the overall density of white dwarfs, but misses the exact location of the cut-off of the white dwarf luminosity function.
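
To make the V/Vmax idea concrete, here is a schematic Python sketch of the classical 1/Vmax luminosity-function estimator (not the paper's Monte Carlo code): each object contributes the inverse of the maximum volume within which it would still have entered the magnitude-limited sample, and contributions are binned in absolute magnitude. The survey parameters and sample magnitudes are invented.

```python
import numpy as np

def vmax_luminosity_function(abs_mag, m_limit, solid_angle_sr, bins):
    """Schematic 1/Vmax estimator for a magnitude-limited sample.

    abs_mag: absolute magnitudes of the sample objects
    m_limit: survey apparent-magnitude limit
    solid_angle_sr: surveyed solid angle in steradians
    bins: absolute-magnitude bin edges
    Returns space densities per magnitude bin (objects per pc^3 per bin).
    """
    abs_mag = np.asarray(abs_mag, dtype=float)
    # Maximum distance (pc) at which each object stays brighter than the limit.
    d_max = 10.0 ** (0.2 * (m_limit - abs_mag) + 1.0)
    v_max = (solid_angle_sr / 3.0) * d_max ** 3   # cone volume out to d_max
    phi, _ = np.histogram(abs_mag, bins=bins, weights=1.0 / v_max)
    return phi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample_mags = rng.uniform(10.0, 16.0, size=200)   # made-up absolute magnitudes
    edges = np.arange(10.0, 16.5, 0.5)
    print(vmax_luminosity_function(sample_mags, m_limit=18.5,
                                   solid_angle_sr=1.0, bins=edges))
```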

E. M. Geijo; S. Torres; J. Isern; E. Garcia-Berro

2006-03-22T23:59:59.000Z

428

Technical bases and guidance for the use of composite soil sampling for demonstrating compliance with radiological release criteria  

SciTech Connect (OSTI)

This guidance provides information on methodologies and the technical bases that licensees should consider for incorporating composite sampling strategies into final status survey (FSS) plans. In addition, this guidance also includes appropriate uses of composite sampling for generating the data for other decommissioning site investigations such as characterization or other preliminary site investigations.

Vitkus, Timothy J. [Oak Ridge Institute for Science and Education, Oak Ridge, TN (United States). Independent Environmental Assessment and Verification Program

2012-04-24T23:59:59.000Z

429

A New Methodology for Frequency Domain Analysis of Wave Energy Converters with Periodically Varying Physical Parameters  

E-Print Network [OSTI]

A New Methodology for Frequency Domain Analysis of Wave Energy Converters with Periodically Varying Physical Parameters. (Department of Mechanical Engineering) ABSTRACT: Within a wave energy converter's operational bandwidth, device operation

Victoria, University of

430

Analysis Methodology for Large Organizations' Investments in Energy Retrofit of Buildings  

E-Print Network [OSTI]

This paper presents a formal methodology that supports large organizations' investments in energy retrofit of buildings. The methodology is a scalable modeling approach based on normative models and Bayesian calibration. Normative models are a light...

Heo, Y.; Augenbroe, G.

2011-01-01T23:59:59.000Z

431

An integrated methodology for the performance and reliability evaluation of fault-tolerant systems  

E-Print Network [OSTI]

This thesis proposes a new methodology for the integrated performance and reliability evaluation of embedded fault-tolerant systems used in aircraft, space, tactical, and automotive applications. This methodology uses a ...

Domínguez-García, Alejandro D. (Alejandro Dan)

2007-01-01T23:59:59.000Z

432

An Experimental Methodology to Evaluate Concept Generation Procedures Based on Quantitative Lifecycle Performance  

E-Print Network [OSTI]

This study presents an experimental methodology to measure how concept generation procedures can affect the anticipated lifecycle performance of engineering systems design concepts. The methodology is based on objective ...

Cardin, Michel-Alexandre

433

System Modeling, Analysis, and Optimization Methodology for Diesel Exhaust After-treatment Technologies  

E-Print Network [OSTI]

System Modeling, Analysis, and Optimization Methodology for Diesel Exhaust After-treatment Technologies. Developing new aftertreatment technologies to meet emission regulations for diesel engines is a growing

de Weck, Olivier L.

434

Methodology for the Determination of Potential Energy Savings in Commercial Buildings  

E-Print Network [OSTI]

This paper describes a methodology to determine potential energy savings of buildings with limited information. This methodology is based upon the simplified energy analysis procedure of heating, ventilation and air condition (HVAC) systems...

Baltazar-Cervantes, J. C.; Claridge, D. E.

2007-01-01T23:59:59.000Z

435

A methodology for in-situ calibration of steam boiler instrumentation  

E-Print Network [OSTI]

This thesis presents a broadly useful diagnostic methodology to engineers and plant managers for finding the in-situ operating characteristics of power plant boilers when metered data is either missing or obviously erroneous. The methodology is able...

Wei, Guanghua

1997-01-01T23:59:59.000Z

436

The potential for reducing carbon emissions from increased efficiency : a general equilibrium methodology  

E-Print Network [OSTI]

This paper presents a methodology for analyzing the potential for reduction in carbon emissions through increased fuel efficiency and provides an illustration of the method. The methodology employed is a multisectoral, ...

Blitzer, Charles R.

1990-01-01T23:59:59.000Z

437

Minimum error discrimination between similarity-transformed quantum states  

SciTech Connect (OSTI)

Using the well-known necessary and sufficient conditions for minimum error discrimination (MED), we extract an equivalent form for the MED conditions. In fact, by replacing the inequalities corresponding to the MED conditions with an equivalent but more suitable and convenient identity, the problem of mixed state discrimination with optimal success probability is solved. Moreover, we show that the mentioned optimality conditions can be viewed as a Helstrom family of ensembles under some circumstances. Using the given identity, MED between N similarity transformed equiprobable quantum states is investigated. In the case that the unitary operators are generating a set of irreducible representation, the optimal set of measurements and corresponding maximum success probability of discrimination can be determined precisely. In particular, it is shown that for equiprobable pure states, the optimal measurement strategy is the square-root measurement (SRM), whereas for the mixed states, SRM is not optimal. In the case that the unitary operators are reducible, there is no closed-form formula in the general case, but the procedure can be applied in each case in accordance to that case. Finally, we give the maximum success probability of optimal discrimination for some important examples of mixed quantum states, such as generalized Bloch sphere m-qubit states, spin-j states, particular nonsymmetric qudit states, etc.
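 
For the equiprobable pure-state case, where the abstract states the square-root measurement (SRM) is optimal, the following numpy sketch builds the SRM POVM elements M_i = p_i rho^(-1/2) |psi_i><psi_i| rho^(-1/2), with rho = sum_i p_i |psi_i><psi_i|, and evaluates the success probability. The two example states are arbitrary and not taken from the paper.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def square_root_measurement(states, probs):
    """SRM POVM elements for pure states |psi_i> with prior probabilities p_i."""
    projectors = [np.outer(s, s.conj()) for s in states]
    rho = sum(p * P for p, P in zip(probs, projectors))
    rho_inv_sqrt = fractional_matrix_power(rho, -0.5)
    return [p * rho_inv_sqrt @ P @ rho_inv_sqrt for p, P in zip(probs, projectors)]

def success_probability(povm, states, probs):
    """P_succ = sum_i p_i <psi_i| M_i |psi_i> for the given measurement."""
    return float(np.real(sum(p * (s.conj() @ M @ s)
                             for p, M, s in zip(probs, povm, states))))

if __name__ == "__main__":
    # Two arbitrary, non-orthogonal qubit states with equal priors.
    psi0 = np.array([1.0, 0.0], dtype=complex)
    psi1 = np.array([np.cos(0.4), np.sin(0.4)], dtype=complex)
    probs = [0.5, 0.5]
    povm = square_root_measurement([psi0, psi1], probs)
    print(f"SRM success probability: {success_probability(povm, [psi0, psi1], probs):.4f}")
```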

Jafarizadeh, M. A. [Department of Theoretical Physics and Astrophysics, University of Tabriz, Tabriz 51664 (Iran, Islamic Republic of); Institute for Studies in Theoretical Physics and Mathematics, Tehran 19395-1795 (Iran, Islamic Republic of); Research Institute for Fundamental Sciences, Tabriz 51664 (Iran, Islamic Republic of); Sufiani, R. [Department of Theoretical Physics and Astrophysics, University of Tabriz, Tabriz 51664 (Iran, Islamic Republic of); Institute for Studies in Theoretical Physics and Mathematics, Tehran 19395-1795 (Iran, Islamic Republic of); Mazhari Khiavi, Y. [Department of Theoretical Physics and Astrophysics, University of Tabriz, Tabriz 51664 (Iran, Islamic Republic of)

2011-07-15T23:59:59.000Z

438

A surrogate-based uncertainty quantification with quantifiable errors  

SciTech Connect (OSTI)

Surrogate models are often employed to reduce the computational cost required to complete uncertainty quantification, where one is interested in propagating input parameters uncertainties throughout a complex engineering model to estimate responses uncertainties. An improved surrogate construction approach is introduced here which places a premium on reducing the associated computational cost. Unlike existing methods where the surrogate is constructed first, then employed to propagate uncertainties, the new approach combines both sensitivity and uncertainty information to render further reduction in the computational cost. Mathematically, the reduction is described by a range finding algorithm that identifies a subspace in the parameters space, whereby parameters uncertainties orthogonal to the subspace contribute negligible amount to the propagated uncertainties. Moreover, the error resulting from the reduction can be upper-bounded. The new approach is demonstrated using a realistic nuclear assembly model and compared to existing methods in terms of computational cost and accuracy of uncertainties. Although we believe the algorithm is general, it will be applied here for linear-based surrogates and Gaussian parameters uncertainties. The generalization to nonlinear models will be detailed in a separate article. (authors)
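
A hedged sketch of the general idea (not the authors' algorithm): for a linear surrogate, parameter-uncertainty directions orthogonal to the range of the sensitivity (gradient) matrix contribute nothing to response uncertainty, so a low-rank range-finding step, here a plain SVD on a handful of gradient samples, identifies the active subspace and bounds the truncation error by the largest discarded singular value. Names, dimensions, and the synthetic data are invented.

```python
import numpy as np

def reduced_subspace(gradient_samples, tol=1e-6):
    """Find a low-dimensional parameter subspace capturing response sensitivity.

    gradient_samples: (n_params, n_samples) matrix of response gradients.
    Returns an orthonormal basis for the retained subspace and an upper bound
    on the neglected part (the largest discarded singular value).
    """
    U, s, _ = np.linalg.svd(gradient_samples, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    error_bound = s[r] if r < len(s) else 0.0
    return U[:, :r], error_bound

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_params, rank = 50, 3
    # Synthetic gradients that secretly live in a 3-dimensional subspace.
    basis = rng.standard_normal((n_params, rank))
    grads = basis @ rng.standard_normal((rank, 10))
    U_r, bound = reduced_subspace(grads)
    print(f"identified subspace dimension: {U_r.shape[1]}, truncation bound: {bound:.2e}")
```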

Bang, Y.; Abdel-Khalik, H. S. [North Carolina State Univ., Raleigh, NC 27695 (United States)

2012-07-01T23:59:59.000Z

439

Error analysis of nuclear forces and effective interactions  

E-Print Network [OSTI]

The Nucleon-Nucleon interaction is the starting point for ab initio Nuclear Structure and Nuclear reactions calculations. Those are effectively carried out via effective interactions fitting scattering data up to a maximal center of mass momentum. However, NN interactions are subjected to statistical and systematic uncertainties which are expected to propagate and have some impact on the predictive power and accuracy of theoretical calculations, regardless of the numerical accuracy of the method used to solve the many body problem. We stress the necessary conditions required for a correct and self-consistent statistical interpretation of the discrepancies between theory and experiment which enable a subsequent statistical error propagation and correlation analysis. We comprehensively discuss a stringent and recently proposed tail-sensitive normality test and provide a simple recipe to implement it. As an application, we analyze the deduced uncertainties and correlations of effective interactions in terms of Moshinsky-Skyrme parameters and effective field theory counterterms as derived from the bare NN potential containing One-Pion-Exchange and Chiral Two-Pion-Exchange interactions inferred from scattering data.

R. Navarro Perez; J. E. Amaro; E. Ruiz Arriola

2014-09-04T23:59:59.000Z

440

Subband coding of monochrome images over binary symmetric channels with error correction  

E-Print Network [OSTI]

Subband Coding of Monochrome Images over Binary Symmetric Channels with Error Correction. A Thesis by Denise M. Sheppard, submitted for the degree of Master of Science, December 1992. Major Subject: Electrical Engineering. Contents excerpt: state diagram analysis of error recovery; codebook design; performance results; error correction (algorithm, performance results); conclusion; references; appendices.

Sheppard, Denise M

2012-06-07T23:59:59.000Z

441

Method and apparatus for detecting timing errors in a system oscillator  

DOE Patents [OSTI]

A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
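
A software analogue of the comparison step described in the patent abstract (purely illustrative; the patent implements this in a hardware circuit): delay the sampled oscillator signal by one nominal period and flag any sample where the current and delayed logic levels disagree. The waveforms below are invented.

```python
def detect_timing_errors(samples, period):
    """Compare a digital oscillator trace with a one-period-delayed copy.

    samples: sequence of 0/1 logic levels sampled at a fixed rate.
    period: nominal oscillator period in samples.
    Returns indices where the trace differs from its delayed copy (timing errors).
    """
    return [i for i in range(period, len(samples)) if samples[i] != samples[i - period]]

if __name__ == "__main__":
    good = [0, 0, 1, 1] * 8                  # clean square wave, 4-sample period
    glitched = good[:17] + [1] + good[18:]   # one corrupted sample
    print("clean trace errors:   ", detect_timing_errors(good, 4))
    print("glitched trace errors:", detect_timing_errors(glitched, 4))
```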

Gliebe, Ronald J. (Library, PA); Kramer, William R. (Bethel Park, PA)

1993-01-01T23:59:59.000Z

442

Fluid sampling system  

DOE Patents [OSTI]

A fluid sampling system allows sampling of radioactive liquid without spillage. A feed tank is connected to a liquid transfer jet powered by a pumping chamber pressurized by compressed air. The liquid is pumped upwardly into a sampling jet of a venturi design having a lumen with an inlet, an outlet, a constricted middle portion, and a port located above the constricted middle portion. The liquid is passed under pressure through the constricted portion causing its velocity to increase and its pressure to be decreased, thereby preventing liquid from escaping. A septum sealing the port can be pierced by a two pointed hollow needle leading into a sample bottle also sealed by a pierceable septum affixed to one end. The bottle is evacuated by flow through the sample jet; cyclic variation in the sampler jet pressure periodically leaves the evacuated bottle with lower pressure than that of the port, thus causing solution to pass into the bottle. The remaining solution in the system is returned to the feed tank via a holding tank. 4 figs.

Houck, E.D.

1994-10-11T23:59:59.000Z

443

Fluid sampling system  

DOE Patents [OSTI]

A fluid sampling system allows sampling of radioactive liquid without spillage. A feed tank is connected to a liquid transfer jet powered by a pumping chamber pressurized by compressed air. The liquid is pumped upwardly into a sampling jet of a venturi design having a lumen with an inlet, an outlet, a constricted middle portion, and a port located above the constricted middle portion. The liquid is passed under pressure through the constricted portion causing its velocity to increase and its pressure to be decreased, thereby preventing liquid from escaping. A septum sealing the port can be pierced by a two pointed hollow needle leading into a sample bottle also sealed by a pierceable septum affixed to one end. The bottle is evacuated by flow through the sample jet; cyclic variation in the sampler jet pressure periodically leaves the evacuated bottle with lower pressure than that of the port, thus causing solution to pass into the bottle. The remaining solution in the system is returned to the feed tank via a holding tank.

Houck, Edward D. (Idaho Falls, ID)

1994-01-01T23:59:59.000Z

444

Nevada National Security Site Integrated Groundwater Sampling Plan, Revision 0  

SciTech Connect (OSTI)

The purpose of the Nevada National Security Site (NNSS) Integrated Sampling Plan (referred to herein as the Plan) is to provide a comprehensive, integrated approach for collecting and analyzing groundwater samples to meet the needs and objectives of the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Field Office (NNSA/NFO) Underground Test Area (UGTA) Activity. Implementation of this Plan will provide high-quality data required by the UGTA Activity for ensuring public protection in an efficient and cost-effective manner. The Plan is designed to ensure compliance with the UGTA Quality Assurance Plan (QAP). The Plan’s scope comprises sample collection and analysis requirements relevant to assessing the extent of groundwater contamination from underground nuclear testing. This Plan identifies locations to be sampled by corrective action unit (CAU) and location type, sampling frequencies, sample collection methodologies, and the constituents to be analyzed. In addition, the Plan defines data collection criteria such as well-purging requirements, detection levels, and accuracy requirements; identifies reporting and data management requirements; and provides a process to ensure coordination between NNSS groundwater sampling programs for sampling of interest to UGTA. This Plan does not address compliance with requirements for wells that supply the NNSS public water system or wells involved in a permitted activity.

Marutzky, Sam; Farnham, Irene

2014-10-01T23:59:59.000Z

445

Eccentricity Error Correction for Automated Estimation of Polyethylene Wear after Total Hip Arthroplasty  

E-Print Network [OSTI]

Eccentricity Error Correction for Automated Estimation of Polyethylene Wear after Total Hip Arthroplasty. Wire markers are typically attached to the polyethylene acetabular component of the prosthesis so …

St Andrews, University of

446

Efficient Small Area Estimation in the Presence of Measurement Error in Covariates  

E-Print Network [OSTI]

List-of-tables excerpt: … for the four estimators y_i, Y_i^S, Y_i^ME, and Y_i^SIMEX when the number of small areas is 100, measurement error variance C_i = 3, and σ_v^2 = 4; k is the percentage of areas having auxiliary information measured with error. … Absolute value … Jackknife estimates of the mean squared error of the Lohr-Ybarra estimator Y_i^ME and the SIMEX estimator Y_i^SIMEX when the number of small areas is 100, measurement error variance C_i = 2, and σ_v^2 = 4; k is the percentage of areas having …
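The excerpt mentions a SIMEX estimator; the following is a minimal sketch of the general simulation-extrapolation idea for a naive slope estimator under additive measurement error. The data-generating values, the number of replicates, and the quadratic extrapolation are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

# Minimal SIMEX sketch (illustrative; not the thesis's estimator).
rng = np.random.default_rng(0)
n, beta, sigma_u2 = 500, 2.0, 1.0          # true slope and known error variance
x = rng.normal(size=n)
y = beta * x + rng.normal(scale=0.5, size=n)
w = x + rng.normal(scale=np.sqrt(sigma_u2), size=n)   # covariate observed with error

def naive_slope(w, y):
    # Ordinary least-squares slope, attenuated by the measurement error in w.
    return np.cov(w, y, bias=True)[0, 1] / np.var(w)

# Simulation step: add extra error with variance lambda * sigma_u2.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([naive_slope(w + rng.normal(scale=np.sqrt(l * sigma_u2), size=n), y)
                for _ in range(50)]) for l in lambdas]

# Extrapolation step: fit a quadratic in lambda and evaluate at lambda = -1,
# which moves the estimate from about 1.0 back toward the true slope of 2.0.
coef = np.polyfit(lambdas, est, deg=2)
print("naive:", round(est[0], 3), " SIMEX:", round(np.polyval(coef, -1.0), 3))
```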

Singh, Trijya

2012-10-19T23:59:59.000Z

447

SPPI ORIGINAL PAPER October 11, 2011 GROSS ERRORS IN THE IPCC-AR4  

E-Print Network [OSTI]

SPPI Original Paper, October 11, 2011: Gross Errors in the IPCC-AR4 Report Regarding Past and Future … (table-of-contents excerpt, including entries for a figure and a George Will quote).

Gray, William

448

Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar  

DOE Patents [OSTI]

Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
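One ingredient of comparing range profiles across the slow-time dimension is estimating the range shift between profiles; the sketch below does this by brute-force correlation on synthetic data. The Gaussian point-target profile, the 3-bin shift, and the closing comment about correction are assumptions for illustration, not the patented algorithm.

```python
import numpy as np

# Illustrative sketch: estimate the range-bin shift between two range profiles
# by scoring candidate shifts. The profile model and shift are made up.
def estimate_shift(profile_a, profile_b, max_shift=10):
    shifts = np.arange(-max_shift, max_shift + 1)
    scores = [np.dot(np.roll(profile_a, s), profile_b) for s in shifts]
    return int(shifts[int(np.argmax(scores))])

bins = np.arange(256)
profile_a = np.exp(-0.5 * ((bins - 100) / 3.0) ** 2)   # point-target response
profile_b = np.roll(profile_a, 3)                      # motion error shifts it 3 bins

print("estimated shift (bins):", estimate_shift(profile_a, profile_b))
# A corresponding frequency/phase correction could then be applied to the
# uncompressed data before range and azimuth compression (hypothetical step).
```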

Doerry, Armin W. (Albuquerque, NM); Heard, Freddie E. (Albuquerque, NM); Cordaro, J. Thomas (Albuquerque, NM)

2008-06-24T23:59:59.000Z

449

Neutron Soft Errors in Xilinx FPGAs at Lawrence Berkeley National Laboratory  

E-Print Network [OSTI]

“Quasi-Monoenergetic Neutron Beam from Deuteron Breakup” … in experiments of atmospheric neutron effects on deep sub- … Neutron Soft Errors in Xilinx FPGAs at Lawrence Berkeley

George, Jeffrey S.

2008-01-01T23:59:59.000Z

450

Abstract Error Groups Via Jones Unitary Braid Group Representations at q=i  

E-Print Network [OSTI]

In this paper, we classify a type of abstract groups by the central products of dihedral groups and quaternion groups. We recognize them as abstract error groups which are often not isomorphic to the Pauli groups in the literature. We show the corresponding nice error bases equivalent to the Pauli error bases modulo phase factors. The extension of these abstract groups by the symmetric group are finite images of the Jones unitary representations (or modulo a phase factor) of the braid group at q=i or r=4. We hope this work can finally lead to new families of quantum error correction codes via the representation theory of the braid group.

Yong Zhang

2009-02-02T23:59:59.000Z

451

Numerical study of the effect of normalised window size, sampling frequency, and noise level on short time Fourier transform analysis  

SciTech Connect (OSTI)

Photonic Doppler velocimetry, also known as heterodyne velocimetry, is a widely used optical technique that requires the analysis of frequency modulated signals. This paper describes an investigation into the errors of short time Fourier transform analysis. The number of variables requiring investigation was reduced by means of an equivalence principle. Error predictions, as the number of cycles, samples per cycle, noise level, and window type were varied, are presented. The results were found to be in good agreement with analytical models.
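As an illustration of the kind of numerical experiment described, the sketch below applies a short time Fourier transform to a synthetic frequency-modulated signal and reports the error in the recovered peak frequency; the sampling rate, window length, and noise level are arbitrary choices, not the paper's values.

```python
import numpy as np
from scipy.signal import stft

# Illustrative STFT error check on a synthetic chirp; parameters are arbitrary,
# not those studied in the paper.
fs = 10_000.0                        # samples per second
t = np.arange(0, 1.0, 1.0 / fs)
f_true = 1_000.0 + 500.0 * t         # linearly increasing frequency (Hz)
phase = 2 * np.pi * np.cumsum(f_true) / fs
signal = np.sin(phase) + 0.05 * np.random.default_rng(1).normal(size=t.size)

f, tau, Z = stft(signal, fs=fs, window="hann", nperseg=256)
f_est = f[np.argmax(np.abs(Z), axis=0)]     # peak frequency in each time slice
f_ref = np.interp(tau, t, f_true)           # true frequency at those times

err = f_est[1:-1] - f_ref[1:-1]             # drop the padded edge windows
print("rms frequency error (Hz):", np.sqrt(np.mean(err ** 2)))
```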

Ota, T. A. [AWE, Aldermaston, Reading, Berkshire RG7 4PR (United Kingdom)]

2013-10-15T23:59:59.000Z

452

Viscous sludge sample collector  

DOE Patents [OSTI]

A vertical core sample collection system for viscous sludge. A sample tube's upper end has a flange and is attached to a piston. The tube and piston are located in the upper end of a bore in a housing. The bore's lower end leads outside the housing and has an inwardly extending rim. Compressed gas, from a storage cylinder, is quickly introduced into the bore's upper end to rapidly accelerate the piston and tube down the bore. The lower end of the tube has a high sludge entering velocity to obtain a full-length sludge sample without disturbing strata detail. The tube's downward motion is stopped when its upper end flange impacts against the bore's lower end inwardly extending rim.

Beitel, George A [Richland, WA

1983-01-01T23:59:59.000Z

453

A High-Performance Embedded Hybrid Methodology for Uncertainty Quantification With Applications  

SciTech Connect (OSTI)

Multiphysics processes modeled by a system of unsteady differential equations are naturally suited for partitioned (modular) solution strategies. We consider such a model where probabilistic uncertainties are present in each module of the system and represented as a set of random input parameters. A straightforward approach in quantifying uncertainties in the predicted solution would be to sample all the input parameters into a single set, and treat the full system as a black box. Although this method is easily parallelizable and requires minimal modifications to a deterministic solver, it is blind to the modular structure of the underlying multiphysical model. On the other hand, using spectral representations such as polynomial chaos expansions (PCE) can provide richer structural information regarding the dynamics of these uncertainties as they propagate from the inputs to the predicted output, but can be prohibitively expensive to implement in the high-dimensional global space of uncertain parameters. Therefore, we investigated hybrid methodologies wherein each module has the flexibility of using sampling or PCE based methods of capturing local uncertainties while maintaining accuracy in the global uncertainty analysis. For the latter case, we use a conditional PCE model which mitigates the curse of dimension associated with intrusive Galerkin or semi-intrusive Pseudospectral methods. After formalizing the theoretical framework, we demonstrate our proposed method using a numerical viscous flow simulation and benchmark the performance against a solely Monte-Carlo method and a solely spectral method.
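To make the sampling-versus-spectral trade-off concrete, here is a toy comparison (not the authors' solver or test case) of plain Monte Carlo against a low-order Hermite-polynomial surrogate, in the spirit of a one-dimensional PCE, fitted by least squares for a single scalar model with one Gaussian input.

```python
import numpy as np

# Toy comparison of Monte Carlo vs. a Hermite polynomial surrogate (PCE-like)
# for one Gaussian input; the model and orders are illustrative only.
rng = np.random.default_rng(42)
model = lambda xi: np.exp(0.3 * xi) + 0.1 * xi**2      # stand-in "module" response

# Plain Monte Carlo estimate of the output mean.
xi_mc = rng.standard_normal(100_000)
mc_mean = model(xi_mc).mean()

# Degree-4 surrogate fitted on a handful of samples (probabilists' Hermite basis).
xi_fit = rng.standard_normal(50)
coeffs = np.polynomial.hermite_e.hermefit(xi_fit, model(xi_fit), deg=4)
# For HermE polynomials with a standard normal input, E[He_0] = 1 and
# E[He_k] = 0 for k > 0, so the surrogate's mean is just the zeroth coefficient.
pce_mean = coeffs[0]

print("Monte Carlo mean:", round(mc_mean, 4), " surrogate mean:", round(pce_mean, 4))
```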

Iaccarino, Gianluca

2014-04-01T23:59:59.000Z

454

Determining the Bayesian optimal sampling strategy in a hierarchical system.  

SciTech Connect (OSTI)

Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
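A stripped-down illustration of the "which component should we test next" question, using a Beta-Binomial model and expected posterior-variance reduction as a stand-in for the Bayes risk described above; the priors, test costs, and two-component system are invented for the example.

```python
import numpy as np

# Minimal sketch: choose which component to test next by the expected reduction
# in posterior variance of its reliability (a stand-in for the Bayes risk).
# Priors and test costs are invented for the example.
def beta_var(a, b):
    return a * b / ((a + b) ** 2 * (a + b + 1.0))

def expected_var_after_one_test(a, b):
    p_pass = a / (a + b)                     # predictive probability of a pass
    return p_pass * beta_var(a + 1, b) + (1 - p_pass) * beta_var(a, b + 1)

# Component priors as (pass pseudo-counts, fail pseudo-counts, test cost).
components = {"pump": (8.0, 2.0, 1.0), "valve": (3.0, 3.0, 2.0)}

for name, (a, b, cost) in components.items():
    gain = beta_var(a, b) - expected_var_after_one_test(a, b)
    print(f"{name}: variance reduction per unit cost = {gain / cost:.5f}")
# The sampling strategy with the largest reduction per unit cost would be chosen.
```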

Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre

2010-09-01T23:59:59.000Z

455

A Methodology to Assess the Value of Integrated Hydropower and Wind Generation  

E-Print Network [OSTI]

A Methodology to Assess the Value of Integrated Hydropower and Wind Generation, a thesis by Mitch A. Clement; thesis directed by Professor … (title and approval-page excerpt).

456

Application of life cycle assessment methodology at Ontario Hydro  

SciTech Connect (OSTI)

Ontario Hydro is an electrical utility located in Ontario, Canada. In 1995, Ontario Hydro adopted Sustainable Energy Development Policy and Principles that include the governing principle: "Ontario Hydro will integrate environmental and social factors into its planning, decision-making, and business practices." Life cycle assessment was identified as a useful tool for evaluating environmental impacts of products and processes in support of decision-making. Ontario Hydro has developed a methodology for life cycle assessment (LCA) that is consistent with generally accepted practices, practical, and suitable for application in Ontario Hydro Business Units. The methodology is based on that developed by the Society of Environmental Toxicology and Chemistry (SETAC) but follows a pragmatic and somewhat simplified approach. In scoping an LCA, the breadth and depth of analysis are compatible with and sufficient to address the stated goal of the study. The depth of analysis is tied to (i) the dollar value of the commodity, process or activity being assessed, (ii) the degree of freedom available to the assessor to make meaningful choices among options, and (iii) the importance of the environmental or technological issues leading to the evaluation. A pilot study was completed to apply the methodology to an LCA of the light vehicle fleet (cars, vans and light pick-up trucks) at Ontario Hydro. The objective of the LCA was to compare the life cycle impacts of alternative vehicle fuel cycles: gasoline, diesel, natural gas, propane, and alcohol; with particular focus on life cycle emissions, efficiency and cost. The study concluded that for large vehicles (1/2 ton and 3/4 ton) that travel more than 35000 km/year, natural gas and propane fuelling offer both cost reduction and emissions reduction when compared to gasoline vehicles.

Reuber, B.; Khan, A. [Ontario Hydro, Ontario (Canada)

1996-12-31T23:59:59.000Z

457

A methodology for evaluating ``new`` technologies in nuclear power plants  

SciTech Connect (OSTI)

As obsolescence and spare parts issues drive nuclear power plants to upgrade with new technology (such as optical fiber communication systems), the ability of the new technology to withstand stressors present where it is installed needs to be determined. In particular, new standards may be required to address qualification criteria and their application to the nuclear power plants of tomorrow. This paper discusses the failure modes and age-related degradation mechanisms of fiber optic communication systems, and suggests a methodology for identifying when accelerated aging should be performed during qualification testing.

Korsah, K.; Clark, R.L.; Holcomb, D.E.

1994-06-01T23:59:59.000Z

458

Photovoltaic-system costing-methodology development. Final report  

SciTech Connect (OSTI)

Presented are the results of a study to expand the use of standardized costing methodologies in the National Photovoltaics Program. The costing standards, which include SAMIS for manufacturing costs and M and D for marketing and distribution costs, have been applied to concentrator collectors and power-conditioning units. The M and D model was also computerized. Finally, a uniform construction cost-accounting structure was developed for use in photovoltaic test and application projects. The appendices contain example cases which demonstrate the use of the models.

Not Available

1982-07-01T23:59:59.000Z

459

Methodology and Analysis Monthly Natural Gas Gross Production Report  

Annual Energy Outlook 2013 [U.S. Energy Information Administration (EIA)]


460

Methodology for EIA Weekly Underground Natural Gas Storage Estimates  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)




461

Methodology to Quantify Non-Program Savings Update  

Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


462

Surface photometry of a sample of elliptical and S0 galaxies  

SciTech Connect (OSTI)

The results are reported of surface photometry of 38 early-type galaxies, located mainly in the Fornax Cluster. Detailed comparisons with previously published work are given along with internal and external error estimates for all quantities, and some serious systematic discrepancies in the older aperture photometry of some of the galaxies in the present sample are pointed out. 15 refs.

de Carvalho, R.R.; da Costa, L.N.; Djorgovski, S. (Observatorio Nacional do Brasil, Sao Cristovao (Brazil); California Institute of Technology, Pasadena (United States))

1991-08-01T23:59:59.000Z

463

EMERGING MODALITIES FOR SOIL CARBON ANALYSIS: SAMPLING STATISTICS AND ECONOMICS WORKSHOP.  

SciTech Connect (OSTI)

The workshop's main objectives are (1) to present the emerging modalities for analyzing carbon in soil, (2) to assess their error propagation, (3) to recommend new protocols and sampling strategies for the new instrumentation, and, (4) to compare the costs of the new methods with traditional chemical ones.

WIELOPOLSKI, L.

2006-04-01T23:59:59.000Z

464

Cyclone aerosol sampling and particle deposition in tubing elements following elbow bends  

E-Print Network [OSTI]

Table-of-contents and nomenclature excerpt: … on deposition; influence of an elbow bend on straight tube deposition; discussion of errors; summary and conclusions (ambient air sampling; aerosol transport); future work. Nomenclature includes Cunningham's correction factor for a particle, reference particle concentration, concentration of the sodium fluorescein collected at the inlet to the system, cutpoint diameter, aerodynamic equivalent diameter, cyclone body diameter, and cyclone outlet tube …

Wente, William Baker

1995-01-01T23:59:59.000Z

465

Environmental Science: Sample Pathway  

E-Print Network [OSTI]

Environmental Science: Sample Pathway. Semester I / Semester II. Freshman Year: CGS Core; CGS Core; GE 100 Intro to Env Science; ES 105 Env Earth Science. Sophomore Year: CGS Core (CGS NS201 will fulfill CAS BI107 & 124); MA 115 Statistics; Summer: Environmental Internship. Junior Year: CH 171 Chem for Health Sciences; CH …

Goldberg, Bennett

466

ASSESSMENT OF SEISMIC ANALYSIS METHODOLOGIES FOR DEEPLY EMBEDDED NPP STRUCTURES.  

SciTech Connect (OSTI)

Several of the new generation nuclear power plant designs have structural configurations which are proposed to be deeply embedded. Since current seismic analysis methodologies have been applied to shallow embedded structures (e.g., ASCE 4 suggests that simple formulations may be used to model the embedment effect when the depth of embedment is less than 30% of its foundation radius), the US Nuclear Regulatory Commission is sponsoring a program at the Brookhaven National Laboratory with the objective of investigating the extent to which procedures acceptable for shallow embedment depths are adequate for larger embedment depths. This paper presents the results of a study comparing the response spectra obtained from two of the more popular analysis methods for structural configurations varying from shallow embedment to complete embedment. A typical safety related structure embedded in a soil profile representative of a typical nuclear power plant site was utilized in the study, and the depths of burial (DOB) considered range from 25% to 100% of the height of the structure. Included in the paper are: (1) the description of a simplified analysis and a detailed approach for the SSI analyses of a structure with various DOB, (2) the comparison of the analysis results for the different DOBs between the two methods, and (3) the performance assessment of the analysis methodologies for SSI analyses of deeply embedded structures. The resulting assessment from this study has indicated that simplified methods may be capable of capturing the seismic response for much deeper embedded structures than would be normally allowed by the standard practice.

XU, J.; MILLER, C.; COSTANTINO, C.; HOFMAYER, C. (BNL); GRAVES, H. (US NRC).

2005-07-01T23:59:59.000Z

467

Methodology Using MELCOR Code to Model Proposed Hazard Scenario  

SciTech Connect (OSTI)

This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of a leak path factor (LPF), the amount of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the amount of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. The study shows how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). The study also briefly addresses particle characteristics that affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.
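As a purely arithmetical illustration of the combinatory step described (and not MELCOR output), chaining stage-wise leak path factors multiplicatively gives a total LPF and a released source term; the stage values and the 100 g respirable inventory are made-up inputs.

```python
# Arithmetic illustration of chaining leak path factors (not MELCOR results).
# The stage values and the 100 g respirable inventory are made-up inputs.
stage_lpfs = {"room -> corridor": 0.5, "corridor -> building exterior": 0.5}

total_lpf = 1.0
for stage, lpf in stage_lpfs.items():
    total_lpf *= lpf        # fraction surviving each stage of the leak path

respirable_inventory_g = 100.0
released_g = respirable_inventory_g * total_lpf
print(f"total LPF = {total_lpf:.2f}, released source term = {released_g:.1f} g")
# Reproduces the assumed 0.5 x 0.5 multiplication discussed in the study.
```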

Gavin Hawkley

2010-07-01T23:59:59.000Z

468

Renewable Energy Assessment Methodology for Japanese OCONUS Army Installations  

SciTech Connect (OSTI)

Since 2005, Pacific Northwest National Laboratory (PNNL) has been asked by Installation Management Command (IMCOM) to conduct strategic assessments at selected US Army installations of the potential use of renewable energy resources, including solar, wind, geothermal, biomass, waste, and ground source heat pumps (GSHPs). IMCOM has the same economic, security, and legal drivers to develop alternative, renewable energy resources overseas as it has for installations located in the US. The approach for continental US (CONUS) studies has been to use known, US-based renewable resource characterizations and information sources coupled with local, site-specific sources and interviews. However, the extent to which this sort of data might be available for outside the continental US (OCONUS) sites was unknown. An assessment at Camp Zama, Japan was completed as a trial to test the applicability of the CONUS methodology at OCONUS installations. It was found that, with some help from Camp Zama personnel in translating and locating a few Japanese sources, there was relatively little difficulty in finding sources that should provide a solid basis for conducting an assessment of comparable depth to those conducted for US installations. Project implementation will likely be more of a challenge, but the feasibility analysis will be able to use the same basic steps, with some adjusted inputs, as PNNL’s established renewable resource assessment methodology.

Solana, Amy E.; Horner, Jacob A.; Russo, Bryan J.; Gorrissen, Willy J.; Kora, Angela R.; Weimar, Mark R.; Hand, James R.; Orrell, Alice C.; Williamson, Jennifer L.

2010-08-30T23:59:59.000Z

469

PREPARED TESTIMONY OF ROBERT B. WEISENMILLER, PH.D. Qualifying Facilities: Resource Planning and Avoided Costs Methodology

E-Print Network [OSTI]

Table-of-contents excerpt: Qualifying Facilities: Resource Planning and Avoided Costs Methodology; Energy and Capacity Payments. 1. CPUC Order Instituting … Testimony on Long Run Avoided Cost Methodology for the California Manufacturers Association, Department …

470

Decoupled Sampling for Graphics Pipelines  

E-Print Network [OSTI]

We propose a generalized approach to decoupling shading from visibility sampling in graphics pipelines, which we call decoupled sampling. Decoupled sampling enables stochastic supersampling of motion and defocus blur at ...

Ragan-Kelley, Jonathan Millar

471

Fluid sampling apparatus and method  

DOE Patents [OSTI]

Incorporation of a bellows in a sampling syringe eliminates ingress of contaminants, permits replication of amounts and compression of multiple sample injections, and enables remote sampling for off-site analysis. 3 figs.

Yeamans, D.R.

1998-02-03T23:59:59.000Z

472

Sample introducing apparatus and sample modules for mass spectrometer  

DOE Patents [OSTI]

An apparatus for introducing gaseous samples from a wide range of environmental matrices into a mass spectrometer for analysis of the samples is described. Several sample preparing modules including a real-time air monitoring module, a soil/liquid purge module, and a thermal desorption module are individually and rapidly attachable to the sample introducing apparatus for supplying gaseous samples to the mass spectrometer. The sample-introducing apparatus uses a capillary column for conveying the gaseous samples into the mass spectrometer and is provided with an open/split interface in communication with the capillary and a sample archiving port through which at least about 90 percent of the gaseous sample in a mixture with an inert gas that was introduced into the sample introducing apparatus is separated from a minor portion of the mixture entering the capillary discharged from the sample introducing apparatus.

Thompson, Cyril V. (Knoxville, TN); Wise, Marcus B. (Kingston, TN)

1993-01-01T23:59:59.000Z

473

Sample introducing apparatus and sample modules for mass spectrometer  

DOE Patents [OSTI]

An apparatus for introducing gaseous samples from a wide range of environmental matrices into a mass spectrometer for analysis of the samples is described. Several sample preparing modules including a real-time air monitoring module, a soil/liquid purge module, and a thermal desorption module are individually and rapidly attachable to the sample introducing apparatus for supplying gaseous samples to the mass spectrometer. The sample-introducing apparatus uses a capillary column for conveying the gaseous samples into the mass spectrometer and is provided with an open/split interface in communication with the capillary and a sample archiving port through which at least about 90 percent of the gaseous sample in a mixture with an inert gas that was introduced into the sample introducing apparatus is separated from a minor portion of the mixture entering the capillary discharged from the sample introducing apparatus. 5 figures.

Thompson, C.V.; Wise, M.B.

1993-12-21T23:59:59.000Z

474

Soil sampling kit and a method of sampling therewith  

DOE Patents [OSTI]

A soil sampling device and a sample containment device for containing a soil sample are disclosed. In addition, a method for taking a soil sample using the soil sampling device and soil sample containment device to minimize the loss of any volatile organic compounds contained in the soil sample prior to analysis is disclosed. The soil sampling device comprises two close fitting, longitudinal tubular members of suitable length, the inner tube having the outward end closed. With the inner closed tube withdrawn a selected distance, the outer tube can be inserted into the ground or other similar soft material to withdraw a sample of material for examination. The inner closed end tube controls the volume of the sample taken and also serves to eject the sample. The soil sample containment device has a sealing member which is adapted to attach to an analytical apparatus which analyzes the volatile organic compounds contained in the sample. The soil sampling device in combination with the soil sample containment device allows an operator to obtain a soil sample containing volatile organic compounds while minimizing the loss of the volatile organic compounds prior to analysis of the soil sample for the volatile organic compounds.

Thompson, Cyril V. (Knoxville, TN)

1991-01-01T23:59:59.000Z

475

Soil sampling kit and a method of sampling therewith  

DOE Patents [OSTI]

A soil sampling device and a sample containment device for containing a soil sample are disclosed. In addition, a method for taking a soil sample using the soil sampling device and soil sample containment device to minimize the loss of any volatile organic compounds contained in the soil sample prior to analysis is disclosed. The soil sampling device comprises two close fitting, longitudinal tubular members of suitable length, the inner tube having the outward end closed. With the inner closed tube withdrawn a selected distance, the outer tube can be inserted into the ground or other similar soft material to withdraw a sample of material for examination. The inner closed end tube controls the volume of the sample taken and also serves to eject the sample. The soil sample containment device has a sealing member which is adapted to attach to an analytical apparatus which analyzes the volatile organic compounds contained in the sample. The soil sampling device in combination with the soil sample containment device allows an operator to obtain a soil sample containing volatile organic compounds while minimizing the loss of the volatile organic compounds prior to analysis of the soil sample for the volatile organic compounds. 11 figures.

Thompson, C.V.

1991-02-05T23:59:59.000Z

476

DysList: An Annotated Resource of Dyslexic Errors Luz Rello,1  

E-Print Network [OSTI]

… of texts written by people with dyslexia. Each of the errors was annotated with a set of characteristics … of this kind, especially given the difficulty of finding texts written by people with dyslexia. Keywords: errors, dyslexia, visual, phonetics, resource. 1. Introduction. Dyslexia is a reading and spelling disorder …

477

Simple Loran Cycle Error Detection Algorithms for Maritime Harbor Entrance Approach  

E-Print Network [OSTI]

Simple Loran Cycle Error Detection Algorithms for Maritime Harbor Entrance Approach Operations cycle. This paper details and examines some of the algorithms being developed and analyzed by SC127. SC 127 is developing simplified eLoran cycle error detection algorithms for the eLoran HEA MPS. Correct

Stanford University

478

State preservation by repetitive error detection in a superconducting quantum circuit  

E-Print Network [OSTI]

Quantum computing becomes viable when a quantum state can be preserved from environmentally-induced error. If quantum bits (qubits) are sufficiently reliable, errors are sparse and quantum error correction (QEC) is capable of identifying and correcting them. Adding more qubits improves the preservation by guaranteeing increasingly larger clusters of errors will not cause logical failure - a key requirement for large-scale systems. Using QEC to extend the qubit lifetime remains one of the outstanding experimental challenges in quantum computing. Here, we report the protection of classical states from environmental bit-flip errors and demonstrate the suppression of these errors with increasing system size. We use a linear array of nine qubits, which is a natural precursor of the two-dimensional surface code QEC scheme, and track errors as they occur by repeatedly performing projective quantum non-demolition (QND) parity measurements. Relative to a single physical qubit, we reduce the failure rate in retrieving an input state by a factor of 2.7 for five qubits and a factor of 8.5 for nine qubits after eight cycles. Additionally, we tomographically verify preservation of the non-classical Greenberger-Horne-Zeilinger (GHZ) state. The successful suppression of environmentally-induced errors strongly motivates further research into the many exciting challenges associated with building a large-scale superconducting quantum computer.
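A classical toy model of the bit-flip protection being reported: a repetition code with repeated majority-vote correction, showing how the logical failure rate drops as more bits are added. The flip probability and cycle count are arbitrary, and this is not a simulation of the superconducting-qubit experiment.

```python
import numpy as np

# Classical toy model of bit-flip protection: a repetition code with repeated
# majority-vote correction. The 5% flip rate and 8 cycles are arbitrary.
def logical_failure_rate(n_bits, p_flip=0.05, n_cycles=8, n_trials=20_000, seed=0):
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(n_trials):
        bits = np.zeros(n_bits, dtype=int)             # encode logical 0
        for _ in range(n_cycles):
            bits ^= (rng.random(n_bits) < p_flip)      # independent bit flips
            bits[:] = int(bits.sum() * 2 > n_bits)     # majority vote, re-encode
        failures += bits[0] != 0                       # logical error survived
    return failures / n_trials

for n in (1, 3, 5, 9):
    print(f"{n} bits -> logical failure rate {logical_failure_rate(n):.4f}")
```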

J. Kelly; R. Barends; A. G. Fowler; A. Megrant; E. Jeffrey; T. C. White; D. Sank; J. Y. Mutus; B. Campbell; Yu Chen; Z. Chen; B. Chiaro; A. Dunsworth; I. -C. Hoi; C. Neill; P. J. J. O'Malley; C. Quintana; P. Roushan; A. Vainsencher; J. Wenner; A. N. Cleland; John M. Martinis

2014-11-26T23:59:59.000Z

479

Distributed Forcing of Forecast and Assimilation Error Systems BRIAN F. FARRELL  

E-Print Network [OSTI]

Distributed Forcing of Forecast and Assimilation Error Systems. BRIAN F. FARRELL, Division … forecast system governing forecast error growth and the tangent linear observer system governing … deterministic and stochastic forcings of the forecast and observer systems over a chosen time interval

Farrell, Brian F.

480

Evaluation of servo, geometric and dynamic error sources on five axis high-speed machine tool  

E-Print Network [OSTI]

Many sources of errors exist in the manufacturing process of complex shapes. Some approximations occur at each step from the design geometry to the machined part. The aim of the paper is to present a method to evaluate the effect of high speed and high dynamic load on volumetric errors at the tool center point. The interpolator output signals and the machine encoder signals are recorded and compared to evaluate the contouring errors resulting from each axis follow-up error. The machine encoder signals are also compared to the actual tool center point position as recorded with a non-contact measuring instrument called CapBall to evaluate the total geometric errors. The novelty of the work lies in the method that is proposed to decompose the geometric errors into two categories: the quasi-static geometric errors, independent of the speed of the trajectory, and the dynamic geometric errors, dependent on the programmed feed rate and resulting from the machine structure deflection during the acceleration of its axes...

Andolfatto, Loďc; Mayer, René

2011-01-01T23:59:59.000Z

Note: This page contains sample records for the topic "methodology sampling error" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


481

The Impact of Background Error on Incomplete Observations for 4D-Var Data Assimilation with  

E-Print Network [OSTI]

The Impact of Background Error on Incomplete Observations for 4D-Var Data Assimilation with the FSU … on the 4D-Var data assimilation, twin experiments were carried out with the dynamical core of the new … is also investigated. Keywords: Data assimilation, incomplete observations, background error.

Navon, Michael

482

Adjoint goal-based error norms for adaptive mesh ocean modelling  

E-Print Network [OSTI]

Adjoint goal-based error norms for adaptive mesh ocean modelling. P.W. Power, M.D. Piggott, F. … The use of dynamically-adaptive meshes has many potential advantages but needs to be guided by an error … where resolution should be changed. A barotropic wind driven gyre problem is used to demonstrate

Navon, Michael

483

Fast Illumination-invariant Background Subtraction using Two Views: Error Analysis, Sensor Placement and Applications  

E-Print Network [OSTI]

Fast Illumination-invariant Background Subtraction using Two Views: Error Analysis, Sensor Placement and Applications. Abstract: Background modeling and subtraction to detect new or moving objects in a scene is an important … a detailed analysis of such errors. Then, we propose a sensor configuration that eliminates false de…

Paragios, Nikos

484

A Posteriori Error Estimates with Post-Processing for Nonconforming Finite Elements  

E-Print Network [OSTI]

that it has the same asymptotic behavior as the energy norm of the real discretization error itself. We show … we propose an a posteriori error estimate in the energy norm which uses as an additive term the post-… in the global energy norm, we demonstrate that the concept of using a conforming approximation

Schieweck, Friedhelm

485

A Case for Soft Error Detection and Correction in Computational Chemistry  

SciTech Connect (OSTI)

High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
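The kind of detection-and-correction mechanism discussed can be illustrated with a classical row/column checksum on a matrix block, which locates and repairs a single corrupted entry; this is a generic sketch, not the scheme implemented in the paper.

```python
import numpy as np

# Generic row/column checksum sketch for detecting and correcting one corrupted
# matrix entry; not the scheme implemented in the paper.
rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
row_sums, col_sums = A.sum(axis=1).copy(), A.sum(axis=0).copy()  # stored checksums

A_faulty = A.copy()
A_faulty[2, 1] += 7.5          # simulated silent data corruption

row_err = A_faulty.sum(axis=1) - row_sums      # row with a discrepancy
col_err = A_faulty.sum(axis=0) - col_sums      # column with a discrepancy
i, j = int(np.argmax(np.abs(row_err))), int(np.argmax(np.abs(col_err)))

A_faulty[i, j] -= row_err[i]   # subtract the discrepancy to restore the entry
print("corrupted entry located at", (i, j),
      "| max residual after repair:", np.max(np.abs(A_faulty - A)))
```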

van Dam, Hubertus JJ; Vishnu, Abhinav; De Jong, Wibe A.

2013-09-10T23:59:59.000Z

486

Goal-oriented error estimation for reduced basis method, with application to certified sensitivity  

E-Print Network [OSTI]

of interest computed using the reduced model is tainted by a reduction error. We present a new, efficiently … partial differential equations (PDEs). These models require input data (e.g., the physical features of the considered system) … papers showed that using an adapted basis could lead to a great improvement of reduction error

Boyer, Edmond

487

Systematic Errors in Future Weak Lensing Surveys: Requirements and Prospects for Self-Calibration  

E-Print Network [OSTI]

We study the impact of systematic errors on planned weak lensing surveys and compute the requirements on their contributions so that they are not a dominant source of the cosmological parameter error budget. The generic types of error we consider are multiplicative and additive errors in measurements of shear, as well as photometric redshift errors. In general, more powerful surveys have stronger systematic requirements. For example, for a SNAP-type survey the multiplicative error in shear needs to be smaller than 1%(fsky/0.025)^{-1/2} of the mean shear in any given redshift bin, while the centroids of photometric redshift bins need to be known to better than 0.003(fsky/0.025)^{-1/2}. With about a factor of two degradation in cosmological parameter errors, future surveys can enter a self-calibration regime, where the mean systematic biases are self-consistently determined from the survey and only higher-order moments of the systematics contribute. Interestingly, once the power spectrum measurements are combined with the bispectrum, the self-calibration regime in the variation of the equation of state of dark energy w_a is attained with only a 20-30% error degradation.
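The quoted requirements scale with the survey sky fraction as (fsky/0.025)^(-1/2); the small calculator below simply evaluates that stated scaling for a few survey areas and adds nothing beyond the numbers in the abstract.

```python
# Scaling of the quoted systematics requirements with survey sky fraction,
# following the (fsky / 0.025)^(-1/2) dependence stated in the abstract.
def shear_calibration_requirement(fsky, baseline=0.01):
    return baseline * (fsky / 0.025) ** -0.5

def photo_z_centroid_requirement(fsky, baseline=0.003):
    return baseline * (fsky / 0.025) ** -0.5

for fsky in (0.0125, 0.025, 0.1, 0.5):
    print(f"fsky={fsky:6.4f}: multiplicative shear error < "
          f"{shear_calibration_requirement(fsky):.4f} of the mean shear, "
          f"photo-z bin centroids known to {photo_z_centroid_requirement(fsky):.5f}")
```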

Dragan Huterer; Masahiro Takada; Gary Bernstein; Bhuvnesh Jain

2005-06-02T23:59:59.000Z

488

Error Rate Performance of Coded Free-Space Optical Links over Gamma-Gamma Turbulence Channels  

E-Print Network [OSTI]

Error Rate Performance of Coded Free-Space Optical Links over Gamma-Gamma Turbulence Channels. Murat … be used over free-space optical (FSO) links to mitigate turbulence-induced fading. In this paper, we … channels, considering the recently introduced gamma-gamma turbulence model. We derive a pairwise error

Li, Tiffany Jing

489

Dimension Augmentation and Combinatorial Criteria for Efficient Error-resistant DNA Self-assembly  

E-Print Network [OSTI]

Dimension Augmentation and Combinatorial Criteria for Efficient Error-resistant DNA Self-assembly. Abstract: DNA self-assembly has emerged as a rich and promising primitive for nano-technology. Experimental … Error-correction mechanisms have been proposed for the tile model of self-assembly. These error-correction mechanisms

Goel, Ashish

490

Priority-Based Broadcasting of Sensitive Data in Error-Prone Wireless Networks  

E-Print Network [OSTI]

in its Binary-Coded Decimal (BCD) representation. In BCD, each decimal digit is represented as 4 bits … that benefits from network coding. We also consider the case of burst errors and discuss how we can make our … Keywords: symbol-level coding, broadcasting, reliability, burst error, random linear network coding, priority, wireless networks

Wu, Jie

491

Integrated Control-Path Design and Error Recovery in the Synthesis of Digital  

E-Print Network [OSTI]

Integrated Control-Path Design and Error Recovery in the Synthesis of Digital Microfluidic Lab-on-Chip. YANG ZHAO, TAO XU, and KRISHNENDU CHAKRABARTY, Duke University. Recent advances in digital microfluidics … that incorporates control paths and an error-recovery mechanism in the design of a digital microfluidic lab

Chakrabarty, Krishnendu

492

Error Modeling in the ACT-R Production System Christian Lebière

E-Print Network [OSTI]

Error Modeling in the ACT-R Production System. Christian Lebière, Department of Psychology, Carnegie … to extend the ACT-R production system to model human errors in the performance of a high-level cognitive … be successfully duplicated in production system models. Introduction: ACT-R (Anderson, 1993) is a model of human

Reder, Lynne

493

Threshold analysis with fault-tolerant operations for nonbinary quantum error correcting codes  

E-Print Network [OSTI]

Quantum error correcting codes have been introduced to encode the data bits in extra redundant bits in order to accommodate errors and correct them. However, due to the delicate nature of the quantum states or faulty gate operations, there is a...

Kanungo, Aparna

2005-11-01T23:59:59.000Z

494

The Human Bathtub: Safety and Risk Predictions Including the Dynamic Probability of Operator Errors  

SciTech Connect (OSTI)

Reactor safety and risk are dominated by the potential and major contribution for human error in the design, operation, control, management, regulation and maintenance of the plant, and hence to all accidents. Given the possibility of accidents and errors, now we need to determine the outcome (error) probability, or the chance of failure. Conventionally, reliability engineering is associated with the failure rate of components, or systems, or mechanisms, not of human beings in and interacting with a technological system. The probability of failure requires a prior knowledge of the total number of outcomes, which for any predictive purposes we do not know or have. Analysis of failure rates due to human error and the rate of learning allow a new determination of the dynamic human error rate in technological systems, consistent with and derived from the available world data. The basis for the analysis is the 'learning hypothesis' that humans learn from experience, and consequently the accumulated experience defines the failure rate. A new 'best' equation has been derived for the human error, outcome or failure rate, which allows for calculation and prediction of the probability of human error. We also provide comparisons to the empirical Weibull parameter fitting used in and by conventional reliability engineering and probabilistic safety analysis methods. These new analyses show that arbitrary Weibull fitting parameters and typical empirical hazard function techniques cannot be used to predict the dynamics of human errors and outcomes in the presence of learning. Comparisons of these new insights show agreement with human error data from the world's commercial airlines, the two shuttle failures, and from nuclear plant operator actions and transient control behavior observed in transients in both plants and simulators. The results demonstrate that the human error probability (HEP) is dynamic, and that it may be predicted using the learning hypothesis and the minimum failure rate, and can be utilized for probabilistic risk analysis purposes. (authors)
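One common way to write a failure rate that declines with accumulated experience toward a minimum value, consistent with the learning hypothesis described, is the exponential form below; the symbols are generic and this is not necessarily the authors' exact "best" equation.

```latex
% Generic exponential learning-curve form (illustrative; not necessarily the
% authors' exact equation). \varepsilon is accumulated experience, \lambda_0
% the initial failure rate, \lambda_m the minimum attainable rate, and k a
% learning-rate constant.
\lambda(\varepsilon) = \lambda_m + (\lambda_0 - \lambda_m)\, e^{-k\varepsilon}
```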

Duffey, Romney B. [Atomic Energy of Canada, Ltd., 2251 Speakman Drive, Mississauga, ON, L5K 1B2 (Canada); Saull, John W. [International Federation of Airworthiness, 14 Railway Approach, East Grinstead, West Sussex, RH19 1BP (United Kingdom)]

2006-07-01T23:59:59.000Z

495

Sample holder with optical features  

DOE Patents [OSTI]

A sample holder for holding a sample to be observed for research purposes, particularly in a transmission electron microscope (TEM), generally includes an external alignment part for directing a light beam in a predetermined beam direction, a sample holder body in optical communication with the external alignment part and a sample support member disposed at a distal end of the sample holder body opposite the external alignment part for holding a sample to be analyzed. The sample holder body defines an internal conduit for the light beam and the sample support member includes a light beam positioner for directing the light beam between the sample holder body and the sample held by the sample support member.

Milas, Mirko; Zhu, Yimei; Rameau, Jonathan David

2013-07-30T23:59:59.000Z

496

A total risk assessment methodology for security assessment.  

SciTech Connect (OSTI)

Sandia National Laboratories performed a two-year Laboratory Directed Research and Development project to develop a new collaborative risk assessment method to enable decision makers to fully consider the interrelationships between threat, vulnerability, and consequence. A five-step Total Risk Assessment Methodology was developed to enable interdisciplinary collaborative risk assessment by experts from these disciplines. The objective of this process is promote effective risk management by enabling analysts to identify scenarios that are simultaneously achievable by an adversary, desirable to the adversary, and of concern to the system owner or to society. The basic steps are risk identification, collaborative scenario refinement and evaluation, scenario cohort identification and risk ranking, threat chain mitigation analysis, and residual risk assessment. The method is highly iterative, especially with regard to scenario refinement and evaluation. The Total Risk Assessment Methodology includes objective consideration of relative attack likelihood instead of subjective expert judgment. The 'probability of attack' is not computed, but the relative likelihood for each scenario is assessed through identifying and analyzing scenario cohort groups, which are groups of scenarios with comparable qualities to the scenario being analyzed at both this and other targets. Scenarios for the target under consideration and other targets are placed into cohort groups under an established ranking process that reflects the following three factors: known targeting, achievable consequences, and the resources required for an adversary to have a high likelihood of success. The development of these target cohort groups implements, mathematically, the idea that adversaries are actively choosing among possible attack scenarios and avoiding scenarios that would be significantly suboptimal to their objectives. An adversary who can choose among only a few comparable targets and scenarios (a small comparable target cohort group) is more likely to choose to attack the specific target under analysis because he perceives it to be a relatively unique attack opportunity. The opposite is also true. Thus, total risk is related to the number of targets that exist in each scenario cohort group. This paper describes the Total Risk Assessment Methodology and illustrates it through an example.

Aguilar, Richard; Pless, Daniel J.; Kaplan, Paul Garry; Silva, Consuelo Juanita; Rhea, Ronald Edward; Wyss, Gregory Dane; Conrad, Stephen Hamilton

2009-06-01T23:59:59.000Z

497

Advanced Fuel Cycle Economic Tools, Algorithms, and Methodologies  

SciTech Connect (OSTI)

The Advanced Fuel Cycle Initiative (AFCI) Systems Analysis supports engineering economic analyses and trade-studies, and requires a requisite reference cost basis to support adequate analysis rigor. In this regard, the AFCI program has created a reference set of economic documentation. The documentation consists of the “Advanced Fuel Cycle (AFC) Cost Basis” report (Shropshire, et al. 2007), “AFCI Economic Analysis” report, and the “AFCI Economic Tools, Algorithms, and Methodologies Report.” Together, these documents provide the reference cost basis, cost modeling basis, and methodologies needed to support AFCI economic analysis. The application of the reference cost data in the cost and econometric systems analysis models will be supported by this report. These methodologies include: the energy/environment/economic evaluation of nuclear technology penetration in the energy market—domestic and internationally—and impacts on AFCI facility deployment, uranium resource modeling to inform the front-end fuel cycle costs, facility first-of-a-kind to nth-of-a-kind learning with application to deployment of AFCI facilities, cost tradeoffs to meet nuclear non-proliferation requirements, and international nuclear facility supply/demand analysis. The economic analysis will be performed using two cost models. VISION.ECON will be used to evaluate and compare costs under dynamic conditions, consistent with the cases and analysis performed by the AFCI Systems Analysis team. Generation IV Excel Calculations of Nuclear Systems (G4-ECONS) will provide static (snapshot-in-time) cost analysis and will provide a check on the dynamic results. In future analysis, additional AFCI measures may be developed to show the value of AFCI in closing the fuel cycle. Comparisons can show AFCI in terms of reduced global proliferation (e.g., reduction in enrichment), greater sustainability through preservation of a natural resource (e.g., reduction in uranium ore depletion), value from weaning the U.S. from energy imports (e.g., measures of energy self-sufficiency), and minimization of future high level waste (HLW) repositories world-wide.

David E. Shropshire

2009-05-01T23:59:59.000Z

498

Development of risk assessment methodology for municipal sludge incineration  

SciTech Connect (OSTI)

This is one of a series of reports that present methodologies for assessing the potential risks to humans or other organisms from the disposal or reuse of municipal sludge. The sludge management practices addressed by the series include land application practices, distribution and marketing programs, landfilling, surface disposal, incineration and ocean disposal. In particular, these reports provide methods for evaluating potential health and environmental risks from toxic chemicals that may be present in sludge. The document addresses risks from chemicals associated with incineration of municipal sludge. These proposed risk assessment procedures are designed as tools to assist in the development of regulations for sludge management practices. The procedures are structured to allow calculation of technical criteria for sludge disposal/reuse options based on the potential for adverse health or environmental impacts. The criteria may address management practices (such as site design or process control specifications), limits on sludge disposal rates or limits on toxic chemical concentrations in the sludge.

Not Available

1990-10-01T23:59:59.000Z

499

Aggregate Building Simulator (ABS) Methodology Development, Application, and User Manual  

SciTech Connect (OSTI)

As the relationship between the national building stock and various global energy issues becomes a greater concern, it has been deemed necessary to develop a system of predicting the energy consumption of large groups of buildings. Ideally this system is to take advantage of the most advanced energy simulation software available, be able to execute runs quickly, and provide concise and useful results at a level of detail that meets the users' needs without inundating them with data. The resulting methodology that was developed allows the user to quickly develop and execute energy simulations of many buildings simultaneously, taking advantage of parallel processing to greatly reduce total simulation times. The results of these simulations can then be rapidly condensed and presented in a useful and intuitive manner.

Dirks, James A.; Gorrissen, Willy J.

2011-11-30T23:59:59.000Z

500

Sample Environment Plans and Progress  

E-Print Network [OSTI]

Sample Environment Plans and Progress at the SNS & HFIR. SNS-HFIR User Group Meeting, American Conference on Neutron Scattering, Ottawa, Canada, June 26-30, 2010. Lou Santodonato, Sample Environment Group. … our sample environment capabilities … Feedback: SHUG meetings, user surveys, Sample Environment Steering …

Pennycook, Steve