National Library of Energy BETA

Sample records for methodology sampling error

  1. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    SciTech Connect (OSTI)

    R. L. Hoskinson; R. C. Rope; L. G. Blackwood; R. D. Lee; R. K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted
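
    For illustration only: the spatial interpolation step described above is often done with inverse-distance weighting (the abstract does not specify the statistical method, so the scheme and all names below are illustrative assumptions, not the DSS4Ag implementation). A minimal sketch:

```python
import numpy as np

def idw_interpolate(sample_xy, sample_values, query_xy, power=2.0):
    """Inverse-distance-weighted estimate of soil fertility at query points.

    sample_xy: (n, 2) coordinates of lab-analyzed soil samples (hypothetical)
    sample_values: (n,) measured fertility at those points
    query_xy: (m, 2) grid points where fertilizer recipes will be decided
    """
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)   # avoid division by zero at sample locations
    w = 1.0 / d**power        # nearer samples dominate the estimate
    return (w * sample_values).sum(axis=1) / w.sum(axis=1)

# Three soil samples (ppm phosphorus, hypothetical); estimate one grid cell.
xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
vals = np.array([12.0, 18.0, 15.0])
print(idw_interpolate(xy, vals, np.array([[50.0, 50.0]])))  # -> [15.]
```

    Note how a lab error in any one entry of sample_values propagates directly into the interpolated surface, and from there into the recipe; that is the error pathway the study quantifies.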

  2. Analysis of Cloud Variability and Sampling Errors in Surface and Satellite Measurements

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Analysis of Cloud Variability and Sampling Errors in Surface and Satellite Measurements Z. Li, M. C. Cribb, and F.-L. Chang Earth System Science Interdisciplinary Center University of Maryland College Park, Maryland A. P. Trishchenko and Y. Luo Canada Centre for Remote Sensing Ottawa, Ontario, Canada Introduction Radiation measurements have been widely employed for evaluating cloud parameterization schemes and model simulation results. As the most comprehensive program aiming to improve cloud

  3. Real-time quadrupole mass spectrometer analysis of gas in borehole fluid samples acquired using the U-Tube sampling methodology

    SciTech Connect (OSTI)

    Freifeld, Barry M.; Trautz, Robert C.

    2006-01-11

    Sampling of fluids in deep boreholes is challenging because of the necessity to minimize external contamination and maintain sample integrity during recovery. The U-tube sampling methodology was developed to collect large-volume, multiphase samples at in situ pressures. As a permanent or semi-permanent installation, the U-tube can be used for rapidly acquiring multiple samples, or it may be installed for long-term monitoring applications. The U-tube was first deployed in Liberty County, TX to monitor crosswell CO2 injection as part of the Frio CO2 sequestration experiment. Analysis of gases (dissolved or separate phase) was performed in the field using a quadrupole mass spectrometer, which served as the basis for determining the arrival of the CO2 plume. The presence of oxygen and argon in elevated concentrations, along with reduced methane concentration, indicates sample alteration caused by the introduction of surface fluids during borehole completion. Despite producing the well to eliminate non-native fluids, measurements demonstrate that contamination persisted until the immiscible CO2 injection swept formation fluid into the observation wellbore.

  4. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    SciTech Connect (OSTI)

    Edjabou, Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn; Petersen, Claus; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2015-02-15

    Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in the literature. This limits both the comparability and the applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single- and multi-family house areas). In total, 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level (tiered) approach, facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at "Level III", i.e. detailed, while the two others were sorted only at "Level I"). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single

  5. Errors of Nonobservation

    U.S. Energy Information Administration (EIA) Indexed Site

    Errors of Nonobservation Finally, several potential sources of nonsampling error and bias result from errors of nonobservation. The 1994 MECS represents, in terms of sampling...

  6. DEVELOPMENT OF METHODOLOGY AND FIELD DEPLOYABLE SAMPLING TOOLS FOR SPENT NUCLEAR FUEL INTERROGATION IN LIQUID STORAGE

    SciTech Connect (OSTI)

    Berry, T.; Milliken, C.; Martinez-Rodriguez, M.; Hathcock, D.; Heitkamp, M.

    2012-06-04

    This project developed methodology and field-deployable tools (test kits) to analyze the chemical and microbiological condition of the fuel storage medium and to determine the oxide thickness on the spent fuel basin materials. The overall objective of this project was to determine the amount of time fuel has spent in a storage basin, in order to establish whether the operation of the reactor and storage basin is consistent with safeguard declarations or expectations. This project developed and validated forensic tools that can be used to predict the age and condition of spent nuclear fuels stored in liquid basins based on key physical, chemical, and microbiological basin characteristics. Key parameters were identified based on a literature review, the parameters were used to design test cells for corrosion analyses, tools were purchased to analyze the key parameters, and these were used to characterize an active spent fuel basin, the Savannah River Site (SRS) L-Area basin. The key parameters identified in the literature review included chloride concentration, conductivity, and total organic carbon level. Focus was also placed on aluminum-based cladding because of its application to weapons production. The literature review was helpful in identifying important parameters, but relationships between these parameters and corrosion rates were not available. Bench-scale test systems were designed, operated, harvested, and analyzed to determine relationships between corrosion and water conditions, chemistry, and microbiological conditions. The data from the bench-scale system indicated that corrosion rates were dependent on total organic carbon levels and chloride concentrations. The highest corrosion rates were observed in test cells amended with sediment, a large microbial inoculum, and an organic carbon source. A complete characterization test kit was field tested to characterize the SRS L-Area spent fuel basin. The sampling kit consisted of a TOC analyzer, a YSI

  7. Tuning the narrow-band beam position monitor sampling clock to remove the aliasing errors in APS storage ring orbit measurements.

    SciTech Connect (OSTI)

    Sun, X.; Singh, O.

    2007-01-01

    The Advanced Photon Source storage ring employs a real-time orbit correction system to reduce orbit motion up to 50 Hz. This system uses up to 142 narrow-band rf beam position monitors (Nbbpms) in a correction algorithm by sampling at a frequency of 1.53 kHz. Several Nbbpms exhibit aliasing errors in orbit measurements, rendering these Nbbpms unusable in real-time orbit feedback. The aliasing errors are caused by beating effects of the internal sampling clocks with various other processing clocks residing within the BPM electronics. A programmable external clock has been employed to move the aliasing errors out of the active frequency band of the real-time feedback system (RTFB) and rms beam motion calculation. This paper discusses the process of tuning and provides test results.
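
    For reference, aliasing of a spurious beat component follows the standard relation below; the worked numbers are illustrative assumptions, not values from the paper:

```latex
\[
  f_{\mathrm{alias}} = \lvert f - n f_s \rvert ,
  \qquad n = \operatorname{round}\!\left(\frac{f}{f_s}\right).
\]
% Example (illustrative): a beat product at f = 1.58 kHz, sampled at
% f_s = 1.53 kHz, aliases to |1.58 - 1.53| kHz = 50 Hz -- i.e., right at
% the edge of the band the real-time orbit feedback is trying to correct.
% Retuning the sampling clock shifts f_alias out of the active band.
```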

  8. Error abstractions

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Error and fault abstractions Mattan Erez UT Austin *Who should care about faults and errors? *Ideally, only system cares about masked faults? - Assuming application bugs are not...

  9. Error detection method

    DOE Patents [OSTI]

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
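
    A minimal sketch of the compare-two-runs idea (not the patented method itself; the kernel, names, and sizes below are illustrative assumptions):

```python
import hashlib
import numpy as np

def stress_kernel(seed: int, iterations: int = 200) -> bytes:
    """Deterministic, compute-heavy kernel: repeated matrix multiplies.

    Heavy arithmetic keeps the processor hot; determinism means any
    mismatch between two runs with the same seed points to a hardware error.
    """
    rng = np.random.default_rng(seed)
    a = rng.random((256, 256))
    b = rng.random((256, 256))
    for _ in range(iterations):
        a = (a @ b) % 1.0  # keep values bounded so the digest stays meaningful
    return hashlib.sha256(a.tobytes()).digest()

reference = stress_kernel(seed=42)  # or a known-good digest from a cold run
trial = stress_kernel(seed=42)
print("hardware error detected" if trial != reference else "outputs match")
```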

  10. Revenue Requirements Modeling System (RRMS) documentation. Volume I. Methodology description and user's guide. Appendix A: model abstract; Appendix B: technical appendix; Appendix C: sample input and output. [Compustat

    SciTech Connect (OSTI)

    Not Available

    1986-03-01

    The Revenue Requirements Modeling System (RRMS) is a utility specific financial modeling system used by the Energy Information Administration (EIA) to evaluate the impact on electric utilities of changes in the regulatory, economic, and tax environments. Included in the RRMS is a power plant life-cycle revenue requirements model designed to assess the comparative economic advantage of alternative generating plant. This report is Volume I of a 2-volume set and provides a methodology description and user's guide, a model abstract and technical appendix, and sample input and output for the models. Volume II provides an operator's manual and a program maintenance guide.

  11. EIA - Sorry! Unexpected Error

    U.S. Energy Information Administration (EIA) Indexed Site

    Cold Fusion Error Unexpected Error Sorry An error was encountered. This error could be due to scheduled maintenance. Information about the error has been routed to the appropriate...

  12. EIA - Sorry! Unexpected Error

    Gasoline and Diesel Fuel Update (EIA)

    Cold Fusion Error Unexpected Error Sorry An error was encountered. This error could be due to scheduled maintenance. Information about the error has been routed to the appropriate ...

  13. Error Page

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    We are sorry to report that an error has occurred. Internal identifier for doc type not found. Return to RevCom | Return to Web Portal. Need help? Email Technical Support. This site managed by the Office of Management / US Department of Energy. Directives | Regulations | Technical Standards | Reference Library | DOE Forms | About Us | Privacy & Security Notice

  14. GUM Analysis for SIMS Isotopic Ratios in BEP0 Graphite Qualification Samples, Round 2

    SciTech Connect (OSTI)

    Gerlach, David C.; Heasler, Patrick G.; Reid, Bruce D.

    2009-01-01

    This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up and currently consist of various ratios of U, Pu, and boron impurities in the graphite samples. The GUM calculation is a propagation-of-error methodology that assigns uncertainties (in the form of standard errors and confidence bounds) to the final estimates.
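
    The first-order GUM propagation for an isotopic ratio R = A/B takes the standard form below (generic formula; the report's specific inputs and covariances are not reproduced here):

```latex
\[
  R = \frac{A}{B}, \qquad
  \left(\frac{u(R)}{R}\right)^{2}
  = \left(\frac{u(A)}{A}\right)^{2}
  + \left(\frac{u(B)}{B}\right)^{2}
  - 2\,\frac{u(A,B)}{A\,B},
\]
% where u(A) and u(B) are the standard uncertainties of the two isotope
% measurements and u(A,B) is their covariance (zero if independent).
```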

  15. Trouble Shooting and Error Messages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Check the error code of your application. error obtaining user credentials system Resubmit. Contact consultants for repeated problems. nemgnierrorhandler(): a transaction error ...

  16. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    SciTech Connect (OSTI)

    Shine, E. P.; Poirier, M. R.

    2013-10-29

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. Many of these designs were implemented before the sampling ideas of Pierre Gy were as widespread as they are today. Nonetheless, the engineers and

  17. Integrated fiducial sample mount and software for correlated microscopy

    SciTech Connect (OSTI)

    Timothy R. McJunkin; Jill R. Scott; Tammy L. Trowbridge; Karen E. Wright

    2014-02-01

    A novel sample mount with integrated fiducials, and accompanying software for assisting operators in easily and efficiently locating points of interest established in previous analytical sessions, is described. The sample holder and software were evaluated with experiments to demonstrate the utility and ease of finding the same points of interest in two different microscopy instruments. Also, numerical analysis of the expected errors in determining the same position, unbiased by a human operator, was performed. Based on the results, issues related to achieving reproducibility and best practices for using the sample mount and software were identified. Overall, the sample mount methodology allows data to be efficiently and easily collected on different instruments for the same sample location.

  18. runtime error message: "readControlMsg: System returned error...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    runtime error message: "readControlMsg: System returned error Connection timed out on TCP socket fd"...

  19. Trouble Shooting and Error Messages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Check the error code of your application. error obtaining user credentials system Resubmit. Contact consultants for repeated problems. NERSC and Cray are working on this issue. ...

  20. Trouble Shooting and Error Messages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    not be a problem. Check the error code of your application. error obtaining user credentials system Resubmit. Contact consultants for repeated problems. Last edited: 2015-01-16 ...

  1. Modular error embedding

    DOE Patents [OSTI]

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
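
    For contrast, here is the plain low-order-bit replacement baseline that the abstract says the modular method improves upon (a hedged sketch; the patented embedding itself is not reproduced here, and the function names are hypothetical):

```python
import numpy as np

def lsb_embed(host: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Plain low-order-bit replacement (the baseline the patent improves on).

    host: uint8 samples (e.g., image pixels); bits: 0/1 array, one per sample.
    Replacing the LSB introduces an error of up to 1 count per sample; the
    patented modular method reduces this error and doubles the capacity.
    """
    return (host & 0xFE) | bits.astype(np.uint8)

def lsb_extract(stego: np.ndarray) -> np.ndarray:
    return stego & 0x01

pixels = np.array([100, 101, 102, 103], dtype=np.uint8)  # hypothetical host
payload = np.array([1, 0, 1, 1])
stego = lsb_embed(pixels, payload)
assert (lsb_extract(stego) == payload).all()
```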

  2. DOE Challenge Home Label Methodology

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    DOE Challenge Home Label Methodology, October 2012. Contents: Background; Methodology; Comfort/Quiet

  3. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect (OSTI)

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  4. Error 404 - Document not found

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ERROR 404 - URL Not Found. We are sorry but the URL that you have requested cannot be found or it is linked to a file that no longer exists. Please check the spelling or...

  5. Trouble Shooting and Error Messages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Trouble Shooting and Error Messages. Error Messages (Message or Symptom / Fault / Recommendation): job hit wallclock time limit / user or system / Submit job for longer time or start job from last checkpoint and resubmit; if your job hung and produced no output, contact consultants. received node failed or halted event for nid xxxx / system / Resubmit the job. error with width parameters to aprun / user / Make sure #PBS -l mppwidth value matches aprun -n value. new values for

  6. Register file soft error recovery

    DOE Patents [OSTI]

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  7. 2008 ASC Methodology Errata

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    BONNEVILLE POWER ADMINISTRATION'S ERRATA CORRECTIONS TO THE 2008 AVERAGE SYSTEM COST METHODOLOGY September 12, 2008 I. DESCRIPTION OF ERRATA CORRECTIONS A. Attachment A, ASC...

  8. Draft Tiered Rate Methodology

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    For Regional Dialogue Discussion Purposes Only Pre-Decisional Draft Tiered Rates Methodology March 7, 2008 Pre-decisional, Deliberative, For Discussion Purposes Only March 7,...

  9. Confidence limits and their errors

    SciTech Connect (OSTI)

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.

  10. Error studies for SNS Linac. Part 1: Transverse errors

    SciTech Connect (OSTI)

    Crandall, K.R.

    1998-12-31

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL), and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL, and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  11. Error 404 - Document not found

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ERROR 404 - URL Not Found. We are sorry but the URL that you have requested cannot be found or it is linked to a file that no longer exists. Please check the spelling or send e-mail to WWW Administrator

  12. Pressure Change Measurement Leak Testing Errors

    SciTech Connect (OSTI)

    Pryor, Jeff M; Walker, William C

    2014-01-01

    A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, easy-to-apply evaluation method. While this method may be fairly quick to conduct and requires simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air and other monatomic or diatomic gases; however, the same principles can be applied to polyatomic gases or liquid flow rates with altered formulas specific to those types of tests, using the same methodology.
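
    A common error of the kind discussed above is attributing temperature-driven pressure drift to leakage. The sketch below shows one standard ideal-gas compensation; the symbols and numbers are illustrative assumptions, not values from the paper:

```python
# Temperature-compensated pressure-change leak rate (ideal gas assumption).
# A raw dP/dt reading conflates leakage with temperature drift; normalizing
# each pressure by its absolute temperature separates the two effects.

V = 2.5                       # test volume, m^3 (hypothetical)
P1, T1 = 500_000.0, 295.15    # start of hold: pressure (Pa), temperature (K)
P2, T2 = 498_500.0, 294.65    # end of hold
dt = 3600.0                   # hold time, s
T_ref = 293.15                # reference temperature, K

# Leak rate in Pa*m^3/s referred to T_ref (proportional to mass flow):
Q = (V / dt) * (P1 / T1 - P2 / T2) * T_ref
print(f"compensated leak rate: {Q:.3f} Pa*m^3/s")
# The uncompensated estimate V*(P1 - P2)/dt would wrongly charge the
# temperature-driven part of the pressure drop to leakage.
```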

  13. Trouble Shooting and Error Messages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Trouble Shooting and Error Messages. Error Messages (Message or Symptom / Fault / Recommendation): job hit wallclock time limit / user or system / Submit job for longer time or start job from last checkpoint and resubmit; if your job hung and produced no output, contact consultants. received node failed or halted event for nid xxxx / system / One of the compute nodes assigned to the job failed; resubmit the job. PtlNIInit failed : PTL_NOT_REGISTERED / user / The executable is from

  14. error | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    error Sorry, there is no www.netl.doe.gov web page that matches your request. It may be possible that you typed the address incorrectly. Connect to National Energy Technology...

  15. Modeling of Diesel Exhaust Systems: A methodology to better simulate soot reactivity

    Broader source: Energy.gov [DOE]

    Discusses the development of a methodology for creating accurate soot models for soot samples from various origins with minimal characterization.

  16. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
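
    Polymerase error rates in such studies are conventionally normalized per base per template doubling, as in the generic formula below (the paper's exact analysis pipeline is not quoted here):

```latex
\[
  \mathrm{ER} = \frac{N_{\mathrm{mut}}}{N_{\mathrm{bases}} \times d},
  \qquad d = \log_{2}(\text{fold amplification}),
\]
% where N_mut is the number of mutations observed by sequencing,
% N_bases is the total number of bases interrogated, and d is the
% number of template doublings during PCR.
```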

  17. Regional Shelter Analysis Methodology

    SciTech Connect (OSTI)

    Dillon, Michael B.; Dennison, Deborah; Kane, Jave; Walker, Hoyt; Miller, Paul

    2015-08-01

    The fallout from a nuclear explosion has the potential to injure or kill 100,000 or more people through exposure to external gamma (fallout) radiation. Existing buildings can reduce radiation exposure by placing material between fallout particles and exposed people. Lawrence Livermore National Laboratory was tasked with developing an operationally feasible methodology that could improve fallout casualty estimates. The methodology, called a Regional Shelter Analysis, combines the fallout protection that existing buildings provide civilian populations with the distribution of people in various locations. The Regional Shelter Analysis method allows the consideration of (a) multiple building types and locations within buildings, (b) country specific estimates, (c) population posture (e.g., unwarned vs. minimally warned), and (d) the time of day (e.g., night vs. day). The protection estimates can be combined with fallout predictions (or measurements) to (a) provide a more accurate assessment of exposure and injury and (b) evaluate the effectiveness of various casualty mitigation strategies. This report describes the Regional Shelter Analysis methodology, highlights key operational aspects (including demonstrating that the methodology is compatible with current tools), illustrates how to implement the methodology, and provides suggestions for future work.

  18. Methodology of Internal Assessment of Uncertainty and Extension to Neutron Kinetics/Thermal-Hydraulics Coupled Codes

    SciTech Connect (OSTI)

    Petruzzi, A.; D'Auria, F.; Giannotti, W.; Ivanov, K.

    2005-02-15

    The best-estimate calculation results from complex system codes are affected by approximations that are unpredictable without the use of computational tools that account for the various sources of uncertainty. The code with (the capability of) internal assessment of uncertainty (CIAU) has been previously proposed by the University of Pisa to realize the integration between a qualified system code and an uncertainty methodology and to supply proper uncertainty bands each time a nuclear power plant (NPP) transient scenario is calculated. The derivation of the methodology and the results achieved by the use of CIAU are discussed to demonstrate the main features and capabilities of the method. In a joint effort between the University of Pisa and The Pennsylvania State University, the CIAU method has been recently extended to evaluate the uncertainty of coupled three-dimensional neutronics/thermal-hydraulics calculations. The result is CIAU-TN. The feasibility of the approach has been demonstrated, and sample results related to the turbine trip transient in the Peach Bottom NPP are shown. Notwithstanding that the full implementation and use of the procedure requires a database of errors not available at the moment, the results give an idea of the errors expected from the present computational tools.

  19. DOE Systems Engineering Methodology

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Systems Engineering Methodology (SEM), Computer System Retirement Guidelines, Version 3, September 2002. U.S. Department of Energy, Office of the Chief Information Officer. Table of Contents: Purpose; Initiation and Distribution

  20. Selecting the best defect reduction methodology

    SciTech Connect (OSTI)

    Hinckley, C.M.; Barkan, P.

    1994-04-01

    Defect rates less than 10 parts per million, unimaginable a few years ago, have become the standard of world-class quality. To reduce defects, companies are aggressively implementing various quality methodologies, such as Statistical Quality Control, Motorola's Six Sigma, or Shingo's poka-yoke. Although each quality methodology reduces defects, selection has been based on an intuitive sense without understanding their relative effectiveness in each application. A missing link in developing superior defect reduction strategies has been the lack of a general defect model that clarifies the unique focus of each method. Toward the goal of efficient defect reduction, we have developed an event tree which addresses a broad spectrum of quality factors and two defect sources, namely, error and variation. The Quality Control Tree (QCT) predictions are more consistent with production experience than those obtained by the other methodologies considered independently. The QCT demonstrates that world-class defect rates cannot be achieved by focusing on a single defect source or quality control factor, a common weakness of many methodologies. We have shown that the most efficient defect reduction strategy depends on the relative strengths and weaknesses of each organization. The QCT can help each organization identify the most promising defect reduction opportunities for achieving its goals.

  1. New Methodology for Natural Gas Production Estimates

    Reports and Publications (EIA)

    2010-01-01

    A new methodology is implemented with the monthly natural gas production estimates from the EIA-914 survey this month. The estimates, to be released April 29, 2010, include revisions for all of 2009. The fundamental changes in the new process include the timeliness of the historical data used for estimation and the frequency of sample updates, both of which are improved.

  2. Analysis Methodologies | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Systems Analysis » Analysis Methodologies. A spectrum of analysis methodologies is used in combination to provide a sound understanding of hydrogen and fuel cell systems and developing markets, as follows: Resource Analysis; Technological Feasibility and Cost Analysis; Environmental Analysis; Delivery Analysis; Infrastructure Development and Financial Analysis; Energy Market Analysis. In general, each methodology builds on previous efforts to quantify the benefits, drawbacks,

  3. Methodologies for Reservoir Characterization Using Fluid Inclusion...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Methodologies for Reservoir Characterization Using Fluid Inclusion Gas Chemistry ...

  4. Catastrophic photometric redshift errors: Weak-lensing survey requirements

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Bernstein, Gary; Huterer, Dragan

    2010-01-11

    We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number Nspec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below statistical errors of weak lensing tomography. While the straightforward estimate of Nspec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in Nspec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the z_s - z_p distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. The cross-correlation method is therefore unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.

  5. Use of Forward Sensitivity Analysis Method to Improve Code Scaling, Applicability, and Uncertainty (CSAU) Methodology

    SciTech Connect (OSTI)

    Haihua Zhao; Vincent A. Mousseau; Nam T. Dinh

    2010-10-01

    Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Based on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and power uprates of existing LWRs. In spite of these successes, several aspects of CSAU have been criticized as needing improvement: (1) subjective judgment in the PIRT process; (2) high cost, due to heavy reliance on a large experimental database, many expert man-years of work, and very high computational overhead; (3) mixing numerical errors with other uncertainties; (4) grid dependence and use of the same numerical grids for both scaled experiments and real plant applications; (5) user effects. Although a large amount of effort has gone into improving the CSAU methodology, the above issues still exist. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the differential of the physical solution with respect to any constant parameter). When the parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: (1) quantifying numerical errors: new codes that are fully implicit and of higher-order accuracy can run much faster, with numerical errors quantified by FSA; (2) quantitative PIRT (Q-PIRT) to reduce subjective judgment and improve efficiency: treat numerical errors as special sensitivities alongside other physical uncertainties, and consider only parameters whose uncertainties have large effects on design criteria; (3) greatly reducing computational costs for uncertainty quantification by (a) choosing optimized time steps and spatial sizes; (b) using gradient information
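
    The FSA idea can be seen on a toy model (illustrative only, not the safety-code implementation): differentiate the governing equation with respect to the parameter and integrate the sensitivity alongside the state.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model dy/dt = -k*y. Differentiating the ODE w.r.t. k gives an
# equation for the sensitivity s = dy/dk:  ds/dt = -y - k*s, s(0) = 0.
k = 0.5

def rhs(t, ys):
    y, s = ys
    return [-k * y, -y - k * s]

sol = solve_ivp(rhs, (0.0, 4.0), [1.0, 0.0], t_eval=[4.0])
y_end, s_end = sol.y[:, -1]

# Analytic check: y = exp(-k t), dy/dk = -t * exp(-k t)
print(y_end, np.exp(-k * 4.0))        # both ~0.1353
print(s_end, -4.0 * np.exp(-k * 4.0)) # both ~-0.5413
```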

  6. Emergency exercise methodology

    SciTech Connect (OSTI)

    Klimczak, C.A.

    1993-03-01

    Competence for proper response to hazardous materials emergencies is enhanced and effectively measured by exercises which test plans and procedures and validate training. Emergency exercises are most effective when realistic criteria are used and a sequence of events is followed. The scenario is developed from pre-determined exercise objectives based on hazard analyses, actual plans and procedures. The scenario should address findings from previous exercises and actual emergencies. Exercise rules establish the extent of play and address contingencies during the exercise. All exercise personnel are assigned roles as players, controllers, or evaluators. These participants should receive specialized training in advance. A methodology for writing an emergency exercise plan will be detailed.

  7. Emergency exercise methodology

    SciTech Connect (OSTI)

    Klimczak, C.A.

    1993-01-01

    Competence for proper response to hazardous materials emergencies is enhanced and effectively measured by exercises which test plans and procedures and validate training. Emergency exercises are most effective when realistic criteria are used and a sequence of events is followed. The scenario is developed from pre-determined exercise objectives based on hazard analyses, actual plans and procedures. The scenario should address findings from previous exercises and actual emergencies. Exercise rules establish the extent of play and address contingencies during the exercise. All exercise personnel are assigned roles as players, controllers, or evaluators. These participants should receive specialized training in advance. A methodology for writing an emergency exercise plan will be detailed.

  8. Error and uncertainty in Raman thermal conductivity measurements

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  9. The role of variation, error, and complexity in manufacturing defects

    SciTech Connect (OSTI)

    Hinckley, C.M.; Barkan, P.

    1994-03-01

    Variation in component properties and dimensions is a widely recognized factor in product defects which can be quantified and controlled by Statistical Process Control methodologies. Our studies have shown, however, that traditional statistical methods are ineffective in characterizing and controlling defects caused by error. The distinction between error and variation becomes increasingly important as the target defect rates approach extremely low values. Motorola data substantiates our thesis that defect rates in the range of several parts per million can only be achieved when traditional methods for controlling variation are combined with methods that specifically focus on eliminating defects due to error. Complexity in the product design, manufacturing processes, or assembly increases the likelihood of defects due to both variation and error. Thus complexity is also a root cause of defects. Until now, the absence of a sound correlation between defects and complexity has obscured the importance of this relationship. We have shown that assembly complexity can be quantified using Design for Assembly (DFA) analysis. High levels of correlation have been found between our complexity measures and defect data covering tens of millions of assembly operations in two widely different industries. The availability of an easily determined measure of complexity, combined with these correlations, permits rapid estimation of the relative defect rates for alternate design concepts. This should prove to be a powerful tool since it can guide design improvement at an early stage when concepts are most readily modified.

  10. Error and uncertainty in Raman thermal conductivity measurements

    SciTech Connect (OSTI)

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  11. Field errors in hybrid insertion devices

    SciTech Connect (OSTI)

    Schlueter, R.D.

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  12. Protections: Sampling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Protection #3: Sampling for known and unexpected contaminants. August 1, 2013. Monitoring stormwater in Los Alamos Canyon. The Environmental Sampling Board, a key piece of the Strategy, ensures that LANL collects relevant and appropriate data to answer questions about the protection of human and environmental health, and to satisfy regulatory requirements. LANL must demonstrate the data are technically justified

  13. Trends in Commercial Buildings--Overview

    U.S. Energy Information Administration (EIA) Indexed Site

    Commercial Buildings Energy Consumption Survey, Survey Methodology: Sampling Error, Standard Errors, and Relative Standard Errors. The Commercial Buildings Energy...

  14. Clover: Compiler directed lightweight soft error resilience

    SciTech Connect (OSTI)

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUEs (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  15. Clover: Compiler directed lightweight soft error resilience

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUEs (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  16. Approximate error conjugation gradient minimization methods

    DOE Patents [OSTI]

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
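
    A hedged sketch of the subset-of-rays idea in a linear least-squares setting (plain gradient descent rather than full conjugate gradient, and all names and sizes are illustrative assumptions, not the patented implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((10_000, 50))   # one row per "ray" (hypothetical system)
b = A @ rng.random(50)         # synthetic, consistent data
x = np.zeros(50)

def approx_error(x, n_rays=500):
    """Residual estimated from a random subset of rays instead of all rays."""
    rows = rng.choice(A.shape[0], size=n_rays, replace=False)
    return A[rows], A[rows] @ x - b[rows]

print("initial relative error:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
for _ in range(500):
    A_s, r = approx_error(x)
    x -= 0.05 * A_s.T @ r / len(r)   # descent step from the approximate error
print("final relative error:  ", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

    The payoff is that each iteration touches 500 rows instead of 10,000, at the cost of a noisier error estimate.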

  17. Impact of Measurement Error on Synchrophasor Applications

    SciTech Connect (OSTI)

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  18. Protections: Sampling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Protection 3: Sampling for known and unexpected contaminants. August 1, 2013. Monitoring stormwater in Los Alamos Canyon. The Environmental ...

  19. Protections: Sampling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and unexpected contaminants August 1, 2013 Monitoring stormwater in Los Alamos Canyon Monitoring stormwater in Los Alamos Canyon The Environmental Sampling Board, a key piece...

  20. Error handling strategies in multiphase inverse modeling

    SciTech Connect (OSTI)

    Finsterle, S.; Zhang, Y.

    2010-12-01

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
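
    One standard way to mitigate non-normal residuals of the kind described above is a robust (Huber-type) reweighting; the sketch below is a generic example, not the specific scheme implemented in iTOUGH2:

```python
import numpy as np

def huber_weights(residuals: np.ndarray, delta: float = 1.345) -> np.ndarray:
    """Downweight standardized residuals whose magnitude exceeds delta.

    Ordinary least squares assumes Gaussian residuals; heavy-tailed or
    systematic errors violate that and bias the estimates. Huber weighting
    caps the influence of large residuals instead of squaring it.
    """
    a = np.abs(residuals)
    return np.where(a <= delta, 1.0, delta / a)

r = np.array([0.2, -0.5, 1.0, 8.0])   # last residual is an outlier
print(huber_weights(r))               # -> [1.    1.    1.    0.168...]
```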

  1. Group representations, error bases and quantum codes

    SciTech Connect (OSTI)

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.

  2. Linux Kernel Error Detection and Correction

    Energy Science and Technology Software Center (OSTI)

    2007-04-11

    EDAC-utils consists of a library and a set of utilities for retrieving statistics from the Linux Kernel Error Detection and Correction (EDAC) drivers.

  3. SAMPLING SYSTEM

    DOE Patents [OSTI]

    Hannaford, B.A.; Rosenberg, R.; Segaser, C.L.; Terry, C.L.

    1961-01-17

    An apparatus is given for the batch sampling of radioactive liquids such as slurries from a system by remote control, while providing shielding for protection of operating personnel from the harmful effects of radiation.

  4. Sampling box

    DOE Patents [OSTI]

    Phillips, Terrance D.; Johnson, Craig

    2000-01-01

    An air sampling box that uses a slidable filter tray and a removable filter cartridge to allow for the easy replacement of a filter which catches radioactive particles is disclosed.

  5. WRAP Module 1 sampling and analysis plan

    SciTech Connect (OSTI)

    Mayancsik, B.A.

    1995-03-24

    This document provides the methodology to sample, screen, and analyze waste generated, processed, or otherwise the responsibility of the Waste Receiving and Processing Module 1 facility. This includes Low-Level Waste, Transuranic Waste, Mixed Waste, and Dangerous Waste.

  6. runtime error message: "readControlMsg: System returned error Connection timed out on TCP socket fd"

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    June 30, 2015. Symptom: User jobs with single or multiple apruns in a batch script may get this runtime error: "readControlMsg: System returned error Connection timed out on TCP socket fd". This problem is intermittent; sometimes resubmitting works. This error

  7. Model Validation and Testing: The Methodological Foundation of ASHRAE Standard 140; Preprint

    SciTech Connect (OSTI)

    Judkoff, R.; Neymark, J.

    2006-07-01

    Ideally, whole-building energy simulation programs model all aspects of a building that influence energy use and thermal and visual comfort for the occupants. An essential component of the development of such computer simulation models is a rigorous program of validation and testing. This paper describes a methodology to evaluate the accuracy of whole-building energy simulation programs. The methodology is also used to identify and diagnose differences in simulation predictions that may be caused by algorithmic differences, modeling limitations, coding errors, or input errors. The methodology has been adopted by ANSI/ASHRAE Standard 140 (ANSI/ASHRAE 2001, 2004), Method of Test for the Evaluation of Building Energy Analysis Computer Programs. A summary of the method is included in the ASHRAE Handbook of Fundamentals (ASHRAE 2005). This paper describes the ANSI/ASHRAE Standard 140 method of test and its methodological basis. Also discussed are possible future enhancements to Standard 140 and related research recommendations.

  8. Model Validation and Testing: The Methodological Foundation of ASHRAE Standard 140

    SciTech Connect (OSTI)

    Judkoff, R.; Neymark, J.

    2006-01-01

    Ideally, whole-building energy simulation programs model all aspects of a building that influence energy use and thermal and visual comfort for the occupants. An essential component of the development of such computer simulation models is a rigorous program of validation and testing. This paper describes a methodology to evaluate the accuracy of whole-building energy simulation programs. The methodology is also used to identify and diagnose differences in simulation predictions that may be caused by algorithmic differences, modeling limitations, coding errors, or input errors. The methodology has been adopted by ANSI/ASHRAE Standard 140, Method of Test for the Evaluation of Building Energy Analysis Computer Programs (ASHRAE 2001a, 2004). A summary of the method is included in the 2005 ASHRAE Handbook--Fundamentals (ASHRAE 2005). This paper describes the ASHRAE Standard 140 method of test and its methodological basis. Also discussed are possible future enhancements to ASHRAE Standard 140 and related research recommendations.

  9. SAMPLING OSCILLOSCOPE

    DOE Patents [OSTI]

    Sugarman, R.M.

    1960-08-30

    An oscilloscope is designed for displaying transient signal waveforms having random time and amplitude distributions. The oscilloscope is a sampling device that selects for display a portion of only those waveforms having a particular range of amplitudes. For this purpose a pulse-height analyzer is provided to screen the pulses. A variable voltage-level shifter and a time-scale ramp-voltage generator take the pulse height relative to the start of the waveform. The variable voltage shifter produces a voltage level raised one step for each sequential signal waveform to be sampled, and this results in an unsmeared record of input signal waveforms. Appropriate delay devices permit each sample waveform to pass its peak amplitude before the circuit selects it for display.

  10. Sampling apparatus

    DOE Patents [OSTI]

    Gordon, Norman R.; King, Lloyd L.; Jackson, Peter O.; Zulich, Alan W.

    1989-01-01

    A sampling apparatus is provided for sampling substances from solid surfaces. The apparatus includes first and second elongated tubular bodies which telescopically and sealingly join relative to one another. An absorbent pad is mounted to the end of a rod which is slidably received through a passageway in the end of one of the joined bodies. The rod is preferably slidably and rotatably received through the passageway, yet provides a selective fluid tight seal relative thereto. A recess is formed in the rod. When the recess and passageway are positioned to be coincident, fluid is permitted to flow through the passageway and around the rod. The pad is preferably laterally orientable relative to the rod and foldably retractable to within one of the bodies. A solvent is provided for wetting of the pad and solubilizing or suspending the material being sampled from a particular surface.

  11. Sampling apparatus

    DOE Patents [OSTI]

    Gordon, N.R.; King, L.L.; Jackson, P.O.; Zulich, A.W.

    1989-07-18

    A sampling apparatus is provided for sampling substances from solid surfaces. The apparatus includes first and second elongated tubular bodies which telescopically and sealingly join relative to one another. An absorbent pad is mounted to the end of a rod which is slidably received through a passageway in the end of one of the joined bodies. The rod is preferably slidably and rotatably received through the passageway, yet provides a selective fluid tight seal relative thereto. A recess is formed in the rod. When the recess and passageway are positioned to be coincident, fluid is permitted to flow through the passageway and around the rod. The pad is preferably laterally orientable relative to the rod and foldably retractable to within one of the bodies. A solvent is provided for wetting of the pad and solubilizing or suspending the material being sampled from a particular surface. 15 figs.

  12. eGallon-methodology-final

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    eGallon Methodology: The eGallon is measured as an "implicit" cost of a gallon of gasoline, relative to a traditional gallon of unleaded fuel -- the dominant fuel choice for vehicles in the U.S. ...

  13. Weekly Coal Production Estimation Methodology

    Gasoline and Diesel Fuel Update (EIA)

    Weekly Coal Production Estimation Methodology Step 1 (Estimate total amount of weekly U.S. coal production) U.S. coal production for the current week is estimated using a ratio ...

  14. Verification of unfold error estimates in the unfold operator code

    SciTech Connect (OSTI)

    Fehl, D.L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
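
    The Monte Carlo check described above is easy to reproduce on a toy unfolding problem (a sketch; the Gaussian response functions and spectrum below are stand-ins, not the UFO code):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy problem: data d = R @ s with overlapping response functions.
        m, n = 12, 8                      # measurement channels, spectral bins
        E = np.linspace(1.0, 20.0, n)     # bin centers (keV), illustrative
        centers = np.linspace(2.0, 18.0, m)[:, None]
        R = np.exp(-0.5 * ((E[None, :] - centers) / 4.0) ** 2)
        s_true = E**2 * np.exp(-E / 3.0)  # blackbody-like stand-in spectrum
        d0 = R @ s_true

        Rp = np.linalg.pinv(R)            # least-squares unfold operator

        # "Built-in" style estimate: propagate the 5% data covariance.
        C_d = np.diag((0.05 * d0) ** 2)
        sigma_builtin = np.sqrt(np.diag(Rp @ C_d @ Rp.T))

        # Monte Carlo estimate: 100 random data sets (Gaussian deviates).
        unfolds = np.array([Rp @ (d0 + rng.normal(0.0, 0.05 * d0))
                            for _ in range(100)])
        sigma_mc = unfolds.std(axis=0)

        print(np.round(sigma_mc / sigma_builtin, 2))  # ratios near 1 expected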

  15. Wind Power Forecasting Error Distributions over Multiple Timescales (Presentation)

    SciTech Connect (OSTI)

    Hodge, B. M.; Milligan, M.

    2011-07-01

    This presentation provides a statistical analysis of wind power forecast errors and error distributions, with examples using ERCOT data.

  16. Error recovery to enable error-free message transfer between nodes of a computer network

    DOE Patents [OSTI]

    Blumrich, Matthias A.; Coteus, Paul W.; Chen, Dong; Gara, Alan; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd; Steinmacher-Burow, Burkhard; Vranas, Pavlos M.

    2016-01-26

    An error-recovery method to enable error-free message transfer between nodes of a computer network. A first node of the network sends a packet to a second node of the network over a link between the nodes, and the first node keeps a copy of the packet on a sending end of the link until the first node receives acknowledgment from the second node that the packet was received without error. The second node tests the packet to determine if the packet is error free. If the packet is not error free, the second node sets a flag to mark the packet as corrupt. The second node returns acknowledgement to the first node specifying whether the packet was received with or without error. When the packet is received with error, the link is returned to a known state and the packet is sent again to the second node.
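
    A minimal sketch of the retain-until-acknowledged pattern the patent describes (a Python stand-in with a CRC playing the role of the link-level error test; all names are ours):

        import random, zlib

        def make_packet(seq, payload: bytes) -> dict:
            # Sender attaches a checksum so the receiver can test the packet.
            return {"seq": seq, "payload": payload, "crc": zlib.crc32(payload)}

        def noisy_link(packet) -> bool:
            # Stand-in for the physical link: occasionally corrupt the payload,
            # then let the receiver test it and return the acknowledgment.
            if random.random() < 0.3:
                packet = dict(packet, payload=b"garbled" + packet["payload"])
            return zlib.crc32(packet["payload"]) == packet["crc"]

        def send_reliable(seq, payload, max_retries=16):
            # Keep a copy on the sending end until a positive ack arrives;
            # on a negative ack, return the link to a known state and resend.
            copy = make_packet(seq, payload)
            for attempt in range(1, max_retries + 1):
                if noisy_link(copy):
                    return attempt  # tries needed for error-free delivery
            raise RuntimeError("link did not recover")

        print(send_reliable(0, b"hello"))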

  17. Measuring the Impact of Benchmarking & Transparency - Methodologies and the NYC Example

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


  18. Chemical incident economic impact analysis methodology (Technical Report)

    Office of Scientific and Technical Information (OSTI)


  19. Quantum error-correcting codes and devices

    DOE Patents [OSTI]

    Gottesman, Daniel

    2000-10-03

    A method of forming quantum error-correcting codes by first forming a stabilizer for a Hilbert space. A quantum information processing device can be formed to implement such quantum codes.
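
    For context, the stabilizer construction underlying such codes defines the code space as the joint +1 eigenspace of an abelian subgroup \(\mathcal{S}\) of the n-qubit Pauli group with \(-I \notin \mathcal{S}\) (a standard statement, not the patent's exact wording):

        \[
          \mathcal{C} \;=\; \{\, |\psi\rangle : S|\psi\rangle = |\psi\rangle
          \ \text{for all}\ S \in \mathcal{S} \,\},
        \]

    and choosing n - k independent generators for \(\mathcal{S}\) yields a code that protects k logical qubits within n physical qubits.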

  20. Evaluating operating system vulnerability to memory errors.

    SciTech Connect (OSTI)

    Ferreira, Kurt Brian; Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke; Mueller, Frank; Fiala, David; Brightwell, Ronald Brian

    2012-05-01

    Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.

  1. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  2. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    SciTech Connect (OSTI)

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.
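
    Schematically, a regression model combining additive and multiplicative measurement error with heteroscedasticity (our notation; the authors' nonparametric specification is richer) has the form

        \[
          y_i \;=\; (1 + \gamma_i)\, f(x_i) \;+\; \delta_i \;+\; \varepsilon_i,
          \qquad \varepsilon_i \sim \mathcal{N}\!\big(0, \sigma^2(x_i)\big),
        \]

    where \(\gamma_i\) captures multiplicative effects such as magnification changes, \(\delta_i\) captures additive offsets such as misalignment, and \(\sigma^2(x)\) allows the noise variance to vary across the radiograph.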

  3. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect (OSTI)

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  4. Measurement of laminar burning speeds and Markstein lengths using a novel methodology

    SciTech Connect (OSTI)

    Tahtouh, Toni; Halter, Fabien; Mounaim-Rousselle, Christine [Institut PRISME, Universite d'Orleans, 8 rue Leonard de Vinci-45072, Orleans Cedex 2 (France)

    2009-09-15

    Three different methodologies used for the extraction of laminar information are compared and discussed. Starting from an asymptotic analysis assuming a linear relation between the propagation speed and the stretch acting on the flame front, temporal radius evolutions of spherically expanding laminar flames are postprocessed to obtain laminar burning velocities and Markstein lengths. The first methodology fits the temporal radius evolution with a polynomial function, while the new methodology proposed uses the exact solution of the linear relation linking the flame speed and the stretch as a fit. The last methodology consists in an analytical resolution of the problem. To test the different methodologies, experiments were carried out in a stainless steel combustion chamber with methane/air mixtures at atmospheric pressure and ambient temperature. The equivalence ratio was varied from 0.55 to 1.3. The classical shadowgraph technique was used to detect the reaction zone. The new methodology has proven to be the most robust and provides the most accurate results, while the polynomial methodology induces some errors due to the differentiation process. As original radii are used in the analytical methodology, it is more affected by the experimental radius determination. Finally, laminar burning velocity and Markstein length values determined with the new methodology are compared with results reported in the literature. (author)
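
    In the linear regime assumed above, the propagation speed of a spherically expanding flame and its stretch obey (standard notation, ours)

        \[
          \frac{dr}{dt} \;=\; S_b^0 - L_b\,\kappa, \qquad
          \kappa \;=\; \frac{2}{r}\,\frac{dr}{dt}
          \qquad\Longrightarrow\qquad
          r(t) + 2 L_b \ln r(t) \;=\; S_b^0\, t + C ,
        \]

    where \(S_b^0\) is the unstretched flame speed and \(L_b\) the Markstein length. Fitting the measured r(t) directly to the implicit exact solution, rather than differentiating r(t) first, is what lets the new methodology avoid the noise amplified by the differentiation step.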

  5. Neutron multiplication error in TRU waste measurements

    SciTech Connect (OSTI)

    Veilleux, John [Los Alamos National Laboratory; Stanfield, Sean B [CCP; Wachter, Joe [CCP; Ceo, Bob [CCP

    2009-01-01

    Total Measurement Uncertainty (TMU) in neutron assays of transuranic waste (TRU) are comprised of several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors; measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are
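
    The acceptance criterion quoted above is simple to express in code (a trivial sketch of the stated limits, not TMU software):

        def tru_waste_compliant(fge, tmu, container="drum"):
            # Per the abstract: measured Fissile Gram Equivalent plus twice
            # the TMU must be below 200 for 55-gal drums, 325 for boxed waste.
            limit = 200.0 if container == "drum" else 325.0
            return fge + 2.0 * tmu < limit

        print(tru_waste_compliant(150.0, 20.0))  # True:  150 + 40 < 200
        print(tru_waste_compliant(150.0, 30.0))  # False: 150 + 60 >= 200

    A large multiplication-driven TMU therefore shrinks the fissile mass a container may hold, which is why rejections increase as this error term grows.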

  6. Superdense coding interleaved with forward error correction

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security, but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
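
    A block interleaver of the kind described can be sketched as follows (the depth and toy codeword stream are illustrative, not the paper's parameters):

        import numpy as np

        def interleave(bits, depth):
            # Write the codeword stream row-wise into a (depth x width) block
            # and read it out column-wise; after de-interleaving, a burst of
            # channel errors is scattered across many codewords.
            width = len(bits) // depth
            return np.asarray(bits[:depth * width]).reshape(depth, width).T.ravel()

        def deinterleave(bits, depth):
            width = len(bits) // depth
            return np.asarray(bits).reshape(width, depth).T.ravel()

        msg = np.arange(12)  # stand-in for FEC codeword symbols
        assert np.array_equal(deinterleave(interleave(msg, 3), 3), msg)

    With depth d, a burst of b consecutive channel errors touches each codeword at most ceil(b/d) times, which restores the effectiveness of the FEC code against burst errors.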

  7. Laser Phase Errors in Seeded FELs

    SciTech Connect (OSTI)

    Ratner, D.; Fry, A.; Stupakov, G.; White, W. (SLAC)

    2012-03-28

    Harmonic seeding of free electron lasers has attracted significant attention from the promise of transform-limited pulses in the soft X-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.

  8. Energy Efficiency Indicators Methodology Booklet

    SciTech Connect (OSTI)

    Sathaye, Jayant; Price, Lynn; McNeil, Michael; de la rue du Can, Stephane

    2010-05-01

    This Methodology Booklet provides a comprehensive review and methodology guiding principles for constructing energy efficiency indicators, with illustrative examples of application to individual countries. It reviews work done by international agencies and national governments in constructing meaningful energy efficiency indicators that help policy makers to assess changes in energy efficiency over time. Building on past OECD experience and best practices, and the knowledge of these countries' institutions, relevant sources of information to construct an energy indicator database are identified. A framework based on levels of hierarchy of indicators -- spanning from aggregate, macro-level to disaggregated end-use-level metrics -- is presented to help shape the understanding of assessing energy efficiency. In each sector of activity -- industry, commercial, residential, agriculture, and transport -- indicators are presented, and recommendations to distinguish the different factors affecting energy use are highlighted. The methodology booklet specifically addresses issues that are relevant to developing indicators where activity is a major factor driving energy demand. A companion spreadsheet tool is available upon request.

  9. Intel C++ compiler error: stl_iterator_base_types.h

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Intel C++ compiler error: stl_iterator_base_types.h. December 7, 2015, by Scott French. Because the system-supplied version of GCC is ...

  10. Error estimates for fission neutron outputs (Conference)

    Office of Scientific and Technical Information (OSTI)


  11. Internal compiler error for function pointer with identically named arguments

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    June 9, 2015, by Scott ...

  12. V-235: Cisco Mobility Services Engine Configuration Error Lets Remote Users Login Anonymously

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


  13. Error Estimation for Fault Tolerance in Numerical Integration Solvers

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Event Sponsor: ... In numerical integration solvers, approximation error can be estimated at a low cost. We ...

  14. A posteriori error analysis of parameterized linear systems using spectral methods (Journal Article)

    Office of Scientific and Technical Information (OSTI)


  15. Table 1b. Relative Standard Errors for Effective, Occupied, and Vacant Square Footage, 1992

    U.S. Energy Information Administration (EIA) Indexed Site

    Table 1b. Relative Standard Errors for Effective, Occupied, and Vacant Square Footage, 1992. Column headers: Building Characteristics; All Buildings (thousand); Total ...

  16. Accounting for Model Error in the Calibration of Physical Models

    Office of Scientific and Technical Information (OSTI)


  17. Table 2b. Relative Standard Errors for Electricity Consumption and Electricity Intensities, per Square Foot

    U.S. Energy Information Administration (EIA) Indexed Site

    Table 2b. Relative Standard Errors for Electricity Consumption and Electricity Intensities, per Square Foot, Specific to Occupied and ...

  18. Error Analysis in Nuclear Density Functional Theory (Journal Article)

    Office of Scientific and Technical Information (OSTI)

    Title: Error Analysis in Nuclear Density Functional Theory. Authors: Schunck, N.; McDonnell, ...

  19. Error Analysis in Nuclear Density Functional Theory (Journal Article)

    Office of Scientific and Technical Information (OSTI)


  20. Raman Thermometry: Comparing Methods to Minimize Error (Conference)

    Office of Scientific and Technical Information (OSTI)

    Abstract not provided.

  1. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  2. eGallon Methodology | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    eGallon Methodology: The average American measures the day-to-day cost of driving by the price of a gallon of gasoline. In other words, as the price of gasoline ...

  3. Analysis of Solar Two Heliostat Tracking Error Sources

    SciTech Connect (OSTI)

    Jones, S.A.; Stone, K.W.

    1999-01-28

    This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.

  4. Distribution of Wind Power Forecasting Errors from Operational Systems (Presentation)

    SciTech Connect (OSTI)

    Hodge, B. M.; Ela, E.; Milligan, M.

    2011-10-01

    This presentation offers new data and statistical analysis of wind power forecasting errors in operational systems.

  5. WIPP Weatherization: Common Errors and Innovative Solutions Presentation

    Broader source: Energy.gov [DOE]

    This presentation contains information on WIPP Weatherization: Common Errors and Innovative Solutions.

  6. Energy Intensity Indicators: Methodology | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Intensity Indicators: Methodology. The files listed below contain methodology documentation and related studies that support the information presented on this website. The files are available to view and/or download as Adobe Acrobat PDF files. 2003: Energy Indicators System: Index Construction Methodology. 2004: Changing the Base Year for the Index. Boyd, GA, and JM Roop. 2004. "A Note on the Fisher Ideal Index Decomposition for Structural Change in Energy Intensity."

  7. Siting Methodologies for Hydrokinetics | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Report that provides an overview of the federal and state regulatory framework for hydrokinetic projects. siting_handbook_2009.pdf (2.43 MB)

  8. Errors in response calculations for beams

    SciTech Connect (OSTI)

    Wada, H.; Warburton, G.B.

    1985-05-01

    When the finite element method is used to idealize a structure, its dynamic response can be determined from the governing matrix equation by the normal mode method or by one of the many approximate direct integration methods. In either method the approximate data of the finite element idealization are used, but further assumptions are introduced by the direct integration scheme. It is the purpose of this paper to study these errors for a simple structure. The transient flexural vibrations of a uniform cantilever beam, which is subjected to a transverse force at the free end, are determined by the Laplace transform method. Comparable responses are obtained for a finite element idealization of the beam, using the normal mode and Newmark average acceleration methods; the errors associated with the approximate methods are studied. If accuracy has priority and the quantity of data is small, the normal mode method is recommended; however, if the quantity of data is large, the Newmark method is useful.
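
    For reference, the Newmark scheme used in the comparison can be sketched for a single degree of freedom in incremental form (a standard textbook implementation, ours, not the paper's code; beta = 1/4, gamma = 1/2 gives the average acceleration method):

        import numpy as np

        def newmark_sdof(m, c, k, f, dt, beta=0.25, gamma=0.5):
            # Incremental Newmark integration of m*u'' + c*u' + k*u = f(t).
            n = len(f)
            u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
            a[0] = (f[0] - c * v[0] - k * u[0]) / m
            keff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
            for i in range(n - 1):
                dp = (f[i + 1] - f[i]
                      + m * (v[i] / (beta * dt) + a[i] / (2 * beta))
                      + c * (gamma * v[i] / beta
                             + dt * a[i] * (gamma / (2 * beta) - 1)))
                du = dp / keff
                dv = (gamma * du / (beta * dt) - gamma * v[i] / beta
                      + dt * a[i] * (1 - gamma / (2 * beta)))
                u[i + 1], v[i + 1] = u[i] + du, v[i] + dv
                a[i + 1] = (f[i + 1] - c * v[i + 1] - k * u[i + 1]) / m
            return u, v, a

        # Example: step load on a lightly damped oscillator.
        u, v, a = newmark_sdof(1.0, 0.1, 39.48, np.ones(2000), dt=0.005)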

  9. Detecting Soft Errors in Stencil based Computations

    SciTech Connect (OSTI)

    Sharma, V.; Gopalkrishnan, G.; Bronevetsky, G.

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
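
    A minimal stand-in for such a regression-based detector on a 1D stencil state (plain numpy least squares; SORREL's models and training are more elaborate):

        import numpy as np

        def features(u):
            # Left and right neighbors of each interior cell.
            return np.stack([u[:-2], u[2:]], axis=1)

        # Train stencil weights on clean data.
        u_clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))
        w, *_ = np.linalg.lstsq(features(u_clean), u_clean[1:-1], rcond=None)

        # At run time, flag cells whose prediction residual is anomalous.
        u_faulty = u_clean.copy()
        u_faulty[77] += 0.5                        # injected soft-error upset
        resid = np.abs(features(u_faulty) @ w - u_faulty[1:-1])
        print(np.argmax(resid) + 1)                # 77: the corrupted cell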

  10. Redundancy and Error Resilience in Boolean Networks

    SciTech Connect (OSTI)

    Peixoto, Tiago P.

    2010-01-29

    We consider the effect of noise in sparse Boolean networks with redundant functions. We show that they always exhibit a nonzero error level, and the dynamics undergoes a phase transition from nonergodicity to ergodicity, as a function of noise, after which the system is no longer capable of preserving a memory of its initial state. We obtain upper bounds on the critical value of noise for networks of different sparsity.

  11. Systematic errors in long baseline oscillation experiments

    SciTech Connect (OSTI)

    Harris, Deborah A.; /Fermilab

    2006-02-01

    This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

  12. An Optimized Autoregressive Forecast Error Generator for Wind and Load Uncertainty Study

    SciTech Connect (OSTI)

    De Mello, Phillip; Lu, Ning; Makarov, Yuri V.

    2011-01-17

    This paper presents a first-order autoregressive algorithm to generate real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast errors. The methodology aims at producing random wind and load forecast time series reflecting the autocorrelation and cross-correlation of historical forecast data sets. The statistical characteristics considered include the means, standard deviations, autocorrelations, and cross-correlations. A stochastic optimization routine is developed to minimize the differences between the statistical characteristics of the generated time series and the targeted ones. An optimal set of parameters is obtained and used to produce the RT, HA, and DA forecasts in due order of succession. This method, although implemented as a first-order autoregressive random forecast error generator, can be extended to higher orders. Results show that the methodology produces random series with desired statistics derived from real data sets provided by the California Independent System Operator (CAISO). The wind and load forecast error generator is currently used in wind integration studies to generate wind and load inputs for stochastic planning processes. Our future studies will focus on reflecting the diurnal and seasonal differences of the wind and load statistics and implementing them in the random forecast generator.
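
    The first-order generator at the core of the method is compact; the sketch below matches a target standard deviation and lag-1 autocorrelation (the full method also optimizes means and cross-correlations across the RT/HA/DA series, which this sketch omits):

        import numpy as np

        def ar1_forecast_errors(n, target_std, target_autocorr, seed=0):
            # AR(1) process e[t] = phi * e[t-1] + w[t]: its lag-1
            # autocorrelation equals phi and its stationary variance is
            # sigma_w^2 / (1 - phi^2), so we size the white noise to match.
            rng = np.random.default_rng(seed)
            phi = target_autocorr
            sigma_w = target_std * np.sqrt(1.0 - phi**2)
            e = np.empty(n)
            e[0] = rng.normal(0.0, target_std)
            for t in range(1, n):
                e[t] = phi * e[t - 1] + rng.normal(0.0, sigma_w)
            return e

        errs = ar1_forecast_errors(10000, target_std=100.0, target_autocorr=0.8)
        print(errs.std(), np.corrcoef(errs[:-1], errs[1:])[0, 1])  # ~100, ~0.8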

  13. Improving Memory Error Handling Using Linux

    SciTech Connect (OSTI)

    Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.

    2014-07-25

    As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, then the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable to deal with by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and results in reducing both hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. It will not be feasible without memory error handling to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
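
    The offlining step rests on the kernel's memory-hotplug sysfs interface; a hedged sketch follows (block granularity, path layout, and permissible states depend on kernel configuration, and this is not LANL's automation):

        import os

        SYS = "/sys/devices/system/memory"

        def offline_block(block_id: int) -> bool:
            # Ask the kernel to offline one memory block by writing to its
            # sysfs state file (requires root; the kernel may refuse, e.g.
            # when the block contains unmovable pages).
            state = os.path.join(SYS, f"memory{block_id}", "state")
            try:
                with open(state) as f:
                    if f.read().strip() == "offline":
                        return True
                with open(state, "w") as f:
                    f.write("offline")
                return True
            except OSError:
                return False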

  14. Common Errors and Innovative Solutions Transcript | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    An example of case studies, mainly showing photos of errors and good examples, then discussing the purpose of the home energy professional guidelines and certification. There may be more examples of what not to do, only because these were good learning opportunities. common_errors_innovative_solutions.doc (41.5 KB)

  15. Spectral characteristics of background error covariance and multiscale data assimilation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; Vogelmann, Andrew M.

    2016-05-17

    The steady increase of the spatial resolutions of numerical atmospheric and oceanic circulation models has occurred over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporarily intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and is larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.

  16. Application of asymptotic expansions for maximum likelihood estimators errors to gravitational waves from binary mergers: The single interferometer case

    SciTech Connect (OSTI)

    Zanolin, M.; Vitale, S.; Makris, N.

    2010-06-15

    In this paper we apply to gravitational waves (GW) from the inspiral phase of binary systems a recently derived frequentist methodology to calculate analytically the error for a maximum likelihood estimate of physical parameters. We use expansions of the covariance and the bias of a maximum likelihood estimate in terms of inverse powers of the signal-to-noise ratio (SNR), where the square root of the first order in the covariance expansion is the Cramér-Rao lower bound (CRLB). We evaluate the expansions, for the first time, for GW signals in the noises of GW interferometers. The examples are limited to a single, optimally oriented interferometer. We also compare the error estimates using the first two orders of the expansions with existing numerical Monte Carlo simulations. The first two orders of the covariance allow us to get error predictions closer to what is observed in numerical simulations than the CRLB. The methodology also predicts a necessary SNR to approximate the error with the CRLB and provides new insight into the relationship between waveform properties, SNR, dimension of the parameter space, and estimation errors. For example, matched-filter timing estimation can achieve the CRLB only if the SNR is larger than the kurtosis of the gravitational wave spectrum, and the necessary SNR is much larger if other physical parameters are also unknown.
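
    Schematically (our notation), the covariance expansion referred to above has the form

        \[
          \operatorname{var}\big(\hat\vartheta\big) \;=\;
          \frac{c_1(\vartheta)}{\rho^{2}} \;+\; \frac{c_2(\vartheta)}{\rho^{4}}
          \;+\; \mathcal{O}\!\left(\rho^{-6}\right),
        \]

    where \(\rho\) is the SNR and the leading term is the CRLB (the inverse Fisher information); comparing the first two terms gives the SNR above which the CRLB is a trustworthy error estimate.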

  17. Methodology for flammable gas evaluations

    SciTech Connect (OSTI)

    Hopkins, J.D., Westinghouse Hanford

    1996-06-12

    There are 177 radioactive waste storage tanks at the Hanford Site. The waste generates flammable gases. The waste releases gas continuously, but in some tanks the waste has shown a tendency to trap these flammable gases. When enough gas is trapped in a tank's waste matrix, it may be released in a way that renders part or all of the tank atmosphere flammable for a period of time. Tanks must be evaluated against previously defined criteria to determine whether they can present a flammable gas hazard. This document presents the methodology for evaluating tanks in two areas of concern in the tank headspace: steady-state flammable-gas concentration resulting from continuous release, and concentration resulting from an episodic gas release.
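
    For the steady-state branch, a generic well-mixed headspace balance illustrates the type of calculation involved (our simplification for illustration, not the document's actual criteria):

        \[
          V\,\frac{dC}{dt} \;=\; q_g\,(1 - C) \;-\; Q_v\,C
          \qquad\Longrightarrow\qquad
          C_{ss} \;=\; \frac{q_g}{q_g + Q_v},
        \]

    where \(q_g\) is the volumetric release rate of flammable gas, \(Q_v\) the headspace ventilation rate, and the steady-state concentration \(C_{ss}\) would be compared against a prescribed fraction of the lower flammability limit.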

  18. Simulation Enabled Safeguards Assessment Methodology

    SciTech Connect (OSTI)

    Robert Bean; Trond Bjornard; Thomas Larson

    2007-09-01

    It is expected that nuclear energy will be a significant component of future supplies. New facilities, operating under a strengthened international nonproliferation regime will be needed. There is good reason to believe virtual engineering applied to the facility design, as well as to the safeguards system design will reduce total project cost and improve efficiency in the design cycle. Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag and drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.

  19. Simulation enabled safeguards assessment methodology

    SciTech Connect (OSTI)

    Bean, Robert; Bjornard, Trond; Larson, Tom

    2007-07-01

    It is expected that nuclear energy will be a significant component of future supplies. New facilities, operating under a strengthened international nonproliferation regime will be needed. There is good reason to believe virtual engineering applied to the facility design, as well as to the safeguards system design will reduce total project cost and improve efficiency in the design cycle. Simulation Enabled Safeguards Assessment MEthodology has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag and drop wire-frame construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed. (authors)

  20. Verification of unfold error estimates in the UFO code

    SciTech Connect (OSTI)

    Fehl, D.L.; Biggs, F.

    1996-07-01

    Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by random uncertainties in the data; this built-in estimate was compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums.

  1. Methodology for Estimating Solar Potential on Multiple Building Rooftops for Photovoltaic Systems

    SciTech Connect (OSTI)

    Kodysh, Jeffrey B; Omitaomu, Olufemi A; Bhaduri, Budhendra L; Neish, Bradley S

    2013-01-01

    In this paper, a methodology for estimating solar potential on multiple building rooftops is presented. The objective of this methodology is to estimate the daily or monthly solar radiation potential on individual buildings in a city/region using Light Detection and Ranging (LiDAR) data and a geographic information system (GIS) approach. Conceptually, the methodology is based on the upward-looking hemispherical viewshed algorithm, but applied using an area-based modeling approach. The methodology considers input parameters, such as surface orientation, shadowing effect, elevation, and atmospheric conditions, that influence solar intensity on the earth's surface. The methodology has been implemented for some 212,000 buildings in Knox County, Tennessee, USA. Based on the results obtained, the methodology seems to be adequate for estimating solar radiation on multiple building rooftops. The use of LiDAR data improves the radiation potential estimates in terms of the model predictive error and the spatial pattern of the model outputs. This methodology could help cities/regions interested in sustainable projects to quickly identify buildings with higher potentials for roof-mounted photovoltaic systems.
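
    The geometric core of such a model is the incidence angle on the tilted roof plane combined with a per-sun-position shading test from the viewshed; a sketch (parameter names and the shading input are ours):

        import numpy as np

        def incidence_cosine(zenith, azimuth, slope, aspect):
            # Angle of incidence on a tilted plane (all angles in radians):
            # cos(theta_i) = cos(slope)cos(zenith)
            #              + sin(slope)sin(zenith)cos(azimuth - aspect)
            return (np.cos(slope) * np.cos(zenith)
                    + np.sin(slope) * np.sin(zenith) * np.cos(azimuth - aspect))

        def direct_on_roof(dni, zenith, azimuth, slope, aspect, shaded):
            # Direct-beam irradiance on the roof plane; 'shaded' would come
            # from the hemispherical viewshed test against LiDAR-derived
            # surroundings, evaluated per sun position.
            cos_i = np.clip(incidence_cosine(zenith, azimuth, slope, aspect),
                            0.0, None)
            return np.where(shaded, 0.0, dni * cos_i)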

  2. Spectral Characteristics of Background Error Covariance and Multiscale Data Assimilation: Background Error Covariance and Multiscale Data Assimilation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Li, Zhijin; Cheng, Xiaoping; Gustafson, William I.; Vogelmann, Andrew M.

    2016-05-17

    The steady increase of the spatial resolutions of numerical atmospheric and oceanic circulation models has occurred over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporarily intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and is larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.

  3. Methodology for EIA Weekly Underground Natural Gas Storage Estimates

    Weekly Natural Gas Storage Report (EIA)

    Methodology for EIA Weekly Underground Natural Gas Storage Estimates. Latest update: November 16, 2015. This report consists of the following sections: Survey and Survey Processing, a description of the survey and an overview of the program; Sampling, a description of the selection process used to identify companies in the survey; Estimation, how the regional estimates are prepared from the collected data; and Computing the Five-Year Averages, Maxima, Minima, and Year-Ago Values for the Weekly Natural Gas Storage Report.

  4. CONTAMINATED SOIL VOLUME ESTIMATE TRACKING METHODOLOGY

    SciTech Connect (OSTI)

    Durham, L.A.; Johnson, R.L.; Rieman, C.; Kenna, T.; Pilon, R.

    2003-02-27

    The U.S. Army Corps of Engineers (USACE) is conducting a cleanup of radiologically contaminated properties under the Formerly Utilized Sites Remedial Action Program (FUSRAP). The largest cost element for most of the FUSRAP sites is the transportation and disposal of contaminated soil. Project managers and engineers need an estimate of the volume of contaminated soil to determine project costs and schedule. Once excavation activities begin and additional remedial action data are collected, the actual quantity of contaminated soil often deviates from the original estimate, resulting in cost and schedule impacts to the project. The project costs and schedule need to be frequently updated by tracking the actual quantities of excavated soil and contaminated soil remaining during the life of a remedial action project. A soil volume estimate tracking methodology was developed to provide a mechanism for project managers and engineers to create better project controls of costs and schedule. For the FUSRAP Linde site, an estimate of the initial volume of in situ soil above the specified cleanup guidelines was calculated on the basis of discrete soil sample data and other relevant data using indicator geostatistical techniques combined with Bayesian analysis. During the remedial action, updated volume estimates of remaining in situ soils requiring excavation were calculated on a periodic basis. In addition to taking into account the volume of soil that had been excavated, the updated volume estimates incorporated both new gamma walkover surveys and discrete sample data collected as part of the remedial action. A civil survey company provided periodic estimates of actual in situ excavated soil volumes. By using the results from the civil survey of actual in situ volumes excavated and the updated estimate of the remaining volume of contaminated soil requiring excavation, the USACE Buffalo District was able to forecast and update project costs and schedule. The soil volume

  5. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water sampling location map and water sampling field activities verification for the Salmon, Mississippi, Site.

  6. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water sampling locations at the Rulison, Colorado, Site; water sampling field activities verification.

  7. Error Reduction for Weigh-In-Motion

    SciTech Connect (OSTI)

    Hively, Lee M; Abercrombie, Robert K; Scudiere, Matthew B; Sheldon, Frederick T

    2009-01-01

    Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).
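
    The effect of filtering out the oscillations is easy to see on a synthetic axle signal; the sketch below uses a plain windowed mean sized to the dominant oscillation period (illustrative only; the abstract does not disclose ORNL's actual filter):

        import numpy as np

        rng = np.random.default_rng(2)

        fs = 1000.0                              # sampling rate, Hz
        t = np.arange(0.0, 1.0, 1.0 / fs)
        static_weight = 8000.0                   # axle weight, kg (illustrative)
        signal = (static_weight
                  + 400.0 * np.sin(2 * np.pi * 2.5 * t)   # bounce mode
                  + 150.0 * np.sin(2 * np.pi * 11.0 * t)  # rocking / axle hop
                  + rng.normal(0.0, 20.0, t.size))        # sensor noise

        # Averaging over an integer number of periods of the dominant mode
        # cancels that oscillation; remaining components average down too.
        window = int(fs / 2.5)                   # one full bounce period
        running = np.convolve(signal, np.ones(window) / window, mode="valid")
        est = running.mean()
        print(100.0 * abs(est - static_weight) / static_weight, "% error")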

  8. Error Reduction in Weigh-In-Motion

    Energy Science and Technology Software Center (OSTI)

    2007-09-21

    Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).

  9. Waste Package Design Methodology Report

    SciTech Connect (OSTI)

    D.A. Brownson

    2001-09-28

    The objective of this report is to describe the analytical methods and processes used by the Waste Package Design Section to establish the integrity of the various waste package designs, the emplacement pallet, and the drip shield. The scope of this report shall be the methodology used in criticality, risk-informed, shielding, source term, structural, and thermal analyses. The basic features and appropriateness of the methods are illustrated, and the processes are defined whereby input values and assumptions flow through the application of those methods to obtain designs that ensure defense-in-depth as well as satisfy requirements on system performance. Such requirements include those imposed by federal regulation, from both the U.S. Department of Energy (DOE) and U.S. Nuclear Regulatory Commission (NRC), and those imposed by the Yucca Mountain Project to meet repository performance goals. The report is to be used, in part, to describe the waste package design methods and techniques to be used for producing input to the License Application Report.

  10. Methodology for Validating Building Energy Analysis Simulations

    SciTech Connect (OSTI)

    Judkoff, R.; Wortman, D.; O'Doherty, B.; Burch, J.

    2008-04-01

    The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

  11. Seismic Fracture Characterization Methodologies for Enhanced Geothermal Systems (Technical Report)

    Office of Scientific and Technical Information (OSTI)

    Executive Summary: The overall objective of this work was the development of surface and borehole seismic methodologies using both compressional and shear waves for characterizing faults and fractures in Enhanced Geothermal Systems. We used both ...

  12. Methodology for Augmenting Existing Paths with Additional Parallel Transects

    SciTech Connect (OSTI)

    Wilson, John E.

    2013-09-30

    Visual Sample Plan (VSP) is sample planning software that is used, among other purposes, to plan transect sampling paths to detect areas that were potentially used for munition training. This module was developed for application on a large site where existing roads and trails were to be used as primary sampling paths. Gap areas between these primary paths needed to be found and covered with parallel transect paths; these gap areas represent areas on the site that are more than a specified distance from a primary path. The added parallel paths needed to optionally be connected together into a single path, the shortest path possible. The paths also needed to optionally be attached to existing primary paths, again with the shortest possible path. Finally, the process must be repeatable and predictable, so that the same inputs (primary paths, specified distance, and path options) will result in the same set of new paths every time. This methodology was developed to meet those specifications. A sketch of the gap-finding step follows.
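
    A sketch of the gap-finding step in Python (the grid, road geometry, and distance threshold are illustrative; VSP's implementation is not public in this abstract):

        import numpy as np

        def gap_mask(grid_points, path_points, distance):
            # A grid cell lies in a "gap area" if it is farther than
            # 'distance' from every point sampled along the primary paths.
            d2 = ((grid_points[:, None, :] - path_points[None, :, :]) ** 2).sum(-1)
            return d2.min(axis=1) > distance**2

        xs, ys = np.meshgrid(np.linspace(0, 1000, 51), np.linspace(0, 1000, 51))
        grid = np.column_stack([xs.ravel(), ys.ravel()])
        road = np.column_stack([np.linspace(0, 1000, 200), np.full(200, 500.0)])

        gaps = gap_mask(grid, road, distance=150.0)
        print(gaps.sum(), "grid cells need added transect coverage")

    Parallel transects would then be laid across each connected gap region and, optionally, joined to each other and to the primary paths with the shortest connectors, with deterministic tie-breaking so the same inputs always reproduce the same paths.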

  13. Resolved: "error while loading shared libraries: libalpslli.so.0" with serial codes on login nodes

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    "error while loading shared libraries: libalpslli.so.0" with serial codes on login nodes Resolved: "error while loading shared libraries: libalpslli.so.0" with serial codes on...

  14. MPI errors from cray-mpich/7.3.0

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    MPI errors from cray-mpich/7.3.0. January 6, 2016, by Ankit Bhagatwala. A change in the MPICH2 library that now strictly enforces non-overlapping...

  15. Siting Methodologies for Hydrokinetics | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Report that provides an overview of the federal and state regulatory framework for hydrokinetic projects. sitinghandbook2009.pdf ...

  16. Solutia: Massachusetts Chemical Manufacturer Uses SECURE Methodology...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Solutia: Massachusetts Chemical Manufacturer Uses SECURE Methodology to Identify Potential Reductions in Utility and Process Energy Consumption. This case ...

  17. Development of Nonlinear SSI Time Domain Methodology

    Broader source: Energy.gov [DOE]

    Justin Coleman, P.E., Nuclear Science and Technology, Idaho National Laboratory, October 22, 2014.

  18. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Groundwater and Surface Water Sampling at the Slick Rock, Colorado, Processing Sites ... Water Sampling Field Activities Verification ...

  19. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Green River, Utah, Disposal Site August 2014 LMSGRN... Water Sampling Field Activities Verification ...

  20. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and May 2014 Groundwater and Surface Water Sampling at the Shiprock, New Mexico, Disposal ... Water Sampling Field Activities Verification ...

  1. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Rio Blanco, Colorado, Site October 2014 LMSRBLS00514 ... Water Sampling Field Activities Verification ...

  2. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Natural Gas and Produced Water Sampling at the Rulison, Colorado, Site November 2014 LMS... Water Sampling Field Activities Verification ...

  3. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Groundwater and Surface Water Sampling at the Rulison, Colorado, Site October 2015 LMS... Water Sampling Field Activities Verification ...

  4. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Monticello, Utah, Processing Site July 2015 LMSMNT... Water Sampling Field Activities Verification ...

  5. September 2004 Water Sampling

    Office of Legacy Management (LM)

    2015 Groundwater and Surface Water Sampling at the Shiprock, New Mexico, Disposal Site ... Water Sampling Field Activities Verification ...

  6. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Rio Blanco, Colorado, Site October 2015 LMSRBLS00515 ... Water Sampling Field Activities Verification ...

  7. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Produced Water Sampling at the Rulison, Colorado, Site May 2015 LMSRULS00115 Available ... Water Sampling Field Activities Verification ...

  8. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Natural Gas and Produced Water Sampling at the Gasbuggy, New Mexico, Site December 2013 ... Water Sampling Field Activities Verification ...

  9. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Produced Water Sampling at the Rulison, Colorado, Site January 2016 LMSRULS00915 ... Water Sampling Field Activities Verification ...

  10. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Groundwater and Surface Water Sampling at the Monument Valley, Arizona, Processing Site ... Water Sampling Field Activities Verification ...

  11. September 2004 Water Sampling

    Office of Legacy Management (LM)

    July 2015 Groundwater and Surface Water Sampling at the Gunnison, Colorado, Processing ... Water Sampling Field Activities Verification ...

  12. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Monticello, Utah, Processing Site July 2014 LMSMNT... Water Sampling Field Activities Verification ...

  13. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling at the Monticello, Utah, Processing Site January 2014 LMSMNTS01013 This ... Water Sampling Field Activities Verification ...

  14. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Naturita, Colorado, Processing Site October 2013 LMSNAP... Water Sampling Field Activities Verification ...

  15. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Groundwater and Surface Water Sampling at the Gunnison, Colorado, Processing Site ... Water Sampling Field Activities Verification ...

  16. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Tuba City, Arizona, Disposal Site November 2013 LMSTUB... Water Sampling Field Activities Verification ...

  17. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Groundwater and Surface Water Sampling at the Monticello, Utah, Processing Site January ... Water Sampling Field Activities Verification ...

  18. Cyber-Informed Engineering: The Need for a New Risk Informed and Design Methodology

    SciTech Connect (OSTI)

    Price, Joseph Daniel; Anderson, Robert Stephen

    2015-06-01

    Current engineering and risk management methodologies do not contain the foundational assumptions required to address the intelligent adversary's capabilities in malevolent cyber attacks. Current methodologies focus on equipment failures or human error as initiating events for a hazard, while cyber attacks use the functionality of a trusted system to perform operations outside of the intended design and without the operator's knowledge. These threats can by-pass or manipulate traditionally engineered safety barriers and present false information, invalidating the fundamental basis of a safety analysis. Cyber threats must be fundamentally analyzed from a completely new perspective where neither equipment nor human operation can be fully trusted. A new risk analysis and design methodology needs to be developed to address this rapidly evolving threatscape.

  19. Locked modes and magnetic field errors in MST

    SciTech Connect (OSTI)

    Almagri, A.F.; Assadi, S.; Prager, S.C.; Sarff, J.S.; Kerst, D.W.

    1992-06-01

    In the MST reversed field pinch magnetic oscillations become stationary (locked) in the lab frame as a result of a process involving interactions between the modes, sawteeth, and field errors. Several helical modes become phase locked to each other to form a rotating localized disturbance, the disturbance locks to an impulsive field error generated at a sawtooth crash, the error fields grow monotonically after locking (perhaps due to an unstable interaction between the modes and field error), and over the tens of milliseconds of growth confinement degrades and the discharge eventually terminates. Field error control has been partially successful in eliminating locking.

  20. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    SciTech Connect (OSTI)

    Beckerman, M.; Jones, J.P.

    1999-02-01

    We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.

  1. Error-eliminating rapid ultrasonic firing

    DOE Patents [OSTI]

    Borenstein, Johann; Koren, Yoram

    1993-08-24

    A system for producing reliable navigation data for a mobile vehicle, such as a robot, combines multiple range samples to increase the "confidence" of the algorithm in the existence of an obstacle. At higher vehicle speed, it is crucial to sample each sensor quickly and repeatedly to gather multiple samples in time to avoid a collision. Erroneous data is rejected by delaying the issuance of an ultrasonic energy pulse by a predetermined wait-period, which may be different during alternate ultrasonic firing cycles. Consecutive readings are compared, and the corresponding data is rejected if the readings differ by more than a predetermined amount. The rejection rate for the data is monitored and the operating speed of the navigation system is reduced if the data rejection rate is increased. This is useful to distinguish and eliminate noise from the data which truly represents the existence of an article in the field of operation of the vehicle.

  2. Error-eliminating rapid ultrasonic firing

    DOE Patents [OSTI]

    Borenstein, J.; Koren, Y.

    1993-08-24

    A system for producing reliable navigation data for a mobile vehicle, such as a robot, combines multiple range samples to increase the "confidence" of the algorithm in the existence of an obstacle. At higher vehicle speed, it is crucial to sample each sensor quickly and repeatedly to gather multiple samples in time to avoid a collision. Erroneous data is rejected by delaying the issuance of an ultrasonic energy pulse by a predetermined wait-period, which may be different during alternate ultrasonic firing cycles. Consecutive readings are compared, and the corresponding data is rejected if the readings differ by more than a predetermined amount. The rejection rate for the data is monitored and the operating speed of the navigation system is reduced if the data rejection rate is increased. This is useful to distinguish and eliminate noise from the data which truly represents the existence of an article in the field of operation of the vehicle.
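
    Both patent records above describe the same rejection logic. A minimal, hypothetical rendering in Python (not the patented implementation; all thresholds are invented) might look like this: consecutive readings are compared, inconsistent ones are rejected, and a rising rejection rate signals that the vehicle should slow down.

    ```python
    # Sketch of consecutive-reading comparison with rejection-rate monitoring.
    from collections import deque

    class RangeFilter:
        def __init__(self, max_delta=0.25, window=50, max_reject_rate=0.2):
            self.max_delta = max_delta           # max change between readings (m)
            self.recent = deque(maxlen=window)   # 1 = rejected, 0 = accepted
            self.max_reject_rate = max_reject_rate
            self.last = None

        def accept(self, reading):
            """Return the reading if consistent with the last one, else None."""
            ok = self.last is None or abs(reading - self.last) <= self.max_delta
            self.recent.append(0 if ok else 1)
            self.last = reading
            return reading if ok else None

        def should_slow_down(self):
            """True when the rejection rate suggests a noisy environment."""
            if not self.recent:
                return False
            return sum(self.recent) / len(self.recent) > self.max_reject_rate
    ```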

  3. Culture, and a Metrics Methodology for Biological Countermeasure Scenarios

    SciTech Connect (OSTI)

    Simpson, Mary J.

    2007-03-15

    Outcome Metrics Methodology defines a way to evaluate outcome metrics associated with scenario analyses related to biological countermeasures. Previous work developed a schema to allow evaluation of common elements of impacts across a wide range of potential threats and scenarios. Classes of metrics were identified that could be used by decision makers to differentiate the common bases among disparate scenarios. Typical impact metrics used in risk calculations include the anticipated number of deaths, casualties, and the direct economic costs should a given event occur. There are less obvious metrics that are often as important and require more intensive initial work to be incorporated. This study defines a methodology for quantifying, evaluating, and ranking metrics other than direct health and economic impacts. As has been observed with the consequences of Hurricane Katrina, impacts to the culture of specific sectors of society are less obvious on an immediate basis but equally important over the ensuing and long term. Culture is used as the example class of metrics within which requirements for a methodology are explored, likely methodologies are examined, underlying assumptions for the respective methodologies are discussed, and the basis for recommending a specific methodology is demonstrated. Culture, as a class of metrics, is shown to consist of political, sociological, and psychological elements that are highly valued by decision makers. In addition, cultural practices, dimensions, and kinds of knowledge offer complementary sets of information that contribute to the context within which experts can provide input. The quantification and evaluation of sociopolitical, socio-economic, and sociotechnical impacts depend predominantly on subjective, expert judgment. Epidemiological data is limited, resulting in samples with statistical limits. Dose response assessments and curves depend on the quality of data and its relevance to human modes of exposure ...

  4. A technique for human error analysis (ATHEANA)

    SciTech Connect (OSTI)

    Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.

    1996-05-01

    Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.

  5. LCLS Sample Preparation Laboratory | Sample Preparation Laboratories

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Kayla Zimmerman, (650) 926-6281; Lisa Hammon, LCLS Lab Coordinator. Welcome to the LCLS Sample Preparation Laboratory. This small general use wet lab is located in Rm 109 of the Far Experimental Hall near the MEC, CXI, and XCS hutches. It conveniently serves all LCLS hutches and is available for final stage sample preparation. Due to space limitations, certain types of activities may be restricted and all access must be scheduled in advance. User lab bench ...

  6. Particle Measurement Methodology: Comparison of On-road and Lab Diesel Particle Size Distributions

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    2002 DEER Conference Presentation: University of Minnesota. 2002_deer_kittelson2.pdf (360.23 KB)

  7. Photovoltaic module energy rating methodology development

    SciTech Connect (OSTI)

    Kroposki, B.; Myers, D.; Emery, K.; Mrig, L.; Whitaker, C.; Newmiller, J.

    1996-05-01

    A consensus-based methodology to calculate the energy output of a PV module will be described in this paper. The methodology develops a simple measure of PV module performance that provides for a realistic estimate of how a module will perform in specific applications. The approach makes use of the weather data profiles that describe conditions throughout the United States and emphasizes performance differences between various module types. An industry-representative Technical Review Committee has been assembled to provide feedback and guidance on the strawman and final approach used in developing the methodology.

  8. Covariance Evaluation Methodology for Neutron Cross Sections

    SciTech Connect (OSTI)

    Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.

    2008-09-01

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are Atlas of Neutron Resonances, nuclear reaction code EMPIRE, and the Bayesian code implementing Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
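
    In the scalar case, the Kalman-filter element of such an evaluation reduces to a standard Bayesian update. The sketch below is a generic illustration with invented numbers, not the NNDC-BNL code: a model prior for a cross section is combined with a measurement, and the posterior variance shrinks accordingly.

    ```python
    # Generic scalar Kalman (Bayesian) update for a cross-section estimate.
    def kalman_update(x_prior, var_prior, y_meas, var_meas):
        """Combine prior and measurement; return posterior mean and variance."""
        k = var_prior / (var_prior + var_meas)        # Kalman gain
        x_post = x_prior + k * (y_meas - x_prior)
        var_post = (1.0 - k) * var_prior
        return x_post, var_post

    # illustrative: prior 2.0 b +/- 0.4 b, measurement 1.8 b +/- 0.09 b
    x, v = kalman_update(2.0, 0.4**2, 1.8, 0.09**2)
    print(f"posterior: {x:.3f} b, sigma = {v**0.5:.3f} b")
    ```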

  9. A method for the quantification of model form error associated with physical systems.

    SciTech Connect (OSTI)

    Wallen, Samuel P.; Brake, Matthew Robert

    2014-03-01

    In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.

  10. NSD Methodology Report | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    NSDMethodologyReport.pdf (4.46 MB)

  11. September 2004 Water Sampling

    Office of Legacy Management (LM)

    ... Inductively Coupled Plasma (ICP) Interference Check Sample (ICS) Analysis ICP interference check samples ICSA and ICSAB were analyzed at the required frequency to verify the ...

  12. Visual Sample Plan Flyer

    Office of Energy Efficiency and Renewable Energy (EERE)

    This flyer better explains that VSP is a free, easy-to-use software tool that supports development of optimal sampling plans based on statistical sampling theory.

  13. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Groundwater and Surface Water Sampling at the Tuba City, Arizona, Disposal Site June 2015 ... Water Sampling Field Activities Verification ...

  14. Methodology for Monthly Crude Oil Production Estimates

    U.S. Energy Information Administration (EIA) Indexed Site

    Methodology for Monthly Crude Oil Production Estimates. Executive summary: The U.S. Energy Information Administration (EIA) relies on data from state and other federal agencies and does not currently collect survey data directly from crude oil producers. Summarizing the estimation process in terms of percent of U.S. production: 20% is based on state agency data, including North Dakota and ...

  15. Polaractivation for classical zero-error capacity of qudit channels

    SciTech Connect (OSTI)

    Gyongyosi, Laszlo; Imre, Sandor

    2014-12-04

    We introduce a new phenomenon for zero-error transmission of classical information over quantum channels that initially were not capable of zero-error classical communication. The effect is called polaractivation, and the result is similar to the superactivation effect. We use the Choi-Jamiolkowski isomorphism and the Schmidt theorem to prove the polaractivation of classical zero-error capacity and define the polaractivator channel coding scheme.

  16. Internal compiler error for function pointer with identically named arguments

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    June 9, 2015, by Scott French, NERSC USG. Status: Bug 21435 reported to PGI. For pgcc versions after 12.x (up through 12.9 is fine, but 13.x and 14.x are not), you may observe an internal compiler error associated with function pointer prototypes when named arguments are used. Specifically, if a function pointer type is defined ...

  17. A design methodology for unattended monitoring systems

    SciTech Connect (OSTI)

    SMITH,JAMES D.; DELAND,SHARON M.

    2000-03-01

    The authors presented a high-level methodology for the design of unattended monitoring systems, focusing on a system to detect diversion of nuclear materials from a storage facility. The methodology is composed of seven interrelated analyses: Facility Analysis, Vulnerability Analysis, Threat Assessment, Scenario Assessment, Design Analysis, Conceptual Design, and Performance Assessment. The design of the monitoring system is iteratively improved until it meets a set of pre-established performance criteria. The methodology presented here is based on other, well-established system analysis methodologies and hence they believe it can be adapted to other verification or compliance applications. In order to make this approach more generic, however, there needs to be more work on techniques for establishing evaluation criteria and associated performance metrics. They found that defining general-purpose evaluation criteria for verifying compliance with international agreements was a significant undertaking in itself. They finally focused on diversion of nuclear material in order to simplify the problem so that they could work out an overall approach for the design methodology. However, general guidelines for the development of evaluation criteria are critical for a general-purpose methodology. A poor choice in evaluation criteria could result in a monitoring system design that solves the wrong problem.

  18. Review and evaluation of paleohydrologic methodologies

    SciTech Connect (OSTI)

    Foley, M.G.; Zimmerman, D.A.; Doesburg, J.M.; Thorne, P.D.

    1982-12-01

    A literature review was conducted to identify methodologies that could be used to interpret paleohydrologic environments. Paleohydrology is the study of past hydrologic systems or of the past behavior of an existing hydrologic system. The purpose of the review was to evaluate how well these methodologies could be applied to the siting of low-level radioactive waste facilities. The computer literature search queried five bibliographical data bases containing over five million citations of technical journals, books, conference papers, and reports. Two data-base searches (United States Geological Survey - USGS) and a manual search were also conducted. The methodologies were examined for data requirements and sensitivity limits. Paleohydrologic interpretations are uncertain because of the effects of time on hydrologic and geologic systems and because of the complexity of fluvial systems. Paleoflow determinations appear in many cases to be order-of-magnitude estimates. However, the methodologies identified in this report mitigate this uncertainty when used collectively as well as independently. That is, the data from individual methodologies can be compared or combined to corroborate hydrologic predictions. In this manner, paleohydrologic methodologies are viable tools to assist in evaluating the likely future hydrology of low-level radioactive waste sites.

  19. WIPP Weatherization: Common Errors and Innovative Solutions Presentati...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


  20. Output-Based Error Estimation and Adaptation for Uncertainty...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Isaac M. Asher and Krzysztof J. Fidkowski, University of Michigan. US National Congress on Computational...

  1. Platform-Independent Method for Detecting Errors in Metagenomic...

    Office of Scientific and Technical Information (OSTI)

    Platform-Independent Method for Detecting Errors in Metagenomic Sequencing Data: DRISEE. Authors: Keegan, K. P.; Trimble, W. L.; Wilkening, J.; Wilke, A.; Harrison, T.; ...

  2. Detecting and correcting hard errors in a memory array

    DOE Patents [OSTI]

    Kalamatianos, John; John, Johnsy Kanjirapallil; Gelinas, Robert; Sridharan, Vilas K.; Nevius, Phillip E.

    2015-11-19

    Hard errors in the memory array can be detected and corrected in real-time using reusable entries in an error status buffer. Data may be rewritten to a portion of a memory array and a register in response to a first error in data read from the portion of the memory array. The rewritten data may then be written from the register to an entry of an error status buffer in response to the rewritten data read from the register differing from the rewritten data read from the portion of the memory array.
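
    The rewrite-and-compare test at the heart of this record can be shown with a toy simulation. The sketch below is illustrative only, not the patented hardware: a mismatch that survives rewriting the data marks a hard (stuck-at) fault for the error status buffer, while agreement after the rewrite would have indicated a transient upset.

    ```python
    # Toy simulation of distinguishing hard from transient memory errors.
    STUCK_MASK = 0b0100                 # simulated stuck-at-0 bit in one cell

    def array_readback(value):
        """The faulty array cell silently drops the stuck bit."""
        return value & ~STUCK_MASK

    def classify(expected):
        if array_readback(expected) == expected:
            return "no error"
        register = expected                    # reference copy in a register
        rewritten = array_readback(expected)   # rewrite, then re-read the cell
        # a mismatch that persists after the rewrite marks a hard error
        return "hard error" if rewritten != register else "transient error"

    print(classify(0b0111))             # -> "hard error": bit 2 is stuck at 0
    ```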

  3. Info-Gap Analysis of Truncation Errors in Numerical Simulations...

    Office of Scientific and Technical Information (OSTI)

    Info-Gap Analysis of Truncation Errors in Numerical Simulations. Authors: Kamm, James R.; Witkowski, Walter R.; Rider, William J.; Trucano, Timothy Guy; Ben-Haim, Yakov. ...

  4. Info-Gap Analysis of Numerical Truncation Errors. (Conference...

    Office of Scientific and Technical Information (OSTI)

    Info-Gap Analysis of Numerical Truncation Errors. Authors: Kamm, James R.; Witkowski, Walter R.; Rider, William J.; Trucano, Timothy Guy; Ben-Haim, Yakov. Publication ...

  5. Table 6b. Relative Standard Errors for Total Electricity Consumption...

    U.S. Energy Information Administration (EIA) Indexed Site

    Table 6b. Relative Standard Errors for Total Electricity Consumption per Effective Occupied Square Foot, 1992. Building Characteristics: All Buildings Using Electricity (thousand), Total...

  6. Accounting for Model Error in the Calibration of Physical Models

    Office of Scientific and Technical Information (OSTI)

    ... model error term in locations where key modeling assumptions and approximations are made ... to represent the truth. In this context, the data has no noise. Discrepancy ...

  7. Handling Model Error in the Calibration of Physical Models

    Office of Scientific and Technical Information (OSTI)

    ... model error term in locations where key modeling assumptions and approximations are made ... to represent the truth. In this context, the data has no noise. Discrepancy ...

  8. Confirmation of standard error analysis techniques applied to...

    Office of Scientific and Technical Information (OSTI)

    reported parameter errors are not reliable in many EXAFS studies in the literature. ...

  9. U-058: Apache Struts Conversion Error OGNL Expression Injection...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... in Apache Struts. A remote user can execute arbitrary commands on the target system. PLATFORM: Apache Struts 2.x. ABSTRACT: Apache Struts Conversion Error OGNL Expression...

  10. Development of a statistically based access delay timeline methodology.

    SciTech Connect (OSTI)

    Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane; Hendrickson, Stacey M. Langfitt

    2013-02-01

    The charter for adversarial delay is to hinder access to critical resources through the use of physical systems increasing an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating times required to complete each task with little regard to uncertainty, complexity, or decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methodologies, taking into account small sample size, expert judgment, human factors and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results in making informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness with lower cost.
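
    The core move, replacing a single worst-case delay number with a distribution, is easy to show with a short Monte Carlo sketch. The task-time distributions and parameters below are invented for illustration; the report's Bayesian treatment additionally handles small samples, expert judgment, and human factors.

    ```python
    # Monte Carlo sketch: total adversary delay as a distribution, not a point.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 100_000

    # three sequential barrier tasks with uncertain (lognormal) times, seconds
    fence = rng.lognormal(mean=np.log(60), sigma=0.4, size=N)
    door  = rng.lognormal(mean=np.log(120), sigma=0.5, size=N)
    vault = rng.lognormal(mean=np.log(300), sigma=0.3, size=N)
    total = fence + door + vault

    # report percentiles of the delay timeline instead of one worst case
    p10, p50, p90 = np.percentile(total, [10, 50, 90])
    print(f"delay: 10th {p10:.0f} s, median {p50:.0f} s, 90th {p90:.0f} s")
    ```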

  11. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    SciTech Connect (OSTI)

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-23

    Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors.

    To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s-1) and errors in the vertical velocity measurement ...

  12. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-23

    Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s-1) and errors in the vertical velocity measurement exceed the actual ...
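
    Both records above analyze the Doppler beam swinging retrieval, which is compact enough to state directly. The sketch below is a generic four-beam DBS inversion with an illustrative tilt angle, not the authors' code: it is exact for horizontally homogeneous flow and becomes biased when the beams sample different air, as in a wake.

    ```python
    # Generic 4-beam DBS inversion of LOS velocities into (u, v, w).
    import numpy as np

    def dbs_winds(v_north, v_east, v_south, v_west, tilt_deg=28.0):
        """Invert LOS velocities from beams tilted off vertical by tilt_deg."""
        t = np.radians(tilt_deg)
        u = (v_east - v_west) / (2.0 * np.sin(t))     # zonal component
        v = (v_north - v_south) / (2.0 * np.sin(t))   # meridional component
        w = (v_north + v_south + v_east + v_west) / (4.0 * np.cos(t))
        return u, v, w

    # homogeneous 6.5 m/s westerly flow is recovered exactly ...
    s = 6.5 * np.sin(np.radians(28.0))
    print(dbs_winds(v_north=0.0, v_east=s, v_south=0.0, v_west=-s))
    # ... but in a wake each beam sees different air, biasing u, v, and w.
    ```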

  13. Scheme for precise correction of orbit variation caused by dipole error field of insertion device

    SciTech Connect (OSTI)

    Nakatani, T.; Agui, A.; Aoyagi, H.; Matsushita, T.; Takao, M.; Takeuchi, M.; Yoshigoe, A.; Tanaka, H.

    2005-05-15

    We developed a scheme for precisely correcting the orbit variation caused by a dipole error field of an insertion device (ID) in a storage ring and investigated its performance. The key point for achieving the precise correction is to extract the variation of the beam orbit caused by the change of the ID error field from the observed variation. We periodically change parameters such as the gap and phase of the specified ID with a mirror-symmetric pattern over the measurement period to modulate the variation. The orbit variation is measured using conventional wide-frequency-band detectors and then the induced variation is extracted precisely through averaging and filtering procedures. Furthermore, the mirror-symmetric pattern enables us to independently extract the orbit variations caused by a static error field and by a dynamic one, e.g., an error field induced by the dynamical change of the ID gap or phase parameter. We built a time synchronization measurement system with a sampling rate of 100 Hz and applied the scheme to the correction of the orbit variation caused by the error field of an APPLE-2-type undulator installed in the SPring-8 storage ring. The result shows that the developed scheme markedly improves the correction performance and suppresses the orbit variation caused by the ID error field down to the order of submicron. This scheme is applicable not only to the correction of the orbit variation caused by a special ID, the gap or phase of which is periodically changed during an experiment, but also to the correction of the orbit variation caused by a conventional ID which is used with a fixed gap and phase.
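
    The extraction step rests on synchronous averaging over many repetitions of the mirror-symmetric pattern. The toy sketch below illustrates only that statistical idea, with an invented gap pattern and orbit response (it is not the SPring-8 system): averaging N cycles suppresses uncorrelated noise by roughly sqrt(N) while the ID-induced variation survives.

    ```python
    # Synchronous averaging over repeated mirror-symmetric gap cycles.
    import numpy as np

    rng = np.random.default_rng(0)
    gap = np.concatenate([np.linspace(10, 30, 50),
                          np.linspace(30, 10, 50)])   # mm, mirror-symmetric
    response = 0.5 / gap                # toy ID-induced orbit shift (microns)

    cycles = 200
    orbit = np.array([response + rng.normal(0.0, 0.05, gap.size)
                      for _ in range(cycles)])        # noisy BPM samples
    extracted = orbit.mean(axis=0)      # average synchronously over cycles

    print(f"rms residual: {np.std(extracted - response):.4f} um "
          f"(single-shot noise was 0.05 um)")
    ```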

  14. Accuracy of the European solar water heater test procedure. Part 1: Measurement errors and parameter estimates

    SciTech Connect (OSTI)

    Rabl, A.; Leide, B.; Carvalho, M.J.; Collares-Pereira, M.; Bourges, B.

    1991-01-01

    The Collector and System Testing Group (CSTG) of the European Community has developed a procedure for testing the performance of solar water heaters. This procedure treats a solar water heater as a black box with input-output parameters that are determined by all-day tests. In the present study the authors carry out a systematic analysis of the accuracy of this procedure, in order to answer the question: what tolerances should one impose for the measurements, and how many days of testing should one demand under what meteorological conditions, in order to be able to guarantee a specified maximum error for the long term performance? The methodology is applicable to other test procedures as well. The present paper (Part 1) examines the measurement tolerances of the current version of the procedure and derives a priori estimates of the errors of the parameters; these errors are then compared with the regression results of the Round Robin test series. The companion paper (Part 2) evaluates the consequences for the accuracy of the long term performance prediction. The authors conclude that the CSTG test procedure makes it possible to predict the long term performance with standard errors around 5% for sunny climates (10% for cloudy climates). The apparent precision of individual test sequences is deceptive because of large systematic discrepancies between different sequences. Better results could be obtained by imposing tighter control on the constancy of the cold water supply temperature and on the environment of the test, the latter by enforcing the recommendation for the ventilation of the collector.
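
    The a priori parameter errors propagate to the long-term prediction through standard first-order error propagation. The sketch below uses an invented linear input-output model and numbers purely for illustration; it is not the CSTG procedure itself.

    ```python
    # First-order propagation of parameter errors to a performance prediction.
    import numpy as np

    def predict(params, H, dT):
        """Toy daily-output model: Q = a0*H - a1*dT (MJ/day)."""
        a0, a1 = params
        return a0 * H - a1 * dT

    p = np.array([0.55, 0.12])            # fitted input-output parameters
    cov_p = np.diag([0.02**2, 0.01**2])   # a priori parameter error estimates
    H, dT = 18.0, 35.0                    # long-term mean irradiation, temp diff

    grad = np.array([H, -dT])             # dQ/dp for this linear model
    sigma_Q = np.sqrt(grad @ cov_p @ grad)
    Q = predict(p, H, dT)
    print(f"Q = {Q:.2f} +/- {sigma_Q:.2f} MJ/day ({100 * sigma_Q / Q:.1f}%)")
    ```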

  15. Methodologies for Reservoir Characterization Using Fluid Inclusion Gas Chemistry

    SciTech Connect (OSTI)

    Dilley, Lorie M.

    2015-04-13

    The purpose of this project was to: 1) evaluate the relationship between geothermal fluid processes and the compositions of the fluid inclusion gases trapped in the reservoir rocks; and 2) develop methodologies for interpreting fluid inclusion gas data in terms of the chemical, thermal and hydrological properties of geothermal reservoirs. Phase 1 of this project was designed to conduct the following: 1) model the effects of boiling, condensation, conductive cooling and mixing on selected gaseous species, using fluid compositions obtained from geothermal wells; 2) evaluate, using quantitative analyses provided by New Mexico Tech (NMT), how these processes are recorded by fluid inclusions trapped in individual crystals; and 3) determine if the results obtained on individual crystals can be applied to the bulk fluid inclusion analyses determined by Fluid Inclusion Technology (FIT). Our initial studies, however, suggested that numerical modeling of the data would be premature. We observed that the gas compositions, determined on bulk and individual samples, were not the same as those discharged by the geothermal wells. Gases discharged from geothermal wells are CO2-rich and contain low concentrations of light gases (i.e. H2, He, N, Ar, CH4). In contrast many of our samples displayed enrichments in these light gases. Efforts were initiated to evaluate the reasons for the observed gas distributions. As a first step, we examined the potential importance of different reservoir processes using a variety of commonly employed gas ratios (e.g. Giggenbach plots). The second technical target was the development of interpretational methodologies. We have developed methodologies for the interpretation of fluid inclusion gas data, based on the results of Phase 1, geologic interpretation of fluid inclusion data, and integration of the data. These methodologies can be used in conjunction with the relevant geological and hydrological information on the system to ...

  16. Two-stage sampling for acceptance testing

    SciTech Connect (OSTI)

    Atwood, C.L.; Bryan, M.F.

    1992-09-01

    Sometimes a regulatory requirement or a quality-assurance procedure sets an allowed maximum on a confidence limit for a mean. If the sample mean of the measurements is below the allowed maximum, but the confidence limit is above it, a very widespread practice is to increase the sample size and recalculate the confidence bound. The confidence level of this two-stage procedure is rarely found correctly, but instead is typically taken to be the nominal confidence level, found as if the final sample size had been specified in advance. In typical settings, the correct nominal α should be between the desired P(Type I error) and half that value. This note gives tables for the correct α to use, some plots of power curves, and an example of correct two-stage sampling.

  17. Two-stage sampling for acceptance testing

    SciTech Connect (OSTI)

    Atwood, C.L.; Bryan, M.F.

    1992-09-01

    Sometimes a regulatory requirement or a quality-assurance procedure sets an allowed maximum on a confidence limit for a mean. If the sample mean of the measurements is below the allowed maximum, but the confidence limit is above it, a very widespread practice is to increase the sample size and recalculate the confidence bound. The confidence level of this two-stage procedure is rarely found correctly, but instead is typically taken to be the nominal confidence level, found as if the final sample size had been specified in advance. In typical settings, the correct nominal α should be between the desired P(Type I error) and half that value. This note gives tables for the correct α to use, some plots of power curves, and an example of correct two-stage sampling.
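
    The two-stage procedure the two records above analyze is easy to state in code. The sketch below is a minimal illustration with invented data, not the paper's analysis; note that it deliberately reproduces the widespread (and incorrect) practice of reusing the nominal alpha at the second stage, which is exactly the Type I error inflation the note quantifies.

    ```python
    # Two-stage acceptance test on an upper confidence limit for the mean.
    import numpy as np
    from scipy import stats

    def ucl(x, alpha):
        """One-sided upper confidence limit for the mean of sample x."""
        n, m, s = len(x), np.mean(x), np.std(x, ddof=1)
        return m + stats.t.ppf(1.0 - alpha, df=n - 1) * s / np.sqrt(n)

    def two_stage(x1, draw_more, limit, alpha=0.05):
        """Accept if UCL <= limit; otherwise enlarge the sample once, retest.
        Reusing the same nominal alpha at both stages inflates P(Type I error),
        the pitfall the report corrects with adjusted tables."""
        if ucl(x1, alpha) <= limit:
            return True
        x2 = np.concatenate([x1, draw_more()])
        return ucl(x2, alpha) <= limit

    rng = np.random.default_rng(0)
    first = rng.normal(0.9, 0.2, size=10)
    print(two_stage(first, lambda: rng.normal(0.9, 0.2, size=10), limit=1.0))
    ```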

  18. Fluid sampling tool

    DOE Patents [OSTI]

    Garcia, Anthony R.; Johnston, Roger G.; Martinez, Ronald K.

    2000-01-01

    A fluid-sampling tool for obtaining a fluid sample from a container. When used in combination with a rotatable drill, the tool bores a hole into a container wall, withdraws a fluid sample from the container, and seals the borehole. The tool collects fluid sample without exposing the operator or the environment to the fluid or to wall shavings from the container.

  19. September 2004 Water Sampling

    Office of Legacy Management (LM)

    ... Water Sampling Field Activities Verification ... Groundwater Quality Data, Surface Water Quality Data, Equipment Blank Data ...

  20. The Sample Preparation Laboratories | Sample Preparation Laboratories

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    See the tabs above for Laboratory Access and forms you'll need to complete. Equipment and Chemicals tabs detail resources already available on site. Avoid delays! Hazardous materials use may require a written Standard Operating Procedure (SOP) before you work. Check the Chemicals tab for more information. The Sample Preparation Laboratories provide wet lab ...

  1. Hydrologic characterization of fractured rocks: An interdisciplinary methodology

    SciTech Connect (OSTI)

    Long, J.C.S.; Majer, E.L.; Martel, S.J.; Karasaki, K.; Peterson, J.E. Jr.; Davey, A.; Hestir, K.

    1990-11-01

    The characterization of fractured rock is a critical problem in the development of nuclear waste repositories in geologic media. A good methodology for characterizing these systems should be focused on the large important features first and concentrate on building numerical models which can reproduce the observed hydrologic behavior of the fracture system. In many rocks, fracture zones dominate the behavior. These can be described using the tools of geology and geomechanics in order to understand what kind of features might be important hydrologically and to qualitatively describe the way flow might occur in the rock. Geophysics can then be employed to locate these features between boreholes. Then well testing can be used to see if the identified features are in fact important. Given this information, a conceptual model of the system can be developed which honors the geologic description, the tomographic data and the evidence of high permeability. Such a model can then be modified through an inverse process, such as simulated annealing, until it reproduces the cross-hole well test behavior which has been observed in situ. Other possible inversion techniques might take advantage of self similar structure. Once a model is constructed, we need to see how well the model makes predictions. We can use a cross-validation technique which sequentially puts aside parts of the data and uses the model to predict that part in order to calculate the prediction error. This approach combines many types of information in a methodology which can be modified to fit a particular field site. 114 refs., 81 figs., 7 tabs.

  2. Critical infrastructure systems of systems assessment methodology.

    SciTech Connect (OSTI)

    Sholander, Peter E.; Darby, John L.; Phelan, James M.; Smith, Bryan; Wyss, Gregory Dane; Walter, Andrew; Varnado, G. Bruce; Depoy, Jennifer Mae

    2006-10-01

    Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies that separately consider physical security and cyber security. This research has developed a risk assessment methodology that explicitly accounts for both physical and cyber security, while preserving the traditional security paradigm of detect, delay, and respond. This methodology also accounts for the condition that a facility may be able to recover from or mitigate the impact of a successful attack before serious consequences occur. The methodology uses evidence-based techniques (which are a generalization of probability theory) to evaluate the security posture of the cyber protection systems. Cyber threats are compared against cyber security posture using a category-based approach nested within a path-based analysis to determine the most vulnerable cyber attack path. The methodology summarizes the impact of a blended cyber/physical adversary attack in a conditional risk estimate where the consequence term is scaled by a ''willingness to pay'' avoidance approach.

  3. Error localization in RHIC by fitting difference orbits

    SciTech Connect (OSTI)

    Liu C.; Minty, M.; Ptitsyn, V.

    2012-05-20

    The presence of realistic errors in an accelerator, or in the model used to describe the accelerator, is such that a measurement of the beam trajectory may deviate from prediction. Comparison of measurements to model can be used to detect such errors. To do so, the initial conditions (phase space parameters at any point) must be determined, which can be achieved by fitting the difference orbit compared to model prediction using only a few beam position measurements. Using these initial conditions, the fitted orbit can be propagated along the beam line based on the optics model. Measurement and model will agree up to the point of an error. The error source can be better localized by additionally fitting the difference orbit using downstream BPMs and back-propagating the solution. If one dominating error source exists in the machine, the fitted orbit will deviate from the difference orbit at the same point.
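
    The fit-and-propagate step can be illustrated with 2x2 transverse transfer matrices. The toy optics below are invented, not the RHIC model: the initial state is fit by least squares to a few BPM readings and then propagated; the location where prediction and measurement start to diverge brackets the error source.

    ```python
    # Least-squares fit of initial conditions to a difference orbit.
    import numpy as np

    def propagate(x0, matrices):
        """Predicted BPM position readings from initial state x0 = [x, x']."""
        return np.array([(M @ x0)[0] for M in matrices])

    def fit_initial(readings, matrices):
        """Fit x0 from measured difference-orbit readings (least squares)."""
        A = np.array([M[0, :] for M in matrices])   # rows map x0 -> position
        x0, *_ = np.linalg.lstsq(A, readings, rcond=None)
        return x0

    # toy transfer matrices (phase advances) to three downstream BPMs
    Ms = [np.array([[np.cos(m), np.sin(m)], [-np.sin(m), np.cos(m)]])
          for m in (0.7, 1.9, 3.1)]
    true_x0 = np.array([1.0e-3, 0.2e-3])            # 1 mm offset, 0.2 mrad kick
    meas = propagate(true_x0, Ms)

    x0_fit = fit_initial(meas, Ms)
    print(x0_fit, np.allclose(propagate(x0_fit, Ms), meas))
    ```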

  4. Rain sampling device

    DOE Patents [OSTI]

    Nelson, D.A.; Tomich, S.D.; Glover, D.W.; Allen, E.V.; Hales, J.M.; Dana, M.T.

    1991-05-14

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of the precipitation from the chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device. 11 figures.

  5. Rain sampling device

    DOE Patents [OSTI]

    Nelson, Danny A.; Tomich, Stanley D.; Glover, Donald W.; Allen, Errol V.; Hales, Jeremy M.; Dana, Marshall T.

    1991-01-01

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of said precipitation from said chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device.

  6. Application of Random Vibration Theory Methodology for Seismic...

    Energy Savers [EERE]

    Application of Random Vibration Theory Methodology for Seismic Soil-Structure Interaction Analysis.

  7. SASSI Methodology-Based Sensitivity Studies for Deeply Embedded...

    Office of Environmental Management (EM)

    SASSI Methodology-Based Sensitivity Studies for Deeply Embedded Structures, Such As Small Modular Reactors (SMRs).

  8. Validation of Hydrogen Exchange Methodology on Molecular Sieves...

    Office of Environmental Management (EM)

    Validation of Hydrogen Exchange Methodology on Molecular Sieves for Tritium Removal from Contaminated Water.

  9. Particle Measurement Methodology: Comparison of On-road and Lab...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Particle Measurement Methodology: Comparison of On-road and Lab Diesel Particle Size Distributions.

  10. Evaluation of the European PMP Methodologies Using Chassis Dynamometer...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Evaluation of the European PMP Methodologies Using Chassis Dynamometer and On-road Testing of Heavy-duty Vehicles.

  11. Modeling of Diesel Exhaust Systems: A methodology to better simulate...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Modeling of Diesel Exhaust Systems: A methodology to better simulate soot reactivity. Discussed ...

  12. Biopower Report Presents Methodology for Assessing the Value...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Biopower Report Presents Methodology for Assessing the Value of Co-Firing Biomass in Pulverized Coal Plants.

  13. Barr Engineering Statement of Methodology Rosemount Wind Turbine...

    Energy Savers [EERE]

    Barr Engineering Statement of Methodology, Rosemount Wind Turbine Simulations by Truescape Visual Reality, DOE/EA-1791 (May 2010).

  14. Seismic hazard methodology for the Central and Eastern United...

    Office of Scientific and Technical Information (OSTI)

    Seismic hazard methodology for the Central and Eastern United States: Volume 1, Part 2, Methodology (Revision 1): Final report.

  15. Seismic hazard methodology for the central and Eastern United...

    Office of Scientific and Technical Information (OSTI)

    Seismic hazard methodology for the central and Eastern United States: Volume 1, Part 1: Theory: Final report. The NRC staff concludes that the SOG/EPRI Seismic Hazard Methodology...

  16. A Proposed Methodology to Determine the Leverage Impacts of Technology...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    A Proposed Methodology to Determine the Leverage Impacts of Technology Deployment Programs (2008).

  17. Science-based MEMS reliability methodology. (Conference) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    Science-based MEMS reliability methodology. No abstract prepared. Authors: Walraven, Jeremy ...

  18. VERA Core Simulator Methodology for PWR Cycle Depletion (Conference...

    Office of Scientific and Technical Information (OSTI)

    VERA Core Simulator Methodology for PWR Cycle Depletion. Authors: Kochunas, ...

  19. On the UQ methodology development for storage applications. ...

    Office of Scientific and Technical Information (OSTI)

    On the UQ methodology development for storage applications. Abstract not ...

  20. Prototype integration of the joint munitions assessment and planning model with the OSD threat methodology

    SciTech Connect (OSTI)

    Lynn, R.Y.S.; Bolmarcich, J.J.

    1994-06-01

    The purpose of this Memorandum is to propose a prototype procedure which the Office of Munitions might employ to exercise, in a supportive joint fashion, two of its High Level Conventional Munitions Models, namely, the OSD Threat Methodology and the Joint Munitions Assessment and Planning (JMAP) model. The joint application of JMAP and the OSD Threat Methodology provides a tool to optimize munitions stockpiles. The remainder of this Memorandum comprises five parts. The first is a description of the structure and use of the OSD Threat Methodology. The second is a description of JMAP and its use. The third discusses the concept of the joint application of JMAP and OSD Threat Methodology. The fourth displays sample output of the joint application. The fifth is a summary and epilogue. Finally, three appendices contain details of the formulation, data, and computer code.

  1. Systematic Comparison of Operating Reserve Methodologies: Preprint

    SciTech Connect (OSTI)

    Ibanez, E.; Krad, I.; Ela, E.

    2014-04-01

    Operating reserve requirements are a key component of modern power systems, and they contribute to maintaining reliable operations with minimum economic impact. No universal method exists for determining reserve requirements, so a thorough study and performance comparison of the existing methodologies is needed. Increasing penetrations of variable generation (VG) on electric power systems are poised to increase system uncertainty and variability, and thus the need for additional reserve. This paper presents background information on operating reserve and its relationship to VG. A consistent comparison of three methodologies to calculate regulating and flexibility reserve in systems with VG is performed.

  2. September 2004 Water Sampling

    Office of Legacy Management (LM)

    ... 100, 17B, 1A, 72, and 81 were classified as Category II. The sample results were qualified with a "Q" flag, indicating the data are qualitative because of the sampling technique. ...

  3. Water and Sediment Sampling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    WIPP Water Sampling Results (Location; Sample Date; results): ... Blank, 7/22/2014, Below MDC, Below MDC; Tut Tank, 3/13/2014, Below MDC, Below MDC; Fresh Water Tank, 3/12/2014, Below MDC, Below MDC; Hill ...

  4. September 2004 Water Sampling

    Office of Legacy Management (LM)

    ... the applicable MDL. Inductively Coupled Plasma Interference Check Sample Analysis ... and background correction factors for all inductively coupled plasma instruments. ...

  5. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling Field Activities Verification ... Groundwater Quality Data Static Water Level Data Time-Concentration Graphs ...

  6. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling Field Activities Verification ... Data Durango Processing Site Surface Water Quality Data Equipment Blank Data Static ...

  7. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling Field Activities Verification ... Groundwater Quality Data Surface Water Quality Data Natural Gas Analysis Data ...

  8. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling Field Activities Verification ... Groundwater Quality Data Static Water Level Data Hydrographs Time-Concentration ...

  9. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling Field Activities Verification ... Groundwater Quality Data Static Water Level Data Hydrograph Time-Concentration ...

  10. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling Field Activities Verification ... Groundwater Quality Data Surface Water Quality Data Time-Concentration Graph ...

  11. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling Field Activities Verification ... Quality Data Equipment Blank Data Static Water Level Data Time-Concentration Graphs ...

  12. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling Field Activities Verification ... Groundwater Quality Data Static Water Level Data Time-Concentration Graphs ...

  13. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling Field Activities Verification ... Groundwater Quality Data Surface Water Quality Data Static Water Level Data ...

  14. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling Field Activities Verification ... Groundwater Quality Data Surface Water Quality Data Time-Concentration Graphs ...

  15. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling Field Activities Verification ... Groundwater Quality Data Surface Water Quality Data Equipment Blank Data Static ...

  16. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling Field Activities Verification ... Groundwater Quality Data Surface Water Quality Data Equipment Blank Data Static ...

  17. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling at the Ambrosia Lake, New Mexico, Disposal Site, February 2015 (LMS/AMB/S01114, RIN 14116607). DVP-November 2014, Ambrosia Lake, New Mexico. Contents: Sampling Event Summary; Ambrosia Lake, NM, Disposal Site Planned Sampling Map; Data ...

  18. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Sampling at the Ambrosia Lake, New Mexico, Disposal Site, March 2016 (LMS/AMB/S01215, RIN 15117494). DVP-December 2015, Ambrosia Lake, New Mexico. Contents: Sampling Event Summary; Ambrosia Lake, NM, Disposal Site Planned Sampling Map; Data Assessment ...

  19. September 2004 Water Sampling

    Office of Legacy Management (LM)

    October 2013 Groundwater Sampling at the Bluewater, New Mexico, Disposal Site, December 2013 (LMS/BLU/S00813, RIN 13085537 and 13095651). DVP-August and October 2013, Bluewater, New Mexico. Contents: Sampling Event Summary; Private Wells Sampled August 2013 and October 2013, Bluewater, NM, Disposal Site ...

  20. September 2004 Water Sampling

    Office of Legacy Management (LM)

    ... and Surface Water Sampling at the Monument Valley, Arizona, Processing Site, February 2015 (LMS/MON/S01214, RIN 14126645). DVP-December 2014, Monument Valley, Arizona. Contents: Sampling Event Summary; Monument Valley, Arizona, Disposal Site Sample Location Map ...

  1. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Alternate Water Supply System Sampling at the Riverton, Wyoming, Processing Site, May 2014 (LMS/RVT/S00314, RIN 14035986). DVP-March 2014, Riverton, Wyoming. Contents: Sampling Event Summary; Riverton, WY, Processing Site, Sample Location Map; Data ...

  2. September 2004 Water Sampling

    Office of Legacy Management (LM)

    February 2015 Groundwater and Surface Water Sampling at the Grand Junction, Colorado, Site, April 2015 (LMS/GJO/S00215, RIN 15026795). DVP-February 2015, Grand Junction, Colorado, Site. Contents: Sampling Event Summary; Grand Junction, Colorado, Site Sample Location Map ...

  3. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Sampling at the Grand Junction, Colorado, Disposal Site, November 2013 (LMS/GRJ/S00813, RIN 13075515). DVP-August 2013, Grand Junction, Colorado. Contents: Sampling Event Summary; Grand Junction, Colorado, Disposal Site Sample Location Map; Data Assessment ...

  4. September 2004 Water Sampling

    Office of Legacy Management (LM)

    ... Old and New Rifle, Colorado, Processing Sites, August 2013 (LMS/RFN/RFO/S00613, RIN 13065380). DVP-June 2013, Rifle, Colorado. Contents: Sampling Event Summary; Sample Location Map, New Rifle, Colorado, Processing Site; Sample Location Map, Old Rifle, ...

  5. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Groundwater and Surface Water Sampling at the Slick Rock East and West, Colorado, Processing Sites, November 2013 (LMS/SRE/SRW/S0913, RIN 13095593). DVP-September 2013, Slick Rock, Colorado. Contents: Sampling Event Summary; Slick Rock East and West, Colorado, Processing Sites, Sample Location Map ...

  6. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    SciTech Connect (OSTI)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured by using a digital camera to photograph the reflected image of the absorber in the collector. Previous work derived reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location, so the accuracy of the slope error measurement depended on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors, so measuring the combined effect provides both a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required make this technique well suited to field measurements.
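
    To see why the combined error is the quantity of interest, a toy transverse-plane ray trace suffices: a slope error s deviates the reflected ray by 2s (law of reflection), so the miss distance at the absorber mixes slope and alignment contributions. The sketch below is illustrative geometry, not NREL's measurement algorithm, and its trough dimensions are assumed, not taken from the preprint:

```python
# Monte Carlo estimate of a trough's intercept factor from a combined
# slope/alignment error budget (illustrative geometry only).
import numpy as np

rng = np.random.default_rng(1)

focal_len = 1.71          # m, focal length (assumed, LS-3-class trough)
absorber_r = 0.035        # m, absorber tube radius (assumed)
slope_sigma = 3e-3        # rad, RMS combined slope error (assumed)
align_sigma = 2e-3        # m, RMS absorber transverse misalignment (assumed)
aperture = 5.77           # m, aperture width (assumed)

n = 100_000
x = rng.uniform(-aperture / 2, aperture / 2, n)   # ray hit point on mirror
path = np.hypot(x, focal_len)                     # approximate mirror-to-focus distance
# A slope error of s deviates the reflected ray by 2*s, displacing it
# ~2*s*path at the absorber; absorber misalignment adds directly.
miss = 2 * rng.normal(0, slope_sigma, n) * path + rng.normal(0, align_sigma, n)
intercept_factor = np.mean(np.abs(miss) < absorber_r)
print(f"intercept factor ~ {intercept_factor:.3f}")
```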

  7. Balancing aggregation and smoothing errors in inverse models

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Turner, A. J.; Jacob, D. J.

    2015-01-13

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
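
    The grid-coarsening trade-off in method (1) is easy to demonstrate: as the number of aggregated elements k grows, the aggregation residual shrinks, while a larger k is harder for the observations to constrain. A minimal sketch on a synthetic one-dimensional state vector (this uses a simple RMS residual, not the paper's error metrics):

```python
# Demonstrates grid coarsening: reconstruct a native-resolution state
# vector from k block means and measure the aggregation residual.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-D "native resolution" state vector (e.g., an emissions field)
n = 512
grid = np.linspace(0, 1, n)
state = np.sin(6 * np.pi * grid) + 0.3 * rng.normal(size=n)

def coarsen_residual(state, k):
    """RMS aggregation residual when merging adjacent cells into k blocks."""
    blocks = np.array_split(state, k)
    recon = np.concatenate([np.full(b.size, b.mean()) for b in blocks])
    return np.sqrt(np.mean((state - recon) ** 2))

for k in (8, 32, 128, 512):
    print(k, round(coarsen_residual(state, k), 4))  # residual -> 0 as k -> n
```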

  9. Aerosol sampling system

    DOE Patents [OSTI]

    Masquelier, Donald A.

    2004-02-10

    A system for sampling air and collecting particulate of a predetermined particle size range. A low pass section has an opening of a preselected size for gathering the air but excluding particles larger than the sample particles. An impactor section is connected to the low pass section and separates the air flow into a bypass air flow that does not contain the sample particles and a product air flow that does contain the sample particles. A wetted-wall cyclone collector, connected to the impactor section, receives the product air flow and traps the sample particles in a liquid.
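
    The impactor stage's behavior is governed by standard inertial-impaction theory: particles whose Stokes number exceeds a critical value (roughly 0.24 for a round jet) cross the gas streamlines and stay in the product flow. A back-of-envelope sketch of the resulting cut-point diameter d50 follows; the formula is textbook aerosol physics, and all dimensions are assumed rather than taken from the patent:

```python
# Cut-point diameter of an inertial impactor stage (illustrative values).
import math

def impactor_d50(jet_diameter, jet_velocity, stk50=0.24,
                 viscosity=1.81e-5, particle_density=1000.0, slip=1.0):
    """Cut-point diameter (m) where the Stokes number equals stk50.

    Stk = rho_p * d**2 * U * Cc / (9 * mu * W); solve for d at Stk = stk50.
    stk50=0.24 is a typical round-jet value; the Cunningham slip correction
    Cc is taken as 1 for micron-size particles.
    """
    return math.sqrt(9 * viscosity * jet_diameter * stk50
                     / (particle_density * jet_velocity * slip))

# Example: a 1 mm round jet at 10 m/s keeps roughly >2 micrometer particles
# in the product flow.
print(f"d50 ~ {impactor_d50(1e-3, 10.0) * 1e6:.1f} um")
```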

  10. Link error from craype/2.5.0

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Link error from craype/2.5.0. January 13, 2016, by Woo-Sun Yang. If you build a code using a file called 'configure' with craype/2.5.0, the Cray build tools assume that you want the 'native' link mode (e.g., gcc defaults to dynamic linking) and add '-Wl,-rpath=/opt/intel/composer_xe_2015/compiler/lib/intel64 -lintlc'. This causes a link error: /usr/bin/ld: cannot find -lintlc. A temporary workaround is to swap the default craype (2.5.0) with an older or newer