National Library of Energy BETA

Sample records for methodology sampling error

  1. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    SciTech Connect (OSTI)

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil's characteristics. Most often, spatial variability in the soil's fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil's fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted ...
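
The spatial interpolation step described above can be sketched with inverse-distance weighting, one common statistical choice (the abstract does not name the method it used); the sample coordinates and fertility values below are hypothetical:

```python
# Sketch: inverse-distance-weighted (IDW) interpolation of point soil samples,
# and how a lab measurement error at one sample shifts the interpolated value.
import math

def idw(samples, x, y, power=2.0):
    """Interpolate fertility at (x, y) from (xi, yi, value) point samples."""
    num = den = 0.0
    for sx, sy, v in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return v  # exactly on a sample point
        w = d ** -power
        num += w * v
        den += w
    return num / den

# Hypothetical grid samples: soil P (ppm) at four corners of a 100 m cell.
samples = [(0, 0, 20.0), (100, 0, 30.0), (0, 100, 25.0), (100, 100, 35.0)]
center = idw(samples, 50, 50)

# A +5 ppm lab error on one sample shifts the interpolated center value.
noisy = [(0, 0, 25.0)] + samples[1:]
shift = idw(noisy, 50, 50) - center
```

Here a 5 ppm error on one of four equidistant samples moves the interpolated value by 1.25 ppm, illustrating how measurement error propagates into the recipe inputs.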

  2. Analysis of Cloud Variability and Sampling Errors in Surface and Satellite Measurements

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Analysis of Cloud Variability and Sampling Errors in Surface and Satellite Measurements Z. Li, M. C. Cribb, and F.-L. Chang Earth System Science Interdisciplinary Center University of Maryland College Park, Maryland A. P. Trishchenko and Y. Luo Canada Centre for Remote Sensing Ottawa, Ontario, Canada Introduction Radiation measurements have been widely employed for evaluating cloud parameterization schemes and model simulation results. As the most comprehensive program aiming to improve cloud

  3. Real-time quadrupole mass spectrometer analysis of gas in borehole fluid samples acquired using the U-Tube sampling methodology

    SciTech Connect (OSTI)

    Freifeld, Barry M.; Trautz, Robert C.

    2006-01-11

    Sampling of fluids in deep boreholes is challenging because of the necessity to minimize external contamination and maintain sample integrity during recovery. The U-tube sampling methodology was developed to collect large-volume, multiphase samples at in situ pressures. As a permanent or semi-permanent installation, the U-tube can be used for rapidly acquiring multiple samples or it may be installed for long-term monitoring applications. The U-tube was first deployed in Liberty County, TX to monitor crosswell CO2 injection as part of the Frio CO2 sequestration experiment. Analysis of gases (dissolved or separate phase) was performed in the field using a quadrupole mass spectrometer, which served as the basis for determining the arrival of the CO2 plume. The presence of oxygen and argon in elevated concentrations, along with reduced methane concentration, indicates sample alteration caused by the introduction of surface fluids during borehole completion. Despite producing the well to eliminate non-native fluids, measurements demonstrate that contamination persisted until the immiscible CO2 injection swept formation fluid into the observation wellbore.

  4. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    SciTech Connect (OSTI)

    Edjabou, Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn; Petersen, Claus; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2015-02-15

    Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in the literature. This limits both the comparability and the applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single- and multi-family house areas). In total 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level tiered approach, facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at “Level III”, i.e. detailed, while the two others were sorted only at “Level I”). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single ...
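
The composition figures quoted above (e.g., 42 ± 5% food waste) are means with confidence intervals over sub-areas; a minimal sketch of that kind of summary, using hypothetical per-sub-area mass fractions rather than the study's data:

```python
# Sketch: mean wet-basis mass fraction with an approximate 95% confidence
# interval across sub-areas. The ten per-sub-area fractions are hypothetical.
import statistics

food_fraction = [0.40, 0.45, 0.38, 0.44, 0.41, 0.46, 0.39, 0.43, 0.42, 0.40]
mean = statistics.mean(food_fraction)
sem = statistics.stdev(food_fraction) / len(food_fraction) ** 0.5
ci95 = 1.96 * sem  # normal approximation; a t-quantile is better for n = 10
```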

  5. Errors of Nonobservation

    U.S. Energy Information Administration (EIA) Indexed Site

    Errors of Nonobservation Finally, several potential sources of nonsampling error and bias result from errors of nonobservation. The 1994 MECS represents, in terms of sampling...

  6. DEVELOPMENT OF METHODOLOGY AND FIELD DEPLOYABLE SAMPLING TOOLS FOR SPENT NUCLEAR FUEL INTERROGATION IN LIQUID STORAGE

    SciTech Connect (OSTI)

    Berry, T.; Milliken, C.; Martinez-Rodriguez, M.; Hathcock, D.; Heitkamp, M.

    2012-06-04

    This project developed methodology and field deployable tools (test kits) to analyze the chemical and microbiological condition of the fuel storage medium and determine the oxide thickness on the spent fuel basin materials. The overall objective of this project was to determine the amount of time fuel has spent in a storage basin, in order to assess whether the operation of the reactor and storage basin is consistent with safeguard declarations or expectations. This project developed and validated forensic tools that can be used to predict the age and condition of spent nuclear fuels stored in liquid basins based on key physical, chemical, and microbiological basin characteristics. Key parameters were identified based on a literature review, the parameters were used to design test cells for corrosion analyses, tools were purchased to analyze the key parameters, and these were used to characterize an active spent fuel basin, the Savannah River Site (SRS) L-Area basin. The key parameters identified in the literature review included chloride concentration, conductivity, and total organic carbon level. Focus was also placed on aluminum-based cladding because of its application to weapons production. The literature review was helpful in identifying important parameters, but relationships between these parameters and corrosion rates were not available. Bench-scale test systems were designed, operated, harvested, and analyzed to determine corrosion relationships between water parameters and water conditions, chemistry, and microbiological conditions. The data from the bench-scale system indicated that corrosion rates were dependent on total organic carbon levels and chloride concentrations. The highest corrosion rates were observed in test cells amended with sediment, a large microbial inoculum, and an organic carbon source. A complete characterization test kit was field tested to characterize the SRS L-Area spent fuel basin. The sampling kit consisted of a TOC analyzer, a YSI

  7. Tuning the narrow-band beam position monitor sampling clock to remove the aliasing errors in APS storage ring orbit measurements.

    SciTech Connect (OSTI)

    Sun, X.; Singh, O.

    2007-01-01

    The Advanced Photon Source storage ring employs a real-time orbit correction system to reduce orbit motion up to 50 Hz. This system uses up to 142 narrow-band rf beam position monitors (Nbbpms) in a correction algorithm by sampling at a frequency of 1.53 kHz. Several Nbbpms exhibit aliasing errors in orbit measurements, rendering these Nbbpms unusable in real-time orbit feedback. The aliasing errors are caused by beating effects of the internal sampling clocks with various other processing clocks residing within the BPM electronics. A programmable external clock has been employed to move the aliasing errors out of the active frequency band of the real-time feedback system (RTFB) and rms beam motion calculation. This paper discusses the process of tuning and provides test results.
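
The aliasing mechanism described above is easy to reproduce numerically: a clock-beat component just above the sampling frequency folds back into the 0-50 Hz feedback band, and retuning the sampling clock moves the folded frequency out of the band. The 1560 Hz beat component below is a hypothetical value, not one reported in the paper:

```python
# Sketch: where a spectral component at f_in appears after sampling at f_s.
def alias_frequency(f_in, f_s):
    """Apparent (folded) frequency of f_in when sampled at f_s, both in Hz."""
    f = f_in % f_s
    return min(f, f_s - f)

f_s = 1530.0            # Hz, Nbbpm sampling clock
assumed_beat = 1560.0   # Hz, hypothetical clock-beat component
alias = alias_frequency(assumed_beat, f_s)        # folds into the 0-50 Hz band
retuned = alias_frequency(assumed_beat, 1500.0)   # hypothetical retuned clock
```

With the original clock the beat folds to 30 Hz, inside the active band of the real-time feedback system; with the hypothetical retuned clock it folds to 60 Hz, outside it.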

  8. Error abstractions

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Error and fault abstractions Mattan Erez UT Austin *Who should care about faults and errors? *Ideally, only system cares about masked faults? - Assuming application bugs are not...

  9. Error detection method

    DOE Patents [OSTI]

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware-based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware-based processor to heat to a degree that increases the likelihood that hardware errors will manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
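
A minimal software sketch of the compare-at-end idea (the workload below is a stand-in, not the patented algorithm): run a deterministic, compute-heavy loop that also stresses the processor, digest every intermediate result, and compare digests between runs; a hardware error anywhere in the run changes the digest.

```python
# Sketch: detect a hardware fault by comparing digests of two identical runs.
import hashlib

def stress_run(n=10000):
    """Deterministic floating-point workload; returns a digest of all results."""
    h = hashlib.sha256()
    x = 1.0
    for i in range(1, n + 1):
        x = (x * 1.0000001 + i) % 997.0
        h.update(repr(x).encode())
    return h.hexdigest()

reference = stress_run()
check = stress_run()
hardware_error_detected = (check != reference)
```

On healthy hardware the two digests match; a bit flip in any of the 10,000 intermediate values would alter the final digest even if later iterations mask it numerically.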

  10. Revenue Requirements Modeling System (RRMS) documentation. Volume I. Methodology description and user's guide. Appendix A: model abstract; Appendix B: technical appendix; Appendix C: sample input and output. [Compustat

    SciTech Connect (OSTI)

    Not Available

    1986-03-01

    The Revenue Requirements Modeling System (RRMS) is a utility specific financial modeling system used by the Energy Information Administration (EIA) to evaluate the impact on electric utilities of changes in the regulatory, economic, and tax environments. Included in the RRMS is a power plant life-cycle revenue requirements model designed to assess the comparative economic advantage of alternative generating plant. This report is Volume I of a 2-volume set and provides a methodology description and user's guide, a model abstract and technical appendix, and sample input and output for the models. Volume II provides an operator's manual and a program maintenance guide.

  11. EIA - Sorry! Unexpected Error

    Gasoline and Diesel Fuel Update (EIA)

    Cold Fusion Error Unexpected Error Sorry An error was encountered. This error could be due to scheduled maintenance. Information about the error has been routed to the appropriate ...

  12. EIA - Sorry! Unexpected Error

    U.S. Energy Information Administration (EIA) Indexed Site

    Cold Fusion Error Unexpected Error Sorry An error was encountered. This error could be due to scheduled maintenance. Information about the error has been routed to the appropriate...

  13. Error Page

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    script writes out the header html. We are sorry to report that an error has occurred. Internal identifier for doc type not found. Return to RevCom | Return to Web Portal Need help? Email Technical Support. This site managed by the Office of Management / US Department of Energy Directives | Regulations | Technical Standards | Reference Library | DOE Forms | About Us | Privacy & Security Notice This script breaks up the email address to avoid spam

  14. GUM Analysis for SIMS Isotopic Ratios in BEP0 Graphite Qualification Samples, Round 2

    SciTech Connect (OSTI)

    Gerlach, David C.; Heasler, Patrick G.; Reid, Bruce D.

    2009-01-01

    This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up, and currently consist of various ratios of U, Pu, and Boron impurities in the graphite samples. The GUM calculation is a propagation of error methodology that assigns uncertainties (in the form of standard error and confidence bound) to the final estimates.
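
For a single isotopic ratio r = a/b, the GUM propagation-of-error step reduces, to first order and assuming uncorrelated inputs, to adding relative variances; a sketch with hypothetical count values, not data from the report:

```python
# Sketch: first-order (GUM-style) uncertainty for r = a / b with standard
# uncertainties u_a and u_b: (u_r / r)^2 = (u_a / a)^2 + (u_b / b)^2.
def ratio_uncertainty(a, u_a, b, u_b):
    r = a / b
    u_r = abs(r) * ((u_a / a) ** 2 + (u_b / b) ** 2) ** 0.5
    return r, u_r

# Hypothetical count data for an isotopic ratio measurement.
r, u_r = ratio_uncertainty(720.0, 12.0, 1.0e5, 400.0)
lower, upper = r - 1.96 * u_r, r + 1.96 * u_r  # approximate 95% bound
```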

  15. Trouble Shooting and Error Messages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Check the error code of your application. error obtaining user credentials system Resubmit. Contact consultants for repeated problems. nemgnierrorhandler(): a transaction error ...

  16. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    SciTech Connect (OSTI)

    Shine, E. P.; Poirier, M. R.

    2013-10-29

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and

  17. Integrated fiducial sample mount and software for correlated microscopy

    SciTech Connect (OSTI)

    Timothy R McJunkin; Jill R. Scott; Tammy L. Trowbridge; Karen E. Wright

    2014-02-01

    A sample mount of novel design with integrated fiducials, together with software for assisting operators in easily and efficiently locating points of interest established in previous analytical sessions, is described. The sample holder and software were evaluated with experiments to demonstrate the utility and ease of finding the same points of interest in two different microscopy instruments. Numerical analysis was also performed of the errors expected in determining the same position when those errors are unbiased by a human operator. Based on the results, issues related to achieving reproducibility and best practices for using the sample mount and software were identified. Overall, the sample mount methodology allows data to be efficiently and easily collected on different instruments for the same sample location.
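
The relocation step can be sketched as fitting an affine transform between the two instruments' stage coordinates from shared fiducial marks and then mapping a point of interest across; the least-squares fit is an assumed method and the coordinates below are hypothetical:

```python
# Sketch: recover the affine map between instruments A and B from fiducials.
import numpy as np

def fit_affine(pts_a, pts_b):
    """Least-squares affine map: [x_b, y_b] = [x_a, y_a, 1] @ M, M is 3x2."""
    A = np.hstack([pts_a, np.ones((len(pts_a), 1))])
    M, *_ = np.linalg.lstsq(A, pts_b, rcond=None)
    return M

fid_a = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
fid_b = np.array([[5.0, 5.0], [15.0, 5.0], [5.0, 15.0]])  # pure (+5, +5) shift
M = fit_affine(fid_a, fid_b)
poi_b = np.array([3.0, 4.0, 1.0]) @ M  # point of interest mapped into B
```

With more than three fiducials the same fit averages down stage-positioning error, which is the motivation for building fiducials into the mount itself.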

  18. runtime error message: "readControlMsg: System returned error...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    readControlMsg: System returned error Connection timed out on TCP socket fd" runtime error message: "readControlMsg: System returned error Connection timed out on TCP socket fd"...

  19. Trouble Shooting and Error Messages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Check the error code of your application. error obtaining user credentials system Resubmit. Contact consultants for repeated problems. NERSC and Cray are working on this issue. ...

  20. Trouble Shooting and Error Messages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    not be a problem. Check the error code of your application. error obtaining user credentials system Resubmit. Contact consultants for repeated problems. Last edited: 2015-01-16 ...

  1. Modular error embedding

    DOE Patents [OSTI]

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.

  2. DOE Challenge Home Label Methodology

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    DOE Challenge Home Label Methodology, October 2012. Contents: Background; Methodology; Comfort/Quiet ...

  3. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect (OSTI)

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  4. Error 404 - Document not found

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    govErrors ERROR 404 - URL Not Found We are sorry but the URL that you have requested cannot be found or it is linked to a file that no longer exists. Please check the spelling or...

  5. Trouble Shooting and Error Messages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Trouble Shooting and Error Messages Trouble Shooting and Error Messages Error Messages Message or Symptom Fault Recommendation job hit wallclock time limit user or system Submit job for longer time or start job from last checkpoint and resubmit. If your job hung and produced no output contact consultants. received node failed or halted event for nid xxxx system resubmit the job error with width parameters to aprun user Make sure #PBS -l mppwidth value matches aprun -n value new values for

  6. Register file soft error recovery

    DOE Patents [OSTI]

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  7. 2008 ASC Methodology Errata

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    BONNEVILLE POWER ADMINISTRATION'S ERRATA CORRECTIONS TO THE 2008 AVERAGE SYSTEM COST METHODOLOGY September 12, 2008 I. DESCRIPTION OF ERRATA CORRECTIONS A. Attachment A, ASC...

  8. Draft Tiered Rate Methodology

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    For Regional Dialogue Discussion Purposes Only Pre-Decisional Draft Tiered Rates Methodology March 7, 2008 Pre-decisional, Deliberative, For Discussion Purposes Only March 7,...

  9. Confidence limits and their errors

    SciTech Connect (OSTI)

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.

  10. Error studies for SNS Linac. Part 1: Transverse errors

    SciTech Connect (OSTI)

    Crandall, K.R.

    1998-12-31

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  11. Error 404 - Document not found

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    govErrors ERROR 404 - URL Not Found We are sorry but the URL that you have requested cannot be found or it is linked to a file that no longer exists. Please check the spelling or send e-mail to WWW Administrator

  12. Pressure Change Measurement Leak Testing Errors

    SciTech Connect (OSTI)

    Pryor, Jeff M; Walker, William C

    2014-01-01

    A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as being a fast, simple, and easy to apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper intends to discuss some of the more common errors made during the application of a pressure change test and give the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rate with altered formulas specific to those types of tests using the same methodology.
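
One of the common compensation factors in such tests is gas temperature drift: the raw pressure drop overstates (or hides) leakage when the gas cools or warms during the hold. A hedged sketch of a temperature-compensated leak rate for an ideal gas, using P/T = const to normalize pressures; the test values are hypothetical:

```python
# Sketch: temperature-compensated pressure-change leak rate (ideal gas).
def leak_rate(volume_L, p1_kPa, t1_K, p2_kPa, t2_K, hours):
    """Approximate leak rate in kPa*L/hr; absolute pressures and temperatures."""
    p2_corrected = p2_kPa * (t1_K / t2_K)  # normalize final pressure to T1
    return volume_L * (p1_kPa - p2_corrected) / hours

# Hypothetical 24 hr test: pressure fell, but the gas also cooled by 2 K.
q = leak_rate(volume_L=500.0, p1_kPa=201.3, t1_K=295.0,
              p2_kPa=199.0, t2_K=293.0, hours=24.0)
q_raw = 500.0 * (201.3 - 199.0) / 24.0  # uncompensated estimate
```

Here the uncompensated estimate is roughly twice the compensated one, because most of the observed pressure drop came from cooling rather than leakage.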

  13. Trouble Shooting and Error Messages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Trouble Shooting and Error Messages Trouble Shooting and Error Messages Error Messages Message or Symptom Fault Recommendation job hit wallclock time limit user or system Submit job for longer time or start job from last checkpoint and resubmit. If your job hung and produced no output contact consultants. received node failed or halted event for nid xxxx system One of the compute nodes assigned to the job failed. Resubmit the job PtlNIInit failed : PTL_NOT_REGISTERED user The executable is from

  14. error | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    error Sorry, there is no www.netl.doe.gov web page that matches your request. It may be possible that you typed the address incorrectly. Connect to National Energy Technology...

  15. Modeling of Diesel Exhaust Systems: A methodology to better simulate soot reactivity

    Broader source: Energy.gov [DOE]

    Discusses the development of a methodology for creating accurate soot models for soot samples from various origins with minimal characterization.

  16. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
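
PCR fidelity is typically normalized per base and per template doubling; a sketch of that calculation using hypothetical mutation counts, not the paper's data:

```python
# Sketch: per-base, per-doubling PCR error rate,
# ER = mutations / (bases sequenced * template doublings),
# where doublings d = log2(fold amplification).
import math

def pcr_error_rate(mutations, bases_sequenced, fold_amplification):
    doublings = math.log2(fold_amplification)
    return mutations / (bases_sequenced * doublings)

# Hypothetical counts: 1 Mb sequenced after a 2^20-fold amplification.
taq = pcr_error_rate(mutations=120, bases_sequenced=1_000_000,
                     fold_amplification=2 ** 20)
pfu = pcr_error_rate(mutations=10, bases_sequenced=1_000_000,
                     fold_amplification=2 ** 20)
ratio = taq / pfu  # a >10x spread, as reported for Taq vs. high-fidelity enzymes
```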

  17. Regional Shelter Analysis Methodology

    SciTech Connect (OSTI)

    Dillon, Michael B.; Dennison, Deborah; Kane, Jave; Walker, Hoyt; Miller, Paul

    2015-08-01

    The fallout from a nuclear explosion has the potential to injure or kill 100,000 or more people through exposure to external gamma (fallout) radiation. Existing buildings can reduce radiation exposure by placing material between fallout particles and exposed people. Lawrence Livermore National Laboratory was tasked with developing an operationally feasible methodology that could improve fallout casualty estimates. The methodology, called a Regional Shelter Analysis, combines the fallout protection that existing buildings provide civilian populations with the distribution of people in various locations. The Regional Shelter Analysis method allows the consideration of (a) multiple building types and locations within buildings, (b) country specific estimates, (c) population posture (e.g., unwarned vs. minimally warned), and (d) the time of day (e.g., night vs. day). The protection estimates can be combined with fallout predictions (or measurements) to (a) provide a more accurate assessment of exposure and injury and (b) evaluate the effectiveness of various casualty mitigation strategies. This report describes the Regional Shelter Analysis methodology, highlights key operational aspects (including demonstrating that the methodology is compatible with current tools), illustrates how to implement the methodology, and provides suggestions for future work.
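
The core combination step, weighting building protection against population distribution, can be sketched as a population-weighted dose factor; the protection factors and occupancy fractions below are hypothetical illustration values, not numbers from the report:

```python
# Sketch: regional effective fallout protection from a building-occupancy mix.
# Dose reduction per building type is 1/PF; the regional dose factor is the
# population-weighted average of those reductions.
buildings = [
    # (fraction of population, fallout protection factor PF) - hypothetical
    (0.50, 2.0),    # light residential construction
    (0.35, 10.0),   # larger commercial structures
    (0.15, 40.0),   # basements / interiors of heavy construction
]
assert abs(sum(frac for frac, _ in buildings) - 1.0) < 1e-9

regional_dose_factor = sum(frac / pf for frac, pf in buildings)
effective_pf = 1.0 / regional_dose_factor
```

The harmonic-style weighting matters: the regional result is dominated by the least-protected population segment, which is why posture and time of day (who is in which building) change the casualty estimate.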

  18. Methodology of Internal Assessment of Uncertainty and Extension to Neutron Kinetics/Thermal-Hydraulics Coupled Codes

    SciTech Connect (OSTI)

    Petruzzi, A.; D'Auria, F.; Giannotti, W.; Ivanov, K.

    2005-02-15

    The best-estimate calculation results from complex system codes are affected by approximations that are unpredictable without the use of computational tools that account for the various sources of uncertainty. The code with (the capability of) internal assessment of uncertainty (CIAU) has been previously proposed by the University of Pisa to realize the integration between a qualified system code and an uncertainty methodology and to supply proper uncertainty bands each time a nuclear power plant (NPP) transient scenario is calculated. The derivation of the methodology and the results achieved by the use of CIAU are discussed to demonstrate the main features and capabilities of the method. In a joint effort between the University of Pisa and The Pennsylvania State University, the CIAU method has been recently extended to evaluate the uncertainty of coupled three-dimensional neutronics/thermal-hydraulics calculations. The result is CIAU-TN. The feasibility of the approach has been demonstrated, and sample results related to the turbine trip transient in the Peach Bottom NPP are shown. Notwithstanding that the full implementation and use of the procedure requires a database of errors not available at the moment, the results give an idea of the errors expected from the present computational tools.

  19. DOE Systems Engineering Methodology

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Systems Engineering Methodology (SEM), Computer System Retirement Guidelines, Version 3, September 2002. U.S. Department of Energy, Office of the Chief Information Officer. Table of contents: Purpose; Initiation and Distribution

  20. Selecting the best defect reduction methodology

    SciTech Connect (OSTI)

    Hinckley, C.M.; Barkan, P.

    1994-04-01

Defect rates less than 10 parts per million, unimaginable a few years ago, have become the standard of world-class quality. To reduce defects, companies are aggressively implementing various quality methodologies, such as Statistical Quality Control, Motorola's Six Sigma, or Shingo's poka-yoke. Although each quality methodology reduces defects, selection has been based on intuition rather than an understanding of each method's relative effectiveness in a given application. A missing link in developing superior defect reduction strategies has been the lack of a general defect model that clarifies the unique focus of each method. Toward the goal of efficient defect reduction, we have developed an event tree that addresses a broad spectrum of quality factors and two defect sources, namely error and variation. The Quality Control Tree (QCT) predictions are more consistent with production experience than those obtained from the other methodologies considered independently. The QCT demonstrates that world-class defect rates cannot be achieved by focusing on a single defect source or quality control factor, a common weakness of many methodologies. We have shown that the most efficient defect reduction strategy depends on the relative strengths and weaknesses of each organization. The QCT can help each organization identify the most promising defect reduction opportunities for achieving its goals.

  1. New Methodology for Natural Gas Production Estimates

    Reports and Publications (EIA)

    2010-01-01

    A new methodology is implemented with the monthly natural gas production estimates from the EIA-914 survey this month. The estimates, to be released April 29, 2010, include revisions for all of 2009. The fundamental changes in the new process include the timeliness of the historical data used for estimation and the frequency of sample updates, both of which are improved.

  2. Analysis Methodologies | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Systems Analysis » Analysis Methodologies. A spectrum of analysis methodologies is used in combination to provide a sound understanding of hydrogen and fuel cell systems and developing markets: Resource Analysis; Technological Feasibility and Cost Analysis; Environmental Analysis; Delivery Analysis; Infrastructure Development and Financial Analysis; Energy Market Analysis. In general, each methodology builds on previous efforts to quantify the benefits, drawbacks,

  3. Methodologies for Reservoir Characterization Using Fluid Inclusion...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Methodologies for Reservoir Characterization Using Fluid Inclusion Gas Chemistry Methodologies for Reservoir Characterization Using Fluid Inclusion Gas Chemistry Methodologies for ...

  4. Catastrophic photometric redshift errors: Weak-lensing survey requirements

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Bernstein, Gary; Huterer, Dragan

    2010-01-11

We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number Nspec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below statistical errors of weak lensing tomography. While the straightforward estimate of Nspec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in Nspec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the zs – zp distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. In conclusion, the cross-correlation method is therefore unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.

  5. Use of Forward Sensitivity Analysis Method to Improve Code Scaling, Applicability, and Uncertainty (CSAU) Methodology

    SciTech Connect (OSTI)

    Haihua Zhao; Vincent A. Mousseau; Nam T. Dinh

    2010-10-01

The Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Based on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and power uprates of existing LWRs. In spite of these successes, several aspects of CSAU have been criticized: (1) subjective judgment in the PIRT process; (2) high cost, due to heavy reliance on a large experimental database, many expert man-years of work, and very high computational overhead; (3) mixing of numerical errors with other uncertainties; (4) grid dependence, and the use of the same numerical grids for both scaled experiments and real plant applications; and (5) user effects. Although a large amount of effort has gone into improving the CSAU methodology, these issues persist. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the derivative of the physical solution with respect to any constant parameter). When the parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: (1) quantifying numerical errors: new codes that are fully implicit and of higher-order accuracy can run much faster, with numerical errors quantified by FSA; (2) quantitative PIRT (Q-PIRT) to reduce subjective judgment and improve efficiency: treat numerical errors as special sensitivities against other physical uncertainties, and consider only parameters whose uncertainties have large effects on design criteria; (3) greatly reducing the computational cost of uncertainty quantification by (a) choosing optimized time steps and spatial sizes; (b) using gradient information
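The core FSA idea, integrating sensitivity equations alongside the physical solution, can be illustrated on a toy decay problem. This is a hedged sketch under assumed names, not the authors' implementation: for dy/dt = -k*y, the sensitivity s = dy/dk obeys ds/dt = -y - k*s, so one extra ODE per parameter yields the sensitivity directly.

```python
# Sketch: forward sensitivity analysis (FSA) on dy/dt = -k*y, y(0) = y0.
# The sensitivity s = dy/dk satisfies ds/dt = -y - k*s with s(0) = 0,
# and is integrated alongside the physical solution (forward Euler here).
import math

def integrate_with_sensitivity(k, y0, t_end, n_steps):
    dt = t_end / n_steps
    y, s = y0, 0.0
    for _ in range(n_steps):
        dy = -k * y          # physical equation
        ds = -y - k * s      # sensitivity equation (uses the old y)
        y += dt * dy
        s += dt * ds
    return y, s

k, y0 = 0.5, 2.0
y, s = integrate_with_sensitivity(k, y0, t_end=1.0, n_steps=10000)
# Analytic check: y = y0*exp(-k*t), dy/dk = -t*y0*exp(-k*t) at t = 1.
assert abs(y - y0 * math.exp(-k)) < 1e-3
assert abs(s - (-1.0 * y0 * math.exp(-k))) < 1e-3
```

The same pattern scales to system codes: each constant parameter adds one linear auxiliary equation, so sensitivities come out of a single forward run instead of repeated perturbed runs.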

  6. Emergency exercise methodology

    SciTech Connect (OSTI)

    Klimczak, C.A.

    1993-03-01

Competence for proper response to hazardous materials emergencies is enhanced and effectively measured by exercises which test plans and procedures and validate training. Emergency exercises are most effective when realistic criteria are used and a sequence of events is followed. The scenario is developed from pre-determined exercise objectives based on hazard analyses, actual plans and procedures. The scenario should address findings from previous exercises and actual emergencies. Exercise rules establish the extent of play and address contingencies during the exercise. All exercise personnel are assigned roles as players, controllers or evaluators. These participants should receive specialized training in advance. A methodology for writing an emergency exercise plan will be detailed.

  7. Emergency exercise methodology

    SciTech Connect (OSTI)

    Klimczak, C.A.

    1993-01-01

Competence for proper response to hazardous materials emergencies is enhanced and effectively measured by exercises which test plans and procedures and validate training. Emergency exercises are most effective when realistic criteria are used and a sequence of events is followed. The scenario is developed from pre-determined exercise objectives based on hazard analyses, actual plans and procedures. The scenario should address findings from previous exercises and actual emergencies. Exercise rules establish the extent of play and address contingencies during the exercise. All exercise personnel are assigned roles as players, controllers or evaluators. These participants should receive specialized training in advance. A methodology for writing an emergency exercise plan will be detailed.

  8. The role of variation, error, and complexity in manufacturing defects

    SciTech Connect (OSTI)

    Hinckley, C.M.; Barkan, P.

    1994-03-01

    Variation in component properties and dimensions is a widely recognized factor in product defects which can be quantified and controlled by Statistical Process Control methodologies. Our studies have shown, however, that traditional statistical methods are ineffective in characterizing and controlling defects caused by error. The distinction between error and variation becomes increasingly important as the target defect rates approach extremely low values. Motorola data substantiates our thesis that defect rates in the range of several parts per million can only be achieved when traditional methods for controlling variation are combined with methods that specifically focus on eliminating defects due to error. Complexity in the product design, manufacturing processes, or assembly increases the likelihood of defects due to both variation and error. Thus complexity is also a root cause of defects. Until now, the absence of a sound correlation between defects and complexity has obscured the importance of this relationship. We have shown that assembly complexity can be quantified using Design for Assembly (DFA) analysis. High levels of correlation have been found between our complexity measures and defect data covering tens of millions of assembly operations in two widely different industries. The availability of an easily determined measure of complexity, combined with these correlations, permits rapid estimation of the relative defect rates for alternate design concepts. This should prove to be a powerful tool since it can guide design improvement at an early stage when concepts are most readily modified.

  9. Error and uncertainty in Raman thermal conductivity measurements

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  10. Error and uncertainty in Raman thermal conductivity measurements

    SciTech Connect (OSTI)

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  11. Field errors in hybrid insertion devices

    SciTech Connect (OSTI)

    Schlueter, R.D.

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  12. Protections: Sampling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Protection #3: Sampling for known and unexpected contaminants. August 1, 2013. Monitoring stormwater in Los Alamos Canyon. The Environmental Sampling Board, a key piece of the Strategy, ensures that LANL collects relevant and appropriate data to answer questions about the protection of human and environmental health, and to satisfy regulatory requirements. LANL must demonstrate the data are technically justified

  13. Trends in Commercial Buildings--Overview

    U.S. Energy Information Administration (EIA) Indexed Site

    Buildings > Commercial Buildings Energy Consumption Survey Survey Methodology Sampling Error, Standard Errors, and Relative Standard Errors The Commercial Buildings Energy...

  14. Clover: Compiler directed lightweight soft error resilience

    SciTech Connect (OSTI)

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  15. Clover: Compiler directed lightweight soft error resilience

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  16. Approximate error conjugation gradient minimization methods

    DOE Patents [OSTI]

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
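A minimal sketch of the embodiment described above, under illustrative assumptions: model the "rays" as rows of a linear system, select a subset of them, form the approximate error on that subset only, and minimize it with conjugate gradients on the subset's normal equations. The toy problem and all names are assumptions, not the patent's actual implementation.

```python
# Hedged sketch: approximate-error conjugate gradient minimization.
# The "approximate error" uses only a subset of rays (rows of A),
# and CG minimizes ||A_S x - b_S||^2 via A_S^T A_S x = A_S^T b_S.
import random

def cg_normal_equations(A, b, n_iter=50, tol=1e-12):
    """Textbook CG on the normal equations of a row-list matrix A."""
    n, m = len(A[0]), len(A)
    def AtA_mul(v):
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        return [sum(A[i][j] * Av[i] for i in range(m)) for j in range(n)]
    Atb = [sum(A[i][j] * b[i] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    r = Atb[:]                      # residual of the normal equations
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(n_iter):
        Ap = AtA_mul(p)
        alpha = rs / sum(p[j] * Ap[j] for j in range(n))
        x = [x[j] + alpha * p[j] for j in range(n)]
        r = [r[j] - alpha * Ap[j] for j in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[j] + (rs_new / rs) * p[j] for j in range(n)]
        rs = rs_new
    return x

# Toy consistent system: 8 "rays", 3 unknowns, b = A @ x_true.
random.seed(1)
x_true = [1.0, -2.0, 0.5]
A = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
b = [sum(A[i][j] * x_true[j] for j in range(3)) for i in range(8)]

subset = random.sample(range(8), 5)     # the selected subset of rays
x = cg_normal_equations([A[i] for i in subset], [b[i] for i in subset])
assert all(abs(x[j] - x_true[j]) < 1e-6 for j in range(3))
```

Because the toy system is consistent, any full-rank subset recovers the same solution; the point of the subset is that each error evaluation costs a fraction of the full set of rays.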

  17. Impact of Measurement Error on Synchrophasor Applications

    SciTech Connect (OSTI)

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  18. Protections: Sampling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Protection 3: Sampling for known and unexpected contaminants August 1, 2013 Monitoring stormwater in Los Alamos Canyon Monitoring stormwater in Los Alamos Canyon The Environmental ...

  19. Protections: Sampling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and unexpected contaminants August 1, 2013 Monitoring stormwater in Los Alamos Canyon Monitoring stormwater in Los Alamos Canyon The Environmental Sampling Board, a key piece...

  20. Error handling strategies in multiphase inverse modeling

    SciTech Connect (OSTI)

    Finsterle, S.; Zhang, Y.

    2010-12-01

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
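One common mitigation for the non-normal error structures described above is to replace the pure least-squares objective with a robust estimator that down-weights outliers. The following is a generic Huber-type iteratively reweighted sketch for a single location parameter, illustrative only and not iTOUGH2's actual implementation.

```python
# Hedged sketch: robust (Huber-type IRLS) estimation vs. least squares.
# Residuals beyond `delta` get weight delta/|r| instead of 1, so gross
# outliers no longer dominate the estimate.

def huber_location(data, delta=1.0, n_iter=50):
    mu = sum(data) / len(data)          # start from the LS estimate (mean)
    for _ in range(n_iter):
        w = [1.0 if abs(x - mu) <= delta else delta / abs(x - mu)
             for x in data]
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return mu

data = [9.8, 10.1, 10.0, 9.9, 10.2, 50.0]   # one gross systematic outlier
ls = sum(data) / len(data)                   # plain least squares
rob = huber_location(data)
assert abs(ls - 10.0) > 5.0                  # LS pulled far from the bulk
assert abs(rob - 10.0) < 0.5                 # robust estimate stays near 10
```

The same reweighting idea carries over to full inverse problems, where the weights multiply individual residuals inside the objective function.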

  1. Group representations, error bases and quantum codes

    SciTech Connect (OSTI)

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.

  2. Linux Kernel Error Detection and Correction

    Energy Science and Technology Software Center (OSTI)

    2007-04-11

EDAC-utils consists of a library and a set of utilities for retrieving statistics from the Linux Kernel Error Detection and Correction (EDAC) drivers.

  3. SAMPLING SYSTEM

    DOE Patents [OSTI]

    Hannaford, B.A.; Rosenberg, R.; Segaser, C.L.; Terry, C.L.

    1961-01-17

    An apparatus is given for the batch sampling of radioactive liquids such as slurries from a system by remote control, while providing shielding for protection of operating personnel from the harmful effects of radiation.

  4. Sampling box

    DOE Patents [OSTI]

    Phillips, Terrance D.; Johnson, Craig

    2000-01-01

    An air sampling box that uses a slidable filter tray and a removable filter cartridge to allow for the easy replacement of a filter which catches radioactive particles is disclosed.

  5. runtime error message: "readControlMsg: System returned error Connection

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

"readControlMsg: System returned error Connection timed out on TCP socket fd". June 30, 2015. Symptom: User jobs with single or multiple apruns in a batch script may get this runtime error: "readControlMsg: System returned error Connection timed out on TCP socket fd". This problem is intermittent; sometimes resubmitting works. This error

  6. WRAP Module 1 sampling and analysis plan

    SciTech Connect (OSTI)

    Mayancsik, B.A.

    1995-03-24

    This document provides the methodology to sample, screen, and analyze waste generated, processed, or otherwise the responsibility of the Waste Receiving and Processing Module 1 facility. This includes Low-Level Waste, Transuranic Waste, Mixed Waste, and Dangerous Waste.

  7. Model Validation and Testing: The Methodological Foundation of ASHRAE Standard 140; Preprint

    SciTech Connect (OSTI)

    Judkoff, R.; Neymark, J.

    2006-07-01

    Ideally, whole-building energy simulation programs model all aspects of a building that influence energy use and thermal and visual comfort for the occupants. An essential component of the development of such computer simulation models is a rigorous program of validation and testing. This paper describes a methodology to evaluate the accuracy of whole-building energy simulation programs. The methodology is also used to identify and diagnose differences in simulation predictions that may be caused by algorithmic differences, modeling limitations, coding errors, or input errors. The methodology has been adopted by ANSI/ASHRAE Standard 140 (ANSI/ASHRAE 2001, 2004), Method of Test for the Evaluation of Building Energy Analysis Computer Programs. A summary of the method is included in the ASHRAE Handbook of Fundamentals (ASHRAE 2005). This paper describes the ANSI/ASHRAE Standard 140 method of test and its methodological basis. Also discussed are possible future enhancements to Standard 140 and related research recommendations.

  8. Model Validation and Testing: The Methodological Foundation of ASHRAE Standard 140

    SciTech Connect (OSTI)

    Judkoff, R.; Neymark, J.

    2006-01-01

    Ideally, whole-building energy simulation programs model all aspects of a building that influence energy use and thermal and visual comfort for the occupants. An essential component of the development of such computer simulation models is a rigorous program of validation and testing. This paper describes a methodology to evaluate the accuracy of whole-building energy simulation programs. The methodology is also used to identify and diagnose differences in simulation predictions that may be caused by algorithmic differences, modeling limitations, coding errors, or input errors. The methodology has been adopted by ANSI/ASHRAE Standard 140, Method of Test for the Evaluation of Building Energy Analysis Computer Programs (ASHRAE 2001a, 2004). A summary of the method is included in the 2005 ASHRAE Handbook--Fundamentals (ASHRAE 2005). This paper describes the ASHRAE Standard 140 method of test and its methodological basis. Also discussed are possible future enhancements to ASHRAE Standard 140 and related research recommendations.

  9. SAMPLING OSCILLOSCOPE

    DOE Patents [OSTI]

    Sugarman, R.M.

    1960-08-30

An oscilloscope is designed for displaying transient signal waveforms having random time and amplitude distributions. The oscilloscope is a sampling device that selects for display a portion of only those waveforms having a particular range of amplitudes. For this purpose a pulse-height analyzer is provided to screen the pulses. A variable voltage-level shifter and a time-scale ramp-voltage generator take the pulse height relative to the start of the waveform. The variable voltage shifter produces a voltage level raised one step for each sequential signal waveform to be sampled, and this results in an unsmeared record of input signal waveforms. Appropriate delay devices permit each sampled waveform to pass its peak amplitude before the circuit selects it for display.

  10. Sampling apparatus

    DOE Patents [OSTI]

    Gordon, Norman R.; King, Lloyd L.; Jackson, Peter O.; Zulich, Alan W.

    1989-01-01

    A sampling apparatus is provided for sampling substances from solid surfaces. The apparatus includes first and second elongated tubular bodies which telescopically and sealingly join relative to one another. An absorbent pad is mounted to the end of a rod which is slidably received through a passageway in the end of one of the joined bodies. The rod is preferably slidably and rotatably received through the passageway, yet provides a selective fluid tight seal relative thereto. A recess is formed in the rod. When the recess and passageway are positioned to be coincident, fluid is permitted to flow through the passageway and around the rod. The pad is preferably laterally orientable relative to the rod and foldably retractable to within one of the bodies. A solvent is provided for wetting of the pad and solubilizing or suspending the material being sampled from a particular surface.

  11. Sampling apparatus

    DOE Patents [OSTI]

    Gordon, N.R.; King, L.L.; Jackson, P.O.; Zulich, A.W.

    1989-07-18

    A sampling apparatus is provided for sampling substances from solid surfaces. The apparatus includes first and second elongated tubular bodies which telescopically and sealingly join relative to one another. An absorbent pad is mounted to the end of a rod which is slidably received through a passageway in the end of one of the joined bodies. The rod is preferably slidably and rotatably received through the passageway, yet provides a selective fluid tight seal relative thereto. A recess is formed in the rod. When the recess and passageway are positioned to be coincident, fluid is permitted to flow through the passageway and around the rod. The pad is preferably laterally orientable relative to the rod and foldably retractable to within one of the bodies. A solvent is provided for wetting of the pad and solubilizing or suspending the material being sampled from a particular surface. 15 figs.

  12. eGallon-methodology-final

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    traditional gallon of unleaded fuel -- the dominant fuel choice for vehicles in the U.S. eGallon Methodology The eGallon is measured as an "implicit" cost of a gallon of gasoline. ...

  13. Weekly Coal Production Estimation Methodology

    Gasoline and Diesel Fuel Update (EIA)

    Weekly Coal Production Estimation Methodology Step 1 (Estimate total amount of weekly U.S. coal production) U.S. coal production for the current week is estimated using a ratio ...

  14. Verification of unfold error estimates in the unfold operator code

    SciTech Connect (OSTI)

    Fehl, D.L.; Biggs, F.

    1997-01-01

Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
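The verification idea above, comparing a built-in analytic error estimate against a Monte Carlo estimate obtained by refitting many randomly perturbed data sets, can be shown on a toy problem. This sketch uses a simple straight-line fit rather than the UFO code; the data and tolerances are illustrative assumptions.

```python
# Hedged sketch: analytic (error-matrix) vs. Monte Carlo error estimate
# for the slope of a least-squares line fit to Gaussian-perturbed data.
import math, random, statistics

t = [0.0, 1.0, 2.0, 3.0, 4.0]
sigma = 0.05                                   # assumed data imprecision

def fit_slope(y):
    mt = sum(t) / len(t)
    sxx = sum((ti - mt) ** 2 for ti in t)
    my = sum(y) / len(y)
    return sum((ti - mt) * (yi - my) for ti, yi in zip(t, y)) / sxx

# analytic "error matrix" result for the slope's standard deviation
sxx = sum((ti - sum(t) / len(t)) ** 2 for ti in t)
analytic_sd = sigma / math.sqrt(sxx)

# Monte Carlo: refit many data sets drawn with the prescribed Gaussians
rng = random.Random(42)
truth = [1.0 + 0.5 * ti for ti in t]
slopes = [fit_slope([yi + rng.gauss(0, sigma) for yi in truth])
          for _ in range(2000)]
mc_sd = statistics.stdev(slopes)

# the two estimates agree within Monte Carlo resolution
assert abs(mc_sd - analytic_sd) / analytic_sd < 0.1
```

As in the abstract, the Monte Carlo route also works when the analytic error matrix is unavailable, e.g. in underdetermined problems.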

  15. Wind Power Forecasting Error Distributions over Multiple Timescales (Presentation)

    SciTech Connect (OSTI)

    Hodge, B. M.; Milligan, M.

    2011-07-01

    This presentation presents some statistical analysis of wind power forecast errors and error distributions, with examples using ERCOT data.

  16. Error recovery to enable error-free message transfer between nodes of a computer network

    DOE Patents [OSTI]

    Blumrich, Matthias A.; Coteus, Paul W.; Chen, Dong; Gara, Alan; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd; Steinmacher-Burow, Burkhard; Vranas, Pavlos M.

    2016-01-26

    An error-recovery method to enable error-free message transfer between nodes of a computer network. A first node of the network sends a packet to a second node of the network over a link between the nodes, and the first node keeps a copy of the packet on a sending end of the link until the first node receives acknowledgment from the second node that the packet was received without error. The second node tests the packet to determine if the packet is error free. If the packet is not error free, the second node sets a flag to mark the packet as corrupt. The second node returns acknowledgement to the first node specifying whether the packet was received with or without error. When the packet is received with error, the link is returned to a known state and the packet is sent again to the second node.
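The retransmission scheme described in this patent abstract can be sketched in software (a minimal sketch only: a CRC stands in for the patent's error check, and a random byte flip stands in for link noise; the function and parameter names are hypothetical):

```python
import random
import zlib

random.seed(1)

def send_with_retry(payload: bytes, corrupt_prob: float = 0.5, max_tries: int = 100):
    """Sender keeps a copy of the packet until the receiver acknowledges
    error-free receipt; on a bad checksum the packet is flagged corrupt
    and the sender retransmits its copy."""
    packet = payload + zlib.crc32(payload).to_bytes(4, "big")  # data + CRC trailer
    for attempt in range(1, max_tries + 1):
        received = bytearray(packet)
        if random.random() < corrupt_prob:          # simulated link error
            received[0] ^= 0xFF
        data, crc = bytes(received[:-4]), bytes(received[-4:])
        if zlib.crc32(data).to_bytes(4, "big") == crc:
            return data, attempt                     # positive ack: drop the copy
        # negative ack: receiver marked the packet corrupt; resend the kept copy
    raise RuntimeError("link failed after max_tries retransmissions")

data, tries = send_with_retry(b"hello node 2")
print(tries, data)
```

The kept copy is only discarded once an error-free acknowledgment arrives, which is the essential invariant of the method.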

  17. Quantum error-correcting codes and devices

    DOE Patents [OSTI]

    Gottesman, Daniel

    2000-10-03

    A method of forming quantum error-correcting codes by first forming a stabilizer for a Hilbert space. A quantum information processing device can be formed to implement such quantum codes.

  18. Chemical incident economic impact analysis methodology. (Technical...

    Office of Scientific and Technical Information (OSTI)

    Chemical incident economic impact analysis methodology. Citation Details In-Document Search Title: Chemical incident economic impact analysis methodology. You are accessing a ...

  19. Measuring the Impact of Benchmarking & Transparency - Methodologies...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Measuring the Impact of Benchmarking & Transparency - Methodologies and the NYC Example Measuring the Impact of Benchmarking & Transparency - Methodologies and the NYC Example ...

  20. Evaluating operating system vulnerability to memory errors.

    SciTech Connect (OSTI)

    Ferreira, Kurt Brian; Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke; Mueller, Frank; Fiala, David; Brightwell, Ronald Brian

    2012-05-01

    Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than in the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.

  1. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  2. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    SciTech Connect (OSTI)

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  3. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect (OSTI)

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  4. Measurement of laminar burning speeds and Markstein lengths using a novel methodology

    SciTech Connect (OSTI)

    Tahtouh, Toni; Halter, Fabien; Mounaim-Rousselle, Christine [Institut PRISME, Universite d'Orleans, 8 rue Leonard de Vinci-45072, Orleans Cedex 2 (France)

    2009-09-15

    Three different methodologies used for the extraction of laminar information are compared and discussed. Starting from an asymptotic analysis assuming a linear relation between the propagation speed and the stretch acting on the flame front, temporal radius evolutions of spherically expanding laminar flames are postprocessed to obtain laminar burning velocities and Markstein lengths. The first methodology fits the temporal radius evolution with a polynomial function, while the new methodology proposed uses the exact solution of the linear relation linking the flame speed and the stretch as a fit. The last methodology consists of an analytical solution of the problem. To test the different methodologies, experiments were carried out in a stainless steel combustion chamber with methane/air mixtures at atmospheric pressure and ambient temperature. The equivalence ratio was varied from 0.55 to 1.3. The classical shadowgraph technique was used to detect the reaction zone. The new methodology has proven to be the most robust and provides the most accurate results, while the polynomial methodology induces some errors due to the differentiation process. Since the original radii are used directly in the analytical methodology, it is more affected by errors in the experimental radius determination. Finally, laminar burning velocity and Markstein length values determined with the new methodology are compared with results reported in the literature. (author)

  5. Neutron multiplication error in TRU waste measurements

    SciTech Connect (OSTI)

    Veilleux, John [Los Alamos National Laboratory; Stanfield, Sean B [CCP; Wachter, Joe [CCP; Ceo, Bob [CCP

    2009-01-01

    Total Measurement Uncertainty (TMU) in neutron assays of transuranic waste (TRU) are comprised of several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors; measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. 
This report will attempt to better define the error term due to neutron multiplication and arrive at values that are
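The acceptance criterion this abstract describes (measured FGE plus twice the TMU must stay below the container limit) can be illustrated with a hypothetical helper; the function and dictionary names are assumptions for the sketch, not part of any TMU software.

```python
# Container limits from the abstract: 200 FGE for TRU waste in 55-gal drums,
# 325 FGE for boxed waste.
LIMITS = {"drum": 200.0, "box": 325.0}

def tru_waste_accepted(fge: float, tmu: float, container: str) -> bool:
    """Accept the container only if FGE + 2 * TMU is below the limit."""
    return fge + 2.0 * tmu < LIMITS[container]

print(tru_waste_accepted(120.0, 35.0, "drum"))  # 120 + 70 = 190 < 200 -> True
print(tru_waste_accepted(120.0, 45.0, "drum"))  # 120 + 90 = 210 >= 200 -> False
```

This makes concrete why a large multiplication-driven TMU is costly: doubling the uncertainty term can push an otherwise acceptable drum over the limit.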

  6. Superdense coding interleaved with forward error correction

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
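The role of interleaving described above can be shown with a toy block interleaver (an illustrative sketch, not the authors' code): codewords are written as rows and transmitted column-by-column, so a burst of consecutive channel errors is spread across many codewords, leaving each within the reach of its FEC code.

```python
import numpy as np

def interleave(codewords):
    # Write codewords as rows, transmit column-by-column.
    return codewords.T.ravel()

def deinterleave(stream, n_words):
    return stream.reshape(-1, n_words).T

words = np.arange(32).reshape(4, 8) % 2   # 4 codewords of 8 symbols (toy contents)
tx = interleave(words)

tx_err = tx.copy()
tx_err[10:14] ^= 1                        # a burst wiping 4 consecutive symbols

rx = deinterleave(tx_err, n_words=4)
errors_per_word = (rx != words).sum(axis=1)
print(errors_per_word)                    # burst spread: one error per codeword
```

Without interleaving, the same 4-symbol burst would land inside a single codeword and likely exceed its correction capability.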

  7. Laser Phase Errors in Seeded FELs

    SciTech Connect (OSTI)

    Ratner, D.; Fry, A.; Stupakov, G.; White, W. (SLAC)

    2012-03-28

    Harmonic seeding of free electron lasers has attracted significant attention from the promise of transform-limited pulses in the soft X-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.

  8. Energy Efficiency Indicators Methodology Booklet

    SciTech Connect (OSTI)

    Sathaye, Jayant; Price, Lynn; McNeil, Michael; de la rue du Can, Stephane

    2010-05-01

    This Methodology Booklet provides a comprehensive review and methodology guiding principles for constructing energy efficiency indicators, with illustrative examples of application to individual countries. It reviews work done by international agencies and national government in constructing meaningful energy efficiency indicators that help policy makers to assess changes in energy efficiency over time. Building on past OECD experience and best practices, and the knowledge of these countries' institutions, relevant sources of information to construct an energy indicator database are identified. A framework based on levels of hierarchy of indicators -- spanning from aggregate, macro level to disaggregated end-use level metrics -- is presented to help shape the understanding of assessing energy efficiency. In each sector of activity: industry, commercial, residential, agriculture and transport, indicators are presented and recommendations to distinguish the different factors affecting energy use are highlighted. The methodology booklet addresses specifically issues that are relevant to developing indicators where activity is a major factor driving energy demand. A companion spreadsheet tool is available upon request.

  9. Table 1b. Relative Standard Errors for Effective, Occupied, and...

    U.S. Energy Information Administration (EIA) Indexed Site

    b. Relative Standard Errors Table 1b. Relative Standard Errors for Effective, Occupied, and Vacant Square Footage, 1992 Building Characteristics All Buildings (thousand) Total...

  10. Accounting for Model Error in the Calibration of Physical Models...

    Office of Scientific and Technical Information (OSTI)

    Accounting for Model Error in the Calibration of Physical Models. Citation Details In-Document Search Title: Accounting for Model Error in the Calibration of Physical Models. ...

  11. Table 2b. Relative Standard Errors for Electricity Consumption...

    U.S. Energy Information Administration (EIA) Indexed Site

    2b. Relative Standard Errors for Electricity Table 2b. Relative Standard Errors for Electricity Consumption and Electricity Intensities, per Square Foot, Specific to Occupied and...

  12. Error Analysis in Nuclear Density Functional Theory (Journal...

    Office of Scientific and Technical Information (OSTI)

    Error Analysis in Nuclear Density Functional Theory Citation Details In-Document Search Title: Error Analysis in Nuclear Density Functional Theory Authors: Schunck, N ; McDonnell,...

  13. Error Analysis in Nuclear Density Functional Theory (Journal...

    Office of Scientific and Technical Information (OSTI)

    Error Analysis in Nuclear Density Functional Theory Citation Details In-Document Search Title: Error Analysis in Nuclear Density Functional Theory You are accessing a document...

  14. V-235: Cisco Mobility Services Engine Configuration Error Lets...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    5: Cisco Mobility Services Engine Configuration Error Lets Remote Users Login Anonymously V-235: Cisco Mobility Services Engine Configuration Error Lets Remote Users Login ...

  15. Error Estimation for Fault Tolerance in Numerical Integration...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Error Estimation for Fault Tolerance in Numerical Integration Solvers Event Sponsor: ... In numerical integration solvers, approximation error can be estimated at a low cost. We ...

  16. A posteriori error analysis of parameterized linear systems using...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: A posteriori error analysis of parameterized linear systems using spectral methods. Citation Details In-Document Search Title: A posteriori error analysis of ...

  17. Raman Thermometry: Comparing Methods to Minimize Error. (Conference...

    Office of Scientific and Technical Information (OSTI)

    Raman Thermometry: Comparing Methods to Minimize Error. Citation Details In-Document Search Title: Raman Thermometry: Comparing Methods to Minimize Error. Abstract not provided....

  18. Intel C++ compiler error: stl_iterator_base_types.h

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    C++ compiler error: stl_iterator_base_types.h Intel C++ compiler error: stl_iterator_base_types.h December 7, 2015 by Scott French Because the system-supplied version of GCC is...

  19. Error estimates for fission neutron outputs (Conference) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    Error estimates for fission neutron outputs Citation Details In-Document Search Title: Error estimates for fission neutron outputs You are accessing a document from the...

  20. Internal compiler error for function pointer with identically...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Internal compiler error for function pointer with identically named arguments Internal compiler error for function pointer with identically named arguments June 9, 2015 by Scott...

  1. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  2. eGallon Methodology | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    eGallon Methodology eGallon Methodology The average American measures the day-to-day cost of driving by the price of a gallon of gasoline. In other words, as the price of gasoline ...

  3. WIPP Weatherization: Common Errors and Innovative Solutions Presentation

    Broader source: Energy.gov [DOE]

    This presentation contains information on WIPP Weatherization: Common Errors and Innovative Solutions.

  4. Distribution of Wind Power Forecasting Errors from Operational Systems (Presentation)

    SciTech Connect (OSTI)

    Hodge, B. M.; Ela, E.; Milligan, M.

    2011-10-01

    This presentation offers new data and statistical analysis of wind power forecasting errors in operational systems.

  5. Analysis of Solar Two Heliostat Tracking Error Sources

    SciTech Connect (OSTI)

    Jones, S.A.; Stone, K.W.

    1999-01-28

    This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.

  6. Energy Intensity Indicators: Methodology | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Methodology Energy Intensity Indicators: Methodology The files listed below contain methodology documentation and related studies that support the information presented on this website. The files are available to view and/or download as Adobe Acrobat PDF files. 2003. Energy Indicators System: Index Construction Methodology 2004. Changing the Base Year for the Index Boyd GA, and JM Roop. 2004. "A Note on the Fisher Ideal Index Decomposition for Structural Change in Energy Intensity."

  7. Siting Methodologies for Hydrokinetics | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Siting Methodologies for Hydrokinetics Siting Methodologies for Hydrokinetics Report that provides an overview of the federal and state regulatory framework for hydrokinetic projects. siting_handbook_2009.pdf (2.43 MB) More Documents & Publications Siting Methodologies for Hydrokinetics EIS-0488: Final Environmental Impact Statement EIS-0493: Draft Environmental Impact Statement

  8. Errors in response calculations for beams

    SciTech Connect (OSTI)

    Wada, H.; Warburton, G.B.

    1985-05-01

    When the finite element method is used to idealize a structure, its dynamic response can be determined from the governing matrix equation by the normal mode method or by one of the many approximate direct integration methods. In either method the approximate data of the finite element idealization are used, but further assumptions are introduced by the direct integration scheme. It is the purpose of this paper to study these errors for a simple structure. The transient flexural vibrations of a uniform cantilever beam, which is subjected to a transverse force at the free end, are determined by the Laplace transform method. Comparable responses are obtained for a finite element idealization of the beam, using the normal mode and Newmark average acceleration methods; the errors associated with the approximate methods are studied. If accuracy has priority and the quantity of data is small, the normal mode method is recommended; however, if the quantity of data is large, the Newmark method is useful.

  9. Detecting Soft Errors in Stencil based Computations

    SciTech Connect (OSTI)

    Sharma, V.; Gopalakrishnan, G.; Bronevetsky, G.

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
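The detection idea above (an inexpensive regression model whose residuals flag corrupted values) can be sketched for a 1-D, 3-point stencil. This is an illustrative stand-in: SORREL targets real stencil codes, and the signal, threshold rule, and variable names here are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(42)

# Fit a linear model predicting each interior point from its two neighbors;
# unusually large residuals then flag suspect values.
x = np.cumsum(rng.normal(size=2000))               # smooth-ish training signal
A = np.stack([x[:-2], x[2:], np.ones(len(x) - 2)], axis=1)
coef, *_ = np.linalg.lstsq(A, x[1:-1], rcond=None)

def residuals(y):
    pred = np.stack([y[:-2], y[2:], np.ones(len(y) - 2)], axis=1) @ coef
    return np.abs(y[1:-1] - pred)

y = np.cumsum(rng.normal(size=500))
tau = residuals(y).max() * 2.0                     # crude threshold (an assumption)
y[250] += 1e6                                      # inject a simulated soft error
flagged = np.where(residuals(y) > tau)[0] + 1      # +1 maps residual index -> y index
print(flagged)                                     # indices near the injected fault
```

The model is cheap to evaluate (one dot product per point), which is the property that makes this style of detector attractive for low-overhead fault trapping.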

  10. Redundancy and Error Resilience in Boolean Networks

    SciTech Connect (OSTI)

    Peixoto, Tiago P.

    2010-01-29

    We consider the effect of noise in sparse Boolean networks with redundant functions. We show that they always exhibit a nonzero error level, and the dynamics undergoes a phase transition from nonergodicity to ergodicity, as a function of noise, after which the system is no longer capable of preserving a memory of its initial state. We obtain upper bounds on the critical value of noise for networks of different sparsity.

  11. Systematic errors in long baseline oscillation experiments

    SciTech Connect (OSTI)

    Harris, Deborah A. (Fermilab)

    2006-02-01

    This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

  12. An Optimized Autoregressive Forecast Error Generator for Wind and Load Uncertainty Study

    SciTech Connect (OSTI)

    De Mello, Phillip; Lu, Ning; Makarov, Yuri V.

    2011-01-17

    This paper presents a first-order autoregressive algorithm to generate real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast errors. The methodology aims at producing random wind and load forecast time series reflecting the autocorrelation and cross-correlation of historical forecast data sets. The statistical characteristics considered are the means, standard deviations, autocorrelations, and cross-correlations. A stochastic optimization routine is developed to minimize the differences between the statistical characteristics of the generated time series and the targeted ones. An optimal set of parameters is obtained and used to produce the RT, HA, and DA forecasts in due order of succession. This method, although implemented as a first-order autoregressive random forecast error generator, can be extended to higher orders. Results show that the methodology produces random series with desired statistics derived from real data sets provided by the California Independent System Operator (CAISO). The wind and load forecast error generator is currently used in wind integration studies to generate wind and load inputs for stochastic planning processes. Our future studies will focus on reflecting the diurnal and seasonal differences of the wind and load statistics and implementing them in the random forecast generator.
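The core first-order autoregressive generator can be sketched as follows. This is a minimal sketch covering only the standard-deviation and lag-1 autocorrelation targets; the paper's stochastic optimizer additionally matches means and cross-correlations across the RT/HA/DA series, and the function name and targets below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def ar1_forecast_errors(n, mean, std, rho):
    """AR(1) error series with target mean, standard deviation, and
    lag-1 autocorrelation rho."""
    e = np.empty(n)
    e[0] = rng.normal(0.0, std)
    innov_std = std * np.sqrt(1.0 - rho**2)    # keeps stationary variance = std**2
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(0.0, innov_std)
    return e + mean

err = ar1_forecast_errors(50000, mean=0.0, std=100.0, rho=0.8)
lag1 = np.corrcoef(err[:-1], err[1:])[0, 1]
print(err.std(), lag1)                         # close to the targets 100 and 0.8
```

Scaling the innovation variance by (1 - rho**2) is what makes the stationary standard deviation of the series equal the requested value.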

  13. Improving Memory Error Handling Using Linux

    SciTech Connect (OSTI)

    Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.

    2014-07-25

    As supercomputers continue to get faster and more powerful, they will also have more nodes. If nothing is done, the amount of memory in supercomputer clusters will soon grow so large that manually replacing failed memory DIMMs becomes unmanageable. "Improving Memory Error Handling Using Linux" is a process-oriented method that addresses this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. Offlining memory pages simplifies error handling and reduces both the hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers: without memory error handling, it will not be feasible to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals that offlining memory pages works and is relatively simple to use. As more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.

  14. Common Errors and Innovative Solutions Transcript | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Common Errors and Innovative Solutions Transcript Common Errors and Innovative Solutions Transcript An example of case studies, mainly by showing photos of errors and good examples, then discussing the purpose of the home energy professional guidelines and certification. There may be more examples of what not to do only because these were good learning opportunities. common_errors_innovative_solutions.doc (41.5 KB) More Documents & Publications WIPP Weatherization: Common Errors and

  15. Spectral characteristics of background error covariance and multiscale data assimilation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; Vogelmann, Andrew M.

    2016-05-17

    The spatial resolutions of numerical atmospheric and oceanic circulation models have increased steadily over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and is larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.

  16. Application of asymptotic expansions for maximum likelihood estimators errors to gravitational waves from binary mergers: The single interferometer case

    SciTech Connect (OSTI)

    Zanolin, M.; Vitale, S.; Makris, N.

    2010-06-15

    In this paper we apply to gravitational waves (GW) from the inspiral phase of binary systems a recently derived frequentist methodology to calculate analytically the error for a maximum likelihood estimate of physical parameters. We use expansions of the covariance and the bias of a maximum likelihood estimate in terms of inverse powers of the signal-to-noise ratio (SNR), where the square root of the first order in the covariance expansion is the Cramér-Rao lower bound (CRLB). We evaluate the expansions, for the first time, for GW signals in the noise of GW interferometers. The examples are limited to a single, optimally oriented, interferometer. We also compare the error estimates using the first two orders of the expansions with existing numerical Monte Carlo simulations. The first two orders of the covariance allow us to get error predictions closer to what is observed in numerical simulations than the CRLB. The methodology also predicts a necessary SNR to approximate the error with the CRLB and provides new insight on the relationship between waveform properties, SNR, dimension of the parameter space and estimation errors. For example, timing by matched filtering can achieve the CRLB only if the SNR is larger than the kurtosis of the gravitational wave spectrum, and the necessary SNR is much larger if other physical parameters are also unknown.

  17. Simulation Enabled Safeguards Assessment Methodology

    SciTech Connect (OSTI)

    Robert Bean; Trond Bjornard; Thomas Larson

    2007-09-01

    It is expected that nuclear energy will be a significant component of future energy supplies. New facilities, operating under a strengthened international nonproliferation regime, will be needed. There is good reason to believe that virtual engineering, applied to the facility design as well as to the safeguards system design, will reduce total project cost and improve efficiency in the design cycle. Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag-and-drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.

  18. Methodology for flammable gas evaluations

    SciTech Connect (OSTI)

    Hopkins, J.D., Westinghouse Hanford

    1996-06-12

    There are 177 radioactive waste storage tanks at the Hanford Site. The waste generates flammable gases. The waste releases gas continuously, but in some tanks the waste has shown a tendency to trap these flammable gases. When enough gas is trapped in a tank's waste matrix, it may be released in a way that renders part or all of the tank atmosphere flammable for a period of time. Tanks must be evaluated against previously defined criteria to determine whether they can present a flammable gas hazard. This document presents the methodology for evaluating tanks in two areas of concern in the tank headspace: the steady-state flammable gas concentration resulting from continuous release, and the concentration resulting from an episodic gas release.

  19. Simulation enabled safeguards assessment methodology

    SciTech Connect (OSTI)

    Bean, Robert; Bjornard, Trond; Larson, Tom

    2007-07-01

    It is expected that nuclear energy will be a significant component of future energy supplies. New facilities, operating under a strengthened international nonproliferation regime, will be needed. There is good reason to believe that virtual engineering, applied to the facility design as well as to the safeguards system design, will reduce total project cost and improve efficiency in the design cycle. Simulation Enabled Safeguards Assessment MEthodology has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag-and-drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed. (authors)

  20. Verification of unfold error estimates in the UFO code

    SciTech Connect (OSTI)

    Fehl, D.L.; Biggs, F.

    1996-07-01

    Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error), obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation), and 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low-energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
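The comparison described above can be sketched in miniature. The following is a hedged stand-in, not the UFO code itself: a 3-bin linear unfold whose analytic (error matrix) uncertainty is checked against the spread of a Monte Carlo ensemble of 100 randomized data sets with 5% Gaussian imprecision, mirroring the procedure in the abstract. The response matrix and spectrum are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy response matrix (overlapping response functions) and a "true" spectrum.
R = np.array([[1.0, 0.5, 0.1],
              [0.3, 1.0, 0.4],
              [0.1, 0.6, 1.0]])
s_true = np.array([4.0, 2.0, 1.0])
d_true = R @ s_true
sigma = 0.05 * d_true                         # 5% (1-sigma) data imprecision

# Analytic (error matrix) estimate of the unfold covariance.
cov_analytic = np.linalg.inv(R.T @ np.diag(1.0 / sigma**2) @ R)

# Monte Carlo estimate: unfold many randomized data sets and take the spread.
unfolds = []
for _ in range(100):                          # 100 random data sets, as in the study
    d = d_true + rng.normal(0.0, sigma)
    unfolds.append(np.linalg.solve(R, d))     # square system: direct unfold
mc_std = np.std(unfolds, axis=0)

print("error-matrix sigma:", np.sqrt(np.diag(cov_analytic)))
print("Monte Carlo sigma: ", mc_std)
```

With only 100 trials the two estimates agree only to within the statistical resolution of the sample, which is the point the abstract makes about its own comparison.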

  1. Methodology for Estimating Solar Potential on Multiple Building Rooftops for Photovoltaic Systems

    SciTech Connect (OSTI)

    Kodysh, Jeffrey B; Omitaomu, Olufemi A; Bhaduri, Budhendra L; Neish, Bradley S

    2013-01-01

    In this paper, a methodology for estimating solar potential on multiple building rooftops is presented. The objective of this methodology is to estimate the daily or monthly solar radiation potential on individual buildings in a city/region using Light Detection and Ranging (LiDAR) data and a geographic information system (GIS) approach. Conceptually, the methodology is based on the upward-looking hemispherical viewshed algorithm, but applied using an area-based modeling approach. The methodology considers input parameters, such as surface orientation, shadowing effect, elevation, and atmospheric conditions, that influence solar intensity on the earth's surface. The methodology has been implemented for some 212,000 buildings in Knox County, Tennessee, USA. Based on the results obtained, the methodology seems to be adequate for estimating solar radiation on multiple building rooftops. The use of LiDAR data improves the radiation potential estimates in terms of the model predictive error and the spatial pattern of the model outputs. This methodology could help cities/regions interested in sustainable projects to quickly identify buildings with higher potentials for roof-mounted photovoltaic systems.

  2. Spectral Characteristics of Background Error Covariance and Multiscale Data Assimilation: Background Error Covariance and Multiscale Data Assimilation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Li, Zhijin; Cheng, Xiaoping; Gustafson, William I.; Vogelmann, Andrew M.

    2016-05-17

    The spatial resolutions of numerical atmospheric and oceanic circulation models have steadily increased over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
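A background error correlation length scale like those quoted above can be diagnosed from an ensemble of error fields. The sketch below is illustrative only and is not the paper's method: synthetic 1-D "errors" on a 2-km grid, with a Gaussian smoothing scale chosen arbitrarily, from which the e-folding length of the spatial autocorrelation is estimated.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlation_length(ensemble, dx):
    """e-folding scale (in km) of the member-averaged spatial autocorrelation."""
    n = ensemble.shape[1]
    corr = np.array([
        np.mean([np.corrcoef(m[:n - lag], m[lag:])[0, 1] for m in ensemble])
        for lag in range(1, n // 2)
    ])
    below = np.nonzero(corr < np.exp(-1))[0]        # first lag below 1/e
    return (below[0] + 1) * dx if below.size else np.inf

# Synthetic 1-D "background errors" on a 2-km grid: white noise smoothed
# with a Gaussian kernel whose width sets the true correlation scale.
n, dx, members = 256, 2.0, 100
kernel = np.exp(-0.5 * (np.arange(-40, 41) / 10.0) ** 2)
ensemble = np.array([np.convolve(rng.normal(size=n), kernel, mode="same")
                     for _ in range(members)])
print(f"estimated correlation length: {correlation_length(ensemble, dx):.0f} km")
```

Observations at spacings much smaller than this diagnosed scale carry redundant increments, which is why, as the abstract argues, broad background correlations prevent the analysis from constraining the smallest resolved scales.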

  3. Methodology for EIA Weekly Underground Natural Gas Storage Estimates

    Weekly Natural Gas Storage Report (EIA)

    Methodology for EIA Weekly Underground Natural Gas Storage Estimates Latest Update: November 16, 2015 This report consists of the following sections: Survey and Survey Processing - a description of the survey and an overview of the program Sampling - a description of the selection process used to identify companies in the survey Estimation - how the regional estimates are prepared from the collected data Computing the Five-year Averages, Maxima, Minima, and Year-Ago Values for the Weekly Natural

  4. CONTAMINATED SOIL VOLUME ESTIMATE TRACKING METHODOLOGY

    SciTech Connect (OSTI)

    Durham, L.A.; Johnson, R.L.; Rieman, C.; Kenna, T.; Pilon, R.

    2003-02-27

    The U.S. Army Corps of Engineers (USACE) is conducting a cleanup of radiologically contaminated properties under the Formerly Utilized Sites Remedial Action Program (FUSRAP). The largest cost element for most of the FUSRAP sites is the transportation and disposal of contaminated soil. Project managers and engineers need an estimate of the volume of contaminated soil to determine project costs and schedule. Once excavation activities begin and additional remedial action data are collected, the actual quantity of contaminated soil often deviates from the original estimate, resulting in cost and schedule impacts to the project. The project costs and schedule need to be frequently updated by tracking the actual quantities of excavated soil and contaminated soil remaining during the life of a remedial action project. A soil volume estimate tracking methodology was developed to provide a mechanism for project managers and engineers to create better project controls of costs and schedule. For the FUSRAP Linde site, an estimate of the initial volume of in situ soil above the specified cleanup guidelines was calculated on the basis of discrete soil sample data and other relevant data using indicator geostatistical techniques combined with Bayesian analysis. During the remedial action, updated volume estimates of remaining in situ soils requiring excavation were calculated on a periodic basis. In addition to taking into account the volume of soil that had been excavated, the updated volume estimates incorporated both new gamma walkover surveys and discrete sample data collected as part of the remedial action. A civil survey company provided periodic estimates of actual in situ excavated soil volumes. By using the results from the civil survey of actual in situ volumes excavated and the updated estimate of the remaining volume of contaminated soil requiring excavation, the USACE Buffalo District was able to forecast and update project costs and schedule. The soil volume
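The abstract's indicator-geostatistics-plus-Bayesian workflow is far richer than can be shown here, but the core indicator idea can be sketched. The function, sample values, guideline, and site dimensions below are all hypothetical; this is a naive stand-in, not the Linde site method.

```python
import numpy as np

def contaminated_volume_m3(sample_values, guideline, site_area_m2, lift_depth_m):
    """Naive indicator estimate of in situ contaminated soil volume: the
    fraction of discrete samples above the cleanup guideline, scaled by the
    site area and excavation lift depth. A toy stand-in for the indicator
    geostatistics and Bayesian updating actually used."""
    above = np.asarray(sample_values) > guideline
    return above.mean() * site_area_m2 * lift_depth_m

# Hypothetical numbers: 8 discrete samples (pCi/g), a 10 pCi/g guideline,
# a 5,000 m^2 site, and a 0.3 m excavation lift.
samples = [2.1, 14.0, 8.7, 22.5, 9.9, 31.0, 4.2, 11.3]
vol = contaminated_volume_m3(samples, guideline=10.0, site_area_m2=5000.0,
                             lift_depth_m=0.3)
print(f"estimated contaminated volume: {vol:.0f} m^3")
```

In the tracking methodology this estimate would be recomputed periodically, replacing the fixed sample list with the accumulating gamma walkover and discrete sample data and subtracting the surveyed volume already excavated.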

  5. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Salmon, Mississippi, Site, Water Sampling Location Map ... Water Sampling Field Activities Verification ...

  6. September 2004 Water Sampling

    Office of Legacy Management (LM)

    ... Water Sampling Locations at the Rulison, ... Water Sampling Field Activities Verification ...

  7. Error Reduction in Weigh-In-Motion

    Energy Science and Technology Software Center (OSTI)

    2007-09-21

    Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).

  8. Error Reduction for Weigh-In-Motion

    SciTech Connect (OSTI)

    Hively, Lee M; Abercrombie, Robert K; Scudiere, Matthew B; Sheldon, Frederick T

    2009-01-01

    Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).
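The idea of filtering out bounce and rocking oscillations can be illustrated with a toy signal. This is not ORNL's filter; it is a minimal sketch, assuming a constant static weight plus a single sinusoidal bounce mode, that averages over whole oscillation periods so the oscillation integrates to nearly zero. The sample rate, weight, and bounce frequency are invented.

```python
import numpy as np

def filtered_weight(signal, samples_per_period):
    """Average the axle-force signal over a whole number of oscillation
    periods so the bounce/rocking component integrates to ~zero."""
    n = (len(signal) // samples_per_period) * samples_per_period
    return float(np.mean(signal[:n]))

fs = 1000                                      # Hz, assumed sample rate
t = np.arange(0, 1.0, 1.0 / fs)
true_weight = 10000.0                          # lb, hypothetical static axle weight
bounce = 800.0 * np.sin(2 * np.pi * 3.0 * t)   # 3 Hz vehicle bounce
raw = true_weight + bounce

est = filtered_weight(raw, samples_per_period=fs // 3)
print(f"single uncorrected reading error: {abs(raw[50] - true_weight):.0f} lb")
print(f"filtered weight relative error:   {abs(est - true_weight) / true_weight:.6%}")
```

A single instantaneous reading can be off by hundreds of pounds, while the period-averaged estimate lands well inside the 0.1% certifiability target for this idealized signal.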

  9. Waste Package Design Methodology Report

    SciTech Connect (OSTI)

    D.A. Brownson

    2001-09-28

    The objective of this report is to describe the analytical methods and processes used by the Waste Package Design Section to establish the integrity of the various waste package designs, the emplacement pallet, and the drip shield. The scope of this report shall be the methodology used in criticality, risk-informed, shielding, source term, structural, and thermal analyses. The basic features and appropriateness of the methods are illustrated, and the processes are defined whereby input values and assumptions flow through the application of those methods to obtain designs that ensure defense-in-depth as well as satisfy requirements on system performance. Such requirements include those imposed by federal regulation, from both the U.S. Department of Energy (DOE) and U.S. Nuclear Regulatory Commission (NRC), and those imposed by the Yucca Mountain Project to meet repository performance goals. The report is to be used, in part, to describe the waste package design methods and techniques to be used for producing input to the License Application Report.

  10. Seismic Fracture Characterization Methodologies for Enhanced Geothermal

    Office of Scientific and Technical Information (OSTI)

    Systems (Technical Report) | SciTech Connect. Title: Seismic Fracture Characterization Methodologies for Enhanced Geothermal Systems. Executive Summary: The overall objective of this work was the development of surface and borehole seismic methodologies using both compressional and shear waves for characterizing faults and fractures in Enhanced Geothermal Systems. We used both

  11. Methodology for Validating Building Energy Analysis Simulations

    SciTech Connect (OSTI)

    Judkoff, R.; Wortman, D.; O'Doherty, B.; Burch, J.

    2008-04-01

    The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

  12. Methodology for Augmenting Existing Paths with Additional Parallel Transects

    SciTech Connect (OSTI)

    Wilson, John E.

    2013-09-30

    Visual Sample Plan (VSP) is sample planning software that is used, among other purposes, to plan transect sampling paths to detect areas that were potentially used for munition training. This module was developed for application on a large site where existing roads and trails were to be used as primary sampling paths. Gap areas between these primary paths needed to be found and covered with parallel transect paths. These gap areas represent areas on the site that are more than a specified distance from a primary path. The added parallel paths needed to optionally be connected together into a single path, the shortest path possible. The paths also needed to optionally be attached to existing primary paths, again with the shortest possible path. Finally, the process must be repeatable and predictable so that the same inputs (primary paths, specified distance, and path options) will result in the same set of new paths every time. This methodology was developed to meet those specifications.
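The gap-area search at the heart of the module can be sketched with a brute-force distance test. This is not VSP's algorithm, just a deterministic toy: grid points farther than a specified distance from every point of a primary path are flagged as needing added transects. The site layout and distances are hypothetical.

```python
import numpy as np

def gap_mask(grid_pts, path_pts, max_dist):
    """Mark grid points farther than max_dist from every primary-path point
    (a brute-force stand-in for a gap-area search). Deterministic, so the
    same inputs always yield the same gap set."""
    d2 = ((grid_pts[:, None, :] - path_pts[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1)) > max_dist

# Hypothetical site: a 100 m x 100 m area with one existing road along x = 20 m.
xs, ys = np.meshgrid(np.arange(0, 101, 5.0), np.arange(0, 101, 5.0))
grid = np.column_stack([xs.ravel(), ys.ravel()])
road = np.column_stack([np.full(101, 20.0), np.arange(0, 101, 1.0)])

gaps = gap_mask(grid, road, max_dist=30.0)
print(f"{gaps.sum()} of {len(grid)} grid points need added parallel transects")
```

Connecting the flagged region with the shortest added path is a separate optimization step (essentially a routing problem) that the real module handles with its path options.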

  13. MPI errors from cray-mpich/7.3.0

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    MPI errors from cray-mpich/7.3.0. January 6, 2016, by Ankit Bhagatwala. A change in the MPICH2 library that now strictly enforces non-overlapping...

  14. Resolved: "error while loading shared libraries: libalpslli.so...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Resolved: "error while loading shared libraries: libalpslli.so.0" with serial codes on login nodes ...

  15. Siting Methodologies for Hydrokinetics | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Siting Methodologies for Hydrokinetics Report that provides an overview of the federal and state regulatory framework for hydrokinetic projects. PDF icon sitinghandbook2009.pdf ...

  16. Development of Nonlinear SSI Time Domain Methodology

    Broader source: Energy.gov [DOE]

    Development of Nonlinear SSI Time Domain Methodology Justin Coleman, P.E. Nuclear Science and Technology Idaho National Laboratory October 22, 2014

  17. Solutia: Massachusetts Chemical Manufacturer Uses SECURE Methodology...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Solutia: Massachusetts Chemical Manufacturer Uses SECURE Methodology to Identify Potential Reductions in Utility and Process Energy Consumption. This case ...

  18. September 2004 Water Sampling

    Office of Legacy Management (LM)

    4 Groundwater and Surface Water Sampling at the Slick Rock, Colorado, Processing Sites ... Water Sampling Field Activities Verification ...

  19. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Green River, Utah, Disposal Site August 2014 LMSGRN ... Water Sampling Field Activities Verification ...

  20. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and May 2014 Groundwater and Surface Water Sampling at the Shiprock, New Mexico, Disposal ... Water Sampling Field Activities Verification ...

  1. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Rio Blanco, Colorado, Site October 2014 LMSRBLS00514 ... Water Sampling Field Activities Verification ...

  2. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Natural Gas and Produced Water Sampling at the Rulison, Colorado, Site November 2014 LMS ... Water Sampling Field Activities Verification ...

  3. September 2004 Water Sampling

    Office of Legacy Management (LM)

    5 Groundwater and Surface Water Sampling at the Rulison, Colorado, Site October 2015 LMS ... Water Sampling Field Activities Verification ...

  4. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Monticello, Utah, Processing Site July 2015 LMSMNT ... Water Sampling Field Activities Verification ...

  5. September 2004 Water Sampling

    Office of Legacy Management (LM)

    2015 Groundwater and Surface Water Sampling at the Shiprock, New Mexico, Disposal Site ... Water Sampling Field Activities Verification ...

  6. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Rio Blanco, Colorado, Site October 2015 LMSRBLS00515 ... Water Sampling Field Activities Verification ...

  7. September 2004 Water Sampling

    Office of Legacy Management (LM)

    5 Produced Water Sampling at the Rulison, Colorado, Site May 2015 LMSRULS00115 Available ... Water Sampling Field Activities Verification ...

  8. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Natural Gas and Produced Water Sampling at the Gasbuggy, New Mexico, Site December 2013 ... Water Sampling Field Activities Verification ...

  9. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Produced Water Sampling at the Rulison, Colorado, Site January 2016 LMSRULS00915 ... Water Sampling Field Activities Verification ...

  10. September 2004 Water Sampling

    Office of Legacy Management (LM)

    3 Groundwater and Surface Water Sampling at the Monument Valley, Arizona, Processing Site ... Water Sampling Field Activities Verification ...

  11. September 2004 Water Sampling

    Office of Legacy Management (LM)

    July 2015 Groundwater and Surface Water Sampling at the Gunnison, Colorado, Processing ... Water Sampling Field Activities Verification ...

  12. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Monticello, Utah, Processing Site July 2014 LMSMNT ... Water Sampling Field Activities Verification ...

  13. September 2004 Water Sampling

    Office of Legacy Management (LM)

    3 Water Sampling at the Monticello, Utah, Processing Site January 2014 LMSMNTS01013 This ... Water Sampling Field Activities Verification ...

  14. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Naturita, Colorado Processing Site October 2013 LMSNAP ... Water Sampling Field Activities Verification ...

  15. September 2004 Water Sampling

    Office of Legacy Management (LM)

    4 Groundwater and Surface Water Sampling at the Gunnison, Colorado, Processing Site ... Water Sampling Field Activities Verification ...

  16. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Tuba City, Arizona, Disposal Site November 2013 LMSTUB ... Water Sampling Field Activities Verification ...

  17. September 2004 Water Sampling

    Office of Legacy Management (LM)

    5 Groundwater and Surface Water Sampling at the Monticello, Utah, Processing Site January ... Water Sampling Field Activities Verification ...

  18. Cyber-Informed Engineering: The Need for a New Risk Informed and Design Methodology

    SciTech Connect (OSTI)

    Price, Joseph Daniel; Anderson, Robert Stephen

    2015-06-01

    Current engineering and risk management methodologies do not contain the foundational assumptions required to address the intelligent adversary’s capabilities in malevolent cyber attacks. Current methodologies focus on equipment failures or human error as initiating events for a hazard, while cyber attacks use the functionality of a trusted system to perform operations outside of the intended design and without the operator’s knowledge. These threats can bypass or manipulate traditionally engineered safety barriers and present false information, invalidating the fundamental basis of a safety analysis. Cyber threats must be fundamentally analyzed from a completely new perspective where neither equipment nor human operation can be fully trusted. A new risk analysis and design methodology needs to be developed to address this rapidly evolving threat landscape.

  19. Locked modes and magnetic field errors in MST

    SciTech Connect (OSTI)

    Almagri, A.F.; Assadi, S.; Prager, S.C.; Sarff, J.S.; Kerst, D.W.

    1992-06-01

    In the MST reversed field pinch magnetic oscillations become stationary (locked) in the lab frame as a result of a process involving interactions between the modes, sawteeth, and field errors. Several helical modes become phase locked to each other to form a rotating localized disturbance, the disturbance locks to an impulsive field error generated at a sawtooth crash, the error fields grow monotonically after locking (perhaps due to an unstable interaction between the modes and field error), and over the tens of milliseconds of growth confinement degrades and the discharge eventually terminates. Field error control has been partially successful in eliminating locking.

  20. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    SciTech Connect (OSTI)

    Beckerman, M.; Jones, J.P.

    1999-02-01

    We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors, and the amplitudes of the radial and cross-track errors, increase.

  1. Error-eliminating rapid ultrasonic firing

    DOE Patents [OSTI]

    Borenstein, Johann; Koren, Yoram

    1993-08-24

    A system for producing reliable navigation data for a mobile vehicle, such as a robot, combines multiple range samples to increase the "confidence" of the algorithm in the existence of an obstacle. At higher vehicle speed, it is crucial to sample each sensor quickly and repeatedly to gather multiple samples in time to avoid a collision. Erroneous data is rejected by delaying the issuance of an ultrasonic energy pulse by a predetermined wait-period, which may be different during alternate ultrasonic firing cycles. Consecutive readings are compared, and the corresponding data is rejected if the readings differ by more than a predetermined amount. The rejection rate for the data is monitored and the operating speed of the navigation system is reduced if the data rejection rate is increased. This is useful to distinguish and eliminate noise from the data which truly represents the existence of an article in the field of operation of the vehicle.
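The consecutive-reading comparison and rejection-rate monitoring described above can be sketched in a few lines. This is a toy illustration of the idea, not the patented implementation; the threshold, readings, and class names are invented.

```python
def accept_reading(prev, curr, max_delta):
    """Reject a range sample when consecutive readings disagree by more than
    max_delta (the comparison step described above). Note a single spike
    causes both of its adjacent pairs to be rejected."""
    return abs(curr - prev) <= max_delta

class RejectionMonitor:
    """Track the data rejection rate; recommend slowing the vehicle when it climbs."""
    def __init__(self, max_rate=0.2):
        self.total = 0
        self.rejected = 0
        self.max_rate = max_rate

    def record(self, accepted):
        self.total += 1
        if not accepted:
            self.rejected += 1

    def should_slow_down(self):
        return self.total > 0 and self.rejected / self.total > self.max_rate

readings = [200, 202, 350, 203, 201, 460, 204]   # cm; 350 and 460 are noise spikes
monitor = RejectionMonitor()
clean = []
for prev, curr in zip(readings, readings[1:]):
    ok = accept_reading(prev, curr, max_delta=20)
    monitor.record(ok)
    if ok:
        clean.append(curr)
print("accepted readings:", clean)
print("slow down?", monitor.should_slow_down())
```

The two spikes are filtered out, and because the rejection rate exceeds the threshold the monitor recommends reducing speed, mirroring the speed-throttling behavior in the patent.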

  2. Error-eliminating rapid ultrasonic firing

    DOE Patents [OSTI]

    Borenstein, J.; Koren, Y.

    1993-08-24

    A system for producing reliable navigation data for a mobile vehicle, such as a robot, combines multiple range samples to increase the "confidence" of the algorithm in the existence of an obstacle. At higher vehicle speed, it is crucial to sample each sensor quickly and repeatedly to gather multiple samples in time to avoid a collision. Erroneous data is rejected by delaying the issuance of an ultrasonic energy pulse by a predetermined wait-period, which may be different during alternate ultrasonic firing cycles. Consecutive readings are compared, and the corresponding data is rejected if the readings differ by more than a predetermined amount. The rejection rate for the data is monitored and the operating speed of the navigation system is reduced if the data rejection rate is increased. This is useful to distinguish and eliminate noise from the data which truly represents the existence of an article in the field of operation of the vehicle.

  3. Culture, and a Metrics Methodology for Biological Countermeasure Scenarios

    SciTech Connect (OSTI)

    Simpson, Mary J.

    2007-03-15

    Outcome Metrics Methodology defines a way to evaluate outcome metrics associated with scenario analyses related to biological countermeasures. Previous work developed a schema to allow evaluation of common elements of impacts across a wide range of potential threats and scenarios. Classes of metrics were identified that could be used by decision makers to differentiate the common bases among disparate scenarios. Typical impact metrics used in risk calculations include the anticipated number of deaths, casualties, and the direct economic costs should a given event occur. There are less obvious metrics that are often as important and require more intensive initial work to be incorporated. This study defines a methodology for quantifying, evaluating, and ranking metrics other than direct health and economic impacts. As has been observed with the consequences of Hurricane Katrina, impacts to the culture of specific sectors of society are less obvious on an immediate basis but equally important over the ensuing and long term. Culture is used as the example class of metrics within which requirements for a methodology are explored, likely methodologies are examined, underlying assumptions for the respective methodologies are discussed, and the basis for recommending a specific methodology is demonstrated. Culture, as a class of metrics, is shown to consist of political, sociological, and psychological elements that are highly valued by decision makers. In addition, cultural practices, dimensions, and kinds of knowledge offer complementary sets of information that contribute to the context within which experts can provide input. The quantification and evaluation of sociopolitical, socio-economic, and sociotechnical impacts depend predominantly on subjective, expert judgment. Epidemiological data is limited, resulting in samples with statistical limits. Dose response assessments and curves depend on the quality of data and its relevance to human modes of exposure

  4. A technique for human error analysis (ATHEANA)

    SciTech Connect (OSTI)

    Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.

    1996-05-01

    Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.

  5. LCLS Sample Preparation Laboratory | Sample Preparation Laboratories

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    LCLS Sample Preparation Laboratory Kayla Zimmerman | (650) 926-6281 Lisa Hammon, LCLS Lab Coordinator Welcome to the LCLS Sample Preparation Laboratory. This small general use wet lab is located in Rm 109 of the Far Experimental Hall near the MEC, CXI, and XCS hutches. It conveniently serves all LCLS hutches and is available for final stage sample preparation. Due to space limitations, certain types of activities may be restricted and all access must be scheduled in advance. User lab bench

  6. Particle Measurement Methodology: Comparison of On-road and Lab Diesel

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Particle Size Distributions | Department of Energy. 2002 DEER Conference Presentation: University of Minnesota. 2002_deer_kittelson2.pdf (360.23 KB). More Documents & Publications: Gasoline Vehicle Exhaust Particle Sampling Study; Nanoparticle Emissions from Internal Combustion Engines; Review of Diesel

  7. Photovoltaic module energy rating methodology development

    SciTech Connect (OSTI)

    Kroposki, B.; Myers, D.; Emery, K.; Mrig, L.; Whitaker, C.; Newmiller, J.

    1996-05-01

    A consensus-based methodology to calculate the energy output of a PV module will be described in this paper. The methodology develops a simple measure of PV module performance that provides for a realistic estimate of how a module will perform in specific applications. The approach makes use of the weather data profiles that describe conditions throughout the United States and emphasizes performance differences between various module types. An industry-representative Technical Review Committee has been assembled to provide feedback and guidance on the strawman and final approach used in developing the methodology.

  8. Covariance Evaluation Methodology for Neutron Cross Sections

    SciTech Connect (OSTI)

    Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.

    2008-09-01

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and a Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
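
    The Bayesian update at the heart of such a covariance methodology can be sketched generically. The two-point cross-section vector and all numbers below are purely illustrative (this is not the EMPIRE/Kalman code itself); the point is that a measurement of one energy point also updates correlated, unmeasured points and shrinks the posterior covariance:

```python
import numpy as np

sigma_prior = np.array([1.00, 0.80])   # prior cross sections (barns, invented)
P = np.array([[0.010, 0.006],
              [0.006, 0.010]])         # prior covariance: the two points are correlated
H = np.array([[1.0, 0.0]])             # the experiment measures only the first point
y = np.array([0.95])                   # measured value (invented)
R = np.array([[0.002]])                # measurement variance (invented)

# Standard Kalman update: gain, posterior mean, posterior covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
sigma_post = sigma_prior + K @ (y - H @ sigma_prior)
P_post = (np.eye(2) - K @ H) @ P

# The unmeasured second point moves too, through the prior correlation,
# and the diagonal variances shrink.
print(sigma_post.round(4))
```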

  9. A method for the quantification of model form error associated with physical systems.

    SciTech Connect (OSTI)

    Wallen, Samuel P.; Brake, Matthew Robert

    2014-03-01

    In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.

  10. NSD Methodology Report | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    NSD Methodology Report NSDMethodologyReport.pdf (4.46 MB) More Documents & Publications New Stream-reach Development (NSD) Final Report and Fact Sheet An Assessment of Energy ...

  11. September 2004 Water Sampling

    Office of Legacy Management (LM)

    ... Inductively Coupled Plasma (ICP) Interference Check Sample (ICS) Analysis ICP interference check samples ICSA and ICSAB were analyzed at the required frequency to verify the ...

  12. September 2004 Water Sampling

    Office of Legacy Management (LM)

    5 Groundwater and Surface Water Sampling at the Tuba City, Arizona Disposal Site June 2015 .........7 Water Sampling Field Activities Verification ...

  13. Visual Sample Plan Flyer

    Office of Energy Efficiency and Renewable Energy (EERE)

    This flyer better explains that VSP is a free, easy-to-use software tool that supports development of optimal sampling plans based on statistical sampling theory.

  14. Methodology for Monthly Crude Oil Production Estimates

    U.S. Energy Information Administration (EIA) Indexed Site

    U.S. Energy Information Administration | Methodology for Monthly Crude Oil Production Estimates Executive summary The U.S. Energy Information Administration (EIA) relies on data from state and other federal agencies and does not currently collect survey data directly from crude oil producers. Summarizing the estimation process in terms of percent of U.S. production: * 20% is based on state agency data, including North Dakota and

  15. Internal compiler error for function pointer with identically named

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    arguments Internal compiler error for function pointer with identically named arguments Internal compiler error for function pointer with identically named arguments June 9, 2015 by Scott French, NERSC USG Status: Bug 21435 reported to PGI For pgcc versions after 12.x (up through 12.9 is fine, but 13.x and 14.x are not), you may observe an internal compiler error associated with function pointer prototypes when named arguments are used. Specifically, if a function pointer type is defined

  16. Polaractivation for classical zero-error capacity of qudit channels

    SciTech Connect (OSTI)

    Gyongyosi, Laszlo; Imre, Sandor

    2014-12-04

    We introduce a new phenomenon for zero-error transmission of classical information over quantum channels that initially were not capable of zero-error classical communication. The effect is called polaractivation, and the result is similar to the superactivation effect. We use the Choi-Jamiolkowski isomorphism and the Schmidt theorem to prove the polaractivation of classical zero-error capacity and define the polaractivator channel coding scheme.

  17. A design methodology for unattended monitoring systems

    SciTech Connect (OSTI)

    SMITH,JAMES D.; DELAND,SHARON M.

    2000-03-01

    The authors presented a high-level methodology for the design of unattended monitoring systems, focusing on a system to detect diversion of nuclear materials from a storage facility. The methodology is composed of seven interrelated analyses: Facility Analysis, Vulnerability Analysis, Threat Assessment, Scenario Assessment, Design Analysis, Conceptual Design, and Performance Assessment. The design of the monitoring system is iteratively improved until it meets a set of pre-established performance criteria. The methodology presented here is based on other, well-established system analysis methodologies and hence they believe it can be adapted to other verification or compliance applications. In order to make this approach more generic, however, there needs to be more work on techniques for establishing evaluation criteria and associated performance metrics. They found that defining general-purpose evaluation criteria for verifying compliance with international agreements was a significant undertaking in itself. They finally focused on diversion of nuclear material in order to simplify the problem so that they could work out an overall approach for the design methodology. However, general guidelines for the development of evaluation criteria are critical for a general-purpose methodology. A poor choice in evaluation criteria could result in a monitoring system design that solves the wrong problem.

  18. Review and evaluation of paleohydrologic methodologies

    SciTech Connect (OSTI)

    Foley, M.G.; Zimmerman, D.A.; Doesburg, J.M.; Thorne, P.D.

    1982-12-01

    A literature review was conducted to identify methodologies that could be used to interpret paleohydrologic environments. Paleohydrology is the study of past hydrologic systems or of the past behavior of an existing hydrologic system. The purpose of the review was to evaluate how well these methodologies could be applied to the siting of low-level radioactive waste facilities. The computer literature search queried five bibliographical data bases containing over five million citations of technical journals, books, conference papers, and reports. Two data-base searches (United States Geological Survey - USGS) and a manual search were also conducted. The methodologies were examined for data requirements and sensitivity limits. Paleohydrologic interpretations are uncertain because of the effects of time on hydrologic and geologic systems and because of the complexity of fluvial systems. Paleoflow determinations appear in many cases to be order-of-magnitude estimates. However, the methodologies identified in this report mitigate this uncertainty when used collectively as well as independently. That is, the data from individual methodologies can be compared or combined to corroborate hydrologic predictions. In this manner, paleohydrologic methodologies are viable tools to assist in evaluating the likely future hydrology of low-level radioactive waste sites.

  19. Platform-Independent Method for Detecting Errors in Metagenomic...

    Office of Scientific and Technical Information (OSTI)

    Title: Platform-Independent Method for Detecting Errors in Metagenomic Sequencing Data: DRISEE Authors: Keegan, K. P. ; Trimble, W. L. ; Wilkening, J. ; Wilke, A. ; Harrison, T. ; ...

  20. Detecting and correcting hard errors in a memory array

    DOE Patents [OSTI]

    Kalamatianos, John; John, Johnsy Kanjirapallil; Gelinas, Robert; Sridharan, Vilas K.; Nevius, Phillip E.

    2015-11-19

    Hard errors in the memory array can be detected and corrected in real-time using reusable entries in an error status buffer. Data may be rewritten to a portion of a memory array and a register in response to a first error in data read from the portion of the memory array. The rewritten data may then be written from the register to an entry of an error status buffer in response to the rewritten data read from the register differing from the rewritten data read from the portion of the memory array.
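
    The detect-and-verify flow described in the abstract can be illustrated with a toy simulation. The `FaultyMemory` model, the stuck-at masks, and the addresses below are invented for illustration; only the rewrite-and-compare logic follows the description above:

```python
# Sketch of the detect-and-verify flow: on a read error, the data is
# rewritten both to the suspect memory location and to a register; if the
# two subsequently disagree, the fault is persistent (hard) and the
# location is logged to an error status buffer.

class FaultyMemory:
    """Simulated memory where some addresses have stuck-at-zero bits."""
    def __init__(self, size, stuck_mask_by_addr=None):
        self.cells = [0] * size
        self.stuck = stuck_mask_by_addr or {}   # addr -> AND-mask of stuck bits

    def write(self, addr, value):
        self.cells[addr] = value & self.stuck.get(addr, 0xFF)

    def read(self, addr):
        return self.cells[addr]

def classify_error(mem, addr, data):
    """Return 'hard' or 'transient' after the rewrite-and-compare check."""
    register = data          # rewrite the data to a side register...
    mem.write(addr, data)    # ...and back to the suspect memory location
    return "hard" if mem.read(addr) != register else "transient"

error_status_buffer = []
mem = FaultyMemory(16, stuck_mask_by_addr={3: 0xFE})  # bit 0 stuck low at addr 3

for addr, data in [(3, 0x0F), (5, 0x0F)]:
    if classify_error(mem, addr, data) == "hard":
        error_status_buffer.append(addr)

print(error_status_buffer)  # only addr 3 has a persistent (hard) fault: [3]
```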

  1. Info-Gap Analysis of Truncation Errors in Numerical Simulations...

    Office of Scientific and Technical Information (OSTI)

    Title: Info-Gap Analysis of Truncation Errors in Numerical Simulations. Authors: Kamm, James R. ; Witkowski, Walter R. ; Rider, William J. ; Trucano, Timothy Guy ; Ben-Haim, Yakov. ...

  2. Info-Gap Analysis of Numerical Truncation Errors. (Conference...

    Office of Scientific and Technical Information (OSTI)

    Title: Info-Gap Analysis of Numerical Truncation Errors. Authors: Kamm, James R. ; Witkowski, Walter R. ; Rider, William J. ; Trucano, Timothy Guy ; Ben-Haim, Yakov. Publication ...

  3. Table 6b. Relative Standard Errors for Total Electricity Consumption...

    U.S. Energy Information Administration (EIA) Indexed Site

    b. Relative Standard Errors for Total Electricity Consumption per Effective Occupied Square Foot, 1992 Building Characteristics All Buildings Using Electricity (thousand) Total...

  4. Confirmation of standard error analysis techniques applied to...

    Office of Scientific and Technical Information (OSTI)

    reported parameter errors are not reliable in many EXAFS studies in the literature. ... Country of Publication: United States Language: English Subject: 75; ABSORPTION; ACCURACY; ...

  5. WIPP Weatherization: Common Errors and Innovative Solutions Presentati...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    More Documents & Publications Common Errors and Innovative Solutions Transcript Building ... America Best Practices Series: Volume 12. Energy Renovations-Insulation: A Guide for ...

  6. Output-Based Error Estimation and Adaptation for Uncertainty...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Output-Based Error Estimation and Adaptation for Uncertainty Quantification Isaac M. Asher and Krzysztof J. Fidkowski University of Michigan US National Congress on Computational...

  7. U-058: Apache Struts Conversion Error OGNL Expression Injection...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    in Apache Struts. A remote user can execute arbitrary commands on the target system. PLATFORM: Apache Struts 2.x ABSTRACT: Apache Struts Conversion Error OGNL Expression...

  8. Accounting for Model Error in the Calibration of Physical Models

    Office of Scientific and Technical Information (OSTI)

    ... model error term in locations where key modeling assumptions and approximations are made ... to represent the truth o In this context, the data has no noise o Discrepancy ...

  9. Handling Model Error in the Calibration of Physical Models

    Office of Scientific and Technical Information (OSTI)

    ... model error term in locations where key modeling assumptions and approximations are made ... to represent the truth o In this context, the data has no noise o Discrepancy ...

  10. Development of a statistically based access delay timeline methodology.

    SciTech Connect (OSTI)

    Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane; Hendrickson, Stacey M. Langfitt

    2013-02-01

    The charter for adversarial delay is to hinder access to critical resources through the use of physical systems increasing an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating times required to complete each task with little regard to uncertainty, complexity, or decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methodologies, taking into account small sample size, expert judgment, human factors and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results in making informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness with lower cost.
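
    The shift from summed worst-case task times to a distribution over total delay can be sketched with a small Monte Carlo. The task list, the lognormal form, and every number below are invented placeholders, not data or methods from the report:

```python
import math
import random

random.seed(0)

# Each barrier's delay as (median seconds, geometric std dev) -- invented numbers
tasks = [
    (60.0, 1.5),    # breach outer fence
    (120.0, 1.8),   # defeat door
    (90.0, 1.4),    # reach target
]

def total_delay():
    """One Monte Carlo draw of the total adversary timeline."""
    return sum(random.lognormvariate(math.log(median), math.log(gsd))
               for median, gsd in tasks)

samples = sorted(total_delay() for _ in range(10000))
p10 = samples[1000]   # 10th percentile: a defensible lower bound on delay,
                      # rather than a single worst-case number
print(round(p10, 1), "seconds")
```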

  11. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    SciTech Connect (OSTI)

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-23

    Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors.

    To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s−1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s−1) and errors in the vertical velocity measurement
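
    The DBS retrieval that relies on this homogeneity assumption can be sketched for a four-beam geometry. The 28° zenith angle and the east/west/north/south beam arrangement below are illustrative assumptions, not the configuration of the specific instruments in this study:

```python
import numpy as np

def dbs_retrieve(v_e, v_w, v_n, v_s, zenith_deg=28.0):
    """Recover (u, v, w) from four line-of-sight velocities, assuming
    horizontally homogeneous flow across the beams -- the assumption
    that breaks down in wakes and complex terrain."""
    phi = np.radians(zenith_deg)
    u = (v_e - v_w) / (2 * np.sin(phi))
    v = (v_n - v_s) / (2 * np.sin(phi))
    w = (v_e + v_w + v_n + v_s) / (4 * np.cos(phi))
    return u, v, w

# Synthetic check: project a known wind (u=6.5, v=0, w=0) onto the beams
phi = np.radians(28.0)
u_true, v_true, w_true = 6.5, 0.0, 0.0
v_e = u_true * np.sin(phi) + w_true * np.cos(phi)
v_w = -u_true * np.sin(phi) + w_true * np.cos(phi)
v_n = v_true * np.sin(phi) + w_true * np.cos(phi)
v_s = -v_true * np.sin(phi) + w_true * np.cos(phi)
print(dbs_retrieve(v_e, v_w, v_n, v_s))  # recovers approximately (6.5, 0.0, 0.0)
```

    In homogeneous flow the retrieval is exact; when the beams sample different parts of a wake, the same algebra silently mixes unequal winds, which is the error source quantified above.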

  12. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-23

    Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s−1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s−1) and errors in the vertical velocity measurement exceed the actual

  13. Scheme for precise correction of orbit variation caused by dipole error field of insertion device

    SciTech Connect (OSTI)

    Nakatani, T.; Agui, A.; Aoyagi, H.; Matsushita, T.; Takao, M.; Takeuchi, M.; Yoshigoe, A.; Tanaka, H.

    2005-05-15

    We developed a scheme for precisely correcting the orbit variation caused by a dipole error field of an insertion device (ID) in a storage ring and investigated its performance. The key point for achieving the precise correction is to extract the variation of the beam orbit caused by the change of the ID error field from the observed variation. We periodically change parameters such as the gap and phase of the specified ID with a mirror-symmetric pattern over the measurement period to modulate the variation. The orbit variation is measured using conventional wide-frequency-band detectors and then the induced variation is extracted precisely through averaging and filtering procedures. Furthermore, the mirror-symmetric pattern enables us to independently extract the orbit variations caused by a static error field and by a dynamic one, e.g., an error field induced by the dynamical change of the ID gap or phase parameter. We built a time synchronization measurement system with a sampling rate of 100 Hz and applied the scheme to the correction of the orbit variation caused by the error field of an APPLE-2-type undulator installed in the SPring-8 storage ring. The result shows that the developed scheme markedly improves the correction performance and suppresses the orbit variation caused by the ID error field down to the order of submicron. This scheme is applicable not only to the correction of the orbit variation caused by a special ID, the gap or phase of which is periodically changed during an experiment, but also to the correction of the orbit variation caused by a conventional ID which is used with a fixed gap and phase.

  14. Accuracy of the European solar water heater test procedure. Part 1: Measurement errors and parameter estimates

    SciTech Connect (OSTI)

    Rabl, A.; Leide, B. ); Carvalho, M.J.; Collares-Pereira, M. ); Bourges, B.

    1991-01-01

    The Collector and System Testing Group (CSTG) of the European Community has developed a procedure for testing the performance of solar water heaters. This procedure treats a solar water heater as a black box with input-output parameters that are determined by all-day tests. In the present study the authors carry out a systematic analysis of the accuracy of this procedure, in order to answer the question: what tolerances should one impose for the measurements, and how many days of testing should one demand under what meteorological conditions, in order to be able to guarantee a specified maximum error for the long term performance? The methodology is applicable to other test procedures as well. The present paper (Part 1) examines the measurement tolerances of the current version of the procedure and derives a priori estimates of the errors of the parameters; these errors are then compared with the regression results of the Round Robin test series. The companion paper (Part 2) evaluates the consequences for the accuracy of the long term performance prediction. The authors conclude that the CSTG test procedure makes it possible to predict the long term performance with standard errors around 5% for sunny climates (10% for cloudy climates). The apparent precision of individual test sequences is deceptive because of large systematic discrepancies between different sequences. Better results could be obtained by imposing tighter control on the constancy of the cold water supply temperature and on the environment of the test, the latter by enforcing the recommendation for the ventilation of the collector.

  15. Methodologies for Reservoir Characterization Using Fluid Inclusion Gas Chemistry

    SciTech Connect (OSTI)

    Dilley, Lorie M.

    2015-04-13

    The purpose of this project was to: 1) evaluate the relationship between geothermal fluid processes and the compositions of the fluid inclusion gases trapped in the reservoir rocks; and 2) develop methodologies for interpreting fluid inclusion gas data in terms of the chemical, thermal and hydrological properties of geothermal reservoirs. Phase 1 of this project was designed to conduct the following: 1) model the effects of boiling, condensation, conductive cooling and mixing on selected gaseous species, using fluid compositions obtained from geothermal wells; 2) evaluate, using quantitative analyses provided by New Mexico Tech (NMT), how these processes are recorded by fluid inclusions trapped in individual crystals; and 3) determine if the results obtained on individual crystals can be applied to the bulk fluid inclusion analyses determined by Fluid Inclusion Technology (FIT). Our initial studies, however, suggested that numerical modeling of the data would be premature. We observed that the gas compositions, determined on bulk and individual samples, were not the same as those discharged by the geothermal wells. Gases discharged from geothermal wells are CO2-rich and contain low concentrations of light gases (i.e. H2, He, N, Ar, CH4). In contrast many of our samples displayed enrichments in these light gases. Efforts were initiated to evaluate the reasons for the observed gas distributions. As a first step, we examined the potential importance of different reservoir processes using a variety of commonly employed gas ratios (e.g. Giggenbach plots). The second technical target was the development of interpretational methodologies. We have developed methodologies for the interpretation of fluid inclusion gas data, based on the results of Phase 1, geologic interpretation of fluid inclusion data, and integration of the data. These methodologies can be used in conjunction with the relevant geological and hydrological information on the system to

  16. Two-stage sampling for acceptance testing

    SciTech Connect (OSTI)

    Atwood, C.L.; Bryan, M.F.

    1992-09-01

    Sometimes a regulatory requirement or a quality-assurance procedure sets an allowed maximum on a confidence limit for a mean. If the sample mean of the measurements is below the allowed maximum, but the confidence limit is above it, a very widespread practice is to increase the sample size and recalculate the confidence bound. The confidence level of this two-stage procedure is rarely found correctly, but instead is typically taken to be the nominal confidence level, found as if the final sample size had been specified in advance. In typical settings, the correct nominal α should be between the desired P(Type I error) and half that value. This note gives tables for the correct α to use, some plots of power curves, and an example of correct two-stage sampling.
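
    The inflation of the Type I error by this two-stage practice can be checked with a small simulation. The sample sizes, normal data, and one-sided z bound below are illustrative assumptions, not the note's own tables:

```python
import random
import statistics

def upper_confidence_limit(sample, z=1.645):
    """One-sided upper ~95% confidence limit for the mean (normal approximation)."""
    n = len(sample)
    return statistics.mean(sample) + z * statistics.stdev(sample) / n ** 0.5

def two_stage_accept(draw, n1=20, n2=40, limit=0.0):
    """The widespread practice described above: if the first bound fails,
    enlarge the sample and recompute the bound as if n2 had been fixed."""
    sample = [draw() for _ in range(n1)]
    if upper_confidence_limit(sample) <= limit:
        return True
    sample += [draw() for _ in range(n2 - n1)]   # second stage, same nominal level
    return upper_confidence_limit(sample) <= limit

# True mean exactly at the allowed maximum: a single-stage test would accept
# about 5% of the time; giving the data two chances accepts more often.
random.seed(1)
trials = 2000
false_accepts = sum(two_stage_accept(lambda: random.gauss(0.0, 1.0))
                    for _ in range(trials))
print(false_accepts / trials)  # typically exceeds the nominal 0.05
```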

  18. September 2004 Water Sampling

    Office of Legacy Management (LM)

    .........5 Water Sampling Field Activities Verification ... Groundwater Quality Data Surface Water Quality Data Equipment Blank Data ...

  19. Fluid sampling tool

    DOE Patents [OSTI]

    Garcia, Anthony R.; Johnston, Roger G.; Martinez, Ronald K.

    2000-01-01

    A fluid-sampling tool for obtaining a fluid sample from a container. When used in combination with a rotatable drill, the tool bores a hole into a container wall, withdraws a fluid sample from the container, and seals the borehole. The tool collects the fluid sample without exposing the operator or the environment to the fluid or to wall shavings from the container.

  20. The Sample Preparation Laboratories | Sample Preparation Laboratories

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Plan Ahead! See the tabs above for Laboratory Access and forms you'll need to complete. Equipment and Chemicals tabs detail resources already available on site. Avoid delays! Hazardous materials use may require a written Standard Operating Procedure (SOP) before you work. Check the Chemicals tab for more information. The Sample Preparation Laboratories provide wet lab

  1. Hydrologic characterization of fractured rocks: An interdisciplinary methodology

    SciTech Connect (OSTI)

    Long, J.C.S.; Majer, E.L.; Martel, S.J.; Karasaki, K.; Peterson, J.E. Jr.; Davey, A.; Hestir, K. )

    1990-11-01

    The characterization of fractured rock is a critical problem in the development of nuclear waste repositories in geologic media. A good methodology for characterizing these systems should be focused on the large important features first and concentrate on building numerical models which can reproduce the observed hydrologic behavior of the fracture system. In many rocks, fracture zones dominate the behavior. These can be described using the tools of geology and geomechanics in order to understand what kind of features might be important hydrologically and to qualitatively describe the way flow might occur in the rock. Geophysics can then be employed to locate these features between boreholes. Then well testing can be used to see if the identified features are in fact important. Given this information, a conceptual model of the system can be developed which honors the geologic description, the tomographic data and the evidence of high permeability. Such a model can then be modified through an inverse process, such as simulated annealing, until it reproduces the cross-hole well test behavior which has been observed in situ. Other possible inversion techniques might take advantage of self similar structure. Once a model is constructed, we need to see how well the model makes predictions. We can use a cross-validation technique which sequentially puts aside parts of the data and uses the model to predict that part in order to calculate the prediction error. This approach combines many types of information in a methodology which can be modified to fit a particular field site. 114 refs., 81 figs., 7 tabs.

  2. Critical infrastructure systems of systems assessment methodology.

    SciTech Connect (OSTI)

    Sholander, Peter E.; Darby, John L.; Phelan, James M.; Smith, Bryan; Wyss, Gregory Dane; Walter, Andrew; Varnado, G. Bruce; Depoy, Jennifer Mae

    2006-10-01

    Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies that separately consider physical security and cyber security. This research has developed a risk assessment methodology that explicitly accounts for both physical and cyber security, while preserving the traditional security paradigm of detect, delay, and respond. This methodology also accounts for the condition that a facility may be able to recover from or mitigate the impact of a successful attack before serious consequences occur. The methodology uses evidence-based techniques (which are a generalization of probability theory) to evaluate the security posture of the cyber protection systems. Cyber threats are compared against cyber security posture using a category-based approach nested within a path-based analysis to determine the most vulnerable cyber attack path. The methodology summarizes the impact of a blended cyber/physical adversary attack in a conditional risk estimate where the consequence term is scaled by a ''willingness to pay'' avoidance approach.

  3. Error localization in RHIC by fitting difference orbits

    SciTech Connect (OSTI)

    Liu C.; Minty, M.; Ptitsyn, V.

    2012-05-20

    The presence of realistic errors in an accelerator, or in the model used to describe the accelerator, is such that a measurement of the beam trajectory may deviate from prediction. Comparison of measurements to model can be used to detect such errors. To do so the initial conditions (phase space parameters at any point) must be determined, which can be achieved by fitting the difference orbit compared to model prediction using only a few beam position measurements. Using these initial conditions, the fitted orbit can be propagated along the beam line based on the optics model. Measurement and model will agree up to the point of an error. The error source can be better localized by additionally fitting the difference orbit using downstream BPMs and back-propagating the solution. If one dominating error source exists in the machine, the fitted orbit will deviate from the difference orbit at the same point.
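
    The fit-and-propagate idea can be sketched in one dimension with drift sections only. The lattice, kick location, and numbers below are a toy illustration, not RHIC optics: fit the initial conditions to the first few BPMs, propagate the fit forward, and look for where model and measurement diverge.

```python
import numpy as np

def drift(L):
    """2x2 transfer matrix of a field-free drift of length L (linear optics)."""
    return np.array([[1.0, L], [0.0, 1.0]])

n_bpm = 10            # toy beamline: BPMs separated by 1 m drifts
M = drift(1.0)

def propagate(x0, xp0, n):
    """Propagate initial conditions (x0, x'0) through n BPMs."""
    state, orbit = np.array([x0, xp0]), []
    for _ in range(n):
        orbit.append(state[0])
        state = M @ state
    return np.array(orbit)

# Simulated measurement: an unknown 0.5 mrad angle kick at BPM 6
measured = propagate(1.0e-3, 0.0, n_bpm)
measured[6:] += propagate(0.0, 0.5e-3, n_bpm - 6)

# Fit (x0, x'0) to the first few BPMs only: x(s) = x0 + s * x'0 in drifts
A = np.array([[1.0, s] for s in range(4)])
x0_fit, xp0_fit = np.linalg.lstsq(A, measured[:4], rcond=None)[0]
fitted = propagate(x0_fit, xp0_fit, n_bpm)

# Measurement and model agree up to the error location, then deviate
residual = np.abs(measured - fitted)
print(int(np.argmax(residual > 1e-9)))  # prints 7: first BPM downstream of the kick
```

    An angle kick at BPM 6 leaves the position there unchanged, so the first reading that deviates is BPM 7; fitting the downstream BPMs and back-propagating would bracket the error from the other side, as the abstract describes.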

  4. Rain sampling device

    DOE Patents [OSTI]

    Nelson, D.A.; Tomich, S.D.; Glover, D.W.; Allen, E.V.; Hales, J.M.; Dana, M.T.

    1991-05-14

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of the precipitation from the chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device. 11 figures.

  5. Rain sampling device

    DOE Patents [OSTI]

    Nelson, Danny A.; Tomich, Stanley D.; Glover, Donald W.; Allen, Errol V.; Hales, Jeremy M.; Dana, Marshall T.

    1991-01-01

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of said precipitation from said chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device.

  6. Particle Measurement Methodology: Comparison of On-road and Lab...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Measurement Methodology: Comparison of On-road and Lab Diesel Particle Size Distributions Particle Measurement Methodology: Comparison of On-road and Lab Diesel Particle Size ...

  7. Evaluation of the European PMP Methodologies Using Chassis Dynamometer...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    the European PMP Methodologies Using Chassis Dynamometer and On-road Testing of Heavy-duty Vehicles Evaluation of the European PMP Methodologies Using Chassis Dynamometer and ...

  8. Modeling of Diesel Exhaust Systems: A methodology to better simulate...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of Diesel Exhaust Systems: A methodology to better simulate soot reactivity Modeling of Diesel Exhaust Systems: A methodology to better simulate soot reactivity Discussed ...

  9. Biopower Report Presents Methodology for Assessing the Value...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Report Presents Methodology for Assessing the Value of Co-Firing Biomass in Pulverized Coal Plants Biopower Report Presents Methodology for Assessing the Value of Co-Firing...

  10. Validation of Hydrogen Exchange Methodology on Molecular Sieves...

    Office of Environmental Management (EM)

    Validation of Hydrogen Exchange Methodology on Molecular Sieves for Tritium Removal from Contaminated Water Validation of Hydrogen Exchange Methodology on Molecular Sieves for ...

  11. Seismic hazard methodology for the Central and Eastern United...

    Office of Scientific and Technical Information (OSTI)

    Central and Eastern United States: Volume 1: Part 2, Methodology (Revision 1): Final report Citation Details In-Document Search Title: Seismic hazard methodology for the Central ...

  12. Seismic hazard methodology for the central and Eastern United...

    Office of Scientific and Technical Information (OSTI)

    Title: Seismic hazard methodology for the central and Eastern United States: Volume 1, Part 1: Theory: Final report The NRC staff concludes that SOGEPRI Seismic Hazard Methodology...

  13. A Proposed Methodology to Determine the Leverage Impacts of Technology...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    A Proposed Methodology to Determine the Leverage Impacts of Technology Deployment Programs 2008 A Proposed Methodology to Determine the Leverage Impacts of Technology Deployment ...

  14. Science-based MEMS reliability methodology. (Conference) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    Science-based MEMS reliability methodology. Citation Details In-Document Search Title: Science-based MEMS reliability methodology. No abstract prepared. Authors: Walraven, Jeremy ...

  15. VERA Core Simulator Methodology for PWR Cycle Depletion (Conference...

    Office of Scientific and Technical Information (OSTI)

    VERA Core Simulator Methodology for PWR Cycle Depletion Citation Details In-Document Search Title: VERA Core Simulator Methodology for PWR Cycle Depletion Authors: Kochunas, ...

  16. On the UQ methodology development for storage applications. ...

    Office of Scientific and Technical Information (OSTI)

    On the UQ methodology development for storage applications. Citation Details In-Document Search Title: On the UQ methodology development for storage applications. Abstract not ...

  17. Barr Engineering Statement of Methodology Rosemount Wind Turbine...

    Energy Savers [EERE]

    Barr Engineering Statement of Methodology Rosemount Wind Turbine Simulations by Truescape Visual Reality, DOEEA-1791 (May 2010) Barr Engineering Statement of Methodology Rosemount...

  18. Application of Random Vibration Theory Methodology for Seismic...

    Energy Savers [EERE]

    Application of Random Vibration Theory Methodology for Seismic Soil-Structure Interaction Analysis Application of Random Vibration Theory Methodology for Seismic Soil-Structure...

  19. SASSI Methodology-Based Sensitivity Studies for Deeply Embedded...

    Office of Environmental Management (EM)

    SASSI Methodology-Based Sensitivity Studies for Deeply Embedded Structures, Such As Small Modular Reactors (SMRs) SASSI Methodology-Based Sensitivity Studies for Deeply Embedded...

  20. Prototype integration of the joint munitions assessment and planning model with the OSD threat methodology

    SciTech Connect (OSTI)

    Lynn, R.Y.S.; Bolmarcich, J.J.

    1994-06-01

    The purpose of this Memorandum is to propose a prototype procedure which the Office of Munitions might employ to exercise, in a supportive joint fashion, two of its High Level Conventional Munitions Models, namely, the OSD Threat Methodology and the Joint Munitions Assessment and Planning (JMAP) model. The joint application of JMAP and the OSD Threat Methodology provides a tool to optimize munitions stockpiles. The remainder of this Memorandum comprises five parts. The first is a description of the structure and use of the OSD Threat Methodology. The second is a description of JMAP and its use. The third discusses the concept of the joint application of JMAP and OSD Threat Methodology. The fourth displays sample output of the joint application. The fifth is a summary and epilogue. Finally, three appendices contain details of the formulation, data, and computer code.

  1. Systematic Comparison of Operating Reserve Methodologies: Preprint

    SciTech Connect (OSTI)

    Ibanez, E.; Krad, I.; Ela, E.

    2014-04-01

    Operating reserve requirements are a key component of modern power systems, and they contribute to maintaining reliable operations with minimum economic impact. No universal method exists for determining reserve requirements, so a thorough study and performance comparison of the existing methodologies is needed. Increasing penetrations of variable generation (VG) on electric power systems are poised to increase system uncertainty and variability, and thus the need for additional reserves. This paper presents background information on operating reserve and its relationship to VG. A consistent comparison of three methodologies to calculate regulating and flexibility reserve in systems with VG is performed.
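
    As one concrete example of a reserve methodology (not necessarily one of the three the paper compares), flexibility reserve is sometimes sized to cover a chosen percentile of upward net-load ramps; a sketch with synthetic data:

```python
import random

random.seed(1)
# Synthetic hourly load and variable generation (VG) profiles, in MW
load = [1000.0 + 200.0 * random.random() for _ in range(1000)]
wind = [150.0 * random.random() for _ in range(1000)]
net_load = [l - w for l, w in zip(load, wind)]

# Hour-to-hour net-load changes; size the flexibility reserve to cover a
# high percentile (here 95%) of the upward ramps
deltas = [b - a for a, b in zip(net_load, net_load[1:])]
up_ramps = sorted(d for d in deltas if d > 0)
reserve = up_ramps[int(0.95 * len(up_ramps))]
```

    Higher VG penetration widens the distribution of `deltas`, which is the mechanism by which the reserve requirement grows.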

  2. Water and Sediment Sampling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    MDC Blank 7222014 Below MDC Below MDC Water Sampling Results Location Sample Date WIPP ... Tut Tank 3132014 Below MDC Below MDC Fresh Water Tank 3122014 Below MDC Below MDC Hill ...

  3. September 2004 Water Sampling

    Office of Legacy Management (LM)

    ... 100, 17B, 1A, 72, and 81 were classified as Category II. The sample results were qualified with a "Q" flag, indicating the data are qualitative because of the sampling technique. ...

  4. September 2004 Water Sampling

    Office of Legacy Management (LM)

    ... the applicable MDL. Inductively Coupled Plasma Interference Check Sample Analysis ... and background correction factors for all inductively coupled plasma instruments. ...

  5. September 2004 Water Sampling

    Office of Legacy Management (LM)

    .........7 Water Sampling Field Activities Verification ... Groundwater Quality Data Static Water Level Data Time-Concentration Graphs ...

  6. September 2004 Water Sampling

    Office of Legacy Management (LM)

    .........9 Water Sampling Field Activities Verification ... Data Durango Processing Site Surface Water Quality Data Equipment Blank Data Static ...

  7. September 2004 Water Sampling

    Office of Legacy Management (LM)

    .........3 Water Sampling Field Activities Verification ... Groundwater Quality Data Surface Water Quality Data Natural Gas Analysis Data ...

  8. September 2004 Water Sampling

    Office of Legacy Management (LM)

    .........5 Water Sampling Field Activities Verification ... Groundwater Quality Data Static Water Level Data Hydrographs Time-Concentration ...

  9. September 2004 Water Sampling

    Office of Legacy Management (LM)

    .........5 Water Sampling Field Activities Verification ... Groundwater Quality Data Static Water Level Data Hydrograph Time-Concentration ...

  10. September 2004 Water Sampling

    Office of Legacy Management (LM)

    .........5 Water Sampling Field Activities Verification ... Groundwater Quality Data Surface Water Quality Data Time-Concentration Graph ...

  11. September 2004 Water Sampling

    Office of Legacy Management (LM)

    .........5 Water Sampling Field Activities Verification ... Quality Data Equipment Blank Data Static Water Level Data Time-Concentration Graphs ...

  12. September 2004 Water Sampling

    Office of Legacy Management (LM)

    .........5 Water Sampling Field Activities Verification ... Groundwater Quality Data Static Water Level Data Time-Concentration Graphs ...

  13. September 2004 Water Sampling

    Office of Legacy Management (LM)

    .........9 Water Sampling Field Activities Verification ... Groundwater Quality Data Surface Water Quality Data Static Water Level Data ...

  14. September 2004 Water Sampling

    Office of Legacy Management (LM)

    .........3 Water Sampling Field Activities Verification ... Groundwater Quality Data Surface Water Quality Data Time-Concentration Graphs ...

  15. September 2004 Water Sampling

    Office of Legacy Management (LM)

    .........7 Water Sampling Field Activities Verification ... Groundwater Quality Data Surface Water Quality Data Equipment Blank Data Static ...

  16. September 2004 Water Sampling

    Office of Legacy Management (LM)

    .........5 Water Sampling Field Activities Verification ... Groundwater Quality Data Surface Water Quality Data Equipment Blank Data Static ...

  17. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Water Sampling at the Ambrosia Lake, New Mexico, Disposal Site February 2015 LMS/AMB/S01114 This page intentionally left blank U.S. Department of Energy DVP-November 2014, Ambrosia Lake, New Mexico February 2015 RIN 14116607 Page i Contents Sampling Event Summary ...............................................................................................................1 Ambrosia Lake, NM, Disposal Site Planned Sampling Map...........................................................3 Data

  18. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Sampling at the Ambrosia Lake, New Mexico, Disposal Site March 2016 LMS/AMB/S01215 This page intentionally left blank U.S. Department of Energy DVP-December 2015, Ambrosia Lake, New Mexico March 2016 RIN 15117494 Page i Contents Sampling Event Summary ...............................................................................................................1 Ambrosia Lake, NM, Disposal Site Planned Sampling Map...........................................................3 Data Assessment

  19. September 2004 Water Sampling

    Office of Legacy Management (LM)

    October 2013 Groundwater Sampling at the Bluewater, New Mexico, Disposal Site December 2013 LMS/BLU/S00813 This page intentionally left blank U.S. Department of Energy DVP-August and October 2013, Bluewater, New Mexico December 2013 RIN 13085537 and 13095651 Page i Contents Sampling Event Summary ...............................................................................................................1 Private Wells Sampled August 2013 and October 2013, Bluewater, NM, Disposal Site

  20. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and Surface Water Sampling at the Monument Valley, Arizona, Processing Site February 2015 LMS/MON/S01214 This page intentionally left blank U.S. Department of Energy DVP-December 2014, Monument Valley, Arizona February 2015 RIN 14126645 Page i Contents Sampling Event Summary ...............................................................................................................1 Monument Valley, Arizona, Disposal Site Sample Location Map ..................................................5

  1. September 2004 Water Sampling

    Office of Legacy Management (LM)

    4 Alternate Water Supply System Sampling at the Riverton, Wyoming, Processing Site May 2014 LMS/RVT/S00314 This page intentionally left blank U.S. Department of Energy DVP-March 2014, Riverton, Wyoming May 2014 RIN 14035986 Page i Contents Sampling Event Summary ...............................................................................................................1 Riverton, WY, Processing Site, Sample Location Map ...................................................................3 Data

  2. September 2004 Water Sampling

    Office of Legacy Management (LM)

    February 2015 Groundwater and Surface Water Sampling at the Grand Junction, Colorado, Site April 2015 LMS/GJO/S00215 This page intentionally left blank U.S. Department of Energy DVP-February 2015, Grand Junction, Colorado, Site April 2015 RIN 15026795 Page i Contents Sampling Event Summary ...............................................................................................................1 Grand Junction, Colorado, Site Sample Location Map

  3. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Sampling at the Grand Junction, Colorado, Disposal Site November 2013 LMS/GRJ/S00813 This page intentionally left blank U.S. Department of Energy DVP-August 2013, Grand Junction, Colorado November 2013 RIN 13075515 Page i Contents Sampling Event Summary ...............................................................................................................1 Grand Junction, Colorado, Disposal Site Sample Location Map ....................................................3 Data Assessment

  4. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Old and New Rifle, Colorado, Processing Sites August 2013 LMS/RFN/RFO/S00613 This page intentionally left blank U.S. Department of Energy DVP-June 2013, Rifle, Colorado August 2013 RIN 13065380 Page i Contents Sampling Event Summary ...............................................................................................................1 Sample Location Map, New Rifle, Colorado, Processing Site ........................................................5 Sample Location Map, Old Rifle,

  5. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Groundwater and Surface Water Sampling at the Slick Rock East and West, Colorado, Processing Sites November 2013 LMS/SRE/SRW/S0913 This page intentionally left blank U.S. Department of Energy DVP-September 2013, Slick Rock, Colorado November 2013 RIN 13095593 Page i Contents Sampling Event Summary ...............................................................................................................1 Slick Rock East and West, Colorado, Processing Sites, Sample Location Map

  6. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    SciTech Connect (OSTI)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work derived the reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location. The accuracy of the reflector slope error measurement was thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  7. Balancing aggregation and smoothing errors in inverse models

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Turner, A. J.; Jacob, D. J.

    2015-06-30

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
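
    Of the three reduction methods, grid coarsening is the simplest to illustrate. A toy sketch (dimensions and values are illustrative) of the aggregation operator and the aggregation error introduced when the reduced state is projected back to native resolution:

```python
import numpy as np

# Toy native-resolution state vector (8 elements)
native = np.arange(8, dtype=float)

# Grid coarsening: aggregation matrix G averages adjacent pairs of elements
G = np.kron(np.eye(4), np.full((1, 2), 0.5))
reduced = G @ native  # 4-element reduced state vector

# Mapping back with the pseudo-inverse imposes a fixed prior relationship
# within each merged pair; whatever structure that relationship cannot
# represent is the aggregation error
back = np.linalg.pinv(G) @ reduced
aggregation_error = native - back
```

    The paper's method chooses the state vector dimension by trading this aggregation error against the smoothing error of an under-constrained full-resolution inversion.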

  8. Balancing aggregation and smoothing errors in inverse models

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Turner, A. J.; Jacob, D. J.

    2015-01-13

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  9. Aerosol sampling system

    DOE Patents [OSTI]

    Masquelier, Donald A.

    2004-02-10

    A system for sampling air and collecting particulate of a predetermined particle size range. A low pass section has an opening of a preselected size for gathering the air but excluding particles larger than the sample particles. An impactor section is connected to the low pass section and separates the air flow into a bypass air flow that does not contain the sample particles and a product air flow that does contain the sample particles. A wetted-wall cyclone collector, connected to the impactor section, receives the product air flow and traps the sample particles in a liquid.

  10. Link error from craype/2.5.0

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Link error from craype/2.5.0 Link error from craype/2.5.0 January 13, 2016 by Woo-Sun Yang If you build a code using a file called 'configure' with craype/2.5.0, Cray build-tools assumes that you want to use the 'native' link mode (e.g., gcc defaults to dynamic linking), by adding '-Wl,-rpath=/opt/intel/composer_xe_2015/compiler/lib/intel64 -lintlc'. This creates a link error: /usr/bin/ld: cannot find -lintlc A temporary work around is to swap the default craype (2.5.0) with an older or newer

  11. Intel C++ compiler error: stl_iterator_base_types.h

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    C++ compiler error: stl_iterator_base_types.h Intel C++ compiler error: stl_iterator_base_types.h December 7, 2015 by Scott French Because the system-supplied version of GCC is relatively old (4.3.4) it is common practice to load the gcc module on our Cray systems when C++11 support is required under the Intel C++ compilers. While this works as expected under the GCC 4.8 and 4.9 series compilers, the 5.x series can cause Intel C++ compile-time errors similar to the following:

  12. Wind Power Forecasting Error Distributions: An International Comparison; Preprint

    SciTech Connect (OSTI)

    Hodge, B. M.; Lew, D.; Milligan, M.; Holttinen, H.; Sillanpaa, S.; Gomez-Lazaro, E.; Scharff, R.; Soder, L.; Larsen, X. G.; Giebel, G.; Flynn, D.; Dobschinski, J.

    2012-09-01

    Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.

  13. Sample Proficiency Test exercise

    SciTech Connect (OSTI)

    Alcaraz, A; Gregg, H; Koester, C

    2006-02-05

    The current format of the OPCW proficiency tests sends multiple sets of two samples to an analysis laboratory. In each sample set, one is identified as a sample, the other as a blank. This method of conducting proficiency tests differs from how an OPCW designated laboratory would receive authentic samples (a set of three containers, each unidentified, consisting of the authentic sample, a control sample, and a blank sample). This exercise was designed to test the reporting that would result if the proficiency tests were conducted in that authentic-sample format. As such, this is not an official OPCW proficiency test, and the attached report is one method by which LLNL might report its analyses under a more realistic testing scheme. The title on the report, "Report of the Umpteenth Official OPCW Proficiency Test", is therefore meaningless, and provides a bit of whimsy for the analysts and readers of the report.

  14. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    SciTech Connect (OSTI)

    Li, T.S.; et al.

    2016-01-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
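
    The color dependence can be illustrated with synthetic photometry: integrate a source spectrum through the standard and a perturbed throughput, and compare the zeropoint shift for a blue versus a red source. The bandpass, SEDs, and the 5% blue-edge perturbation below are all invented for illustration, not DES values:

```python
import numpy as np

wl = np.linspace(400.0, 550.0, 151)  # wavelength grid in nm (toy bandpass)
dwl = wl[1] - wl[0]

def synthetic_mag(flux, throughput):
    """Broadband magnitude of a source flux integrated through a throughput."""
    return -2.5 * np.log10(np.sum(flux * throughput * wl) * dwl)

std_throughput = np.exp(-0.5 * ((wl - 475.0) / 40.0) ** 2)
# Perturbed throughput: e.g. extra extinction toward the blue edge
obs_throughput = std_throughput * (1.0 - 0.05 * (550.0 - wl) / 150.0)

blue_star = (wl / 475.0) ** -2.0  # steeply blue SED
red_star = (wl / 475.0) ** 2.0    # red SED

# The zeropoint shift differs with source color: a chromatic systematic
d_blue = synthetic_mag(blue_star, obs_throughput) - synthetic_mag(blue_star, std_throughput)
d_red = synthetic_mag(red_star, obs_throughput) - synthetic_mag(red_star, std_throughput)
chromatic_error = d_blue - d_red
```

    A single gray zeropoint correction removes the mean of `d_blue` and `d_red` but not their difference, which is why the paper's correction must use the measured atmospheric transmission and instrumental throughput.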

  15. Servo control booster system for minimizing following error

    DOE Patents [OSTI]

    Wise, William L.

    1985-01-01

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
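
    The by-exception behavior can be sketched in a few lines of simulation; the gains, step count, and threshold below are hypothetical, not values from the patent:

```python
def track(command, response, delta_sr, coarse_gain=0.2, fine_gain=0.9, steps=200):
    """Drive `response` toward `command` with a conventional loop, engaging a
    second, more aggressive correction loop only when the error reaches the
    position feedback resolution least increment delta_sr."""
    for _ in range(steps):
        error = command - response
        if abs(error) >= delta_sr:
            response += fine_gain * error    # second loop active (by exception)
        else:
            response += coarse_gain * error  # second loop transparent
    return response

final = track(command=10.0, response=0.0, delta_sr=1e-3)
```

    Once the error falls below `delta_sr`, the second loop disconnects and the conventional gain alone holds the response, mirroring the patent's "transparent" reversion.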

  16. Error and tolerance studies for the SSC Linac

    SciTech Connect (OSTI)

    Raparia, D.; Chang, Chu Rui; Guy, F.; Hurd, J.W.; Funk, W.; Crandall, K.R.

    1993-05-01

    This paper summarizes error and tolerance studies for the SSC Linac. These studies also include higher-order multipoles. The codes used in these simulations are PARMTEQ, PARMILA, CCLDYN, PARTRACE, and CCLTRACE.

  17. Advisory on the reporting error in the combined propane stocks...

    Annual Energy Outlook [U.S. Energy Information Administration (EIA)]

    Advisory on the reporting error in the combined propane stocks for PADDs 4 and 5 Release Date: June 12, 2013 The U.S. Energy Information Administration issued the following...

  18. Wind Power Forecasting Error Distributions over Multiple Timescales: Preprint

    SciTech Connect (OSTI)

    Hodge, B. M.; Milligan, M.

    2011-03-01

    In this paper, we examine the shape of the persistence model error distribution for ten different wind plants in the ERCOT system over multiple timescales. Comparisons are made between the experimental distribution shape and that of the normal distribution.
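
    A persistence forecast simply reuses the current value, so its h-step error is the h-step change in the series. A sketch of forming the error distribution and checking its shape against the normal (the data below are synthetic, not ERCOT measurements):

```python
import random
import statistics

random.seed(0)
# Toy wind-power time series (random walk, clipped at zero)
power = [50.0]
for _ in range(2000):
    power.append(max(0.0, power[-1] + random.gauss(0, 3)))

horizon = 4  # persistence forecast horizon, in time steps
errors = [power[t + horizon] - power[t] for t in range(len(power) - horizon)]

mu = statistics.mean(errors)
sigma = statistics.stdev(errors)
# Excess kurtosis > 0 indicates fatter tails than the normal distribution
m4 = sum((e - mu) ** 4 for e in errors) / len(errors)
excess_kurtosis = m4 / sigma ** 4 - 3
```

    Real wind-plant persistence errors are typically leptokurtic (positive excess kurtosis), which is why the paper compares the empirical shape against the normal distribution.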

  19. Quantification of the effects of dependence on human error probabiliti...

    Office of Scientific and Technical Information (OSTI)

    In estimating the probabilities of human error in the performance of a series of tasks in a nuclear power plant, the situation-specific characteristics of the series must be ...

  20. WIPP Field Practices: Common Errors and Innovative Solutions | Department

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of Energy WIPP Field Practices: Common Errors and Innovative Solutions WIPP Field Practices: Common Errors and Innovative Solutions What to do when approaching an unfamiliar house for weatherization, with hidden air leakage and a multitude of mysteries? This webinar focuses on the Dos and Don'ts of WIPP weatherization, and is guaranteed to be an hour well spent looking over photographs that show detail and perspective on air sealing, blocking, venting, and other weatherization measures. This

  1. Visio-Error&OmissionNoClouds.vsd

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Error/Omission Process. Process Owner: Department Managers, Corporate Projects and Facilities Projects. February 7, 2008. Key responsibilities: A/E (Architectural/Engineering Firm), SCR (Sandia Contracting Representative), SDR (Sandia Delegated Representative), E&OB (Errors & Omissions Board), PM (Project Manager), REQ (Requester). [Flowchart covering Facilities Projects and Line Item Projects: the PM reviews design findings and begins discovery, then routes the finding by whether there is a cost impact and whether the impact is under 3% of the ICAA, e.g. taking the work out of the project.]

  2. Using doppler radar images to estimate aircraft navigational heading error

    DOE Patents [OSTI]

    Doerry, Armin W.; Jordan, Jay D.; Kim, Theodore J.

    2012-07-03

    A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.

  3. A Possible Calorimetric Error in Heavy Water Electrolysis on Platinum

    SciTech Connect (OSTI)

    Shanahan, K.L.

    2001-03-16

    A systematic error in mass flow calorimetry calibration procedures potentially capable of explaining most positive excess power measurements is described. Data recently interpreted as providing evidence of the Pons-Fleischmann effect with a platinum cathode are reinterpreted with the opposite conclusion. This indicates it is premature to conclude platinum displays a Pons and Fleischmann effect, and places the requirement to evaluate the error's magnitude on all mass flow calorimetric experiments.

  4. September 2004 Water Sampling

    Office of Legacy Management (LM)

    and September 2013 Groundwater and Surface Water Sampling at the Durango, Colorado, Disposal and Processing Sites, March 2014, LMS/DUD/DUP/S00613, RIN 13055370 and 13085577 (DVP, June and September 2013). Contents: Sampling Event Summary; Durango, Colorado, Disposal Site Sample Location Map-June

  5. Compiler-Assisted Detection of Transient Memory Errors

    SciTech Connect (OSTI)

    Tavarageri, Sanket; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-06-09

    The probability of bit flips in hardware memory systems is projected to increase significantly as memory systems continue to scale in size and complexity. Effective hardware-based error detection and correction requires that the complete data path, involving all parts of the memory system, be protected with sufficient redundancy. First, this may be costly to employ on commodity computing platforms and second, even on high-end systems, protection against multi-bit errors may be lacking. Therefore, augmenting hardware error detection schemes with software techniques is of considerable interest. In this paper, we consider software-level mechanisms to comprehensively detect transient memory faults. We develop novel compile-time algorithms to instrument application programs with checksum computation codes so as to detect memory errors. Unlike prior approaches that employ checksums on computational and architectural state, our scheme verifies every data access and works by tracking variables as they are produced and consumed. Experimental evaluation demonstrates that the proposed comprehensive error detection solution is viable as a completely software-only scheme. We also demonstrate that with limited hardware support, overheads of error detection can be further reduced.
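    The produce/consume checksum tracking described above can be sketched in a few lines. This is an illustrative Python model of the idea (record a checksum when data is produced, verify it on every consume), not the paper's compiler instrumentation; the `TrackedArray` class and the CRC32 choice are assumptions for demonstration:

```python
import zlib

def checksum(buf: bytes) -> int:
    return zlib.crc32(buf)

class TrackedArray:
    """Records a checksum when data is produced; verifies it on consume."""
    def __init__(self, data: bytearray):
        self.data = data
        self.crc = checksum(bytes(data))   # recorded at "produce" time

    def consume(self) -> bytearray:
        # Verify on every access, mirroring produce/consume tracking
        if checksum(bytes(self.data)) != self.crc:
            raise RuntimeError("transient memory error detected")
        return self.data

a = TrackedArray(bytearray(range(16)))
assert bytes(a.consume()) == bytes(range(16))   # a clean read passes

a.data[3] ^= 0x04          # simulate a transient bit flip in memory
try:
    a.consume()
    detected = False
except RuntimeError:
    detected = True
```

In this sketch the flip is caught at the next consume; the paper's compiler pass would insert equivalent checks automatically.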

  6. Monte Carlo analysis of localization errors in magnetoencephalography

    SciTech Connect (OSTI)

    Medvick, P.A.; Lewis, P.S.; Aine, C.; Flynn, E.R.

    1989-01-01

    In magnetoencephalography (MEG), the magnetic fields created by electrical activity in the brain are measured on the surface of the skull. To determine the location of the activity, the measured field is fit to an assumed source generator model, such as a current dipole, by minimizing chi-square. For current dipoles and other nonlinear source models, the fit is performed by an iterative least squares procedure such as the Levenberg-Marquardt algorithm. Once the fit has been computed, analysis of the resulting value of chi-square can determine whether the assumed source model is adequate to account for the measurements. If the source model is adequate, then the effect of measurement error on the fitted model parameters must be analyzed. Although these kinds of simulation studies can provide a rough idea of the effect that measurement error can be expected to have on source localization, they cannot provide detailed enough information to determine the effects that the errors in a particular measurement situation will produce. In this work, we introduce and describe the use of Monte Carlo-based techniques to analyze model fitting errors for real data. Given the details of the measurement setup and a statistical description of the measurement errors, these techniques determine the effects the errors have on the fitted model parameters. The effects can then be summarized in various ways such as parameter variances/covariances or multidimensional confidence regions. 8 refs., 3 figs.
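    The Monte Carlo procedure described above (perturb the measurement with its known error statistics, refit, and summarize the spread of fitted parameters as variances/covariances) can be sketched as follows. This is a hedged illustration using a simple linear model in place of a nonlinear dipole fit; the model, noise level, and trial count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in source model: y = a*x + b (in MEG this would be a dipole model)
x = np.linspace(0.0, 1.0, 50)
a_true, b_true, sigma = 2.0, -1.0, 0.05
y_meas = a_true * x + b_true + rng.normal(0.0, sigma, x.size)

A = np.column_stack([x, np.ones_like(x)])

def fit(y):
    # Least-squares fit, i.e. minimizing chi-square for uniform noise
    p, *_ = np.linalg.lstsq(A, y, rcond=None)
    return p

p_hat = fit(y_meas)

# Monte Carlo: re-perturb the measurement with the known noise statistics
# and refit; the spread of refitted parameters estimates the fit errors.
trials = np.array([fit(y_meas + rng.normal(0.0, sigma, x.size))
                   for _ in range(500)])
param_std = trials.std(axis=0)   # per-parameter uncertainty
param_cov = np.cov(trials.T)     # full covariance, as in the paper's summaries
```

The same loop applies unchanged to an iterative nonlinear fit (e.g. Levenberg-Marquardt); only the `fit` function changes.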

  7. September 2004 Water Sampling

    Office of Legacy Management (LM)

    ... The gross alpha, gross beta, radium-226, and radium-228 method blank results were below the DLC. Inductively Coupled Plasma (ICP) Interference Check Sample (ICS) Analysis ICP ...

  8. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Groundwater, Surface Water, and Alternate Water Supply System Sampling at the Riverton, Wyoming, Processing Site, December 2013, LMS/RVT/S00913 ...

  9. Methodology and Process for Condition Assessment at Existing Hydropower Plants

    SciTech Connect (OSTI)

    Zhang, Qin Fen; Smith, Brennan T; Cones, Marvin; March, Patrick; Dham, Rajesh; Spray, Michael

    2012-01-01

    The Hydropower Advancement Project (HAP) was initiated by the U.S. Department of Energy Office of Energy Efficiency and Renewable Energy to develop and implement a systematic process with a standard methodology to identify opportunities for performance improvement at existing hydropower facilities and to predict and trend the overall condition of, and improvement opportunity within, the U.S. hydropower fleet. The concept of performance for the HAP focuses on water use efficiency: how well a plant or individual unit converts potential energy to electrical energy over a long-term averaging period of a year or more. Performance improvement involves not only optimization of plant dispatch and scheduling but also enhancement of efficiency and availability through advanced technology and asset upgrades, and thus requires inspection and condition assessment of equipment, control systems, and other generating assets. This paper discusses the standard methodology and process for condition assessment of approximately 50 nationwide facilities, including sampling techniques to ensure valid expansion of the 50 assessment results to the entire hydropower fleet. The application and refining process and the results from three demonstration assessments are also presented in this paper.

  10. Investigating surety methodologies for cognitive systems.

    SciTech Connect (OSTI)

    Caudell, Thomas P. (University of New Mexico, Albuquerque, NM); Peercy, David Eugene; Mills, Kristy; Caldera, Eva

    2006-11-01

    Advances in cognitive science provide a foundation for new tools that promise to advance human capabilities with significant positive impacts. As with any new technology breakthrough, associated technical and non-technical risks are involved. Sandia has mitigated both technical and non-technical risks by applying advanced surety methodologies in such areas as nuclear weapons, nuclear reactor safety, nuclear materials transport, and energy systems. In order to apply surety to the development of cognitive systems, we must understand the concepts and principles that characterize the certainty of a system's operation as well as the risk areas of cognitive sciences. This SAND report documents a preliminary spectrum of risks involved with cognitive sciences, and identifies some surety methodologies that can be applied to potentially mitigate such risks. Some potential areas for further study are recommended. In particular, a recommendation is made to develop a cognitive systems epistemology framework for more detailed study of these risk areas and applications of surety methods and techniques.

  11. The Laboratory Microfusion Facility standardized costing methodology

    SciTech Connect (OSTI)

    Harris, D.B.; Dudziak, D.J.

    1988-01-01

    The DOE-organized Laboratory Microfusion Facility (LMF) has a goal of generating 1000 MJ of fusion yield in order to perform weapons physics experiments, simulate weapons effects, and develop high-gain inertial confinement fusion (ICF) targets for military and civil applications. Three driver options are currently under serious consideration for this facility: KrF lasers, Nd:glass lasers, and light-ion accelerators. To provide a basis for comparing the cost estimates for the different driver technologies, a standardized costing methodology has been devised. This methodology defines the driver-independent costs and indirect cost multipliers for the LMF to aid in the comparison of the LMF proposal cost estimates. 10 refs., 4 tabs.

  12. Creating Sample Plans

    Energy Science and Technology Software Center (OSTI)

    1999-03-24

    The program has been designed to increase the accuracy and reduce the preparation time for completing sampling plans. It consists of four files: 1. Analyte/Combination (AnalCombo): a list of analytes and combinations of analytes that can be requested of the onsite and offsite labs. Whenever a specific combination of analytes or suite names appears on the same line as the code number, this indicates that one sample can be placed in one bottle to be analyzed for these parameters. A code number is assigned for each analyte and combination of analytes. 2. Sampling Plans Database (SPDb): a database that contains all of the analytes and combinations of analytes along with the basic information required for preparing a sample plan. That basic information includes the following fields: matrix, hold time, preservation, sample volume, container size, whether the bottle caps are taped, and acceptable choices. 3. Sampling Plans Create (SPcreate): a file that looks up information from the Sampling Plans Database and the Job Log File. 4. Job Log File (JLF98): a major database used by Sample Management Services for recording more than 100 fields of information.

  13. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Bluewater, New Mexico, Disposal Site, February 2014, LMS/BLU/S01113, RIN 13115746 (DVP, November 2013). Contents: Sampling Event Summary; Bluewater, New Mexico, Disposal Site Sample Location Map; Data Assessment Summary

  14. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Burrell, Pennsylvania, Disposal Site, January 2014, LMS/BUR/S01113, RIN 13095638 (DVP, November 2013). Contents: Sampling Event Summary; Burrell, Pennsylvania, Disposal Site Sample Location Map; Data Assessment Summary

  15. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Canonsburg, Pennsylvania, Disposal Site, February 2014, LMS/CAN/S01113, RIN 13095639 (DVP, November 2013). Contents: Sampling Event Summary; Canonsburg, Pennsylvania, Disposal Site Sample Location Map; Data Assessment Summary

  16. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Green River, Utah, Disposal Site, August 2013, LMS/GRN/S00613, RIN 13065402 (DVP, June 2013). Contents: Sampling Event Summary; Data Assessment Summary; Water Sampling Field Activities

  17. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Lakeview, Oregon, Disposal Site, August 2014, LMS/LKD/S00514, RIN 14056157 (DVP, May 2014). Contents: Sampling Event Summary; Lakeview, Oregon, Disposal Site Sample Location Map; Data Assessment Summary

  18. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Lakeview, Oregon, Processing Site, August 2014, LMS/LKP/S00514, RIN 14056157 and 14056158 (DVP, May 2014). Contents: Sampling Event Summary; Lakeview, Oregon, Processing Site Sample Location Map; Data Assessment Summary

  19. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Riverton, Wyoming, Processing Site, September 2013, LMS/RVT/S00613, RIN 13065379 (DVP, June 2013). Contents: Sampling Event Summary; Riverton, Wyoming, Processing Site Sample Location Map; Data Assessment Summary

  20. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Riverton, Wyoming, Processing Site, February 2016, LMS/RVT/S00915, RINs 15097345, 15097346, and 15097347 (DVP, September 2015). Contents: Sampling Event Summary; Riverton, Wyoming, Processing Site Planned Sampling Location Map; Data Assessment Summary

  1. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Rifle, Colorado, New and Old Processing Sites, January 2014, LMS/RFN/RFO/S01113, RIN 13115731 (DVP, November 2013). Contents: Sampling Event Summary; New Rifle, Colorado, Processing Site Sample Location Map; Old Rifle, Colorado, Processing

  2. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Old and New Rifle, Colorado, Processing Sites, January 2015, LMS/RFN/RFO/S01114, RINs 14106568 and 14106569 (DVP, November 2014). Contents: Sampling Event Summary; New Rifle, Colorado, Processing Site Planned Sampling Map; Old Rifle,

  3. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Slick Rock, Colorado, Processing Sites, January 2016, LMS/SRE/SRW/S00915, RINs 15087319 and 15107424 (DVP, September 2015). Contents: Sampling Event Summary; Slick Rock, Colorado, Processing Sites Sample Location Map; Data Assessment

  4. Sampling system and method

    DOE Patents [OSTI]

    Decker, David L.; Lyles, Brad F.; Purcell, Richard G.; Hershey, Ronald Lee

    2013-04-16

    The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.

  5. Adaptive Sampling Proxy Application

    Energy Science and Technology Software Center (OSTI)

    2012-10-22

    ASPA is an implementation of an adaptive sampling algorithm [1-3], which is used to reduce the computational expense of computer simulations that couple disparate physical scales. The purpose of ASPA is to encapsulate the algorithms required for adaptive sampling independently from any specific application, so that alternative algorithms and programming models for exascale computers can be investigated more easily.

  6. Biological sample collector

    DOE Patents [OSTI]

    Murphy, Gloria A.

    2010-09-07

    A biological sample collector is adapted to collect several biological samples in a plurality of filter wells. A biological sample collector may comprise a manifold plate for mounting a filter plate thereon, the filter plate having a plurality of filter wells therein; a hollow slider for engaging and positioning a tube that slides therethrough; and a slide case within which the hollow slider travels to allow the tube to be aligned with a selected filter well of the plurality of filter wells, wherein when the tube is aligned with the selected filter well, the tube is pushed through the hollow slider and into the selected filter well to sealingly engage the selected filter well and to allow the tube to deposit a biological sample onto a filter in the bottom of the selected filter well. The biological sample collector may be portable.

  7. Economic penalties of problems and errors in solar energy systems

    SciTech Connect (OSTI)

    Raman, K.; Sparkes, H.R.

    1983-01-01

    Experience with a large number of installed solar energy systems in the HUD Solar Program has shown that a variety of problems and design/installation errors have occurred in many solar systems, sometimes resulting in substantial additional costs for repair and/or replacement. In this paper, the effect of problems and errors on the economics of solar energy systems is examined. A method is outlined for doing this in terms of selected economic indicators. The method is illustrated by a simple example of a residential solar DHW system. An example of an installed, instrumented solar energy system in the HUD Solar Program is then discussed. Detailed results are given for the effects of the problems and errors on the cash flow, cost of delivered heat, discounted payback period, and life-cycle cost of the solar energy system. Conclusions are drawn regarding the most suitable economic indicators for showing the effects of problems and errors in solar energy systems. A method is outlined for deciding on the maximum justifiable expenditure for maintenance on a solar energy system with problems or errors.
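    One of the economic indicators named above, the discounted payback period, can be sketched to show how a repair cost caused by a design/installation error delays payback. The capital cost, annual saving, discount rate, and repair figures below are hypothetical, and the function is an illustrative sketch rather than the paper's method:

```python
def discounted_payback(capital, annual_saving, rate, repairs=None, horizon=30):
    """Years until discounted cumulative savings (minus repair costs)
    recover the initial capital cost; None if never within the horizon."""
    repairs = repairs or {}
    cum = -capital
    for year in range(1, horizon + 1):
        cash = annual_saving - repairs.get(year, 0.0)
        cum += cash / (1.0 + rate) ** year
        if cum >= 0:
            return year
    return None

# Hypothetical solar DHW system: $3000 installed, $400/yr savings, 5% rate
base = discounted_payback(3000, 400, 0.05)
# Same system with an $800 repair in year 3 due to an installation error
with_error = discounted_payback(3000, 400, 0.05, repairs={3: 800})
```

With these assumed figures the repair pushes the discounted payback from year 10 to year 13, the kind of penalty the paper quantifies across its indicators.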

  8. September 2004 Water Sampling

    Office of Legacy Management (LM)

    ... 10 pCi/L Liquid Scintillation LMR-15 Uranium Vanadium Zinc Total No. of Analytes 4 0 ... 26, 2013 TO: Rick Findlay FROM: Jeff Price SUBJECT: Trip Report (LTHMP Sampling) ...

  9. Water Sample Concentrator

    ScienceCinema (OSTI)

    Idaho National Laboratory

    2010-01-08

    Automated portable device that concentrates and packages a sample of suspected contaminated water for safe, efficient transport to a qualified analytical laboratory. This technology will help safeguard against pathogen contamination or chemical and biological contamination.

  10. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Groundwater, Surface Water, Produced Water, and Natural Gas Sampling at the Gasbuggy, New Mexico, Site, October 2014, LMS/GSB/S00614. Available for sale to the public from: U.S. ...

  11. Dissolution actuated sample container

    DOE Patents [OSTI]

    Nance, Thomas A.; McCoy, Frank T.

    2013-03-26

    A sample collection vial and process of using a vial is provided. The sample collection vial has an opening secured by a dissolvable plug. When dissolved, liquids may enter into the interior of the collection vial passing along one or more edges of a dissolvable blocking member. As the blocking member is dissolved, a spring actuated closure is directed towards the opening of the vial which, when engaged, secures the vial contents against loss or contamination.

  12. SAMPLING AND ANALYSIS PROTOCOLS

    SciTech Connect (OSTI)

    Jannik, T; P Fledderman, P

    2007-02-09

    Radiological sampling and analyses are performed to collect data for a variety of specific reasons covering a wide range of projects. These activities include: Effluent monitoring; Environmental surveillance; Emergency response; Routine ambient monitoring; Background assessments; Nuclear license termination; Remediation; Deactivation and decommissioning (D&D); and Waste management. In this chapter, effluent monitoring and environmental surveillance programs at nuclear operating facilities and radiological sampling and analysis plans for remediation and D&D activities will be discussed.

  13. Liquid sampling system

    DOE Patents [OSTI]

    Larson, L.L.

    1984-09-17

    A conduit extends from a reservoir through a sampling station and back to the reservoir in a closed loop. A jet ejector in the conduit establishes suction for withdrawing liquid from the reservoir. The conduit has a self-healing septum therein upstream of the jet ejector for receiving one end of a double-ended cannula, the other end of which is received in a serum bottle for sample collection. Gas is introduced into the conduit at a gas bleed between the sample collection bottle and the reservoir. The jet ejector evacuates gas from the conduit and the bottle and aspirates a column of liquid from the reservoir at a high rate. When the withdrawn liquid reaches the jet ejector the rate of flow therethrough reduces substantially and the gas bleed increases the pressure in the conduit for driving liquid into the sample bottle, the gas bleed forming a column of gas behind the withdrawn liquid column and interrupting the withdrawal of liquid from the reservoir. In the case of hazardous and toxic liquids, the sample bottle and the jet ejector may be isolated from the reservoir and may be further isolated from a control station containing remote manipulation means for the sample bottle and control valves for the jet ejector and gas bleed. 5 figs.

  14. Liquid sampling system

    DOE Patents [OSTI]

    Larson, Loren L.

    1987-01-01

    A conduit extends from a reservoir through a sampling station and back to the reservoir in a closed loop. A jet ejector in the conduit establishes suction for withdrawing liquid from the reservoir. The conduit has a self-healing septum therein upstream of the jet ejector for receiving one end of a double-ended cannula, the other end of which is received in a serum bottle for sample collection. Gas is introduced into the conduit at a gas bleed between the sample collection bottle and the reservoir. The jet ejector evacuates gas from the conduit and the bottle and aspirates a column of liquid from the reservoir at a high rate. When the withdrawn liquid reaches the jet ejector the rate of flow therethrough reduces substantially and the gas bleed increases the pressure in the conduit for driving liquid into the sample bottle, the gas bleed forming a column of gas behind the withdrawn liquid column and interrupting the withdrawal of liquid from the reservoir. In the case of hazardous and toxic liquids, the sample bottle and the jet ejector may be isolated from the reservoir and may be further isolated from a control station containing remote manipulation means for the sample bottle and control valves for the jet ejector and gas bleed.

  15. Reducing collective quantum state rotation errors with reversible dephasing

    SciTech Connect (OSTI)

    Cox, Kevin C.; Norcia, Matthew A.; Weiner, Joshua M.; Bohnet, Justin G.; Thompson, James K.

    2014-12-29

    We demonstrate that reversible dephasing via inhomogeneous broadening can greatly reduce collective quantum state rotation errors, and observe the suppression of rotation errors by more than 21 dB in the context of collective population measurements of the spin states of an ensemble of 2.1 x 10{sup 5} laser cooled and trapped {sup 87}Rb atoms. The large reduction in rotation noise enables direct resolution of spin state populations 13(1) dB below the fundamental quantum projection noise limit. Further, the spin state measurement projects the system into an entangled state with 9.5(5) dB of directly observed spectroscopic enhancement (squeezing) relative to the standard quantum limit, whereas no enhancement would have been obtained without the suppression of rotation errors.

  16. Waste Package Component Design Methodology Report

    SciTech Connect (OSTI)

    D.C. Mecham

    2004-07-12

    This Executive Summary provides an overview of the methodology being used by the Yucca Mountain Project (YMP) to design waste packages and ancillary components. This summary information is intended for readers with general interest, but also provides technical readers a general framework surrounding a variety of technical details provided in the main body of the report. The purpose of this report is to document and ensure that appropriate design methods are used in the design of waste packages and ancillary components (the drip shields and emplacement pallets). The methodology includes identification of necessary design inputs, justification of design assumptions, and use of appropriate analysis methods and computational tools. This design work is subject to "Quality Assurance Requirements and Description". The document is primarily intended for internal use and technical guidance for a variety of design activities. It is recognized that a wide audience, including project management, the U.S. Department of Energy (DOE), the U.S. Nuclear Regulatory Commission, and others, is interested to various levels of detail in the design methods; the report therefore covers a wide range of topics at varying levels of detail. Due to the preliminary nature of the design, readers can expect to encounter varied levels of detail in the body of the report. It is expected that technical information used as input to design documents will be verified and taken from the latest versions of reference sources given herein. This revision of the methodology report has evolved with changes in the waste package, drip shield, and emplacement pallet designs over many years and may be further revised as the design is finalized. Different components and analyses are at different stages of development. Some parts of the report are detailed, while other less detailed parts are likely to undergo further refinement. The design methodology is intended to provide designs that satisfy the safety and operational

  17. Optimal sampling efficiency in Monte Carlo sampling with an approximat...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Optimal sampling efficiency in Monte Carlo sampling with an approximate potential Citation Details In-Document Search Title: Optimal sampling efficiency in Monte ...

  18. Laser Phase Errors in Seeded Free Electron Lasers

    SciTech Connect (OSTI)

    Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC

    2012-04-17

    Harmonic seeding of free electron lasers has attracted significant attention as a method for producing transform-limited pulses in the soft x-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality and impede production of transform-limited pulses. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.

  19. MPI errors from cray-mpich/7.3.0

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    MPI errors from cray-mpich/7.3.0. January 6, 2016, by Ankit Bhagatwala. A change in the MPICH2 library that now strictly enforces non-overlapping buffers in MPI collectives may cause some MPI applications that use overlapping buffers to fail at runtime. As an example, one of the routines affected is MPI_ALLGATHER. There are several possible fixes. The cleanest one is to specify MPI_IN_PLACE instead of the address of the send buffer for cases where sendbuf and

  20. When soft controls get slippery: User interfaces and human error

    SciTech Connect (OSTI)

    Stubler, W.F.; O'Hara, J.M.

    1998-12-01

    Many types of products and systems that have traditionally featured physical control devices are now being designed with soft controls--input formats appearing on computer-based display devices and operated by a variety of input devices. A review of complex human-machine systems found that soft controls are particularly prone to some types of errors and may affect overall system performance and safety. This paper discusses the application of design approaches for reducing the likelihood of these errors and for enhancing usability, user satisfaction, and system performance and safety.

  1. Scalable error correction in distributed ion trap computers

    SciTech Connect (OSTI)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-11-15

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiment.

  2. Finite Bandwidth Related Errors in Noise Parameter Determination of PHEMTs

    SciTech Connect (OSTI)

    Wiatr, Wojciech

    2005-08-25

    We analyze errors in the determination of the four noise parameters due to finite measurement bandwidth and the delay time in the source circuit. The errors are especially large when characterizing low-noise microwave transistors at low microwave frequencies. They result from the variation of spectral noise density across the measuring receiver band, due to resonant interaction of the highly mismatched transistor input with the source termination. We also show effects of virtual de-correlation of the transistor's noise waves due to finite delay time at the input.

  3. JLab SRF Cavity Fabrication Errors, Consequences and Lessons Learned

    SciTech Connect (OSTI)

    Frank Marhauser

    2011-09-01

    Today, elliptical superconducting RF (SRF) cavities are preferably made from deep-drawn niobium sheets, as pursued at Jefferson Laboratory (JLab). The fabrication of a cavity incorporates various cavity cell machining, trimming, and electron beam welding (EBW) steps, as well as surface chemistry, that add forming errors creating geometrical deviations of the cavity shape from its design. An analysis of in-house built cavities over the last years revealed significant errors in cavity production. Past fabrication flaws are described, and lessons learned have been applied successfully to the most recent in-house series production of multi-cell cavities.

  4. A BASIS FOR MODIFYING THE TANK 12 COMPOSITE SAMPLING DESIGN

    SciTech Connect (OSTI)

    Shine, G.

    2014-11-25

, the recommendation is to construct each sample composite using four or five source samples. Although the variance using 5 source samples per composite sample (Composite Sample Design Option (c)) was slightly less than the variance using 4 source samples per composite sample (Composite Sample Design Option (b)), there is no practical difference between those variances. This does not consider that the measurement error variance, which is the same for all composite sample design options considered in this report, will further dilute any differences. Composite Sample Design Option (a) had the largest variance for the mean concentration in the three composite samples and should be avoided. These results are consistent with Pavletich (2014b), which utilizes a low-elevation and a high-elevation mound source sample and two floor source samples for each composite sample. Utilizing the four-source-samples-per-composite design, Pavletich (2014b) uses aliquots of Floor Sample 4 for two composite samples.
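The variance comparison above can be sketched numerically. The formula below is a standard composite-sampling variance model, not one taken from the report, and all variance values are illustrative assumptions:

```python
# Hypothetical illustration: variance of the mean of c composite samples,
# each blended from n source samples, given between-source sampling
# variance var_s and per-analysis measurement variance var_m.

def composite_mean_variance(var_s, var_m, n_sources, n_composites):
    # Blending n source aliquots averages down the between-source
    # variability; each composite is then analyzed once, so the full
    # measurement error variance is added back per composite.
    per_composite = var_s / n_sources + var_m
    return per_composite / n_composites

v4 = composite_mean_variance(var_s=1.0, var_m=0.5, n_sources=4, n_composites=3)
v5 = composite_mean_variance(var_s=1.0, var_m=0.5, n_sources=5, n_composites=3)
print(v4, v5)  # 0.25 vs ~0.233 -- little practical difference
```

With a nonzero measurement variance, moving from 4 to 5 source samples per composite changes the variance of the mean only marginally, matching the report's conclusion.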

  5. Update of Part 61 Impacts Analysis Methodology. Methodology report. Volume 1

    SciTech Connect (OSTI)

    Oztunali, O.I.; Roles, G.W.

    1986-01-01

    Under contract to the US Nuclear Regulatory Commission, the Envirosphere Company has expanded and updated the impacts analysis methodology used during the development of the 10 CFR Part 61 rule to allow improved consideration of the costs and impacts of treatment and disposal of low-level waste that is close to or exceeds Class C concentrations. The modifications described in this report principally include: (1) an update of the low-level radioactive waste source term, (2) consideration of additional alternative disposal technologies, (3) expansion of the methodology used to calculate disposal costs, (4) consideration of an additional exposure pathway involving direct human contact with disposed waste due to a hypothetical drilling scenario, and (5) use of updated health physics analysis procedures (ICRP-30). Volume 1 of this report describes the calculational algorithms of the updated analysis methodology.

  6. Identification of potential biases in the characterization sampling and analysis process

    SciTech Connect (OSTI)

    Winkelman, W.D.; Eberlein, S.J.

    1995-12-01

The Tank Waste Remediation System (TWRS) Characterization Project is responsible for providing quality characterization data to TWRS. Documentation of sampling and analysis process errors and biases can be used to improve the process that provides those data. The sampling and analysis process consists of removing a sample from a specified waste tank, getting it to the laboratory, and analyzing it to provide the data identified in the Tank Characterization Plan (TCP) and Sampling and Analysis Plan (SAP). To understand the data fully, an understanding of the errors and biases that can be generated during the process is necessary. Most measurement systems have the ability to detect errors and biases statistically by using standards and alternate measurement techniques. Only the laboratory analysis part of the tank sampling and analysis process at TWRS has this ability. Therefore, it is necessary to use other methods to identify and prioritize the biases involved in the process.

  7. Fluid sampling system

    DOE Patents [OSTI]

    Houck, Edward D.

    1994-01-01

A fluid sampling system allows sampling of radioactive liquid without spillage. A feed tank is connected to a liquid transfer jet powered by a pumping chamber pressurized by compressed air. The liquid is pumped upwardly into a sampling jet of a venturi design having a lumen with an inlet, an outlet, a constricted middle portion, and a port located above the constricted middle portion. The liquid is passed under pressure through the constricted portion, causing its velocity to increase and its pressure to decrease, thereby preventing liquid from escaping. A septum sealing the port can be pierced by a two-pointed hollow needle leading into a sample bottle also sealed by a pierceable septum affixed to one end. The bottle is evacuated by flow through the sample jet; cyclic variation in the sampler jet pressure periodically leaves the evacuated bottle with lower pressure than that of the port, thus causing solution to pass into the bottle. The remaining solution in the system is returned to the feed tank via a holding tank.
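The sampling jet described above relies on the venturi effect. As a hedged illustration (the dimensions, fluid density, and inlet conditions below are assumptions, not values from the patent), continuity plus Bernoulli's equation show why pressure drops in the constricted middle portion:

```python
# Illustrative venturi sketch: throat pressure from Bernoulli's equation
# and continuity for an incompressible liquid. All numbers are assumed.

def throat_pressure(p_inlet, velocity_inlet, d_inlet, d_throat, density=1000.0):
    # Continuity for circular sections: v2 = v1 * (d1/d2)**2.
    v_throat = velocity_inlet * (d_inlet / d_throat) ** 2
    # Bernoulli (incompressible, level flow): p2 = p1 + rho/2 * (v1^2 - v2^2).
    return p_inlet + 0.5 * density * (velocity_inlet ** 2 - v_throat ** 2)

# Halving the diameter quadruples the velocity, so the throat pressure
# falls well below the inlet pressure.
p = throat_pressure(p_inlet=101325.0, velocity_inlet=1.0, d_inlet=0.02, d_throat=0.01)
print(p < 101325.0)  # True
```

The cyclic variation of this low throat pressure is what alternately evacuates the sample bottle and draws solution into it.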

  8. Fluid sampling system

    DOE Patents [OSTI]

    Houck, E.D.

    1994-10-11

A fluid sampling system allows sampling of radioactive liquid without spillage. A feed tank is connected to a liquid transfer jet powered by a pumping chamber pressurized by compressed air. The liquid is pumped upwardly into a sampling jet of a venturi design having a lumen with an inlet, an outlet, a constricted middle portion, and a port located above the constricted middle portion. The liquid is passed under pressure through the constricted portion, causing its velocity to increase and its pressure to decrease, thereby preventing liquid from escaping. A septum sealing the port can be pierced by a two-pointed hollow needle leading into a sample bottle also sealed by a pierceable septum affixed to one end. The bottle is evacuated by flow through the sample jet; cyclic variation in the sampler jet pressure periodically leaves the evacuated bottle with lower pressure than that of the port, thus causing solution to pass into the bottle. The remaining solution in the system is returned to the feed tank via a holding tank. 4 figs.

  9. Visual Sample Plan

    Energy Science and Technology Software Center (OSTI)

    2007-10-25

VSP selects the appropriate number and location of environmental samples to ensure that the results of statistical tests performed to provide input to risk decisions have the required confidence and performance. VSP Version 5.0 provides sample-size equations or algorithms needed by specific statistical tests appropriate for specific environmental sampling objectives. It also provides data quality assessment and statistical analysis functions to support evaluation of the data and determine whether the data support decisions regarding sites suspected of contamination. The easy-to-use program is highly visual and graphic. VSP runs on personal computers with Microsoft Windows operating systems (98, NT, 2000, Millennium Edition, CE, and XP). Designed primarily for project managers and users without expertise in statistics, VSP is applicable to two- and three-dimensional populations to be sampled (e.g., rooms and buildings, surface soil, a defined layer of subsurface soil, water bodies, and other similar applications) for studies of environmental quality. VSP is also applicable for designing sampling plans for assessing chem/rad/bio threat and hazard identification within rooms and buildings, and for designing geophysical surveys for UXO identification.
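As a sketch of the kind of sample-size equation such tools provide, the standard one-sample mean-test formula is shown below; treating it as exactly what VSP 5.0 computes is an assumption, and the sigma/delta values are illustrative:

```python
# Standard sample-size formula for a one-sided, one-sample mean test:
# n = ((z_{1-alpha} + z_{1-beta}) * sigma / delta)^2, rounded up.
from math import ceil
from statistics import NormalDist

def sample_size(sigma, delta, alpha=0.05, beta=0.10):
    z_a = NormalDist().inv_cdf(1 - alpha)  # type I error quantile
    z_b = NormalDist().inv_cdf(1 - beta)   # type II error quantile
    # n grows with variability (sigma) and shrinks as the detectable
    # difference (delta) between action level and true mean grows.
    return ceil(((z_a + z_b) * sigma / delta) ** 2)

print(sample_size(sigma=2.0, delta=1.0))  # 35
```

Doubling the standard deviation roughly quadruples the required number of samples, which is why characterizing variability up front matters so much in sampling design.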

  10. V-172: ISC BIND RUNTIME_CHECK Error Lets Remote Users Deny Service...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

V-172: ISC BIND RUNTIME_CHECK Error Lets Remote Users Deny Service Against Recursive Resolvers.

  11. Transuranic waste characterization sampling and analysis plan

    SciTech Connect (OSTI)

    NONE

    1994-12-31

Los Alamos National Laboratory (the Laboratory) is located approximately 25 miles northwest of Santa Fe, New Mexico, situated on the Pajarito Plateau. Technical Area 54 (TA-54), one of the Laboratory's many technical areas, is a radioactive and hazardous waste management and disposal area located within the Laboratory's boundaries. The purpose of this transuranic waste characterization, sampling, and analysis plan (CSAP) is to provide a methodology for identifying, characterizing, and sampling approximately 25,000 containers of transuranic waste stored at Pads 1, 2, and 4, Dome 48, and the Fiberglass Reinforced Plywood Box Dome at TA-54, Area G, of the Laboratory. Transuranic waste currently stored at Area G was generated primarily from research and development activities, processing and recovery operations, and decontamination and decommissioning projects. This document was created to facilitate compliance with several regulatory requirements and program drivers that are relevant to waste management at the Laboratory, including concerns of the New Mexico Environment Department.

  12. Methodologies for Reservoir Characterization Using Fluid Inclusion Gas

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Methodologies for Reservoir Characterization Using Fluid Inclusion Gas Chemistry | Department of Energy. Presentation at the April 2013 peer review meeting held in Denver, Colorado. dilley_methodologies_peer2013.pdf (2.79 MB). More Documents & Publications: Innovative Computational Tools for Reducing Exploration Risk

  13. A Proposed Methodology to Determine the Leverage Impacts of Technology

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

A Proposed Methodology to Determine the Leverage Impacts of Technology Deployment Programs 2008 | Department of Energy. This report contains a proposed methodology to determine the leverage impacts of technology deployment programs for the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy. Proposed Methodology Report (1.17 MB).

  14. Energy Intensity Indicators: Methodology Downloads | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Energy Intensity Indicators: Methodology Downloads. The files listed below contain methodology documentation and related studies that support the information presented on this website. The files are available to view and/or download as Adobe Acrobat PDF files. Energy Indicators System: Index Construction Methodology (101.17 KB); Changing the Base Year for the Index (23.98 KB); "A Note on the Fisher Ideal Index Decomposition for Structural Change in Energy

  15. DIGITAL TECHNOLOGY BUSINESS CASE METHODOLOGY GUIDE & WORKBOOK

    SciTech Connect (OSTI)

    Thomas, Ken; Lawrie, Sean; Hart, Adam; Vlahoplus, Chris

    2014-09-01

    Performance advantages of the new digital technologies are widely acknowledged, but it has proven difficult for utilities to derive business cases for justifying investment in these new capabilities. Lack of a business case is often cited by utilities as a barrier to pursuing wide-scale application of digital technologies to nuclear plant work activities. The decision to move forward with funding usually hinges on demonstrating actual cost reductions that can be credited to budgets and thereby truly reduce O&M or capital costs. Technology enhancements, while enhancing work methods and making work more efficient, often fail to eliminate workload such that it changes overall staffing and material cost requirements. It is critical to demonstrate cost reductions or impacts on non-cost performance objectives in order for the business case to justify investment by nuclear operators. This Business Case Methodology approaches building a business case for a particular technology or suite of technologies by detailing how they impact an operator in one or more of the three following areas: Labor Costs, Non-Labor Costs, and Key Performance Indicators (KPIs). Key to those impacts will be identifying where the savings are harvestable, meaning they result in an actual reduction in headcount and/or cost. The report consists of a Digital Technology Business Case Methodology Guide and an accompanying spreadsheet workbook that will enable the user to develop a business case.

  16. Viscous sludge sample collector

    DOE Patents [OSTI]

    Beitel, George A [Richland, WA

    1983-01-01

    A vertical core sample collection system for viscous sludge. A sample tube's upper end has a flange and is attached to a piston. The tube and piston are located in the upper end of a bore in a housing. The bore's lower end leads outside the housing and has an inwardly extending rim. Compressed gas, from a storage cylinder, is quickly introduced into the bore's upper end to rapidly accelerate the piston and tube down the bore. The lower end of the tube has a high sludge entering velocity to obtain a full-length sludge sample without disturbing strata detail. The tube's downward motion is stopped when its upper end flange impacts against the bore's lower end inwardly extending rim.

  17. Sampling Report for August 15, 2014 WIPP Samples

    Office of Environmental Management (EM)

Sampling Report for August 15, 2014 WIPP Samples. UNCLASSIFIED. Forensic Science Center, December 19, 2014. Lawrence ...

  18. DOE Challenge Home Label Methodology | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

DOE Challenge Home Label Methodology. A document of the U.S. Department of Energy's Zero Energy Ready Home (formerly Challenge Home) program. ch_label_methodology_1012.pdf (222.71 KB). More Documents & Publications: DOE Zero Energy Ready Home Partner Resources; Indoor airPLUS Construction Specifications; Indoor airPLUS Construction Specifications Version 1 (Rev. 02)

  19. Shape error analysis for reflective nano focusing optics

    SciTech Connect (OSTI)

    Modi, Mohammed H.; Idir, Mourad

    2010-06-23

Focusing performance of reflective x-ray optics is determined by surface figure accuracy. Any surface imperfection present on such optics introduces a phase error in the outgoing wave fields, so the converging beam at the focal spot will differ from the desired one. The effect of these errors on focusing performance can be calculated by a wave-optical approach, considering coherent wave-field illumination of the optical elements. We have developed a wave optics simulator using the Fresnel-Kirchhoff diffraction integral to calculate the mirror pupil function. Both analytically calculated and measured surface topography data can be taken as an aberration source for the outgoing wave fields. Simulations are performed to study the effect of surface height fluctuations on focusing performance over a wide frequency range in the high, mid, and low frequency bands. The results using a real shape profile measured with a long trace profilometer (LTP) suggest that a shape error of λ/4 PV (peak to valley) is tolerable to achieve diffraction-limited performance. It is desirable to remove shape error at very low frequencies, near 0.1 mm⁻¹, which will otherwise generate a beam waist or satellite peaks. All other frequencies above this limit will not affect the focused beam profile but only cause a loss in intensity.
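The link between a mirror height error and the resulting wavefront phase error can be sketched with standard grazing-incidence geometry; the wavelength, height error, and grazing angle below are illustrative assumptions, not values from the paper:

```python
# A surface bump of height h on a reflective optic changes the optical
# path by 2*h*sin(theta) at grazing angle theta -- far less than the 2*h
# of normal incidence, which is why x-ray mirrors tolerate larger figure
# errors. All numbers below are assumed for illustration.
from math import sin, radians

def wavefront_error(height_error, grazing_angle_deg):
    return 2.0 * height_error * sin(radians(grazing_angle_deg))

wavelength = 0.1e-9            # 0.1 nm x-rays (assumption)
h = 2e-9                       # 2 nm PV shape error (assumption)
opd = wavefront_error(h, 0.2)  # 0.2 deg grazing incidence (assumption)
print(opd < wavelength / 4)    # Rayleigh quarter-wave criterion: True
```

Checking the optical path difference against the Rayleigh quarter-wave criterion is a quick first-order screen before running a full Fresnel-Kirchhoff simulation.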

  20. Servo control booster system for minimizing following error

    DOE Patents [OSTI]

    Wise, W.L.

    1979-07-26

A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis, for all operational times of consequence and for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error is greater than or equal to ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second, unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.

  1. Contributions to Human Errors and Breaches in National Security Applications.

    SciTech Connect (OSTI)

    Pond, D. J.; Houghton, F. K.; Gilmore, W. E.

    2002-01-01

    Los Alamos National Laboratory has recognized that security infractions are often the consequence of various types of human errors (e.g., mistakes, lapses, slips) and/or breaches (i.e., deliberate deviations from policies or required procedures with no intention to bring about an adverse security consequence) and therefore has established an error reduction program based in part on the techniques used to mitigate hazard and accident potentials. One cornerstone of this program, definition of the situational and personal factors that increase the likelihood of employee errors and breaches, is detailed here. This information can be used retrospectively (as in accident investigations) to support and guide inquiries into security incidents or prospectively (as in hazard assessments) to guide efforts to reduce the likelihood of error/incident occurrence. Both approaches provide the foundation for targeted interventions to reduce the influence of these factors and for the formation of subsequent 'lessons learned.' Overall security is enhanced not only by reducing the inadvertent releases of classified information but also by reducing the security and safeguards resources devoted to them, thereby allowing these resources to be concentrated on acts of malevolence.

  2. Error Detection and Correction LDMS Plugin Version 1.0

    SciTech Connect (OSTI)

    Shoga, Kathleen; Allan, Ben

    2015-11-02

Sandia's Lightweight Distributed Metric Service (LDMS) is a data collection and transport system used at Livermore Computing to gather performance data across the center. While Sandia has a set of plugins available, they do not include all the data we need to capture. The EDAC plugin that we have developed enables collection of the Error Detection and Correction (EDAC) counters.

  3. Error field and magnetic diagnostic modeling for W7-X

    SciTech Connect (OSTI)

    Lazerson, Sam A.; Gates, David A.; NEILSON, GEORGE H.; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.

    2014-07-01

The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high-beta (β = 5%), steady-state (30-minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign, as bootstrap-current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.
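The SVD argument can be illustrated with a toy sensitivity matrix: rows are diagnostics, columns are equilibrium parameters. The entries below are made-up numbers, not W7-X responses, and the rank is computed by Gaussian elimination rather than a full SVD (same conclusion for this small case):

```python
# Toy detectability check: the rank of the diagnostics-to-parameters
# sensitivity matrix tells how many parameter combinations the
# diagnostic set can actually resolve.

def numerical_rank(matrix, tol=1e-10):
    # Gaussian elimination rank; a library SVD is preferable in practice.
    m = [row[:] for row in matrix]
    rank, rows, cols = 0, len(matrix), len(matrix[0])
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if abs(m[r][c]) > tol), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rows):
            if r != rank and abs(m[r][c]) > tol:
                f = m[r][c] / m[rank][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Columns: toroidal current, stored energy, profile variation. Both
# diagnostics are blind to the third column, so the rank is 2 and
# profile changes go undetected -- motivating additional diagnostics.
response = [[1.0, 0.3, 0.0],
            [0.2, 1.0, 0.0]]
print(numerical_rank(response))  # 2
```

Adding a diagnostic row with nonzero sensitivity in the third column would raise the rank to 3, which is the role of the proposed additional diagnostics.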

  4. The contour method cutting assumption: error minimization and correction

    SciTech Connect (OSTI)

    Prime, Michael B; Kastengren, Alan L

    2010-01-01

The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented, and the important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to impart a known profile of residual stresses.

5. Gasoline Vehicle Exhaust Particle Sampling Study | Department...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Particle Measurement Methodology: Comparison of On-road and Lab Diesel Particle Size Distributions Evaluation of the European PMP Methodologies Using Chassis Dynamometer and ...

  6. ANNULAR IMPACTOR SAMPLING DEVICE

    DOE Patents [OSTI]

    Tait, G.W.C.

    1959-03-31

A high-rate air sampler capable of sampling alpha-emitting particles as small as 0.5 microns is described. The device is a cylindrically shaped cup that fits in front of a suction tube and has a sticky grease coating along its base. Suction forces contaminated air against the periodically monitored, particle-absorbing grease.

  7. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    SciTech Connect (OSTI)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination" (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling
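The frequentist core of an X%/Y% clearance statement can be sketched as below; the CJR approach in the report combines this with judgmental samples through a Bayesian prior, which this simple formula omits:

```python
# If more than a fraction (1 - y) of an area were contaminated, the
# chance that n randomly placed samples all come back non-detect is at
# most y**n. Solving 1 - y**n >= x gives the number of clean random
# samples needed to claim "x confidence that at least y is clean."
from math import ceil, log

def samples_for_clearance(x_confidence, y_fraction):
    return ceil(log(1 - x_confidence) / log(y_fraction))

print(samples_for_clearance(0.95, 0.99))  # 299 non-detects for a 95%/99% statement
```

The steep sample count for tight Y values is exactly why the report explores judgmental and composite sampling to reduce sample numbers.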

  8. Field Sampling | Open Energy Information

    Open Energy Info (EERE)

    Field Mapping Hand-held X-Ray Fluorescence (XRF) Macrophotography Portable X-Ray Diffraction (XRD) Field Sampling Gas Sampling Gas Flux Sampling Soil Gas Sampling Surface Gas...

  9. SU-E-T-170: Evaluation of Rotational Errors in Proton Therapy Planning of Lung Cancer

    SciTech Connect (OSTI)

    Rana, S; Zhao, L; Ramirez, E; Singh, H; Zheng, Y

    2014-06-01

Purpose: To investigate the impact of rotational (roll, yaw, and pitch) errors in proton therapy planning of lung cancer. Methods: A lung cancer case treated at our center was used in this retrospective study. The original plan was generated using two proton fields (posterior-anterior and left-lateral) with the XiO treatment planning system (TPS) and delivered using a uniform scanning proton therapy system. First, the computed tomography (CT) set of the original lung treatment plan was re-sampled for rotational (roll, yaw, and pitch) angles ranging from −5° to +5°, with an increment of 2.5°. Second, 12 new proton plans were generated in XiO using the 12 re-sampled CT datasets. The same beam conditions, isocenter, and devices were used in the new treatment plans as in the original plan. All 12 new proton plans were compared with the original plan for planning target volume (PTV) coverage and maximum dose to spinal cord (cord Dmax). Results: PTV coverage was reduced in all 12 new proton plans when compared to that of the original plan. Specifically, PTV coverage was reduced by 0.03% to 1.22% for roll, by 0.05% to 1.14% for yaw, and by 0.10% to 3.22% for pitch errors. In comparison to the original plan, the cord Dmax in new proton plans was reduced by 8.21% to 25.81% for +2.5° to +5° pitch, by 5.28% to 20.71% for +2.5° to +5° yaw, and by 5.28% to 14.47% for −2.5° to −5° roll. In contrast, cord Dmax was increased by 3.80% to 3.86% for −2.5° to −5° pitch, by 0.63% to 3.25% for −2.5° to −5° yaw, and by 3.75% to 4.54% for +2.5° to +5° roll. Conclusion: PTV coverage was reduced by up to 3.22% for a rotational error of 5°. The cord Dmax could increase or decrease depending on the direction of rotational error, beam angles, and the location of the lung tumor.
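The geometric effect of a small rotation can be illustrated with plain rotation matrices (this is not the XiO TPS machinery; the point coordinates and angles below are assumptions):

```python
# Apply yaw (about z), then pitch (about y), then roll (about x) around
# the isocenter at the origin. A small rotation leaves the isocenter
# fixed but displaces points far from it, which is why rotational setup
# errors degrade coverage for elongated target-to-cord geometries.
from math import sin, cos, radians

def rotate_zyx(point, roll_deg, pitch_deg, yaw_deg):
    x, y, z = point
    a = radians(yaw_deg)
    x, y = x * cos(a) - y * sin(a), x * sin(a) + y * cos(a)
    b = radians(pitch_deg)
    x, z = x * cos(b) + z * sin(b), -x * sin(b) + z * cos(b)
    c = radians(roll_deg)
    y, z = y * cos(c) - z * sin(c), y * sin(c) + z * cos(c)
    return (x, y, z)

# A voxel 100 mm from isocenter shifts about 4.4 mm under a 2.5 deg pitch.
moved = rotate_zyx((0.0, 0.0, 100.0), 0.0, 2.5, 0.0)
shift = (moved[0] ** 2 + moved[1] ** 2 + (moved[2] - 100.0) ** 2) ** 0.5
print(round(shift, 2))  # 4.36
```

The displacement scales linearly with distance from isocenter for small angles (chord length 2·r·sin(θ/2)), consistent with pitch errors mattering most for targets offset along the long axis.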

  10. Background Information on CBECS

    Annual Energy Outlook [U.S. Energy Information Administration (EIA)]

    Information on CBECS The following topics provide detailed information on survey methodology, the kinds of errors associated with sample surveys, estimation of standard errors,...

  11. Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report

    Reports and Publications (EIA)

    2016-01-01

    This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depends on the type and characteristics of the survey.

  12. MPI Runtime Error Detection with MUST: Advances in Deadlock Detection

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; de Supinski, Bronis R.; Müller, Matthias S.

    2013-01-01

The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require 𝒪(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
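The graph-based idea can be sketched with a toy wait-for graph; MUST itself builds richer wait-for graphs over full MPI semantics (including collectives and wildcard receives), so this is only an illustrative simplification:

```python
# Toy deadlock check: a cycle in the wait-for graph (rank -> ranks it
# blocks on) means no process in the cycle can ever proceed.

def has_deadlock(wait_for):
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, ()):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in wait_for if n not in visited)

# Rank 0 blocks in a Recv from 1 while rank 1 blocks in a Recv from 0:
print(has_deadlock({0: [1], 1: [0]}))  # True
print(has_deadlock({0: [1], 1: [2]}))  # False
```

Real MPI detection must also model "waits for any of" semantics (e.g., MPI_ANY_SOURCE), which turns this plain cycle check into the AND/OR graph analysis the paper improves.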

  13. Comparison of Wind Power and Load Forecasting Error Distributions: Preprint

    SciTech Connect (OSTI)

    Hodge, B. M.; Florita, A.; Orwig, K.; Lew, D.; Milligan, M.

    2012-07-01

    The introduction of large amounts of variable and uncertain power sources, such as wind power, into the electricity grid presents a number of challenges for system operations. One issue involves the uncertainty associated with scheduling power that wind will supply in future timeframes. However, this is not an entirely new challenge; load is also variable and uncertain, and is strongly influenced by weather patterns. In this work we make a comparison between the day-ahead forecasting errors encountered in wind power forecasting and load forecasting. The study examines the distribution of errors from operational forecasting systems in two different Independent System Operator (ISO) regions for both wind power and load forecasts at the day-ahead timeframe. The day-ahead timescale is critical in power system operations because it serves the unit commitment function for slow-starting conventional generators.
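One simple way to make such a distributional comparison is through excess kurtosis, since day-ahead wind power errors are typically heavier-tailed than load errors; the data below are synthetic assumptions, not the ISO datasets from the study:

```python
# Compare the tail weight of two forecast-error samples via excess
# kurtosis (0 for a normal distribution). The "wind" errors are drawn
# from a scale mixture of normals to mimic heavy tails.
import random
from statistics import mean, pstdev

def excess_kurtosis(xs):
    m, s = mean(xs), pstdev(xs)
    return sum(((x - m) / s) ** 4 for x in xs) / len(xs) - 3.0

random.seed(1)
load_errors = [random.gauss(0.0, 2.0) for _ in range(20000)]       # near-normal
wind_errors = [random.gauss(0.0, 1.0) * random.choice((0.5, 3.0))  # mixture:
               for _ in range(20000)]                              # heavy tails
print(excess_kurtosis(load_errors) < excess_kurtosis(wind_errors))  # True
```

A reserve requirement sized from a normal assumption will under-cover heavy-tailed wind errors, which is the operational point of comparing the two distributions.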

  14. Method and system for reducing errors in vehicle weighing systems

    DOE Patents [OSTI]

    Hively, Lee M.; Abercrombie, Robert K.

    2010-08-24

A method and system (10, 23) for determining vehicle weight to a precision of <0.1% uses a plurality of weight sensing elements (23) and a computer (10) for reading in weighing data for a vehicle (25), and produces a dataset representing the total weight of the vehicle via programming (40-53) that is executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing; (b) determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation from the weighing data; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% of the vehicle weight.
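The mode-removal step can be sketched on synthetic weighing data; the oscillation frequency, amplitude, and sampling rate below are assumptions, and the single-frequency least-squares fit stands in for the patent's full mode-parameter estimation:

```python
# Remove an identified oscillatory mode from weighing samples by fitting
# a*sin + b*cos at the mode frequency and subtracting the fit (a simple
# projection, since sin and cos are nearly orthogonal over many periods).
from math import sin, cos, pi
from statistics import mean

def deoscillate(samples, freq_hz, dt):
    n = len(samples)
    s = [sin(2 * pi * freq_hz * i * dt) for i in range(n)]
    c = [cos(2 * pi * freq_hz * i * dt) for i in range(n)]
    a = 2 * mean(x * si for x, si in zip(samples, s))
    b = 2 * mean(x * ci for x, ci in zip(samples, c))
    return [x - a * si - b * ci for x, si, ci in zip(samples, s, c)]

true_weight, dt = 40000.0, 0.001  # lbs, 1 kHz sampling (assumptions)
raw = [true_weight + 500.0 * sin(2 * pi * 7.0 * i * dt) for i in range(4000)]
cleaned = deoscillate(raw, 7.0, dt)
# The residual error falls well below the patent's 0.1% target here.
print(abs(mean(cleaned) - true_weight) / true_weight < 0.001)  # True
```

In the patented system this is iterated per mode (steps (a)-(c)) until the residual error meets the <0.1% criterion.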

  15. Some aspects of statistical modeling of human-error probability

    SciTech Connect (OSTI)

    Prairie, R. R.

    1982-01-01

Human reliability analyses (HRA) are often performed as part of risk assessment and reliability projects. Recent events in nuclear power have shown the potential importance of the human element. There are several ongoing efforts in the US and elsewhere with the purpose of modeling human error such that the human contribution can be incorporated into an overall risk assessment associated with one or more aspects of nuclear power. The effort described here uses the HRA (event tree) to quantify and model the human contribution to risk. As an example, risk analyses are being prepared on several nuclear power plants as part of the Interim Reliability Assessment Program (IREP). In this process the risk analyst selects the elements of his fault tree that human error could contribute to. He then solicits the HF analyst to do an HRA on this element.

  16. Posters The Impacts of Data Error and Model Resolution

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

The Impacts of Data Error and Model Resolution on the Result of Variational Data Assimilation. S. Yang and Q. Xu, Cooperative Institute of Mesoscale Meteorological Studies, University of Oklahoma, Norman, Oklahoma. Introduction: The representativeness and accuracy of the measurements or estimates of the lateral boundary fluxes and surface fluxes are crucial for the single-column model and budget studies of climatic variables over Atmospheric Radiation Measurement (ARM) sites. Since the

  17. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    SciTech Connect (OSTI)

    Alonso, Juan J.; Iaccarino, Gianluca

    2013-08-25

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A
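    A minimal sketch of the nonintrusive idea behind the stochastic collocation mentioned above: statistics of a model output are recovered from deterministic evaluations at quadrature nodes. This toy example (standard Gauss-Hermite quadrature, not the project's multi-physics machinery) estimates the mean and variance of f(X) for X ~ N(0,1):

```python
import numpy as np

def collocation_stats(f, n_nodes=8):
    """Estimate mean/variance of f(X), X ~ N(0,1), from deterministic
    model runs at Gauss-Hermite quadrature nodes (nonintrusive)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    w = weights / weights.sum()        # normalize to a probability measure
    vals = f(nodes)
    mean = np.dot(w, vals)
    var = np.dot(w, (vals - mean)**2)
    return mean, var

# toy "model": f(x) = x**2 + 1, so E[f] = 2 and Var[f] = Var[x**2] = 2
m, v = collocation_stats(lambda x: x**2 + 1)
```

    Because the nodes are chosen for the input distribution, a handful of model runs reproduces low-order statistics exactly for polynomial responses; the project's challenge is doing this efficiently for many uncertain variables.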

  18. Runtime Detection of C-Style Errors in UPC Code

    SciTech Connect (OSTI)

    Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D

    2011-09-29

    Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.

  19. NID Copper Sample Analysis

    SciTech Connect (OSTI)

    Kouzes, Richard T.; Zhu, Zihua

    2011-09-12

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). This experiment requires the use of germanium isotopically enriched in 76Ge. The MAJORANA DEMONSTRATOR is a DOE and NSF funded project with a major science impact. The DEMONSTRATOR will utilize 76Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology, possibly one under development at Nonlinear Ion Dynamics (NID), will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. DOE is funding NID through an SBIR grant for development of their separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences facility (EMSL), a DOE user facility at PNNL, has the required mass spectroscopy instruments for making isotopic measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR and for the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL in January 2011 for isotopic analysis as a test of the NID technology. The results of that analysis are reported here. A second sample of isotopically separated copper was provided by NID to PNNL in August 2011 for isotopic analysis as a test of the NID technology. The results of that analysis are also reported here.

  20. Stack sampling apparatus

    DOE Patents [OSTI]

    Lind, Randall F; Lloyd, Peter D; Love, Lonnie J; Noakes, Mark W; Pin, Francois G; Richardson, Bradley S; Rowe, John C

    2014-09-16

    An apparatus for obtaining samples from a structure includes a support member, at least one stabilizing member, and at least one moveable member. The stabilizing member has a first portion coupled to the support member and a second portion configured to engage with the structure to restrict relative movement between the support member and the structure. The stabilizing member is radially expandable from a first configuration where the second portion does not engage with a surface of the structure to a second configuration where the second portion engages with the surface of the structure.

  1. Germanium-76 Sample Analysis

    SciTech Connect (OSTI)

    Kouzes, Richard T.; Engelhard, Mark H.; Zhu, Zihua

    2011-04-01

    The MAJORANA DEMONSTRATOR is a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). The DEMONSTRATOR will utilize 76Ge from Russia, and the first one gram sample was received from the supplier for analysis on April 24, 2011. The Environmental Molecular Sciences facility, a DOE user facility at PNNL, was used to make the required isotopic and chemical purity measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR. The results of this first analysis are reported here.

  2. September 2004 Water Sampling

    Office of Legacy Management (LM)

    Groundwater Sampling at the Central Nevada Test Area, February 2015, LMS/CNT/S01214. Available for sale to the public from: U.S. Department of Commerce, National Technical Information Service, 5301 Shawnee Road, Alexandria, VA 22312. Telephone: 800.553.6847; Fax: 703.605.6900; E-mail: orders@ntis.gov; Online Ordering: http://www.ntis.gov/help/ordermethods.aspx. Available electronically at http://www.osti.gov/scitech/. Available for a processing fee to U.S. Department of Energy and its contractors, in

  3. Pulsed field sample neutralization

    DOE Patents [OSTI]

    Appelhans, Anthony D.; Dahl, David A.; Delmore, James E.

    1990-01-01

    An apparatus and method for alternating the voltage and varying the rate of extraction during the extraction of secondary particles, resulting in periods when either positive ions, or negative ions and electrons, are extracted at varying rates. Using a voltage of alternating polarity during successive periods to extract particles from materials which accumulate charge opposite to that being extracted causes accumulation of surface charge of opposite sign. Charge accumulation can then be adjusted to a ratio which maintains a balance of positive and negative charge emission, thus maintaining the charge neutrality of the sample.

  4. Post-Award Deliverables Sample (Part 2 of Sample Deliverables...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Post-Award Deliverables Sample (Part 2 of Sample Deliverables for Task Orders, IDIQ Attachment J-4). Document offers a post-award deliverables sample for an energy savings ...

  5. Microsphere estimates of blood flow: Methodological considerations

    SciTech Connect (OSTI)

    von Ritter, C.; Hinder, R.A.; Womack, W.; Bauerfeind, P.; Fimmel, C.J.; Kvietys, P.R.; Granger, D.N.; Blum, A.L. (Louisiana State Univ. Medical Center, Shreveport; Universitaire Vaudois)

    1988-02-01

    The microsphere technique is a standard method for measuring blood flow in experimental animals. Sporadic reports have appeared outlining the limitations of this method. In this study the authors have systematically assessed the effect of blood withdrawals for reference sampling, microsphere numbers, and anesthesia on blood flow estimates using radioactive microspheres in dogs. Experiments were performed on 18 conscious and 12 anesthetized dogs. Four blood flow estimates were performed over 120 min using 1 × 10^6 microspheres each time. The effects of excessive numbers of microspheres, pentobarbital sodium anesthesia, and replacement of volume loss for reference samples with dextran 70 were assessed. In both conscious and anesthetized dogs a progressive decrease in gastric mucosal blood flow and cardiac output was observed over 120 min. This was also observed in the pancreas in conscious dogs. The major factor responsible for these changes was the volume loss due to the reference sample withdrawals. Replacement of the withdrawn blood with dextran 70 led to stable blood flows to all organs. The injection of excessive numbers of microspheres did not modify hemodynamics to a greater extent than did the injection of 4 million microspheres. Anesthesia exerted no influence on blood flow other than raising coronary flow. The authors conclude that although blood flow to the gastric mucosa and the pancreas is sensitive to the minor hemodynamic changes associated with the microsphere technique, replacement of volume loss for reference samples ensures stable blood flow to all organs over a 120-min period.
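    The reference-sample principle underlying the technique is that organ flow scales with the organ's share of microsphere counts relative to the counts captured by the reference withdrawal pump. A sketch with illustrative numbers (not the study's data):

```python
def organ_flow(organ_counts, ref_counts, ref_withdrawal_rate):
    """Standard reference-sample relation: organ flow equals organ counts
    times the reference withdrawal rate divided by reference counts."""
    return organ_counts * ref_withdrawal_rate / ref_counts

# illustrative numbers (not from the study): pump withdraws 7.5 mL/min
q = organ_flow(organ_counts=12000, ref_counts=30000, ref_withdrawal_rate=7.5)
```

    This dependence on the reference withdrawal is why the volume lost to repeated reference samples, unless replaced, perturbs the very flows being measured.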

  6. Multidisciplinary framework for human reliability analysis with an application to errors of commission and dependencies

    SciTech Connect (OSTI)

    Barriere, M.T.; Luckas, W.J.; Wreathall, J.; Cooper, S.E.; Bley, D.C.; Ramey-Smith, A.

    1995-08-01

    Since the early 1970s, human reliability analysis (HRA) has been considered to be an integral part of probabilistic risk assessments (PRAs). Nuclear power plant (NPP) events, from Three Mile Island through the mid-1980s, showed the importance of human performance to NPP risk. Recent events demonstrate that human performance continues to be a dominant source of risk. In light of these observations, the current limitations of existing HRA approaches become apparent when the role of humans is examined explicitly in the context of real NPP events. The development of new or improved HRA methodologies to more realistically represent human performance is recognized by the Nuclear Regulatory Commission (NRC) as a necessary means to increase the utility of PRAs. To accomplish this objective, an Improved HRA Project, sponsored by the NRC's Office of Nuclear Regulatory Research (RES), was initiated in late February 1992 at Brookhaven National Laboratory (BNL) to develop an improved method for HRA that more realistically assesses the human contribution to plant risk and can be fully integrated with PRA. This report describes the research efforts including the development of a multidisciplinary HRA framework, the characterization and representation of errors of commission, and an approach for addressing human dependencies. The implications of the research and necessary requirements for further development also are discussed.

  7. Fluid sampling tool

    DOE Patents [OSTI]

    Garcia, Anthony R.; Johnston, Roger G.; Martinez, Ronald K.

    1999-05-25

    A fluid sampling tool for sampling fluid from a container. The tool has a fluid collecting portion which is drilled into the container wall, thereby affixing it to the wall. The tool may have a fluid extracting section which withdraws fluid collected by the fluid collecting section. The fluid collecting section has a fluted shank with an end configured to drill a hole into a container wall. The shank has a threaded portion for tapping the borehole. The shank is threadably engaged to a cylindrical housing having an inner axial passageway sealed at one end by a septum. A flexible member having a cylindrical portion and a bulbous portion is provided. The housing can be slid into an inner axial passageway in the cylindrical portion and sealed to the flexible member. The bulbous portion has an outer lip defining an opening. The housing is clamped into the chuck of a drill, the lip of the bulbous section is pressed against a container wall until the shank touches the wall, and the user operates the drill. Wall shavings (kerf) are confined in a chamber formed in the bulbous section as it folds when the shank advances inside the container. After sufficient advancement of the shank, an o-ring makes a seal with the container wall.

  8. NID Copper Sample Analysis

    SciTech Connect (OSTI)

    Kouzes, Richard T.; Zhu, Zihua

    2011-02-01

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). This experiment requires the use of germanium isotopically enriched in 76Ge. The DEMONSTRATOR will utilize 76Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology under development at Nonlinear Ion Dynamics (NID) will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. The MAJORANA DEMONSTRATOR is a DOE and NSF funded project with a major science impact. DOE is funding NID through an SBIR grant for development of their separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences facility (EMSL), a DOE user facility at PNNL, has the required mass spectroscopy instruments for making these isotopic measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR and for the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL for isotopic analysis as a test of the NID technology. The results of that analysis are reported here.

  9. Fluid sampling tool

    DOE Patents [OSTI]

    Garcia, A.R.; Johnston, R.G.; Martinez, R.K.

    1999-05-25

    A fluid sampling tool is described for sampling fluid from a container. The tool has a fluid collecting portion which is drilled into the container wall, thereby affixing it to the wall. The tool may have a fluid extracting section which withdraws fluid collected by the fluid collecting section. The fluid collecting section has a fluted shank with an end configured to drill a hole into a container wall. The shank has a threaded portion for tapping the borehole. The shank is threadably engaged to a cylindrical housing having an inner axial passageway sealed at one end by a septum. A flexible member having a cylindrical portion and a bulbous portion is provided. The housing can be slid into an inner axial passageway in the cylindrical portion and sealed to the flexible member. The bulbous portion has an outer lip defining an opening. The housing is clamped into the chuck of a drill, the lip of the bulbous section is pressed against a container wall until the shank touches the wall, and the user operates the drill. Wall shavings (kerf) are confined in a chamber formed in the bulbous section as it folds when the shank advances inside the container. After sufficient advancement of the shank, an o-ring makes a seal with the container wall. 6 figs.

  10. Sample introducing apparatus and sample modules for mass spectrometer

    DOE Patents [OSTI]

    Thompson, Cyril V.; Wise, Marcus B.

    1993-01-01

    An apparatus for introducing gaseous samples from a wide range of environmental matrices into a mass spectrometer for analysis of the samples is described. Several sample preparing modules, including a real-time air monitoring module, a soil/liquid purge module, and a thermal desorption module, are individually and rapidly attachable to the sample introducing apparatus for supplying gaseous samples to the mass spectrometer. The sample-introducing apparatus uses a capillary column for conveying the gaseous samples into the mass spectrometer and is provided with an open/split interface in communication with the capillary and a sample archiving port through which at least about 90 percent of the gaseous sample, in a mixture with an inert gas that was introduced into the sample introducing apparatus, is separated from the minor portion of the mixture entering the capillary and is discharged from the sample introducing apparatus.
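    The "at least about 90 percent" split can be expressed as simple flow arithmetic. The flow values below are assumptions for illustration, not figures from the patent:

```python
# assumed flows: the capillary column draws a small fraction of the
# sample/carrier mixture; the open/split interface vents the remainder
# through the archiving port
total_flow = 10.0        # mL/min entering the interface (assumed)
capillary_flow = 0.5     # mL/min admitted by the capillary column (assumed)
archived_fraction = 1 - capillary_flow / total_flow   # fraction discharged
```

    With these assumed flows the archived fraction is 0.95, consistent with the "at least about 90 percent" figure in the abstract.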

  11. Sample introducing apparatus and sample modules for mass spectrometer

    DOE Patents [OSTI]

    Thompson, C.V.; Wise, M.B.

    1993-12-21

    An apparatus for introducing gaseous samples from a wide range of environmental matrices into a mass spectrometer for analysis of the samples is described. Several sample preparing modules, including a real-time air monitoring module, a soil/liquid purge module, and a thermal desorption module, are individually and rapidly attachable to the sample introducing apparatus for supplying gaseous samples to the mass spectrometer. The sample-introducing apparatus uses a capillary column for conveying the gaseous samples into the mass spectrometer and is provided with an open/split interface in communication with the capillary and a sample archiving port through which at least about 90 percent of the gaseous sample, in a mixture with an inert gas that was introduced into the sample introducing apparatus, is separated from the minor portion of the mixture entering the capillary and is discharged from the sample introducing apparatus. 5 figures.

  12. Energy Department Hosts FORGE Webinar and Resource Reporting Methodology

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Department Hosts FORGE Webinar and Resource Reporting Methodology Workshop at the Upcoming National Geothermal Summit, August 4-5 | Department of Energy. July 29, 2014 - 1:34pm

  13. SASSI Methodology-Based Sensitivity Studies for Deeply Embedded Structures,

    Office of Environmental Management (EM)

    SASSI Methodology-Based Sensitivity Studies for Deeply Embedded Structures, Such As Small Modular Reactors (SMRs) | Department of Energy. Dr. Dan M. Ghiocel, Ghiocel Predictive Technologies Inc. http://www.ghiocel-tech.com 2014 DOE

  14. Soil sampling kit and a method of sampling therewith

    DOE Patents [OSTI]

    Thompson, Cyril V.

    1991-01-01

    A soil sampling device and a sample containment device for containing a soil sample is disclosed. In addition, a method for taking a soil sample using the soil sampling device and soil sample containment device to minimize the loss of any volatile organic compounds contained in the soil sample prior to analysis is disclosed. The soil sampling device comprises two close fitting, longitudinal tubular members of suitable length, the inner tube having the outward end closed. With the inner closed tube withdrawn a selected distance, the outer tube can be inserted into the ground or other similar soft material to withdraw a sample of material for examination. The inner closed end tube controls the volume of the sample taken and also serves to eject the sample. The soil sample containment device has a sealing member which is adapted to attach to an analytical apparatus which analyzes the volatile organic compounds contained in the sample. The soil sampling device in combination with the soil sample containment device allows an operator to obtain a soil sample containing volatile organic compounds while minimizing the loss of the volatile organic compounds prior to analysis of the soil sample for the volatile organic compounds.

  15. Soil sampling kit and a method of sampling therewith

    DOE Patents [OSTI]

    Thompson, C.V.

    1991-02-05

    A soil sampling device and a sample containment device for containing a soil sample is disclosed. In addition, a method for taking a soil sample using the soil sampling device and soil sample containment device to minimize the loss of any volatile organic compounds contained in the soil sample prior to analysis is disclosed. The soil sampling device comprises two close fitting, longitudinal tubular members of suitable length, the inner tube having the outward end closed. With the inner closed tube withdrawn a selected distance, the outer tube can be inserted into the ground or other similar soft material to withdraw a sample of material for examination. The inner closed end tube controls the volume of the sample taken and also serves to eject the sample. The soil sample containment device has a sealing member which is adapted to attach to an analytical apparatus which analyzes the volatile organic compounds contained in the sample. The soil sampling device in combination with the soil sample containment device allows an operator to obtain a soil sample containing volatile organic compounds while minimizing the loss of the volatile organic compounds prior to analysis of the soil sample for the volatile organic compounds. 11 figures.

  16. Fluid sampling apparatus and method

    DOE Patents [OSTI]

    Yeamans, David R.

    1998-01-01

    Incorporation of a bellows in a sampling syringe eliminates ingress of contaminants, permits replication of amounts and compression of multiple sample injections, and enables remote sampling for off-site analysis.

  17. Fluid sampling apparatus and method

    DOE Patents [OSTI]

    Yeamans, D.R.

    1998-02-03

    Incorporation of a bellows in a sampling syringe eliminates ingress of contaminants, permits replication of amounts and compression of multiple sample injections, and enables remote sampling for off-site analysis. 3 figs.

  18. Fluid sampling tool

    DOE Patents [OSTI]

    Johnston, Roger G.; Garcia, Anthony R. E.; Martinez, Ronald K.

    2001-09-25

    The invention includes a rotatable tool for collecting fluid through the wall of a container. The tool includes a fluid collection section with a cylindrical shank having an end portion for drilling a hole in the container wall when the tool is rotated, and a threaded portion for tapping the hole in the container wall. A passageway in the shank in communication with at least one radial inlet hole in the drilling end and an opening at the end of the shank is adapted to receive fluid from the container. The tool also includes a cylindrical chamber affixed to the end of the shank opposite to the drilling portion thereof for receiving and storing fluid passing through the passageway. The tool also includes a flexible, deformable gasket that provides a fluid-tight chamber to confine kerf generated during the drilling and tapping of the hole. The invention also includes a fluid extractor section for extracting fluid samples from the fluid collecting section.

  19. The Development and Application of NMR Methodologies for the...

    Office of Scientific and Technical Information (OSTI)

    Title: The Development and Application of NMR Methodologies for the Study of Degradation in Complex Silicones ...

  20. Hydrogen Program Goal-Setting Methodologies Report to Congress

    Broader source: Energy.gov [DOE]

    This Report to Congress, published in August 2006, focuses on the methodologies used by the DOE Hydrogen Program for goal-setting.

  1. Quality Guideline for Cost Estimation Methodology for NETL Assessments...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Benefits. Power Plant Cost Estimation Methodology, Quality Guidelines for Energy System Studies, April 2011. This report was prepared as an account of work...

  2. PROLIFERATION RESISTANCE AND PHYSICAL PROTECTION WORKING GROUP: METHODOLOGY AND APPLICATIONS

    SciTech Connect (OSTI)

    Bari R. A.; Whitlock, J.; Therios, I.U.; Peterson, P.F.

    2012-11-14

    We summarize the technical progress and accomplishments on the evaluation methodology for proliferation resistance and physical protection (PR and PP) of Generation IV nuclear energy systems. We intend the results of the evaluations performed with the methodology to serve three types of users: system designers, program policy makers, and external stakeholders. The PR and PP Working Group developed the methodology through a series of demonstration and case studies. Over the past few years various national and international groups have applied the methodology to nuclear energy system designs as well as to developing approaches to advanced safeguards.

  3. Session #1: Cutting Edge Methodologies--Beyond Current DFT

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Session 1: Cutting Edge Methodologies (beyond Current DFT) Moderator: Shengbai Zhang (RPI REL) Topics to be addressed: Benchmarking state-of-the-art approaches, accurate energy ...

  4. National Academies Criticality Methodology and Assessment Video (Text Version)

    Office of Energy Efficiency and Renewable Energy (EERE)

    This is a text version of the "National Academies Criticality Methodology and Assessment" video presented at the Critical Materials Workshop, held on April 3, 2012 in Arlington, Virginia.

  5. The Development and Application of NMR Methodologies for the...

    Office of Scientific and Technical Information (OSTI)

    Title: The Development and Application of NMR Methodologies for the Study of Degradation in Complex Silicones ...

  6. Proliferation Resistance and Physical Protection Evaluation Methodology Development and Applications

    SciTech Connect (OSTI)

    Bari, R.A.; Peterson, P.; Therios, I.; Whitlock, J.

    2009-07-08

    An overview of the technical progress and accomplishments on the evaluation methodology for proliferation resistance and physical protection of Generation IV nuclear energy systems.

  7. Security Risk Assessment Methodologies (RAM) for Critical Infrastructu...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Security Risk Assessment Methodologies (RAM) for Critical Infrastructures. Sandia National Laboratories...

  8. Towards Developing a Calibrated EGS Exploration Methodology Using...

    Open Energy Info (EERE)

    Towards Developing a Calibrated EGS Exploration Methodology Using the Dixie Valley Geothermal System, Nevada. OpenEI Reference Library...

  9. Egs Exploration Methodology Project Using the Dixie Valley Geothermal...

    Open Energy Info (EERE)

    Egs Exploration Methodology Project Using the Dixie Valley Geothermal System, Nevada, Status Update. OpenEI Reference Library. Conference...

  10. UNFCCC-Consolidated baseline and monitoring methodology for landfill...

    Open Energy Info (EERE)

    Consolidated baseline and monitoring methodology for landfill gas project activities. Tool Summary. Name: UNFCCC-Consolidated baseline and...

  11. Energy Efficiency Standards for Refrigerators in Brazil: A Methodology...

    Open Energy Info (EERE)

    Standards for Refrigerators in Brazil: A Methodology for Impact Evaluation. Tool Summary. Name: Energy Efficiency Standards for Refrigerators...

  12. A Review of Geothermal Resource Estimation Methodology | Open...

    Open Energy Info (EERE)

    Geothermal Resource Estimation Methodology. OpenEI Reference Library. Conference Paper: A Review of Geothermal Resource Estimation...

  13. Methodology for Carbon Accounting of Grouped Mosaic and Landscape...

    Open Energy Info (EERE)

    REDD Projects. Tool Summary. Name: Methodology for Carbon Accounting of Grouped Mosaic and Landscape-scale REDD Projects. Agency...

  14. Survey of Transmission Cost Allocation Methodologies for Regional Transmission Organizations

    SciTech Connect (OSTI)

    Fink, S.; Porter, K.; Mudd, C.; Rogers, J.

    2011-02-01

    The report presents transmission cost allocation methodologies for reliability transmission projects, generation interconnection, and economic transmission projects for all Regional Transmission Organizations.

  15. Rock Sampling | Open Energy Information

    Open Energy Info (EERE)

    resource at depth. These hand samples can be collected using a rock hammer or sledge. Data Access and Acquisition Under a detailed investigation, a systematic sampling procedure...

  16. Water Sampling | Open Energy Information

    Open Energy Info (EERE)

    Water Sampling Details Activities (63) Areas (51) Regions (5) NEPA(2) Exploration Technique Information Exploration Group: Field Techniques Exploration Sub Group: Field Sampling...

  17. Methodology for Scaling Fusion Power Plant Availability

    SciTech Connect (OSTI)

    Lester M. Waganer

    2011-01-04

    Normally in the U.S. fusion power plant conceptual design studies, the development of the plant availability and the plant capital and operating costs makes the implicit assumption that the plant is a 10th of a kind fusion power plant. This is in keeping with the DOE guidelines published in the 1970s in the PNL report [1], "Fusion Reactor Design Studies - Standard Accounts for Cost Estimates." This assumption specifically defines the level of the industry and technology maturity and eliminates the need to define the necessary research and development efforts and costs to construct a one of a kind or the first of a kind power plant. It also assumes all the "teething" problems have been solved and the plant can operate in the manner intended. The plant availability analysis assumes all maintenance actions have been refined and optimized by the operation of the prior nine or so plants. The actions are defined to be as quick and efficient as possible. This study will present a methodology to enable estimation of the availability of the one of a kind (one OAK) plant or first of a kind (1st OAK) plant. To clarify, one of the OAK facilities might be the pilot plant or the demo plant that is prototypical of the next generation power plant, but it is not a full-scale fusion power plant with all fully validated "mature" subsystems. The first OAK facility is truly the first commercial plant of a common design that represents the next generation plant design. However, its subsystems, maintenance equipment and procedures will continue to be refined to achieve the goals for the 10th OAK power plant.
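    As context for the availability discussion, the standard steady-state relation A = MTBF / (MTBF + MTTR) shows how the longer, unrefined maintenance actions of a first-of-a-kind plant depress availability. The numbers below are illustrative only, not from the study:

```python
def availability(mtbf_hours, mttr_hours):
    """Generic steady-state availability = uptime / (uptime + downtime);
    the report's scaling methodology is more elaborate than this."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# a mature (10th-of-a-kind) subsystem vs. a first-of-a-kind one whose
# maintenance actions take four times as long (illustrative numbers)
a_mature = availability(2000, 100)
a_foak = availability(2000, 400)
```

    Quadrupling repair time drops availability from about 95% to about 83% in this toy case, which is the kind of penalty a 1st OAK estimate must capture.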

  18. Relative Hazard and Risk Measure Calculation Methodology

    SciTech Connect (OSTI)

    Stenner, Robert D.; Strenge, Dennis L.; Elder, Matthew S.

    2004-03-20

    The relative hazard (RH) and risk measure (RM) methodology and computer code is a health risk-based tool designed to allow managers and environmental decision makers the opportunity to readily consider human health risks (i.e., public and worker risks) in their screening-level analysis of alternative cleanup strategies. Environmental management decisions involve consideration of costs, schedules, regulatory requirements, health hazards, and risks. The RH-RM tool is a risk-based environmental management decision tool that allows managers the ability to predict and track health hazards and risks over time as they change in relation to mitigation and cleanup actions. Analysis of the hazards and risks associated with planned mitigation and cleanup actions provides a baseline against which alternative strategies can be compared. This new tool allows managers to explore “what if scenarios,” to better understand the impact of alternative mitigation and cleanup actions (i.e., alternatives to the planned actions) on health hazards and risks. This new tool allows managers to screen alternatives on the basis of human health risk and compare the results with cost and other factors pertinent to the decision. Once an alternative or a narrow set of alternatives are selected, it will then be more cost-effective to perform the detailed risk analysis necessary for programmatic and regulatory acceptance of the selected alternative. The RH-RM code has been integrated into the PNNL developed Framework for Risk Analysis In Multimedia Environmental Systems (FRAMES) to allow the input and output data of the RH-RM code to be readily shared with the more comprehensive risk analysis models, such as the PNNL developed Multimedia Environmental Pollutant Assessment System (MEPAS) model.

  19. Methodology for Preliminary Design of Electrical Microgrids

    SciTech Connect (OSTI)

    Jensen, Richard P.; Stamp, Jason E.; Eddy, John P.; Henry, Jordan M; Munoz-Ramos, Karina; Abdallah, Tarek

    2015-09-30

    Many critical loads rely on simple backup generation to provide electricity in the event of a power outage. An Energy Surety Microgrid™ (ESM) can protect against outages caused by single generator failures to improve reliability. An ESM will also provide a host of other benefits, including integration of renewable energy, fuel optimization, and maximizing the value of energy storage. The ESM concept includes a categorization for microgrid value propositions, and quantifies how the investment can be justified during either grid-connected or utility outage conditions. In contrast with many approaches, the ESM approach explicitly sets requirements based on unlikely extreme conditions, including the need to protect against determined cyber adversaries. During the United States (US) Department of Defense (DOD)/Department of Energy (DOE) Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) effort, the ESM methodology was successfully used to develop the preliminary designs that directly supported the contracting, construction, and testing for three military bases. Acknowledgements: Sandia National Laboratories and the SPIDERS technical team would like to acknowledge the following for help in the project: * Mike Hightower, who has been the key driving force for Energy Surety Microgrids * Juan Torres and Abbas Akhil, who developed the concept of microgrids for military installations * Merrill Smith, U.S. Department of Energy SPIDERS Program Manager * Ross Roley and Rich Trundy from U.S. Pacific Command * Bill Waugaman and Bill Beary from U.S. Northern Command * Melanie Johnson and Harold Sanborn of the U.S. Army Corps of Engineers Construction Engineering Research Laboratory * Experts from the National Renewable Energy Laboratory, Idaho National Laboratory, Oak Ridge National Laboratory, and Pacific Northwest National Laboratory

  20. Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gordiz, Kiarash; Singh, David J.; Henry, Asegun

    2015-01-29

    In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first-principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.
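    The trade-off described above (sequential time averaging vs. embarrassingly parallel ensemble averaging) can be illustrated with a minimal sketch. The noisy observable, step counts, and target mean below are invented stand-ins, not the paper's actual simulations:

```python
import random

def noisy_observable(rng):
    """Stand-in for one time step's instantaneous measurement (true mean = 1.0)."""
    return 1.0 + rng.gauss(0.0, 0.5)

def time_average(n_steps, seed=0):
    # Time sampling: one long trajectory; steps must be evaluated sequentially.
    rng = random.Random(seed)
    return sum(noisy_observable(rng) for _ in range(n_steps)) / n_steps

def ensemble_average(n_ensembles, steps_each, seed=0):
    # Ensemble sampling: independent members, so each inner loop could run in
    # parallel; total sample count matches the single long trajectory.
    means = [time_average(steps_each, seed=seed + 1 + k) for k in range(n_ensembles)]
    return sum(means) / n_ensembles

t_avg = time_average(10_000)
e_avg = ensemble_average(n_ensembles=100, steps_each=100)
```

    Both estimators converge to the same mean with the same total number of samples; the difference is that the ensemble version distributes the work across independent members.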

  1. Shared dosimetry error in epidemiological dose-response analyses

    SciTech Connect (OSTI)

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo

    2015-03-23

    Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were the true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e., the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.

  2. Shared dosimetry error in epidemiological dose-response analyses

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo

    2015-03-23

    Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were the true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e., the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β ≠ 0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
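    The algebraic fact behind the unbiasedness result, that for a linear ERR model the relative risk averaged over dose realizations equals the relative risk evaluated at the mean dose, can be checked numerically. The dose values, multiplicative error model, and ERR slope below are hypothetical illustrations, not the paper's dosimetry system:

```python
import math
import random

rng = random.Random(42)

def dose_realizations(true_dose, n=200):
    """Hypothetical dosimetry output: multiplicative lognormal errors around a true dose."""
    return [true_dose * math.exp(rng.gauss(0.0, 0.3)) for _ in range(n)]

beta = 0.5  # hypothetical ERR slope
for true_dose in (0.1, 1.0, 2.0):
    reals = dose_realizations(true_dose)
    mean_dose = sum(reals) / len(reals)
    # Linearity: averaging the relative risk 1 + beta*d over realizations is
    # identical to evaluating it at the mean dose -- the property that makes
    # the mean-dose score unbiased in the linear ERR model.
    avg_risk = sum(1.0 + beta * d for d in reals) / len(reals)
    assert abs(avg_risk - (1.0 + beta * mean_dose)) < 1e-9
```

    Note this equality is specific to models linear in dose; for nonlinear dose-response models the mean dose is no longer a sufficient summary of the realizations.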

  3. Sample holder with optical features

    DOE Patents [OSTI]

    Milas, Mirko; Zhu, Yimei; Rameau, Jonathan David

    2013-07-30

    A sample holder for holding a sample to be observed for research purposes, particularly in a transmission electron microscope (TEM), generally includes an external alignment part for directing a light beam in a predetermined beam direction, a sample holder body in optical communication with the external alignment part and a sample support member disposed at a distal end of the sample holder body opposite the external alignment part for holding a sample to be analyzed. The sample holder body defines an internal conduit for the light beam and the sample support member includes a light beam positioner for directing the light beam between the sample holder body and the sample held by the sample support member.

  4. Error-field penetration in reversed magnetic shear configurations

    SciTech Connect (OSTI)

    Wang, H. H.; Wang, Z. X.; Wang, X. Q. [MOE Key Laboratory of Materials Modification by Beams of the Ministry of Education, School of Physics and Optoelectronic Engineering, Dalian University of Technology, Dalian 116024 (China)]; Wang, X. G. [School of Physics, Peking University, Beijing 100871 (China)]

    2013-06-15

    Error-field penetration in reversed magnetic shear (RMS) configurations is numerically investigated by using a two-dimensional resistive magnetohydrodynamic model in slab geometry. To explore different dynamic processes in locked modes, three equilibrium states are adopted. Stable, marginal, and unstable current profiles for double tearing modes are designed by varying the current intensity between two resonant surfaces separated by a certain distance. Further, the dynamic characteristics of locked modes in the three RMS states are identified, and the relevant physics mechanisms are elucidated. The scaling behavior of critical perturbation value with initial plasma velocity is numerically obtained, which obeys previously established relevant analytical theory in the viscoresistive regime.

  5. Unconventional Rotor Power Response to Yaw Error Variations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Schreck, S. J.; Schepers, J. G.

    2014-12-16

    Continued inquiry into rotor and blade aerodynamics remains crucial for achieving accurate, reliable prediction of wind turbine power performance under yawed conditions. To exploit key advantages conferred by controlled inflow conditions, we used EU-JOULE DATA Project and UAE Phase VI experimental data to characterize rotor power production under yawed conditions. Anomalies in rotor power variation with yaw error were observed, and the underlying fluid dynamic interactions were isolated. Unlike currently recognized influences caused by angled inflow and skewed wake, which may be considered potential flow interactions, these anomalies were linked to pronounced viscous and unsteady effects.

  6. Seismic acquisition and processing methodologies in overthrust areas: Some examples from Latin America

    SciTech Connect (OSTI)

    Tilander, N.G.; Mitchel, R.

    1996-08-01

    Overthrust areas represent some of the last frontiers in petroleum exploration today. Billion-barrel discoveries in the Eastern Cordillera of Colombia and the Monagas fold-thrust belt of Venezuela during the past decade have highlighted the potential rewards for overthrust exploration. However, the seismic data recorded in many overthrust areas are disappointingly poor. Challenges such as rough topography, complex subsurface structure, presence of high-velocity rocks at the surface, back-scattered energy, and severe migration wavefronting continue to lower data quality and reduce interpretability. Lack of well/velocity control also reduces the reliability of depth estimations and migrated images. Failure to obtain satisfactory pre-drill structural images can easily result in costly wildcat failures. Advances in the methodologies used by Chevron for data acquisition, processing, and interpretation have produced significant improvements in seismic data quality in Bolivia, Colombia, and Trinidad. In this paper, seismic test results showing various swath geometries will be presented. We will also show recent examples of processing methods which have led to improved structural imaging. Rather than focusing on "black box" methodology, we will emphasize the cumulative effect of step-by-step improvements. Finally, the critical significance and interrelation of velocity measurements, modeling, and depth migration will be explored. Pre-drill interpretations must ultimately encompass a variety of model solutions, and error bars should be established which realistically reflect the uncertainties in the data.

  7. Estimation of organic carbon blank values and error structures of the speciation trends network data for source apportionment

    SciTech Connect (OSTI)

    Eugene Kim; Philip K. Hopke; Youjun Qin

    2005-08-01

    Because the particulate organic carbon (OC) concentrations reported in U.S. Environmental Protection Agency Speciation Trends Network (STN) data were not blank corrected, the OC blank concentrations were estimated using the intercept of the regression of OC concentrations against particulate matter < 2.5 μm in aerodynamic diameter (PM2.5). The estimated OC blank concentrations ranged from 1 to 2.4 μg/m³, showing higher values in urban areas, for the 13 monitoring sites in the northeastern United States. In the STN data, several different samplers and analyzers are used, and the various instruments show different method detection limit (MDL) values as well as errors. A comprehensive set of error structures that can be used for numerous source apportionment studies of STN data was estimated by comparing a limited set of measured concentrations and their associated uncertainties. To examine the estimated error structures and investigate the appropriate MDL values, PM2.5 samples collected at an STN site in Burlington, VT, were analyzed through the application of positive matrix factorization. A total of 323 samples collected between December 2000 and December 2003, with 49 species chosen on the basis of several variable selection criteria, were used, and eight sources were successfully identified in this study with the estimated error structures and the minimum values among the different MDL values from the five instruments: secondary sulfate aerosol (41%), identified as the result of emissions from coal-fired power plants; secondary nitrate aerosol (20%); airborne soil (15%); gasoline vehicle emissions (7%); diesel emissions (7%); aged sea salt (4%); copper smelting (3%); and ferrous smelting (2%). Time series plots of contributions from airborne soil indicate that the highly elevated impacts from this source were likely caused primarily by dust storms.
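    The blank-estimation idea, reading the blank off as the intercept of an OC-versus-PM2.5 regression, can be sketched as follows. The blank level, slope, and noise are made-up numbers for illustration, not STN data:

```python
import random

rng = random.Random(1)
TRUE_BLANK, SLOPE = 1.5, 0.04  # hypothetical blank (ug/m3) and OC fraction of PM2.5

# Simulated paired measurements: OC = blank + fraction * PM2.5 + noise
pm25 = [rng.uniform(5.0, 40.0) for _ in range(200)]
oc = [TRUE_BLANK + SLOPE * x + rng.gauss(0.0, 0.2) for x in pm25]

# Ordinary least squares of OC on PM2.5; the intercept estimates the OC blank.
n = len(pm25)
mean_x = sum(pm25) / n
mean_y = sum(oc) / n
slope_hat = (sum((x - mean_x) * (y - mean_y) for x, y in zip(pm25, oc))
             / sum((x - mean_x) ** 2 for x in pm25))
blank_hat = mean_y - slope_hat * mean_x  # should land near TRUE_BLANK
```

    The intercept is the predicted OC at zero PM2.5 mass, which is why it serves as an estimate of the analytical blank.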

  8. SU-E-T-374: Sensitivity of ArcCHECK to Tomotherapy Delivery Errors: Dependence On Analysis Technique

    SciTech Connect (OSTI)

    Templeton, A; Chu, J; Turian, J

    2014-06-01

    Purpose: ArcCHECK (Sun Nuclear) is a cylindrical diode array detector allowing three-dimensional sampling of dose, particularly useful in treatment delivery QA of helical tomotherapy. Gamma passing rate is a common method of analyzing results from diode arrays, but is less intuitive in 3D with complex measured dose distributions. This study explores the sensitivity of gamma passing rate to the choice of analysis technique in the context of its ability to detect errors introduced into the treatment delivery. Methods: Nine treatment plans were altered to introduce errors in couch speed, gantry/sinogram synchronization, and leaf open time. Each plan was then delivered to ArcCHECK in each of the following arrangements: offset, in which the high-dose area of the plan is delivered to the side of the phantom so that some diode measurements will be on the order of the prescription dose; and centered, in which the high dose is in the center of the phantom where an ion chamber measurement may be acquired, but the diode measurements are in the mid- to low-dose region at the periphery of the plan. Gamma analysis was performed at 3%/3 mm tolerance with both global and local gamma criteria. The threshold of detectability for each error type was calculated as the magnitude at which the gamma passing rate drops below 90%. Results: Global gamma criteria reduced the sensitivity in the offset arrangement (from 2.3% to 4.5%, 8 to 21, and 3 ms to 8 ms for couch-speed decrease, gantry error, and leaf-opening increase, respectively). The centered arrangement detected changes at 3.3%, 5, and 4 ms, with smaller variation. Conclusion: Each arrangement has advantages; offsetting allows more sampling of the higher-dose region, while centering allows an ion chamber measurement and potentially better use of tools such as 3DVH, at the cost of positioning more of the diodes in the sometimes noisy mid-dose region.
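    A simplified one-dimensional version of the gamma comparison conveys why global and local criteria differ in low-dose regions. The dose profiles below are arbitrary examples, and a real gamma analysis interpolates the full 3D reference distribution rather than comparing grid points:

```python
def gamma_pass_rate(measured, reference, dose_tol=0.03, dist_tol=3.0,
                    spacing=1.0, local=False):
    """Simplified 1-D gamma analysis over point doses on a uniform grid.

    dose_tol is fractional (3%), dist_tol and spacing are in mm.
    Global criteria normalize dose differences to the reference maximum;
    local criteria normalize to the reference dose at each point.
    """
    ref_max = max(reference)
    passed = 0
    for i, m in enumerate(measured):
        best = float("inf")
        for j, r in enumerate(reference):
            norm = (r if local else ref_max) * dose_tol
            dd = (m - r) / norm if norm > 0 else float("inf")
            dx = (i - j) * spacing / dist_tol
            best = min(best, dd * dd + dx * dx)
        if best <= 1.0:
            passed += 1
    return passed / len(measured)

reference = [1.0, 2.0, 5.0, 10.0, 5.0, 2.0, 1.0]
measured  = [1.1, 2.1, 5.2, 10.1, 5.1, 2.2, 1.1]
g = gamma_pass_rate(measured, reference, local=False)  # forgiving in low dose
l = gamma_pass_rate(measured, reference, local=True)   # strict everywhere
```

    With these numbers every point passes under global normalization, while the same small absolute deviations fail locally in the low-dose tails, mirroring the sensitivity loss the abstract reports for the offset arrangement.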

  9. Tularosa Basin Play Fairway Analysis: Methodology Flow Charts

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Adam Brandt

    2015-11-15

    These images show the comprehensive methodology used for creation of a Play Fairway Analysis to explore the geothermal resource potential of the Tularosa Basin, New Mexico. The deterministic methodology was originated by the petroleum industry, but was custom-modified to function as a knowledge-based geothermal exploration tool. The stochastic PFA flow chart uses weights of evidence, and is data-driven.

  10. Sample Preparation Report of the Fourth OPCW Confidence Building Exercise on Biomedical Sample Analysis

    SciTech Connect (OSTI)

    Udey, R. N.; Corzett, T. H.; Alcaraz, A.

    2014-07-03

    Following the successful completion of the 3rd biomedical confidence building exercise (February 2013 – March 2013), which included the analysis of plasma and urine samples spiked at low ppb levels as part of the exercise scenario, another confidence building exercise was targeted to be conducted in 2014. In this 4th exercise, it was desired to focus specifically on the analysis of plasma samples. The scenario was designed as an investigation of an alleged use of chemical weapons where plasma samples were collected, as plasma has been reported to contain CWA adducts which remain present in the human body for several weeks (Solano et al. 2008). In the 3rd exercise most participants used the fluoride regeneration method to analyze for the presence of nerve agents in plasma samples. For the 4th biomedical exercise it was decided to evaluate the analysis of human plasma samples for the presence/absence of the VX adducts and aged adducts to blood proteins (e.g., VX-butyrylcholinesterase (BuChE) and aged BuChE adducts using a pepsin digest technique to yield nonapeptides; or equivalent). As the aging of VX-BuChE adducts is relatively slow (t1/2 = 77 hr at 37 °C [Aurbek et al. 2009]), soman (GD), which ages much more quickly (t1/2 = 9 min at 37 °C [Masson et al. 2010]), was used to simulate an aged VX sample. Additional objectives of this exercise included having laboratories assess novel OP-adducted plasma sample preparation techniques and analytical instrumentation methodologies, as well as refining/designating the reporting formats for these new techniques.

  11. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    SciTech Connect (OSTI)

    Katrina M. Groth; Curtis L. Smith; Laura P. Swiler

    2014-08-01

    In the past several years, several international organizations have begun to collect data on human performance in nuclear power plant simulators. The data collected provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this paper, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.

  12. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.

    2014-04-05

    In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. These data provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
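    The flavor of Bayesian update described, folding sparse simulator counts into a prior HEP, can be sketched with a conjugate Beta-Binomial scheme. This is an illustration rather than the paper's full methodology; the prior strength and simulator counts are invented:

```python
def beta_prior_from_hep(hep, strength=50.0):
    """Encode a point-estimate HEP as a Beta(a, b) prior; `strength` is the
    total pseudo-count weight given to the prior (an assumed value)."""
    return hep * strength, (1.0 - hep) * strength

def bayes_update(a, b, errors, trials):
    """Conjugate Beta-Binomial posterior after observing simulator outcomes."""
    return a + errors, b + (trials - errors)

a0, b0 = beta_prior_from_hep(0.01)           # prior HEP of 1e-2 (hypothetical)
a1, b1 = bayes_update(a0, b0, errors=2, trials=30)
posterior_hep = a1 / (a1 + b1)               # posterior mean
```

    The posterior mean lies between the prior HEP (0.01) and the observed simulator rate (2/30), with the balance controlled by the prior strength, which is exactly the "sparse data still informs the estimate" property the abstract emphasizes.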

  13. V-194: Citrix XenServer Memory Management Error Lets Local Administrat...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    XenServer Memory Management Error Lets Local Administrative Users on the Guest Gain Access on the Host V-194: Citrix XenServer Memory Management Error Lets Local Administrative...

  14. FlipSphere: A Software-based DRAM Error Detection and Correction...

    Office of Scientific and Technical Information (OSTI)

    FlipSphere: A Software-based DRAM Error Detection and Correction Library for HPC. Citation Details In-Document Search Title: FlipSphere: A Software-based DRAM Error Detection and ...

  15. Radiochemical Analysis Methodology for uranium Depletion Measurements

    SciTech Connect (OSTI)

    Scatena-Wachel DE

    2007-01-09

    This report provides sufficient material for a test sponsor with little or no radiochemistry background to understand and follow physics irradiation test program execution. Most irradiation test programs employ similar techniques and the general details provided here can be applied to the analysis of other irradiated sample types. Aspects of program management directly affecting analysis quality are also provided. This report is not an in-depth treatise on the vast field of radiochemical analysis techniques and related topics such as quality control. Instrumental technology is a very fast growing field and dramatic improvements are made each year, thus the instrumentation described in this report is no longer cutting edge technology. Much of the background material is still applicable and useful for the analysis of older experiments and also for subcontractors who still retain the older instrumentation.

  16. Sampling Report for May-June, 2014 WIPP Samples

    Office of Environmental Management (EM)

    UNCLASSIFIED iii Table of Contents WIPP Panel 7 Sampling May-June, 2014 ... 15 LLNL-TR-667001 Section 1 WIPP Panel 7 Sampling May-June, 2014 1. Summary of ...

  17. Resolved: "error while loading shared libraries: libalpslli.so.0" with

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Resolved: "error while loading shared libraries: libalpslli.so.0" with serial codes on login nodes. December 13, 2013, by Helen He. Symptom: Dynamic executables built with compiler wrappers running directly on the external login nodes are getting the following error message: % ftn -dynamic -o testf testf.f % ./testf ./testf: error while loading shared

  18. T-719:Apache mod_proxy_ajp HTTP Processing Error Lets Remote Users Deny Service

    Broader source: Energy.gov [DOE]

    A remote user can cause the backend server to remain in an error state until the retry timeout expires.

  19. The Carnegie-Spitzer-IMACS redshift survey of galaxy evolution since z = 1.5. I. Description and methodology

    SciTech Connect (OSTI)

    Kelson, Daniel D.; Williams, Rik J.; Dressler, Alan; McCarthy, Patrick J.; Shectman, Stephen A.; Mulchaey, John S.; Villanueva, Edward V.; Crane, Jeffrey D.; Quadri, Ryan F.

    2014-03-10

    We describe the Carnegie-Spitzer-IMACS (CSI) Survey, a wide-field, near-IR selected spectrophotometric redshift survey with the Inamori Magellan Areal Camera and Spectrograph (IMACS) on Magellan-Baade. By defining a flux-limited sample of galaxies in Spitzer Infrared Array Camera 3.6 μm imaging of SWIRE fields, the CSI Survey efficiently traces the stellar mass of average galaxies to z ∼ 1.5. This first paper provides an overview of the survey selection, observations, processing of the photometry and spectrophotometry. We also describe the processing of the data: new methods of fitting synthetic templates of spectral energy distributions are used to derive redshifts, stellar masses, emission line luminosities, and coarse information on recent star formation. Our unique methodology for analyzing low-dispersion spectra taken with multilayer prisms in IMACS, combined with panchromatic photometry from the ultraviolet to the IR, has yielded high-quality redshifts for 43,347 galaxies in our first 5.3 deg² of the SWIRE XMM-LSS field. We use three different approaches to estimate our redshift errors and find robust agreement. Over the full range of 3.6 μm fluxes of our selection, we find typical redshift uncertainties of σ_z/(1 + z) ≲ 0.015. In comparisons with previously published spectroscopic redshifts we find scatters of σ_z/(1 + z) = 0.011 for galaxies at 0.7 ≤ z ≤ 0.9, and σ_z/(1 + z) = 0.014 for galaxies at 0.9 ≤ z ≤ 1.2. For galaxies brighter and fainter than i = 23 mag, we find σ_z/(1 + z) = 0.008 and σ_z/(1 + z) = 0.022, respectively. Notably, our low-dispersion spectroscopy and analysis yields comparable redshift uncertainties and success rates for both red and blue galaxies, largely eliminating color-based systematics that can seriously bias observed dependencies of galaxy evolution on environment.
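    A σ_z/(1 + z) scatter statistic can be computed as sketched below. The use of the normalized median absolute deviation and the sample redshifts are assumptions for illustration, not the survey's exact estimator or data:

```python
import statistics

def normalized_scatter(z_phot, z_spec):
    """Scatter of (z_phot - z_spec)/(1 + z_spec), the sigma_z/(1 + z) statistic.

    Uses the normalized median absolute deviation (1.4826 * MAD), a common
    outlier-resistant choice for redshift comparisons (an assumption here).
    """
    resid = [(zp - zs) / (1.0 + zs) for zp, zs in zip(z_phot, z_spec)]
    med = statistics.median(resid)
    mad = statistics.median(abs(r - med) for r in resid)
    return 1.4826 * mad

# Hypothetical spectroscopic/photometric redshift pairs:
z_spec = [0.70, 0.85, 0.92, 1.10, 1.35]
z_phot = [0.71, 0.84, 0.93, 1.12, 1.33]
scatter = normalized_scatter(z_phot, z_spec)
```

    Dividing by (1 + z) makes the statistic comparable across redshift, since a fixed wavelength calibration error produces a redshift error that grows as (1 + z).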

  20. Northern Marshall Islands radiological survey: sampling and analysis summary

    SciTech Connect (OSTI)

    Robison, W.L.; Conrado, C.L.; Eagle, R.J.; Stuart, M.L.

    1981-07-23

    A radiological survey was conducted in the Northern Marshall Islands to document remaining external gamma exposures from nuclear tests conducted at Enewetak and Bikini Atolls. An additional program was later included to obtain terrestrial and marine samples for radiological dose assessment for current or potential atoll inhabitants. This report is the first of a series summarizing the results from the terrestrial and marine surveys. The sample collection and processing procedures and the general survey methodology are discussed; a summary of the collected samples and radionuclide analyses is presented. Over 5400 samples were collected from the 12 atolls and 2 islands and prepared for analysis, including 3093 soil, 961 vegetation, 153 animal, 965 fish composite samples (average of 30 fish per sample), 101 clam, 50 lagoon water, 15 cistern water, 17 groundwater, and 85 lagoon sediment samples. A complete breakdown by sample type, atoll, and island is given here. The total numbers of analyses by radionuclide are 8840 for ²⁴¹Am, 6569 for ¹³⁷Cs, 4535 for ²³⁹⁺²⁴⁰Pu, 4431 for ⁹⁰Sr, 1146 for ²³⁸Pu, 269 for ²⁴¹Pu, and 114 each for ²³⁹Pu and ²⁴⁰Pu. A complete breakdown by sample category, atoll or island, and radionuclide is also included.

  1. Sampling Report for August 15, 2014 WIPP Samples

    Office of Environmental Management (EM)

    LLNL-XXXX-XXXXX Sampling Report for August 15, 2014 WIPP Samples. UNCLASSIFIED. Forensic Science Center, December 19, 2014. Lawrence Livermore National Laboratory. Disclaimer: This document was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or

  2. Specified assurance level sampling procedure

    SciTech Connect (OSTI)

    Willner, O.

    1980-11-01

    In the nuclear industry, design specifications for certain quality characteristics require that the final product be inspected by a sampling plan which can demonstrate product conformance to stated assurance levels. The Specified Assurance Level (SAL) Sampling Procedure has been developed to permit the direct selection of attribute sampling plans which can meet commonly used assurance levels. The SAL procedure contains sampling plans which yield the minimum sample size at stated assurance levels. The SAL procedure also provides sampling plans with acceptance numbers ranging from 0 to 10, thus making available to the user a wide choice of plans, all designed to comply with a stated assurance level.
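    Under a binomial model, the minimum sample size for a stated assurance level can be found by direct search, in the spirit of (but not reproducing) the SAL tables. The 10% defect level and 90% assurance below are hypothetical:

```python
from math import comb

def accept_prob(n, c, p):
    """Probability that an attribute plan (sample n, accept if <= c defects)
    accepts a lot with defect fraction p, under a binomial model."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def min_sample_size(c, p, assurance):
    """Smallest n whose acceptance probability at quality level p is at most
    1 - assurance, for acceptance numbers c = 0..10 as in the SAL tables."""
    n = c + 1
    while accept_prob(n, c, p) > 1.0 - assurance:
        n += 1
    return n

# Hypothetical illustration: 90% assurance of rejecting lots with 10% defectives.
n0 = min_sample_size(c=0, p=0.10, assurance=0.90)
```

    For acceptance number c = 0 this reduces to the smallest n with (1 - p)^n ≤ 1 - assurance; raising c buys a nonzero acceptance number at the cost of a larger minimum sample size.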

  3. SNS Sample Activation Calculator Flux Recommendations and Validation

    SciTech Connect (OSTI)

    McClanahan, Tucker C.; Gallmeier, Franz X.; Iverson, Erik B.; Lu, Wei

    2015-02-01

    The Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) uses the Sample Activation Calculator (SAC) to calculate the activation of a sample after the sample has been exposed to the neutron beam in one of the SNS beamlines. The SAC webpage takes user inputs (choice of beamline, the mass, composition and area of the sample, irradiation time, decay time, etc.) and calculates the activation for the sample. In recent years, the SAC has been incorporated into the user proposal and sample handling process, and instrument teams and users have noticed discrepancies in the predicted activation of their samples. The Neutronics Analysis Team validated SAC by performing measurements on select beamlines and confirmed the discrepancies seen by the instrument teams and users. The conclusions were that the discrepancies were a result of a combination of faulty neutron flux spectra for the instruments, improper inputs supplied by SAC (1.12), and a mishandling of cross section data in the Sample Activation Program for Easy Use (SAPEU) (1.1.2). This report focuses on the conclusion that the SAPEU (1.1.2) beamline neutron flux spectra have errors and are a significant contributor to the activation discrepancies. The results of the analysis of the SAPEU (1.1.2) flux spectra for all beamlines will be discussed in detail. The recommendations for the implementation of improved neutron flux spectra in SAPEU (1.1.3) are also discussed.

  4. Natural gas production problems : solutions, methodologies, and modeling.

    SciTech Connect (OSTI)

    Rautman, Christopher Arthur; Herrin, James M.; Cooper, Scott Patrick; Basinski, Paul M.; Olsson, William Arthur; Arnold, Bill Walter; Broadhead, Ronald F.; Knight, Connie D.; Keefe, Russell G.; McKinney, Curt; Holm, Gus; Holland, John F.; Larson, Rich; Engler, Thomas W.; Lorenz, John Clay

    2004-10-01

    Natural gas is a clean fuel that will be the most important domestic energy resource for the first half of the 21st century. Ensuring a stable supply is essential for our national energy security. The research we have undertaken will maximize the extractable volume of gas while minimizing the environmental impact of surface disturbances associated with drilling and production. This report describes a methodology for comprehensive evaluation and modeling of the total gas system within a basin, focusing on problematic horizontal fluid flow variability. This has been accomplished through extensive use of geophysical, core (rock sample) and outcrop data to interpret and predict directional flow and production trends. Side benefits include reduced environmental impact of drilling due to the reduced number of wells required for resource extraction. These results have been accomplished through a cooperative and integrated systems approach involving industry, government, academia and a multi-organizational team within Sandia National Laboratories. Industry has provided essential in-kind support to this project in the forms of extensive core data, production data, maps, seismic data, production analyses, engineering studies, plus equipment and staff for obtaining geophysical data. This approach provides innovative ideas and technologies to bring new resources to market and to reduce the overall environmental impact of drilling. More importantly, the products of this research are not location-specific but can be extended to other areas of gas production throughout the Rocky Mountain area. Thus, this project is designed to solve problems associated with natural gas production at developing sites, or at old sites under redevelopment.

  5. Coordinated joint motion control system with position error correction

    DOE Patents [OSTI]

    Danko, George L.

    2016-04-05

    Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  6. Coordinated joint motion control system with position error correction

    DOE Patents [OSTI]

    Danko, George

    2011-11-22

    Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  7. Error field penetration and locking to the backward propagating wave

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  8. Error field penetration and locking to the backward propagating wave

    SciTech Connect (OSTI)

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
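    The locking condition described in the abstract can be written out explicitly (a sketch using the abstract's own symbols; the inequality direction reflects the flow being "slightly above" the phase velocity):

```latex
v_{\text{lock}} \gtrsim \frac{\omega_r}{k},
\qquad
\omega_r = \text{real tearing-mode frequency in the plasma frame},
\quad
k = \text{mode wavenumber}
```

    That is, the error field locks the flow to the phase velocity of the backward-propagating mode rather than to zero.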

  9. Groundwater Sampling | Open Energy Information

    Open Energy Info (EERE)

    500 mL), whereas analysis for stable isotopes that are present in greater abundance in natural samples requires less water to be sampled by a full order of magnitude (approximately...

  10. Hydrogen Program Goal-Setting Methodologies Report to Congress

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Hydrogen Program Goal-Setting Methodologies Report to Congress (ESECS EE-4015). Introduction: This report addresses section 1819 of Public Law 109-58, also referred to as the Energy Policy Act of 2005. Section 1819 states: "Not later than 1 year after the date of enactment of this Act, the Secretary shall submit to Congress a report evaluating

  11. Geoscience Laboratory | Sample Preparation Laboratories

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    preparation and other relatively straightforward laboratory manipulations. These include buffer preparations, solid sample grinding, solution concentration, filtration, and...

  12. Improved Characterization of Transmitted Wavefront Error on CADB Epoxy-Free Bonded Solid State Laser Materials

    SciTech Connect (OSTI)

    Bayramian, A

    2010-12-09

    Current state-of-the-art and next generation laser systems - such as those used in the NIF and LIFE experiments at LLNL - depend on ever larger optical elements. The need for wide aperture optics that are tolerant of high power has placed many demands on material growers for such diverse materials as crystalline sapphire, quartz, and laser host materials. For such materials, it is either prohibitively expensive or even physically impossible to fabricate monolithic pieces with the required size. In these cases, it is preferable to optically bond two or more elements together with a technique such as Chemically Activated Direct Bonding (CADB©). CADB is an epoxy-free bonding method that produces bulk-strength bonded samples with negligible optical loss and excellent environmental robustness. The authors have demonstrated CADB for a variety of different laser glasses and crystals. For this project, they will bond quartz samples together to determine the suitability of the resulting assemblies for large aperture high power laser optics. The assemblies will be evaluated in terms of their transmitted wavefront error, and other optical properties.

  13. Methodology for the characterization and management of nonpoint source water pollution. Master's thesis

    SciTech Connect (OSTI)

    Praner, D.M.; Sprewell, G.M.

    1992-09-01

    The purpose of this research was development of a methodology for characterization and management of Nonpoint Source (NPS) water pollution. Section 319 of the 1987 Water Quality Act requires states to develop management programs for reduction of NPS pollution via Best Management Practices (BMPs). Air Force installations are expected to abide by federal, state, and local environmental regulations. Currently, the Air Force does not have a methodology to identify and quantify NPS pollution, or a succinct catalog of BMPs. Air Force installation managers need a package to assist them in meeting legislative and regulatory requirements associated with NPS pollution. Ten constituents characteristic of urban runoff were identified in the Nationwide Urban Runoff Program (NURP) and selected as those constituents of concern for modeling and sampling. Two models were used and compared with the results of a sampling and analysis program. Additionally, a compendium of BMPs was developed. Keywords: Nonpoint Source Pollution (NPS), Best Management Practices (BMPs), water pollution, water sampling and analysis, stormwater runoff modeling, NPDES.

  14. Nevada National Security Site Integrated Groundwater Sampling Plan, Revision 0

    SciTech Connect (OSTI)

    Marutzky, Sam; Farnham, Irene

    2014-10-01

    The purpose of the Nevada National Security Site (NNSS) Integrated Sampling Plan (referred to herein as the Plan) is to provide a comprehensive, integrated approach for collecting and analyzing groundwater samples to meet the needs and objectives of the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Field Office (NNSA/NFO) Underground Test Area (UGTA) Activity. Implementation of this Plan will provide high-quality data required by the UGTA Activity for ensuring public protection in an efficient and cost-effective manner. The Plan is designed to ensure compliance with the UGTA Quality Assurance Plan (QAP). The Plan’s scope comprises sample collection and analysis requirements relevant to assessing the extent of groundwater contamination from underground nuclear testing. This Plan identifies locations to be sampled by corrective action unit (CAU) and location type, sampling frequencies, sample collection methodologies, and the constituents to be analyzed. In addition, the Plan defines data collection criteria such as well-purging requirements, detection levels, and accuracy requirements; identifies reporting and data management requirements; and provides a process to ensure coordination between NNSS groundwater sampling programs for sampling of interest to UGTA. This Plan does not address compliance with requirements for wells that supply the NNSS public water system or wells involved in a permitted activity.

  15. Status of Activities to Implement a Sustainable System of MC&A Equipment and Methodological Support at Rosatom Facilities

    SciTech Connect (OSTI)

    J.D. Sanders

    2010-07-01

    Under the U.S.-Russian Material Protection, Control and Accounting (MPC&A) Program, the Material Control and Accounting Measurements (MCAM) Project has supported a joint U.S.-Russian effort to coordinate improvements of the Russian MC&A measurement system. These efforts have resulted in the development of a MC&A Equipment and Methodological Support (MEMS) Strategic Plan (SP), developed by the Russian MEM Working Group. The MEMS SP covers implementation of MC&A measurement equipment, as well as the development, attestation and implementation of measurement methodologies and reference materials at the facility and industry levels. This paper provides an overview of the activities conducted under the MEMS SP, as well as a status on current efforts to develop reference materials, implement destructive and nondestructive assay measurement methodologies, and implement sample exchange, scrap and holdup measurement programs across Russian nuclear facilities.

  16. Biopower Report Presents Methodology for Assessing the Value...

    Energy Savers [EERE]

    Biopower Report Presents Methodology for Assessing the Value of Co-Firing Biomass in Pulverized Coal Plants. November 20, 2014 - 12:22pm ...

  17. DOE 2009 Geothermal Risk Analysis: Methodology and Results (Presentation)

    SciTech Connect (OSTI)

    Young, K. R.; Augustine, C.; Anderson, A.

    2010-02-01

    This presentation summarizes the methodology and results for a probabilistic risk analysis of research, development, and demonstration work, primarily for enhanced geothermal systems (EGS), sponsored by the U.S. Department of Energy Geothermal Technologies Program.

  18. Proposed Methodology for LEED Baseline Refrigeration Modeling (Presentation)

    SciTech Connect (OSTI)

    Deru, M.

    2011-02-01

    This PowerPoint presentation summarizes a proposed methodology for LEED baseline refrigeration modeling. The presentation discusses why refrigeration modeling is important, the inputs of energy models, resources, reference building model cases, baseline model highlights, example savings calculations and results.

  19. Spent fuel management fee methodology and computer code user's manual.

    SciTech Connect (OSTI)

    Engel, R.L.; White, M.K.

    1982-01-01

    The methodology and computer model described here were developed to analyze the cash flows for the federal government taking title to and managing spent nuclear fuel. The methodology has been used by the US Department of Energy (DOE) to estimate the spent fuel disposal fee that will provide full cost recovery. Although the methodology was designed to analyze interim storage followed by spent fuel disposal, it could be used to calculate a fee for reprocessing spent fuel and disposing of the waste. The methodology consists of two phases. The first phase estimates government expenditures for spent fuel management. The second phase determines the fees that will result in revenues such that the government attains full cost recovery assuming various revenue collection philosophies. These two phases are discussed in detail in subsequent sections of this report. Each of the two phases constitutes a computer module, called SPADE (SPent fuel Analysis and Disposal Economics) and FEAN (FEe ANalysis), respectively.

  20. Hanford Site baseline risk assessment methodology. Revision 2

    SciTech Connect (OSTI)

    Not Available

    1993-03-01

    This methodology has been developed to prepare human health and environmental evaluations of risk as part of the Comprehensive Environmental Response, Compensation, and Liability Act remedial investigations (RIs) and the Resource Conservation and Recovery Act facility investigations (FIs) performed at the Hanford Site pursuant to the Hanford Federal Facility Agreement and Consent Order referred to as the Tri-Party Agreement. Development of the methodology has been undertaken so that Hanford Site risk assessments are consistent with current regulations and guidance, while providing direction on flexible, ambiguous, or undefined aspects of the guidance. The methodology identifies Site-specific risk assessment considerations and integrates them with approaches for evaluating human and environmental risk that can be factored into the risk assessment program supporting the Hanford Site cleanup mission. Consequently, the methodology will enhance the preparation and review of individual risk assessments at the Hanford Site.

  1. Hydrogen Goal-Setting Methodologies Report to Congress

    Fuel Cell Technologies Publication and Product Library (EERE)

    DOE's Hydrogen Goal-Setting Methodologies Report to Congress summarizes the processes used to set Hydrogen Program goals and milestones. Published in August 2006, it fulfills the requirement under se

  2. Average System Cost Methodology : Administrator's Record of Decision.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    1984-06-01

    Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, under which retail rate orders of regulatory agencies provide primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of separation procedures for subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and a requirement that each exchanging utility file under the new methodology within 20 days of implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)

  3. Single point aerosol sampling: Evaluation of mixing and probe performance in a nuclear stack

    SciTech Connect (OSTI)

    Rodgers, J.C.; Fairchild, C.I.; Wood, G.O.; Ortiz, C.A.; Muyshondt, A.

    1996-01-01

    Alternative reference methodologies have been developed for sampling of radionuclides from stacks and ducts, which differ from the methods previously required by the United States Environmental Protection Agency. These alternative reference methodologies have recently been approved by the U.S. EPA for use in lieu of the current standard techniques. The standard EPA methods are prescriptive in selection of sampling locations and in design of sampling probes, whereas the alternative reference methodologies are performance driven. Tests were conducted in a stack at Los Alamos National Laboratory to demonstrate the efficacy of some aspects of the alternative reference methodologies. Coefficients of variation of velocity, tracer gas, and aerosol particle profiles were determined at three sampling locations. Results showed that numerical criteria placed upon the coefficients of variation by the alternative reference methodologies were met at sampling stations located 9 and 14 stack diameters from the flow entrance, but not at a location that was 1.5 diameters downstream from the inlet. Experiments were conducted to characterize the transmission of 10 μm aerodynamic diameter liquid aerosol particles through three types of sampling probes. The transmission ratio (ratio of aerosol concentration at the probe exit plane to the concentration in the free stream) was 107% for a 113 L min⁻¹ (4 cfm) anisokinetic shrouded probe, but only 20% for an isokinetic probe that follows the existing EPA standard requirements. A specially designed isokinetic probe showed a transmission ratio of 63%. The shrouded probe performance would conform to the alternative reference methodologies criteria; however, the isokinetic probes would not. 13 refs., 9 figs., 1 tab.
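    The qualification test described above reduces to computing the coefficient of variation (standard deviation over mean) of a measured profile and comparing it to a numerical acceptance criterion. A minimal sketch, with hypothetical velocity readings and an assumed 20% criterion (the actual threshold is set by the alternative reference methodology, not by this example):

```python
import statistics

def coefficient_of_variation(values):
    """Population coefficient of variation: std / mean."""
    return statistics.pstdev(values) / statistics.fmean(values)

# Hypothetical traverse of velocity readings (m/s) at one sampling station.
velocities = [10.0, 11.0, 9.0, 10.5, 9.5]
cov = coefficient_of_variation(velocities)
print(f"COV = {cov:.1%}")
# The station qualifies only if the COV stays within the acceptance
# criterion (assumed here to be 20% for illustration).
print("acceptable" if cov <= 0.20 else "not acceptable")
```

    In practice the same calculation would be repeated for the tracer-gas and aerosol concentration profiles at each candidate station.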

  4. Single-point representative sampling with shrouded probes

    SciTech Connect (OSTI)

    McFarland, A.R.; Rodgers, J.C.

    1993-08-01

    The Environmental Protection Agency (EPA) prescribed methodologies for sampling radionuclides in air effluents from stacks and ducts at US Department of Energy (DOE) facilities. Requirements include use of EPA Method 1 for the location of sampling sites and use of American National Standards Institute (ANSI) N13.1 for guidance in design of sampling probes and the number of probes at a given site. Application of ANSI N13.1 results in sampling being performed with multiprobe rakes that have as many as 20 probes. There can be substantial losses of aerosol particles in such sampling that will degrade the quality of emission estimates from a nuclear facility. Three alternate methods, technically justified herein, are proposed for effluent sampling. First, a shrouded aerosol sampling probe should replace the sharp-edged elbowed-nozzle recommended by ANSI. This would reduce the losses of aerosol particles in probes and result in the acquisition of more representative aerosol samples. Second, the rakes of multiple probes that are intended to acquire representative samples through spatial coverage should be replaced by a single probe located where contaminant mass and fluid momentum are both well mixed. A representative sample can be obtained from a well-mixed flow. Some effluent flows will need to be engineered to achieve acceptable mixing. Third, sample extraction should be performed at a constant flow rate through a suitably designed shrouded probe rather than at a variable flow rate through isokinetic probes. A shrouded probe is shown to have constant sampling characteristics over a broad range of stack velocities when operated at a fixed flow rate.

  5. New Methodologies for Analysis of Premixed Charge Compression Ignition

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    New Methodologies for Analysis of Premixed Charge Compression Ignition Engines. Presentation given at the 2007 Diesel Engine-Efficiency & Emissions Research Conference (DEER 2007), 13-16 August 2007, Detroit, Michigan. Sponsored by the U.S. Department of Energy's (DOE) Office of FreedomCAR and Vehicle Technologies (OFCVT). deer07_aceves.pdf (1012.81 KB)

  6. Synthesizing Membrane Proteins Using In Vitro Methodology | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Technology available for licensing: an in vitro, cell-free expression system that caters to the production of protein types that are challenging to study: membrane proteins, membrane-associated proteins, and soluble proteins that require complex redox cofactors. A cell-free, in vitro protein synthesis method for targeting difficult-to-study proteins. Quicker and easier than conventional methodologies, this system does not require

  7. Technical report on LWR design decision methodology. Phase I

    SciTech Connect (OSTI)

    None

    1980-03-01

    Energy Incorporated (EI) was selected by Sandia Laboratories to develop and test an LWR design decision methodology. Contract Number 42-4229 provided funding for Phase I of this work. This technical report on LWR design decision methodology documents the activities performed under that contract. Phase I was a short-term effort to thoroughly review the current LWR design decision process to assure complete understanding of current practices and to establish a well defined interface for development of initial quantitative design guidelines.

  8. Sampling Report for May-June, 2014 WIPP Samples

    Office of Environmental Management (EM)

    LLNL-XXXX-XXXXX. Sampling Report for May-June, 2014 WIPP Samples. UNCLASSIFIED. Forensic Science Center, January 8, 2015. Lawrence Livermore National Laboratory. Disclaimer: This document was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or

  9. A methodology for evaluating air pollution strategies to improve the air quality in Mexico City

    SciTech Connect (OSTI)

    Barrera-Roldan, A.S.; Guzman, F.; Hardie, R.W.; Thayer, G.R.

    1995-05-01

    The Mexico City Air Quality Research Initiative has developed a methodology to assist decision makers in determining optimum pollution control strategies for atmospheric pollutants. The methodology introduces both objective and subjective factors in the comparison of various strategies for improving air quality. Strategies, or groups of options, are first selected using linear programming. These strategies are then compared using Multi-Attribute Decision Analysis. The decision tree for the Multi-Attribute Decision Analysis was generated by a panel of experts representing the organizations in Mexico that are responsible for formulating policy on air quality improvement. Three sample strategies were analyzed using the methodology: one to reduce ozone by 33% using the most cost effective group of options, the second to reduce ozone by 43% using the most cost effective group of options, and the third to reduce ozone by 43% emphasizing the reduction of emissions from industrial sources. Of the three strategies, the analysis indicated that strategy 2 would be the preferred strategy for improving air quality in Mexico City.

  10. Tornado missile simulation and design methodology. Volume 1: simulation methodology, design applications, and TORMIS computer code. Final report

    SciTech Connect (OSTI)

    Twisdale, L.A.; Dunn, W.L.

    1981-08-01

    A probabilistic methodology has been developed to predict the probabilities of tornado-propelled missiles impacting and damaging nuclear power plant structures. Mathematical models of each event in the tornado missile hazard have been developed and sequenced to form an integrated, time-history simulation methodology. The models are data based where feasible. The data include documented records of tornado occurrence, field observations of missile transport, results of wind tunnel experiments, and missile impact tests. Probabilistic Monte Carlo techniques are used to estimate the risk probabilities. The methodology has been encoded in the TORMIS computer code to facilitate numerical analysis and plant-specific tornado missile probability assessments. Sensitivity analyses have been performed on both the individual models and the integrated methodology, and risk has been assessed for a hypothetical nuclear power plant design case study.
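    The abstract's core computational idea (sequence physical models of the missile hazard and estimate impact probabilities by Monte Carlo sampling) can be illustrated with a deliberately crude sketch. Everything here is hypothetical: the launch-speed range, the vacuum ballistic-range formula, and the target distance stand in for TORMIS's data-based tornado, transport, and impact models:

```python
import math
import random

def missile_impact_probability(n_trials, target_distance_m, seed=1):
    """Toy Monte Carlo: sample a launch speed and elevation angle for
    each simulated missile, compute its (drag-free) ballistic range,
    and count the fraction of trials that reach the target structure."""
    rng = random.Random(seed)
    g = 9.81  # m/s^2
    hits = 0
    for _ in range(n_trials):
        v = rng.uniform(20.0, 80.0)            # launch speed, m/s (assumed)
        theta = rng.uniform(0.0, math.pi / 2)  # elevation angle, rad
        ballistic_range = v * v * math.sin(2 * theta) / g
        if ballistic_range >= target_distance_m:
            hits += 1
    return hits / n_trials

p = missile_impact_probability(100_000, 200.0)
print(f"P(impact) ~ {p:.3f}")
```

    The real methodology replaces each sampled quantity with a data-based model (tornado occurrence, missile injection, transport, impact) and propagates samples through the full time-history sequence, but the estimator, a hit fraction over many trials, is the same.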

  11. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect (OSTI)

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method, to avoid an expensive computational cost in inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to the recently proposed method of Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry–Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates compared to the Berry–Sauer method on the L-96 example.
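    Innovation-based covariance estimators of this family all rest on one identity: for a correctly tuned filter, the lag-0 innovation covariance equals HPH' + R. A minimal scalar sketch (not the paper's algorithm, just the diagnostic it builds on, with assumed model parameters a, q, r):

```python
import random
import statistics

def kalman_innovations(n, a=0.9, q=0.04, r=0.25, seed=7):
    """Run a scalar Kalman filter (H = 1) on simulated data and collect
    the innovation sequence d_k = y_k - x_k^f.  Adaptive schemes of the
    Belanger type recover Q and R from the statistics of this sequence."""
    rng = random.Random(seed)
    x_true = 0.0
    x_hat, p = 0.0, 1.0
    innovations = []
    s = None
    for _ in range(n):
        # simulate truth and measurement
        x_true = a * x_true + rng.gauss(0.0, q ** 0.5)
        y = x_true + rng.gauss(0.0, r ** 0.5)
        # predict
        x_hat = a * x_hat
        p = a * a * p + q
        s = p + r                 # predicted innovation variance HPH' + R
        d = y - x_hat
        innovations.append(d)
        # update
        k = p / s
        x_hat += k * d
        p = (1.0 - k) * p
    return innovations, s

innovations, s_pred = kalman_innovations(20000)
# Lag-0 consistency check: sample innovation variance ~ HPH' + R.
print(statistics.pvariance(innovations), s_pred)
```

    The methods compared in the paper generalize this to matrices and to lagged products E[d_k d_{k-j}'], which is what lets them separate the Q and R contributions.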

  12. In vivo enzyme activity in inborn errors of metabolism

    SciTech Connect (OSTI)

    Thompson, G.N.; Walter, J.H.; Leonard, J.V.; Halliday, D. )

    1990-08-01

    Low-dose continuous infusions of (²H₅)phenylalanine, (1-¹³C)propionate, and (1-¹³C)leucine were used to quantitate phenylalanine hydroxylation in phenylketonuria (PKU, four subjects), propionate oxidation in methylmalonic acidaemia (MMA, four subjects) and propionic acidaemia (PA, four subjects), and leucine oxidation in maple syrup urine disease (MSUD, four subjects). In vivo enzyme activity in PKU, MMA, and PA subjects was similar to or in excess of that in adult controls (range of phenylalanine hydroxylation in PKU, 3.7 to 6.5 μmol/kg/h, control 3.2 to 7.9, n = 7; propionate oxidation in MMA, 15.2 to 64.8 μmol/kg/h, and in PA, 11.1 to 36.0, control 5.1 to 19.0, n = 5). By contrast, in vivo leucine oxidation was undetectable in three of the four MSUD subjects (less than 0.5 μmol/kg/h) and negligible in the remaining subject (2 μmol/kg/h, control 10.4 to 15.7, n = 6). These results suggest that significant substrate removal can be achieved in some inborn metabolic errors either through stimulation of residual enzyme activity in defective enzyme systems or by activation of alternate metabolic pathways. Both possibilities almost certainly depend on gross elevation of substrate concentrations. By contrast, only minimal in vivo oxidation of leucine appears possible in MSUD.

  13. Optimized Sampling Strategies For Non-Proliferation Monitoring: Report

    SciTech Connect (OSTI)

    Kurzeja, R.; Buckley, R.; Werth, D.; Chiswell, S.

    2015-10-20

    Concentration data collected from the 2013 H-Canyon effluent reprocessing experiment were reanalyzed to improve the source term estimate. When errors in the model-predicted wind speed and direction were removed, the source term uncertainty was reduced to 30% of the mean. This explained the factor of 30 difference between the source term size derived from data at 5 km and 10 km downwind in terms of the time history of dissolution. The results show a path forward to develop a sampling strategy for quantitative source term calculation.

  14. Acceptance sampling using judgmental and randomly selected samples

    SciTech Connect (OSTI)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
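The traditional (frequentist) special case mentioned in the abstract has a simple closed form: if all n randomly selected samples are acceptable, the confidence that the unacceptable fraction is at most q is 1 − (1 − q)^n. A minimal sketch (function names are illustrative, not from the report):

```python
import math

def samples_needed(confidence, max_defect_fraction):
    """Smallest n such that n all-acceptable random samples give the stated
    confidence that the defect fraction is <= max_defect_fraction."""
    return math.ceil(math.log(1.0 - confidence) /
                     math.log(1.0 - max_defect_fraction))

def achieved_confidence(n, max_defect_fraction):
    """Confidence achieved when all n random samples are acceptable."""
    return 1.0 - (1.0 - max_defect_fraction) ** n

# e.g. 95% confidence that at most 5% of items are unacceptable
n = samples_needed(0.95, 0.05)  # -> 59 samples
```

The Bayesian model in the paper generalizes this by letting judgmental (high-risk) and random (low-risk) samples contribute differently to the posterior probability of acceptability.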

  15. EMERGING MODALITIES FOR SOIL CARBON ANALYSIS: SAMPLING STATISTICS AND ECONOMICS WORKSHOP.

    SciTech Connect (OSTI)

    WIELOPOLSKI, L.

    2006-04-01

    The workshop's main objectives are (1) to present the emerging modalities for analyzing carbon in soil, (2) to assess their error propagation, (3) to recommend new protocols and sampling strategies for the new instrumentation, and (4) to compare the costs of the new methods with traditional chemical ones.

  16. SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error

    SciTech Connect (OSTI)

    Ju, S; Hong, C; Kim, M; Chung, K; Kim, J; Han, Y; Ahn, S; Chung, S; Shin, E; Shin, J; Kim, H; Kim, D; Choi, D

    2014-06-01

    Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image was taken of the RM using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased toward the positive direction with gantry angle increments in the clockwise direction. The error reached a maximum value at a gantry angle of 90° and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful to evaluate the MLC leaf position error for various gantry angles without the EPID position error. Gantry angle dependency should be considered during MLC leaf position error analysis.
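The per-picket error calculation behind such a test can be sketched as follows (the nominal and measured positions below are hypothetical illustrations, not data from the study):

```python
# Nominal picket positions (mm) at -10, -5, 0, 5, and 10 cm, and hypothetical
# picket centers detected in an EPID image (illustrative values only).
nominal = [-100.0, -50.0, 0.0, 50.0, 100.0]
measured = [-100.3, -49.8, 0.1, 50.4, 99.6]

# Leaf position error at each picket: measured minus nominal.
errors = [m - n for m, n in zip(measured, nominal)]
mean_abs_error = sum(abs(e) for e in errors) / len(errors)

# Apply the <1 mm criterion used in the abstract.
passed = all(abs(e) < 1.0 for e in errors)
```

Repeating this calculation at each gantry angle (here, every 45°) is what exposes the angle dependency of the error.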

  17. Sample page | Open Energy Information

    Open Energy Info (EERE)

    Sample pages; Help pages; References Francis C. Monastero. 2002. An overview of industry-military cooperation in the development of power operations at the Coso...

  18. Chemical Resources | Sample Preparation Laboratories

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Chemical Resources Chemical Inventory All Sample Preparation Labs are stocked with an assortment of common solvents, acids, bases, buffers, and other reagents. See our Chemical ...

  19. Gas Sampling | Open Energy Information

    Open Energy Info (EERE)

    of geothermometric calculations and geochemical modeling of the data. In the case of gas flux sampling, different measurement techniques and devices may disrupt or alter the...

  20. Numerical study of the effect of normalised window size, sampling frequency, and noise level on short time Fourier transform analysis

    SciTech Connect (OSTI)

    Ota, T. A.

    2013-10-15

    Photonic Doppler velocimetry, also known as heterodyne velocimetry, is a widely used optical technique that requires the analysis of frequency modulated signals. This paper describes an investigation into the errors of short time Fourier transform analysis. The number of variables requiring investigation was reduced by means of an equivalence principle. Error predictions, as the number of cycles, samples per cycle, noise level, and window type were varied, are presented. The results were found to be in good agreement with analytical models.
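The kind of analysis investigated here can be sketched with NumPy: build a frequency-modulated test signal, window it, and track the peak frequency in each short-time spectrum (the chirp parameters and window length below are arbitrary choices for illustration):

```python
import numpy as np

fs = 10_000.0                      # sampling frequency, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
f0, f1 = 100.0, 400.0              # linear chirp from f0 to f1 over 1 s
signal = np.cos(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t ** 2))

win = 500                          # samples per window -> 20 Hz bin width
hann = np.hanning(win)
freqs = np.fft.rfftfreq(win, 1.0 / fs)

# Short-time Fourier transform: peak frequency in each non-overlapping window.
peak_freq = []
for start in range(0, len(signal) - win + 1, win):
    spectrum = np.abs(np.fft.rfft(signal[start:start + win] * hann))
    peak_freq.append(freqs[np.argmax(spectrum)])
```

The recovered frequency staircase tracks the instantaneous frequency f0 + (f1 − f0)·t to within the 20 Hz bin width; shrinking the window improves time resolution at the cost of frequency resolution, which is the trade-off this kind of study quantifies.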

  1. SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors

    SciTech Connect (OSTI)

    Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I

    2014-06-01

    Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6 MV using a Millennium 120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into MATLAB, optionally inserted a simulated error (rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one for each of the 56 image pairs. In addition, 560 positive tests (with error) were performed with randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) according to whether the percentage of pixels with γ<κ was below (or at least) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some combination of κ and τ. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though
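The threshold sweep behind such a ROC curve can be sketched with simulated gamma pass rates (the distributions below are hypothetical, chosen only to illustrate the mechanics of varying τ, not taken from the study):

```python
import random

random.seed(7)
# Hypothetical gamma pass rates (% of pixels with gamma < kappa):
# error-free images cluster high, images with an inserted error lower.
clean = [random.gauss(99.0, 0.8) for _ in range(56)]
errored = [random.gauss(93.0, 2.0) for _ in range(560)]

def roc_point(tau):
    """Flag an image as errored when its pass rate falls below tau (%)."""
    tpr = sum(x < tau for x in errored) / len(errored)  # true positive rate
    fpr = sum(x < tau for x in clean) / len(clean)      # false positive rate
    return fpr, tpr

# Sweep tau to trace out the ROC curve.
curve = [roc_point(tau) for tau in range(80, 101)]
```

A well-separated error signature yields a point near (0, 1) for some τ; an error too small to shift the pass-rate distribution cannot be reliably detected at any threshold, matching the abstract's conclusion.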

  2. Environmental surveillance master sampling schedule

    SciTech Connect (OSTI)

    Bisping, L.E.

    1995-02-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the U.S. Department of Energy (DOE). This document contains the planned 1994 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP), Drinking Water Project, and Ground-Water Surveillance Project. Samples are routinely collected for the SESP and analyzed to determine the quality of air, surface water, soil, sediment, wildlife, vegetation, foodstuffs, and farm products at Hanford Site and surrounding communities. The responsibility for monitoring onsite drinking water falls outside the scope of the SESP. PNL conducts the drinking water monitoring project concurrent with the SESP to promote efficiency and consistency, utilize expertise developed over the years, and reduce costs associated with management, procedure development, data management, quality control, and reporting. The ground-water sampling schedule identifies ground-water sampling events used by PNL for environmental surveillance of the Hanford Site. Sampling is indicated as annual, semi-annual, quarterly, or monthly in the sampling schedule. Some samples are collected and analyzed as part of ground-water monitoring and characterization programs at Hanford (e.g. Resources Conservation and Recovery Act (RCRA), Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), or Operational). The number of samples planned by other programs are identified in the sampling schedule by a number in the analysis column and a project designation in the Cosample column. Well sampling events may be merged to avoid redundancy in cases where sampling is planned by both environmental surveillance and another program.

  3. Estimation of retired mobile phones generation in China: A comparative study on methodology

    SciTech Connect (OSTI)

    Li, Bo; Yang, Jianxin; Lu, Bin; Song, Xiaolong

    2015-01-15

    Highlights: • The sales data of mobile phones in China was revised by considering the amount of smuggled and counterfeit mobile phones. • The estimation of retired mobile phones in China was made by comparing relevant methods. • The improved estimation result can help improve policy-making. • The method suggested in this paper can also be used in other countries. • Methodological issues are also discussed to support further improvement. - Abstract: Due to the rapid development of economy and technology, China has the biggest production and possession of mobile phones around the world. In general, mobile phones have relatively short lifetimes because the majority of users replace their mobile phones frequently. Retired mobile phones represent the most valuable electrical and electronic equipment (EEE) in the main waste stream because of such characteristics as large quantity, high reuse/recovery value and fast replacement frequency. Consequently, the huge amount of retired mobile phones in China calls for a sustainable management system. The generation estimation can provide fundamental information to construct the sustainable management system of retired mobile phones and other waste electrical and electronic equipment (WEEE). However, reliable estimation results are difficult to obtain and verify. The priority aim of this paper is to provide a proper estimation approach for the generation of retired mobile phones in China, by comparing relevant methods. The results show that the "sales and new" method is the most suitable for estimating retired mobile phones. This method indicates that 47.92 million mobile phones were retired in China in 2002, rising to 739.98 million in 2012. The trend is clearly increasing, with some fluctuations. Furthermore, methodological issues, such as the selection of an improper approach and errors in the input data, are also discussed in order to

  4. The impact of response measurement error on the analysis of designed experiments

    SciTech Connect (OSTI)

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2015-12-21

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
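The additive case can be illustrated with a small simulation: ignoring response measurement error inflates the apparent response variance from σ² to σ² + σ_m², which is what degrades power in a standard t-test analysis (all numbers below are illustrative assumptions, not from the paper):

```python
import random

random.seed(1)
sigma_proc, sigma_meas = 1.0, 2.0
n = 20_000

# True responses with process noise, then additive measurement error on top.
true = [10.0 + random.gauss(0.0, sigma_proc) for _ in range(n)]
observed = [y + random.gauss(0.0, sigma_meas) for y in true]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# variance(observed) lands near sigma_proc**2 + sigma_meas**2 = 5.0,
# not sigma_proc**2 = 1.0; averaging repeat measurements per run would
# shrink the sigma_meas**2 term by the number of repeats.
```

This is one reason the paper examines repeat measurements: they let the two variance components be separated and the measurement contribution reduced.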

  5. The impact of response measurement error on the analysis of designed experiments

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2015-12-21

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Nonlocal reactive transport with physical and chemical heterogeneity: Localization errors

    SciTech Connect (OSTI)

    Cushman, J.H.; Hu, B.X.; Deng, F.W.

    1995-09-01

    The origin of nonlocality in "macroscale" models for subsurface chemical transport is illustrated. It is argued that media that are either nonperiodic (e.g., media with evolving heterogeneity) or periodic viewed on a scale wherein a unit cell is discernible must display some nonlocality in the mean. A metaphysical argument suggests that owing to the scarcity of information on natural scales of heterogeneity and on scales of observation associated with an instrument window, constitutive theories for the mean concentration should at the outset of any modeling effort always be considered nonlocal. The intuitive appeal to nonlocality is reinforced with an analytical derivation of the constitutive theory for a conservative tracer without appeal to any mathematical approximations. Comparisons are made between the fully nonlocal (FNL), nonlocal in time (NLT), and fully localized (FL) theories. For conservative transport, there is little difference between the first-order FL and FNL models for spatial moments up to and including the third. However, for conservative transport the first-order NLT model differs significantly from the FNL model in the third spatial moments. For reactive transport, all spatial moments differ between the FNL and FL models. The second transverse-horizontal and third longitudinal-horizontal moments for the NLT model differ from the FNL model. These results suggest that localized first-order transport models for conservative tracers are reasonable if only lower-order moments are desired. However, when the chemical reacts with its environment, the localization approximation can lead to significant error in all moments, and a FNL model will in general be required for accurate simulation. 18 refs., 9 figs., 1 tab.

  7. 200 area TEDF sample schedule

    SciTech Connect (OSTI)

    Brown, M.J.

    1995-03-22

    This document summarizes the sampling criteria associated with the 200 Area Treatment Effluent Facility (TEDF) that are needed to comply with the requirements of the Washington State Discharge Permit No. WA ST 4502 and good engineering practices at the generator streams that feed into TEDF. In addition, this document identifies the responsible parties for both sampling and data transfer.

  8. Sample push-out fixture

    DOE Patents [OSTI]

    Biernat, John L.

    2002-11-05

    This invention generally relates to the remote removal of pelletized samples from cylindrical containment capsules. V-blocks are used to receive the samples and provide guidance to push out rods. Stainless steel liners fit into the v-channels on the v-blocks which permits them to be remotely removed and replaced or cleaned to prevent cross contamination between capsules and samples. A capsule holder securely holds the capsule while allowing manual up/down and in/out movement to align each sample hole with the v-blocks. Both end sections contain identical v-blocks; one that guides the drive out screw and rods or manual push out rods and the other to receive the samples as they are driven out of the capsule.

  9. Method and apparatus for detecting timing errors in a system oscillator

    DOE Patents [OSTI]

    Gliebe, Ronald J.; Kramer, William R.

    1993-01-01

    A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.

  10. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOE Patents [OSTI]

    Doerry, Armin W. (Albuquerque, NM); Heard, Freddie E. (Albuquerque, NM); Cordaro, J. Thomas (Albuquerque, NM)

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  11. V-172: ISC BIND RUNTIME_CHECK Error Lets Remote Users Deny Service Against Recursive Resolvers

    Broader source: Energy.gov [DOE]

    A defect exists which allows an attacker to crash a BIND 9 recursive resolver with a RUNTIME_CHECK error in resolver.c

  12. Table 4b. Relative Standard Errors for Total Fuel Oil Consumption...

    Gasoline and Diesel Fuel Update (EIA)

    4b. Relative Standard Errors for Total Fuel Oil Consumption per Effective Occupied Square Foot, 1992 Building Characteristics All Buildings Using Fuel Oil (thousand) Total Fuel Oil...

  13. An Integrated Safety Assessment Methodology for Generation IV Nuclear Systems

    SciTech Connect (OSTI)

    Timothy J. Leahy

    2010-06-01

    The Generation IV International Forum (GIF) Risk and Safety Working Group (RSWG) was created to develop an effective approach for the safety of Generation IV advanced nuclear energy systems. Early work of the RSWG focused on defining a safety philosophy founded on lessons learned from current and prior generations of nuclear technologies, and on identifying technology characteristics that may help achieve Generation IV safety goals. More recent RSWG work has focused on the definition of an integrated safety assessment methodology for evaluating the safety of Generation IV systems. The methodology, tentatively called ISAM, is an integrated toolkit consisting of analytical techniques that are available and matched to appropriate stages of Generation IV system concept development. The integrated methodology is intended to yield safety-related insights that help actively drive the evolving design throughout the technology development cycle, potentially resulting in enhanced safety, reduced costs, and shortened development time.

  14. Sample Preparation Laboratory Training - Course 204 | Sample Preparation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Laboratories Sample Preparation Laboratory Training - Course 204 Who Should Attend This course is mandatory for: SLAC employees and non-employees who need unescorted access to SSRL or LCLS Sample Preparation Laboratories Note: This course may be taken in lieu of Course 199, Laboratory CHP training for SLAC employees. Prerequisites 115 - General Employee Radiological Training (GERT) Take Training Please see the notes section below for information on how to take this training. Course Details

  15. NSTP 2002-2 Methodology for Final Hazard Categorization for Nuclear...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    NSTP 2002-2 Methodology for Final Hazard Categorization for Nuclear Facilities from Category 3 to Radiological (111302). NSTP 2002-2 Methodology for Final Hazard Categorization ...

  16. Liquid scintillation counting methodology for 99Tc analysis. A remedy for radiopharmaceutical waste

    SciTech Connect (OSTI)

    Khan, Mumtaz; Um, Wooyong

    2015-08-13

    This paper presents a new approach for liquid scintillation counting (LSC) analysis of single-radionuclide samples containing appreciable organic or inorganic quench. This work offers better analytical results than existing LSC methods for technetium-99 (99gTc) analysis with significant savings in analysis cost and time. The method was developed to quantify 99gTc in environmental liquid and urine samples using LSC. Method efficiency was measured in the presence of 1.9 to 11,900 ppm total dissolved solids. The quench curve was proved to be effective in the case of spiked 99gTc activity calculation for deionized water, tap water, groundwater, seawater, and urine samples. Counting efficiency was found to be 91.66% for Ultima Gold LLT (ULG-LLT) and Ultima Gold (ULG). Relative error in spiked 99gTc samples was ±3.98% in ULG and ULG-LLT cocktails. Minimum detectable activity was determined to be 25.3 mBq and 22.7 mBq for ULG-LLT and ULG cocktails, respectively. A pre-concentration factor of 1000 was achieved at 100°C for 100% chemical recovery.
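Minimum detectable activity figures like those quoted above are conventionally computed with Currie's detection limit, L_D ≈ 2.71 + 4.65√B for B background counts. A sketch with illustrative inputs (these are assumed counting parameters, not the paper's actual ones):

```python
import math

def currie_mda_bq(background_cpm, count_time_min, efficiency):
    """Minimum detectable activity (Bq) from Currie's detection limit
    (95% confidence levels for false positives and false negatives)."""
    b = background_cpm * count_time_min        # total background counts
    ld = 2.71 + 4.65 * math.sqrt(b)            # minimum detectable net counts
    # Convert detectable counts to activity: counts / (efficiency * seconds).
    return ld / (efficiency * count_time_min * 60.0)

# e.g. 20 cpm background, 60 min count, 91.66% counting efficiency
mda = currie_mda_bq(20.0, 60.0, 0.9166)
```

Longer count times and higher counting efficiency both push the MDA down, which is why the paper's quench correction and high-efficiency cocktails matter for reaching the tens-of-millibecquerel range.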

  17. Environmental surveillance master sampling schedule

    SciTech Connect (OSTI)

    Bisping, L.E.

    1991-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the US Department of Energy (DOE). This document contains the planned schedule for routine sample collection for the Surface Environmental Surveillance Project (SESP) and Ground-Water Monitoring Project. The routine sampling plan for the SESP has been revised this year to reflect changing site operations and priorities. Some sampling previously performed at least annually has been reduced in frequency, and some new sampling to be performed at a less than annual frequency has been added. Therefore, the SESP schedule reflects sampling to be conducted in calendar year 1991 as well as future years. The ground-water sampling schedule is for 1991. This schedule is subject to modification during the year in response to changes in Site operation, program requirements, and the nature of the observed results. Operational limitations such as weather, mechanical failures, sample availability, etc., may also require schedule modifications. Changes will be documented in the respective project files, but this plan will not be reissued. The purpose of these monitoring projects is to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs.

  18. Duplex sampling apparatus and method

    DOE Patents [OSTI]

    Brown, Paul E.; Lloyd, Robert

    1992-01-01

    An improved apparatus is provided for sampling a gaseous mixture and for measuring mixture components. The apparatus includes two sampling containers connected in series serving as a duplex sampling apparatus. The apparatus is adapted to independently determine the amounts of condensable and noncondensable gases in admixture from a single sample. More specifically, a first container includes a first port capable of selectively connecting to and disconnecting from a sample source and a second port capable of selectively connecting to and disconnecting from a second container. A second container also includes a first port capable of selectively connecting to and disconnecting from the second port of the first container and a second port capable of either selectively connecting to and disconnecting from a differential pressure source. By cooling a mixture sample in the first container, the condensable vapors form a liquid, leaving noncondensable gases either as free gases or dissolved in the liquid. The condensed liquid is heated to drive out dissolved noncondensable gases, and all the noncondensable gases are transferred to the second container. Then the first and second containers are separated from one another in order to separately determine the amount of noncondensable gases and the amount of condensable gases in the sample.

  19. Technical bases and guidance for the use of composite soil sampling for demonstrating compliance with radiological release criteria

    SciTech Connect (OSTI)

    Vitkus, Timothy J.

    2012-04-24

    This guidance provides information on methodologies and the technical bases that licensees should consider for incorporating composite sampling strategies into final status survey (FSS) plans. In addition, this guidance also includes appropriate uses of composite sampling for generating the data for other decommissioning site investigations such as characterization or other preliminary site investigations.

  20. U.S. Energy-by-Rail Data Methodology

    U.S. Energy Information Administration (EIA) Indexed Site

    by-Rail Data Methodology June 2016 Independent Statistics & Analysis www.eia.gov U.S. Department of Energy Washington, DC 20585 U.S. Energy Information Administration | U.S. Energy-by-Rail Data Methodology i This report was prepared by the U.S. Energy Information Administration (EIA), the statistical and analytical agency within the U.S. Department of Energy. By law, EIA's data, analyses, and forecasts are independent of approval by any other officer or employee of the United States

  1. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    SciTech Connect (OSTI)

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the • number of samples required to achieve a specified confidence in characterization and clearance decisions • confidence in making characterization and clearance decisions for a specified number of samples for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
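One simple approximation of the all-negatives clearance calculation, extended with a false negative rate, can be sketched as follows (this is an illustration of the idea, not the report's exact formulas):

```python
def clearance_confidence(n, contaminated_fraction, fnr):
    """Confidence that at most `contaminated_fraction` of locations are
    contaminated, given n random samples that all came back negative.
    A sample of a contaminated location is missed with probability fnr
    (the false negative rate); fnr = 0 recovers the classic 1-(1-q)^n."""
    p_detect = contaminated_fraction * (1.0 - fnr)
    return 1.0 - (1.0 - p_detect) ** n

# With a perfect assay (fnr = 0), 59 all-negative samples give >95% confidence
# that at most 5% of locations are contaminated; a nonzero FNR erodes this,
# so more samples are required to reach the same confidence.
```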

  2. Sample Business Plan Framework 5

    Broader source: Energy.gov [DOE]

    U.S. Department of Energy Better Buildings Neighborhood Program: Sample Business Plan Framework 5: A program that establishes itself as a government entity, then operates using a fee-based structure.

  3. Sample Business Plan Framework 2

    Broader source: Energy.gov [DOE]

    U.S. Department of Energy Better Buildings Neighborhood Program: Sample Business Plan Framework 1: A program seeking to continue operations in the post-grant period as a not-for-profit (NGO) entity.

  4. Sample Business Plan Framework 4

    Broader source: Energy.gov [DOE]

    U.S. Department of Energy Better Buildings Neighborhood Program: Sample Business Plan Framework 4: A program seeking to continue in the post-grant period as a marketing contractor to a utility.

  5. Sample Business Plan Framework 3

    Broader source: Energy.gov [DOE]

    U.S. Department of Energy Better Buildings Neighborhood Program: Sample Business Plan Framework 3: A government entity running a Commercial PACE program in the post-grant period.

  6. UNDERSTANDING THE AIR SAMPLING DATA

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    For more comparisons, see the millirem comparisons poster. Dose estimates have been calculated based on the low-volume air sampler results. Low-volume air samplers collect samples ...

  7. Air Sampling System Evaluation Template

    Energy Science and Technology Software Center (OSTI)

    2000-05-09

    The ASSET1.0 software provides a template with which a user can evaluate an Air Sampling System against the latest version of ANSI N13.1 "Sampling and Monitoring Releases of Airborne Radioactive Substances from the Stacks and Ducts of Nuclear Facilities". The software uses the ANSI N13.1 PIC levels to establish basic design criteria for the existing or proposed sampling system. The software looks at such criteria as PIC level, type of radionuclide emissions, physical state of the radionuclide, nozzle entrance effects, particulate transmission effects, system and component accuracy and precision evaluations, and basic system operations to provide a detailed look at the subsystems of a monitoring and sampling system/program. A GAP evaluation can then be completed which leads to identification of design and operational flaws in the proposed systems. Corrective measures can then be limited to the GAPs.

  8. Equipment Inventory | Sample Preparation Laboratories

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    21KBr Centrifuge Centrifuge SSRL BioChemMat Prep Lab 2 131 209 Saint Gobain K-104 Sanyo MIR-154 Cooled Incubator Temperature Control LCLS Sample Prep Lab 999 109 Sanyo MPR-215F...

  9. Sample Business Plan Framework 1

    Broader source: Energy.gov [DOE]

    U.S. Department of Energy Better Buildings Neighborhood Program: Sample Business Plan Framework 1: A program seeking to continue operations in the post-grant period as a not-for-profit (NGO) entity.

  10. A Discontinuous Petrov-Galerkin Methodology for Adaptive Solutions to the Incompressible Navier-Stokes Equations

    SciTech Connect (OSTI)

    Roberts, Nathan V.; Demkowicz, Leszek; Moser, Robert

    2015-11-15

    The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates—the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.

  11. Depth-discrete sampling port

    DOE Patents [OSTI]

    Pemberton, Bradley E.; May, Christopher P.; Rossabi, Joseph; Riha, Brian D.; Nichols, Ralph L.

    1999-01-01

    A sampling port is provided which has threaded ends for incorporating the port into a length of subsurface pipe. The port defines an internal receptacle which is in communication with subsurface fluids through a series of fine filtering slits. The receptacle is in further communication through a bore with a fitting carrying a length of tubing through which samples are transported to the surface. Each port further defines an additional bore through which tubing, cables, or similar components of adjacent ports may pass.

  12. Depth-discrete sampling port

    DOE Patents [OSTI]

    Pemberton, Bradley E.; May, Christopher P.; Rossabi, Joseph; Riha, Brian D.; Nichols, Ralph L.

    1998-07-07

    A sampling port is provided which has threaded ends for incorporating the port into a length of subsurface pipe. The port defines an internal receptacle which is in communication with subsurface fluids through a series of fine filtering slits. The receptacle is in further communication through a bore with a fitting carrying a length of tubing through which samples are transported to the surface. Each port further defines an additional bore through which tubing, cables, or similar components of adjacent ports may pass.

  13. User-Driven Sampling Strategies in Image Exploitation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Harvey, Neal R.; Porter, Reid B.

    2013-12-23

    Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. We discovered that user-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. Furthermore, in preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.

  14. Sandia software guidelines: Volume 5, Tools, techniques, and methodologies

    SciTech Connect (OSTI)

    Not Available

    1989-07-01

    This volume is one in a series of Sandia Software Guidelines intended for use in producing quality software within Sandia National Laboratories. This volume describes software tools and methodologies available to Sandia personnel for the development of software, and outlines techniques that have proven useful within the Laboratories and elsewhere. References and evaluations by Sandia personnel are included. 6 figs.

  15. Methodology for extracting local constants from petroleum cracking flows

    DOE Patents [OSTI]

    Chang, Shen-Lin; Lottes, Steven A.; Zhou, Chenn Q.

    2000-01-01

    A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetic computer codes are used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
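The adjustment in step (4) can be sketched in miniature. The first-order yield model, the synthetic test matrix, and the ternary-search fit below are illustrative stand-ins for the coupled CFD/kinetics calculation, not the patented procedure itself:

```python
import math

def predicted_yield(k, residence_time):
    # Stand-in for the coupled CFD/kinetics prediction:
    # simple first-order conversion, y = 1 - exp(-k * t).
    return 1.0 - math.exp(-k * residence_time)

def fit_rate_constant(times, measured, k_lo=0.0, k_hi=10.0, iters=60):
    """Adjust k to minimize the squared yield error against the
    measured data (ternary search; the objective is unimodal in k
    for this toy model)."""
    def sse(k):
        return sum((predicted_yield(k, t) - y) ** 2
                   for t, y in zip(times, measured))
    for _ in range(iters):
        m1 = k_lo + (k_hi - k_lo) / 3
        m2 = k_hi - (k_hi - k_lo) / 3
        if sse(m1) < sse(m2):
            k_hi = m2
        else:
            k_lo = m1
    return (k_lo + k_hi) / 2

# Synthetic "test matrix": yields generated with a true k of 1.5.
times = [0.5, 1.0, 2.0, 4.0]
measured = [predicted_yield(1.5, t) for t in times]
k_fit = fit_rate_constant(times, measured)
print(round(k_fit, 3))  # 1.5
```

In the actual methodology the inner model evaluation would be a full reacting-flow CFD run, so the local fluid-dynamic effects are folded into the fitted constants.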

  16. Methodology for testing metal detectors using variables test data

    SciTech Connect (OSTI)

    Spencer, D.D.; Murray, D.W.

    1993-08-01

    By extracting and analyzing measurement (variables) data from portal metal detectors whenever possible instead of the more typical "alarm"/"no-alarm" (attributes or binomial) data, we can be more informed about metal detector health with fewer tests. The testing methodology discussed in this report is an alternative to the typical binomial testing and in many ways is far superior.

  17. Prometheus Reactor I&C Software Development Methodology, for Action

    SciTech Connect (OSTI)

    T. Hamilton

    2005-07-30

    The purpose of this letter is to submit the Reactor Instrumentation and Control (I&C) software life cycle, development methodology, and programming language selections and rationale for project Prometheus to NR for approval. This letter also provides the draft Reactor I&C Software Development Process Manual and Reactor Module Software Development Plan to NR for information.

  18. Potential Hydraulic Modelling Errors Associated with Rheological Data Extrapolation in Laminar Flow

    SciTech Connect (OSTI)

    Shadday, Martin A., Jr.

    1997-03-20

    The potential errors associated with the modelling of flows of non-Newtonian slurries through pipes, due to inadequate rheological models and extrapolation outside of the ranges of databases, are demonstrated. The behaviors of both dilatant and pseudoplastic fluids with yield stresses, and the errors associated with treating them as Bingham plastics, are investigated.

  19. A Case for Soft Error Detection and Correction in Computational Chemistry

    SciTech Connect (OSTI)

    van Dam, Hubertus JJ; Vishnu, Abhinav; De Jong, Wibe A.

    2013-09-10

    High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.

  20. Compilation error with cray-petsc/3.6.1.0

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Compilation error with cray-petsc/3.6.1.0. January 5, 2016. The current default cray-petsc module, cray-petsc/3.6.1.0, does not work with...

  1. The cce/8.3.0 C++ compiler may run into a linking error on Edison

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The cce/8.3.0 C++ compiler may run into a linking error on Edison. July 1, 2014. You may run into the following...

  2. Ball assisted device for analytical surface sampling

    SciTech Connect (OSTI)

    ElNaggar, Mariam S; Van Berkel, Gary J; Covey, Thomas R

    2015-11-03

    A system for sampling a surface includes a sampling probe having a housing and a socket, and a rolling sampling sphere within the socket. The housing has a sampling fluid supply conduit and a sampling fluid exhaust conduit. The sampling fluid supply conduit supplies sampling fluid to the sampling sphere. The sampling fluid exhaust conduit has an inlet opening for receiving sampling fluid carried from the surface by the sampling sphere. A surface sampling probe and a method for sampling a surface are also disclosed.

  3. Methods and apparatus using commutative error detection values for fault isolation in multiple node computers

    DOE Patents [OSTI]

    Almasi, Gheorghe [Ardsley, NY; Blumrich, Matthias Augustin [Ridgefield, CT; Chen, Dong [Croton-On-Hudson, NY; Coteus, Paul [Yorktown, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E. [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk I. [Ossining, NY; Singh, Sarabjeet [Mississauga, CA; Steinmacher-Burow, Burkhard D. [Wernau, DE; Takken, Todd [Brewster, NY; Vranas, Pavlos [Bedford Hills, NY

    2008-06-03

    Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values, for example checksums, to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieves commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
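The per-node comparison across runs can be sketched as follows. The XOR-of-CRC32 checksum and the helper names are illustrative assumptions, not the patent's implementation; the relevant property is that XOR is commutative, so the value is independent of packet arrival order:

```python
import zlib
from functools import reduce

def commutative_checksum(packets):
    # XOR-fold per-packet CRC32 values; XOR is commutative and
    # associative, so the result does not depend on arrival order.
    return reduce(lambda acc, p: acc ^ zlib.crc32(p), packets, 0)

def suspect_nodes(run_a, run_b):
    # Compare checksums for the same reproducible program portion
    # across two runs; a mismatch flags a possibly faulty node.
    return sorted(node for node in run_a if run_a[node] != run_b.get(node))

# Two runs of the same program: node 0 emits the same packets in a
# different order (no fault); node 1 emits corrupted data (fault).
run1 = {0: commutative_checksum([b"alpha", b"beta"]),
        1: commutative_checksum([b"gamma"])}
run2 = {0: commutative_checksum([b"beta", b"alpha"]),
        1: commutative_checksum([b"gamna"])}
print(suspect_nodes(run1, run2))  # [1]
```

Commutativity matters here because a multi-node network can deliver a node's packets in a different order on each run without any node being faulty.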

  4. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

    SciTech Connect (OSTI)

    Zhang, Jing; Ghate, Sujata V.; Yoon, Sora C.; Lo, Joseph Y.; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

    2014-09-15

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict the likelihood of the trainee missing each mass. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). 
This value was statistically significantly different

  5. Tank characterization technical sampling basis

    SciTech Connect (OSTI)

    Brown, T.M.

    1998-04-28

    Tank Characterization Technical Sampling Basis (this document) is the first step of an in place working process to plan characterization activities in an optimal manner. This document will be used to develop the revision of the Waste Information Requirements Document (WIRD) (Winkelman et al. 1997) and ultimately, to create sampling schedules. The revised WIRD will define all Characterization Project activities over the course of subsequent fiscal years 1999 through 2002. This document establishes priorities for sampling and characterization activities conducted under the Tank Waste Remediation System (TWRS) Tank Waste Characterization Project. The Tank Waste Characterization Project is designed to provide all TWRS programs with information describing the physical, chemical, and radiological properties of the contents of waste storage tanks at the Hanford Site. These tanks contain radioactive waste generated from the production of nuclear weapons materials at the Hanford Site. The waste composition varies from tank to tank because of the large number of chemical processes that were used when producing nuclear weapons materials over the years and because the wastes were mixed during efforts to better use tank storage space. The Tank Waste Characterization Project mission is to provide information and waste sample material necessary for TWRS to define and maintain safe interim storage and to process waste fractions into stable forms for ultimate disposal. This document integrates the information needed to address safety issues, regulatory requirements, and retrieval, treatment, and immobilization requirements. Characterization sampling to support tank farm operational needs is also discussed.

  6. Sample Results from Routine Salt Batch 7 Samples

    SciTech Connect (OSTI)

    Peters, T.

    2015-05-13

    Strip Effluent Hold Tank (SEHT) and Decontaminated Salt Solution Hold Tank (DSSHT) samples from several of the microbatches of Integrated Salt Disposition Project (ISDP) Salt Batch (Macrobatch) 7B have been analyzed for 238Pu, 90Sr, 137Cs, Inductively Coupled Plasma Emission Spectroscopy (ICPES), and Ion Chromatography Anions (IC-A). The results from the current microbatch samples are similar to those from earlier samples from this and previous macrobatches. The Actinide Removal Process (ARP) and the Modular Caustic-Side Solvent Extraction Unit (MCU) continue to show more than adequate Pu and Sr removal, and there is a distinct positive trend in Cs removal, due to the use of the Next Generation Solvent (NGS). The Savannah River National Laboratory (SRNL) notes that historically, most measured Concentration Factor (CF) values during salt processing have been in the 12-14 range. However, recent processing gives CF values closer to 11. This observation does not indicate that the solvent performance is suffering, as the Decontamination Factor (DF) has still maintained consistently high values. Nevertheless, SRNL will continue to monitor for indications of process upsets. The bulk chemistry of the DSSHT and SEHT samples do not show any signs of unusual behavior.

  7. Model-Based Sampling and Inference

    U.S. Energy Information Administration (EIA) Indexed Site

    Model-Based Sampling, Inference and Imputation James R. Knaub, Jr., Energy Information Administration, EI-53.1 James.Knaub@eia.doe.gov Key Words: Survey statistics, Randomization, Conditionality, Random sampling, Cutoff sampling Abstract: Picking a sample through some randomization mechanism, such as random sampling within groups (stratified random sampling), or, say, sampling every fifth item (systematic random sampling), may be familiar to a lot of people. These are design-based samples.
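The two randomization mechanisms named in the abstract can be sketched directly; the helper names and the toy sampling frame below are illustrative, not taken from the paper:

```python
import random

def stratified_sample(groups, n_per_group, seed=0):
    # Stratified random sampling: an independent simple random
    # sample of fixed size drawn within each group (stratum).
    rng = random.Random(seed)
    return {name: sorted(rng.sample(items, n_per_group))
            for name, items in groups.items()}

def systematic_sample(frame, step, start=0):
    # Systematic random sampling: every `step`-th item, beginning
    # at a starting index (fixed here; randomized in practice).
    return frame[start::step]

frame = list(range(1, 26))
print(systematic_sample(frame, 5))  # [1, 6, 11, 16, 21]
sample = stratified_sample({"a": frame[:10], "b": frame[10:]}, 3)
print({k: len(v) for k, v in sample.items()})  # {'a': 3, 'b': 3}
```

Both are design-based in the abstract's sense: the randomness comes from the selection mechanism, not from a model of the population.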

  8. Enhancing the Benefit of the Chemical Mixture Methodology: A Report on Methodology Testing and Potential Approaches for Improving Performance

    SciTech Connect (OSTI)

    Yu, Xiao-Ying; Yao, Juan; He, Hua; Glantz, Clifford S.; Booth, Alexander E.

    2012-01-01

    Extensive testing shows that the current version of the Chemical Mixture Methodology (CMM) is meeting its intended mission to provide conservative estimates of the health effects from exposure to airborne chemical mixtures. However, the current version of the CMM could benefit from several enhancements that are designed to improve its application of Health Code Numbers (HCNs) and employ weighting factors to reduce over-conservatism.

  9. ANALYSIS OF THE TANK 5F FINAL CHARACTERIZATION SAMPLES-2011

    SciTech Connect (OSTI)

    Oji, L.; Diprete, D.; Coleman, C.; Hay, M.

    2012-08-03

    The Savannah River National Laboratory (SRNL) was requested by SRR to provide sample preparation and analysis of the Tank 5F final characterization samples to determine the residual tank inventory prior to grouting. Two types of samples were collected and delivered to SRNL: floor samples across the tank and subsurface samples from mounds near risers 1 and 5 of Tank 5F. These samples were taken from Tank 5F between January and March 2011. These samples from individual locations in the tank (nine floor samples and six mound Tank 5F samples) were each homogenized and combined in a given proportion into 3 distinct composite samples to mimic the average composition in the entire tank. These Tank 5F composite samples were analyzed for radiological, chemical and elemental components. Additional measurements performed on the Tank 5F composite samples include bulk density and water leaching of the solids to account for water soluble species. With analyses for certain challenging radionuclides as the exception, all composite Tank 5F samples were analyzed and reported in triplicate. The target detection limits for isotopes analyzed were based on customer desired detection limits as specified in the technical task request documents. SRNL developed new methodologies to meet these target detection limits and provide data for the extensive suite of components. While many of the target detection limits were met for the species characterized for Tank 5F, as specified in the technical task request, some were not met. In a few cases, the relatively high levels of radioactive species of the same element or a chemically similar element precluded the ability to measure some isotopes to low levels. The Technical Task Request allows that while the analyses of these isotopes is needed, meeting the detection limits for these isotopes is a lower priority than meeting detection limits for the other specified isotopes. 
The isotopes whose detection limits were not met in all cases included the

  10. ANALYSIS OF THE TANK 5F FINAL CHARACTERIZATION SAMPLES-2011

    SciTech Connect (OSTI)

    Oji, L.; Diprete, D.; Coleman, C.; Hay, M.

    2012-01-20

    The Savannah River National Laboratory (SRNL) was requested by SRR to provide sample preparation and analysis of the Tank 5F final characterization samples to determine the residual tank inventory prior to grouting. Two types of samples were collected and delivered to SRNL: floor samples across the tank and subsurface samples from mounds near risers 1 and 5 of Tank 5F. These samples were taken from Tank 5F between January and March 2011. These samples from individual locations in the tank (nine floor samples and six mound Tank 5F samples) were each homogenized and combined in a given proportion into 3 distinct composite samples to mimic the average composition in the entire tank. These Tank 5F composite samples were analyzed for radiological, chemical and elemental components. Additional measurements performed on the Tank 5F composite samples include bulk density and water leaching of the solids to account for water soluble species. With analyses for certain challenging radionuclides as the exception, all composite Tank 5F samples were analyzed and reported in triplicate. The target detection limits for isotopes analyzed were based on customer desired detection limits as specified in the technical task request documents. SRNL developed new methodologies to meet these target detection limits and provide data for the extensive suite of components. While many of the target detection limits were met for the species characterized for Tank 5F, as specified in the technical task request, some were not met. In a few cases, the relatively high levels of radioactive species of the same element or a chemically similar element precluded the ability to measure some isotopes to low levels. The Technical Task Request allows that while the analyses of these isotopes is needed, meeting the detection limits for these isotopes is a lower priority than meeting detection limits for the other specified isotopes. 
The isotopes whose detection limits were not met in all cases included the

  11. Analysis Of The Tank 5F Final Characterization Samples-2011

    SciTech Connect (OSTI)

    Oji, L. N.; Diprete, D.; Coleman, C. J.; Hay, M. S.

    2012-09-27

    The Savannah River National Laboratory (SRNL) was requested by SRR to provide sample preparation and analysis of the Tank 5F final characterization samples to determine the residual tank inventory prior to grouting. Two types of samples were collected and delivered to SRNL: floor samples across the tank and subsurface samples from mounds near risers 1 and 5 of Tank 5F. These samples were taken from Tank 5F between January and March 2011. These samples from individual locations in the tank (nine floor samples and six mound Tank 5F samples) were each homogenized and combined in a given proportion into 3 distinct composite samples to mimic the average composition in the entire tank. These Tank 5F composite samples were analyzed for radiological, chemical and elemental components. Additional measurements performed on the Tank 5F composite samples include bulk density and water leaching of the solids to account for water soluble species. With analyses for certain challenging radionuclides as the exception, all composite Tank 5F samples were analyzed and reported in triplicate. The target detection limits for isotopes analyzed were based on customer desired detection limits as specified in the technical task request documents. SRNL developed new methodologies to meet these target detection limits and provide data for the extensive suite of components. While many of the target detection limits were met for the species characterized for Tank 5F, as specified in the technical task request, some were not met. In a few cases, the relatively high levels of radioactive species of the same element or a chemically similar element precluded the ability to measure some isotopes to low levels. The Technical Task Request allows that while the analyses of these isotopes is needed, meeting the detection limits for these isotopes is a lower priority than meeting detection limits for the other specified isotopes. 
The isotopes whose detection limits were not met in all cases included the

  12. Viscous-sludge sample collector

    DOE Patents [OSTI]

    Not Available

    1979-01-01

    A vertical core sample collection system for viscous sludge is disclosed. A sample tube's upper end has a flange and is attached to a piston. The tube and piston are located in the upper end of a bore in a housing. The bore's lower end leads outside the housing and has an inwardly extending rim. Compressed gas, from a storage cylinder, is quickly introduced into the bore's upper end to rapidly accelerate the piston and tube down the bore. The lower end of the tube has a high sludge entering velocity to obtain a full-length sludge sample without disturbing strata detail. The tube's downward motion is stopped when its upper end flange impacts against the bore's lower end inwardly extending rim.

  13. Computing the partition function, ensemble averages, and density of states for lattice spin systems by sampling the mean

    SciTech Connect (OSTI)

    Gillespie, Dirk

    2013-10-01

    An algorithm to approximately calculate the partition function (and subsequently ensemble averages) and density of states of lattice spin systems through non-Monte-Carlo random sampling is developed. This algorithm (called the sampling-the-mean algorithm) can be applied to models where the up or down spins at lattice nodes interact to change the spin states of other lattice nodes, especially non-Ising-like models with long-range interactions such as the biological model considered here. Because it is based on the Central Limit Theorem of probability, the sampling-the-mean algorithm also gives estimates of the error in the partition function, ensemble averages, and density of states. Easily implemented parallelization strategies and error minimizing sampling strategies are discussed. The sampling-the-mean method works especially well for relatively small systems, systems with a density of energy states that contains sharp spikes or oscillations, or systems with little a priori knowledge of the density of states.
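As a hedged illustration of the sampling-the-mean idea (a minimal sketch, not the paper's code), Z and a CLT-based standard error can be estimated by uniform sampling over spin states; the non-interacting toy model below is chosen because it has a closed-form Z for comparison:

```python
import math
import random

def estimate_partition_function(n_spins, energy, beta, n_samples, seed=0):
    """Estimate Z = sum_s exp(-beta*E(s)) over all 2**n_spins states by
    uniform random sampling: Z ~= 2**n * mean(exp(-beta*E)), with a
    standard error from the Central Limit Theorem."""
    rng = random.Random(seed)
    n_states = 2 ** n_spins
    vals = []
    for _ in range(n_samples):
        state = [rng.choice((-1, 1)) for _ in range(n_spins)]
        vals.append(math.exp(-beta * energy(state)))
    mean = sum(vals) / n_samples
    var = sum((v - mean) ** 2 for v in vals) / (n_samples - 1)
    return n_states * mean, n_states * math.sqrt(var / n_samples)

# Toy non-interacting model: E(s) = -sum(s), so exactly
# Z = (2*cosh(beta))**n, which the estimate should reproduce.
n, beta = 8, 0.5
z_hat, err = estimate_partition_function(n, lambda s: -sum(s), beta, 20000)
z_exact = (2 * math.cosh(beta)) ** n
```

For interacting, long-range models the exact sum is unavailable, which is where the error estimate that falls out of the CLT becomes the main diagnostic.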

  14. Water-Gas Sampling | Open Energy Information

    Open Energy Info (EERE)

    Water-Gas Sampling (redirected from Water-Gas Samples) is a redirect page to Downhole Fluid Sampling. Retrieved from "http:en.openei.orgw...

  15. Category:Water Sampling | Open Energy Information

    Open Energy Info (EERE)

    Water Sampling. Looking for the Water Sampling page? For detailed information on Water Sampling as...

  16. Volumetric apparatus for hydrogen adsorption and diffusion measurements: Sources of systematic error and impact of their experimental resolutions

    SciTech Connect (OSTI)

    Policicchio, Alfonso; Maccallini, Enrico; Kalantzopoulos, Georgios N.; Cataldi, Ugo; Abate, Salvatore; Desiderio, Giovanni; DeltaE s.r.l., c/o Università della Calabria, Via Pietro Bucci cubo 31D, 87036 Arcavacata di Rende, Italy and CNR-IPCF LiCryL, c/o Università della Calabria, Via Ponte P. Bucci, Cubo 31C, 87036 Arcavacata di Rende

    2013-10-15

    The development of a volumetric apparatus (also known as a Sieverts apparatus) for accurate and reliable hydrogen adsorption measurement is shown. The instrument minimizes the sources of systematic errors which are mainly due to inner volume calibration, stability and uniformity of the temperatures, precise evaluation of the skeletal volume of the measured samples, and thermodynamical properties of the gas species. A series of hardware and software solutions were designed and introduced in the apparatus, which we will indicate as f-PcT, in order to deal with these aspects. The results are represented in terms of an accurate evaluation of the equilibrium and dynamical characteristics of the molecular hydrogen adsorption on two well-known porous media. The contribution of each experimental solution to the error propagation of the adsorbed moles is assessed. The developed volumetric apparatus for gas storage capacity measurements allows an accurate evaluation over a 4 order-of-magnitude pressure range (from 1 kPa to 8 MPa) and in temperatures ranging between 77 K and 470 K. The acquired results are in good agreement with the values reported in the literature.

  17. Inertial impaction air sampling device

    DOE Patents [OSTI]

    Dewhurst, K.H.

    1987-12-10

    An inertial impactor is to be used in an air sampling device for collection of respirable-size particles in ambient air. The device may include a graphite furnace as the impaction substrate in a small-size, portable, direct analysis structure that gives immediate results and is totally self-contained, allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by means of the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry. 3 figs.

  18. Inertial impaction air sampling device

    DOE Patents [OSTI]

    Dewhurst, K.H.

    1990-05-22

    An inertial impactor is designed which is to be used in an air sampling device for collection of respirable size particles in ambient air. The device may include a graphite furnace as the impaction substrate in a small-size, portable, direct analysis structure that gives immediate results and is totally self-contained allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by means of the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry. 3 figs.

  19. Inertial impaction air sampling device

    DOE Patents [OSTI]

    Dewhurst, Katharine H.

    1990-01-01

    An inertial impactor for use in an air sampling device for collection of respirable-size particles in ambient air. The device may include a graphite furnace as the impaction substrate in a small, portable, direct-analysis structure that gives immediate results and is totally self-contained, allowing for remote and/or personal sampling. The graphite furnace collects suspended particles transported through the housing by the air flow system, and these particles may be analyzed for elements, quantitatively and qualitatively, by atomic absorption spectrophotometry.

  20. The Ocean Sampling Day Consortium

    SciTech Connect (OSTI)

    Kopf, Anna; Bicak, Mesude; Kottmann, Renzo; Schnetzer, Julia; Kostadinov, Ivaylo; Lehmann, Katja; Fernandez-Guerra, Antonio; Jeanthon, Christian; Rahav, Eyal; Ullrich, Matthias; Wichels, Antje; Gerdts, Gunnar; Polymenakou, Paraskevi; Kotoulas, Giorgos; Siam, Rania; Abdallah, Rehab Z.; Sonnenschein, Eva C.; Cariou, Thierry; O’Gara, Fergal; Jackson, Stephen; Orlic, Sandi; Steinke, Michael; Busch, Julia; Duarte, Bernardo; Caçador, Isabel; Canning-Clode, João; Bobrova, Oleksandra; Marteinsson, Viggo; Reynisson, Eyjolfur; Loureiro, Clara Magalhães; Luna, Gian Marco; Quero, Grazia Marina; Löscher, Carolin R.; Kremp, Anke; DeLorenzo, Marie E.; Øvreås, Lise; Tolman, Jennifer; LaRoche, Julie; Penna, Antonella; Frischer, Marc; Davis, Timothy; Barker, Katherine; Meyer, Christopher P.; Ramos, Sandra; Magalhães, Catarina; Jude-Lemeilleur, Florence; Aguirre-Macedo, Ma Leopoldina; Wang, Shiao; Poulton, Nicole; Jones, Scott; Collin, Rachel; Fuhrman, Jed A.; Conan, Pascal; Alonso, Cecilia; Stambler, Noga; Goodwin, Kelly; Yakimov, Michael M.; Baltar, Federico; Bodrossy, Levente; Van De Kamp, Jodie; Frampton, Dion M. F.; Ostrowski, Martin; Van Ruth, Paul; Malthouse, Paul; Claus, Simon; Deneudt, Klaas; Mortelmans, Jonas; Pitois, Sophie; Wallom, David; Salter, Ian; Costa, Rodrigo; Schroeder, Declan C.; Kandil, Mahrous M.; Amaral, Valentina; Biancalana, Florencia; Santana, Rafael; Pedrotti, Maria Luiza; Yoshida, Takashi; Ogata, Hiroyuki; Ingleton, Tim; Munnik, Kate; Rodriguez-Ezpeleta, Naiara; Berteaux-Lecellier, Veronique; Wecker, Patricia; Cancio, Ibon; Vaulot, Daniel; Bienhold, Christina; Ghazal, Hassan; Chaouni, Bouchra; Essayeh, Soumya; Ettamimi, Sara; Zaid, El Houcine; Boukhatem, Noureddine; Bouali, Abderrahim; Chahboune, Rajaa; Barrijal, Said; Timinouni, Mohammed; El Otmani, Fatima; Bennani, Mohamed; Mea, Marianna; Todorova, Nadezhda; Karamfilov, Ventzislav; ten Hoopen, Petra; Cochrane, Guy; L’Haridon, Stephane; Bizsel, Kemal Can; Vezzi, Alessandro; Lauro, Federico M.; Martin, Patrick; Jensen, Rachelle M.; Hinks, Jamie; Gebbels, Susan; Rosselli, Riccardo; De Pascale, Fabio; Schiavon, Riccardo; dos Santos, Antonina; Villar, Emilie; Pesant, Stéphane; Cataletto, Bruno; Malfatti, Francesca; Edirisinghe, Ranjith; Silveira, Jorge A. Herrera; Barbier, Michele; Turk, Valentina; Tinta, Tinkara; Fuller, Wayne J.; Salihoglu, Ilkay; Serakinci, Nedime; Ergoren, Mahmut Cerkez; Bresnan, Eileen; Iriberri, Juan; Nyhus, Paul Anders Fronth; Bente, Edvardsen; Karlsen, Hans Erik; Golyshin, Peter N.; Gasol, Josep M.; Moncheva, Snejana; Dzhembekova, Nina; Johnson, Zackary; Sinigalliano, Christopher David; Gidley, Maribeth Louise; Zingone, Adriana; Danovaro, Roberto; Tsiamis, George; Clark, Melody S.; Costa, Ana Cristina; El Bour, Monia; Martins, Ana M.; Collins, R. Eric; Ducluzeau, Anne-Lise; Martinez, Jonathan; Costello, Mark J.; Amaral-Zettler, Linda A.; Gilbert, Jack A.; Davies, Neil; Field, Dawn; Glöckner, Frank Oliver

    2015-06-19

    Ocean Sampling Day was initiated by the EU-funded Micro B3 (Marine Microbial Biodiversity, Bioinformatics, Biotechnology) project to obtain a snapshot of the marine microbial biodiversity and function of the world’s oceans. It is a simultaneous global mega-sequencing campaign aiming to generate the largest standardized microbial data set collected in a single day. This is achievable only through the coordinated efforts of an Ocean Sampling Day Consortium and supportive partnerships and networks between sites. This commentary outlines the establishment, function, and aims of the Consortium and describes our vision for a sustainable study of marine microbial communities and their embedded functional traits.