National Library of Energy BETA

Sample records for radiochemical relative error

  1. Table 1b. Relative Standard Errors for Effective, Occupied, and...

    Energy Information Administration (EIA) (indexed site)

    Table 1b. Relative Standard Errors for Effective, Occupied, and Vacant Square Footage, 1992. Building Characteristics; All Buildings (thousand); Total...

  2. Table 2b. Relative Standard Errors for Electricity Consumption...

    Energy Information Administration (EIA) (indexed site)

    Table 2b. Relative Standard Errors for Electricity Consumption and Electricity Intensities per Square Foot, Specific to Occupied and...

  3. Table 6b. Relative Standard Errors for Total Electricity Consumption...

    Energy Information Administration (EIA) (indexed site)

    Table 6b. Relative Standard Errors for Total Electricity Consumption per Effective Occupied Square Foot, 1992. Building Characteristics; All Buildings Using Electricity (thousand); Total...
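
Several of the EIA results above report relative standard errors. The relative standard error of a survey estimate is simply its standard error expressed as a percentage of the estimate itself; a minimal sketch, with made-up numbers (the function name and example cell values are illustrative, not taken from the tables):

```python
def relative_standard_error(estimate, standard_error):
    """Relative standard error (RSE): the standard error expressed as a
    percentage of the estimate, the quantity tabulated in the RSE tables."""
    if estimate == 0:
        raise ValueError("RSE is undefined for a zero estimate")
    return 100.0 * standard_error / abs(estimate)

# Hypothetical table cell: an estimate of 4806 (thousand square feet)
# with a standard error of 120 gives an RSE of about 2.5 percent.
print(round(relative_standard_error(4806.0, 120.0), 1))  # → 2.5
```

A low RSE (a few percent) indicates a sampling-stable estimate; EIA footnotes typically flag cells whose RSE exceeds a threshold as statistically unreliable.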

  4. Monitoring and control of Urex radiochemical processes

    SciTech Connect

    Bryan, Samuel A.; Levitskaia, Tatiana G.

    2007-07-01

    There is an urgent need for methods to provide on-line monitoring and control of the radiochemical processes currently being developed and demonstrated under the Global Nuclear Energy Partnership (GNEP) initiative. The methods used to monitor these processes must be robust (require little or no maintenance) and must be able to withstand harsh environments (e.g., high radiation fields and aggressive chemical matrices). Continuous on-line monitoring offers the following benefits: accountability of the fissile materials; control of the process flowsheet; information on flow parameters, solution composition, and chemical speciation; enhanced performance by eliminating the need for traditional analytical 'grab samples'; improvement of operational and criticality safety; and elimination of human error. The objective of our project is to use a system of flow, chemical composition, and physical property measurement techniques to develop on-line, real-time monitoring systems for the UREX process streams. We will draw on our past experience in adapting and deploying a Raman spectrometer combined with Coriolis meters and conductivity probes to develop a deployable prototype monitor for the UREX radiochemical streams. This system will be augmented with a UV-vis-NIR spectrophotometer. Flow, temperature, density, and chemical composition and concentration measurements will be combined for real-time data analysis during processing. The current emphasis of our research is on evaluation of commercial instrumentation for the UREX flowsheet. (authors)

  5. Finite Bandwidth Related Errors in Noise Parameter Determination of PHEMTs

    SciTech Connect

    Wiatr, Wojciech

    2005-08-25

    We analyze errors in the determination of the four noise parameters due to finite measurement bandwidth and the delay time in the source circuit. The errors are especially large when characterizing low-noise microwave transistors at low microwave frequencies. They result from the variation of spectral noise density across the measuring receiver band, caused by resonant interaction of the highly mismatched transistor input with the source termination. We also show effects of virtual de-correlation of the transistor's noise waves due to finite delay time at the input.

  6. Radiochemical Analysis Methodology for Uranium Depletion Measurements

    SciTech Connect

    Scatena-Wachel DE

    2007-01-09

    This report provides sufficient material for a test sponsor with little or no radiochemistry background to understand and follow physics irradiation test program execution. Most irradiation test programs employ similar techniques and the general details provided here can be applied to the analysis of other irradiated sample types. Aspects of program management directly affecting analysis quality are also provided. This report is not an in-depth treatise on the vast field of radiochemical analysis techniques and related topics such as quality control. Instrumental technology is a very fast growing field and dramatic improvements are made each year, thus the instrumentation described in this report is no longer cutting edge technology. Much of the background material is still applicable and useful for the analysis of older experiments and also for subcontractors who still retain the older instrumentation.

  7. RSE Table 7.4 Relative Standard Errors for Table 7.4

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table 7.4. Unit: Percents. Column headings (truncated): Economic Characteristic(a), Electricity, Residual Fuel Oil, Distillate Fuel Oil, Natural Gas, LPG and...

  8. RSE Table 7.5 Relative Standard Errors for Table 7.5

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table 7.5. Unit: Percents. Column headings (truncated): Economic Characteristic(a), Electricity, Residual Fuel Oil, Distillate Fuel Oil, Natural Gas, LPG and...

  9. Table 4b. Relative Standard Errors for Total Fuel Oil Consumption...

    Gasoline and Diesel Fuel Update

    Table 4b. Relative Standard Errors for Total Fuel Oil Consumption per Effective Occupied Square Foot, 1992. Building Characteristics; All Buildings Using Fuel Oil (thousand); Total Fuel Oil...

  10. Radiochemical technique for intensification of underexposed autoradiographs

    SciTech Connect

    Owunwanne, A.

    1984-04-01

    A radiochemical technique has been used to recover images of underexposed and developed autoradiographs. The underexposed image was radioactivated in a solution of (/sup 35/S)thiourea, air-dried, and reexposed to Kodak NMC film which was developed and processed in a Kodak X-Omat processor. Features which were not discernible in the underexposed autoradiographs were well distinguished in the intensified autoradiograph.

  11. Chemical and Radiochemical Analyses of Waste Isolation Pilot...

    Office of Environmental Management (EM)

    This document corresponds to Appendix C: Analysis Integrated Summary Report of the Technical Assessment Team Report. Chemical and Radiochemical Analyses of Waste Isolation Pilot ...

  12. RSE Table 8.2 Relative Standard Errors for Table 8.2

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table 8.2. Unit: Percents. Column headings (truncated): NAICS Code(a), Subsector and Industry, Computer Control of Building-Wide Environment(c), Computer Control of Processes or Major Energy-Using Equipment(d), Waste Heat Recovery, Adjustable-Speed Motors, Oxy-Fuel Firing.

  13. RSE Table 2.2 Relative Standard Errors for Table 2.2

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table 2.2. Unit: Percents. Column headings (truncated): NAICS Code(a), Subsector and Industry, Residual Fuel Oil, Distillate Fuel Oil, Natural Gas, LPG and NGL, Coke...

  14. RSE Table 5.1 Relative Standard Errors for Table 5.1

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table 5.1. Unit: Percents. Column headings (truncated): NAICS Code(a), Net Electricity, Residual Fuel Oil, Distillate Fuel Oil and Diesel Fuel, Natural Gas, LPG and NGL, Coal (excluding Coal Coke and Breeze).

  15. RSE Table 5.2 Relative Standard Errors for Table 5.2

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table 5.2. Unit: Percents. Column headings (truncated): NAICS Code(a), End Use, Net Electricity, Residual Fuel Oil, Distillate Fuel Oil and Diesel Fuel, Natural Gas, LPG and NGL, Coal (excluding Coal Coke and Breeze).

  16. RSE Table 5.4 Relative Standard Errors for Table 5.4

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table 5.4. Unit: Percents. Column headings (truncated): NAICS Code(a), End Use, Net Demand for Electricity(b), Residual Fuel Oil, Distillate Fuel Oil and Diesel Fuel, Natural Gas, LPG and NGL, Coal (excluding Coal Coke and Breeze).

  17. RSE Table 5.5 Relative Standard Errors for Table 5.5

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table 5.5. Unit: Percents. Column headings (truncated): End Use, Total, Net Electricity(a), Residual Fuel Oil, Distillate Fuel Oil and Diesel Fuel, Natural Gas, LPG and NGL, Coal (excluding Coal Coke and Breeze).

  18. RSE Table 5.6 Relative Standard Errors for Table 5.6

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table 5.6. Unit: Percents. Column headings (truncated): End Use, Total, Net Electricity(a), Residual Fuel Oil, Distillate Fuel Oil and Diesel Fuel, Natural Gas, LPG and NGL, Coal (excluding Coal Coke and Breeze).

  19. RSE Table 5.7 Relative Standard Errors for Table 5.7

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table 5.7. Unit: Percents. Column headings: End Use, Net Demand for Electricity(a), Residual Fuel Oil, Distillate Fuel Oil and Diesel Fuel(b), Natural Gas(c), LPG and NGL(d), Coal (excluding Coal Coke and Breeze).

  20. RSE Table 5.8 Relative Standard Errors for Table 5.8

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table 5.8. Unit: Percents. Column headings (truncated): End Use, Net Demand for Electricity(a), Residual Fuel Oil, Distillate Fuel Oil and Diesel...

  1. RSE Table E7.2. Relative Standard Errors for Table E7.2...

    Energy Information Administration (EIA) (indexed site)

    ... a Relative Standard Error (RSE) percentage is provided for each table cell. Operating ratios were calculated using the estimates of fuel consumption reported in Table N3.2. Source: Energy ...

  2. Preliminary petrographic and radiochemical study of Kiev reservoir sediments

    SciTech Connect

    Neiheisel, J.; Dyer, R.S.

    1992-01-01

    The Office of Radiation Programs, US Environmental Protection Agency, in cooperation with the Ukraine Ministry for Environmental Protection, is conducting investigations of the impact of Chernobyl radioactivity on the environment and the feasibility of treatability measures. One of the major considerations in this study is the Kiev Reservoir System and testing of methods applicable to treatment of drinking water. Studies of four sediment samples from the lower Kiev Reservoir, fractionated into several size fractions using detailed petrographic and radiochemical methods, have provided preliminary data on radionuclide association with specific sediment composition and texture. Cesium-134 and -137 activity ranges from 0.65 to 8.71 pCi/g in the gravelly, silty, sand-sized sediment. The significant activity in the coarse fractions is limited to minor organic plant material (49.3 pCi/g radiocesium), and in the fine silt and clay-sized fraction (containing illite) the radiocesium activity ranges up to 69.8 pCi/g. Thus, a very small fraction of the sediment volume, with distinctive size and physical properties, contains the bulk of the radiocesium. Preliminary studies of the uranium and plutonium isotopes in the sediment reveal overall low activity levels, with most uranium association related to natural minerals.

  3. Motoneuron axon pathfinding errors in zebrafish: Differential effects related to concentration and timing of nicotine exposure

    SciTech Connect

    Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.

    2015-04-01

    Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later-born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independently of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the three subpopulations of SMN axons differently, but the dorsal-projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal-projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early-born primary motoneurons (PMNs), we performed dual-labeling studies in which both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the level of nicotine and the developmental exposure window. - Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner.

  4. RSE Table N7.1. Relative Standard Errors for Table N7.1...

    Energy Information Administration (EIA) (indexed site)

  5. RSE Table N13.3. Relative Standard Errors for Table N13.3...

    Energy Information Administration (EIA) (indexed site)

  6. RSE Table N13.1. Relative Standard Errors for Table N13.1...

    Energy Information Administration (EIA) (indexed site)

  7. Radiochemical Mix Diagnostic in the Presence of Burn

    SciTech Connect

    Hayes, Anna C.

    2014-01-28

    There is a general interest in radiochemical probes of hydrodynamic mix in the burning regions of a NIF capsule. Here we provide estimates for the production of 13N from mixing of the 10B ablator into the burning hotspot of a capsule. By comparing the 13N signal with x-ray measurements of the ablator mix into the hotspot, it should be possible to estimate the chunkiness of this mix.

  8. Hydrological/Geological Studies: Radiochemical Analyses of Water

    Office of Legacy Management (LM)

    Hydrological/Geological Studies: Radiochemical Analyses of Water Samples from Selected Streams, Wells, Springs and Precipitation Collected Prior to Re-Entry Drilling, Project Rulison-6, 1971, HGS 7. DISCLAIMER: Portions of this document may be illegible in electronic image products. Images are produced from the best available original document. Prepared Under Agreement No. AT(29-2)-474 for the Nevada Operations Office, U.S. Atomic Energy Commission. PROPERTY OF U.S. GOVERNMENT - UNITED

  9. Radiochemical Analyses of Water Samples from Selected Streams

    Office of Legacy Management (LM)

    ... and Precipitation Collected in Connection with Calibration-Test Flaring of Gas from Test Well, August 15-October 13, 1970, Project Rulison-8, 1971, HGS 9. DISCLAIMER: Portions of this document may be illegible in electronic image products. Images are produced from the best available original document. UNITED STATES DEPARTMENT OF THE INTERIOR, GEOLOGICAL SURVEY, Federal Center, Denver, Colorado 80225. RADIOCHEMICAL ANALYSES OF WATER SAMPLES FROM SELECTED STREAMS AND PRECIPITATION

  10. RSE Table 2.1 Relative Standard Errors for Table 2.1

    Energy Information Administration (EIA) (indexed site)

  11. RSE Table C1.1. Relative Standard Errors for Table C1.1

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table C1.1. Unit: Percents. Column headings (truncated): NAICS Code(a), Any Energy, Net, Residual, Distillate, LPG, Shipments...

  12. RSE Table C10.2. Relative Standard Errors for Table C10.2

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table C10.2. Unit: Percents. Column headings (truncated): Establishments with Any; Steam Turbines Supplied by Either Conventional Combustion Turbines or Internal Combustion Engines; Steam Turbines Supplied by Heat...

  13. RSE Table C10.3. Relative Standard Errors for Table C10.3

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table C10.3. Unit: Percents. Column headings: NAICS Code(a), Industry-Specific Technology, In Use(b), Not in Use, Don't Know. Total United States, 311 (FOOD): Infrared Heating 3, 1, 2; Microwave Drying 5, 1, 3; Closed-Cycle Heat Pump System Used to Recover Heat 7, 1, 3; Open-Cycle Heat Pump System Used to Produce...

  14. RSE Table C9.1. Relative Standard Errors for Table C9.1

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table C9.1. Unit: Percents. Column headings (truncated): NAICS Code(a), Energy-Management Activity, No Participation, General Participation(b), Amount of Establishment-Paid Activity Cost (All, Some, None, Don't Know).

  15. RSE Table N1.3. Relative Standard Errors for Table N1.3

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table N1.3. Unit: Percents. Columns: Energy Source, Total First Use. Total United States: Coal 3; Natural Gas 1; Net Electricity 1; Purchases 1; Transfers In 9; Onsite Generation from Noncombustible Renewable Energy 15; Sales and Transfers Offsite 3; Coke and Breeze 2; Residual Fuel...

  16. Guiding Principles for Sustainable Existing Buildings: Radiochemical Processing Laboratory

    SciTech Connect

    Pope, Jason E.

    2013-11-11

    In 2006, the United States (U.S.) Department of Energy (DOE) signed the Federal Leadership in High Performance and Sustainable Buildings Memorandum of Understanding (MOU), along with 21 other agencies. Pacific Northwest National Laboratory (PNNL) is exceeding this requirement and, currently, about 25 percent of its buildings are High Performance and Sustainable Buildings. The pages that follow document the Guiding Principles conformance effort for the Radiochemical Processing Laboratory (RPL) at PNNL. The RPL effort is part of continued progress toward a building inventory that is 100 percent compliant with the Guiding Principles.

  17. RSE Table N5.2. Relative Standard Errors for Table N5.2

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Table N5.2. Unit: Percents. Column headings (truncated): Selected Wood and Wood-Related Products; Biomass; Pulping Liquor; Wood Residues and Wood-Related Byproducts...

  18. Facility Effluent Monitoring Plan for the 325 Radiochemical Processing Laboratory

    SciTech Connect

    Shields, K.D.; Ballinger, M.Y.

    1999-04-02

    This Facility Effluent Monitoring Plan (FEMP) has been prepared for the 325 Building Radiochemical Processing Laboratory (RPL) at the Pacific Northwest National Laboratory (PNNL) to meet the requirements in DOE Order 5400.1, ''General Environmental Protection Programs.'' This FEMP has been prepared for the RPL primarily because it has a ''major'' (potential to emit >0.1 mrem/yr) emission point for radionuclide air emissions according to the annual National Emission Standards for Hazardous Air Pollutants (NESHAP) assessment performed. This section summarizes the airborne and liquid effluents and the inventory based NESHAP assessment for the facility. The complete monitoring plan includes characterization of effluent streams, monitoring/sampling design criteria, a description of the monitoring systems and sample analysis, and quality assurance requirements. The RPL at PNNL houses radiochemistry research, radioanalytical service, radiochemical process development, and hazardous and radioactive mixed waste treatment activities. The laboratories and specialized facilities enable work ranging from that with nonradioactive materials to work with picogram to kilogram quantities of fissionable materials and up to megacurie quantities of other radionuclides. The special facilities within the building include two shielded hot-cell areas that provide for process development or analytical chemistry work with highly radioactive materials and a waste treatment facility for processing hazardous, mixed radioactive, low-level radioactive, and transuranic wastes generated by PNNL activities.

  19. Error abstractions

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Error and fault abstractions, Mattan Erez, UT Austin. Who should care about faults and errors? Ideally, only the system cares about masked faults - assuming application bugs are not...

  20. Handling of Ammonium Nitrate Mother-Liquid Radiochemical Production - 13089

    SciTech Connect

    Zherebtsov, Alexander; Dvoeglazov, Konstantine; Volk, Vladimir; Zagumenov, Vladimir; Zverev, Dmitriy; Tinin, Vasiliy; Kozyrev, Anatoly; Shamin, Dladimir; Tvilenev, Konstantin

    2013-07-01

    The aim of this work is to develop a basic technology for the decomposition of ammonium nitrate stock solutions produced at radiochemical enterprises engaged in the reprocessing of irradiated nuclear fuel and the fabrication of fresh fuel. It was necessary to work out how to conduct a one-step thermal decomposition of ammonium nitrate, to select and test catalysts for this process, and to prepare proposals for recycling the condensate. The necessary accessories were added to a laboratory installation for ammonium nitrate decomposition. Several types of reducing agents and two types of catalyst for neutralizing the nitrogen oxides were tested. Process modes were tested to produce a condensate suitable for use in the conversion step of a new technological scheme of production. The structure of the catalysts was studied before and after their use in the laboratory setting, and the selected catalyst was tested in its optimal range for 48 hours of continuous operation. (authors)

  1. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of performance versus error level for cases with multiple seeds illustrates the variations attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
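
The multi-seed evaluation described in this abstract can be illustrated with a toy Monte Carlo loop. The model below is a hypothetical stand-in for a figure of merit, not the FELEX code or its physics; it only shows the pattern of sweeping error levels under several random-number seeds and comparing the resulting scatter:

```python
import random

def toy_gain(error_level, rng):
    """Toy figure of merit: relative gain degrades as accumulated random
    field perturbations grow (an illustrative stand-in, not FEL physics)."""
    perturbation = sum(rng.gauss(0.0, error_level) for _ in range(100))
    return 1.0 / (1.0 + perturbation ** 2)

error_levels = [0.001, 0.005, 0.01, 0.05]
for seed in range(5):                      # several random-number seeds
    rng = random.Random(seed)
    gains = [round(toy_gain(level, rng), 3) for level in error_levels]
    print(f"seed {seed}: {gains}")
```

Plotting all seeds on one performance-versus-error-level chart, as the abstract describes, shows how much of the variation is stochastic rather than systematic.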

  2. Error Page

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    writes out the header html. We are sorry to report that an error has occurred. Internal identifier for doc type not found. Return to RevCom | Return to Web Portal Need help? Email...

  3. RSE Table N8.1 and N8.2. Relative Standard Errors for Tables N8.1 and N8.2

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Tables N8.1 and N8.2. Unit: Percents. Column-heading groups (truncated): Selected Wood and Other Biomass Components; Coal Components; Coke; Electricity Components; Natural Gas Components; Steam Components; Total; Wood...

  4. EIA - Sorry! Unexpected Error

    Energy Information Administration (EIA) (indexed site)

    ColdFusion Error: Unexpected Error. Sorry, an error was encountered. This error could be due to scheduled maintenance. Information about the error has been routed to the appropriate...

  5. Systematic Errors

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Systematic Errors of MiniBooNE K. B. M. Mahn, for the MiniBooNE collaboration Physics Department, Mail Code 9307, Columbia University, New York, NY 10027, USA Abstract. Modern neutrino oscillation experiments use a 'near to far' ratio to observe oscillation; many systematic errors cancel in a ratio between the near detector's unoscillated event sample and the far detector's oscillated one. Similarly, MiniBooNE uses a νe to νµ ratio, which reduces any common uncertainty in both samples.
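
The cancellation described above can be shown numerically: a systematic that multiplies both samples by the same factor drops out of the ratio. The event counts below are invented for illustration, not MiniBooNE data:

```python
import math

# A common flux-normalization systematic scales both samples by the same
# factor, so it divides out of the far/near ratio (toy numbers only).
near_events, far_events = 10000.0, 250.0
flux_shift = 1.10                     # a common +10% systematic shift

nominal_ratio = far_events / near_events
shifted_ratio = (far_events * flux_shift) / (near_events * flux_shift)
print(math.isclose(nominal_ratio, shifted_ratio))  # → True
```

Only uncertainties that affect the two samples differently (e.g., detector-specific efficiencies) survive in the ratio, which is why the ratio method shrinks the overall error budget.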

  6. RAPID RADIOCHEMICAL ANALYSES IN SUPPORT OF FUKUSHIMA NUCLEAR ACCIDENT

    SciTech Connect

    Maxwell, S.

    2012-11-07

    Air filter samples were reported within twenty-four (24) hours of receipt using rapid techniques published previously. The rapid reporting of high quality analytical data arranged through the U.S. Department of Energy Consequence Management Home Team was critical to allow the government of Japan to readily evaluate radiological impacts from the nuclear reactor incident to both personnel and the environment. SRNL employed unique rapid methods capability for radionuclides to support Japan that can also be applied to environmental, bioassay and waste management samples. New rapid radiochemical techniques for radionuclides in soil and other environmental matrices as well as some of the unique challenges associated with this work will be presented that can be used for application to environmental monitoring, environmental remediation, decommissioning and decontamination activities.

  7. Rapid Radiochemical Analyses in Support of Fukushima Nuclear Accident - 13196

    SciTech Connect

    Maxwell, Sherrod L.; Culligan, Brian K.; Hutchison, Jay B.

    2013-07-01

    Air filter samples were reported within twenty-four (24) hours of receipt using rapid techniques published previously. [11] The rapid reporting of high quality analytical data arranged through the U.S. Department of Energy Consequence Management Home Team was critical to allow the government of Japan to readily evaluate radiological impacts from the nuclear reactor incident to both personnel and the environment. SRNL employed unique rapid methods capability for radionuclides to support Japan that can also be applied to environmental, bioassay and waste management samples. New rapid radiochemical techniques for radionuclides in soil and other environmental matrices as well as some of the unique challenges associated with this work will be presented that can be used for application to environmental monitoring, environmental remediation, decommissioning and decontamination activities. (authors)

  8. RSE Table E6.1 and E6.2. Relative Standard Errors for Tables E6.1 and E6.2

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Tables E6.1 and E6.2. Unit: Percents. Column headings (truncated): End Use, Total, Net Electricity(a), Residual Fuel Oil, Distillate Fuel Oil and Diesel Fuel, LPG and NGL, Coal (excluding Coal Coke and Breeze).

  9. RSE Table N2.1 and N2.2. Relative Standard Errors for Tables N2.1 and N2.2

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Tables N2.1 and N2.2. Unit: Percents. Column headings: NAICS Code(a), Subsector and Industry, Total, Residual Fuel Oil, Distillate Fuel Oil(b), Natural Gas(c), LPG and NGL(d), Coal, Coke and Breeze, Other(e).

  10. RSE Table N4.1 and N4.2. Relative Standard Errors for Tables N4.1 and N4.2

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Tables N4.1 and N4.2. Unit: Percents. Column headings (truncated): NAICS Code(a), Subsector and Industry, Residual Fuel Oil, Distillate Fuel Oil, LPG and NGL, Coke...

  11. RSE Table N6.1 and N6.2. Relative Standard Errors for Tables N6.1 and N6.2

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Tables N6.1 and N6.2. Unit: Percents. Column headings (truncated): NAICS Code(a), End Use, Net Electricity, Residual Fuel Oil, Distillate Fuel Oil and Diesel Fuel, LPG and NGL, Coal (excluding Coal Coke and Breeze).

  12. RSE Table N6.3 and N6.4. Relative Standard Errors for Tables N6.3 and N6.4

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Tables N6.3 and N6.4. Unit: Percents. Column headings (truncated): NAICS Code(a), End Use, Net Demand for Electricity(b), Residual Fuel Oil, Distillate Fuel Oil and Diesel Fuel, LPG and NGL, Coal (excluding Coal Coke and Breeze).

  13. RSE Table S1.1 and S1.2. Relative Standard Errors for Tables S1.1 and S1.2

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Tables S1.1 and S1.2. Unit: Percents. Column headings (truncated): SIC Code, Net, Residual, Distillate, LPG, Shipments...

  14. RSE Table S2.1 and S2.2. Relative Standard Errors for Tables S2.1 and S2.2

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Tables S2.1 and S2.2. Unit: Percents. Column headings (truncated): SIC Code(a), Major Group and Industry, Total, Residual Fuel Oil, Distillate Fuel...

  15. RSE Table S3.1 and S3.2. Relative Standard Errors for Tables S3.1 and S3.2

    Energy Information Administration (EIA) (indexed site)

    Relative Standard Errors for Tables S3.1 and S3.2. Unit: Percents. Column headings (truncated): SIC Code(a), Major Group and Industry, Total, Net Electricity(b), Residual Fuel Oil, Distillate Fuel Oil(c), Natural...

  16. Enterprise Assessments Targeted Review of the Safety-Significant Systems at the Pacific Northwest National Laboratory Radiochemical Processing Laboratory – July 2015

    Energy.gov [DOE]

    Targeted Review of the Safety-Significant Systems at the Pacific Northwest National Laboratory Radiochemical Processing Laboratory

  17. Literature search, review, and compilation of data for chemical and radiochemical sensors: Task 1 report

    SciTech Connect

    1993-01-01

During the next several decades, the US Department of Energy is expected to spend tens of billions of dollars on the characterization, cleanup, and monitoring of DOE's current and former installations, which have various degrees of soil and groundwater contamination from both hazardous and mixed wastes. Each of these phases will require site surveys to determine the type and quantity of hazardous and mixed wastes. It is generally recognized that these required survey and monitoring efforts cannot be performed using traditional chemistry methods based on laboratory evaluation of samples from the field. For that reason, there has been a tremendous push during the past decade or so in research and development of sensors. This report contains the results of an extensive literature search on sensors that are used, or have applicability, in environmental and waste management. While the search was restricted to a relatively small part of the total chemistry spectrum, a sizable body of reference material is included. Results are presented in tabular form for general references obtained from database searches, as narrative reviews of relevant chapters from proceedings, as book reviews, and as reviews of journal articles with particular relevance to the review. Four broad sensor types are covered: electrochemical processes, piezoelectric devices, fiber optics, and radiochemical processes. The topics of surface chemistry processes and biosensors are not treated separately because they are often an adjunct to one of the four sensor types listed. About 1,000 tabular entries are listed, including selected journal articles, reviews of conference/meeting proceedings, and books. Literature to about mid-1992 is covered.

  18. Statistical analysis of radiochemical measurements of TRU radionuclides in REDC waste

    SciTech Connect

    Beauchamp, J.; Downing, D.; Chapman, J.; Fedorov, V.; Nguyen, L.; Parks, C.; Schultz, F.; Yong, L.

    1996-10-01

This report summarizes the results of a study on the isotopic ratios of transuranium elements in waste from the Radiochemical Engineering Development Center actinide-processing streams. Knowledge of the isotopic ratios, when combined with the results of nondestructive assays, in particular the Active-Passive Neutron Examination Assay and the Gamma Active Segmented Passive Assay, may lead to a significant increase in the precision of determining the TRU elements contained in ORNL-generated waste streams.

  19. Errors of Nonobservation

    Energy Information Administration (EIA) (indexed site)

    Errors of Nonobservation Finally, several potential sources of nonsampling error and bias result from errors of nonobservation. The 1994 MECS represents, in terms of sampling...

  20. Conceptual Design for the Pilot-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    SciTech Connect

    Lumetta, Gregg J.; Meier, David E.; Tingey, Joel M.; Casella, Amanda J.; Delegard, Calvin H.; Edwards, Matthew K.; Jones, Susan A.; Rapko, Brian M.

    2014-08-05

    This report describes a conceptual design for a pilot-scale capability to produce plutonium oxide for use as exercise and reference materials, and for use in identifying and validating nuclear forensics signatures associated with plutonium production. This capability is referred to as the Pilot-scale Plutonium oxide Processing Unit (P3U), and it will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including plutonium dioxide (PuO2) dissolution, purification of the Pu by ion exchange, precipitation, and conversion to oxide by calcination.

  1. Energy and Water Conservation Assessment of the Radiochemical Processing Laboratory (RPL) at Pacific Northwest National Laboratory

    SciTech Connect

    Johnson, Stephanie R.; Koehler, Theresa M.; Boyd, Brian K.

    2014-05-31

This report summarizes the results of an energy and water conservation assessment of the Radiochemical Processing Laboratory (RPL) at Pacific Northwest National Laboratory (PNNL). The assessment was performed in October 2013 by engineers from the PNNL Building Performance Team with the support of the dedicated RPL staff and several Facilities and Operations (F&O) department engineers. The assessment was completed for the F&O department at PNNL in support of the requirements within Section 432 of the Energy Independence and Security Act (EISA) of 2007.

  2. Design of the Laboratory-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    SciTech Connect

    Lumetta, Gregg J.; Meier, David E.; Tingey, Joel M.; Casella, Amanda J.; Delegard, Calvin H.; Edwards, Matthew K.; Orton, Robert D.; Rapko, Brian M.; Smart, John E.

    2015-05-01

    This report describes a design for a laboratory-scale capability to produce plutonium oxide (PuO2) for use in identifying and validating nuclear forensics signatures associated with plutonium production, as well as for use as exercise and reference materials. This capability will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including PuO2 dissolution, purification of the Pu by ion exchange, precipitation, and re-conversion to PuO2 by calcination.

  3. Spectroscopic Online Monitoring for Process Control and Safeguarding of Radiochemical Fuel Reprocessing Streams

    SciTech Connect

    Bryan, Samuel A.; Levitskaia, Tatiana G.; Casella, Amanda J.; Peterson, James M.

    2013-02-24

There is renewed interest worldwide in promoting the use of nuclear power and closing the nuclear fuel cycle. The long-term successful use of nuclear power is critically dependent upon adequate and safe processing and disposition of the spent nuclear fuel. Liquid-liquid extraction is a separation technique commonly employed for the processing of the dissolved spent nuclear fuel. The instrumentation used to monitor these processes must be robust, require little or no maintenance, and be able to withstand harsh environments such as high radiation fields and aggressive chemical matrices. In addition, the ability for continuous online monitoring allows for numerous benefits. This paper reviews the application of absorption and vibrational spectroscopic techniques, supplemented by physicochemical measurements, for radiochemical process monitoring. In this context, our team experimentally assessed the potential of Raman and spectrophotometric techniques for on-line real-time monitoring of U(VI)/nitrate ion/nitric acid and Pu(IV)/Np(V)/Nd(III), respectively, in solutions relevant to spent fuel reprocessing. Both techniques demonstrated robust performance in repetitive batch measurements of each analyte over a wide concentration range using simulant and commercial dissolved spent fuel solutions. Static spectroscopic measurements served as training sets for multivariate data analysis to obtain partial least squares predictive models, which were validated using on-line centrifugal contactor extraction tests. Satisfactory prediction of the analyte concentrations in these preliminary experiments warrants further development of the spectroscopy-based methods for radiochemical safeguards and process control.
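The calibration workflow this abstract describes (train a model on spectra of known composition, then predict concentrations from new spectra alone) can be illustrated with a deliberately simplified sketch. The two-wavelength Beer's-law model, the numbers, and the use of ordinary least squares in place of partial least squares are all assumptions for illustration, not the authors' method:

```python
# Hypothetical calibration sketch: absorbance at two wavelengths is a linear
# mixture of two analytes (Beer's law, noise-free); we fit a regression vector
# mapping a spectrum to the concentration of analyte 1.

def fit_2x2(X, y):
    """Least-squares fit of y ~ X @ b for a full-rank 2-column design (Cramer's rule)."""
    s11 = sum(r[0] * r[0] for r in X)
    s12 = sum(r[0] * r[1] for r in X)
    s22 = sum(r[1] * r[1] for r in X)
    t1 = sum(r[0] * yi for r, yi in zip(X, y))
    t2 = sum(r[1] * yi for r, yi in zip(X, y))
    det = s11 * s22 - s12 * s12
    return ((t1 * s22 - t2 * s12) / det, (s11 * t2 - s12 * t1) / det)

# Assumed molar absorptivities (illustrative numbers only)
EPS = ((0.9, 0.1),   # wavelength 1: contributions of analytes 1 and 2
       (0.2, 0.8))   # wavelength 2

def spectrum(c1, c2):
    """Absorbance at both wavelengths for a mixture (no noise)."""
    return (EPS[0][0] * c1 + EPS[0][1] * c2,
            EPS[1][0] * c1 + EPS[1][1] * c2)

# "Training set": mixtures of known composition; target = analyte 1 concentration
train = [(1.0, 0.0), (0.0, 1.0), (2.0, 1.0), (1.0, 2.0)]
X = [spectrum(c1, c2) for c1, c2 in train]
y = [c1 for c1, _ in train]
b = fit_2x2(X, y)

# Predict analyte 1 in an unseen mixture from its spectrum alone
pred = sum(bi * ai for bi, ai in zip(b, spectrum(1.5, 0.5)))
print(round(pred, 6))  # recovers 1.5 exactly in this noise-free sketch
```

In real process streams, the training spectra are noisy and strongly overlapping, which is why the paper uses partial least squares rather than plain regression; the train-then-predict structure is the same.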

  4. Spectroscopic online monitoring for process control and safeguarding of radiochemical streams

    SciTech Connect

    Bryan, S.A.; Levitskaia, T.G.

    2013-07-01

This paper summarizes the application of absorption and vibrational spectroscopic techniques, supplemented by physicochemical measurements, for radiochemical process monitoring. In this context, our team experimentally assessed the potential of Raman and spectrophotometric techniques for online real-time monitoring of U(VI)/nitrate ion/nitric acid and Pu(IV)/Np(V)/Nd(III), respectively, in solutions relevant to spent fuel reprocessing. These techniques demonstrated robust performance in repetitive batch measurements of each analyte over a wide concentration range using simulant and commercial dissolved spent fuel solutions. Spectroscopic measurements served as training sets for multivariate data analysis to obtain partial least squares predictive models, which were validated using on-line centrifugal contactor extraction tests. Satisfactory prediction of the analyte concentrations in these preliminary experiments warrants further development of the spectroscopy-based methods for radiochemical process control and safeguarding. Additionally, the ability to identify material intentionally diverted from a liquid-liquid extraction contactor system was successfully tested using on-line process monitoring as a means to detect the amount of material diverted. (authors)

  5. Spectroscopic Online Monitoring for Process Control and Safeguarding of Radiochemical Fuel Reprocessing Streams - 13553

    SciTech Connect

    Bryan, S.A.; Levitskaia, T.G.; Casella, Amanda; Peterson, James

    2013-07-01

There is renewed interest worldwide in promoting the use of nuclear power and closing the nuclear fuel cycle. The long-term successful use of nuclear power is critically dependent upon adequate and safe processing and disposition of the used nuclear fuel. Liquid-liquid extraction is a separation technique commonly employed for the processing of the dissolved spent nuclear fuel. The instrumentation used to monitor these processes must be robust, require little or no maintenance, and be able to withstand harsh environments such as high radiation fields and aggressive chemical matrices. This paper discusses the application of absorption and vibrational spectroscopic techniques, supplemented by physicochemical measurements, for radiochemical process monitoring. In this context, our team experimentally assessed the potential of Raman and spectrophotometric techniques for on-line real-time monitoring of U(VI)/nitrate ion/nitric acid and Pu(IV)/Np(V)/Nd(III), respectively, in solutions relevant to spent fuel reprocessing. Both techniques demonstrated robust performance in repetitive batch measurements of each analyte over a wide concentration range using simulant and commercial dissolved spent fuel solutions. Static spectroscopic measurements served as training sets for multivariate data analysis to obtain partial least squares predictive models, which were validated using on-line centrifugal contactor extraction tests. Satisfactory prediction of the analyte concentrations in these preliminary experiments warrants further development of the spectroscopy-based methods for radiochemical safeguards and process control. (authors)

  6. Radiochemical procedures for analysis of Pu, Am, Cs and Sr in water, soil, sediments and biota samples

    SciTech Connect

    Wong, K.M.; Jokela, T.A.; Noshkin, V.E.

    1994-02-01

The Environmental Radioactivity Analysis Laboratory (ERAL) was established as an analytical facility. The primary function of ERAL is to provide fast and accurate radiological data for environmental samples. Over the years, many radiochemical procedures have been developed by the ERAL staff. As a result, we have found that our procedures exist in many different formats and in many different notebooks, documents, and files. Therefore, to provide more complete and orderly documentation of the radiochemical procedures being used by ERAL, we have decided to standardize the format and compile them into a series of reports. This first report covers procedures we have developed and are using for the radiochemical analysis of Pu, Am, Cs, and Sr in various matrices. Additional analytical procedures and/or revisions for other elements will be reported as they become available through continuation of this compilation effort.

  7. Analysis of 161Tb by radiochemical separation and liquid scintillation counting

    SciTech Connect

    Jiang, J.; Davies, A.; Arrigo, L.; Friese, J.; Seiner, B. N.; Greenwood, L.; Finch, Z.

    2015-12-05

The determination of 161Tb activity is problematic due to its very low fission yield, short half-life, and the complexity of its gamma spectrum. At AWE, radiochemically purified 161Tb solution was measured on a PerkinElmer 1220 Quantulus™ Liquid Scintillation Spectrometer. Since no certified 161Tb standard solution was available commercially, the counting efficiency was determined by the CIEMAT/NIST efficiency tracing method. The method was validated during a recent inter-laboratory comparison exercise involving the analysis of a uranium sample irradiated with thermal neutrons. The measured 161Tb result was in excellent agreement with the result using gamma spectrometry and the result obtained by Pacific Northwest National Laboratory.

  8. Analysis of 161Tb by radiochemical separation and liquid scintillation counting

    DOE PAGES [OSTI]

    Jiang, J.; Davies, A.; Arrigo, L.; Friese, J.; Seiner, B. N.; Greenwood, L.; Finch, Z.

    2015-12-05

The determination of 161Tb activity is problematic due to its very low fission yield, short half-life, and the complexity of its gamma spectrum. At AWE, radiochemically purified 161Tb solution was measured on a PerkinElmer 1220 Quantulus™ Liquid Scintillation Spectrometer. Since no certified 161Tb standard solution was available commercially, the counting efficiency was determined by the CIEMAT/NIST efficiency tracing method. The method was validated during a recent inter-laboratory comparison exercise involving the analysis of a uranium sample irradiated with thermal neutrons. The measured 161Tb result was in excellent agreement with the result using gamma spectrometry and the result obtained by Pacific Northwest National Laboratory.

  9. Project Title: Radiochemical Analysis by High Sensitivity Dual-Optic Micro X-ray Fluorescence

    SciTech Connect

    Havrilla, George J.; Gao, Ning

    2002-06-01

A novel dual-optic micro X-ray fluorescence instrument will be developed to perform radiochemical analysis of high-level radioactive wastes at DOE sites such as the Savannah River Site and Hanford. This concept incorporates new X-ray optical elements, such as monolithic polycapillaries and doubly bent crystals, which focus X-rays. The polycapillary optic can be used to focus X-rays emitted by the X-ray tube, thereby increasing the X-ray flux on the sample over 1000 times. Polycapillaries will also be used to collect the X-rays from the excitation site and screen out the radiation background from the radioactive species in the specimen. This dual-optic approach significantly reduces the background and increases the analyte signal, thereby increasing the sensitivity of the analysis. A doubly bent crystal used as the focusing optic produces focused monochromatic X-ray excitation, which eliminates the bremsstrahlung background from the X-ray source. The coupling of the doubly bent crystal for monochromatic excitation with a polycapillary for signal collection can effectively eliminate the noise background and the radiation background from the specimen. The integration of these X-ray optics increases the signal-to-noise ratio and thereby the sensitivity of the analysis for low-level analytes. This work will address a key need for radiochemical analysis of high-level waste using a non-destructive, multi-element, and rapid method in a radiation environment. There is significant potential that this instrumentation could be capable of on-line analysis for process waste stream characterization at DOE sites.

  10. The Radiochemical Analysis of Gaseous Samples (RAGS) Apparatus for Nuclear Diagnostics at the National Ignition Facility

    SciTech Connect

    Shaughnessy, D A; Velsko, C A; Jedlovec, D R; Yeamans, C B; Moody, K J; Tereshatov, E; Stoeffl, W; Riddle, A

    2012-05-11

The RAGS (Radiochemical Analysis of Gaseous Samples) diagnostic apparatus was recently installed at the National Ignition Facility. Following a NIF shot, RAGS is used to pump the gas load from the NIF chamber for purification and isolation of the noble gases. After collection, the activated gaseous species are counted via gamma spectroscopy for measurement of the capsule areal density and fuel-ablator mix. Collection efficiency was determined by injecting a known amount of 135Xe into the NIF chamber, which was then collected with RAGS. Commissioning was performed with an exploding pusher capsule filled with isotopically enriched 124Xe and 126Xe added to the DT gas fill. Activated xenon species were recovered post-shot and counted via gamma spectroscopy. Results from the collection and commissioning tests are presented. The performance of RAGS allows us to establish a noble gas collection method for measurement of noble gas species produced via neutron and charged particle reactions in a NIF capsule.

  11. Radiochemical Assays of Irradiated VVER-440 Fuel for Use in Spent Fuel Burnup Credit Activities

    SciTech Connect

    Jardine, L J

    2005-04-25

The objective of this spent fuel burnup credit work was to study and describe a VVER-440 reactor spent fuel assembly (FA) initial state before irradiation, its operational irradiation history and the resulting radionuclide distribution in the fuel assembly after irradiation. This work includes the following stages: (1) to pick out and select a specific spent (irradiated) FA for examination; (2) to describe the FA initial state before irradiation; (3) to describe the irradiation history, including thermal calculations; (4) to examine the burnup distribution of select radionuclides along the FA height and cross-section; (5) to examine the radionuclide distributions; (6) to determine the Kr-85 release into the plenum; (7) to select and prepare FA rod specimens for destructive examinations; (8) to determine the radionuclide compositions, isotope masses and burnup in the rod specimens; and (9) to analyze, document and process the results. The specific work scope included the destructive assay (DA) of spent fuel assembly rod segments with a ~38.5 MWd/kgU burnup from a single VVER-440 fuel assembly from the Novovoronezh reactor in Russia. Based on irradiation history criteria, four rods from the fuel assembly were selected and removed from the assembly for examination. Next, 8 sections were cut from the four rods and sent for destructive analysis of radionuclides by radiochemical analyses. The results were documented in a series of seven reports over a period of ~1-1/2 years.

  12. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
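The mechanism this patent abstract describes (run a deterministic, compute-intensive workload that heats and stresses the processor, then compare its output against a reference run to expose any fault that occurred mid-run) can be caricatured in a few lines. The workload below is an assumed stand-in for illustration, not the patented algorithm:

```python
# Hedged sketch of heat-and-compare hardware error detection (illustrative only).

def stress_workload(seed: int, iterations: int = 100_000) -> int:
    """Deterministic integer churn; on correct hardware the result is fixed.
    The constants are an arbitrary 64-bit LCG-style mix, chosen for illustration."""
    state = seed
    mask = (1 << 64) - 1
    for _ in range(iterations):
        state = (state * 6364136223846793005 + 1442695040888963407) & mask
        state ^= state >> 33
    return state

def detect_hardware_error(seed: int = 42) -> bool:
    """Run the workload twice and compare; on real hardware the second run could
    instead be a stored reference output. Differing results flag a transient fault."""
    first = stress_workload(seed)
    second = stress_workload(seed)
    return first != second

print(detect_hardware_error())  # False on healthy (or, here, simulated) hardware
```

The key property, as in the patent, is that the comparison at the end of the run catches an error that occurred anywhere during the run, without per-instruction checking.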

  13. Trouble Shooting and Error Messages

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    ... Check the error code of your application. error obtaining user credentials system Resubmit. Contact consultants for repeated problems. nemgnierrorhandler(): a transaction error ...

  14. Monitoring, Controlling and Safeguarding Radiochemical Streams at Spent Fuel Reprocessing Facilities, Part 1: Optical Spectroscopic Methods

    SciTech Connect

    Bryan, Samuel A.; Levitskaia, Tatiana G.; Schwantes, Jon M.; Orton, Christopher R.; Peterson, James M.; Casella, Amanda J.

    2012-02-07

    Abstract: The International Atomic Energy Agency (IAEA) has established international safeguards standards for fissionable material at spent fuel reprocessing plants to ensure that significant quantities of weapons-useable nuclear material are not diverted from these facilities. For large throughput nuclear facilities, it is difficult to satisfy the IAEA safeguards accountancy goal for detection of abrupt diversion. Currently, methods to verify material control and accountancy (MC&A) at these facilities require time-consuming and resource-intensive destructive assay (DA). Leveraging new on-line non-destructive assay (NDA) process monitoring techniques in conjunction with the traditional and highly precise DA methods may provide an additional measure to nuclear material accountancy which would potentially result in a more timely, cost-effective and resource efficient means for safeguards verification at such facilities. By monitoring process control measurements (e.g. flowrates, temperatures, or concentrations of reagents, products or wastes), abnormal plant operations can be detected. Pacific Northwest National Laboratory (PNNL) is developing on-line NDA process monitoring technologies based upon gamma-ray and optical spectroscopic measurements to potentially reduce the time and resource burden associated with current techniques. The Multi-Isotope Process (MIP) Monitor uses gamma spectroscopy and multivariate analysis to identify off-normal conditions in process streams. The spectroscopic monitor continuously measures chemical compositions of the process streams including actinide metal ions (U, Pu, Np), selected fission products, and major stable flowsheet reagents using UV-Vis, Near IR and Raman spectroscopy. Multi-variate analysis is also applied to the optical measurements in order to quantify concentrations of analytes of interest within a complex array of radiochemical streams. 
This paper provides an overview of these methods and reports on ongoing efforts.

  15. Radiochemically-Supported Microbial Communities: A Potential Mechanism for Biocolloid Production of Importance to Actinide Transport

    SciTech Connect

    Moser, Duane P; Hamilton-Brehm, Scott D; Fisher, Jenny C; Bruckner, James C; Kruger, Brittany; Sackett, Joshua; Russell, Charles E; Onstott, Tullis C; Czerwinski, Ken; Zavarin, Mavrik; Campbell, James H

    2014-06-01

Due to the legacy of Cold War nuclear weapons testing, the Nevada National Security Site (NNSS, formerly known as the Nevada Test Site (NTS)) contains millions of curies of radioactive contamination. Presented here is a summary of the results of the first comprehensive study of subsurface microbial communities of radioactive and nonradioactive aquifers at this site. To achieve the objectives of this project, cooperative actions between the Desert Research Institute (DRI), the Nevada Field Office of the National Nuclear Security Administration (NNSA), the Underground Test Area Activity (UGTA), and contractors such as Navarro-Interra (NI) were required. Ultimately, fluids from 17 boreholes and two water-filled tunnels were sampled (sometimes on multiple occasions and from multiple depths) from the NNSS, the adjacent Nevada Test and Training Range (NTTR), and a reference hole in the Amargosa Valley near Death Valley. The sites sampled ranged from highly radioactive nuclear device test cavities to uncontaminated perched and regional aquifers. Specific areas sampled included recharge, intermediate, and discharge zones of a 100,000-km2 internally draining province, known as the Death Valley Regional Flow System (DVRFS), which encompasses the entirety of the NNSS/NTTR and surrounding areas. Specific geological features sampled included: West Pahute and Rainier Mesas (recharge zone), Yucca and Frenchman Flats (transitional zone), and the western edge of the Amargosa Valley near Death Valley (discharge zone). The original overarching question underlying the proposal supporting this work was stated as: Can radiochemically-produced substrates support indigenous microbial communities and subsequently stimulate biocolloid formation that can affect radionuclides in NNSS subsurface nuclear test/detonation sites?
Radioactive and non-radioactive groundwater samples were thus characterized for physical parameters, aqueous geochemistry, and microbial communities using both DNA- and

  16. runtime error message: "readControlMsg: System returned error...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

runtime error message: "readControlMsg: System returned error Connection timed out on TCP socket fd"

  17. Trouble Shooting and Error Messages

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    ... Check the error code of your application. error obtaining user credentials system Resubmit. Contact consultants for repeated problems. NERSC and Cray are working on this issue. ...

  18. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
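For context on the bit-replacement family of techniques this patent improves upon, here is a toy version that simply replaces the lowest-order bit of each 8-bit host sample, without the modular arithmetic, order permutation, or keyed retrieval the patent adds; all names and values are illustrative assumptions:

```python
# Illustrative low-order-bit embedding (the baseline the patent improves on).

def embed(host, bits):
    """Replace the least significant bit of each host sample with one auxiliary bit."""
    return [(h & ~1) | b for h, b in zip(host, bits)]

def extract(stego):
    """Recover the auxiliary bits from the low-order bits."""
    return [s & 1 for s in stego]

host = [200, 13, 97, 54, 128, 255, 0, 77]   # e.g. 8-bit image pixel values
bits = [1, 0, 1, 1, 0, 0, 1, 0]             # auxiliary payload
stego = embed(host, bits)

print(extract(stego) == bits)                          # True: payload recovered
print(max(abs(h - s) for h, s in zip(host, stego)))    # 1: per-sample distortion
```

Plain LSB replacement perturbs each sample by at most 1 but caps the payload at one bit per sample; the patent's modular method is aimed at doubling that capacity while reducing the introduced error.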

  19. Note: Radiochemical measurement of fuel and ablator areal densities in cryogenic implosions at the National Ignition Facility

    SciTech Connect

Hagmann, C.; Shaughnessy, D. A.; Moody, K. J.; Grant, P. M.; Gharibyan, N.; Gostic, J. M.; Wooddy, P. T.; Torretto, P. C.; Bandong, B. B.; Bionta, R.; Cerjan, C. J.; Bernstein, L. A.; Caggiano, J. A.; Sayre, D. B.; Schneider, D. H.; Henry, E. A.; Fortner, R. J.; Herrmann, H. W.; Knauer, J. P.

    2015-07-15

A new radiochemical method for determining deuterium-tritium (DT) fuel and plastic ablator (CH) areal densities (ρR) in high-convergence, cryogenic inertial confinement fusion implosions at the National Ignition Facility is described. It is based on measuring the 198Au/196Au activation ratio using the collected post-shot debris of the Au hohlraum. The Au ratio combined with the independently measured neutron down scatter ratio uniquely determines the areal densities ρR(DT) and ρR(CH) during burn in the context of a simple 1-dimensional capsule model. The results show larger than expected ρR(CH) values, hinting at the presence of cold fuel-ablator mix.

  20. Error 404 - Document not found

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

ERROR 404 - URL Not Found. We are sorry, but the URL that you have requested cannot be found, or it is linked to a file that no longer exists. Please check the spelling or...

  1. Trouble Shooting and Error Messages

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

Trouble Shooting and Error Messages. Error messages, listed as message or symptom (fault): recommendation. "job hit wallclock time limit" (user or system): submit the job for a longer time, or start the job from its last checkpoint and resubmit; if your job hung and produced no output, contact consultants. "received node failed or halted event for nid xxxx" (system): one of the compute nodes assigned to the job failed; resubmit the job. "error while loading shared libraries: libxxxx.so: cannot open"

  2. Trouble Shooting and Error Messages

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

Trouble Shooting and Error Messages. Error messages, listed as message or symptom (fault): recommendation. "job hit wallclock time limit" (user or system): submit the job for a longer time, or start the job from its last checkpoint and resubmit; if your job hung and produced no output, contact consultants. "received node failed or halted event for nid xxxx" (system): resubmit the job. "error with width parameters to aprun" (user): make sure the #PBS -l mppwidth value matches the aprun -n value. new values for

  3. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  4. Evaluating operating system vulnerability to memory errors.

    SciTech Connect

    Ferreira, Kurt Brian; Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke; Mueller, Frank; Fiala, David; Brightwell, Ronald Brian

    2012-05-01

    Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.
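The paper's closing observation (a small, largely deterministic memory footprint is easier to harden) can be illustrated with a toy detect-and-reconstruct scheme. The checksum guard and golden-copy repair below are assumptions for illustration, not what Kitten or CLE actually implement:

```python
# Toy soft-error detection and state reconstruction for a small, deterministic
# block of "kernel state" (illustrative sketch only).

MASK = (1 << 64) - 1

def checksum(words):
    """Simple additive checksum over 64-bit words."""
    return sum(words) & MASK

# Deterministic state plus a golden copy used for reconstruction
state = [(i * 2654435761) & MASK for i in range(64)]
golden = list(state)
good_sum = checksum(state)

# Inject a simulated soft error: flip bit 17 of word 5
state[5] ^= 1 << 17

detected = checksum(state) != good_sum
if detected:
    # Reconstruct by comparing against the known-good copy
    for i, (w, g) in enumerate(zip(state, golden)):
        if w != g:
            state[i] = g

print(detected, state == golden)   # True True
```

The smaller and more deterministic the protected region, the cheaper this kind of guard-and-rebuild becomes, which is the intuition behind the paper's finding that a lightweight kernel is an easier hardening target.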

  5. Register file soft error recovery

    SciTech Connect

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  6. The Radiochemical Analysis of Gaseous Samples (RAGS) apparatus for nuclear diagnostics at the National Ignition Facility (invited)

    SciTech Connect

    Shaughnessy, D. A.; Velsko, C. A.; Jedlovec, D. R.; Yeamans, C. B.; Moody, K. J.; Tereshatov, E.; Stoeffl, W.; Riddle, A.

    2012-10-15

    The Radiochemical Analysis of Gaseous Samples (RAGS) diagnostic apparatus was recently installed at the National Ignition Facility (NIF). Following a NIF shot, RAGS is used to pump the gas load from the NIF chamber for purification and isolation of the noble gases. After collection, the activated gaseous species are counted via gamma spectroscopy for measurement of the capsule areal density and fuel-ablator mix. Collection efficiency was determined by injecting a known amount of {sup 135}Xe into the NIF chamber, which was then collected with RAGS. Commissioning was performed with an exploding pusher capsule filled with isotopically enriched {sup 124}Xe and {sup 126}Xe added to the DT gas fill. Activated xenon species were recovered post-shot and counted via gamma spectroscopy. Results from the collection and commissioning tests are presented. The performance of RAGS allows us to establish a noble gas collection method for measurement of noble gas species produced via neutron and charged particle reactions in a NIF capsule.

  7. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.
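One simple way to attach an error to a limit, in the spirit of the abstract, is to bootstrap it: recompute the limit on resampled data and take the spread. The sketch below is ours, not the paper's method, and the Gaussian approximation for a 90% upper limit is an assumption of this sketch.

```python
# Illustrative sketch: the statistical error of a confidence limit,
# estimated by bootstrap resampling (assumption: Gaussian 90% upper limit).
import random
import statistics

def upper_limit_90(sample):
    """Approximate 90% CL upper limit on the mean (Gaussian approximation)."""
    n = len(sample)
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    return m + 1.282 * s / n ** 0.5

random.seed(1)
data = [random.gauss(5.0, 2.0) for _ in range(20)]   # limited statistics

limit = upper_limit_90(data)
# Bootstrap: recompute the limit on resampled data to estimate its error.
boot = [upper_limit_90([random.choice(data) for _ in data]) for _ in range(1000)]
limit_error = statistics.stdev(boot)
print(f"90% upper limit = {limit:.2f} +/- {limit_error:.2f}")
```

Quoting the limit together with `limit_error` is exactly the kind of "limit plus its error" reporting the abstract advocates.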

  8. Error studies for SNS Linac. Part 1: Transverse errors

    SciTech Connect

    Crandall, K.R.

    1998-12-31

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL), and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL, and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).
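Why a quad displacement steers the beam can be seen with a thin-lens model (our simplification, not part of the SNS study itself): a quadrupole kicks a particle in proportion to its distance from the magnet's center, so an offset magnetic center deflects even an on-axis beam.

```python
# Thin-lens sketch (assumption of this note): a transversely displaced
# quadrupole steers the beam centroid, the class of error the study
# attributes to quad displacements.

def thin_quad_kick(x, xp, strength, displacement=0.0):
    """Return (x, x') after a thin quadrupole of integrated strength
    `strength` [1/m] whose magnetic center is offset by `displacement` [m]."""
    return x, xp - strength * (x - displacement)

# An on-axis beam through a centered quad is untouched...
assert thin_quad_kick(0.0, 0.0, strength=2.0) == (0.0, 0.0)

# ...but a 100-micron quad displacement deflects the beam center.
x, xp = thin_quad_kick(0.0, 0.0, strength=2.0, displacement=100e-6)
print(xp)  # steering kick of strength * displacement = 2e-4 rad
```

A gradient error, by contrast, changes `strength` rather than `displacement`, altering the focusing and producing the mismatch errors the study analyzes separately.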

  9. Error 404 - Document not found

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    ERROR 404 - URL Not Found. We are sorry, but the URL that you have requested cannot be found, or it is linked to a file that no longer exists. Please check the spelling or send e-mail to the WWW Administrator.

  10. Monitoring, Controlling and Safeguarding Radiochemical Streams at Spent Fuel Reprocessing Facilities, Part 2: Gamma-Ray Spectroscopic Methods

    SciTech Connect

    Schwantes, Jon M.; Bryan, Samuel A.; Orton, Christopher R.; Levitskaia, Tatiana G.; Fraga, Carlos G.

    2012-02-10

    The International Atomic Energy Agency (IAEA) has established international safeguards standards for fissionable material at spent fuel reprocessing plants to ensure that significant quantities of weapons-useable nuclear material are not diverted from these facilities. For large throughput nuclear facilities, it is difficult to satisfy the IAEA safeguards accountancy goal for detection of abrupt diversion. Currently, methods to verify material control and accountancy (MC&A) at these facilities require time-consuming and resource-intensive destructive assay (DA). Leveraging new on-line non-destructive assay (NDA) process monitoring techniques in conjunction with the traditional and highly precise DA methods may provide an additional measure to nuclear material accountancy which would potentially result in a more timely, cost-effective and resource efficient means for safeguards verification at such facilities. By monitoring process control measurements (e.g. flowrates, temperatures, or concentrations of reagents, products or wastes), abnormal plant operations can be detected. Pacific Northwest National Laboratory (PNNL) is developing on-line NDA process monitoring technologies based upon gamma-ray and optical spectroscopic measurements to potentially reduce the time and resource burden associated with current techniques. The Multi-Isotope Process (MIP) Monitor uses gamma spectroscopy and multivariate analysis to identify off-normal conditions in process streams. The spectroscopic monitor continuously measures chemical compositions of the process streams including actinide metal ions (U, Pu, Np), selected fission products, and major stable flowsheet reagents using UV-Vis, Near IR and Raman spectroscopy. Multi-variate analysis is also applied to the optical measurements in order to quantify concentrations of analytes of interest within a complex array of radiochemical streams. 
This paper will provide an overview of these methods and reports on-going efforts to develop

  11. Monitoring, Controlling and Safeguarding Radiochemical Streams at Spent Fuel Reprocessing Facilities with Optical and Gamma-Ray Spectroscopic Methods

    SciTech Connect

    Schwantes, Jon M.; Bryan, Samuel A.; Orton, Christopher R.; Levitskaia, Tatiana G.; Fraga, Carlos G.

    2012-11-06

    The International Atomic Energy Agency (IAEA) has established international safeguards standards for fissionable material at spent fuel reprocessing plants to ensure that significant quantities of weapons-useable nuclear material are not diverted from these facilities. For large throughput nuclear facilities, it is difficult to satisfy the IAEA safeguards accountancy goal for detection of abrupt diversion. Currently, methods to verify material control and accountancy (MC&A) at these facilities require time-consuming and resource-intensive destructive assay (DA). Leveraging new on-line non-destructive assay (NDA) process monitoring techniques in conjunction with the traditional and highly precise DA methods may provide an additional measure to nuclear material accountancy which would potentially result in a more timely, cost-effective and resource efficient means for safeguards verification at such facilities. By monitoring process control measurements (e.g. flowrates, temperatures, or concentrations of reagents, products or wastes), abnormal plant operations can be detected. Pacific Northwest National Laboratory (PNNL) is developing on-line NDA process monitoring technologies based upon gamma-ray and optical spectroscopic measurements to potentially reduce the time and resource burden associated with current techniques. The Multi-Isotope Process (MIP) Monitor uses gamma spectroscopy and multivariate analysis to identify off-normal conditions in process streams. The spectroscopic monitor continuously measures chemical compositions of the process streams including actinide metal ions (U, Pu, Np), selected fission products, and major stable flowsheet reagents using UV-Vis, Near IR and Raman spectroscopy. Multi-variate analysis is also applied to the optical measurements in order to quantify concentrations of analytes of interest within a complex array of radiochemical streams. 
This paper will provide an overview of these methods and reports on-going efforts to develop

  12. Trouble Shooting and Error Messages

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Trouble Shooting and Error Messages. Error messages (message or symptom / fault / recommendation):
    - "job hit wallclock time limit" (user or system): Submit the job for a longer time, or start the job from the last checkpoint and resubmit. If your job hung and produced no output, contact the consultants.
    - "received node failed or halted event for nid xxxx" (system): One of the compute nodes assigned to the job failed. Resubmit the job.
    - "PtlNIInit failed : PTL_NOT_REGISTERED" (user): The executable is from

  13. error | netl.doe.gov

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    error Sorry, there is no www.netl.doe.gov web page that matches your request. It may be possible that you typed the address incorrectly. Connect to National Energy Technology...

  14. 324 Building radiochemical engineering cells, high-level vault, low-level vault, and associated areas closure plan

    SciTech Connect

    Barnett, J.M.

    1998-03-25

    The Hanford Site, located adjacent to and north of Richland, Washington, is operated by the US Department of Energy, Richland Operations Office (RL). The 324 Building is located in the 300 Area of the Hanford Site. The 324 Building was constructed in the 1960s to support materials and chemical process research and development activities ranging from laboratory/bench-scale studies to full engineering-scale pilot plant demonstrations. In the mid-1990s, it was determined that dangerous waste and waste residues were being stored for greater than 90 days in the 324 Building Radiochemical Engineering Cells (REC) and in the High-Level Vault/Low-Level Vault (HLV/LLV) tanks. [These areas are not Resource Conservation and Recovery Act of 1976 (RCRA) permitted portions of the 324 Building.] Through the Hanford Federal Facility Agreement and Consent Order (Tri-Party Agreement) Milestone M-89, agreement was reached to close the nonpermitted RCRA unit in the 324 Building. This closure plan, managed under TPA Milestone M-20-55, addresses the identified building areas targeted by the Tri-Party Agreement and provides commitments to achieve the highest degree of compliance practicable, given the special technical difficulties of managing mixed waste that contains high-activity radioactive materials, and the physical limitations of working remotely in the areas within the subject closure unit. This closure plan is divided into nine chapters. Chapter 1.0 provides the introduction, historical perspective, 324 Building history and current mission, and the regulatory basis and strategy for managing the closure unit. Chapters 2.0, 3.0, 4.0, and 5.0 discuss the detailed facility description, process information, waste characteristics, and groundwater monitoring respectively. Chapter 6.0 deals with the closure strategy and performance standard, including the closure activities for the B-Cell, D-Cell, HLV, LLV; piping and miscellaneous associated building areas. Chapter 7.0 addresses the

  15. Improving Memory Error Handling Using Linux

    SciTech Connect

    Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.

    2014-07-25

    As supercomputers continue to get faster and more powerful, they will also have more nodes. If nothing is done, the amount of memory in supercomputer clusters will soon grow large enough that memory failures will become unmanageable by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process-oriented method that addresses this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. Offlining memory pages simplifies error handling and reduces both the hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers: without memory error handling, it will not be feasible to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals that the process of offlining memory pages works and is relatively simple to use. As more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
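On Linux kernels built with memory-failure support, the offlining described above is exposed through sysfs: writing a physical address to /sys/devices/system/memory/soft_offline_page asks the kernel to migrate the page's contents and stop handing that page out. The sketch below is a minimal illustration, not LANL's tooling; the address used is hypothetical, and the operation requires root on a real system.

```python
# Minimal sketch of soft-offlining a suspect memory page via sysfs.
# Requires a kernel with CONFIG_MEMORY_FAILURE and root privileges.
import os

SOFT_OFFLINE = "/sys/devices/system/memory/soft_offline_page"

def soft_offline(phys_addr: int, ctl_path: str = SOFT_OFFLINE) -> bool:
    """Request soft-offlining of the page containing phys_addr."""
    try:
        with open(ctl_path, "w") as f:
            f.write(f"0x{phys_addr:x}\n")
        return True
    except OSError:
        return False  # kernel lacks support, bad address, or no permission

if __name__ == "__main__":
    addr = 0x12345000  # hypothetical faulty physical page address
    ok = soft_offline(addr)
    print("offlined" if ok else "offline request failed")
```

A monitoring system like the Zenoss integration mentioned above would feed addresses of pages with high corrected-error counts into a call like this instead of paging an operator to swap a DIMM.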

  16. Intel C++ compiler error: stl_iterator_base_types.h

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Intel C++ compiler error: stl_iterator_base_types.h December 7, 2015 by Scott French Because the system-supplied version of GCC is relatively old (4.3.4), it is common practice to load the gcc module on our Cray systems when C++11 support is required under the Intel C++ compilers. While this works as expected under the GCC 4.8 and 4.9 series compilers, the 5.x series can cause Intel C++ compile-time errors similar to the following:

  17. New Opportunity for Improved Nuclear Forensics, Radiochemical Diagnostics, and Nuclear Astrophysics: Need for a Total-Cross-Section Apparatus at the LANSCE

    SciTech Connect

    Koehler, Paul E.; Hayes-Sterbenz, Anna C.; Bredeweg, Todd Allen; Couture, Aaron J.; Engle, Jonathan; Keksis, August L.; Nortier, Francois M.; Ullmann, John L.

    2014-03-12

    Total-cross-section measurements are feasible on a much wider range of radioactive samples than (n,γ) cross-section measurements, and information extracted from the former can be used to set tight constraints on the latter. There are many (n,γ) cross sections of great interest to radiochemical diagnostics, nuclear forensics, and nuclear astrophysics that are beyond the reach of current direct measurement and that could be obtained in this way. Our simulations indicate that measurements can be made at the Manuel Lujan Jr. Neutron Scattering Center at the Los Alamos Neutron Science Center for samples as small as 10 µg. There are at least 40 high-interest nuclides which should be measurable, including 88Y; 167,168,170,171Tm; 173,174Lu; and 189,190,192Ir.

  18. Validation of Minor Actinide Cross Sections by Studying Samples Irradiated for 492 Days at the Dounreay Prototype Fast Reactor - I: Radiochemical Analysis

    SciTech Connect

    Shinohara, N. [Japan Atomic Energy Research Institute (Japan); Kohno, N. [Japan Atomic Energy Research Institute (Japan); Nakahara, Y. [Japan Atomic Energy Research Institute (Japan); Tsujimoto, K. [Japan Atomic Energy Research Institute (Japan); Sakurai, T. [Japan Atomic Energy Research Institute (Japan); Mukaiyama, T. [Japan Atomic Energy Research Institute (Japan); Raman, S. [Oak Ridge National Laboratory (United States)

    2003-06-15

    Actinide samples irradiated in the Dounreay Prototype Fast Reactor for 492 effective full-power days were analyzed at Japan Atomic Energy Research Institute by radiochemical methods to measure the isotopic compositions of the fission products (molybdenum, zirconium, and neodymium isotopes) and of the actinides (uranium, neptunium, plutonium, americium, curium, and californium isotopes). In this first of two companion papers, procedures used for chemical analyses and the analyzed data are presented. There is good agreement between the current results and previous results obtained at Oak Ridge National Laboratory. Therefore, these analytical results could serve as a benchmark for future calculations and validation of nuclear data libraries. Such a validation is attempted in the companion paper.

  19. Trace rare earth element analysis of IAEA hair (HH-1), animal bone (H-5) and other biological standards by radiochemical neutron activation

    SciTech Connect

    Lepel, E.A.; Laul, J.C.

    1986-03-01

    A radiochemical neutron activation analysis using a rare earth group separation scheme has been used to measure ultratrace levels of rare earth elements (REE) in IAEA Human Hair (HH-1), IAEA Animal Bone (H-5), NBS Bovine Liver (SRM 1577), and NBS Orchard Leaf (SRM 1571) standards. The REE concentrations in Human Hair and Animal Bone range from 10{sup -8} g/g to 10{sup -11} g/g, and their chondritic normalized REE patterns show a negative Eu anomaly and follow as a smooth function of the REE ionic radii. The REE patterns for NBS Bovine Liver and Orchard Leaf are identical except that their concentrations are higher. The similarity among the REE patterns suggests that the REE are not fractionated during the intake of biological materials by animals or humans. 14 refs., 3 figs., 2 tabs.

  20. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    SciTech Connect

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work derived the reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location, so the accuracy of the slope error measurement depended on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  1. Catastrophic photometric redshift errors: Weak-lensing survey requirements

    DOE PAGES [OSTI]

    Bernstein, Gary; Huterer, Dragan

    2010-01-11

    We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number Nspec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below statistical errors of weak lensing tomography. While the straightforward estimate of Nspec is ~10{sup 6}, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in Nspec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the zs – zp distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. In conclusion, the cross-correlation method is therefore unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.

  2. Field errors in hybrid insertion devices

    SciTech Connect

    Schlueter, R.D.

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  3. Clover: Compiler directed lightweight soft error resilience

    SciTech Connect

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idem-potent processing without explicit checkpoint. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experiment results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  4. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES [OSTI]

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idem-potent processing without explicit checkpoint. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experiment results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  5. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
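The patent's core idea, computing the error of a constrained minimization from a subset of rays rather than all of them, can be sketched in a few lines. The following is an illustrative toy (our construction, not the patented implementation): it minimizes ||Ax - b||² with a conjugate-gradient-style iteration in which the error and its gradient are evaluated on a random subset of "rays" (rows of A).

```python
# Illustrative sketch only: conjugate-gradient minimization where the
# error calculation uses a random subset of rays (rows of A).
import random

def residuals(A, b, x, rows):
    return [sum(a * xi for a, xi in zip(A[i], x)) - b[i] for i in rows]

def approx_error(A, b, x, rows):
    """Sum of squared residuals over a subset of rays only."""
    return sum(r * r for r in residuals(A, b, x, rows))

def subset_cg(A, b, x, n_iters=80, subset_size=3, seed=1):
    rng = random.Random(seed)
    m, n = len(A), len(x)
    g_prev, d = None, None
    for _ in range(n_iters):
        rows = rng.sample(range(m), subset_size)    # pick a subset of rays
        r = residuals(A, b, x, rows)
        g = [2 * sum(rk * A[i][j] for rk, i in zip(r, rows)) for j in range(n)]
        if d is None:
            d = [-gj for gj in g]
        else:
            gp2 = sum(a * a for a in g_prev)
            beta = sum(a * a for a in g) / gp2 if gp2 > 0 else 0.0  # Fletcher-Reeves
            d = [-gj + beta * dj for gj, dj in zip(g, d)]
        # Minimize along the conjugate direction using the approximate error:
        e0, step = approx_error(A, b, x, rows), 1.0
        while step > 1e-12:
            x_new = [xi + step * di for xi, di in zip(x, d)]
            if approx_error(A, b, x_new, rows) < e0:
                x = x_new
                break
            step *= 0.5
        g_prev = g
    return x

A = [[1, 0], [0, 1], [1, 1], [1, -1], [2, 1], [1, 2]]
x_true = [1.0, 2.0]
b = [sum(a * xi for a, xi in zip(row, x_true)) for row in A]
x = subset_cg(A, b, [0.0, 0.0])
print(x)  # should approach [1.0, 2.0]
```

Evaluating the error on a subset is what makes this attractive for ray-based problems (e.g., tomographic reconstruction), where summing over all rays at every line-search step dominates the cost.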

  6. Adjoint Error Estimation for Linear Advection

    SciTech Connect

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
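The error representation at the heart of such adjoint methods has a compact generic form (the notation below is ours and may differ in detail from the paper's exact formula). For a linear problem Lu = f, a finite volume approximation u_h, a linear measurement ψ(u) = (g, u), and the adjoint solution φ of L*φ = g:

```latex
% Generic a posteriori adjoint error representation (standard form;
% notation is ours, not necessarily the paper's exact formula).
\psi(u) - \psi(u_h)
  = (g,\, u - u_h)
  = (L^{*}\phi,\, u - u_h)
  = (\phi,\, L(u - u_h))
  = (\phi,\, f - L u_h)
```

The error in the measurement thus equals the adjoint solution paired with the computable residual f - L u_h, which is why, as the abstract notes, a sufficiently well resolved approximation of φ lets the error be computed to arbitrary accuracy.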

  7. Impact of Measurement Error on Synchrophasor Applications

    SciTech Connect

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  8. Error handling strategies in multiphase inverse modeling

    SciTech Connect

    Finsterle, S.; Zhang, Y.

    2010-12-01

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
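One common strategy for the non-normal error structures described above is to replace the quadratic loss of least squares with a robust loss so that gross outliers stop dominating the objective. The sketch below illustrates the idea with a Huber loss; it is a generic example, not iTOUGH2's actual implementation.

```python
# Generic sketch of robust error handling: Huber loss vs. least squares.
# Not iTOUGH2 code; the delta threshold is an illustrative choice.

def huber(residual: float, delta: float = 1.0) -> float:
    a = abs(residual)
    if a <= delta:
        return 0.5 * a * a               # quadratic near zero (least squares)
    return delta * (a - 0.5 * delta)     # linear growth for outliers

residuals = [0.1, -0.3, 0.2, 8.0]        # one gross (systematic) outlier
lsq = sum(0.5 * r * r for r in residuals)
robust = sum(huber(r) for r in residuals)
print(lsq, robust)  # the outlier dominates lsq far more than the robust sum
```

Because the robust objective grows only linearly in the outlier, the parameter estimates are pulled much less toward it, which is the bias-mitigation effect the abstract describes.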

  9. Group representations, error bases and quantum codes

    SciTech Connect

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.

  10. Linux Kernel Error Detection and Correction

    Energy Science and Technology Software Center

    2007-04-11

    EDAC-utils consists of a library and a set of utilities for retrieving statistics from the Linux Kernel Error Detection and Correction (EDAC) drivers.

  11. Reducing collective quantum state rotation errors with reversible dephasing

    SciTech Connect

    Cox, Kevin C.; Norcia, Matthew A.; Weiner, Joshua M.; Bohnet, Justin G.; Thompson, James K.

    2014-12-29

    We demonstrate that reversible dephasing via inhomogeneous broadening can greatly reduce collective quantum state rotation errors, and observe the suppression of rotation errors by more than 21 dB in the context of collective population measurements of the spin states of an ensemble of 2.1 x 10{sup 5} laser cooled and trapped {sup 87}Rb atoms. The large reduction in rotation noise enables direct resolution of spin state populations 13(1) dB below the fundamental quantum projection noise limit. Further, the spin state measurement projects the system into an entangled state with 9.5(5) dB of directly observed spectroscopic enhancement (squeezing) relative to the standard quantum limit, whereas no enhancement would have been obtained without the suppression of rotation errors.

  12. runtime error message: "readControlMsg: System returned error Connection

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    timed out on TCP socket fd" June 30, 2015 Symptom: User jobs with single or multiple apruns in a batch script may get this runtime error: "readControlMsg: System returned error Connection timed out on TCP socket fd". This problem is intermittent; sometimes resubmitting works.

  13. Chemical and Radiochemical Composition of Thermally Stabilized Plutonium Oxide from the Plutonium Finishing Plant Considered as Alternate Feedstock for the Mixed Oxide Fuel Fabrication Facility

    SciTech Connect

    Tingey, Joel M.; Jones, Susan A.

    2005-07-01

    Eighteen plutonium oxide samples originating from the Plutonium Finishing Plant (PFP) on the Hanford Site were analyzed to provide additional data on the suitability of PFP thermally stabilized plutonium oxides and Rocky Flats oxides as alternate feedstock to the Mixed Oxide Fuel Fabrication Facility (MFFF). Radiochemical and chemical analyses were performed on fusions, acid leaches, and water leaches of these 18 samples. The results from these destructive analyses were compared with nondestructive analyses (NDA) performed at PFP and the acceptance criteria for the alternate feedstock. The plutonium oxide materials considered as alternate feedstock at Hanford originated from several different sources including Rocky Flats oxide, scrap from the Remote Mechanical C-Line (RMC) and the Plutonium Reclamation Facility (PRF), and materials from other plutonium conversion processes at Hanford. These materials were received at PFP as metals, oxides, and solutions. All of the material considered as alternate feedstock was converted to PuO2 and thermally stabilized by heating the PuO2 powder at 950 C in an oxidizing environment. The two samples from solutions were converted to PuO2 by precipitation with Mg(OH)2. The 18 plutonium oxide samples were grouped into four categories based on their origin. The Rocky Flats oxide was divided into two categories, low- and high-chloride Rocky Flats oxides. The other two categories were PRF/RMC scrap oxides, which included scrap from both process lines and oxides produced from solutions. The two solution samples came from samples that were being tested at Pacific Northwest National Laboratory because all of the plutonium oxide from solutions at PFP had already been processed and placed in 3013 containers. These samples originated at the PFP and are from plutonium nitrate product and double-pass filtrate solutions after they had been thermally stabilized. 
The other 16 samples originated from thermal stabilization batches before canning at

  14. Wind Power Forecasting Error Distributions over Multiple Timescales (Presentation)

    SciTech Connect

    Hodge, B. M.; Milligan, M.

    2011-07-01

    This presentation presents some statistical analysis of wind power forecast errors and error distributions, with examples using ERCOT data.
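The kind of analysis summarized above can be reproduced in miniature: compute forecast errors at two horizons and compare their distributions. The data below are synthetic (the ERCOT data are not reproduced), and the error magnitudes are illustrative assumptions.

```python
# Sketch: wind power forecast error distributions at two timescales,
# using synthetic data with hedged, illustrative error magnitudes.
import random
import statistics

random.seed(0)
actual = [50 + 10 * random.gauss(0, 1) for _ in range(500)]
# Hypothetical forecasts: short-horizon errors are tighter than day-ahead.
hour_ahead = [a + random.gauss(0, 2) for a in actual]
day_ahead = [a + random.gauss(0, 6) for a in actual]

for name, fc in [("hour-ahead", hour_ahead), ("day-ahead", day_ahead)]:
    err = [f - a for f, a in zip(fc, actual)]
    print(name, "bias=%.2f stdev=%.2f" % (statistics.mean(err), statistics.stdev(err)))
```

Real forecast error distributions are typically fatter-tailed than the Gaussian used here, which is one reason such multi-timescale empirical studies matter for reserve planning.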

  15. Error recovery to enable error-free message transfer between nodes of a computer network

    DOEpatents

    Blumrich, Matthias A.; Coteus, Paul W.; Chen, Dong; Gara, Alan; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd; Steinmacher-Burow, Burkhard; Vranas, Pavlos M.

    2016-01-26

    An error-recovery method to enable error-free message transfer between nodes of a computer network. A first node of the network sends a packet to a second node of the network over a link between the nodes, and the first node keeps a copy of the packet on a sending end of the link until the first node receives acknowledgment from the second node that the packet was received without error. The second node tests the packet to determine if the packet is error free. If the packet is not error free, the second node sets a flag to mark the packet as corrupt. The second node returns acknowledgement to the first node specifying whether the packet was received with or without error. When the packet is received with error, the link is returned to a known state and the packet is sent again to the second node.
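The acknowledgment-and-retransmit scheme described in this abstract can be sketched as a simple stop-and-wait loop. All names here are illustrative (the `crc_ok` stand-in is not the patent's actual integrity test):

```python
import random

def crc_ok(error_prob):
    """Stand-in for the receiver's integrity test (e.g., a CRC check)."""
    return random.random() >= error_prob

def send_with_retry(packet, error_prob=0.3, max_tries=100):
    """The sender retains a copy of the packet; on a negative
    acknowledgement the link is returned to a known state and the same
    copy is resent, until the receiver reports an error-free packet."""
    copy = packet  # sender-side retention buffer
    for attempt in range(1, max_tries + 1):
        if crc_ok(error_prob):      # receiver tests the packet
            return attempt          # positive ack: copy may be discarded
        # negative ack: reset link state and retransmit `copy`
    raise RuntimeError("link failed after %d tries" % max_tries)
```

The key property is that the sender never discards its copy until a positive acknowledgment arrives, so a corrupted transfer is always recoverable from the sending end.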

  16. Quantum error-correcting codes and devices

    DOEpatents

    Gottesman, Daniel

    2000-10-03

    A method of forming quantum error-correcting codes by first forming a stabilizer for a Hilbert space. A quantum information processing device can be formed to implement such quantum codes.

  17. Parameters and error of a theoretical model

    SciTech Connect

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs.
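A minimal illustration of this kind of procedure: simulate data from a known model with a random-number generator, adjust the parameters by maximum likelihood (least squares for Gaussian errors), and deduce an error figure from the residuals. The model, numbers, and error definition below are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated "experimental" data generated from a known linear model.
true_a, true_b, sigma = 2.0, -1.0, 0.5
x = np.linspace(0.0, 10.0, 50)
y = true_a * x + true_b + rng.normal(0.0, sigma, x.size)

# For Gaussian errors the maximum-likelihood adjustment is least squares.
X = np.column_stack([x, np.ones_like(x)])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ theta

# A common definition of model error: rms residual, with the number of
# adjusted parameters removed from the degrees of freedom.
dof = x.size - X.shape[1]
model_error = float(np.sqrt(resid @ resid / dof))
param_se = np.sqrt(np.diag(model_error**2 * np.linalg.inv(X.T @ X)))
```

With the simulated data, `theta` recovers the true parameters to within `param_se`, and `model_error` recovers the noise level used in the simulation.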

  18. Thermal Hydraulic Simulations, Error Estimation and Parameter

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Thermal Hydraulic Simulations, Error Estimation and Parameter Sensitivity Studies in Drekar::CFD. Thomas M. Smith, John N. Shadid, Roger P. Pawlowski, Eric C. Cyr and Timothy M. Wildey, Sandia National Laboratories, September 2013. CASL-U-2013-0203-001.

  19. The role of variation, error, and complexity in manufacturing defects

    SciTech Connect

    Hinckley, C.M.; Barkan, P.

    1994-03-01

    Variation in component properties and dimensions is a widely recognized factor in product defects which can be quantified and controlled by Statistical Process Control methodologies. Our studies have shown, however, that traditional statistical methods are ineffective in characterizing and controlling defects caused by error. The distinction between error and variation becomes increasingly important as the target defect rates approach extremely low values. Motorola data substantiates our thesis that defect rates in the range of several parts per million can only be achieved when traditional methods for controlling variation are combined with methods that specifically focus on eliminating defects due to error. Complexity in the product design, manufacturing processes, or assembly increases the likelihood of defects due to both variation and error. Thus complexity is also a root cause of defects. Until now, the absence of a sound correlation between defects and complexity has obscured the importance of this relationship. We have shown that assembly complexity can be quantified using Design for Assembly (DFA) analysis. High levels of correlation have been found between our complexity measures and defect data covering tens of millions of assembly operations in two widely different industries. The availability of an easily determined measure of complexity, combined with these correlations, permits rapid estimation of the relative defect rates for alternate design concepts. This should prove to be a powerful tool since it can guide design improvement at an early stage when concepts are most readily modified.

  20. Verification of unfold error estimates in the unfold operator code

    SciTech Connect

    Fehl, D.L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
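The verification idea reads, in outline, like the following toy linear unfold: compare the error-matrix ("built-in") uncertainty against the spread of unfolds over many Gaussian-perturbed data sets. The response matrix and spectrum here are invented, not the paper's blackbody problem:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy unfold: data d = G @ s with overlapping response functions G.
G = np.array([[1.0, 0.5, 0.1],
              [0.4, 1.0, 0.4],
              [0.1, 0.5, 1.0]])
s_true = np.array([3.0, 2.0, 1.0])
d0 = G @ s_true
rel = 0.05                          # assumed 5% (1 sigma) data imprecision
C_d = np.diag((rel * d0) ** 2)      # data covariance

# "Built-in" estimate from the error matrix: C_s = G^-1 C_d G^-T.
G_inv = np.linalg.inv(G)
builtin_sigma = np.sqrt(np.diag(G_inv @ C_d @ G_inv.T))

# Monte Carlo estimate: 100 data sets with Gaussian deviates, unfold
# each, and take the spread of the unfolded spectra.
samples = np.array([G_inv @ (d0 + rng.normal(0.0, rel * d0))
                    for _ in range(100)])
mc_sigma = samples.std(axis=0, ddof=1)
```

With only 100 samples the Monte Carlo spread matches the error-matrix estimate to within the sample-size statistical resolution, which mirrors the agreement reported in the abstract.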

  1. Error and uncertainty in Raman thermal conductivity measurements

    DOE PAGES [OSTI]

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  2. Error and uncertainty in Raman thermal conductivity measurements

    SciTech Connect

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  3. WIPP Weatherization: Common Errors and Innovative Solutions Presentation

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    This presentation contains information on WIPP Weatherization: Common Errors and Innovative Solutions (3.58 MB). More Documents & Publications: Common Errors and Innovative Solutions Transcript; Building America Best Practices Series, Vol. 10 - Retrofit Techniques

  4. Neutron multiplication error in TRU waste measurements

    SciTech Connect

    Veilleux, John [Los Alamos National Laboratory; Stanfield, Sean B [CCP; Wachter, Joe [CCP; Ceo, Bob [CCP

    2009-01-01

    Total Measurement Uncertainty (TMU) in neutron assays of transuranic waste (TRU) comprises several components, including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons-grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. Although the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors, measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. 
This report will attempt to better define the error term due to neutron multiplication and arrive at values that are
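The acceptance criterion quoted above (measured FGE plus twice the TMU must stay under the packaging limit) reduces to a one-line check. The limits come from the abstract; the function and parameter names are ours:

```python
# Packaging limits in Fissile Gram Equivalent, from the abstract:
# 200 g FGE for TRU waste in 55-gal drums, 325 g FGE for boxed waste.
FGE_LIMITS = {"drum": 200.0, "box": 325.0}

def tru_waste_accepted(fge_grams, tmu_grams, package="drum"):
    """A container passes only if FGE + 2 * TMU stays under the limit,
    which is why large neutron-multiplication errors drive rejections."""
    return fge_grams + 2.0 * tmu_grams < FGE_LIMITS[package]
```

For example (hypothetical numbers), a drum with 140 g of measured FGE is rejected as soon as its TMU reaches 30 g, since 140 + 2 × 30 = 200 hits the limit.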

  5. Superdense coding interleaved with forward error correction

    DOE PAGES [OSTI]

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
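A minimal block interleaver illustrates why interleaving the FEC codewords helps against bursts: codewords are written row-by-row and transmitted column-by-column, so a channel burst is spread across many codewords, each of which then sees few enough errors to correct. This sketch is generic, not the paper's FEC pipeline:

```python
def interleave(symbols, width):
    """Write `symbols` into rows of length `width` (one codeword per
    row), then read the table column-by-column for transmission."""
    assert len(symbols) % width == 0
    rows = [symbols[i:i + width] for i in range(0, len(symbols), width)]
    return [row[c] for c in range(width) for row in rows]

def deinterleave(symbols, width):
    """Invert interleave(): reassemble the rows from the column stream."""
    nrows = len(symbols) // width
    cols = [symbols[c * nrows:(c + 1) * nrows] for c in range(width)]
    return [cols[c][r] for r in range(nrows) for c in range(width)]

# Three 4-symbol codewords; a burst hits 3 consecutive channel symbols.
tx = interleave(list(range(12)), width=4)
burst_hits = tx[0:3]   # symbols 0, 4, 8: one from each codeword
```

After deinterleaving, the three burst-corrupted symbols land in three different codewords instead of wiping out a single one.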

  6. Laser Phase Errors in Seeded FELs

    SciTech Connect

    Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC

    2012-03-28

    Harmonic seeding of free electron lasers has attracted significant attention owing to the promise of transform-limited pulses in the soft X-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.

  7. Accounting for Model Error in the Calibration of Physical Models...

    Office of Scientific and Technical Information (OSTI)

    Accounting for Model Error in the Calibration of Physical Models. Citation Details In-Document Search Title: Accounting for Model Error in the Calibration of Physical Models. ...

  8. A posteriori error analysis of parameterized linear systems using...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: A posteriori error analysis of parameterized linear systems using spectral methods. Citation Details In-Document Search Title: A posteriori error analysis of ...

  9. Intel C++ compiler error: stl_iterator_base_types.h

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Intel C++ compiler error: stl_iterator_base_types.h. December 7, 2015 by Scott French. Because the system-supplied version of GCC is...

  10. Error estimates for fission neutron outputs (Conference) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    Error estimates for fission neutron outputs Citation Details In-Document Search Title: Error estimates for fission neutron outputs You are accessing a document from the...

  11. Internal compiler error for function pointer with identically...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Internal compiler error for function pointer with identically named arguments. June 9, 2015 by Scott...

  12. Error Estimation for Fault Tolerance in Numerical Integration...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Error Estimation for Fault Tolerance in Numerical Integration Solvers Event Sponsor: ... In numerical integration solvers, approximation error can be estimated at a low cost. We ...

  13. Error Analysis in Nuclear Density Functional Theory (Journal...

    Office of Scientific and Technical Information (OSTI)

    Error Analysis in Nuclear Density Functional Theory Citation Details In-Document Search Title: Error Analysis in Nuclear Density Functional Theory Authors: Schunck, N ; McDonnell,...

  14. Error Analysis in Nuclear Density Functional Theory (Journal...

    Office of Scientific and Technical Information (OSTI)

    Error Analysis in Nuclear Density Functional Theory Citation Details In-Document Search Title: Error Analysis in Nuclear Density Functional Theory You are accessing a document...

  15. Raman Thermometry: Comparing Methods to Minimize Error. (Conference...

    Office of Scientific and Technical Information (OSTI)

    Raman Thermometry: Comparing Methods to Minimize Error. Citation Details In-Document Search Title: Raman Thermometry: Comparing Methods to Minimize Error. Abstract not provided....

  16. Common Errors and Innovative Solutions Transcript

    Energy.gov [DOE]

    A set of case studies, presented mainly through photos of errors and of good examples, followed by a discussion of the purpose of the home energy professional guidelines and certification. There may be more examples of what not to do, simply because these made for good learning opportunities.

  17. Distribution of Wind Power Forecasting Errors from Operational Systems (Presentation)

    SciTech Connect

    Hodge, B. M.; Ela, E.; Milligan, M.

    2011-10-01

    This presentation offers new data and statistical analysis of wind power forecasting errors in operational systems.

  18. Analysis of Solar Two Heliostat Tracking Error Sources

    SciTech Connect

    Jones, S.A.; Stone, K.W.

    1999-01-28

    This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.

  19. Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results

    SciTech Connect

    Clark, E.L.

    1994-07-01

    Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M{infinity}, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
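The Taylor-series model behind such equations can be sketched numerically: propagate input standard deviations through first-order sensitivity coefficients, here approximated by central differences. The pressure-ratio example and all numbers below are ours:

```python
import math

def propagate(f, values, sigmas, h=1e-6):
    """First-order Taylor-series error propagation. The partial
    derivatives (sensitivity coefficients) are taken by central
    differences; the inputs are assumed independent."""
    var = 0.0
    for i, v in enumerate(values):
        step = h * max(abs(v), 1.0)
        up, dn = list(values), list(values)
        up[i], dn[i] = v + step, v - step
        dfdx = (f(*up) - f(*dn)) / (2.0 * step)   # sensitivity coefficient
        var += (dfdx * sigmas[i]) ** 2
    return math.sqrt(var)

# Pressure ratio p / p_inf with 1% (1 sigma) uncertainty on each input.
sigma_R = propagate(lambda p, p_inf: p / p_inf, [50.0, 100.0], [0.5, 1.0])
```

Two independent 1% inputs give roughly a 1.41% relative uncertainty on the ratio, i.e. the familiar root-sum-square of relative errors for a quotient.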

  20. Errors in response calculations for beams

    SciTech Connect

    Wada, H.; Warburton, G.B.

    1985-05-01

    When the finite element method is used to idealize a structure, its dynamic response can be determined from the governing matrix equation by the normal mode method or by one of the many approximate direct integration methods. In either method the approximate data of the finite element idealization are used, but further assumptions are introduced by the direct integration scheme. It is the purpose of this paper to study these errors for a simple structure. The transient flexural vibrations of a uniform cantilever beam, which is subjected to a transverse force at the free end, are determined by the Laplace transform method. Comparable responses are obtained for a finite element idealization of the beam, using the normal mode and Newmark average acceleration methods; the errors associated with the approximate methods are studied. If accuracy has priority and the quantity of data is small, the normal mode method is recommended; however, if the quantity of data is large, the Newmark method is useful.

  1. Detecting Soft Errors in Stencil based Computations

    SciTech Connect

    Sharma, V.; Gopalakrishnan, G.; Bronevetsky, G.

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
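The core idea (fit a cheap linear model of the stencil update, then flag updates with large prediction residuals) fits in a few lines. The 1-D heat stencil below is our toy example, not SORREL itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(u, k=0.25):
    """One explicit 1-D heat-equation stencil update (interior points)."""
    v = u.copy()
    v[1:-1] = u[1:-1] + k * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    return v

def features(u):
    """3-point neighborhoods of the interior cells."""
    return np.column_stack([u[:-2], u[1:-1], u[2:]])

# Train: linear regression predicting each updated interior value
# from its neighborhood (computationally inexpensive detector model).
u_train = rng.random(200)
w, *_ = np.linalg.lstsq(features(u_train), step(u_train)[1:-1], rcond=None)

# Detect: inject a silent corruption into a fresh update and look
# for the largest prediction residual.
u = rng.random(200)
v = step(u)
v[77] += 0.5                          # simulated soft error (bit-flip-like)
resid = np.abs(features(u) @ w - v[1:-1])
flagged = int(np.argmax(resid)) + 1   # map interior index back to the grid
```

Because this toy stencil is exactly linear, the regression recovers the update weights and the corrupted cell stands out; real detectors must trade a residual threshold against false alarms, as the abstract notes.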

  2. Redundancy and Error Resilience in Boolean Networks

    SciTech Connect

    Peixoto, Tiago P.

    2010-01-29

    We consider the effect of noise in sparse Boolean networks with redundant functions. We show that they always exhibit a nonzero error level, and the dynamics undergoes a phase transition from nonergodicity to ergodicity, as a function of noise, after which the system is no longer capable of preserving a memory of its initial state. We obtain upper bounds on the critical value of noise for networks of different sparsity.

  3. Systematic errors in long baseline oscillation experiments

    SciTech Connect

    Harris, Deborah A.; /Fermilab

    2006-02-01

    This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

  4. Shared dosimetry error in epidemiological dose-response analyses

    SciTech Connect

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo

    2015-03-23

    Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013 which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it was true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e. the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased the information matrix (and hence the standard errors of the estimate of β) is biased for β≠0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies including, the Mayak Worker Cohort, and the U.S. Atomic Veterans Study, is discussed.

  5. Shared dosimetry error in epidemiological dose-response analyses

    DOE PAGES [OSTI]

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo

    2015-03-23

    Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013 which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it was true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e. the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased the information matrix (and hence the standard errors of the estimate of β) is biased for β≠0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies including, the Mayak Worker Cohort, and the U.S. Atomic Veterans Study, is discussed.

  6. Fractional charge and spin errors in self-consistent Green's function theory

    SciTech Connect

    Phillips, Jordan J. Kananenka, Alexei A.; Zgid, Dominika

    2015-05-21

    We examine fractional charge and spin errors in self-consistent Green's function theory within a second-order approximation (GF2). For GF2, it is known that the summation of diagrams resulting from the self-consistent solution of the Dyson equation removes the divergences pathological to second-order Møller-Plesset (MP2) theory for strong correlations. In the language often used in density functional theory contexts, this means GF2 has a greatly reduced fractional spin error relative to MP2. The natural question then is what effect, if any, does the Dyson summation have on the fractional charge error in GF2? To this end, we generalize our previous implementation of GF2 to open-shell systems and analyze its fractional spin and charge errors. We find that like MP2, GF2 possesses only a very small fractional charge error, and consequently minimal many-electron self-interaction error. This shows that GF2 improves on the critical failings of MP2, but without altering the positive features that make it desirable. Furthermore, we find that GF2 has both less fractional charge and fractional spin errors than typical hybrid density functionals as well as random phase approximation with exchange.

  7. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    SciTech Connect

    Kertzscher, Gustavo Andersen, Claus E.; Tanderup, Kari

    2014-05-15

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was

  8. Error Reduction for Weigh-In-Motion

    SciTech Connect

    Hively, Lee M; Abercrombie, Robert K; Scudiere, Matthew B; Sheldon, Frederick T

    2009-01-01

    Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).
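The kind of oscillation removal described here can be illustrated with a generic average taken over whole bounce periods, which cancels the oscillatory term and leaves an estimate limited only by sensor noise. The signal and all values below are synthetic; ORNL's actual filter is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic weigh-in-motion trace: constant true axle weight plus a
# 3 Hz bounce oscillation and sensor noise (all values illustrative).
true_w = 8000.0                               # kg
t = np.linspace(0.0, 1.0, 500)
dt = t[1] - t[0]
signal = (true_w
          + 400.0 * np.sin(2.0 * np.pi * 3.0 * t)
          + rng.normal(0.0, 20.0, t.size))

# Averaging over an integer number of bounce periods cancels the
# oscillation term, leaving an estimate dominated by sensor noise.
period = int(round((1.0 / 3.0) / dt))
window = 3 * period
estimate = signal[:window].mean()
rel_err = abs(estimate - true_w) / true_w
```

On this synthetic trace the filtered estimate lands well inside the 0.1% error band quoted in the abstract, even though the raw signal swings by hundreds of kilograms.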

  9. Error Reduction in Weigh-In-Motion

    Energy Science and Technology Software Center

    2007-09-21

    Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).

  10. Resolved: "error while loading shared libraries: libalpslli.so...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    "error while loading shared libraries: libalpslli.so.0" with serial codes on login nodes Resolved: "error while loading shared libraries: libalpslli.so.0" with serial codes on...

  11. MPI errors from cray-mpich/7.3.0

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    MPI errors from cray-mpich/7.3.0. January 6, 2016 by Ankit Bhagatwala. A change in the MPICH2 library that now strictly enforces non-overlapping...

  12. Verification of unfold error estimates in the UFO code

    SciTech Connect

    Fehl, D.L.; Biggs, F.

    1996-07-01

    Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
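The Monte Carlo uncertainty estimate described above can be sketched in a few lines. The response functions, spectrum, and regularized least-squares "unfold" below are toy stand-ins, not the UFO code; only the 5% Gaussian imprecision and the 100 data sets are taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumed): 5 overlapping Gaussian response functions on a
# 20-point energy grid, and an arbitrary "true" spectrum.
E = np.linspace(0.0, 10.0, 20)
centers = np.linspace(1.0, 9.0, 5)
R = np.exp(-0.5 * ((E[None, :] - centers[:, None]) / 2.0) ** 2)  # shape (5, 20)
true_spec = np.exp(-E / 3.0)
data = R @ true_spec                       # simulated measurements

# Monte Carlo error estimate: redo the unfold for 100 data sets drawn with
# a 5% (1-sigma) Gaussian imprecision, then take the spread per bin.
trials = []
for _ in range(100):
    noisy = data * (1.0 + 0.05 * rng.standard_normal(data.shape))
    # a regularized least-squares solve stands in for the real unfold step
    sol = np.linalg.solve(R.T @ R + 1e-3 * np.eye(20), R.T @ noisy)
    trials.append(sol)
trials = np.array(trials)
mc_uncertainty = trials.std(axis=0)        # per-bin unfold error estimate
print(mc_uncertainty.shape)
```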

  13. Pressure Change Measurement Leak Testing Errors

    SciTech Connect

    Pryor, Jeff M; Walker, William C

    2014-01-01

    A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rate with altered formulas specific to those types of tests, using the same methodology.
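One common error of the kind the abstract alludes to, ignoring temperature change during the hold period, can be illustrated with the ideal gas law n = PV/RT. The volume, pressure, and temperature readings below are invented for the sketch:

```python
R_GAS = 8.314        # gas constant, J/(mol K)
V = 0.50             # test volume in m^3 (assumed)

def moles(p_pa, t_k):
    """Ideal-gas inventory n = PV/RT for the fixed test volume."""
    return p_pa * V / (R_GAS * t_k)

# Invented readings: the system cooled by 2 K during the hold period.
p1, t1 = 500_000.0, 293.15   # start: 500 kPa (abs) at 20.0 C
p2, t2 = 497_000.0, 291.15   # end:   497 kPa (abs) at 18.0 C

naive_loss = moles(p1, t1) - moles(p2, t1)   # pressure drop only, ignores cooling
true_loss = moles(p1, t1) - moles(p2, t2)    # temperature-compensated inventory change
print(naive_loss > true_loss)                # the naive analysis overstates the leak
```

Here the compensated inventory change is essentially zero (the pressure drop is explained by cooling), while the naive analysis reports an apparent leak.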

  14. Locked modes and magnetic field errors in MST

    SciTech Connect

    Almagri, A.F.; Assadi, S.; Prager, S.C.; Sarff, J.S.; Kerst, D.W.

    1992-06-01

    In the MST reversed-field pinch, magnetic oscillations become stationary (locked) in the lab frame as a result of a process involving interactions among the modes, sawteeth, and field errors. Several helical modes become phase-locked to each other to form a rotating localized disturbance; the disturbance locks to an impulsive field error generated at a sawtooth crash; the error fields grow monotonically after locking (perhaps due to an unstable interaction between the modes and the field error); and over the tens of milliseconds of growth, confinement degrades and the discharge eventually terminates. Field error control has been partially successful in eliminating locking.

  15. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    SciTech Connect

    Beckerman, M.; Jones, J.P.

    1999-02-01

    We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.

  16. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    SciTech Connect

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  17. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    DOE PAGES [OSTI]

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  18. A technique for human error analysis (ATHEANA)

    SciTech Connect

    Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.

    1996-05-01

    Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.

  19. Polaractivation for classical zero-error capacity of qudit channels

    SciTech Connect

    Gyongyosi, Laszlo; Imre, Sandor

    2014-12-04

    We introduce a new phenomenon for zero-error transmission of classical information over quantum channels that initially were not capable of zero-error classical communication. The effect is called polaractivation, and the result is similar to the superactivation effect. We use the Choi-Jamiolkowski isomorphism and the Schmidt theorem to prove the polaractivation of classical zero-error capacity and define the polaractivator channel coding scheme.

  20. Internal compiler error for function pointer with identically named arguments

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Internal compiler error for function pointer with identically named arguments. June 9, 2015 by Scott French, NERSC USG. Status: Bug 21435 reported to PGI. For pgcc versions after 12.x (up through 12.9 is fine, but 13.x and 14.x are not), you may observe an internal compiler error associated with function pointer prototypes when named arguments are used. Specifically, if a function pointer type is defined

  1. Platform-Independent Method for Detecting Errors in Metagenomic...

    Office of Scientific and Technical Information (OSTI)

    Title: Platform-Independent Method for Detecting Errors in Metagenomic Sequencing Data: DRISEE Authors: Keegan, K. P. ; Trimble, W. L. ; Wilkening, J. ; Wilke, A. ; Harrison, T. ; ...

  2. Detecting and correcting hard errors in a memory array

    DOEpatents

    Kalamatianos, John; John, Johnsy Kanjirapallil; Gelinas, Robert; Sridharan, Vilas K.; Nevius, Phillip E.

    2015-11-19

    Hard errors in the memory array can be detected and corrected in real-time using reusable entries in an error status buffer. Data may be rewritten to a portion of a memory array and a register in response to a first error in data read from the portion of the memory array. The rewritten data may then be written from the register to an entry of an error status buffer in response to the rewritten data read from the register differing from the rewritten data read from the portion of the memory array.
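A toy sketch of the rewrite-and-compare idea (not the patented hardware logic): data are rewritten to both the memory location and a register, and a mismatch that persists after the rewrite marks a hard error recorded in the error status buffer. The stuck-bit fault model and addresses below are invented:

```python
# Invented toy model: one simulated cell has a stuck-at-0 bit, so rewriting
# cannot repair it, and the persistent mismatch flags a hard error.
class FaultyMemory:
    def __init__(self):
        self.cells = {}
        self.stuck_addr = 0x10          # assumed faulty address

    def write(self, addr, value):
        # bit 0 is stuck at 0 in the faulty cell
        self.cells[addr] = (value & ~1) if addr == self.stuck_addr else value

    def read(self, addr):
        return self.cells[addr]

def is_hard_error(mem, addr, data):
    register = data                     # golden copy held in a register
    mem.write(addr, data)               # rewrite after the first observed error
    return mem.read(addr) != register   # persistent mismatch -> hard error

mem = FaultyMemory()
error_status_buffer = [a for a in (0x10, 0x20) if is_hard_error(mem, a, 0b1011)]
print(error_status_buffer)              # only the stuck cell is recorded
```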

  3. Info-Gap Analysis of Truncation Errors in Numerical Simulations...

    Office of Scientific and Technical Information (OSTI)

    Title: Info-Gap Analysis of Truncation Errors in Numerical Simulations. Authors: Kamm, James R. ; Witkowski, Walter R. ; Rider, William J. ; Trucano, Timothy Guy ; Ben-Haim, Yakov. ...

  4. Info-Gap Analysis of Numerical Truncation Errors. (Conference...

    Office of Scientific and Technical Information (OSTI)

    Title: Info-Gap Analysis of Numerical Truncation Errors. Authors: Kamm, James R. ; Witkowski, Walter R. ; Rider, William J. ; Trucano, Timothy Guy ; Ben-Haim, Yakov. Publication ...

  5. Confirmation of standard error analysis techniques applied to...

    Office of Scientific and Technical Information (OSTI)

    reported parameter errors are not reliable in many EXAFS studies in the literature. ... Country of Publication: United States Language: English Subject: 75; ABSORPTION; ACCURACY; ...

  6. Accounting for Model Error in the Calibration of Physical Models

    Office of Scientific and Technical Information (OSTI)

    ... model error term in locations where key modeling assumptions and approximations are made ... to represent the truth o In this context, the data has no noise o Discrepancy ...

  7. Handling Model Error in the Calibration of Physical Models

    Office of Scientific and Technical Information (OSTI)

    ... model error term in locations where key modeling assumptions and approximations are made ... to represent the truth o In this context, the data has no noise o Discrepancy ...

  8. Output-Based Error Estimation and Adaptation for Uncertainty...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Output-Based Error Estimation and Adaptation for Uncertainty Quantification Isaac M. Asher and Krzysztof J. Fidkowski University of Michigan US National Congress on Computational...

  9. WIPP Weatherization: Common Errors and Innovative Solutions Presentati...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    More Documents & Publications Common Errors and Innovative Solutions Transcript Building ... America Best Practices Series: Volume 12. Energy Renovations-Insulation: A Guide for ...

  10. U-058: Apache Struts Conversion Error OGNL Expression Injection...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    in Apache Struts. A remote user can execute arbitrary commands on the target system. PLATFORM: Apache Struts 2.x ABSTRACT: Apache Struts Conversion Error OGNL Expression...

  11. Measurement uncertainty relations

    SciTech Connect

    Busch, Paul; Lahti, Pekka; Werner, Reinhard F.

    2014-04-15

    Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.

  12. Error localization in RHIC by fitting difference orbits

    SciTech Connect

    Liu C.; Minty, M.; Ptitsyn, V.

    2012-05-20

    The presence of realistic errors in an accelerator, or in the model used to describe the accelerator, is such that a measurement of the beam trajectory may deviate from prediction. Comparison of measurements to the model can be used to detect such errors. To do so, the initial conditions (phase space parameters at any point) must be determined, which can be achieved by fitting the difference orbit against the model prediction using only a few beam position measurements. Using these initial conditions, the fitted orbit can be propagated along the beam line based on the optics model. Measurement and model will agree up to the point of an error. The error source can be better localized by additionally fitting the difference orbit using downstream BPMs and back-propagating the solution. If one dominating error source exists in the machine, the fitted orbit will deviate from the difference orbit at the same point.
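The fit-and-propagate localization can be sketched with a deliberately simple lattice (pure drifts rather than RHIC optics); the kick location, BPM count, and detection threshold below are invented:

```python
import numpy as np

# Invented toy lattice: drifts of length L between BPMs, so the transfer
# matrix per cell is [[1, L], [0, 1]] (not RHIC optics).
L = 10.0
M = np.array([[1.0, L], [0.0, 1.0]])
n_bpm = 12
kick_at, kick = 6, 0.5e-3              # assumed error: an angle kick after BPM 6

state = np.array([1.0e-3, 0.1e-3])     # difference-orbit initial (x [m], x' [rad])
orbit = []
for i in range(n_bpm):
    orbit.append(state[0])             # BPM reading
    if i == kick_at:
        state = state + np.array([0.0, kick])
    state = M @ state
orbit = np.array(orbit)

# Fit (x0, x0') from the first 4 BPMs; in a pure drift, x_i = x0 + i*L*x0'.
idx = np.arange(4)
A = np.column_stack([np.ones(4), idx * L])
x0, xp0 = np.linalg.lstsq(A, orbit[:4], rcond=None)[0]

# Propagate the fitted orbit and flag the first BPM that departs from it.
predicted = x0 + np.arange(n_bpm) * L * xp0
located = int(np.argmax(np.abs(orbit - predicted) > 1e-6))
print(located)
```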

  13. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    SciTech Connect

    Hess-Flores, M

    2011-11-10

    Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in

  14. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES [OSTI]

    Turner, A. J.; Jacob, D. J.

    2015-01-13

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
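Method (1), grid coarsening, reduces the state vector with an aggregation matrix that averages adjacent elements. A minimal sketch, with invented dimensions and a uniform-within-block prior for mapping back to the native grid:

```python
import numpy as np

# Invented dimensions: a 12-element native-resolution state vector reduced
# to 4 coarse cells by merging k=3 adjacent elements (grid coarsening).
native = np.sin(np.linspace(0.0, np.pi, 12))
k = 3
n = native.size

G = np.zeros((n // k, n))                     # aggregation matrix
for row in range(n // k):
    G[row, row * k:(row + 1) * k] = 1.0 / k   # average each block of k elements

coarse = G @ native                       # reduced state vector
back = np.repeat(coarse, k)               # imposed prior: uniform within a block
aggregation_error_proxy = np.abs(native - back).max()
print(coarse.shape, aggregation_error_proxy > 0.0)
```

The nonzero residual between the native vector and its block-averaged reconstruction is the price of aggregation the abstract describes.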

  15. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES [OSTI]

    Turner, A. J.; Jacob, D. J.

    2015-06-30

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  16. Laboratory and field studies related to the radionuclide migration project. Progress report, October 1, 1982-September 30, 1983

    SciTech Connect

    Daniels, W.R.; Thompson, J.L.

    1984-04-01

    The FY 1983 laboratory and field studies related to the Radionuclide Migration project are described. Results are presented for radiochemical analyses of water samples collected from the RNM-1 well and the RNM-2S satellite well at the Cambric site. Data are included for tritium, ³⁶Cl, ⁸⁵Kr, ⁹⁰Sr, ¹²⁹I, and ¹³⁷Cs. Preliminary results from water collection at the Cheshire site are reported. Laboratory studies emphasize the sorptive behavior of tuff and its dependence on mineralogy. 18 references, 7 figures, 13 tables.

  17. Link error from craype/2.5.0

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Link error from craype/2.5.0. January 13, 2016 by Woo-Sun Yang. If you build a code using a file called 'configure' with craype/2.5.0, the Cray build tools assume that you want to use the 'native' link mode (e.g., gcc defaults to dynamic linking), by adding '-Wl,-rpath=/opt/intel/composer_xe_2015/compiler/lib/intel64 -lintlc'. This creates a link error: /usr/bin/ld: cannot find -lintlc. A temporary workaround is to swap the default craype (2.5.0) with an older or newer

  18. Wind Power Forecasting Error Distributions: An International Comparison; Preprint

    SciTech Connect

    Hodge, B. M.; Lew, D.; Milligan, M.; Holttinen, H.; Sillanpaa, S.; Gomez-Lazaro, E.; Scharff, R.; Soder, L.; Larsen, X. G.; Giebel, G.; Flynn, D.; Dobschinski, J.

    2012-09-01

    Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.

  19. Servo control booster system for minimizing following error

    DOEpatents

    Wise, William L.

    1985-01-01

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
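The by-exception behavior can be sketched as a conditional controller; the gains and resolution value below are invented, and the patent's clocked hardware loop is not modeled:

```python
# Minimal sketch of the by-exception idea: a conventional loop runs normally,
# and a second correction loop engages only when the command-to-response
# error reaches the feedback resolution least increment dS_R.
dS_R = 0.01                      # position feedback resolution (assumed units)

def servo_step(command, position, conventional_gain=0.2, booster_gain=0.9):
    error = command - position
    if abs(error) >= dS_R:       # by exception: second loop takes over
        return position + booster_gain * error
    return position + conventional_gain * error   # conventional loop

pos, cmd = 0.0, 1.0
for _ in range(60):
    pos = servo_step(cmd, pos)
final_error = abs(cmd - pos)
print(final_error < dS_R)
```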

  20. A Reduction in Systematic Errors of a Bayesian Retrieval Algorithm

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    A Reduction in Systematic Errors of a Bayesian Retrieval Algorithm Seo, Eun-Kyoung Florida State University Liu, Guosheng Florida State University Kim, Kwang-Yul Texas A&M ...

  1. Wind Power Forecasting Error Distributions over Multiple Timescales: Preprint

    SciTech Connect

    Hodge, B. M.; Milligan, M.

    2011-03-01

    In this paper, we examine the shape of the persistence model error distribution for ten different wind plants in the ERCOT system over multiple timescales. Comparisons are made between the experimental distribution shape and that of the normal distribution.
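A persistence-model error distribution like the one examined here can be generated from any time series, since the errors are simply first differences. The synthetic heavy-tailed series below is an assumption for illustration, not ERCOT data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a wind power series (assumed): heavy-tailed ramps
# drawn from a Student-t distribution.
steps = 0.02 * rng.standard_t(df=5, size=5000)
series = 0.5 + np.cumsum(steps)

# Persistence model: forecast(t+1) = observed(t), so the forecast errors
# are just the first differences of the series.
errors = series[1:] - series[:-1]

# Excess kurtosis is 0 for a normal distribution; fat tails give > 0,
# which is the kind of shape difference such comparisons look for.
z = (errors - errors.mean()) / errors.std()
excess_kurtosis = (z ** 4).mean() - 3.0
print(excess_kurtosis > 0.0)
```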

  2. Error and tolerance studies for the SSC Linac

    SciTech Connect

    Raparia, D.; Chang, Chu Rui; Guy, F.; Hurd, J.W.; Funk, W.; Crandall, K.R.

    1993-05-01

    This paper summarizes error and tolerance studies for the SSC Linac. These studies also include higher-order multipoles. The codes used in these simulations are PARMTEQ, PARMILA, CCLDYN, PARTRACE, and CCLTRACE.

  3. Quantification of the effects of dependence on human error probabiliti...

    Office of Scientific and Technical Information (OSTI)

    In estimating the probabilities of human error in the performance of a series of tasks in a nuclear power plant, the situation-specific characteristics of the series must be ...

  4. Using doppler radar images to estimate aircraft navigational heading error

    DOEpatents

    Doerry, Armin W.; Jordan, Jay D.; Kim, Theodore J.

    2012-07-03

    A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.

  5. A Possible Calorimetric Error in Heavy Water Electrolysis on Platinum

    SciTech Connect

    Shanahan, K.L.

    2001-03-16

    A systematic error in mass flow calorimetry calibration procedures, potentially capable of explaining most positive excess power measurements, is described. Data recently interpreted as providing evidence of the Pons-Fleischmann effect with a platinum cathode are reinterpreted with the opposite conclusion. This indicates that it is premature to conclude that platinum displays a Pons and Fleischmann effect, and places the burden of evaluating the magnitude of this error on all mass flow calorimetric experiments.

  6. Visio-Error&OmissionNoClouds.vsd

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Error/Omission Process. Process Owner: Department Managers, Corporate Projects and Facilities Projects. February 7, 2008. Key responsibilities: A/E - Architectural/Engineering Firm; SCR - Sandia Contracting Representative; SDR - Sandia Delegated Representative; E&OB - Errors & Omissions Board; PM - Project Manager; REQ - Requester. (Flowchart: for Facilities Projects and Line Item Projects, the PM reviews design findings and begins discovery, then branches on whether there is a cost impact and whether it is less than 3% of the ICAA.)

  7. Compiler-Assisted Detection of Transient Memory Errors

    SciTech Connect

    Tavarageri, Sanket; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-06-09

    The probability of bit flips in hardware memory systems is projected to increase significantly as memory systems continue to scale in size and complexity. Effective hardware-based error detection and correction requires that the complete data path, involving all parts of the memory system, be protected with sufficient redundancy. First, this may be costly to employ on commodity computing platforms and second, even on high-end systems, protection against multi-bit errors may be lacking. Therefore, augmenting hardware error detection schemes with software techniques is of considerable interest. In this paper, we consider software-level mechanisms to comprehensively detect transient memory faults. We develop novel compile-time algorithms to instrument application programs with checksum computation codes so as to detect memory errors. Unlike prior approaches that employ checksums on computational and architectural state, our scheme verifies every data access and works by tracking variables as they are produced and consumed. Experimental evaluation demonstrates that the proposed comprehensive error detection solution is viable as a completely software-only scheme. We also demonstrate that with limited hardware support, overheads of error detection can be further reduced.
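The produce/verify-at-consume checksum idea can be sketched by hand with a CRC (the paper inserts these checks via compiler instrumentation; the data and fault below are invented):

```python
import zlib

# Sketch: record a checksum when a value is produced, and verify it again
# just before the value is consumed, so a bit flip in between is detected.
def checksum(buf) -> int:
    return zlib.crc32(buf)

data = bytearray(b"result of some computation")
produced_crc = checksum(data)       # recorded at the point of production

# ... time passes; simulate a transient single-bit memory fault ...
data[3] ^= 0x08

fault_detected = checksum(data) != produced_crc   # check at the point of use
print(fault_detected)
```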

  8. Monte Carlo analysis of localization errors in magnetoencephalography

    SciTech Connect

    Medvick, P.A.; Lewis, P.S.; Aine, C.; Flynn, E.R.

    1989-01-01

    In magnetoencephalography (MEG), the magnetic fields created by electrical activity in the brain are measured on the surface of the skull. To determine the location of the activity, the measured field is fit to an assumed source generator model, such as a current dipole, by minimizing chi-square. For current dipoles and other nonlinear source models, the fit is performed by an iterative least squares procedure such as the Levenberg-Marquardt algorithm. Once the fit has been computed, analysis of the resulting value of chi-square can determine whether the assumed source model is adequate to account for the measurements. If the source model is adequate, then the effect of measurement error on the fitted model parameters must be analyzed. Although these kinds of simulation studies can provide a rough idea of the effect that measurement error can be expected to have on source localization, they cannot provide detailed enough information to determine the effects that the errors in a particular measurement situation will produce. In this work, we introduce and describe the use of Monte Carlo-based techniques to analyze model fitting errors for real data. Given the details of the measurement setup and a statistical description of the measurement errors, these techniques determine the effects the errors have on the fitted model parameters. The effects can then be summarized in various ways such as parameter variances/covariances or multidimensional confidence regions. 8 refs., 3 figs.
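The Monte Carlo approach to fit-error analysis generalizes beyond MEG. The sketch below uses a linear fit as a stand-in for the nonlinear dipole fit, with invented noise statistics: measurements are perturbed according to their error description, refit many times, and the spread of the fitted parameters is summarized as a covariance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented setup: a straight-line model stands in for the dipole model.
x = np.linspace(0.0, 1.0, 25)
true_slope, true_intercept = 2.0, -0.5
sigma = 0.05                            # assumed 1-sigma measurement error
measured = true_slope * x + true_intercept + rng.normal(0.0, sigma, x.size)

# Monte Carlo: perturb the measurements with their known error statistics,
# refit each time, and summarize the spread of the fitted parameters.
fits = np.array([np.polyfit(x, measured + rng.normal(0.0, sigma, x.size), 1)
                 for _ in range(500)])
param_cov = np.cov(fits.T)              # covariance of (slope, intercept)
print(param_cov.shape)
```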

  9. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES [OSTI]

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.

  10. MC&A in a Radiochemical Plant

    SciTech Connect

    Crawford, J.M.

    1998-10-30

    MC&A in a reprocessing plant in the United States is based on solution measurements of the dissolved fuel assemblies, periodic inventories, and solution measurements of product and waste streams.

  11. Application of human error analysis to aviation and space operations

    SciTech Connect

    Nelson, W.R.

    1998-03-01

    For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) the authors have been working to apply methods of human error analysis to the design of complex systems. They have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. They are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g., changes to system design or procedures) can be identified. The primary vehicle the authors have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. They are currently adapting their methods and tools of human error analysis to the domain of air traffic management (ATM) systems. Under the NASA-sponsored Advanced Air Traffic Technologies (AATT) program they are working to address issues of human reliability in the design of ATM systems to support the development of a free flight environment for commercial air traffic in the US. They are also currently testing the application of their human error analysis approach for space flight operations. They have developed a simplified model of the critical habitability functions for the space station Mir, and have used this model to assess the effects of system failures and human errors that have occurred in the wake of the collision incident last year. They are developing an approach so that lessons learned from Mir operations can be systematically applied to design and operation of long-term space missions such as the International Space Station (ISS) and the manned Mars mission.

  12. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    SciTech Connect

    Li, T.S.; et al.

    2016-01-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry be stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
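
The origin of such a chromatic term can be illustrated with a toy synthetic-photometry calculation (an illustrative Gaussian bandpass and power-law SEDs, not the DES system): when the effective bandpass shifts, a red source and a blue source pick up different magnitude offsets, and the difference is exactly the color-dependent error that a gray zero-point correction cannot absorb.

```python
import numpy as np

wl = np.linspace(400.0, 700.0, 601)   # wavelength grid in nm
dwl = wl[1] - wl[0]

def integrate(f):
    # simple Riemann sum on the uniform grid
    return float(np.sum(f) * dwl)

def bandpass(center, width=60.0):
    # toy Gaussian system throughput
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

def synth_mag(flux, throughput):
    # broadband magnitude for a photon-counting detector (arbitrary zero point)
    return -2.5 * np.log10(integrate(flux * throughput * wl)
                           / integrate(throughput * wl))

blue = (wl / 550.0) ** -2.0   # blue power-law SED
red = (wl / 550.0) ** 2.0     # red power-law SED

S0 = bandpass(550.0)   # nominal throughput
S1 = bandpass(555.0)   # perturbed throughput (e.g., airmass or PWV change)

d_blue = synth_mag(blue, S1) - synth_mag(blue, S0)  # fainter: positive
d_red = synth_mag(red, S1) - synth_mag(red, S0)     # brighter: negative
chromatic_error = d_red - d_blue   # color-dependent residual a gray correction misses
```

Here a 5 nm redward shift of a 60 nm-wide band produces offsets of opposite sign for the two sources, a few hundredths of a magnitude in this toy setup.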

  13. Economic penalties of problems and errors in solar energy systems

    SciTech Connect

    Raman, K.; Sparkes, H.R.

    1983-01-01

    Experience with a large number of installed solar energy systems in the HUD Solar Program has shown that a variety of problems and design/installation errors have occurred in many solar systems, sometimes resulting in substantial additional costs for repair and/or replacement. In this paper, the effect of problems and errors on the economics of solar energy systems is examined. A method is outlined for doing this in terms of selected economic indicators. The method is illustrated by a simple example of a residential solar DHW system. An example of an installed, instrumented solar energy system in the HUD Solar Program is then discussed. Detailed results are given for the effects of the problems and errors on the cash flow, cost of delivered heat, discounted payback period, and life-cycle cost of the solar energy system. Conclusions are drawn regarding the most suitable economic indicators for showing the effects of problems and errors in solar energy systems. A method is outlined for deciding on the maximum justifiable expenditure for maintenance on a solar energy system with problems or errors.
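
The kind of indicator comparison described above can be sketched as follows. The cash flows and rates are hypothetical, not the HUD program data: a repair cost in a given year lengthens the discounted payback period and raises the life-cycle cost of the system.

```python
def life_cycle_cost(capital, annual_savings, repairs, rate, years):
    """Net life-cycle cost: capital plus discounted repair costs minus
    discounted energy savings. `repairs` maps year -> repair cost caused
    by problems/errors.
    """
    cost = capital
    for t in range(1, years + 1):
        disc = (1.0 + rate) ** t
        cost += repairs.get(t, 0.0) / disc - annual_savings / disc
    return cost

def discounted_payback(capital, annual_savings, repairs, rate, max_years=50):
    """First year in which cumulative discounted net savings recover the capital."""
    cum = 0.0
    for t in range(1, max_years + 1):
        disc = (1.0 + rate) ** t
        cum += (annual_savings - repairs.get(t, 0.0)) / disc
        if cum >= capital:
            return t
    return None  # never pays back within max_years

# Hypothetical DHW system: $2000 installed, $500/yr savings.
no_fault = discounted_payback(2000.0, 500.0, {}, 0.0)
with_fault = discounted_payback(2000.0, 500.0, {2: 500.0}, 0.0)  # $500 repair in year 2
```

With a zero discount rate for clarity, the fault-free system pays back in year 4; a single $500 repair pushes payback to year 5, which is the economic penalty the paper quantifies.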

  14. "RSE Table N11.2. Relative Standard Errors for Table N11.2;...

    Energy Information Administration (EIA) (indexed site)

    ... Standard Industrial Classification (SIC) system. (b) 'Distillate Fuel Oil' includes ... gas obtained from utilities, local distribution companies, and any other ...

  15. "RSE Table N11.1. Relative Standard Errors for Table N11.1;...

    Energy Information Administration (EIA) (indexed site)

    ... Standard Industrial Classification (SIC) system. (b) 'Distillate Fuel Oil' includes ... gas obtained from utilities, local distribution companies, and any other ...

  16. RSE Table 3.5 Relative Standard Errors for Table 3.5

    Energy Information Administration (EIA) (indexed site)

  17. "RSE Table C10.1. Relative Standard Errors for Table C10.1;...

    Energy Information Administration (EIA) (indexed site)

  18. "RSE Table N5.1. Relative Standard Errors for Table N5.1;...

    Energy Information Administration (EIA) (indexed site)

    ","FurnaceCoke"," ","Petroleum","or","Wood ... ,,"Total United States" , 311,"Food",2,0,1,0,0,0... 324110," Petroleum Refineries",4,0,3,6,0,0,24 ...

  19. "RSE Table C12.1. Relative Standard Errors for Table C12.1;...

    Energy Information Administration (EIA) (indexed site)

    ,,"Total United States" , 311,"Food",2,0,2,1,1 ... 324110," Petroleum Refineries",4,0,15,5,12 ... Establishment" ,,"Total United States" , ...

  20. "RSE Table C4.1. Relative Standard Errors for Table C4.1;...

    Energy Information Administration (EIA) (indexed site)

    ,,"Total United States" , 311,"Food",0,0,3,4,1,3... 324,"Petroleum and Coal ... "produced at refineries or natural gas ...

  1. RSE Table 1.1 Relative Standard Errors for Table 1.1

    Energy Information Administration (EIA) (indexed site)

  2. RSE Table 1.2 Relative Standard Errors for Table 1.2

    Energy Information Administration (EIA) (indexed site)

  3. RSE Table 7.9 Relative Standard Errors for Table 7.9

    Energy Information Administration (EIA) (indexed site)

  4. RSE Table N1.1 and N1.2. Relative Standard Errors for Tables...

    Energy Information Administration (EIA) (indexed site)

  5. RSE Table 7.3 Relative Standard Errors for Table 7.3

    Energy Information Administration (EIA) (indexed site)

  6. RSE Table 3.2 Relative Standard Errors for Table 3.2

    Energy Information Administration (EIA) (indexed site)

  7. RSE Table 3.1 Relative Standard Errors for Table 3.1

    Energy Information Administration (EIA) (indexed site)

  8. RSE Table 7.6 Relative Standard Errors for Table 7.6

    Energy Information Administration (EIA) (indexed site)

  9. RSE Table N3.1 and N3.2. Relative Standard Errors for Tables...

    Energy Information Administration (EIA) (indexed site)

  10. "RSE Table N8.3. Relative Standard Errors for Table N8.3;...

    Energy Information Administration (EIA) (indexed site)

  11. "RSE Table C2.1. Relative Standard Errors for Table C2.1;...

    Energy Information Administration (EIA) (indexed site)

  12. RSE Table 7.10 Relative Standard Errors for Table 7.10

    Energy Information Administration (EIA) (indexed site)

  13. RSE Table 4.1 Relative Standard Errors for Table 4.1

    Energy Information Administration (EIA) (indexed site)

  14. RSE Table 4.2 Relative Standard Errors for Table 4.2

    Energy Information Administration (EIA) (indexed site)

  15. RSE Table 7.7 Relative Standard Errors for Table 7.7

    Energy Information Administration (EIA) (indexed site)

  16. "RSE Table C3.1. Relative Standard Errors for Table C3.1;...

    Energy Information Administration (EIA) (indexed site)

    ... and Office of Oil and Gas, Petroleum Supply Division, Form EIA-810, 'Monthly Refinery Report' for 1998. ...

  17. "RSE Table N11.3. Relative Standard Errors for Table N11.3;...

    Energy Information Administration (EIA) (indexed site)

    ... for which payment was not made, quantities purchased centrally within the company but separate from the reporting establishment, and quantities for which payment was made ...

  18. "RSE Table C11.3. Relative Standard Errors for Table C11.3;...

    Energy Information Administration (EIA) (indexed site)

    ... for which payment was not made, quantities purchased centrally within the company but separate from the reporting establishment, and quantities for which payment was made ...

  19. RSE Table 10.13 Relative Standard Errors for Table 10.13

    Energy Information Administration (EIA) (indexed site)

    ... for which payment was made, quantities transferred in, quantities purchased and paid for by a central purchasing entity, and quantities for which payment was made in kind. ...

  20. "RSE Table N11.4. Relative Standard Errors for Table N11.4;...

    Energy Information Administration (EIA) (indexed site)

    ... for which payment was not made, quantities purchased centrally within the company but separate from the reporting establishment, and quantities for which payment was made ...

  1. RSE Table 10.12 Relative Standard Errors for Table 10.12

    Energy Information Administration (EIA) (indexed site)

    ... for which payment was made, quantities transferred in, quantities purchased and paid for by a central purchasing entity, and quantities for which payment was made in kind. ...

  2. Table 3b. Relative Standard Errors for Total Natural Gas Consumption...

    Energy Information Administration (EIA) (indexed site)

  3. Table 5b. Relative Standard Errors for Total District Heat Consumption...

    Energy Information Administration (EIA) (indexed site)

  4. "RSE Table E13.2. Relative Standard Errors for Table E13.2;...

    Energy Information Administration (EIA) (indexed site)

    ... sources. Noncombustible sources include solar power, wind power, hydropower, and ... percentage is provided for each table cell. Source: Energy Information ...

  5. RSE Table 10.10 Relative Standard Errors for Table 10.10

    Energy Information Administration (EIA) (indexed site)

  6. RSE Table 10.11 Relative Standard Errors for Table 10.11

    Energy Information Administration (EIA) (indexed site)

  7. "RSE Table E1.1. Relative Standard Errors for Table E1.1;...

    Energy Information Administration (EIA) (indexed site)

    ... and Gas, Petroleum Supply Division, Form EIA-810, 'Monthly Refinery Report' for 1998, and the Bureau of the Census, data files for the '1998 Annual Survey of Manufactures.' ...

  8. "RSE Table E2.1. Relative Standard Errors for Table E2.1;...

    Energy Information Administration (EIA) (indexed site)

    ... and Gas, Petroleum Supply Division, Form EIA-810, 'Monthly Refinery Report' for 1998, and the Bureau of the Census, data files for the '1998 Annual Survey of Manufactures.' ...

  9. "RSE Table E13.3. Relative Standard Errors for Table E13.3;...

    Energy Information Administration (EIA) (indexed site)

    ... Consumption Division, Form EIA-846, '1998 Manufacturing Energy Consumption Survey,' and the Bureau of the Census, data files for the '1998 Annual Survey of Manufactures.' ...

  10. RSE Table E8.1 and E8.2. Relative Standard Errors for Tables...

    Energy Information Administration (EIA) (indexed site)

    ... Consumption Division, Form EIA-846, '1998 Manufacturing Energy Consumption Survey,' and the Bureau of the Census, data files for the '1998 Annual Survey of Manufactures.' ...

  11. "RSE Table E13.1. Relative Standard Errors for Table E13.1;...

    Energy Information Administration (EIA) (indexed site)

    ... Consumption Division, Form EIA-846, '1998 Manufacturing Energy Consumption Survey,' and the Bureau of the Census, data files for the '1998 Annual Survey of Manufactures.' ...

  12. "RSE Table E7.1. Relative Standard Errors for Table E7.1;...

    Energy Information Administration (EIA) (indexed site)

    ... Consumption Division, Form EIA-846, '1998 Manufacturing Energy Consumption Survey,' and the Bureau of the Census, data files for the '1998 Annual Survey of Manufactures.' ...

  13. When soft controls get slippery: User interfaces and human error

    SciTech Connect

    Stubler, W.F.; O'Hara, J.M.

    1998-12-01

    Many types of products and systems that have traditionally featured physical control devices are now being designed with soft controls--input formats appearing on computer-based display devices and operated by a variety of input devices. A review of complex human-machine systems found that soft controls are particularly prone to some types of errors and may affect overall system performance and safety. This paper discusses the application of design approaches for reducing the likelihood of these errors and for enhancing usability, user satisfaction, and system performance and safety.

  14. MPI errors from cray-mpich/7.3.0

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    MPI errors from cray-mpich/7.3.0 January 6, 2016 by Ankit Bhagatwala A change in the MPICH2 library that now strictly enforces non-overlapping buffers in MPI collectives may cause some MPI applications that use overlapping buffers to fail at runtime. As an example, one of the routines affected is MPI_ALLGATHER. There are several possible fixes. The cleanest one is to specify MPI_IN_PLACE instead of the address of the send buffer for cases where sendbuf and recvbuf overlap.

  15. Scalable error correction in distributed ion trap computers

    SciTech Connect

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-11-15

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiment.

  16. JLab SRF Cavity Fabrication Errors, Consequences and Lessons Learned

    SciTech Connect

    Frank Marhauser

    2011-09-01

    Today, elliptical superconducting RF (SRF) cavities are preferably made from deep-drawn niobium sheets, as pursued at Jefferson Laboratory (JLab). The fabrication of a cavity incorporates various cavity cell machining, trimming, and electron beam welding (EBW) steps, as well as surface chemistry, that add to forming errors, creating geometrical deviations of the cavity shape from its design. An analysis of in-house built cavities over recent years revealed significant errors in cavity production. Past fabrication flaws are described and lessons learned applied successfully to the most recent in-house series production of multi-cell cavities.

  17. Laser Phase Errors in Seeded Free Electron Lasers

    SciTech Connect

    Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC

    2012-04-17

    Harmonic seeding of free electron lasers has attracted significant attention as a method for producing transform-limited pulses in the soft x-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality and impede production of transform-limited pulses. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.
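
The amplification of seed phase errors is easy to see in a toy model (illustrative numbers, not a simulation from the paper): in harmonic multiplication schemes the output phase at harmonic n is n times the seed phase, so any seed phase error grows linearly with the harmonic number.

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 1001)        # time across the seed pulse (arb. units)
phi_seed = 0.3 * t**2 + 0.05 * t**3     # small chirp-like seed phase error (rad)

n = 10                                   # harmonic multiplication factor
phi_harm = n * phi_seed                  # phase error carried to the nth harmonic

def rms(p):
    return np.sqrt(np.mean((p - p.mean()) ** 2))

amplification = rms(phi_harm) / rms(phi_seed)   # equals n
```

A tolerable phase ripple on the seed laser thus becomes an n-times larger ripple on the output, which is why multiplication to short wavelengths can degrade an otherwise transform-limited pulse.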

  18. V-172: ISC BIND RUNTIME_CHECK Error Lets Remote Users Deny Service...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    V-172: ISC BIND RUNTIME_CHECK Error Lets Remote Users Deny Service Against Recursive Resolvers

  19. Contributions to Human Errors and Breaches in National Security Applications.

    SciTech Connect

    Pond, D. J.; Houghton, F. K.; Gilmore, W. E.

    2002-01-01

    Los Alamos National Laboratory has recognized that security infractions are often the consequence of various types of human errors (e.g., mistakes, lapses, slips) and/or breaches (i.e., deliberate deviations from policies or required procedures with no intention to bring about an adverse security consequence) and therefore has established an error reduction program based in part on the techniques used to mitigate hazard and accident potentials. One cornerstone of this program, definition of the situational and personal factors that increase the likelihood of employee errors and breaches, is detailed here. This information can be used retrospectively (as in accident investigations) to support and guide inquiries into security incidents or prospectively (as in hazard assessments) to guide efforts to reduce the likelihood of error/incident occurrence. Both approaches provide the foundation for targeted interventions to reduce the influence of these factors and for the formation of subsequent 'lessons learned.' Overall security is enhanced not only by reducing the inadvertent releases of classified information but also by reducing the security and safeguards resources devoted to them, thereby allowing these resources to be concentrated on acts of malevolence.

  20. Shape error analysis for reflective nano focusing optics

    SciTech Connect

    Modi, Mohammed H.; Idir, Mourad

    2010-06-23

    Focusing performance of reflective x-ray optics is determined by surface figure accuracy. Any surface imperfection present on such optics introduces a phase error in the outgoing wave fields, so the converging beam at the focal spot will differ from the desired performance. The effect of these errors on focusing performance can be calculated by a wave-optical approach, considering coherent wave-field illumination of the optical elements. We have developed a wave-optics simulator using the Fresnel-Kirchhoff diffraction integral to calculate the mirror pupil function. Both analytically calculated and measured surface topography data can be taken as an aberration source for the outgoing wave fields. Simulations are performed to study the effect of surface height fluctuations on focusing performance over a wide frequency range, in the high, mid, and low frequency bands. The results using a real shape profile measured with a long trace profilometer (LTP) suggest that a shape error of λ/4 PV (peak to valley) is tolerable to achieve diffraction-limited performance. It is desirable to remove shape error at very low spatial frequencies, such as 0.1 mm⁻¹, which will otherwise generate beam waist or satellite peaks. All other frequencies above this limit will not affect the focused beam profile but only cause a loss in intensity.

  1. Servo control booster system for minimizing following error

    DOEpatents

    Wise, W.L.

    1979-07-26

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis, for all operational times of consequence and for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error is greater than or equal to ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to the conventional servo control means. By operating this second position feedback control loop at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
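
The by-exception switching logic can be sketched as follows (hypothetical controller stubs and gains, not the patented circuit): the booster loop engages only while the command-to-response error is at least the position-feedback resolution least increment, and is otherwise transparent.

```python
def servo_step(command, position, delta_r, primary_loop, booster_loop):
    """One control update of the dual-loop scheme.

    primary_loop / booster_loop: callables mapping error -> actuator correction.
    The booster loop runs only "by exception", when the command-to-response
    error reaches the feedback resolution increment delta_r.
    """
    error = command - position
    if abs(error) >= delta_r:
        return booster_loop(error)   # precise position-correction signal
    return primary_loop(error)       # booster disconnected, transparent

# Stub controllers for illustration: simple proportional gains.
primary = lambda e: 0.5 * e
booster = lambda e: 1.0 * e

small = servo_step(10.0, 9.9995, 0.001, primary, booster)  # |error| < delta_r
large = servo_step(10.0, 9.9, 0.001, primary, booster)     # |error| >= delta_r
```

In the first call the error is below the resolution increment, so the conventional loop acts alone; in the second the booster loop produces the correction.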

  2. Error Detection and Correction LDMS Plugin Version 1.0

    SciTech Connect

    Shoga, Kathleen; Allan, Ben

    2015-11-02

    Sandia's Lightweight Distributed Metric Service (LDMS) is a data collection and transport system used at Livermore Computing to gather performance data across the center. While Sandia has a set of plugins available, they do not include all the data we need to capture. The EDAC plugin that we have developed enables collection of the Error Detection and Correction (EDAC) counters.

  3. The contour method cutting assumption: error minimization and correction

    SciTech Connect

    Prime, Michael B; Kastengren, Alan L

    2010-01-01

    The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented. The important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to put in a known profile of residual stresses.

  4. Error field and magnetic diagnostic modeling for W7-X

    SciTech Connect

    Lazerson, Sam A.; Gates, David A.; Neilson, George H.; Otte, M.; Bozhenkov, S.; Pedersen, T. S.; Geiger, J.; Lore, J.

    2014-07-01

    The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady state (30-minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign, as bootstrap current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.

  5. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    SciTech Connect

    Voisin, Sophie; Tourassi, Georgia D.; Pinto, Frank; Morin-Ducote, Garnetta; Hudson, Kathleen B.

    2013-10-15

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists’ gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four Radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists’ gaze behavior and image content.

  6. Numerical errors in the presence of steep topography: analysis and alternatives

    SciTech Connect

    Lundquist, K A; Chow, F K; Lundquist, J K

    2010-04-15

    It is well known in computational fluid dynamics that grid quality affects the accuracy of numerical solutions. When assessing grid quality, properties such as aspect ratio, orthogonality of coordinate surfaces, and cell volume are considered. Mesoscale atmospheric models generally use terrain-following coordinates with large aspect ratios near the surface. As high resolution numerical simulations are increasingly used to study topographically forced flows, a high degree of non-orthogonality is introduced, especially in the vicinity of steep terrain slopes. Numerical errors associated with the use of terrain-following coordinates can adversely affect the accuracy of the solution in steep terrain. Inaccuracies from the coordinate transformation are present in each spatially discretized term of the Navier-Stokes equations, as well as in the conservation equations for scalars. In particular, errors in the computation of horizontal pressure gradients, diffusion, and horizontal advection terms have been noted in the presence of sloping coordinate surfaces and steep topography. In this work we study the effects of these spatial discretization errors on the flow solution for three canonical cases: scalar advection over a mountain, an atmosphere at rest over a hill, and forced advection over a hill. This study is completed using the Weather Research and Forecasting (WRF) model. Simulations with terrain-following coordinates are compared to those using a flat coordinate, where terrain is represented with the immersed boundary method. The immersed boundary method is used as a tool which allows us to eliminate the terrain-following coordinate transformation, and quantify numerical errors through a direct comparison of the two solutions. Additionally, the effects of related issues such as the steepness of terrain slope and grid aspect ratio are studied in an effort to gain an understanding of numerical domains where terrain-following coordinates can successfully be used and where alternative approaches are needed.

  7. Errors in determination of soil water content using time-domain reflectometry caused by soil compaction around wave guides

    SciTech Connect

    Ghezzehei, T.A.

    2008-05-29

    Application of time domain reflectometry (TDR) in soil hydrology often involves the conversion of TDR-measured dielectric permittivity to water content using universal calibration equations (empirical or physically based). Deviations of soil-specific calibrations from the universal calibrations have been noted and are usually attributed to peculiar composition of soil constituents, such as high content of clay and/or organic matter. Although it is recognized that soil disturbance by TDR waveguides may introduce measurement errors, to our knowledge there has not been any quantification of this effect. In this paper, we introduce a method that estimates this error by combining two models: one that describes soil compaction around cylindrical objects and another that translates change in bulk density to evolution of soil water retention characteristics. Our analysis indicates that the compaction pattern depends on the mechanical properties of the soil at the time of installation. The relative error in water content measurement depends on the compaction pattern as well as the water content and water retention properties of the soil. Illustrative calculations based on measured soil mechanical and hydrologic properties from the literature indicate that the measurement errors of using a standard three-prong TDR waveguide could be up to 10%. We also show that the error scales linearly with the ratio of rod radius to the inter-rod spacing.

  8. Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report

    Reports and Publications

    2016-01-01

    This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depend on the type and characteristics of the survey.

  9. MPI Runtime Error Detection with MUST: Advances in Deadlock Detection

    DOE PAGES [OSTI]

    Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; de Supinski, Bronis R.; Müller, Matthias S.

    2013-01-01

    The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require 𝒪(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
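
MUST's deadlock detection is graph-based: blocked MPI operations form a wait-for graph, and a deadlock shows up as a cycle (the tool's actual model is richer, handling AND/OR wait semantics for constructs such as wildcard receives). As a rough sketch of the core idea only, the following pure-Python cycle search over a hypothetical rank-level wait-for graph is my own illustration, not MUST code:

```python
from collections import defaultdict

def find_deadlock(waits):
    """Detect a cycle in a wait-for graph via depth-first search.

    waits: dict mapping each process rank to the ranks it is
    currently blocked on (e.g. the source of a pending receive)."""
    graph = defaultdict(list, {p: list(qs) for p, qs in waits.items()})
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    parent = {}

    def dfs(u):
        color[u] = GRAY
        for v in graph[u]:
            if color[v] == GRAY:              # back edge -> cycle found
                cycle = [u]                    # walk parents back to v
                while cycle[-1] != v:
                    cycle.append(parent[cycle[-1]])
                return list(reversed(cycle))
            if color[v] == WHITE:
                parent[v] = u
                found = dfs(v)
                if found:
                    return found
        color[u] = BLACK
        return None

    for node in list(graph):
        if color[node] == WHITE:
            found = dfs(node)
            if found:
                return found
    return None

# Ranks 0 -> 1 -> 2 -> 0 are mutually blocked; rank 3 waits on 1.
print(find_deadlock({0: [1], 1: [2], 2: [0], 3: [1]}))  # -> [0, 1, 2]
```

A real tool must also track non-blocking operations and communicator context, which is where the 𝒪(p)-per-operation costs the abstract mentions come from.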

  10. Method and system for reducing errors in vehicle weighing systems

    DOEpatents

    Hively, Lee M.; Abercrombie, Robert K.

    2010-08-24

    A method and system (10, 23) for determining vehicle weight to a precision of <0.1% uses a plurality of weight sensing elements (23) and a computer (10) that reads in weighing data for a vehicle (25) and produces a dataset representing the total weight of the vehicle via programming (40-53) executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing; (b) determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation from the weighing data; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% of the vehicle weight.
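
The patent's exact mode-parameter procedure is not reproduced in the abstract, but the iterate-until-converged idea can be sketched. In the toy example below, `estimate_weight`, the synthetic weighing signal, and the FFT-plus-least-squares mode removal are my own illustrative choices, not the patented algorithm:

```python
import numpy as np

def estimate_weight(signal, dt, tol=1e-3, max_modes=5):
    """Iteratively strip the dominant oscillatory mode from a weighing
    signal until the residual ripple is below `tol` (relative to the
    mean), then return the mean as the weight estimate."""
    x = signal.astype(float).copy()
    t = np.arange(len(x)) * dt
    for _ in range(max_modes):
        ripple = x - x.mean()
        if np.std(ripple) / x.mean() < tol:
            break
        # dominant oscillation frequency from the FFT of the ripple
        spec = np.abs(np.fft.rfft(ripple))
        freqs = np.fft.rfftfreq(len(x), dt)
        f = freqs[spec[1:].argmax() + 1]          # skip the DC bin
        # least-squares fit of a sinusoid at that frequency
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, ripple, rcond=None)
        x = x - A @ coef                           # remove the mode
    return x.mean()

# Synthetic example: 40 t vehicle, two oscillatory modes plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 0.001)
sig = (40000 + 300 * np.sin(2 * np.pi * 3.0 * t)
       + 120 * np.sin(2 * np.pi * 11.0 * t + 0.4)
       + rng.normal(0, 5, t.size))
w = estimate_weight(sig, 0.001)
print(abs(w - 40000) / 40000)   # relative error, well below 0.1%
```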

  11. Reducing Soft-error Vulnerability of Caches using Data Compression

    SciTech Connect

    Vetter, Jeffrey S

    2016-01-01

    With ongoing chip miniaturization and voltage scaling, particle strike-induced soft errors present an increasingly severe threat to the reliability of on-chip caches. In this paper, we present a technique to reduce the vulnerability of caches to soft errors. Our technique uses data compression to reduce the number of vulnerable data bits in the cache and performs selective duplication of more critical data bits to provide them extra protection. Microarchitectural simulations show that our technique is effective in reducing the architectural vulnerability factor (AVF) of the cache and outperforms a previous technique. For single- and dual-core system configurations, the average reduction in AVF is 5.59X and 8.44X, respectively. Also, the implementation and performance overheads of our technique are minimal, and it is useful for a broad range of workloads.
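
The paper's specific compression scheme is not detailed in the abstract; as a purely illustrative stand-in, the sketch below uses zlib to show how compressing a low-entropy cache line shrinks the number of vulnerable bits, leaving room to duplicate a critical subset. The helper names and the 64-byte example line are assumptions of this sketch:

```python
import zlib

def vulnerable_bits(line: bytes) -> int:
    """Bits that must survive if the line is stored uncompressed."""
    return len(line) * 8

def compressed_bits(line: bytes, critical: bytes = b"") -> int:
    """Bits occupied after compression, with the critical subset
    stored twice (selective duplication) for extra protection."""
    body = zlib.compress(line)
    return (len(body) + 2 * len(critical)) * 8

# A 64-byte cache line with low entropy (common in real workloads):
# compression frees capacity even after duplicating the first word.
line = bytes([0, 0, 0, 7] * 16)
print(vulnerable_bits(line), compressed_bits(line, critical=line[:4]))
```

Hardware proposals typically use simple pattern-based compressors rather than zlib; the point here is only the accounting of vulnerable versus duplicated bits.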

  12. Comparison of Wind Power and Load Forecasting Error Distributions: Preprint

    SciTech Connect

    Hodge, B. M.; Florita, A.; Orwig, K.; Lew, D.; Milligan, M.

    2012-07-01

    The introduction of large amounts of variable and uncertain power sources, such as wind power, into the electricity grid presents a number of challenges for system operations. One issue involves the uncertainty associated with scheduling power that wind will supply in future timeframes. However, this is not an entirely new challenge; load is also variable and uncertain, and is strongly influenced by weather patterns. In this work we make a comparison between the day-ahead forecasting errors encountered in wind power forecasting and load forecasting. The study examines the distribution of errors from operational forecasting systems in two different Independent System Operator (ISO) regions for both wind power and load forecasts at the day-ahead timeframe. The day-ahead timescale is critical in power system operations because it serves the unit commitment function for slow-starting conventional generators.
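
A comparison of this kind starts from the empirical error series, forecast minus actual. The following sketch computes basic distribution statistics (mean, standard deviation, excess kurtosis) on synthetic data; the `error_stats` helper and the simulated series are illustrative, not the study's operational ISO data:

```python
import random
import statistics

def error_stats(forecast, actual):
    """Summary statistics of a day-ahead forecast error series."""
    errors = [f - a for f, a in zip(forecast, actual)]
    n = len(errors)
    mean = statistics.fmean(errors)
    sd = statistics.pstdev(errors)
    # excess kurtosis: > 0 indicates fatter-than-Gaussian tails
    kurt = sum(((e - mean) / sd) ** 4 for e in errors) / n - 3
    return {"mean": mean, "std": sd, "excess_kurtosis": kurt}

# Simulated hourly load (MW) with Gaussian day-ahead forecast errors.
random.seed(1)
actual = [random.uniform(100, 900) for _ in range(5000)]
forecast = [a + random.gauss(0, 40) for a in actual]
print(error_stats(forecast, actual))
```

Comparing such statistics between the wind and load series is what reveals whether one error distribution is wider or heavier-tailed than the other.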

  13. Some aspects of statistical modeling of human-error probability

    SciTech Connect

    Prairie, R. R.

    1982-01-01

    Human reliability analyses (HRA) are often performed as part of risk assessment and reliability projects. Recent events in nuclear power have shown the potential importance of the human element. There are several ongoing efforts in the US and elsewhere with the purpose of modeling human error such that the human contribution can be incorporated into an overall risk assessment associated with one or more aspects of nuclear power. The effort described here uses the HRA event tree to quantify and model the human contribution to risk. As an example, risk analyses are being prepared on several nuclear power plants as part of the Interim Reliability Assessment Program (IREP). In this process the risk analyst selects the elements of his fault tree to which human error could contribute. He then asks the human factors (HF) analyst to perform an HRA on that element.

  14. Posters The Impacts of Data Error and Model Resolution

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Posters The Impacts of Data Error and Model Resolution on the Result of Variational Data Assimilation S. Yang and Q. Xu Cooperative Institute of Mesoscale Meteorological Studies University of Oklahoma Norman, Oklahoma Introduction The representativeness and accuracy of the measurements or estimates of the lateral boundary fluxes and surface fluxes are crucial for the single-column model and budget studies of climatic variables over Atmospheric Radiation Measurement (ARM) sites. Since the

  15. L-TERRA (LIDAR Turbulence Error Reduction Algorithm) - Energy Innovation

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    National Renewable Energy Laboratory. Technology Marketing Summary: Wind resource assessment and turbine power performance testing are typically conducted through the use of instruments on meteorological towers. Recently, LIDAR (light detection and ranging) instruments have started to replace the use of meteorological towers for these

  16. Runtime Detection of C-Style Errors in UPC Code

    SciTech Connect

    Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D

    2011-09-29

    Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.

  17. SU-E-T-51: Bayesian Network Models for Radiotherapy Error Detection

    SciTech Connect

    Kalet, A; Phillips, M; Gennari, J

    2014-06-01

    Purpose: To develop a probabilistic model of radiotherapy plans using Bayesian networks that will detect potential errors in radiation delivery. Methods: Semi-structured interviews with medical physicists and other domain experts were employed to generate a set of layered nodes and arcs forming a Bayesian Network (BN) which encapsulates relevant radiotherapy concepts and their associated interdependencies. Concepts in the final network were limited to those whose parameters are represented in the institutional database at a level significant enough to develop mathematical distributions. The concept-relation knowledge base was constructed using the Web Ontology Language (OWL) and translated into Hugin Expert Bayes Network files via the RHugin package in the R statistical programming language. A subset of de-identified data derived from a Mosaiq relational database representing 1937 unique prescription cases was processed and pre-screened for errors and then used by the Hugin implementation of the Expectation-Maximization (EM) algorithm to learn all parameter distributions. Individual networks were generated for each of several commonly treated anatomic regions identified by ICD-9 neoplasm categories, including lung, brain, lymphoma, and female breast. Results: The resulting Bayesian networks represent a large part of the probabilistic knowledge inherent in treatment planning. By populating the networks entirely with data captured from a clinical oncology information management system over the course of several years of normal practice, we were able to create accurate probability tables with no additional time spent by experts or clinicians. These probabilistic descriptions of treatment planning allow one to check whether a treatment plan is within the normal scope of practice, given some initial set of clinical evidence, and thereby detect potential outliers to be flagged for further investigation.
Conclusion: The networks developed here support the
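
The outlier-flagging idea can be illustrated with a deliberately tiny network. The two-node model below (treatment site → prescribed dose) uses invented probability tables, not clinical data, and stands in for the much larger EM-learned networks described above:

```python
# Hypothetical conditional probability tables (illustration only).
p_dose_given_site = {
    "lung":   {"60Gy": 0.85, "45Gy": 0.10, "20Gy": 0.05},
    "breast": {"50Gy": 0.80, "45Gy": 0.15, "60Gy": 0.05},
}

def plan_is_outlier(site, dose, threshold=0.08):
    """Flag a plan whose prescribed dose is improbable given the
    treatment site; unseen doses get probability zero."""
    return p_dose_given_site[site].get(dose, 0.0) < threshold

print(plan_is_outlier("lung", "60Gy"))   # False: typical plan
print(plan_is_outlier("lung", "20Gy"))   # True: flag for review
```

The real system conditions on many more variables (diagnosis, fractionation, modality), but the check is the same: compare the plan's probability under the learned network against a normal-practice threshold.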

  18. Unconventional Rotor Power Response to Yaw Error Variations

    DOE PAGES [OSTI]

    Schreck, S. J.; Schepers, J. G.

    2014-12-16

    Continued inquiry into rotor and blade aerodynamics remains crucial for achieving accurate, reliable prediction of wind turbine power performance under yawed conditions. To exploit key advantages conferred by controlled inflow conditions, we used EU-JOULE DATA Project and UAE Phase VI experimental data to characterize rotor power production under yawed conditions. Anomalies in rotor power variation with yaw error were observed, and the underlying fluid dynamic interactions were isolated. Unlike currently recognized influences caused by angled inflow and skewed wake, which may be considered potential flow interactions, these anomalies were linked to pronounced viscous and unsteady effects.

  19. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    SciTech Connect

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil's characteristics. Most often, spatial variability in the soil's fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil's fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted
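
The paper's central comparison, lab measurement error variance versus spatial variance of the fertility measure, can be sketched in a few lines. The readings and the assumed lab repeatability below are hypothetical numbers for illustration:

```python
import statistics

def noise_dominated(grid_values, lab_error_sd):
    """True when the lab measurement error variance exceeds the
    estimated spatial variance of the fertility measure, i.e. when
    interpolation between samples is dominated by analytical noise."""
    spatial_var = statistics.pvariance(grid_values)
    return lab_error_sd ** 2 > spatial_var

# Hypothetical soil-P readings (ppm) at grid points, and an assumed
# lab repeatability of +/- 6 ppm (1 sd).
readings = [22, 25, 24, 27, 23, 26, 25, 24]
print(noise_dominated(readings, lab_error_sd=6.0))  # True: noise swamps signal
```

When this test is true, differences the decision tool sees between grid points are mostly measurement artifacts, which is how noise propagates into divergent fertilizer recipes.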

  20. Error-field penetration in reversed magnetic shear configurations

    SciTech Connect

    Wang, H. H.; Wang, Z. X.; Wang, X. Q.; Wang, X. G.

    2013-06-15

    Error-field penetration in reversed magnetic shear (RMS) configurations is numerically investigated by using a two-dimensional resistive magnetohydrodynamic model in slab geometry. To explore different dynamic processes in locked modes, three equilibrium states are adopted. Stable, marginal, and unstable current profiles for double tearing modes are designed by varying the current intensity between two resonant surfaces separated by a certain distance. Further, the dynamic characteristics of locked modes in the three RMS states are identified, and the relevant physics mechanisms are elucidated. The scaling behavior of critical perturbation value with initial plasma velocity is numerically obtained, which obeys previously established relevant analytical theory in the viscoresistive regime.

  1. Human factors evaluation of remote afterloading brachytherapy: Human error and critical tasks in remote afterloading brachytherapy and approaches for improved system performance. Volume 1

    SciTech Connect

    Callan, J.R.; Kelly, R.T.; Quinn, M.L.

    1995-05-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety-significant problems related to human error.

  2. Feasibility of neuro-morphic computing to emulate error-conflict based decision making.

    SciTech Connect

    Branch, Darren W.

    2009-09-01

    A key aspect of decision making is determining when errors or conflicts exist in information and knowing whether to continue or terminate an action. Understanding error-conflict processing is crucial in order to emulate higher brain functions in hardware and software systems. Specific brain regions, most notably the anterior cingulate cortex (ACC), are known to respond to the presence of conflicts in information by assigning a value to an action. Essentially, this conflict signal triggers strategic adjustments in cognitive control, which serve to prevent further conflict. The most probable mechanism is that the ACC reports and discriminates different types of feedback, both positive and negative, that relate to different adaptations. Unique cells called spindle neurons that are primarily found in the ACC (layer Vb) are known to be responsible for cognitive dissonance (disambiguation between alternatives). Thus, the ACC, through a specific set of cells, likely plays a central role in the ability of humans to make difficult decisions and solve challenging problems in the midst of conflicting information. In addition to dealing with cognitive dissonance, decision making in high consequence scenarios also relies on the integration of multiple sets of information (sensory, reward, emotion, etc.). Thus, a second area of interest for this proposal lies in the corticostriatal networks that serve as an integration region for multiple cognitive inputs. In order to engineer neurological decision making processes in silicon devices, we will determine the key cells, inputs, and outputs of conflict/error detection in the ACC region. The second goal is to understand in vitro models of corticostriatal networks and the impact of physical deficits on decision making, specifically in stressful scenarios with conflicting streams of data from multiple inputs.
We will elucidate the mechanisms of cognitive data integration in order to implement a future corticostriatal-like network in silicon

  3. THE MASS-RICHNESS RELATION OF MaxBCG CLUSTERS FROM QUASAR LENSING...

    Office of Scientific and Technical Information (OSTI)

    with other methods is not due to a shear-related systematic measurement error. We study the dependence of the ...

  4. Inherent Errors Associated with Raman Based Thermal Conductivity...

    Office of Scientific and Technical Information (OSTI)

    Resource Relation: Conference: AEE Student Symposium held August 31, 2012 in Albuquerque, NM.; Related Information: Proposed for presentation at the AEE Student Symposium held ...

  5. FlipSphere: A Software-based DRAM Error Detection and Correction...

    Office of Scientific and Technical Information (OSTI)

    FlipSphere: A Software-based DRAM Error Detection and Correction Library for HPC.

  6. V-194: Citrix XenServer Memory Management Error Lets Local Administrat...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    V-194: Citrix XenServer Memory Management Error Lets Local Administrative Users on the Guest Gain Access on the Host

  7. Resolved: "error while loading shared libraries: libalpslli.so.0" with

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    December 13, 2013 by Helen He. Symptom: Dynamic executables built with compiler wrappers running directly on the external login nodes are getting the following error message: % ftn -dynamic -o testf testf.f % ./testf ./testf: error while loading shared

  8. T-719:Apache mod_proxy_ajp HTTP Processing Error Lets Remote Users Deny Service

    Energy.gov [DOE]

    A remote user can cause the backend server to remain in an error state until the retry timeout expires.

  9. Coordinated joint motion control system with position error correction

    DOEpatents

    Danko, George L.

    2016-04-05

    Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  10. Coordinated joint motion control system with position error correction

    DOEpatents

    Danko, George

    2011-11-22

    Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  11. Error field penetration and locking to the backward propagating wave

    DOE PAGES [OSTI]

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  12. Error field penetration and locking to the backward propagating wave

    SciTech Connect

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  13. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    DOE PAGES [OSTI]

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  14. A Posteriori Error Analysis and Adaptive Construction of Surrogate...

    Office of Scientific and Technical Information (OSTI)

    Resource Relation: Conference: FEMTEC Conference held May 19-24, 2013 in Las Vegas, NV.; Related Information: Proposed for presentation at the FEMTEC Conference held May 19-24, ...

  15. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    DOE PAGES [OSTI]

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; Thompson, Aidan P.; Trott, Christian

    2014-12-09

    Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.

  16. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    SciTech Connect

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; Thompson, Aidan P.; Trott, Christian

    2014-12-09

    Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.

  17. Spectral characteristics of background error covariance and multiscale data assimilation

    DOE PAGES [OSTI]

    Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; Vogelmann, Andrew M.

    2016-05-17

    The spatial resolutions of numerical atmospheric and oceanic circulation models have steadily increased over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and is larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.

  18. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids an expensive computational cost in inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to a recently proposed method of Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry–Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear SDE systems to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than the Berry–Sauer method on the L-96 example.
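
The underlying moment-matching idea for the one-lag case can be sketched for a scalar observation: the innovation variance equals HPHᵀ + R, so subtracting the ensemble-predicted part leaves an estimate of the observation noise variance R. The code below is a generic illustration of that identity, not the paper's full multi-lag recursive algorithm:

```python
import random
import statistics

def estimate_obs_noise_var(innovations, forecast_vars):
    """Lag-0 moment estimate for a scalar observation:
    Var(d) = H P H' + R  =>  R ~ mean(d^2) - mean(forecast variance)."""
    return (statistics.fmean(d * d for d in innovations)
            - statistics.fmean(forecast_vars))

# Synthetic check with known truth: R = 4.0, forecast variance P = 1.5.
random.seed(7)
true_R, P = 4.0, 1.5
innovations, fvars = [], []
for _ in range(20000):
    truth_err = random.gauss(0, P ** 0.5)      # forecast error, var P
    obs_err = random.gauss(0, true_R ** 0.5)   # observation noise, var R
    innovations.append(obs_err - truth_err)    # innovation d = y - H x_f
    fvars.append(P)
print(estimate_obs_noise_var(innovations, fvars))  # close to 4.0
```

Using products of innovations at higher lags, as the paper does, additionally constrains the system noise, at the cost of the matrix inversions the modified recursion is designed to avoid.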

  19. In vivo enzyme activity in inborn errors of metabolism

    SciTech Connect

    Thompson, G.N.; Walter, J.H.; Leonard, J.V.; Halliday, D.

    1990-08-01

    Low-dose continuous infusions of (2H5)phenylalanine, (1-13C)propionate, and (1-13C)leucine were used to quantitate phenylalanine hydroxylation in phenylketonuria (PKU, four subjects), propionate oxidation in methylmalonic acidaemia (MMA, four subjects) and propionic acidaemia (PA, four subjects), and leucine oxidation in maple syrup urine disease (MSUD, four subjects). In vivo enzyme activity in PKU, MMA, and PA subjects was similar to or in excess of that in adult controls (range of phenylalanine hydroxylation in PKU, 3.7 to 6.5 μmol/kg/h, control 3.2 to 7.9, n = 7; propionate oxidation in MMA, 15.2 to 64.8 μmol/kg/h, and in PA, 11.1 to 36.0, control 5.1 to 19.0, n = 5). By contrast, in vivo leucine oxidation was undetectable in three of the four MSUD subjects (less than 0.5 μmol/kg/h) and negligible in the remaining subject (2 μmol/kg/h, control 10.4 to 15.7, n = 6). These results suggest that significant substrate removal can be achieved in some inborn metabolic errors either through stimulation of residual enzyme activity in defective enzyme systems or by activation of alternate metabolic pathways. Both possibilities almost certainly depend on gross elevation of substrate concentrations. By contrast, only minimal in vivo oxidation of leucine appears possible in MSUD.

  20. SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error

    SciTech Connect

    Ju, S; Hong, C; Kim, M; Chung, K; Kim, J; Han, Y; Ahn, S; Chung, S; Shin, E; Shin, J; Kim, H; Kim, D; Choi, D

    2014-06-01

    Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image of the RM was taken using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smallest at a gantry angle of 0° but increased in the positive direction with gantry angle increments in the clockwise direction, reaching a maximum at 90° and then gradually decreasing until 180°. In the counter-clockwise rotation of the gantry, the same pattern was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful for evaluating the MLC leaf position error at various gantry angles without the EPID position error. Gantry angle dependency should be considered during MLC leaf position error analysis.
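The per-picket error computation described above can be sketched as follows; the measured picket centers and the 1 mm criterion below are hypothetical illustrations, not data from the study.

```python
# Hypothetical measured picket centers (mm) for one leaf pair versus the
# nominal picket positions used in the abstract (-10, -5, 0, 5, 10 cm).
nominal_mm = [-100.0, -50.0, 0.0, 50.0, 100.0]
measured_mm = [-100.3, -49.8, 0.1, 50.4, 100.2]

errors = [m - n for m, n in zip(measured_mm, nominal_mm)]
mean_error = sum(errors) / len(errors)
within_tolerance = all(abs(e) < 1.0 for e in errors)  # the <1 mm criterion

print([round(e, 2) for e in errors], round(mean_error, 2), within_tolerance)
```

Repeating this at each gantry angle and plotting the mean error against angle exposes the dependency the study reports.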

  1. Handling Model Error in the Calibration of Physical Models. ...

    Office of Scientific and Technical Information (OSTI)

    Type: Conference Resource Relation: Conference: Proposed for presentation at the 15th International Conference on Numerical Combustion held April 19-22, 2015 in Avignon, France

  2. SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors

    SciTech Connect

    Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I

    2014-06-01

    Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6 MV using a Millennium 120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per image pair. In addition, 560 positive tests (with error) were performed with randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) if the percentage of pixels with γ<κ was below (or at least) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time for some κ+τ combination. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors of smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though
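The classification rule and the τ sweep behind the ROC curve can be sketched with synthetic gamma maps. Everything below is a stand-in: the gamma values are random numbers, not dosimetry data, and the sample counts merely mirror the 56/560 split in the abstract.

```python
import random

def pass_rate(gamma_map, kappa=1.0):
    """Percent of pixels with gamma < kappa (the conventional pass criterion)."""
    n = sum(1 for g in gamma_map if g < kappa)
    return 100.0 * n / len(gamma_map)

def classify(gamma_map, kappa=1.0, tau=90.0):
    """An image is flagged as errored when its pass rate falls below tau."""
    return pass_rate(gamma_map, kappa) < tau

# Synthetic stand-ins for the 56 negative and 560 positive tests.
random.seed(0)
negatives = [[random.gauss(0.4, 0.2) for _ in range(1000)] for _ in range(56)]
positives = [[random.gauss(0.4, 0.2) for _ in range(800)]
             + [random.gauss(1.5, 0.3) for _ in range(200)] for _ in range(560)]

# Sweeping tau traces out the ROC curve point by point.
for tau in (80.0, 90.0, 95.0):
    tpr = sum(classify(g, tau=tau) for g in positives) / len(positives)
    fpr = sum(classify(g, tau=tau) for g in negatives) / len(negatives)
    print(f"tau={tau:5.1f}%  TPR={tpr:.2f}  FPR={fpr:.2f}")
```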

  3. Spectral Characteristics of Background Error Covariance and Multiscale Data Assimilation: Background Error Covariance and Multiscale Data Assimilation

    DOE PAGES [OSTI]

    Li, Zhijin; Cheng, Xiaoping; Gustafson, William I.; Vogelmann, Andrew M.

    2016-05-17

    The steady increase of the spatial resolutions of numerical atmospheric and oceanic circulation models has occurred over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and is larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
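One common way to quantify a correlation length scale like those quoted above is the e-folding distance of the correlation function. This sketch uses a hypothetical Gaussian-shaped correlation with a 75 km scale; the abstract does not specify the functional form or the definition it used.

```python
import numpy as np

def efolding_length(separations_km, correlations):
    """Distance at which the correlation first drops to 1/e, found by
    linear interpolation (one common length-scale definition)."""
    target = np.exp(-1.0)
    # np.interp needs increasing x-coordinates, so reverse both arrays.
    return float(np.interp(target, correlations[::-1], separations_km[::-1]))

# Hypothetical Gaussian-shaped correlation with a 75 km length scale.
r = np.linspace(0.0, 300.0, 301)
corr = np.exp(-(r / 75.0) ** 2)
print(round(efolding_length(r, corr)))  # 75
```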

  4. The impact of response measurement error on the analysis of designed experiments

    SciTech Connect

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2015-12-21

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
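Two effects the study examines, variance inflation from additive response measurement error and its mitigation by repeat measurements, can be sketched numerically. The variances below are hypothetical; the point is only that Var(y_obs) ≈ σ² + σ_me², and that averaging k repeats shrinks the second term by k.

```python
import random
from statistics import pvariance, mean

random.seed(1)
sigma_process, sigma_me = 1.0, 2.0   # illustrative standard deviations
n, repeats = 20000, 4

true_resp = [random.gauss(0.0, sigma_process) for _ in range(n)]
# Single noisy measurement per response: variance inflates to 1 + 4.
single = [y + random.gauss(0.0, sigma_me) for y in true_resp]
# Averaging 4 repeat measurements: measurement variance drops to 4/4.
averaged = [y + mean(random.gauss(0.0, sigma_me) for _ in range(repeats))
            for y in true_resp]

print(round(pvariance(single), 1))    # close to 1 + 4   = 5
print(round(pvariance(averaged), 1))  # close to 1 + 4/4 = 2
```

The inflated variance is what degrades the power of the standard t-test analysis when the measurement error is ignored.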

  5. The impact of response measurement error on the analysis of designed experiments

    DOE PAGES [OSTI]

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2015-12-21

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Nonlocal reactive transport with physical and chemical heterogeneity: Localization errors

    SciTech Connect

    Cushman, J.H.; Hu, B.X.; Deng, F.W.

    1995-09-01

    The origin of nonlocality in "macroscale" models for subsurface chemical transport is illustrated. It is argued that media that are either nonperiodic (e.g., media with evolving heterogeneity) or periodic viewed on a scale wherein a unit cell is discernible must display some nonlocality in the mean. A metaphysical argument suggests that owing to the scarcity of information on natural scales of heterogeneity and on scales of observation associated with an instrument window, constitutive theories for the mean concentration should at the outset of any modeling effort always be considered nonlocal. The intuitive appeal to nonlocality is reinforced with an analytical derivation of the constitutive theory for a conservative tracer without appeal to any mathematical approximations. Comparisons are made between the fully nonlocal (FNL), nonlocal in time (NLT), and fully localized (FL) theories. For conservative transport, there is little difference between the first-order FL and FNL models for spatial moments up to and including the third. However, for conservative transport the first-order NLT model differs significantly from the FNL model in the third spatial moments. For reactive transport, all spatial moments differ between the FNL and FL models. The second transverse-horizontal and third longitudinal-horizontal moments for the NLT model differ from the FNL model. These results suggest that localized first-order transport models for conservative tracers are reasonable if only lower-order moments are desired. However, when the chemical reacts with its environment, the localization approximation can lead to significant error in all moments, and a FNL model will in general be required for accurate simulation. 18 refs., 9 figs., 1 tab.

  7. Related Publications

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Financial Information Financial Public Processes Asset Management Cost Verification Process Rate Cases BP-18 Rate Case Related Publications Meetings...

  8. Labor Relations

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Relations also provides technical assistance to, and coordination of, the Partnership Council and other labor-management forums. Collective Bargaining Agreements BPA AFGE...

  9. Investor Relations

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    and related services at cost. BPA Overview for Investors - as of September 24, 2015 Credit Ratings Latest Rating Agency Reports Full Reports: Fitch Full Report, March 2014...

  10. Method and apparatus for detecting timing errors in a system oscillator

    DOEpatents

    Gliebe, Ronald J.; Kramer, William R.

    1993-01-01

    A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
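The comparison step of the patented method can be sketched as below. The signals, the sample-by-sample comparison, and the tolerance parameter are all illustrative; the patent describes a hardware circuit, not software.

```python
def timing_error(oscillator, delayed, tolerance=0):
    """Flag a timing error when the oscillator signal differs from the
    delayed copy of the generated signal (the comparison in the patent)."""
    return any(abs(a - b) > tolerance for a, b in zip(oscillator, delayed))

good = [0, 1, 0, 1, 0, 1]
bad  = [0, 1, 1, 1, 0, 1]   # one sample out of phase

print(timing_error(good, good))  # False: signals identical, no error
print(timing_error(good, bad))   # True: mismatch would light the LED
```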

  11. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W. (Albuquerque, NM); Heard, Freddie E. (Albuquerque, NM); Cordaro, J. Thomas (Albuquerque, NM)

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  12. V-172: ISC BIND RUNTIME_CHECK Error Lets Remote Users Deny Service Against Recursive Resolvers

    Energy.gov [DOE]

    A defect exists which allows an attacker to crash a BIND 9 recursive resolver with a RUNTIME_CHECK error in resolver.c

  13. Correcting incompatible DN values and geometric errors in nighttime...

    Office of Scientific and Technical Information (OSTI)

    DOE Contract Number: AC05-76RL01830 Resource Type: Journal Article Resource Relation: Journal Name: IEEE Transactions on Geoscience and Remote Sensing, 53(4):2039-2049 Research ...

  14. Relational Blackboard

    Energy Science and Technology Software Center

    2012-09-11

    The Relational Blackboard (RBB) is an extension of the H2 Relational Database to support discrete events and timeseries data. The original motivation for RBB is as a knowledge base for cognitive systems and simulations. It is useful wherever there is a need for persistent storage of timeseries (i.e., samples of a continuous process generating numerical data) and semantic labels for the data. The RBB is an extension to the H2 Relational Database, which is open-source. RBB is a set of stored procedures for H2 allowing data to be labeled, queried, and resampled.

  15. Correcting incompatible DN values and geometric errors in nighttime lights time series images

    SciTech Connect

    Zhao, Naizhuo; Zhou, Yuyu; Samson, Eric

    2015-04-01

    The Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool to monitor urbanization and assess socioeconomic activities at large scales. However, the existence of incompatible digital number (DN) values and geometric errors severely limits application of nighttime light image data to multi-year quantitative research. In this study we extend and improve previous studies on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady increase adjustment, and population data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly and find that sum light (summed DN value of pixels in a nighttime light image) maintains apparent increasing trends with relatively large GDP growth rates but does not increase or decrease with relatively small GDP growth rates. As nighttime light is a sensitive indicator of economic activity, the temporally consistent trends between sum light and GDP growth rate imply that the brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, through analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced apparent nighttime lights development in 1992-1997 and 2001-2008, respectively, while the US suffered nighttime lights decay in large areas after 2001.
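A common inter-calibration approach in the nighttime-lights literature fits a quadratic regression between a target-year image and a reference image over stable pixels, then applies the fitted curve to the target DN values. This sketch uses hypothetical DN values and does not reproduce the paper's specific four-step procedure.

```python
import numpy as np

# Hypothetical DN values of stable pixels in a target-year image and in a
# reference image; a quadratic fit is a common inter-calibration model.
dn_target = np.array([5.0, 14.0, 22.0, 35.0, 48.0, 60.0])
dn_reference = np.array([8.0, 18.0, 27.0, 40.0, 51.0, 62.0])

# Fit DN_ref ~ c0 + c1*DN + c2*DN^2 and adjust the target image with it.
c2, c1, c0 = np.polyfit(dn_target, dn_reference, 2)
calibrated = c0 + c1 * dn_target + c2 * dn_target ** 2

print(np.round(calibrated - dn_reference, 1))  # residuals near zero
```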

  16. A statistical analysis of systematic errors in temperature and ram velocity estimates from satellite-borne retarding potential analyzers

    SciTech Connect

    Klenzing, J. H.; Earle, G. D.; Heelis, R. A.; Coley, W. R. [William B. Hanson Center for Space Sciences, University of Texas at Dallas, 800 W. Campbell Rd. WT15, Richardson, Texas 75080 (United States)

    2009-05-15

    The use of biased grids as energy filters for charged particles is common in satellite-borne instruments such as a planar retarding potential analyzer (RPA). Planar RPAs are currently flown on missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellites Program to obtain estimates of geophysical parameters including ion velocity and temperature. It has been shown previously that the use of biased grids in such instruments creates a nonuniform potential in the grid plane, which leads to inherent errors in the inferred parameters. A simulation of ion interactions with various configurations of biased grids has been developed using a commercial finite-element analysis software package. Using a statistical approach, the simulation calculates collected flux from Maxwellian ion distributions with three-dimensional drift relative to the instrument. Perturbations in the performance of flight instrumentation relative to expectations from the idealized RPA flux equation are discussed. Both single grid and dual-grid systems are modeled to investigate design considerations. Relative errors in the inferred parameters for each geometry are characterized as functions of ion temperature and drift velocity.

  17. Potential Hydraulic Modelling Errors Associated with Rheological Data Extrapolation in Laminar Flow

    SciTech Connect

    Shadday, Martin A., Jr.

    1997-03-20

    The potential errors associated with the modelling of flows of non-Newtonian slurries through pipes, due to inadequate rheological models and extrapolation outside of the ranges of data bases, are demonstrated. The behaviors of both dilatant and pseudoplastic fluids with yield stresses, and the errors associated with treating them as Bingham plastics, are investigated.
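The extrapolation hazard described above can be sketched by comparing a Bingham-plastic model with a Herschel-Bulkley model (a standard pseudoplastic-with-yield-stress form). The parameters below are purely illustrative, chosen so the two models agree near 50 s⁻¹ and diverge when extrapolated, and are not taken from the report.

```python
def bingham(gamma_dot, tau_y=10.0, mu_p=0.05):
    """Bingham plastic: tau = tau_y + mu_p * shear_rate."""
    return tau_y + mu_p * gamma_dot

def herschel_bulkley(gamma_dot, tau_y=10.0, k=0.24, n=0.6):
    """Pseudoplastic with a yield stress: tau = tau_y + k * shear_rate**n."""
    return tau_y + k * gamma_dot ** n

# Both models can match data over the fitted range of shear rates, yet
# their predictions diverge sharply outside that range.
for rate in (10.0, 50.0, 1000.0):
    print(rate, round(bingham(rate), 1), round(herschel_bulkley(rate), 1))
```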

  18. Compilation error with cray-petsc/3.6.1.0

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Compilation error with cray-petsc/3.6.1.0 January 5, 2016 The current default cray-petsc module, cray-petsc/3.6.1.0, does not work with...

  19. The cce/8.3.0 C++ compiler may run into a linking error on Edison

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    The cce/8.3.0 C++ compiler may run into a linking error on Edison July 1, 2014 You may run into the following...

  20. A Case for Soft Error Detection and Correction in Computational Chemistry

    SciTech Connect

    van Dam, Hubertus JJ; Vishnu, Abhinav; De Jong, Wibe A.

    2013-09-10

    High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them means that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution; therefore, they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.

  1. Methods and apparatus using commutative error detection values for fault isolation in multiple node computers

    DOEpatents

    Almasi, Gheorghe [Ardsley, NY; Blumrich, Matthias Augustin [Ridgefield, CT; Chen, Dong [Croton-On-Hudson, NY; Coteus, Paul [Yorktown, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E. [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk I. [Ossining, NY; Singh, Sarabjeet [Mississauga, CA; Steinmacher-Burow, Burkhard D. [Wernau, DE; Takken, Todd [Brewster, NY; Vranas, Pavlos [Bedford Hills, NY

    2008-06-03

    Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values (for example, checksums) to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieves commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
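The key property exploited here is commutativity: the checksum must not depend on the order in which packets reach the network, so reproducible runs can be compared value-for-value. XOR-of-hashes is one commutative choice; the node names and packet contents below are hypothetical.

```python
from functools import reduce

def commutative_checksum(packets):
    """Order-independent XOR checksum: injection order over the network
    does not change the value, so runs can be compared per node.
    (hash() is stable within one process, which is all this sketch needs.)"""
    return reduce(lambda a, b: a ^ b, (hash(p) & 0xFFFFFFFF for p in packets), 0)

run1 = {n: commutative_checksum(pkts) for n, pkts in
        {"node0": ["a", "b", "c"], "node1": ["d", "e"]}.items()}
# Same program re-run; node1 emits a corrupted packet this time.
run2 = {n: commutative_checksum(pkts) for n, pkts in
        {"node0": ["c", "a", "b"], "node1": ["d", "E"]}.items()}

faulty = [n for n in run1 if run1[n] != run2[n]]
print(faulty)  # ['node1']: node0 matches despite the packet reordering
```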

  2. Modern Palliative Radiation Treatment: Do Complexity and Workload Contribute to Medical Errors?

    SciTech Connect

    D'Souza, Neil; Holden, Lori; Robson, Sheila; Mah, Kathy; Di Prospero, Lisa; Wong, C. Shun; Chow, Edward; Spayne, Jacqueline

    2012-09-01

    Purpose: To examine whether treatment workload and complexity associated with palliative radiation therapy contribute to medical errors. Methods and Materials: In the setting of a large academic health sciences center, patient scheduling and record-and-verification systems were used to identify patients starting radiation therapy. All records of radiation treatment courses delivered during a 3-month period were retrieved and divided into radical and palliative intent. 'Same day consultation, planning and treatment' was used as a proxy for workload, and 'previous treatment' and 'multiple sites' as surrogates for complexity. In addition, all planning and treatment discrepancies (errors and 'near-misses') recorded during the same time frame were reviewed and analyzed. Results: There were 365 new patients treated with 485 courses of palliative radiation therapy. Of those patients, 128 (35%) were same-day consultation, simulation, and treatment patients; 166 (45%) patients had previous treatment; and 94 (26%) patients had treatment to multiple sites. Four near-misses and four errors occurred during the audit period, giving an error per course rate of 0.82%. In comparison, there were 10 near-misses and 5 errors associated with 1100 courses of radical treatment during the audit period. This translated into an error rate of 0.45% per course. An association was found between workload and complexity and increased palliative therapy error rates. Conclusions: Increased complexity and workload may have an impact on palliative radiation treatment discrepancies. This information may help guide the necessary recommendations for process improvement for patients who require palliative radiation therapy.
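The per-course rates quoted in the abstract follow directly from the reported counts (4 errors over 485 palliative courses; 5 errors over 1100 radical courses):

```python
# Reproduce the per-course error rates quoted in the abstract.
palliative_errors, palliative_courses = 4, 485
radical_errors, radical_courses = 5, 1100

palliative_rate = 100 * palliative_errors / palliative_courses
radical_rate = 100 * radical_errors / radical_courses

print(f"palliative: {palliative_rate:.2f}% per course")  # 0.82%
print(f"radical:    {radical_rate:.2f}% per course")     # 0.45%
```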

  3. ADEPT, a dynamic next generation sequencing data error-detection program with trimming

    DOE PAGES [OSTI]

    Feng, Shihai; Lo, Chien-Chi; Li, Po-E; Chain, Patrick S. G.

    2016-02-29

    Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis, and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, which it compares to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.
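The core comparison, a base's quality score against the run-wide distribution of scores at the same read position, can be sketched as below. This is an illustration of the idea only, not the ADEPT algorithm itself; the z-score cutoff and the toy score distributions are assumptions.

```python
from statistics import mean, stdev

def flag_errors(read_quals, position_dist, z_cut=-2.0):
    """Flag bases whose quality score sits far below the run-wide
    distribution of scores observed at the same read position."""
    flagged = []
    for pos, q in enumerate(read_quals):
        mu, sigma = mean(position_dist[pos]), stdev(position_dist[pos])
        z = (q - mu) / sigma if sigma else 0.0
        if z < z_cut:
            flagged.append(pos)
    return flagged

# Position-specific score distributions for a toy run (hypothetical numbers).
dist = {0: [38, 39, 37, 40], 1: [38, 37, 39, 38], 2: [36, 38, 37, 37]}
print(flag_errors([38, 12, 37], dist))  # [1]: far below its distribution
```

ADEPT additionally weighs the scores of neighboring bases, which is what curbs the position-dependent under-prediction noted in the abstract.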

  4. How Radiation Oncologists Would Disclose Errors: Results of a Survey of Radiation Oncologists and Trainees

    SciTech Connect

    Evans, Suzanne B.; Yu, James B.; Chagpar, Anees

    2012-10-01

    Purpose: To analyze error disclosure attitudes of radiation oncologists and to correlate error disclosure beliefs with survey-assessed disclosure behavior. Methods and Materials: With institutional review board exemption, an anonymous online survey was devised. An email invitation was sent to radiation oncologists (American Society for Radiation Oncology [ASTRO] gold medal winners, program directors and chair persons of academic institutions, and former ASTRO lecturers) and residents. A disclosure score was calculated based on the number of full, partial, or no disclosure responses chosen to the vignette-based questions, and correlation was attempted with attitudes toward error disclosure. Results: The survey received 176 responses: 94.8% of respondents considered themselves more likely to disclose in the setting of a serious medical error; 72.7% of respondents did not feel it mattered who was responsible for the error in deciding to disclose, and 3.9% felt more likely to disclose if someone else was responsible; 38.0% of respondents felt that disclosure increased the likelihood of a lawsuit, and 32.4% felt disclosure decreased the likelihood of lawsuit; 71.6% of respondents felt near misses should not be disclosed; 51.7% thought that minor errors should not be disclosed; 64.7% viewed disclosure as an opportunity for forgiveness from the patient; and 44.6% considered the patient's level of confidence in them to be a factor in disclosure. For a scenario that could be considered a non-harmful error, 78.9% of respondents would not contact the family. Respondents with high disclosure scores were more likely to feel that disclosure was an opportunity for forgiveness (P=.003) and to have never seen major medical errors (P=.004). Conclusions: The surveyed radiation oncologists chose to respond with full disclosure at a high rate, although ideal disclosure practices were not uniformly adhered to beyond the initial decision to disclose the occurrence of the error.

  5. A two-dimensional matrix correction for off-axis portal dose prediction errors

    SciTech Connect

    Bailey, Daniel W.; Kumaraswamy, Lalith; Bakhtiari, Mohammad; Podgorsak, Matthew B.

    2013-05-15

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. ['An effective correction algorithm for off-axis portal dosimetry errors,' Med. Phys. 36, 4089-4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone. As in
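The elementwise 2D correction described above can be sketched as follows. The tiny 2x2 "images" are hypothetical stand-ins for full portal dose images; the point is only that the correction matrix is built once from a calibration image pair and then applied pixel-by-pixel to subsequent calibrated images.

```python
import numpy as np

# Hypothetical predicted and measured portal dose images (calibrated units).
predicted = np.array([[1.00, 1.02], [0.98, 1.05]])
measured  = np.array([[0.90, 1.00], [0.95, 0.92]])

# The correction matrix is built from images spanning the detecting surface,
# then applied elementwise to every subsequent calibrated image.
correction = predicted / measured

corrected = measured * correction
print(np.allclose(corrected, predicted))  # True for the calibration pair itself
```

Unlike a radial correction, a full 2D matrix needs no symmetry assumption, which is why it can also fix regions the radial model leaves problematic.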

  6. CORRELATED AND ZONAL ERRORS OF GLOBAL ASTROMETRIC MISSIONS: A SPHERICAL HARMONIC SOLUTION

    SciTech Connect

    Makarov, V. V.; Dorland, B. N.; Gaume, R. A.; Hennessy, G. S.; Berghea, C. T.; Dudik, R. P.; Schmitt, H. R.

    2012-07-15

    We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.

  7. T-545: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets Remote Users

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    T-545: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets Remote Users Execute Arbitrary Code. January 28, 2011 - 7:21am. PROBLEM: RealPlayer Heap Corruption Error in 'vidplin.dll' Lets Remote Users Execute Arbitrary Code. PLATFORM: RealPlayer 14.0.1 and prior versions. ABSTRACT: A vulnerability was reported in RealPlayer. A remote user can

  8. "Show Preview" button is not working; gives error | OpenEI Community

    OpenEI (Open Energy Information) [EERE & EIA]

    "Show Preview" button is not working; gives error. Submitted by Ewilson on 3 January, 2013 - 09:52. Answer: Eric, thanks for reporting this. I...

  9. A Compact Code for Simulations of Quantum Error Correction in Classical Computers

    SciTech Connect

    Nyman, Peter

    2009-03-10

    This study considers implementations of error correction in a simulation language on a classical computer. Error correction will be necessary in quantum computing and quantum information. We give some examples of implementations of error correction codes. These implementations are made in a more general quantum simulation language on a classical computer, in the language Mathematica. The intention of this research is to develop a programming language that is able to simulate all quantum algorithms and error corrections in the same framework. The program code implemented on a classical computer provides a connection between the mathematical formulation of quantum mechanics and computational methods. This gives us a clear, uncomplicated language for the implementation of algorithms.
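    As a concrete instance of the kind of simulation the study describes (sketched here in Python rather than Mathematica), the following is a minimal statevector simulation of the three-qubit bit-flip code: encode, inject an X error, read the Z-parity syndrome, and correct:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Encode a|0> + b|1> as a|000> + b|111>
a, b = 0.6, 0.8
psi = np.zeros(8)
psi[0], psi[7] = a, b

# Inject a bit-flip (X) error on the middle qubit
psi_err = kron(I2, X, I2) @ psi

# Syndrome: expectation values of the stabilizers Z0Z1 and Z1Z2
s1 = psi_err @ kron(Z, Z, I2) @ psi_err
s2 = psi_err @ kron(I2, Z, Z) @ psi_err
flip = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(round(s1), round(s2))]

# Apply the recovery operation indicated by the syndrome
ops = [I2, I2, I2]
if flip is not None:
    ops[flip] = X
psi_fixed = kron(*ops) @ psi_err
print("recovered:", np.allclose(psi_fixed, psi))
```

    The syndrome (-1, -1) uniquely identifies the middle qubit, so the recovery X restores the encoded state without ever measuring a or b.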

  10. V-228: RealPlayer Buffer Overflow and Memory Corruption Error...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    V-228: RealPlayer Buffer Overflow and Memory Corruption Error Let Remote Users Execute ... Lets Remote Users Execute Arbitrary Code V-049: RealPlayer Buffer Overflow and Invalid ...

  11. V-109: Google Chrome WebKit Type Confusion Error Lets Remote...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Google Chrome WebKit Type Confusion Error Lets Remote Users Execute Arbitrary Code PLATFORM: Google Chrome prior to 25.0.1364.160 ABSTRACT: A vulnerability was reported in...

  12. Progress in Understanding Error-field Physics in NSTX Spherical Torus Plasmas

    SciTech Connect

    J.E. Menard, R.E. Bell, D.A. Gates, S.P. Gerhardt, J.-K. Park, S.A. Sabbagh, J.W. Berkery, A. Egan, J. Kallman, S.M. Kaye, B. LeBlanc, Y.Q. Liu, A. Sontag, D. Swanson, H. Yuh, W. Zhu and the NSTX Research Team

    2010-05-19

    The low aspect ratio, low magnetic field, and wide range of plasma beta of NSTX plasmas provide new insight into the origins and effects of magnetic field errors. An extensive array of magnetic sensors has been used to analyze error fields, to measure error field amplification, and to detect resistive wall modes in real time. The measured normalized error-field threshold for the onset of locked modes shows a linear scaling with plasma density, a weak to inverse dependence on toroidal field, and a positive scaling with magnetic shear. These results extrapolate to a favorable error field threshold for ITER. For these low-beta locked-mode plasmas, perturbed equilibrium calculations find that the plasma response must be included to explain the empirically determined optimal correction of NSTX error fields. In high-beta NSTX plasmas exceeding the n=1 no-wall stability limit where the RWM is stabilized by plasma rotation, active suppression of n=1 amplified error fields and the correction of recently discovered intrinsic n=3 error fields have led to sustained high rotation and record durations free of low-frequency core MHD activity. For sustained rotational stabilization of the n=1 RWM, both the rotation threshold and magnitude of the amplification are important. At fixed normalized dissipation, kinetic damping models predict rotation thresholds for RWM stabilization to scale nearly linearly with particle orbit frequency. Studies for NSTX find that orbit frequencies computed in general geometry can deviate significantly from those computed in the high aspect ratio and circular plasma cross-section limit, and these differences can strongly influence the predicted RWM stability. The measured and predicted RWM stability is found to be very sensitive to the E×B rotation profile near the plasma edge, and the measured critical rotation for the RWM is approximately a factor of two higher than predicted by the MARS-F code using the semi-kinetic damping model.

  13. V-235: Cisco Mobility Services Engine Configuration Error Lets Remote Users

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Login Anonymously | Department of Energy. September 5, 2013 - 12:33am. PROBLEM: A vulnerability was reported in Cisco Mobility Services Engine. A remote user can login anonymously. PLATFORM: Cisco Mobility Services Engine. ABSTRACT: A vulnerability in Cisco Mobility Services Engine could allow an

  14. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    SciTech Connect

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  15. Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report

    Weekly Natural Gas Storage Report

    U.S. Energy Information Administration | Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report 1 February 2016 Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report The U.S. Energy Information Administration (EIA) collects and publishes natural gas storage information on a monthly and weekly basis. The Form EIA-191, Monthly Underground Natural Gas Storage Report, is a census survey that collects field-level

  16. In Situ Validation of a Correction for Time-Lag and Bias Errors in Vaisala RS80-H Radiosonde Humidity Measurements

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    In Situ Validation of a Correction for Time-Lag and Bias Errors in Vaisala RS80-H Radiosonde Humidity Measurements L. M. Miloshevich National Center for Atmospheric Research Boulder, Colorado H. Vömel and S. J. Oltmans National Oceanic and Atmospheric Administration Boulder, Colorado A. Paukkunen Vaisala Oy Helsinki, Finland Introduction Radiosonde relative humidity (RH) measurements are fundamentally important to Atmospheric Radiation Measurement (ARM) Program goals because they are used in a

  17. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    DOE PAGES [OSTI]

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; et al

    2015-04-30

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the

  18. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    SciTech Connect

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-30

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half
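    Combining the abstract's 2σ error components in quadrature reproduces the direction of the reported result. This sketch uses only the two quoted terms (atmospheric growth rate and fossil-fuel emissions) and omits the land-use term, so the reduction it prints is an illustrative lower bound on the paper's ~20%:

```python
import math

# 2-sigma error components quoted in the abstract, in Pg C per year
growth_1960s, growth_2000s = 1.2, 0.3     # atmospheric growth rate
fossil_1960s, fossil_2000s = 0.3, 1.0     # fossil-fuel emissions

# Net-uptake uncertainty from independent components combined in quadrature
unc_1960s = math.hypot(growth_1960s, fossil_1960s)
unc_2000s = math.hypot(growth_2000s, fossil_2000s)
reduction = 1 - unc_2000s / unc_1960s
print(f"net-uptake uncertainty: {unc_1960s:.2f} -> {unc_2000s:.2f} Pg C/yr "
      f"({reduction:.0%} reduction)")
```

    Even though the fossil-fuel error grew, the larger drop in the growth-rate error dominates the quadrature sum, which is why the overall uncertainty still falls.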

  19. Quantifying the Effect of Lidar Turbulence Error on Wind Power Prediction

    SciTech Connect

    Newman, Jennifer F.; Clifton, Andrew

    2016-01-01

    Currently, cup anemometers on meteorological towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability; however, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install meteorological towers at potential sites. As a result, remote-sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. Although lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence with these devices. Errors in lidar turbulence estimates are caused by a variety of factors, including instrument noise, volume averaging, and variance contamination, and the magnitude of these factors is highly dependent on measurement height and atmospheric stability. Because turbulence has a large impact on wind power production, errors in turbulence measurements translate into errors in wind power prediction. The impact of using lidars rather than cup anemometers for wind power prediction must be understood if lidars are to be considered a viable alternative to cup anemometers. In this poster, the sensitivity of power prediction error to typical lidar turbulence measurement errors is assessed. Turbulence estimates from a vertically profiling WINDCUBE v2 lidar are compared to high-resolution sonic anemometer measurements at field sites in Oklahoma and Colorado to determine the degree of lidar turbulence error that can be expected under different atmospheric conditions. These errors are then incorporated into a power prediction model to estimate the sensitivity of power prediction error to turbulence measurement error. Power prediction models, including the standard binning method and a random forest method, were developed using data from the aeroelastic simulator FAST.

  20. Labor Relations

    Energy.gov [DOE]

    Addressing Poor Performance
    What happens if an employee's performance is below the Meets Expectations (ME) level? Any time during the appraisal period that an employee demonstrates performance below the ME level in at least one critical element, the Rating Official should contact his/her Human Resources Office for guidance and:
    • If performance is at the Needs Improvement (NI) level, issue the employee a Performance Assistance Plan (PAP); or
    • If performance is at the Fails to Meet Expectations (FME) level, issue the employee a Performance Improvement Plan (PIP).
    Department of Energy Headquarters and the National Treasury Employees Union (NTEU) Collective Bargaining Agreement
    The National Treasury Employees Union (NTEU) is the exclusive representative of bargaining-unit employees at the Department of Energy Headquarters offices in the Washington, DC metropolitan area. The terms and conditions of this agreement have been negotiated by DOE and NTEU, and prescribe their respective rights and obligations in matters related to conditions of employment.
    Headquarters 1187, Request for Payroll Deductions for Labor Organization Dues
    The Request for Payroll Deduction for Labor Organization Dues (SF-1187) permits eligible employees who are members of NTEU to authorize voluntary allotments from their compensation.
    Headquarters 1188, Cancellation of Payroll Deductions for Labor Organization Dues
    The Cancellation of Payroll Deductions for Labor Organization Dues (SF-1188) permits eligible employees who are members of NTEU to cancel dues allotments.
    The National Treasury Employees Union Collective Bargaining Agreement, Article 9 – Dues Withholding
    This article permits eligible employees who are members of NTEU to authorize voluntary allotments from their compensation.

  1. Radiochemical tracers as a mix diagnostic for the ignition double...

    Office of Scientific and Technical Information (OSTI)

    One of the most important challenges confronting laser-driven capsule implosion experiments will be a quantitative evaluation of the...

  2. Radiochemical tracers as a mix diagnostic for the ignition double...

    Office of Scientific and Technical Information (OSTI)

    for the ignition double-shell capsule One of the most important challenges confronting laser-driven capsule implosion experiments will be a quantitative evaluation of the...

  3. Radiochemical Analyses of Water Samples from Selected Streams

    Office of Legacy Management (LM)

    and Precipitation Collected October in Conjunction With the First Production Test, Project Rulison-9, HGS10. DISCLAIMER: Portions of this document may be illegible in electronic image products. Images are produced from the best available original document.

  4. Radiochemical analysis using Empore{trademark} Rad Disks.

    SciTech Connect

    Smith, L. L.

    1999-06-07

    A solid-phase extraction technique that isolates specific radionuclides (i.e., {sup 89/90}Sr, {sup 226/228}Ra, {sup 99}Tc) from surface, ground, and drinking waters is described. The analyte is isolated by pulling a sample through an appropriate Empore{trademark} Rad Disk with a vacuum, and the disk is subsequently assayed by a suitable counting technique. The method has both laboratory and field applications. Interferences are discussed.

  5. Chemical and Radiochemical Analyses of Waste Isolation Pilot Plant (WIPP)

    Office of Environmental Management (EM)

    Energy Charting the Course for Major EM Successes in 2016-2017 Charting the Course for Major EM Successes in 2016-2017 Presentation from the 2015 DOE National Cleanup Workshop by Stacy Charboneau, Manager, Richland Operations Office. Charting the Course for Major EM Successes in 2016-2017 (1.98 MB) More Documents & Publications Focus on the Field FY 2017 EM Budget Rollout Presentation Construction of Salt Waste Processing Facility (SWPF)


  6. Precise trace rare earth analysis by radiochemical neutron activation

    SciTech Connect

    Laul, J.C.; Lepel, E.A.; Weimer, W.C.; Wogman, N.A.

    1981-06-01

    A rare earth group separation scheme followed by normal Ge(Li), low-energy photon detector (LEPD), and Ge(Li)-NaI(Tl) coincidence-noncoincidence spectrometry significantly enhances the detection sensitivity for individual rare earth elements (REE) at or below the ppb level. Based on the selected γ-ray energies, normal Ge(Li) counting is favored for ¹⁴⁰La, ¹⁷⁰Tb, and ¹⁶⁹Yb; LEPD is favored for the low γ-ray energies of ¹⁴⁷Nd, ¹⁵³Sm, ¹⁶⁶Ho, and ¹⁶⁹Yb; and noncoincidence counting is favored for ¹⁴¹Ce, ¹⁴³Ce, ¹⁴²Pr, ¹⁵³Sm, ¹⁷¹Er, and ¹⁷⁵Yb. Detection of the radionuclides ¹⁵²ᵐEu, ¹⁵⁹Gd, and ¹⁷⁷Lu is equally sensitive by normal Ge(Li) and noncoincidence counting; ¹⁵²Eu is equally sensitive by LEPD and normal Ge(Li); and ¹⁵³Gd and ¹⁷⁰Tm are equally favored by all the counting modes. Overall, noncoincidence counting is favored for most of the REE. Precise measurements of the REE were made in geological and biological standards.

  7. SU-E-T-377: Inaccurate Positioning Might Introduce Significant MapCheck Calibration Error in Flatten Filter Free Beams

    SciTech Connect

    Wang, S; Chao, C; Chang, J

    2014-06-01

    Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to positioning error for flattening filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10MV conventional and FFF beams, with careful alignment and with 1cm positioning error during calibration, respectively. Open fields of 37cmx37cm were delivered to gauge the impact of the resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which propagation error was estimated for positioning errors from 1mm to 1cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on beam type. Results: The 1cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams, respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with the positioning error. The difference in sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that positioning error is not handled by the current commercial calibration algorithm of MapCheck. In particular, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1mm positioning error might lead to up to 8% calibration error. Since the sensitivities are only slightly dependent on beam type and the conventional beam is less affected by positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect

  8. An Integrated Signaling-Encryption Mechanism to Reduce Error Propagation in Wireless Communications: Performance Analyses

    SciTech Connect

    Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko

    2015-01-01

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
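    The Hamming coding applied to the encrypted portion can be illustrated with the classic Hamming(7,4) code, which corrects any single bit error per 7-bit word. A self-contained NumPy sketch with systematic generator and parity-check matrices (the message and error position are arbitrary):

```python
import numpy as np

# Hamming(7,4) in systematic form: G = [I4 | P], H = [P^T | I3]
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits4):
    return (np.array(bits4) @ G) % 2

def decode(word7):
    syndrome = (H @ word7) % 2
    if syndrome.any():  # the syndrome matches the column of H at the error
        err = next(i for i in range(7) if (H[:, i] == syndrome).all())
        word7 = word7.copy()
        word7[err] ^= 1
    return word7[:4]

msg = [1, 0, 1, 1]
tx = encode(msg)
tx[2] ^= 1                      # single bit error in the channel
assert (decode(tx) == np.array(msg)).all()
```

    Because only the small encrypted portion is coded this way, the added redundancy (3 parity bits per 4 data bits here) barely raises the overall bit rate of the frame.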

  9. Gross error detection and stage efficiency estimation in a separation process

    SciTech Connect

    Serth, R.W.; Srikanth, B. (Dept. of Chemical and Natural Gas Engineering); Maronga, S.J. (Dept. of Chemical and Process Engineering)

    1993-10-01

    Accurate process models are required for optimization and control in chemical plants and petroleum refineries. These models involve various equipment parameters, such as stage efficiencies in distillation columns, the values of which must be determined by fitting the models to process data. Since the data contain random and systematic measurement errors, some of which may be large (gross errors), they must be reconciled to obtain reliable estimates of equipment parameters. The problem thus involves parameter estimation coupled with gross error detection and data reconciliation. MacDonald and Howat (1988) studied the above problem for a single-stage flash distillation process. Their analysis was based on the definition of stage efficiency due to Hausen, which has some significant disadvantages in this context, as discussed below. In addition, they considered only data sets which contained no gross errors. The purpose of this article is to extend the above work by considering alternative definitions of stage efficiency and efficiency estimation in the presence of gross errors.
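    Data reconciliation with a gross-error test can be sketched in closed form: minimize the weighted squared adjustment to the measurements subject to the linear balance A x = 0, and flag a gross error when the chi-square global test statistic on the constraint residual is large. The splitter flowsheet and unit variances below are assumptions for illustration:

```python
import numpy as np

# Mass balance A @ x = 0 for a splitter: F1 - F2 - F3 = 0
A = np.array([[1.0, -1.0, -1.0]])
sigma = np.diag([1.0, 1.0, 1.0])          # measurement error variances

def reconcile(m):
    r = A @ m                              # constraint residual
    # Closed-form weighted least squares projection onto A @ x = 0
    x = m - sigma @ A.T @ np.linalg.solve(A @ sigma @ A.T, r)
    stat = float(r @ np.linalg.solve(A @ sigma @ A.T, r))  # ~chi2(1)
    return x, stat

m_clean = np.array([100.2, 59.8, 40.3])    # consistent within random error
m_gross = np.array([110.0, 59.8, 40.3])    # gross error on F1

for m in (m_clean, m_gross):
    x, stat = reconcile(m)
    print(x, "gross error suspected" if stat > 3.84 else "ok")
```

    The reconciled flows satisfy the balance exactly in both cases; the test statistic (against the 95% chi-square critical value 3.84) is what distinguishes ordinary noise from a gross error.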

  10. Notes on power of normality tests of error terms in regression models

    SciTech Connect

    Střelec, Luboš

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to detect non-normality of the error terms may lead to incorrect results from the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow exact inferences. Since normally distributed stochastic errors are necessary for drawing valid inferences, robust tests of normality are both necessary and important. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. We introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
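    A minimal version of such a normality check on regression error terms, using only NumPy: fit OLS, then apply the Jarque-Bera statistic (implemented directly from its definition) to the residuals. The data-generating model with exponential disturbances is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 500)
# Regression with non-normal (exponential, mean-centered) error terms
y = 2.0 + 0.5 * x + (rng.exponential(1.0, 500) - 1.0)

# OLS fit and residuals
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

def jarque_bera(e):
    """JB = n/6 * (S^2 + K^2/4); ~chi-square(2) under normality."""
    n = len(e)
    z = (e - e.mean()) / e.std()
    S = np.mean(z ** 3)                 # sample skewness
    K = np.mean(z ** 4) - 3.0           # excess kurtosis
    return n / 6.0 * (S ** 2 + K ** 2 / 4.0)

jb = jarque_bera(resid)
print(f"JB = {jb:.1f}  (> 5.99 rejects normality at the 5% level)")
```

    With skewed disturbances the statistic is far above the critical value, the kind of rejection a t-test user would want to know about before trusting their p-values.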

  11. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
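    The "traditional hierarchical surplus" refinement strategy that the paper improves upon can be sketched in one dimension: greedily add the grid point where the surplus (the difference between the function and the current interpolant) is largest. The target function below is an assumed stand-in for a quantity of interest, and piecewise-linear interpolation stands in for the sparse-grid basis:

```python
import numpy as np

# Assumed quantity of interest with a localized feature
f = lambda x: np.exp(-40 * (x - 0.3) ** 2)

# Start from a coarse grid; refine where the hierarchical surplus is largest
nodes = {0.0: f(0.0), 0.5: f(0.5), 1.0: f(1.0)}
for _ in range(20):
    xs = np.array(sorted(nodes))
    mids = (xs[:-1] + xs[1:]) / 2
    interp = np.interp(mids, xs, [nodes[x] for x in xs])
    surplus = np.abs(f(mids) - interp)       # surplus-based error indicator
    best = mids[np.argmax(surplus)]
    nodes[best] = f(best)                    # refine the worst cell

xs = np.array(sorted(nodes))
vals = np.array([nodes[x] for x in xs])
check = np.linspace(0, 1, 201)
err = np.max(np.abs(np.interp(check, xs, vals) - f(check)))
print(f"{len(nodes)} nodes, max error {err:.3e}")
```

    The refinement clusters points near the feature at x = 0.3. The paper's contribution is to replace this surplus indicator with adjoint-based a posteriori estimates that also account for the physical discretization error.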

  12. HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD

    SciTech Connect

    Harold S. Blackman; David I. Gertman; Ronald L. Boring

    2008-09-01

    This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks, and incorporating performance shaping factor multipliers upon those nominal error rates.
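    The SPAR-H calculation itself is straightforward arithmetic: a nominal error rate (0.01 for diagnosis tasks, 0.001 for action tasks in the method) multiplied by the applicable performance-shaping-factor multipliers. The PSF multiplier values below are illustrative assumptions, not from any real analysis:

```python
# SPAR-H-style HEP: nominal error rate times the product of PSF multipliers.
nominal_action = 0.001        # nominal rate for an action-oriented task
psf = {
    "available_time": 1.0,    # nominal time available (assumed)
    "stress": 2.0,            # high stress (assumed multiplier)
    "complexity": 2.0,        # moderately complex task (assumed multiplier)
    "experience": 1.0,        # nominal training/experience (assumed)
}

multiplier = 1.0
for v in psf.values():
    multiplier *= v

hep = nominal_action * multiplier
print(f"HEP = {hep:.4f}")
```

    Here the combined multiplier of 4 raises the action-task HEP from 0.001 to 0.004; a full SPAR-H analysis would draw each multiplier from the method's defined PSF tables.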

  13. Stability and error analysis of nodal expansion method for convection-diffusion equation

    SciTech Connect

    Deng, Z.; Rizwan-Uddin; Li, F.; Sun, Y.

    2012-07-01

    The development, stability analysis, and error analysis of the nodal expansion method (NEM) for the one-dimensional steady-state convection-diffusion equation are presented. Following the traditional procedure to develop NEM, the discrete formulation of the convection-diffusion equation, which is similar to the standard finite difference scheme, is derived. The method of discrete perturbation analysis is applied to this discrete form to study the stability of the NEM. The scheme based on the NEM is found to be stable for local Peclet number less than 4.644. A maximum principle is proved for the NEM scheme, followed by an error analysis carried out by applying the maximum principle together with a carefully constructed comparison function. The scheme for the convection-diffusion equation is of second order. Numerical experiments are carried out, and the results agree with the conclusions of the stability and error analyses. (authors)
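    The stability bound can be checked directly from the local (grid) Peclet number Pe = u·h/D, with the scheme reported stable for Pe < 4.644. A small sketch with assumed convection and diffusion parameters:

```python
# Local (grid) Peclet number check against the NEM stability bound Pe < 4.644
def local_peclet(velocity, node_width, diffusivity):
    return velocity * node_width / diffusivity

u, D = 1.0, 0.01           # assumed convection speed and diffusion coefficient
for h in (0.02, 0.05, 0.1):
    pe = local_peclet(u, h, D)
    print(f"h={h}: Pe={pe:.1f}", "stable" if pe < 4.644 else "refine mesh")
```

    In practice this means the node width must shrink as the problem becomes more convection-dominated, just as with the grid Peclet restriction on central-difference schemes.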

  14. Error correcting code with chip kill capability and power saving enhancement

    SciTech Connect

    Gara, Alan G.; Chen, Dong; Coteus, Paul W.; Flynn, William T.; Marcella, James A.; Takken, Todd; Trager, Barry M.; Winograd, Shmuel

    2011-08-30

    A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
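    A toy analogue of the chip-kill idea: with one parity chip holding the XOR of the data chips, the symbol stored on any single failed chip can be reconstructed from the survivors. (The patented code goes further, using syndrome computation to locate single and double symbol errors; this sketch only shows reconstruction given a known failure, with invented per-chip symbols.)

```python
# One parity chip stores the XOR of the data-chip symbols
data_chips = [0x3A, 0x7F, 0x12, 0xC4]      # hypothetical per-chip symbols
parity = 0
for s in data_chips:
    parity ^= s

# Chip 2 dies; rebuild its symbol from the parity and the surviving chips
failed = 2
recovered = parity
for i, s in enumerate(data_chips):
    if i != failed:
        recovered ^= s

assert recovered == data_chips[failed]
```

    The power-saving angle in the patent follows from a related observation: because less than two full chips' worth of check symbols are needed, fewer chips must be active to verify the data.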

  15. Short-Term Load Forecasting Error Distributions and Implications for Renewable Integration Studies: Preprint

    SciTech Connect

    Hodge, B. M.; Lew, D.; Milligan, M.

    2013-01-01

    Load forecasting in the day-ahead timescale is a critical aspect of power system operations that is used in the unit commitment process. It is also an important factor in renewable energy integration studies, where the combination of load and wind or solar forecasting techniques creates the net load uncertainty that must be managed by the economic dispatch process or with suitable reserves. An understanding of the load forecasting errors that may be expected in this process can lead to better decisions about the amount of reserves necessary to compensate for errors. In this work, we performed a statistical analysis of the day-ahead (and two-day-ahead) load forecasting errors observed in two independent system operators for a one-year period. Comparisons were made with the normal distribution commonly assumed in power system operation simulations used for renewable power integration studies. Further analysis identified time periods when the load is more likely to be under- or overforecast.
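    The kind of distributional comparison the paper performs can be sketched by checking the tails of a forecast-error sample against the normal assumption via excess kurtosis. The Laplace-distributed errors below are a synthetic stand-in for real ISO forecast errors:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic day-ahead load forecast errors: heavy-tailed (Laplace), standing
# in for the empirical errors a normal assumption would misrepresent
errors = rng.laplace(0, 1.0, 10000)

z = (errors - errors.mean()) / errors.std()
excess_kurtosis = np.mean(z ** 4) - 3.0
print(f"excess kurtosis = {excess_kurtosis:.2f} (0 for a normal distribution)")
```

    Positive excess kurtosis means large forecast misses occur more often than a normal model predicts, which directly affects how much reserve is needed for the rare bad days.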

  16. Fade-resistant forward error correction method for free-space optical communications systems

    DOEpatents

    Johnson, Gary W.; Dowla, Farid U.; Ruggiero, Anthony J.

    2007-10-02

    Free-space optical (FSO) laser communication systems offer exceptionally wide-bandwidth, secure connections between platforms that cannot otherwise be connected via physical means such as optical fiber or cable. However, FSO links are subject to strong channel fading due to atmospheric turbulence and beam pointing errors, limiting practical performance and reliability. We have developed a fade-tolerant architecture based on forward error correcting codes (FECs) combined with delayed, redundant, sub-channels. This redundancy is made feasible through dense wavelength division multiplexing (WDM) and/or high-order M-ary modulation. Experiments and simulations show that error-free communication is feasible even when faced with fades that are tens of milliseconds long. We describe plans for practical implementation of a complete system operating at 2.5 Gbps.
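    The delayed, redundant sub-channel idea can be illustrated with a toy erasure model: if each symbol also travels on a copy delayed by more than the fade duration, at least one copy survives any single fade. (Here the redundancy is plain repetition and the fade is a clean erasure burst; the patent combines FECs with WDM sub-channels. All parameters are invented.)

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
bits = rng.integers(0, 2, n)
delay = 200                      # assumed sub-channel delay, > fade length

# Channel model: a deep fade erases a contiguous burst of symbols
def transmit(tx, fade_start, fade_len=150):
    rx = tx.astype(float)
    rx[fade_start:fade_start + fade_len] = np.nan   # erased during the fade
    return rx

rx1 = transmit(bits, 500)                           # direct copy
rx2 = transmit(np.roll(bits, delay), 500)           # delayed redundant copy

# Combine: take whichever copy survived the fade at each position
rx2_aligned = np.roll(rx2, -delay)
combined = np.where(np.isnan(rx1), rx2_aligned, rx1)
errors = np.count_nonzero(combined != bits)
print("residual errors:", errors)
```

    Because the same fade hits the two copies at different points in the data stream, the erased spans never overlap after realignment and every symbol is recovered.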

  17. Scheme for precise correction of orbit variation caused by dipole error field of insertion device

    SciTech Connect

    Nakatani, T.; Agui, A.; Aoyagi, H.; Matsushita, T.; Takao, M.; Takeuchi, M.; Yoshigoe, A.; Tanaka, H.

    2005-05-15

    We developed a scheme for precisely correcting the orbit variation caused by a dipole error field of an insertion device (ID) in a storage ring and investigated its performance. The key point for achieving the precise correction is to extract the variation of the beam orbit caused by the change of the ID error field from the observed variation. We periodically change parameters such as the gap and phase of the specified ID with a mirror-symmetric pattern over the measurement period to modulate the variation. The orbit variation is measured using conventional wide-frequency-band detectors and then the induced variation is extracted precisely through averaging and filtering procedures. Furthermore, the mirror-symmetric pattern enables us to independently extract the orbit variations caused by a static error field and by a dynamic one, e.g., an error field induced by the dynamical change of the ID gap or phase parameter. We built a time synchronization measurement system with a sampling rate of 100 Hz and applied the scheme to the correction of the orbit variation caused by the error field of an APPLE-2-type undulator installed in the SPring-8 storage ring. The result shows that the developed scheme markedly improves the correction performance and suppresses the orbit variation caused by the ID error field down to the order of submicron. This scheme is applicable not only to the correction of the orbit variation caused by a special ID, the gap or phase of which is periodically changed during an experiment, but also to the correction of the orbit variation caused by a conventional ID which is used with a fixed gap and phase.
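    The averaging and symmetry idea in this abstract can be sketched as follows. This is a speculative illustration, not the SPring-8 implementation: the function name, the simple synchronous averaging, and the even/odd split about the pattern midpoint are all assumptions based on the description above.

```python
import numpy as np

def extract_id_orbit_variation(orbit, period):
    """Fold an orbit signal over the ID modulation period, average to
    suppress uncorrelated variation, and split the result into the part
    even about the pattern midpoint (static error field) and the part
    odd about it (dynamic error field). Sketch only."""
    n = (len(orbit) // period) * period
    folded = orbit[:n].reshape(-1, period).mean(axis=0)  # synchronous average
    static = 0.5 * (folded + folded[::-1])               # mirror-even part
    dynamic = 0.5 * (folded - folded[::-1])              # mirror-odd part
    return static, dynamic
```

    With a mirror-symmetric gap/phase pattern, a static error field produces the same orbit shift on the forward and reversed halves, while a dynamic one changes sign, which is what the even/odd decomposition separates.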

  18. Steam-water relative permeability

    SciTech Connect

    Ambusso, W.; Satik, C.; Horne, R.N.

    1997-12-31

    A set of relative permeability relations for the simultaneous flow of steam and water in porous media has been measured in steady-state experiments conducted under conditions that eliminate most errors associated with saturation and pressure measurements. These relations show that the relative permeabilities for steam-water flow in porous media vary approximately linearly with saturation. This departure from the nitrogen/water behavior indicates that there are fundamental differences between steam/water and nitrogen/water flows. The saturations in these experiments were measured by using a high-resolution X-ray computed tomography (CT) scanner. In addition, the pressure gradients were obtained from measurements of the liquid-phase pressure over the portions with flat saturation profiles. These two aspects constitute a major improvement in the experimental method compared to those used in the past. Comparison of the saturation profiles measured by the X-ray CT scanner during the experiments shows good agreement with those predicted by numerical simulations. To obtain results that are applicable to general flow of steam and water in porous media, similar experiments will be conducted at higher temperature and with porous rocks of different wetting characteristics and porosity distribution.
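    The approximately linear behavior reported above can be written as a tiny helper. The residual saturations are illustrative placeholders, not values from the paper:

```python
def steam_water_rel_perm(s_w, s_wr=0.2, s_sr=0.05):
    """Linear steam-water relative permeability in effective saturation,
    the approximate behavior reported by the steady-state experiments.
    s_wr (residual water) and s_sr (residual steam) are assumed values.

    s_w: water saturation in [0, 1]; returns (kr_water, kr_steam)."""
    s_eff = (s_w - s_wr) / (1.0 - s_wr - s_sr)
    s_eff = min(max(s_eff, 0.0), 1.0)
    return s_eff, 1.0 - s_eff
```

    A linear model implies kr_water + kr_steam = 1 at every saturation, which is precisely the departure from the curved nitrogen/water (e.g., Corey-type) relations that the abstract highlights.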

  19. Low delay and area efficient soft error correction in arbitration logic

    DOEpatents

    Sugawara, Yutaka

    2013-09-10

    There is provided an arbitration logic device for controlling access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from a plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is a soft error in the winner requestor's information.

  20. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  1. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.
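    The trigger/FIFO/ID-check scheme described in this patent abstract can be sketched in software. This is a behavioral model for illustration only, not the patented hardware; the class and function names are invented:

```python
from collections import deque

class Channel:
    """One input channel with its own FIFO of (event_id, value) entries."""
    def __init__(self):
        self.fifo = deque()

def trigger_event(channels, samples, event_id):
    # the trigger circuit digitizes every channel for one event,
    # tagging each FIFO entry with the same ID number
    for ch, v in zip(channels, samples):
        ch.fifo.append((event_id, v))

def read_event(channels):
    # the bus controller moves the oldest entry from each FIFO onto the
    # common bus; mismatched IDs reveal channel desynchronization
    entries = [ch.fifo.popleft() for ch in channels]
    ids = {eid for eid, _ in entries}
    if len(ids) != 1:
        raise RuntimeError("event ID mismatch: channels out of sync")
    return [v for _, v in entries]
```

    Tagging entries at digitization time means a dropped or duplicated pulse on any single channel is caught at readout rather than silently corrupting all later events.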

  2. runtime error message: "apsched: request exceeds max nodes, alloc"

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    September 12, 2014. Symptom: User jobs with single or multiple apruns in a batch script may get the runtime error "apsched: request exceeds max nodes, alloc". This problem is intermittent; it started in April, recurred in mid-July, and again since late August. Status: This problem is identified as occurring when the Torque/Moab batch scheduler becomes out of sync with the

  3. V-228: RealPlayer Buffer Overflow and Memory Corruption Error Let Remote

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    V-228: RealPlayer Buffer Overflow and Memory Corruption Error Let Remote Users Execute Arbitrary Code. August 27, 2013 - 6:00am. PROBLEM: Two vulnerabilities were reported in RealPlayer. PLATFORM: RealPlayer 16.0.2.32 and prior. ABSTRACT: A remote user can cause arbitrary code to be executed on the target user's system. REFERENCE LINKS:

  4. Discretization error estimation and exact solution generation using the method of nearby problems.

    SciTech Connect

    Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.

    2011-10-01

    The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.

  5. Deciphering the genetic regulatory code using an inverse error control coding framework.

    SciTech Connect

    Rintoul, Mark Daniel; May, Elebeoba Eni; Brown, William Michael; Johnston, Anna Marie; Watson, Jean-Paul

    2005-03-01

    We have found that developing a computational framework for reconstructing error control codes for engineered data and ultimately for deciphering genetic regulatory coding sequences is a challenging and uncharted area that will require advances in computational technology for exact solutions. Although exact solutions are desired, computational approaches that yield plausible solutions would be considered sufficient as a proof of concept to the feasibility of reverse engineering error control codes and the possibility of developing a quantitative model for understanding and engineering genetic regulation. Such evidence would help move the idea of reconstructing error control codes for engineered and biological systems from the high risk high payoff realm into the highly probable high payoff domain. Additionally this work will impact biological sensor development and the ability to model and ultimately develop defense mechanisms against bioagents that can be engineered to cause catastrophic damage. Understanding how biological organisms are able to communicate their genetic message efficiently in the presence of noise can improve our current communication protocols, a continuing research interest. Towards this end, project goals include: (1) Develop parameter estimation methods for n for block codes and for n, k, and m for convolutional codes. Use methods to determine error control (EC) code parameters for gene regulatory sequence. (2) Develop an evolutionary computing computational framework for near-optimal solutions to the algebraic code reconstruction problem. Method will be tested on engineered and biological sequences.

  6. Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells

    SciTech Connect

    Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.

    2014-03-01

    This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.

  7. Superconvergence of the derivative patch recovery technique and a posteriori error estimation

    SciTech Connect

    Zhang, Z.; Zhu, J.Z.

    1995-12-31

    The derivative patch recovery technique developed by Zienkiewicz and Zhu for the finite element method is analyzed. It is shown that, for one-dimensional problems and two-dimensional problems using tensor product elements, the patch recovery technique yields superconvergent recovery of the derivatives. Consequently, the error estimator based on the recovered derivative is asymptotically exact.

  8. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    SciTech Connect

    Alonso, Juan J.; Iaccarino, Gianluca

    2013-08-25

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A

  9. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES [OSTI]

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  10. Simulation of Dose to Surrounding Normal Structures in Tangential Breast Radiotherapy Due to Setup Error

    SciTech Connect

    Prabhakar, Ramachandran Rath, Goura K.; Julka, Pramod K.; Ganesh, Tharmar; Haresh, K.P.; Joshi, Rakesh C.; Senthamizhchelvan, S.; Thulkar, Sanjay; Pant, G.S.

    2008-04-01

    Setup error plays a significant role in the final treatment outcome in radiotherapy. The effect of setup error on the planning target volume (PTV) and surrounding critical structures has been studied and the maximum allowed tolerance in setup error with minimal complications to the surrounding critical structure and acceptable tumor control probability is determined. Twelve patients who had undergone breast conservation surgery were selected for this study; 8 had right-sided and 4 had left-sided breast tumors. Tangential fields were placed on the 3-dimensional computed tomography (3D-CT) dataset by isocentric technique and the dose to the PTV, ipsilateral lung (IL), contralateral lung (CLL), contralateral breast (CLB), heart, and liver were then computed from dose-volume histograms (DVHs). The planning isocenter was shifted by 3 and 10 mm in all 3 directions (X, Y, Z) to simulate the setup error encountered during treatment. Dosimetric studies were performed for each patient for PTV according to ICRU 50 guidelines: mean doses to PTV, IL, CLL, heart, CLB, liver, and percentage of lung volume that received a dose of 20 Gy or more (V20); percentage of heart volume that received a dose of 30 Gy or more (V30); and volume of liver that received a dose of 50 Gy or more (V50) were calculated for all of the above-mentioned isocenter shifts and compared to the results with zero isocenter shift. Simulation of different isocenter shifts in all 3 directions showed that the isocentric shifts along the posterior direction had a very significant effect on the dose to the heart, IL, CLL, and CLB, followed by the lateral direction. The setup error in isocenter should be strictly kept below 3 mm. The study shows that isocenter verification in the case of tangential fields should be performed to reduce future complications to adjacent normal tissues.

  11. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE PAGES [OSTI]

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-21

    This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ∼2°, than those from the three empirical models with averaged errors > ∼5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  12. The Residual Setup Errors of Different IGRT Alignment Procedures for Head and Neck IMRT and the Resulting Dosimetric Impact

    SciTech Connect

    Graff, Pierre; Radiation-Oncology, Alexis Vautrin Cancer Center, Vandoeuvre-Les-Nancy; Doctoral School BioSE, Nancy; Kirby, Neil; Weinberg, Vivian; Department of Biostatistics, Helen Diller Family Comprehensive Cancer Center, University of California, San Francisco, California; Chen, Josephine; Yom, Sue S.; Lambert, Louise; Radiation-Oncology, Montreal University Centre, Montreal; Pouliot, Jean

    2013-05-01

    Purpose: To assess residual setup errors during head and neck radiation therapy and the resulting consequences for the delivered dose for various patient alignment procedures. Methods and Materials: Megavoltage cone beam computed tomography (MVCBCT) scans from 11 head and neck patients who underwent intensity modulated radiation therapy were used to assess setup errors. Each MVCBCT scan was registered to its reference planning kVCT, with seven different alignment procedures: automatic alignment and manual registration to 6 separate bony landmarks (sphenoid, left/right maxillary sinuses, mandible, cervical 1 [C1]-C2, and C7-thoracic 1 [T1] vertebrae). Shifts in the different alignments were compared with each other to determine whether there were any statistically significant differences. Then, the dose distribution was recalculated on 3 MVCBCT images per patient for every alignment procedure. The resulting dose-volume histograms for targets and organs at risk (OARs) were compared to those from the planning kVCTs. Results: The registration procedures produced statistically significant global differences in patient alignment and actual dose distribution, indicating a need for standardization of patient positioning. Vertically, the automatic, sphenoid, and maxillary sinuses alignments mainly generated posterior shifts and resulted in mean increases in maximal dose to OARs of >3% of the planned dose. The suggested choice of C1-C2 as a reference landmark appears valid, combining both OAR sparing and target coverage. Assuming this choice, relevant margins to apply around volumes of interest at the time of planning to account for the relative mobility of other regions are discussed. Conclusions: Use of different alignment procedures for treating head and neck patients produced variations in patient setup and dose distribution. With concern for standardizing practice, C1-C2 reference alignment with relevant margins around planning volumes seems to be a valid

  13. Decreasing range resolution of a SAR image to permit correction of motion measurement errors beyond the SAR range resolution

    DOEpatents

    Doerry, Armin W. (Albuquerque, NM); Heard, Freddie E. (Albuquerque, NM); Cordaro, J. Thomas (Albuquerque, NM)

    2010-07-20

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  14. Application of ISO-TAG4 to the reporting of limit of error on the inventory difference

    SciTech Connect

    Murdock, C.; Suda, S.

    1993-07-01

    A standard reference does not exist for evaluating and expressing systematic and random uncertainty; thus, there is no basis for comparing measurement uncertainties at different facilities. Based on recommendations of the International Committee for Weights and Measures, the National Center for Standards and Certification Information, which is responsible for information on standardization programs and related activities, has published ISO-TAG4, Guide to the Expression of Uncertainty in Measurement (1993). The guide establishes general rules for evaluating and expressing uncertainty in physical measurements by presenting definitions, basic concepts, and examples. It focuses on the methods of evaluating uncertainty components rather than categorizing the components, thus avoiding the ambiguity encountered when categorizing uncertainty components as "random" and "systematic." This paper presents an overview of the terms specific to the guide, including standard and combined standard uncertainty, Type A and Type B evaluation, expanded uncertainty, and coverage factor. It illustrates Type A and Type B evaluation of random and systematic errors in forms relating to nuclear material accountability work. This guide could be adapted by the MC&A community.
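    The guide's core quantities (combined standard uncertainty and expanded uncertainty with a coverage factor) reduce to a short calculation. The component values below are invented for illustration:

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares combination of standard uncertainty components,
    Type A and Type B evaluations alike, per the GUM framework above."""
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(components, k=2.0):
    """Expanded uncertainty U = k * u_c; a coverage factor k = 2 gives
    roughly 95% coverage for a normal distribution."""
    return k * combined_standard_uncertainty(components)

# illustrative components for one accountability measurement (kg):
# 0.3 from weighing repeatability (Type A), 0.4 from calibration (Type B)
u_c = combined_standard_uncertainty([0.3, 0.4])   # 0.5
U = expanded_uncertainty([0.3, 0.4])              # 1.0
```

    The point of the guide is that both components enter the same root-sum-of-squares regardless of whether they would traditionally have been labeled "random" or "systematic".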

  15. Numerical estimation of the relative entropy of entanglement

    SciTech Connect

    Zinchenko, Yuriy; Friedland, Shmuel; Gour, Gilad

    2010-11-15

    We propose a practical algorithm for the calculation of the relative entropy of entanglement (REE), defined as the minimum relative entropy between a state and the set of states with positive partial transpose. Our algorithm is based on a practical semidefinite cutting plane approach. In low dimensions the implementation of the algorithm in MATLAB provides an estimation for the REE with an absolute error smaller than 10{sup -3}.

  16. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    SciTech Connect

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  17. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    DOE PAGES [OSTI]

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  18. Density-functional errors in ionization potential with increasing system size

    SciTech Connect

    Whittleton, Sarah R.; Sosa Vazquez, Xochitl A.; Isborn, Christine M.; Johnson, Erin R.

    2015-05-14

    This work investigates the effects of molecular size on the accuracy of density-functional ionization potentials for a set of 28 hydrocarbons, including series of alkanes, alkenes, and oligoacenes. As the system size increases, delocalization error introduces a systematic underestimation of the ionization potential, which is rationalized by considering the fractional-charge behavior of the electronic energies. The computation of the ionization potential with many density-functional approximations is not size-extensive due to excessive delocalization of the incipient positive charge. While inclusion of exact exchange reduces the observed errors, system-specific tuning of long-range corrected functionals does not generally improve accuracy. These results emphasize that good performance of a functional for small molecules is not necessarily transferable to larger systems.

  19. Effect of Field Errors in Muon Collider IR Magnets on Beam Dynamics

    SciTech Connect

    Alexahin, Y.; Gianfelice-Wendt, E.; Kapin, V.V.; /Fermilab

    2012-05-01

    In order to achieve peak luminosity of a Muon Collider (MC) in the 10{sup 35} cm{sup -2}s{sup -1} range, very small values of the beta-function at the interaction point (IP) are necessary ({beta}* {le} 1 cm), while the distance from the IP to the first quadrupole cannot be made shorter than {approx}6 m, as dictated by the necessity of detector protection from backgrounds. As a result, the beta-function at the final focus quadrupoles can reach 100 km, making beam dynamics very sensitive to all kinds of errors. In the present report we consider the effects on momentum acceptance and dynamic aperture of multipole field errors in the body of IR dipoles, as well as of fringe fields in both dipoles and quadrupoles, in the case of a 1.5 TeV (c.o.m.) MC. Analysis shows these effects to be strong but correctable with dedicated multipole correctors.

  20. Calculation of the Johann error for spherically bent x-ray imaging crystal spectrometers

    SciTech Connect

    Wang, E.; Beiersdorfer, P.; Gu, M.; Bitter, M.; Delgado-Aparicio, L.; Hill, K. W.; Reinke, M.; Rice, J. E.; Podpaly, Y.

    2010-10-15

    New x-ray imaging crystal spectrometers, currently operating on Alcator C-Mod, NSTX, EAST, and KSTAR, record spectral lines of highly charged ions, such as Ar{sup 16+}, from multiple sightlines to obtain profiles of ion temperature and of toroidal plasma rotation velocity from Doppler measurements. In the present work, we describe a new data analysis routine, which accounts for the specific geometry of the sightlines of a curved-crystal spectrometer and includes corrections for the Johann error to facilitate the tomographic inversion. Such corrections are important to distinguish velocity induced Doppler shifts from instrumental line shifts caused by the Johann error. The importance of this correction is demonstrated using data from Alcator C-Mod.

  1. Communication: Fixed-node errors in quantum Monte Carlo: Interplay of electron density and node nonlinearities

    SciTech Connect

    Rasch, Kevin M.; Hu, Shuming; Mitas, Lubos

    2014-01-28

    We elucidate the origin of large differences (two-fold or more) in the fixed-node errors between the first- vs second-row systems for single-configuration trial wave functions in quantum Monte Carlo calculations. This significant difference in the valence fixed-node biases is studied across a set of atoms, molecules, and Si and C solid crystals. We show that the key features which affect the fixed-node errors are the differences in electron density and the degree of node nonlinearity. The findings reveal how the accuracy of the quantum Monte Carlo varies across a variety of systems, provide new perspectives on the origins of the fixed-node biases in calculations of molecular and condensed systems, and carry implications for pseudopotential constructions for heavy elements.

  2. Reduction of the pulse spike-cut error in Fourier-deconvolved lidar profiles

    SciTech Connect

    Stoyanov, D.V.; Gurdev, L.L.; Dreischuh, T.N.

    1996-08-01

    A simple approach is analyzed and applied to the National Oceanic and Atmospheric Administration (NOAA) Doppler lidar data to reduce the error in Fourier-deconvolved lidar profiles that is caused by spike-cut uncertainty in the laser pulse shape, i.e., uncertainty of the type of not well-recorded (cut, missed) pulse spikes. Such a type of uncertainty is intrinsic to the case of TE (TEA) CO{sub 2} laser transmitters. This approach requires only an estimate of the spike area to be known. The result from the analytical estimation of error reduction is in agreement with the results from the NOAA lidar data processing and from computer simulation. © 1996 Optical Society of America.

  3. Solar neutrino experiments: An update

    SciTech Connect

    Hahn, R.L.

    1993-12-31

    The situation in solar neutrino physics has changed drastically in the past few years, so that now there are four neutrino experiments in operation, using different methods to look at different regions of the solar neutrino energy spectrum. These experiments are the radiochemical {sup 37}Cl Homestake detector, the realtime Kamiokande detector, and the different forms of radiochemical {sup 71}Ga detectors used in the GALLEX and SAGE projects. It is noteworthy that all of these experiments report a deficit of observed neutrinos relative to the predictions of standard solar models (although in the case of the gallium detectors, the statistical errors are still relatively large). This paper reviews the basic principles of operation of these neutrino detectors, reports their latest results, and discusses some theoretical interpretations. The progress of three realtime neutrino detectors that are currently under construction, Super-Kamiokande, SNO, and Borexino, is also discussed.

  4. Techniques for reducing error in the calorimetric measurement of low wattage items

    SciTech Connect

    Sedlacek, W.A.; Hildner, S.S.; Camp, K.L.; Cremers, T.L.

    1993-08-01

    The increased need for the measurement of low wattage items with production calorimeters has required the development of techniques to maximize the precision and accuracy of the calorimeter measurements. An error model for calorimetry measurements is presented. This model is used as a basis for optimizing calorimetry measurements through baseline interpolation. The method was applied to the heat measurement of over 100 items and the results compared to chemistry assay and mass spectroscopy.
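Baseline interpolation of the kind mentioned above can be sketched in a few lines. This is a hypothetical illustration, not the authors' error model: the calorimeter baseline power is measured before and after the item run, linearly interpolated to the sample time, and subtracted from the gross reading. All numbers are invented.

```python
def interpolated_baseline(t, t0, b0, t1, b1):
    # Linear interpolation of the calorimeter baseline power between
    # a pre-run measurement (t0, b0) and a post-run measurement (t1, b1).
    return b0 + (b1 - b0) * (t - t0) / (t1 - t0)

def net_power(measured_w, t, t0, b0, t1, b1):
    # Item wattage = gross reading minus the drifting baseline at time t.
    return measured_w - interpolated_baseline(t, t0, b0, t1, b1)

# Hypothetical low-wattage item: the baseline drifts from 1.00 W to 1.50 W
# over a 10 h run, and the gross reading at mid-run (t = 5 h) is 2.30 W.
p = net_power(2.30, t=5.0, t0=0.0, b0=1.00, t1=10.0, b1=1.50)
```

For low-wattage items the baseline drift can be comparable to the item power itself, which is why interpolating it rather than assuming it constant matters.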

  5. Accuracy of the European solar water heater test procedure. Part 1: Measurement errors and parameter estimates

    SciTech Connect

    Rabl, A.; Leide, B.; Carvalho, M.J.; Collares-Pereira, M.; Bourges, B.

    1991-01-01

    The Collector and System Testing Group (CSTG) of the European Community has developed a procedure for testing the performance of solar water heaters. This procedure treats a solar water heater as a black box with input-output parameters that are determined by all-day tests. In the present study the authors carry out a systematic analysis of the accuracy of this procedure, in order to answer the question: what tolerances should one impose for the measurements, and how many days of testing should one demand under what meteorological conditions, in order to be able to guarantee a specified maximum error for the long term performance? The methodology is applicable to other test procedures as well. The present paper (Part 1) examines the measurement tolerances of the current version of the procedure and derives a priori estimates of the errors of the parameters; these errors are then compared with the regression results of the Round Robin test series. The companion paper (Part 2) evaluates the consequences for the accuracy of the long term performance prediction. The authors conclude that the CSTG test procedure makes it possible to predict the long term performance with standard errors around 5% for sunny climates (10% for cloudy climates). The apparent precision of individual test sequences is deceptive because of large systematic discrepancies between different sequences. Better results could be obtained by imposing tighter control on the constancy of the cold water supply temperature and on the environment of the test, the latter by enforcing the recommendation for the ventilation of the collector.

  6. Energy Conservation Program: Establishment of Procedures for Requests for Correction of Errors in Rules

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    establishment of procedures for requests for correction of errors in rules is an action issued by the Department of Energy. Though it is not intended or expected, should any discrepancy occur between the document posted here and the document published in the Federal Register, the Federal Register publication controls. This document is being made available through the Internet solely as a means to facilitate the public's access to this document. 6450-01-P DEPARTMENT OF ENERGY 10 CFR Parts 430 and

  7. MULTI-MODE ERROR FIELD CORRECTION ON THE DIII-D TOKAMAK

    SciTech Connect

    SCOVILLE, JT; LAHAYE, RJ

    2002-10-01

    OAK A271 MULTI-MODE ERROR FIELD CORRECTION ON THE DIII-D TOKAMAK. Error field optimization on DIII-D tokamak plasma discharges has routinely been done for the last ten years with the use of the external ''n = 1 coil'' or the ''C-coil''. The optimum level of correction coil current is determined by the ability to avoid the locked mode instability and access previously unstable parameter space at low densities. The locked mode typically has toroidal and poloidal mode numbers n = 1 and m = 2, respectively, and it is this component that initially determined the correction coil current and phase. Realization of the importance of nearby n = 1 mode components m = 1 and m = 3 has led to a revision of the error field correction algorithm. Viscous and toroidal mode coupling effects suggested the need for additional terms in the expression for the radial ''penetration'' field B{sub pen} that can induce a locked mode. To incorporate these effects, the low density locked mode threshold database was expanded. A database of discharges at various toroidal fields, plasma currents, and safety factors was supplemented with data from an experiment in which the fields of the n = 1 coil and C-coil were combined, allowing the poloidal mode spectrum of the error field to be varied. A multivariate regression analysis of this new low density locked mode database was done to determine the low density locked mode threshold scaling relationship n{sub e} {proportional_to} B{sub T}{sup -0.01} q{sub 95}{sup -0.79} B{sub pen} and the coefficients of the poloidal mode components in the expression for B{sub pen}. Improved plasma performance is achieved by optimizing B{sub pen} by varying the applied correction coil currents.
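The multivariate regression step can be sketched in miniature. The snippet below uses synthetic, noise-free data, not the DIII-D database: taking logarithms turns the power-law threshold scaling n{sub e} {proportional_to} B{sub T}{sup a} q{sub 95}{sup b} B{sub pen}{sup c} into a linear model, whose coefficients follow from the normal equations.

```python
import math, random

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def fit_power_law(rows, targets):
    # Least squares on ln(n_e) = ln(C) + a*ln(B_T) + b*ln(q95) + c*ln(B_pen):
    # build the design matrix X and solve the normal equations X^T X beta = X^T y.
    X = [[1.0] + [math.log(v) for v in row] for row in rows]
    y = [math.log(t) for t in targets]
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yv for r, yv in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

random.seed(1)
true = (-0.01, -0.79, 1.0)   # exponents for B_T, q95, B_pen, taken from the abstract
rows, targets = [], []
for _ in range(200):
    bt, q95, bpen = random.uniform(1, 2), random.uniform(2, 6), random.uniform(0.1, 1)
    rows.append((bt, q95, bpen))
    targets.append(2.5 * bt ** true[0] * q95 ** true[1] * bpen ** true[2])
beta = fit_power_law(rows, targets)   # [ln C, a, b, c]
```

With noiseless synthetic data the fit recovers the exponents exactly; the real analysis additionally estimates the poloidal-mode coefficients inside B{sub pen}.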

  8. U-039: ISC Update: BIND 9 Resolver crashes after logging an error in query.c

    Energy.gov [DOE]

    A remote server can cause the target connected client to crash. Organizations across the Internet are reporting crashes interrupting service on BIND 9 nameservers performing recursive queries. Affected servers crash after logging an error in query.c with the following message: "INSIST(! dns_rdataset_isassociated(sigrdataset))" Multiple versions are reported as being affected, including all currently supported release versions of ISC BIND 9. ISC is actively investigating the root cause and working to produce patches which avoid the crash.

  9. U-038: BIND 9 Resolver crashes after logging an error in query.c

    Energy.gov [DOE]

    A remote server can cause the target connected client to crash. Organizations across the Internet are reporting crashes interrupting service on BIND 9 nameservers performing recursive queries. Affected servers crash after logging an error in query.c with the following message: "INSIST(! dns_rdataset_isassociated(sigrdataset))" Multiple versions are reported as being affected, including all currently supported release versions of ISC BIND 9. ISC is actively investigating the root cause and working to produce patches which avoid the crash.

  10. Mitigating the Effect of Latency Errors Between Remote HIL Systems

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Innovation Portal Mitigating the Effect of Latency Errors Between Remote HIL Systems National Renewable Energy Laboratory Contact NREL About This Technology Technology Marketing Summary Several research institutions are pursuing virtually connected, large-scale energy systems integration testbeds through the use of remote hardware-in-the-loop (HIL) techniques. This is driven by the ability to share laboratory resources that are physically separated (often over large geographical distances)

  11. Investigating the Correlation Between Wind and Solar Power Forecast Errors in the Western Interconnection: Preprint

    SciTech Connect

    Zhang, J.; Hodge, B. M.; Florita, A.

    2013-05-01

    Wind and solar power generation differs from conventional energy generation because of the variable and uncertain nature of its power output. This variability and uncertainty can have significant impacts on grid operations. Thus, short-term forecasting of wind and solar generation is uniquely helpful for power system operations to balance supply and demand in an electricity system. This paper investigates the correlation between wind and solar power forecasting errors.
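At its core, a correlation analysis of this kind reduces to computing a correlation coefficient between the two error series. A minimal sketch with invented hourly errors (the real study uses Western Interconnection forecast data):

```python
import math

def pearson(xs, ys):
    # Sample Pearson correlation coefficient between two error series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical hourly forecast errors (MW), for illustration only.
wind_err  = [5.0, -3.0, 2.0, -1.0, 4.0, -2.0]
solar_err = [4.0, -2.5, 1.5, -0.5, 3.5, -1.5]
r = pearson(wind_err, solar_err)
```

A value of r near zero would mean wind and solar errors tend to cancel in aggregate, which matters for reserve sizing; a strongly positive r means they compound.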

  12. Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report

    Weekly Natural Gas Storage Report

    Natural Gas Storage Report Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report Release date: March 1, 2016 The U.S. Energy Information Administration (EIA) collects and publishes natural gas storage information on a monthly and weekly basis. The Form EIA-191, Monthly Underground Natural Gas Storage Report, is a census survey that collects field-level information from all underground natural gas storage operators in the United States known to EIA.

  13. Analytical Tests for Ray Effect Errors in Discrete Ordinate Methods for Solving the Neutron Transport Equation

    SciTech Connect

    Chang, B

    2004-03-22

    This paper contains three analytical solutions of transport problems which can be used to test ray-effect errors in the numerical solutions of the Boltzmann Transport Equation (BTE). We derived the first two solutions and the third was shown to us by M. Prasad. Since this paper is intended to be an internal LLNL report, no attempt was made to find the original derivations of the solutions in the literature in order to cite the authors for their work.

  14. Analysis of Cloud Variability and Sampling Errors in Surface and Satellite Measurements

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Analysis of Cloud Variability and Sampling Errors in Surface and Satellite Measurements Z. Li, M. C. Cribb, and F.-L. Chang Earth System Science Interdisciplinary Center University of Maryland College Park, Maryland A. P. Trishchenko and Y. Luo Canada Centre for Remote Sensing Ottawa, Ontario, Canada Introduction Radiation measurements have been widely employed for evaluating cloud parameterization schemes and model simulation results. As the most comprehensive program aiming to improve cloud

  15. Community Relations Plan

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Environmental Stewardship Environmental Protection Community Relations Plan Community Relations Plan Consultations, communications, agreements, and disagreements...

  16. An effective approach for the minimization of errors in capacitance-voltage carrier profiling of quantum structures

    SciTech Connect

    Biswas, Dipankar; Panda, Siddhartha

    2014-04-07

    Experimental capacitance–voltage (C-V) profiling of semiconductor heterojunctions and quantum wells has remained ever important and relevant. The apparent carrier distributions (ACDs) thus obtained reveal the carrier depletions, carrier peaks and their positions, in and around the quantum structures. Inevitable errors encountered in such measurements are the deviations of the peak concentrations of the ACDs and their positions from the actual carrier peaks obtained from quantum mechanical computations with the fundamental parameters. In spite of the very wide use of the C-V method, comprehensive discussions on the qualitative and quantitative nature of the errors remain wanting. The errors are dependent on the fundamental parameters, the temperature of measurement, the Debye length, and the series resistance. In this paper, the errors have been studied as functions of doping concentration, band offset, and temperature. From this study, a rough estimate may be drawn about the error. It is seen that the error in the position of the ACD peak decreases at higher doping, higher band offset, and lower temperature, whereas the error in the peak concentration changes in a strange fashion. A completely new method is introduced for deriving the carrier profiles from C-V measurements on quantum structures, minimizing the errors that are inevitable in the conventional formulation.

  17. An Optimized Autoregressive Forecast Error Generator for Wind and Load Uncertainty Study

    SciTech Connect

    De Mello, Phillip; Lu, Ning; Makarov, Yuri V.

    2011-01-17

    This paper presents a first-order autoregressive algorithm to generate real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast errors. The methodology aims at producing random wind and load forecast time series reflecting the autocorrelation and cross-correlation of historical forecast data sets. The statistical characteristics considered are the means, standard deviations, autocorrelations, and cross-correlations of the historical errors. A stochastic optimization routine is developed to minimize the differences between the statistical characteristics of the generated time series and the targeted ones. An optimal set of parameters is obtained and used to produce the RT, HA, and DA forecasts in due order of succession. This method, although implemented as a first-order autoregressive random forecast error generator, can be extended to higher orders. Results show that the methodology produces random series with desired statistics derived from real data sets provided by the California Independent System Operator (CAISO). The wind and load forecast error generator is currently used in wind integration studies to generate wind and load inputs for stochastic planning processes. Our future studies will focus on reflecting the diurnal and seasonal differences of the wind and load statistics and implementing them in the random forecast generator.
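A first-order autoregressive error generator can be sketched as follows. This is a simplified single-series illustration, not the authors' optimized multi-series implementation: an AR(1) recursion whose innovation variance is chosen so the generated errors match a target standard deviation and lag-1 autocorrelation.

```python
import math, random

def ar1_errors(n, sigma, rho, seed=0):
    # AR(1) series e_t = rho*e_{t-1} + w_t, with the innovation std chosen as
    # sigma*sqrt(1 - rho^2) so the stationary standard deviation equals sigma.
    rng = random.Random(seed)
    w_std = sigma * math.sqrt(1.0 - rho * rho)
    e = [rng.gauss(0.0, sigma)]          # start from the stationary distribution
    for _ in range(n - 1):
        e.append(rho * e[-1] + rng.gauss(0.0, w_std))
    return e

def lag1_autocorr(x):
    # Sample lag-1 autocorrelation, used to check the generated series.
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

# Hypothetical targets: 5% of capacity error std, 0.8 hour-to-hour persistence.
series = ar1_errors(20000, sigma=0.05, rho=0.8, seed=42)
```

The paper's optimization step tunes such parameters jointly across the RT, HA, and DA series so that cross-correlations are also matched.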

  18. Uncoupling nicotine mediated motoneuron axonal pathfinding errors and muscle degeneration in zebrafish

    SciTech Connect

    Welsh, Lillian; Tanguay, Robert L.; Svoboda, Kurt R.

    2009-05-15

    Zebrafish embryos offer a unique opportunity to investigate the mechanisms by which nicotine exposure impacts early vertebrate development. Embryos exposed to nicotine become functionally paralyzed by 42 hpf suggesting that the neuromuscular system is compromised in exposed embryos. We previously demonstrated that secondary spinal motoneurons in nicotine-exposed embryos were delayed in development and that their axons made pathfinding errors (Svoboda, K.R., Vijayaraghaven, S., Tanguay, R.L., 2002. Nicotinic receptors mediate changes in spinal motoneuron development and axonal pathfinding in embryonic zebrafish exposed to nicotine. J. Neurosci. 22, 10731-10741). In that study, we did not consider the potential role that altered skeletal muscle development caused by nicotine exposure could play in contributing to the errors in spinal motoneuron axon pathfinding. In this study, we show that an alteration in skeletal muscle development occurs in tandem with alterations in spinal motoneuron development upon exposure to nicotine. The alteration in the muscle involves the binding of nicotine to the muscle-specific AChRs. The nicotine-induced alteration in muscle development does not occur in the zebrafish mutant (sofa potato, [sop]), which lacks muscle-specific AChRs. Even though muscle development is unaffected by nicotine exposure in sop mutants, motoneuron axonal pathfinding errors still occur in these mutants, indicating a direct effect of nicotine exposure on nervous system development.

  19. Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint

    SciTech Connect

    Florita, A.; Hodge, B. M.; Milligan, M.

    2012-08-01

    The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
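Quantifying the fit to a theoretical distribution, as described above, can be illustrated with a one-sample Kolmogorov-Smirnov distance against a fitted normal. The error sample below is invented; the study's actual goodness-of-fit metrics and candidate distributions may differ.

```python
import math

def normal_cdf(x, mu, sigma):
    # CDF of a normal distribution via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, mu, sigma):
    # One-sample Kolmogorov-Smirnov distance between the empirical CDF of the
    # forecast errors and a fitted normal CDF (maximum vertical discrepancy).
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = normal_cdf(x, mu, sigma)
        d = max(d, abs((i + 1) / n - cdf), abs(i / n - cdf))
    return d

# Hypothetical forecast-error sample (fraction of rated capacity), for illustration.
errors = [-0.12, -0.05, -0.02, 0.0, 0.01, 0.03, 0.04, 0.08, 0.15]
mu = sum(errors) / len(errors)
sigma = math.sqrt(sum((e - mu) ** 2 for e in errors) / (len(errors) - 1))
d = ks_statistic(errors, mu, sigma)
```

A smaller d indicates a better fit; comparing d across candidate distributions (normal, heavier-tailed alternatives) is one way to contrast the "frequency distribution alternatives" the abstract mentions.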

  20. A method for the quantification of model form error associated with physical systems.

    SciTech Connect

    Wallen, Samuel P.; Brake, Matthew Robert

    2014-03-01

    In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.

  1. Precise method of compensating radiation-induced errors in a hot-cathode-ionization gauge with correcting electrode

    SciTech Connect

    Saeki, Hiroshi; Magome, Tamotsu

    2014-10-06

    To compensate pressure-measurement errors caused by a synchrotron radiation environment, a precise method using a hot-cathode-ionization-gauge head with correcting electrode was developed and tested in a simulation experiment with excess electrons in the SPring-8 storage ring. This precise method to improve the measurement accuracy can correctly reduce the pressure-measurement errors caused by electrons originating from the external environment, and originating from the primary gauge filament influenced by spatial conditions of the installed vacuum-gauge head. As a result of the simulation experiment to confirm the performance reducing the errors caused by the external environment, the pressure-measurement error using this method was less than several percent in the pressure range from 10{sup -5} Pa to 10{sup -8} Pa. After the experiment, to confirm the performance reducing the error caused by spatial conditions, an additional experiment was carried out using a sleeve and showed that the improved function was available.

  2. A High-Precision Instrument for Mapping of Rotational Errors in Rotary Stages

    DOE PAGES [OSTI]

    Xu, W.; Lauer, K.; Chu, Y.; Nazaretski, E.

    2014-11-02

    A rotational stage is a key component of every X-ray instrument capable of providing tomographic or diffraction measurements. To perform accurate three-dimensional reconstructions, runout errors due to imperfect rotation (e.g. circle of confusion) must be quantified and corrected. A dedicated instrument capable of full characterization and circle of confusion mapping in rotary stages down to the sub-10 nm level has been developed. A high-stability design, with an array of five capacitive sensors, allows simultaneous measurements of wobble, radial and axial displacements. The developed instrument has been used for characterization of two mechanical stages which are part of an X-ray microscope.

  3. Revenue metering error caused by induced voltage from adjacent transmission lines

    SciTech Connect

    Hughes, M.B.

    1992-04-01

    A large zero sequence voltage was found to have been induced onto a 138 kV line from adjacent 500 kV lines where these share the same transmission right-of-way. This zero sequence voltage distorted the 2-1/2-element revenue metering schemes used for two large industrial customers supplied directly from the affected 138 kV line. As a result, these two customers were overcharged, on average, approximately 3.5% for 15 years. This paper describes the work done to trace the origins of the zero sequence voltage, quantify the metering error, and calculate customer refunds which, in the end, totalled $4 million.

  4. SU-E-T-170: Evaluation of Rotational Errors in Proton Therapy Planning of Lung Cancer

    SciTech Connect

    Rana, S; Zhao, L; Ramirez, E; Singh, H; Zheng, Y

    2014-06-01

    Purpose: To investigate the impact of rotational (roll, yaw, and pitch) errors in proton therapy planning of lung cancer. Methods: A lung cancer case treated at our center was used in this retrospective study. The original plan was generated using two proton fields (posterior-anterior and left-lateral) with the XiO treatment planning system (TPS) and delivered using a uniform scanning proton therapy system. First, the computed tomography (CT) set of the original lung treatment plan was re-sampled for rotational (roll, yaw, and pitch) angles ranging from -5° to +5°, with an increment of 2.5°. Second, 12 new proton plans were generated in XiO using the 12 re-sampled CT datasets. The same beam conditions, isocenter, and devices were used in the new treatment plans as in the original plan. All 12 new proton plans were compared with the original plan for planning target volume (PTV) coverage and maximum dose to spinal cord (cord Dmax). Results: PTV coverage was reduced in all 12 new proton plans when compared to that of the original plan. Specifically, PTV coverage was reduced by 0.03% to 1.22% for roll, by 0.05% to 1.14% for yaw, and by 0.10% to 3.22% for pitch errors. In comparison to the original plan, the cord Dmax in the new proton plans was reduced by 8.21% to 25.81% for +2.5° to +5° pitch, by 5.28% to 20.71% for +2.5° to +5° yaw, and by 5.28% to 14.47% for -2.5° to -5° roll. In contrast, cord Dmax was increased by 3.80% to 3.86% for -2.5° to -5° pitch, by 0.63% to 3.25% for -2.5° to -5° yaw, and by 3.75% to 4.54% for +2.5° to +5° roll. Conclusion: PTV coverage was reduced by up to 3.22% for a rotational error of 5°. The cord Dmax could increase or decrease depending on the direction of the rotational error, beam angles, and the location of the lung tumor.
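The geometric effect of such a rotational setup error can be sketched with plain rotation matrices. The axis convention and the 100 mm off-isocenter point below are assumptions for illustration, not the XiO re-sampling procedure:

```python
import math

def rot_x(a_deg):
    # Rotation about the x axis ("roll" in this hypothetical convention), degrees.
    c, s = math.cos(math.radians(a_deg)), math.sin(math.radians(a_deg))
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a_deg):
    # Rotation about the y axis ("pitch" here).
    c, s = math.cos(math.radians(a_deg)), math.sin(math.radians(a_deg))
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_z(a_deg):
    # Rotation about the z axis ("yaw" here).
    c, s = math.cos(math.radians(a_deg)), math.sin(math.radians(a_deg))
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def apply(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

# A 2.5-degree pitch error applied to a point 100 mm from isocenter:
R = matmul(rot_z(0.0), matmul(rot_y(2.5), rot_x(0.0)))
p = apply(R, [0.0, 0.0, 100.0])
```

The displaced point moves along a chord of length 2 r sin(theta/2), so even a 2.5° error shifts a point 100 mm from isocenter by roughly 4.4 mm, which is consistent with the coverage losses reported growing with the rotation angle.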

  5. Error correction for vertical surveys conducted over a subsiding longwall mining panel

    SciTech Connect

    Hughes, A.

    1996-12-31

    The difference between a conventional land survey and a survey of subsiding ground is discussed, and a correction method is formulated for surveys conducted on subsiding ground. The area over the longwall mining panel subsided detectable amounts during the time required to conduct the survey when subsidence was at its highest rate, which introduces error into the survey. When the ground subsides before the survey is completed, the survey no longer represents the locations of all points at a common point in time, which is a basic assumption of conventional land surveying. Because conventional methods of correction average the movement of subsiding points and apply those amounts of movement to points unaffected by subsidence, a different correction method was needed. The correction method used here relies on multiple surveys to calculate rates of subsidence for each point in the survey. The subsidence rates were then used to estimate the location of each point at a common time. Results are presented using the correction for subsiding ground and using no correction. The different results of the same surveys are shown in terms of elevations and curvatures. The significance of the different types of corrections is discussed, and the compounding of error when calculating curvatures is demonstrated.

  6. Accurate description of torsion potentials in conjugated polymers using density functionals with reduced self-interaction error

    SciTech Connect

    Sutton, Christopher; Gray, Matthew T.; Brunsfeld, Max; Parrish, Robert M.; Sherrill, C. David; Sears, John S.; Brédas, Jean-Luc; Körzdörfer, Thomas

    2014-02-07

    We investigate the torsion potentials in two prototypical π-conjugated polymers, polyacetylene and polydiacetylene, as a function of chain length using different flavors of density functional theory. Our study provides a quantitative analysis of the delocalization error in standard semilocal and hybrid density functionals and demonstrates how it can influence structural and thermodynamic properties. The delocalization error is quantified by evaluating the many-electron self-interaction error (MESIE) for fractional electron numbers, which allows us to establish a direct connection between the MESIE and the error in the torsion barriers. The use of non-empirically tuned long-range corrected hybrid functionals results in a very significant reduction of the MESIE and leads to an improved description of torsion barrier heights. In addition, we demonstrate how our analysis allows the determination of the effective conjugation length in polyacetylene and polydiacetylene chains.

  7. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    SciTech Connect

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-23

    Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors.

    To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s−1) and errors in the vertical velocity measurement
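The DBS retrieval described above can be sketched for an idealized four-beam (N, E, S, W) scan. The beam geometry and the 62° elevation below are assumptions for illustration; under homogeneous flow the inversion is exact, and it is precisely the wake's inhomogeneity across the scanning cone that breaks it.

```python
import math

def los_velocity(u, v, w, az_deg, el_deg):
    # Radial (line-of-sight) velocity seen by a beam at the given azimuth
    # (degrees from north) and elevation above the horizontal.
    az, el = math.radians(az_deg), math.radians(el_deg)
    return (u * math.sin(az) * math.cos(el)
            + v * math.cos(az) * math.cos(el)
            + w * math.sin(el))

def dbs_retrieve(v_n, v_e, v_s, v_w, el_deg):
    # Invert four beams (N, E, S, W) for (u, v, w), assuming the flow is
    # homogeneous across the volume spanned by the beams.
    el = math.radians(el_deg)
    u = (v_e - v_w) / (2.0 * math.cos(el))
    v = (v_n - v_s) / (2.0 * math.cos(el))
    w = (v_n + v_e + v_s + v_w) / (4.0 * math.sin(el))
    return u, v, w

# Homogeneous flow: the retrieval reproduces the true wind vector exactly.
truth = (3.0, 5.5, 0.2)   # hypothetical (east, north, vertical) wind, m/s
beams = [los_velocity(*truth, az, 62.0) for az in (0.0, 90.0, 180.0, 270.0)]
est = dbs_retrieve(*beams, 62.0)
```

If the wind differed between the opposing beam positions, as it does inside a wake, the differencing in dbs_retrieve would attribute that spatial difference to the wind vector itself, producing exactly the kind of error the study quantifies.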

  8. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    DOE PAGES [OSTI]

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-23

    Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s−1) and errors in the vertical velocity measurement exceed the actual

  9. Minimising the error in eigenvalue calculations involving the Boltzmann transport equation using goal-based adaptivity on unstructured meshes

    SciTech Connect

    Goffin, Mark A.; Baker, Christopher M.J.; Buchan, Andrew G.; Pain, Christopher C.; Eaton, Matthew D.; Smith, Paul N.

    2013-06-01

    This article presents a method for goal-based anisotropic adaptive methods for the finite element method applied to the Boltzmann transport equation. The neutron multiplication factor, k{sub eff}, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for k{sub eff} with directional dependence. General error estimators are derived for any given functional of the flux and applied to k{sub eff} to acquire the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residual respectively. The Hessian is used as an approximation of the interpolation error in the solution which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit from representing the flux from each energy group on a specifically optimised mesh. The k{sub eff} goal-based adaptive method was applied to three examples which illustrate the superior accuracy in criticality problems that can be obtained.

  10. A numerical study of geometry dependent errors in velocity, temperature, and density measurements from single grid planar retarding potential analyzers

    SciTech Connect

    Davidson, R. L.; Earle, G. D.; Heelis, R. A.; Klenzing, J. H.

    2010-08-15

    Planar retarding potential analyzers (RPAs) have been utilized numerous times on high profile missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellite Program to measure plasma composition, temperature, density, and the velocity component perpendicular to the plane of the instrument aperture. These instruments use biased grids to approximate ideal biased planes. These grids introduce perturbations in the electric potential distribution inside the instrument and when unaccounted for cause errors in the measured plasma parameters. Traditionally, the grids utilized in RPAs have been made of fine wires woven into a mesh. Previous studies on the errors caused by grids in RPAs have approximated woven grids with a truly flat grid. Using a commercial ion optics software package, errors in inferred parameters caused by both woven and flat grids are examined. A flat grid geometry shows the smallest temperature and density errors, while the double thick flat grid displays minimal errors for velocities over the temperature and velocity range used. Wire thickness along the dominant flow direction is found to be a critical design parameter in regard to errors in all three inferred plasma parameters. The results shown for each case provide valuable design guidelines for future RPA development.

  11. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

    SciTech Connect

    Zhang, Jing Ghate, Sujata V.; Yoon, Sora C.; Lo, Joseph Y.; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

    2014-09-15

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different

  12. Optical pattern recognition architecture implementing the mean-square error correlation algorithm

    DOEpatents

    Molley, Perry A.

    1991-01-01

    An optical architecture implementing the mean-square error correlation algorithm, MSE = Σ[I − R]², for discriminating the presence of a reference image R in an input image scene I by computing the mean-square error between a time-varying reference image signal s₁(t) and a time-varying input image signal s₂(t) includes a laser diode light source which is temporally modulated by a double-sideband suppressed-carrier source modulation signal I₁(t) having the form I₁(t) = A₁[1 + √2 m₁ s₁(t) cos(2πf₀t)], and the modulated light output from the laser diode source is diffracted by an acousto-optic deflector. The resultant intensity of the +1 diffracted order from the acousto-optic device is given by I₂(t) = A₂[1 + 2m₂²s₂²(t) − 2√2 m₂ s₂(t) cos(2πf₀t)]. The time integration of the two signals I₁(t) and I₂(t) on the CCD detector plane produces the result R(τ) of the mean-square error having the form R(τ) = A₁A₂{[T] + [2m₂²·∫s₂²(t−τ)dt] − [2m₁m₂ cos(2πf₀τ)·∫s₁(t)s₂(t−τ)dt]}, where: s₁(t) is the signal input to the diode modulation source; s₂(t) is the signal input to the AOD modulation source; A₁ is the light intensity; A₂ is the diffraction efficiency; m₁ and m₂ are constants that determine the signal-to-bias ratio; f₀ is the frequency offset between the oscillator at f_c and the modulation at f_c + f₀; and a₀ and a₁ are constants chosen to bias the diode source and the acousto-optic deflector into their respective linear operating regions so that the diode source exhibits a linear intensity characteristic and the AOD exhibits a linear amplitude characteristic.
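    Numerically, the correlation surface the architecture computes optically is just the mean-square error between the reference signal and shifted copies of the input; a minimal NumPy sketch (the signal length and the embedded shift are illustrative):

    ```python
    import numpy as np

    def mse_correlation(s1, s2):
        """R(tau) = sum_t [s1(t) - s2(t + tau)]^2 over all circular
        shifts tau; the minimum marks where the reference matches."""
        n = len(s1)
        return np.array([np.sum((s1 - np.roll(s2, -tau)) ** 2)
                         for tau in range(n)])

    # A reference pattern embedded in the input at a known shift is
    # recovered as the minimum of the MSE correlation surface.
    rng = np.random.default_rng(0)
    reference = rng.standard_normal(64)
    scene = np.roll(reference, 17)      # reference present, shifted by 17
    R = mse_correlation(reference, scene)
    print(int(np.argmin(R)))            # shift at which the MSE vanishes
    ```

    Unlike a plain cross-correlation peak, the MSE minimum is zero only for an exact match, which is the discrimination property the patent exploits.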

  13. Effect of MLC leaf position, collimator rotation angle, and gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma

    SciTech Connect

    Bai, Sen; Li, Guangjun; Wang, Maojie; Jiang, Qinfeng; Zhang, Yingjie; Wei, Yuquan

    2013-07-01

    The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulated plans and the clinical plans using evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. Dose distribution was highly sensitive to systematic MLC leaf position errors, with the sensitivity depending on field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, the maximum mean dose deviations, observed in the parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be regularly conducted for MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, it can be concluded that setting a proper threshold for allowed errors of collimator and gantry rotation angle may increase treatment efficacy and reduce treatment time.

  14. SU-E-P-36: Evaluation of MLC Positioning Errors in Dynamic IMRT Treatments by Analyzing Dynalog Files

    SciTech Connect

    Olasolo, J; Pellejero, S; Gracia, M; Gallardo, N; Martin, M; Lozares, S; Maneru, F; Bragado, L; Miquelez, S; Rubio, A

    2015-06-15

    Purpose: To assess the accuracy of MLC positioning of Varian linear accelerators in the dynamic IMRT technique, from the analysis of dynalog files generated by the MLC controller. Methods: In Clinac accelerators (pre-TrueBeam technology), the control system has an approximately 50 ms delay (one control cycle time). The system therefore compares each measured position to the planned position of the next control cycle. As confirmed by Varian technical support, this effect causes measured positions to appear in dynalogs one cycle out of phase with respect to the planned positions. Around 9000 dynalogs were analyzed, coming from the three linear accelerators of our center (one Trilogy and two Clinac 21EX) equipped with a Millennium 120 MLC. In order to compare our results to recent publications, leaf positioning errors (RMS and 95th percentile) were calculated with and without the delay effect. Dynalogs were analyzed using an in-house Matlab software tool. Results: The RMS errors were 0.341, 0.339, and 0.348 mm for the three linacs, with an average of 0.343 mm. The 95th percentiles of the error were 0.617, 0.607, and 0.625 mm, with an average of 0.617 mm. A recent multi-institution study carried out by Kerns et al. found a mean leaf RMS error of 0.32 mm and a 95th percentile error value of 0.64 mm. Without the delay effect, the mean leaf RMS errors obtained were 0.040, 0.042, and 0.038 mm for each treatment machine, with an average of 0.040 mm. The 95th percentile error values obtained were 0.057, 0.058, and 0.054 mm, with an average of 0.056 mm. Conclusion: Results obtained for the mean leaf RMS error and the mean 95th percentile were consistent with the multi-institution study. Error statistics calculated with the delay effect are significantly larger due to the speed-proportional and systematic leaf offset. Consequently, it is proposed to correct this effect in dynalog analysis to determine the MLC performance.
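    The one-cycle delay correction described in the Methods can be sketched as follows; the array names and the synthetic constant-speed leaf trace are illustrative assumptions, not the authors' Matlab tool:

    ```python
    import numpy as np

    def leaf_error_stats(planned, measured, correct_delay=True):
        """RMS and 95th-percentile leaf position error from dynalog-style
        samples taken once per ~50 ms control cycle.  With correct_delay
        each measured sample is compared against the planned position of
        the *previous* cycle, undoing the one-cycle phase lag."""
        planned = np.asarray(planned, dtype=float)
        measured = np.asarray(measured, dtype=float)
        if correct_delay:
            # drop the first measured / last planned sample to realign
            err = measured[1:] - planned[:-1]
        else:
            err = measured - planned
        return np.sqrt(np.mean(err ** 2)), np.percentile(np.abs(err), 95)

    # A leaf moving at constant speed: the measured trace lags the plan
    # by exactly one cycle, so the corrected statistics are zero.
    planned = np.linspace(0.0, 50.0, 101)       # mm, one sample per cycle
    measured = np.roll(planned, 1)
    measured[0] = planned[0]
    rms_raw, p95_raw = leaf_error_stats(planned, measured, correct_delay=False)
    rms_cor, p95_cor = leaf_error_stats(planned, measured, correct_delay=True)
    print(rms_raw > rms_cor)   # True: the delay inflates the apparent error
    ```

    This reproduces the paper's central observation: for a moving leaf, the uncorrected error statistic is dominated by the speed-proportional offset rather than true positioning error.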

  15. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    SciTech Connect

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results apply, for example, to lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.

  16. Error Assessment of Homogenized Cross Sections Generation for Whole Core Neutronic Calculation

    SciTech Connect

    Hursin, Mathieu; Kochunas, Brendan; Downar, Thomas J.

    2007-10-26

    The objective of the work here was to assess the errors introduced by using 2D, few group homogenized cross sections to perform neutronic analysis of BWR problems with significant axial heterogeneities. The 3D method of characteristics code DeCART is used to generate 2-group assembly homogenized cross sections first using a conventional 2D lattice model and then using a full 3D solution of the assembly. A single BWR fuel assembly model based on an advanced BWR lattice design is used with a typical void distribution applied to the fuel channel coolant. This model is validated against an MCNP model. A comparison of the cross sections is performed for the assembly homogenized planar cross sections from the DeCART 3D and DeCART 2D solutions.

  17. Inducible error-prone repair in B. subtilis. Final report, September 1, 1979-June 30, 1981

    SciTech Connect

    Yasbin, R. E.

    1981-06-01

    The research performed under this contract has been concentrated on the relationship between inducible DNA repair systems, mutagenesis, and the competent state in the gram-positive bacterium Bacillus subtilis. The following results have been obtained from this research: (1) competent Bacillus subtilis cells have been developed into a sensitive tester system for carcinogens; (2) competent B. subtilis cells have an efficient excision-repair system; however, this system will not function on bacteriophage DNA taken into the cell via the process of transfection; (3) DNA polymerase III is essential to the process of W-reactivation; (4) B. subtilis strains cured of their defective prophages have been isolated and are now being developed for gene cloning systems; (5) protoplasts of B. subtilis have been shown capable of acquiring DNA repair enzymes (i.e., enzyme therapy); and (6) a plasmid was characterized which enhanced inducible error-prone repair in a gram-positive organism.

  18. Geomechanical Analysis with Rigorous Error Estimates for a Double-Porosity Reservoir Model

    SciTech Connect

    Berryman, J G

    2005-04-11

    A model of random polycrystals of porous laminates is introduced to provide a means for studying geomechanical properties of double-porosity reservoirs. Calculations on the resulting earth reservoir model can proceed semi-analytically for studies of either the poroelastic or transport coefficients. Rigorous bounds of the Hashin-Shtrikman type provide estimates of overall bulk and shear moduli, and thereby also provide rigorous error estimates for geomechanical constants obtained from up-scaling based on a self-consistent effective medium method. The influence of hidden (or presumed unknown) microstructure on the final results can then be evaluated quantitatively. Detailed descriptions of the use of the model and some numerical examples showing typical results for the double-porosity poroelastic coefficients of a heterogeneous reservoir are presented.
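    For a simple two-phase isotropic mixture, bounds of the Hashin-Shtrikman type mentioned above take a closed form; this sketch uses the classical two-phase formula for the bulk modulus (with illustrative moduli), not Berryman's polycrystal-of-laminates construction:

    ```python
    def hs_bound(Ka, mua, fa, Kb, fb):
        """One Hashin-Shtrikman bound on the effective bulk modulus:
        phase b dispersed in a matrix of phase a (volume fractions
        fa + fb = 1).  Using the stiffer phase as matrix gives the
        upper bound; the softer phase gives the lower bound."""
        return Ka + fb / (1.0 / (Kb - Ka) + fa / (Ka + 4.0 * mua / 3.0))

    # Quartz-like stiff phase and a soft porous phase; moduli in GPa,
    # purely illustrative numbers.
    K1, mu1, f1 = 37.0, 44.0, 0.6
    K2, mu2, f2 = 15.0, 9.0, 0.4
    upper = hs_bound(K1, mu1, f1, K2, f2)   # stiff phase as matrix
    lower = hs_bound(K2, mu2, f2, K1, f1)   # soft phase as matrix
    reuss = 1.0 / (f1 / K1 + f2 / K2)       # harmonic (lower) average
    voigt = f1 * K1 + f2 * K2               # arithmetic (upper) average
    print(reuss < lower < upper < voigt)    # HS bounds tighten Voigt-Reuss
    ```

    The gap between `lower` and `upper` is exactly the kind of rigorous error estimate the abstract refers to: any admissible microstructure must yield an effective modulus inside that interval.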

  19. Up-scaling analysis with rigorous error estimates for poromechanics in random polycrystals of porous laminates

    SciTech Connect

    Berryman, J G

    2005-01-03

    A detailed analytical model of random polycrystals of porous laminates has been developed. This approach permits detailed calculations of poromechanics constants as well as transport coefficients. The resulting earth reservoir model allows studies of both geomechanics and fluid permeability to proceed semi-analytically. Rigorous bounds of the Hashin-Shtrikman type provide estimates of overall bulk and shear moduli, and thereby also provide rigorous error estimates for geomechanical constants obtained from up-scaling based on a self-consistent effective medium method. The influence of hidden or unknown microstructure on the final results can then be evaluated quantitatively. Descriptions of the use of the model and some examples of typical results on the poromechanics of such a heterogeneous reservoir are presented.

  20. Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD

    SciTech Connect

    Smith, Thomas Michael; Shadid, John N.; Pawlowski, Roger P.; Cyr, Eric C.; Wildey, Timothy Michael

    2014-01-01

    This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.

  1. Bound on quantum computation time: Quantum error correction in a critical environment

    SciTech Connect

    Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.

    2010-08-15

    We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.
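    The trace distance used above to compare the real and ideal logical states has a direct numerical form, D(ρ, σ) = ½ Tr|ρ − σ|; a small NumPy sketch:

    ```python
    import numpy as np

    def trace_distance(rho, sigma):
        """Trace distance D(rho, sigma) = (1/2) * Tr |rho - sigma|.
        For Hermitian density matrices this is half the sum of the
        absolute eigenvalues of the (Hermitian) difference."""
        eigvals = np.linalg.eigvalsh(rho - sigma)
        return 0.5 * np.sum(np.abs(eigvals))

    # Distinguishing a pure qubit state from the maximally mixed state
    pure = np.array([[1.0, 0.0], [0.0, 0.0]])
    mixed = 0.5 * np.eye(2)
    print(trace_distance(pure, mixed))   # 0.5
    ```

    In the paper's setting, the computation remains reliable as long as this distance between real and ideal logical states stays below a user-chosen threshold, which is what sets the time bound.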

  2. The Higgs transverse momentum distribution at NNLL and its theoretical errors

    DOE PAGES [OSTI]

    Neill, Duff; Rothstein, Ira Z.; Vaidya, Varun

    2015-12-15

    In this letter, we present the NNLL-NNLO transverse momentum Higgs distribution arising from gluon fusion. In the regime p⊥ << mh we include the resummation of the large logs at next-to-next-to-leading order and then match onto the αs² fixed order result near p⊥ ~ mh. By utilizing the rapidity renormalization group (RRG) we are able to smoothly match between the resummed, small-p⊥ regime and the fixed order regime. We give a detailed discussion of the scale dependence of the result, including an analysis of the rapidity scale dependence. Our central value differs from previous results, in the transition region as well as the tail, by an amount which is outside the error band. This difference is due to the fact that the RRG profile allows us to smoothly turn off the resummation.

  3. Detecting Translation Errors in CAD Surfaces and Preparing Geometries for Mesh Generation

    SciTech Connect

    Petersson, N Anders; Chand, K K

    2001-08-27

    The authors have developed tools for the efficient preparation of CAD geometries for mesh generation. Geometries are read from IGES files and then maintained in a boundary representation consisting of a patchwork of trimmed and untrimmed surfaces. Gross errors in the geometry can be identified and removed automatically, while a user interface is provided for manipulating the geometry (such as correcting invalid trimming curves or removing unwanted details). Modifying the geometry by adding or deleting surfaces and/or sectioning it by arbitrary planes (e.g. symmetry planes) is also supported. These tools are used to produce robust and accurate geometry models for initial mesh generation and will be applied to the in situ mesh generation requirements of moving and adaptive grid simulations.

  4. Post-event human decision errors: operator action tree/time reliability correlation

    SciTech Connect

    Hall, R E; Fragola, J; Wreathall, J

    1982-11-01

    This report documents an interim framework for the quantification of the probability of errors of decision on the part of nuclear power plant operators after the initiation of an accident. The framework can easily be incorporated into an event tree/fault tree analysis. The method presented consists of a structure called the operator action tree and a time reliability correlation which assumes the time available for making a decision to be the dominating factor in situations requiring cognitive human response. This limited approach decreases the magnitude and complexity of the decision modeling task. Specifically, in the past, some human performance models have attempted prediction by trying to emulate sequences of human actions, or by identifying and modeling the information processing approach applicable to the task. The model developed here is directed at describing the statistical performance of a representative group of hypothetical individuals responding to generalized situations.
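    A time-reliability correlation of the kind described can be sketched with a lognormal response-time model; the functional form and parameter values below are illustrative assumptions, not the report's fitted curve:

    ```python
    import math

    def nonresponse_probability(t, median, sigma):
        """Time-reliability correlation sketch: probability that an
        operator crew has *not* yet made the correct decision by time t
        (minutes), using a lognormal response-time model with the given
        median and log-scale spread sigma.  Illustrative assumption,
        not the report's quantification."""
        if t <= 0:
            return 1.0
        z = (math.log(t) - math.log(median)) / sigma
        phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # lognormal CDF
        return 1.0 - phi

    # More available time -> lower probability of a decision error,
    # the monotone relationship the operator action tree relies on.
    print(nonresponse_probability(1.0, median=5.0, sigma=0.8) >
          nonresponse_probability(30.0, median=5.0, sigma=0.8))
    ```

    Each branch of the operator action tree would then be assigned a probability of this form, evaluated at the time window the accident sequence allows.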

  5. Relative entropy equals bulk relative entropy

    DOE PAGES [OSTI]

    Jafferis, Daniel L.; Lewkowycz, Aitor; Maldacena, Juan; Suh, S. Josephine

    2016-06-01

    We consider the gravity dual of the modular Hamiltonian associated to a general subregion of a boundary theory. We use it to argue that the relative entropy of nearby states is given by the relative entropy in the bulk, to leading order in the bulk gravitational coupling. We also argue that the boundary modular flow is dual to the bulk modular flow in the entanglement wedge, with implications for entanglement wedge reconstruction.

  6. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    SciTech Connect

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; Fenech Conti, Ian; Gavazzi, Raphael; Gentile, Marc; Gill, Mandeep S. S.; Hogg, David W.; Huff, Eric M.; Jee, M. James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C.; Marshall, Philip J.; Meyers, Joshua E.; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Ngole Mboula, Fred Maurice; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stephane; Rhodes, Jason; Schneider, Michael D.; Shan, Huanyuan; Sheldon, Erin S.; Simet, Melanie; Starck, Jean -Luc; Sureau, Florent; Tewes, Malte; Zarb Adami, Kristian; Zhang, Jun; Zuntz, Joe

    2015-05-11

    This study presents the first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
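    The multiplicative and additive shear biases discussed above are conventionally parametrized as g_obs = (1 + m)·g_true + c; a sketch of recovering m and c from simulated shears by linear regression (all names and data here are synthetic):

    ```python
    import numpy as np

    def fit_shear_bias(g_true, g_obs):
        """Fit the standard weak-lensing bias model
        g_obs = (1 + m) * g_true + c and return the multiplicative
        bias m and the additive bias c."""
        slope, intercept = np.polyfit(g_true, g_obs, 1)
        return slope - 1.0, intercept

    rng = np.random.default_rng(1)
    g_true = rng.uniform(-0.05, 0.05, 500)        # input shears
    m_in, c_in = 0.01, 2e-4                       # a ~1 per cent calibration bias
    g_obs = (1.0 + m_in) * g_true + c_in + rng.normal(0.0, 1e-5, 500)
    m_fit, c_fit = fit_shear_bias(g_true, g_obs)
    print(abs(m_fit - m_in) < 1e-3, abs(c_fit - c_in) < 1e-4)
    ```

    The ~1 per cent morphology-induced change in calibration bias reported above corresponds to a shift of order 0.01 in m, and "additive biases depend linearly on PSF ellipticity" means c itself is modeled as a linear function of the PSF ellipticity.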

  7. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    DOE PAGES [OSTI]

    Sosa Vazquez, Xochitl A.; Isborn, Christine M.

    2015-12-22

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. As a result, in vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.

  8. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    DOE PAGES [OSTI]

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; et al

    2015-05-11

    This study presents the first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  9. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    SciTech Connect

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-11-15

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
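    The influence of ambient temperature and pressure on volumetric gas measurement follows from the ideal gas law; a sketch of normalizing a measured volume to normal conditions (0 °C, 1013.25 hPa), ignoring the water vapour correction that a full BMP workup would also apply:

    ```python
    def normalize_gas_volume(v_ml, temp_c, pressure_hpa,
                             t0_c=0.0, p0_hpa=1013.25):
        """Convert a gas volume measured at ambient conditions to
        normal conditions via the ideal gas law:
        V0 = V * (p / p0) * (T0 / T), with temperatures in kelvin.
        Water vapour content is neglected in this sketch."""
        t_k = temp_c + 273.15
        t0_k = t0_c + 273.15
        return v_ml * (pressure_hpa / p0_hpa) * (t0_k / t_k)

    # The same raw volume read at sea level vs. a high-altitude lab:
    # lower ambient pressure means the reading contains less gas.
    sea_level = normalize_gas_volume(100.0, 25.0, 1013.0)
    altitude = normalize_gas_volume(100.0, 25.0, 800.0)
    print(sea_level > altitude)
    ```

    The ~68% pressure effect reported for high altitudes is what this correction removes: without it, two laboratories at different elevations would assign different methane potentials to the same substrate.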

  10. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    SciTech Connect

    Sosa Vazquez, Xochitl A.; Isborn, Christine M.

    2015-12-28

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. In vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.

  11. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    SciTech Connect

    Sosa Vazquez, Xochitl A.; Isborn, Christine M.

    2015-12-22

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. As a result, in vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.

  12. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    SciTech Connect

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; Fenech Conti, Ian; Gavazzi, Raphael; Gentile, Marc; Gill, Mandeep S. S.; Hogg, David W.; Huff, Eric M.; Jee, M. James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C.; Marshall, Philip J.; Meyers, Joshua E.; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Ngole Mboula, Fred Maurice; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stephane; Rhodes, Jason; Schneider, Michael D.; Shan, Huanyuan; Sheldon, Erin S.; Simet, Melanie; Starck, Jean -Luc; Sureau, Florent; Tewes, Malte; Zarb Adami, Kristian; Zhang, Jun; Zuntz, Joe

    2015-05-11

This study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects, such as truncation biases due to finite galaxy postage stamps and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  13. EMPLOYMENT OF RELATIVES (NEPOTISM)

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    EMPLOYMENT OF RELATIVES (NEPOTISM) An applicant who is a relative of an employee of Oak Ridge Associated Universities (ORAU) will be considered for employment on the same basis as other candidates. However, applicants are obligated to inform the Employment Department of relatives who are ORAU employees. ORAU's nepotism policy places the following restrictions on employment of relatives: * An employee may not have a managerial or administrative relationship over a relative (this prohibition

  14. A class of error estimators based on interpolating the finite element solutions for reaction-diffusion equations

    SciTech Connect

    Lin, T.; Wang, H.

    1995-12-31

The swift improvement of computational capabilities enables us to apply finite element methods to simulate more and more problems arising from various applications. A fundamental question associated with finite element simulations is their accuracy. In other words, before we can make any decisions based on the numerical solutions, we must be sure that they are acceptable in the sense that their errors are within the given tolerances. Various estimators have been developed to assess the accuracy of finite element solutions, and they can be classified into two basic types: a priori error estimates and a posteriori error estimates. While a priori error estimates can give us asymptotic convergence rates of numerical solutions in terms of the grid size before the computations, they depend on certain Sobolev norms of the true solutions, which are not known in general. Therefore, it is difficult, if not impossible, to use a priori estimates directly to decide whether a numerical solution is acceptable or whether a finer partition (and so a new numerical solution) is needed. In contrast, a posteriori error estimates depend only on the numerical solutions, and they usually give computable quantities about the accuracy of the numerical solutions.

  15. Application of asymptotic expansions for maximum likelihood estimators errors to gravitational waves from binary mergers: The single interferometer case

    SciTech Connect

    Zanolin, M.; Vitale, S.; Makris, N.

    2010-06-15

In this paper we apply to gravitational waves (GW) from the inspiral phase of binary systems a recently derived frequentist methodology to calculate analytically the error for a maximum likelihood estimate of physical parameters. We use expansions of the covariance and the bias of a maximum likelihood estimate in terms of inverse powers of the signal-to-noise ratio (SNR), where the square root of the first order in the covariance expansion is the Cramér-Rao lower bound (CRLB). We evaluate the expansions, for the first time, for GW signals in noises of GW interferometers. The examples are limited to a single, optimally oriented interferometer. We also compare the error estimates using the first two orders of the expansions with existing numerical Monte Carlo simulations. The first two orders of the covariance allow us to get error predictions closer to what is observed in numerical simulations than the CRLB. The methodology also predicts the SNR necessary to approximate the error with the CRLB and provides new insight into the relationship between waveform properties, SNR, dimension of the parameter space, and estimation errors. For example, timing matched filtering can achieve the CRLB only if the SNR is larger than the kurtosis of the gravitational wave spectrum, and the necessary SNR is much larger if other physical parameters are also unknown.
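As a minimal illustration of the first-order term in such an expansion (a toy model, not the paper's gravitational-wave computation), the sketch below compares the Monte Carlo variance of a maximum likelihood estimate with its Cramér-Rao lower bound for the simplest case, estimating a Gaussian mean, where the bound is attained:

```python
import numpy as np

# Toy model (not the paper's GW calculation): for x_i = mu + Gaussian
# noise, the ML estimate of mu is the sample mean, and the Cramer-Rao
# lower bound on its variance is sigma^2 / N. In this linear Gaussian
# case the first-order covariance expansion is exact, so the Monte Carlo
# variance of the estimator should match the CRLB.

rng = np.random.default_rng(0)
sigma, N, trials = 1.0, 10, 4000
crlb = sigma ** 2 / N

# ML estimate (sample mean) for each simulated data set
estimates = rng.normal(0.0, sigma, size=(trials, N)).mean(axis=1)
mc_var = float(estimates.var())

print(crlb, round(mc_var, 4))
```

For nonlinear parameters (such as arrival time at low SNR) the Monte Carlo variance would exceed the CRLB, which is the regime the higher-order terms of the expansion address.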

  16. Feasibility exploration of throughfold as a predictor for target loading and associated error bounds. Master's thesis

    SciTech Connect

    Rongone, K.G.

    1994-12-01

    Various applications of the Fredholm integral equation appear in different fields of study. An application of particular interest to the Air Force arises in the determination of target loading from nuclear effects simulations. Current techniques first unfold the incident spectrum and then determine target loading; the resulting spectrum and loading are assumed exact. This study investigates the feasibility of a new method, through-fold, for directly determining defensible error bounds on target loading. Through-fold uses a priori information to define input data and represents target response with a linear combination of instrument responses plus a remainder to derive a quadratic expression for exact target loading. This study uses a simplified, linear version of the quadratic expression. Through-fold feasibility is tested by comparing error bounds based on three target loading functions. The three test cases include an exact linear combination of instrument responses, the same combination plus a positive remainder, and the same combination plus a negative remainder. Total error bounds were reduced from 100% to 35% in cases 1 and 2; in case 3 the error bound was reduced to 48%. These results indicate that through-fold has promise as a predictor of error bounds on target loading.

  17. Preliminary Notice of Violation, UT-Battelle, LLC- EA-2003-10

    Energy.gov [DOE]

    Issued to UT-Battelle, LLC, related to Work Control Issues at the High Flux Isotope Reactor and Radiochemical Engineering Development Center at Oak Ridge National Laboratory

  18. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high-speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
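The decimation step described above can be sketched numerically. The toy Python below (an illustration, not the patented circuit) builds the companion matrix of the primitive polynomial x^4 + x + 1 over GF(2) and checks that raising it to the m-th power jumps the LFSR state forward m steps at once, which is what each parallel generator's XOR feedback network implements:

```python
import numpy as np

# Toy illustration, not the patented circuit: a 4-bit LFSR for the
# primitive polynomial x^4 + x + 1. Its one-step update is a companion
# matrix C over GF(2); raising C to the m-th power yields a "decimation
# matrix" that jumps the state forward m steps in one XOR network.

C = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 0, 0]], dtype=np.int64)  # row i gives next-state bit i

def mat_pow_gf2(M, e):
    """Square-and-multiply matrix power, entries reduced mod 2."""
    R = np.eye(M.shape[0], dtype=np.int64)
    while e:
        if e & 1:
            R = (R @ M) % 2
        M = (M @ M) % 2
        e >>= 1
    return R

def step(s):
    """One Fibonacci-LFSR step: shift, feed back s0 XOR s1."""
    return s[1:] + [s[0] ^ s[1]]

state = [1, 0, 0, 1]
m = 6  # e.g. k = 3 parallel generators each producing n = 2 bits

jumped = list((mat_pow_gf2(C, m) @ np.array(state)) % 2)
stepped = state
for _ in range(m):
    stepped = step(stepped)
print(jumped, stepped)  # the decimation-matrix jump agrees with 6 single steps
```

Because a primitive degree-4 polynomial gives the maximal period 15, C raised to the 15th power is the identity; the sparsity of the decimation matrix (and hence the XOR gate count) depends on the polynomial chosen, as the patent notes.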

  19. The crossing statistic: dealing with unknown errors in the dispersion of Type Ia supernovae

    SciTech Connect

    Shafieloo, Arman; Clifton, Timothy; Ferreira, Pedro E-mail: tclifton@astro.ox.ac.uk

    2011-08-01

    We propose a new statistic that has been designed to be used in situations where the intrinsic dispersion of a data set is not well known: The Crossing Statistic. This statistic is in general less sensitive than χ² to the intrinsic dispersion of the data, and hence allows us to make progress in distinguishing between different models using goodness of fit to the data even when the errors involved are poorly understood. The proposed statistic makes use of the shape and trends of a model's predictions in a quantifiable manner. It is applicable to a variety of circumstances, although we consider it to be especially well suited to the task of distinguishing between different cosmological models using type Ia supernovae. We show that this statistic can easily distinguish between different models in cases where the χ² statistic fails. We also show that the last mode of the Crossing Statistic is identical to χ², so that it can be considered as a generalization of χ².
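The sensitivity of χ² to the assumed intrinsic dispersion, which motivates the statistic above, is easy to demonstrate: if the dispersion is misjudged by a factor s, every model's χ² rescales by 1/s². The sketch below uses synthetic data (not the supernova analysis) to show this:

```python
import numpy as np

# Synthetic illustration (not the supernova analysis): chi^2 against the
# *true* model, computed once with the correct intrinsic dispersion and
# once with a dispersion underestimated by a factor of 2. The misjudged
# dispersion inflates chi^2 fourfold, spoiling absolute goodness of fit.

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
true_model = 1.0 + 0.5 * x
data = true_model + rng.normal(0.0, 0.1, x.size)

def chi2(model, sigma):
    """Standard chi-squared with a constant assumed dispersion sigma."""
    return float(np.sum((data - model) ** 2 / sigma ** 2))

c_right = chi2(true_model, 0.1)   # correct sigma: chi2 ~ number of points
c_wrong = chi2(true_model, 0.05)  # sigma halved: chi2 quadruples
print(round(c_right, 1), round(c_wrong, 1))
```

A shape-based comparison of model residual trends, which is the idea behind the Crossing Statistic, is unaffected by such a uniform rescaling.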

  20. Using Ancillary Information to Reduce Sample Size in Discovery Sampling and the Effects of Measurement Error

    SciTech Connect

    Axelrod, M

    2005-08-18

    A big sample size might bust the budget, or the number may seem intuitively excessive. To reduce the sample size, you can increase the tolerable number of defectives, the ''10'' in the preceding example, or back off on the confidence level, say from 95% to 90%. Auditors also frequently bump up the sample size as a safety factor. They know that something can go wrong. For example, they might find out that the measurements or inspections were subject to errors. Unless the auditors know exactly how measurement error affects sample size, they might be forced to give up the safety factor. Clients often choose to ''live dangerously'' (without a compelling argument to the contrary) to save money. Thus, sometimes the auditor finds that ''you just can't get there from here'', because the goals of the audit and the resources available are inherently in conflict. For discovery audits, there is a way out of this apparent conundrum. It turns out that the classical method of confidence intervals uses an implicit and very conservative assumption. We will see that this assumption is too pessimistic in the context of a discovery audit. If we abandon this assumption and use ancillary information about the inventory, then we can significantly reduce the sample size required to achieve the desired confidence level. We will see exactly how the classical method ignores this ancillary information and misses the opportunity for an efficient audit. In the following sections, we first review the standard approach using confidence intervals. Then we present a method that incorporates the ancillary information about the inventory to design a very efficient discovery audit. We also provide results on how measurement errors affect the audit, and exactly how much the sample size must be modified to compensate for these errors. Finally, we state asymptotic formulas that provide useful approximations for large inventories.
It is suggested that the reader review the glossary of
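For reference, the classical (pre-ancillary-information) sample size for a discovery audit can be computed directly from the hypergeometric probability of missing all defectives. The sketch below is a generic illustration of that baseline calculation, not the report's improved method; the inventory size, defective count, and confidence level are hypothetical:

```python
from math import comb

# Generic sketch of the classical discovery-sampling calculation (the
# baseline this report improves on); N, d, and the confidence level are
# hypothetical. We seek the smallest sample size n such that a random
# sample of n items from an inventory of N, of which at least d are
# defective, contains at least one defective with the stated confidence.

def discovery_sample_size(N, d, confidence):
    """Smallest n with P(at least 1 defective in sample) >= confidence,
    sampling without replacement (hypergeometric)."""
    for n in range(1, N + 1):
        p_miss = comb(N - d, n) / comb(N, n)  # all n draws avoid defectives
        if 1.0 - p_miss >= confidence:
            return n
    return N

n = discovery_sample_size(1000, 10, 0.95)
print(n)
```

Note how fast the required sample grows: detecting 10 defectives hidden among 1000 items at 95% confidence already demands sampling roughly a quarter of the inventory, which is the cost pressure the passage describes.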

  1. Improved Characterization of Transmitted Wavefront Error on CADB Epoxy-Free Bonded Solid State Laser Materials

    SciTech Connect

    Bayramian, A

    2010-12-09

    Current state-of-the-art and next generation laser systems - such as those used in the NIF and LIFE experiments at LLNL - depend on ever larger optical elements. The need for wide aperture optics that are tolerant of high power has placed many demands on material growers for such diverse materials as crystalline sapphire, quartz, and laser host materials. For such materials, it is either prohibitively expensive or even physically impossible to fabricate monolithic pieces with the required size. In these cases, it is preferable to optically bond two or more elements together with a technique such as Chemically Activated Direct Bonding (CADB©). CADB is an epoxy-free bonding method that produces bulk-strength bonded samples with negligible optical loss and excellent environmental robustness. The authors have demonstrated CADB for a variety of different laser glasses and crystals. For this project, they will bond quartz samples together to determine the suitability of the resulting assemblies for large aperture high power laser optics. The assemblies will be evaluated in terms of their transmitted wavefront error, and other optical properties.

  2. The Higgs transverse momentum distribution at NNLL and its theoretical errors

    SciTech Connect

    Neill, Duff; Rothstein, Ira Z.; Vaidya, Varun

    2015-12-15

    In this letter, we present the NNLL-NNLO transverse momentum Higgs distribution arising from gluon fusion. In the regime p << mh we include the resummation of the large logarithms at next-to-next-to-leading order and then match onto the αs² fixed-order result near p ~ mh. By utilizing the rapidity renormalization group (RRG) we are able to match smoothly between the resummed small-p regime and the fixed-order regime. We give a detailed discussion of the scale dependence of the result, including an analysis of the rapidity scale dependence. Our central value differs from previous results, in the transition region as well as the tail, by an amount outside the error band. This difference is due to the fact that the RRG profile allows us to smoothly turn off the resummation.

  3. Multidisciplinary framework for human reliability analysis with an application to errors of commission and dependencies

    SciTech Connect

    Barriere, M.T.; Luckas, W.J.; Wreathall, J.; Cooper, S.E.; Bley, D.C.; Ramey-Smith, A.

    1995-08-01

    Since the early 1970s, human reliability analysis (HRA) has been considered to be an integral part of probabilistic risk assessments (PRAs). Nuclear power plant (NPP) events, from Three Mile Island through the mid-1980s, showed the importance of human performance to NPP risk. Recent events demonstrate that human performance continues to be a dominant source of risk. In light of these observations, the current limitations of existing HRA approaches become apparent when the role of humans is examined explicitly in the context of real NPP events. The development of new or improved HRA methodologies to more realistically represent human performance is recognized by the Nuclear Regulatory Commission (NRC) as a necessary means to increase the utility of PRAs. To accomplish this objective, an Improved HRA Project, sponsored by the NRC's Office of Nuclear Regulatory Research (RES), was initiated in late February 1992 at Brookhaven National Laboratory (BNL) to develop an improved method for HRA that more realistically assesses the human contribution to plant risk and can be fully integrated with PRA. This report describes the research efforts, including the development of a multidisciplinary HRA framework, the characterization and representation of errors of commission, and an approach for addressing human dependencies. The implications of the research and necessary requirements for further development are also discussed.

  4. Multifield optimization intensity-modulated proton therapy (MFO-IMPT) for prostate cancer: Robustness analysis through simulation of rotational and translational alignment errors

    SciTech Connect

    Pugh, Thomas J.; Amos, Richard A.; John Baptiste, Sandra; Choi, Seungtaek; Nhu Nguyen, Quyhn; Ronald Zhu, X.; Palmer, Matthew B.; Lee, Andrew K.

    2013-10-01

    To evaluate the dosimetric consequences of rotational and translational alignment errors in patients receiving intensity-modulated proton therapy with multifield optimization (MFO-IMPT) for prostate cancer, ten control patients with localized prostate cancer underwent treatment planning for MFO-IMPT. Rotational and translational errors were simulated along each of 3 axes: anterior-posterior (A-P), superior-inferior (S-I), and left-right. Clinical target-volume (CTV) coverage remained high under all simulated alignment errors. Perturbations in rectum and bladder doses were minimal for rotational errors and larger for translational errors. Rectum V45 and V70 increased most with A-P misalignment, whereas bladder V45 and V70 changed most with S-I misalignment. The bladder and rectum V45 and V70 remained acceptable even with extreme alignment errors. Even with S-I and A-P translational errors of up to 5 mm, the dosimetric profile of MFO-IMPT remained favorable. MFO-IMPT for localized prostate cancer results in robust coverage of the CTV without clinically meaningful dose perturbations to normal tissue, despite extreme rotational and translational alignment errors.

  5. T-609: Adobe Acrobat/Reader Memory Corruption Error in CoolType Library Lets Remote Users Execute Arbitrary Code

    Energy.gov [DOE]

    A remote user can create a specially crafted PDF file that, when loaded by the target user, will trigger a memory corruption error in the CoolType library and execute arbitrary code on the target system. The code will run with the privileges of the target user.

  6. ARM - Relative Humidity Calculations

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Heat Index is an index that combines air temperature and relative humidity to estimate how hot it actually feels. The human body cools off through perspiration, which

  7. Related Links - Hanford Site

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Related Links Hanford Advisory Board Convening Report SSAB Guidance Memorandum of Understanding Membership Nomination and Appointment Process Operating Ground Rules Calendars Advice and Responses Full Board Meeting Information Committee Meeting Information Outgoing Board Correspondence Key Board Products and Special Reports HAB Annual Report HAB and Committee Lists Points of Contact

  8. Community Relations Plan

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Community, Environment » Environmental Stewardship » Environmental Protection » Community Relations Plan Community Relations Plan Consultations, communications, agreements, and disagreements between the Permittees and the public are documented during the Hazardous Waste Facility Permit Community Relations Plan development. Contact Environmental Communication & Public Involvement PO Box 1663, MS M996 Los Alamos, NM 87544 (505) 667-0216 Email We welcome your comments and suggestions on how

  9. Verification and source-position error analysis of film reconstruction techniques used in the brachytherapy planning systems

    SciTech Connect

    Chang Liyun; Ho, Sheng-Yow; Chui, Chen-Shou; Du, Yi-Chun; Chen Tainsong

    2009-09-15

    A method was presented that employs standard linac QA tools to verify the accuracy of film reconstruction algorithms used in the brachytherapy planning system. Verification of reconstruction techniques is important, as suggested in ESTRO booklet 8: ''The institution should verify the full process of any reconstruction technique employed clinically.'' Error modeling was also performed to analyze seed-position errors. The ''isocentric beam checker'' device was used in this work. It has a two-dimensional array of steel balls embedded on its surface. The checker was placed on the simulator couch with its center ball coincident with the simulator isocenter, and one axis of its cross marks parallel to the axis of gantry rotation. The gantry of the simulator was rotated to make the checker behave like a three-dimensional array of balls. Three algorithms used in the ABACUS treatment planning system were tested: orthogonal film, 2-films-with-variable-angle, and 3-films-with-variable-angle. After exposing and digitizing the films, the position of each steel ball on the checker was reconstructed and compared to its true position, which can be accurately calculated. The results showed that the error is dependent on the object-isocenter distance, but not on the magnification of the object. The averaged errors were less than 1 mm, within the tolerance level defined by Roué et al. [''The EQUAL-ESTRO audit on geometric reconstruction techniques in brachytherapy,'' Radiother. Oncol. 78, 78-83 (2006)]. However, according to the error modeling, the theoretical error would be greater than 2 mm if the objects were located more than 20 cm away from the isocenter with a 0.5 deg. reading error of the gantry and collimator angles. Thus, in addition to carefully performing the QA of the gantry and collimator angle indicators, it is suggested that the patient, together with the applicators or seeds inside, be placed as close to the isocenter as possible.
This method could be used

  10. Fermilab Today - Related Content

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Related Content Director's Corner Physics in a Nutshell Frontier Science Result Tip of the Week...

  11. Cognitive decision errors and organization vulnerabilities in nuclear power plant safety management: Modeling using the TOGA meta-theory framework

    SciTech Connect

    Cappelli, M.; Gadomski, A. M.; Sepiellis, M.; Wronikowska, M. W.

    2012-07-01

    In the field of nuclear power plant (NPP) safety modeling, the perceived importance of socio-cognitive engineering (SCE) is continuously increasing. Today, the focus is especially on the identification of human and organizational decision errors caused by operators and managers under high-risk conditions, as is evident from reports on nuclear incidents that occurred in the past. At present, engineering and social safety requirements need to enlarge their domain of interest so as to include all possible loss-generating events that could be the consequences of an abnormal state of a NPP. Socio-cognitive modeling of Integrated Nuclear Safety Management (INSM) using the TOGA meta-theory was discussed during the ICCAP 2011 Conference. In this paper, more detailed aspects of cognitive decision-making and its possible human errors and organizational vulnerability are presented. The formal TOGA-based network model for cognitive decision-making makes it possible to identify and analyze the nodes and arcs at which plant operator and manager errors may appear. The TOGA multi-level IPK (Information, Preferences, Knowledge) model of abstract intelligent agents (AIAs) is applied. In the NPP context, a super-safety approach is also discussed, taking into consideration unexpected events and managing them from a systemic perspective. As the nature of human errors depends on the specific properties of the decision-maker and the decisional context of operation, a classification of decision-making using IPK is suggested. Several types of initial decision-making situations useful for the diagnosis of NPP operator and manager errors are considered. The developed models can be used as a basis for applications to NPP educational or engineering simulators for training the NPP executive staff. (authors)

  12. Comparing range data across the slow-time dimension to correct motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W. (Albuquerque, NM); Heard, Freddie E. (Albuquerque, NM); Cordaro, J. Thomas (Albuquerque, NM)

    2010-08-17

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
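The essence of the correction can be sketched in a few lines: a residual motion error appears as a phase ramp on the uncompressed (frequency-domain) data, and multiplying by the conjugate ramp before range compression restores the target to its correct range bin. The toy example below illustrates this principle only, not the patented slow-time estimation procedure:

```python
import numpy as np

# Toy illustration of the correction principle only (not the patented
# slow-time estimator): a residual motion error shows up as a phase ramp
# on the uncompressed frequency-domain data; multiplying by the conjugate
# ramp before range compression (here an FFT) restores the target peak
# to its correct range bin.

n = 256
k = np.arange(n)
freq_data = np.exp(2j * np.pi * k * 40 / n)   # point target in range bin 40
phase_error = np.exp(2j * np.pi * k * 3 / n)  # error ramp: shifts peak 3 bins

corrupted = freq_data * phase_error
corrected = corrupted * np.conj(phase_error)

peak_bad = int(np.argmax(np.abs(np.fft.fft(corrupted))))
peak_good = int(np.argmax(np.abs(np.fft.fft(corrected))))
print(peak_bad, peak_good)  # 43 40
```

The hard part, which the patent addresses, is estimating the unknown phase ramp in the first place by comparing range profiles across slow time; here it is simply assumed known.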

  13. ARM - Related Links

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Related Links TWP-ICE Home Tropical Western Pacific Home ARM Data Discovery Browse Data Post-Experiment Data Sets Weather Summary (pdf, 6M) New York Workshop Presentations Experiment Planning TWP-ICE Proposal Abstract Detailed Experiment Description Science Plan (pdf, 1M) Operations Plan (pdf, 321K) Maps Contact Info Daily Report Report Archives Press Media Coverage TWP-ICE Fact Sheet (pdf, 211K) Press Releases TWP-ICE Images ARM flickr site

  14. Community Relations Plan

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Community Relations Plan Community Relations Plan The Laboratory maintains an open working relationship with communities and interested members of the public. August 1, 2013 Guests listen to Lab historian Ellen McGhee on tour of historical sites Guests listen to Laboratory historian Ellen McGhee on a tour of historical sites. What the plan does Establishes a productive government-to-government relationship with local tribes and pueblos Keeps communities and interested members of the public

  15. An asymptotically exact, pointwise, a posteriori error estimator for the finite element method with super convergence properties

    SciTech Connect

    Hugger, J.

    1995-12-31

    When the finite element solution of a variational problem possesses certain superconvergence properties, it is possible, very inexpensively, to obtain a correction term providing an additional order of approximation of the solution. The correction can be used for error estimation, locally or globally, in whatever norm is preferred; or, if no error estimation is wanted, it can be used for postprocessing of the solution to improve its quality. In this paper such a correction term is described for the general case of n-dimensional, linear or nonlinear problems. Computational evidence of the performance in one space dimension is given, with special attention to the effects of the appearance of singularities and zeros of derivatives in the exact solution.

  16. A global conformance quality model. A new strategic tool for minimizing defects caused by variation, error, and complexity

    SciTech Connect

    Hinckley, C.M.

    1994-01-01

    The performance of Japanese products in the marketplace points to the dominant role of quality in product competition. Our focus is motivated by the tremendous pressure to improve conformance quality by reducing defects to previously unimaginable limits in the range of 1 to 10 parts per million. Toward this end, we have developed a new model of conformance quality that addresses each of the three principal defect sources: (1) Variation, (2) Human Error, and (3) Complexity. Although the role of variation in conformance quality is well documented, errors occur so infrequently that their significance is not well known. We have shown that statistical methods are not useful in characterizing and controlling errors, the most common source of defects. Excessive complexity is also a root source of defects, since it increases both errors and variation defects. A missing link in defining a global model has been the lack of a sound correlation between complexity and defects. We have used Design for Assembly (DFA) methods to quantify assembly complexity and have shown that assembly times can be described in terms of the Pareto distribution, in a clear exception to the Central Limit Theorem. Within individual companies we have found defects to be highly correlated with DFA measures of complexity in broad studies covering tens of millions of assembly operations. Applying the global concepts, we predicted that Motorola's Six Sigma method would only reduce defects by roughly a factor of two rather than by orders of magnitude, a prediction confirmed by Motorola's data. We have also shown that the potential defect rates of product concepts can be compared in the earliest stages of development. The global Conformance Quality Model has demonstrated that the best strategy for improvement depends upon the quality control strengths and weaknesses.

  17. Estimation of organic carbon blank values and error structures of the speciation trends network data for source apportionment

    SciTech Connect

    Eugene Kim; Philip K. Hopke; Youjun Qin

    2005-08-01

    Because the particulate organic carbon (OC) concentrations reported in the U.S. Environmental Protection Agency Speciation Trends Network (STN) data were not blank corrected, the OC blank concentrations were estimated using the intercept in the regression of particulate matter < 2.5 μm in aerodynamic diameter (PM2.5) against OC concentrations. The estimated OC blank concentrations ranged from 1 to 2.4 μg/m³, showing higher values in urban areas, for the 13 monitoring sites in the northeastern United States. In the STN data, several different samplers and analyzers are used, and the various instruments show different method detection limit (MDL) values as well as errors. A comprehensive set of error structures to be used in source apportionment studies of STN data was estimated by comparing a limited set of measured concentrations and their associated uncertainties. To examine the estimated error structures and investigate the appropriate MDL values, PM2.5 samples collected at an STN site in Burlington, VT, were analyzed through the application of positive matrix factorization. A total of 323 samples collected between December 2000 and December 2003, and 49 species chosen on the basis of several variable selection criteria, were used, and eight sources were successfully identified in this study with the estimated error structures and the minimum values among the different MDL values from the five instruments: secondary sulfate aerosol (41%), identified as the result of emissions from coal-fired power plants; secondary nitrate aerosol (20%); airborne soil (15%); gasoline vehicle emissions (7%); diesel emissions (7%); aged sea salt (4%); copper smelting (3%); and ferrous smelting (2%). Time series plots of contributions from airborne soil indicate that the highly elevated impacts from this source were likely caused primarily by dust storms.
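The blank-estimation idea, recovering a constant offset as a regression intercept, can be illustrated with synthetic numbers (these are invented for the sketch, not STN data):

```python
import numpy as np

# Synthetic illustration of the blank-estimation idea (invented numbers,
# not STN data): measured OC = constant sampling blank + a fraction of
# PM2.5 mass + noise, so a linear regression of OC on PM2.5 recovers the
# blank as the intercept.

rng = np.random.default_rng(2)
pm25 = rng.uniform(5.0, 40.0, 200)        # ug/m^3, hypothetical mass range
blank, oc_fraction = 1.5, 0.25            # assumed "true" values for the sketch
oc = blank + oc_fraction * pm25 + rng.normal(0.0, 0.3, 200)

slope, intercept = np.polyfit(pm25, oc, 1)  # degree-1 fit: [slope, intercept]
print(round(intercept, 2), round(slope, 2)) # intercept approximates the blank
```

In the real data the scatter is larger and the OC/PM2.5 relationship varies by site, which is why the estimated blanks span 1 to 2.4 μg/m³ across the 13 sites.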

  18. Correction of localized shape errors on optical surfaces by altering the localized density of surface or near-surface layers

    DOEpatents

    Taylor, John S.; Folta, James A.; Montcalm, Claude

    2005-01-18

    Figure errors are corrected on optical or other precision surfaces by changing the local density of material in a zone at or near the surface. Optical surface height is correlated with the localized density of the material within the same region. A change in the height of the optical surface can then be caused by a change in the localized density of the material at or near the surface.

  19. SU-E-T-374: Sensitivity of ArcCHECK to Tomotherapy Delivery Errors: Dependence On Analysis Technique

    SciTech Connect

    Templeton, A; Chu, J; Turian, J

    2014-06-01

    Purpose: ArcCHECK (Sun Nuclear) is a cylindrical diode array detector allowing three-dimensional sampling of dose, particularly useful in treatment delivery QA of helical tomotherapy. Gamma passing rate is a common method of analyzing results from diode arrays, but is less intuitive in 3D with complex measured dose distributions. This study explores the sensitivity of gamma passing rate to the choice of analysis technique in the context of its ability to detect errors introduced into the treatment delivery. Methods: Nine treatment plans were altered to introduce errors in couch speed, gantry/sinogram synchronization, and leaf open time. Each plan was then delivered to ArcCHECK in each of the following arrangements: offset, in which the high-dose area of the plan is delivered to the side of the phantom so that some diode measurements are on the order of the prescription dose, and centered, in which the high dose is in the center of the phantom where an ion chamber measurement may be acquired, but the diode measurements are in the mid- to low-dose region at the periphery of the plan. Gamma analysis was performed at 3%/3mm tolerance with both global and local gamma criteria. The threshold of detectability for each error type was calculated as the magnitude at which the gamma passing rate drops below 90%. Results: Global gamma criteria reduced the sensitivity in the offset arrangement (from 2.3% to 4.5%, 8° to 21°, and 3 ms to 8 ms for couch-speed decrease, gantry error, and leaf-opening increase, respectively). The centered arrangement detected changes at 3.3%, 5°, and 4 ms with smaller variation. Conclusion: Each arrangement has advantages; offsetting allows more sampling of the higher-dose region, while centering allows an ion chamber measurement and potentially better use of tools such as 3DVH, at the cost of positioning more of the diodes in the sometimes noisy mid-dose region.
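
    The gamma analysis used above combines a dose-difference tolerance with a distance-to-agreement tolerance. A simplified 1D global-gamma sketch (the real ArcCHECK analysis is 3D and vendor-implemented; the profiles here are invented):

```python
import numpy as np

def gamma_1d(ref, meas, x, dose_tol=0.03, dist_tol=3.0):
    """Global 1D gamma: dose tolerance as a fraction of the reference maximum,
    distance tolerance in the same units as x (e.g., mm)."""
    d_norm = dose_tol * ref.max()
    g = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, ref)):
        # Capital Gamma over all measured points; gamma is its minimum
        cap = np.sqrt(((x - xi) / dist_tol) ** 2 + ((meas - di) / d_norm) ** 2)
        g[i] = cap.min()
    return g

x = np.linspace(0, 100, 201)                   # positions in mm
ref = np.exp(-((x - 50) / 20) ** 2)            # reference profile (arbitrary units)
meas = 1.01 * np.exp(-((x - 50.5) / 20) ** 2)  # slightly scaled and shifted measurement

passing = (gamma_1d(ref, meas, x) <= 1.0).mean() * 100
print(f"gamma passing rate: {passing:.1f}%")
```

Switching `d_norm` from a fraction of the global maximum to a fraction of the local reference dose gives the "local" criterion, which is what tightens sensitivity in the low-dose periphery.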

  20. Derivation and generalization of the dispersion relation of rising-sun magnetron with sectorial and rectangular cavities

    SciTech Connect

    Shi, Di-Fu; Qian, Bao-Liang; Wang, Hong-Gang; Li, Wei

    2013-12-15

    A field analysis method is used to derive the dispersion relation of a rising-sun magnetron with sectorial and rectangular cavities. This dispersion relation is then extended to the general case in which the rising-sun magnetron has multiple groups of cavities of different shapes and sizes, and from which the dispersion relations of the conventional magnetron, rising-sun magnetron, and magnetron-like device can be obtained directly. The results show that the relative errors between the theoretical and simulation values of the dispersion relation are less than 3%, and the relative errors between the theoretical and simulation values of the cutoff frequencies of the π mode are less than 2%. In addition, the influences of each structure parameter of the magnetron on the cutoff frequency of the π mode and on the mode separation are investigated qualitatively and quantitatively, which may be of great interest for designing a frequency-tuning magnetron.

  1. WIPP - Related FOIA Sites

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    FOIA Reading Room DOE Headquarters FOIA Program Department of Justice (DOJ) Office of Information and Privacy Records that have been released in response to multiple written requests for information under the FOIA or are likely to be requested again: Current Contracts Expired Contracts Other Documents Final opinions made in the adjudication of cases DOE Directives, Regulations, and Standards Other WIPP Related Documents

  2. From the Lab to the real world : sources of error in UF {sub 6} gas enrichment monitoring

    SciTech Connect

    Lombardi, Marcie L.

    2012-03-01

    Safeguarding uranium enrichment facilities is a serious concern for the International Atomic Energy Agency (IAEA). Safeguards methods have changed over the years, most recently switching to an improved safeguards model that calls for new technologies to help keep up with the increasing size and complexity of today’s gas centrifuge enrichment plants (GCEPs). One of the primary goals of the IAEA is to detect the production of uranium at levels greater than those an enrichment facility may have declared. In order to accomplish this goal, new enrichment monitors need to be as accurate as possible. This dissertation will look at the Advanced Enrichment Monitor (AEM), a new enrichment monitor designed at Los Alamos National Laboratory. Specifically explored are various factors that could potentially contribute to errors in a final enrichment determination delivered by the AEM. There are many factors that can cause errors in the determination of uranium hexafluoride (UF{sub 6}) gas enrichment, especially during the period when the enrichment is being measured in an operating GCEP. To measure enrichment using the AEM, a passive 186-keV (kiloelectronvolt) measurement is used to determine the {sup 235}U content in the gas, and a transmission measurement or a gas pressure reading is used to determine the total uranium content. A transmission spectrum is generated using an x-ray tube and a “notch” filter. In this dissertation, changes that could occur in the detection efficiency and the transmission errors that could result from variations in pipe-wall thickness will be explored. Additional factors that could contribute to errors in enrichment measurement will also be examined, including changes in the gas pressure, ambient and UF{sub 6} temperatures, instrumental errors, and the effects of uranium deposits on the inside of the pipe walls. The sensitivity of the enrichment calculation to these various parameters will then be evaluated. Previously, UF
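
    The measurement principle described above pairs a passive 186-keV count (proportional to the {sup 235}U in view) with a total-uranium determination from gas pressure and temperature. A schematic sketch of the resulting enrichment ratio; the calibration constant and every operating value below are invented for illustration and are not from the dissertation:

```python
# Schematic enrichment-meter calculation; all numbers are illustrative assumptions.
R = 8.314            # J/(mol K), gas constant
M_U = 0.238          # kg/mol, approximate molar mass of uranium

def total_u_density(pressure_pa, temp_k):
    """Total uranium mass density of UF6 gas via the ideal gas law (one U per UF6)."""
    n_per_m3 = pressure_pa / (R * temp_k)   # mol UF6 per m3
    return n_per_m3 * M_U                   # kg U per m3

def u235_density(count_rate_186, cal_const):
    """235U mass density inferred from the net 186-keV rate through a
    detector calibration constant (hypothetical value below)."""
    return count_rate_186 / cal_const

cal = 6.1e3           # (counts/s) per (kg 235U / m3) -- hypothetical calibration
rate = 140.0          # net counts/s at 186 keV (illustrative)
p, t = 6000.0, 300.0  # Pa, K (GCEP header pipes run at low pressure)

enrichment = 100.0 * u235_density(rate, cal) / total_u_density(p, t)
print(f"enrichment: {enrichment:.2f} wt%")
```

Each error source in the abstract maps onto one of these inputs: wall-thickness and deposit effects perturb the count rate and transmission, while pressure and temperature errors perturb the total-uranium denominator.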

  3. SNL Community Relations Plan

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    5 United States Department of Energy National Nuclear Security Administration Sandia Field Office Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND2015-6349 O Sandia National Laboratories 2015 RCRA Facility Operating Permit Community Relations Plan i TABLE OF CONTENTS LIST

  4. PRECISE TULLY-FISHER RELATIONS WITHOUT GALAXY INCLINATIONS

    SciTech Connect

    Obreschkow, D.; Meyer, M.

    2013-11-10

    Power-law relations between tracers of baryonic mass and rotational velocities of disk galaxies, so-called Tully-Fisher relations (TFRs), offer a wealth of applications in galaxy evolution and cosmology. However, measurements of rotational velocities require galaxy inclinations, which are difficult to measure, thus limiting the range of TFR studies. This work introduces a maximum likelihood estimation (MLE) method for recovering the TFR in galaxy samples with limited or no information on inclinations. The robustness and accuracy of this method are demonstrated using virtual and real galaxy samples. Intriguingly, the MLE reliably recovers the TFR of all test samples, even without using any inclination measurements, that is, assuming a random sin i-distribution for galaxy inclinations. Explicitly, this 'inclination-free MLE' recovers the three TFR parameters (zero-point, slope, scatter) with statistical errors only about 1.5 times larger than the best estimates based on perfectly known galaxy inclinations with zero uncertainty. Thus, given realistic uncertainties, the inclination-free MLE is highly competitive. If inclination measurements have mean errors larger than 10°, it is better not to use any inclinations than to consider the inclination measurements to be exact. The inclination-free MLE opens interesting perspectives for future H I surveys by the Square Kilometer Array and its pathfinders.
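
    The 'random sin i-distribution' assumed by the inclination-free MLE follows from isotropic disk orientations: cos i is uniform on [0, 1], so p(i) = sin i. A quick numerical check of that prior (a standard geometry fact, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Isotropic orientations: cos(i) uniform on [0, 1] implies p(i) = sin(i)
cos_i = rng.uniform(0.0, 1.0, 1_000_000)
i = np.arccos(cos_i)

# For p(i) = sin(i) on [0, pi/2]: <sin i> = pi/4 and the median inclination is 60 deg
print(np.mean(np.sin(i)))        # ~0.785
print(np.degrees(np.median(i)))  # ~60
```

Because the median inclination is as high as 60°, an "average" correction already recovers much of the velocity information, which is part of why the inclination-free estimator loses so little precision.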

  5. Error analyses on some typically approximate solutions of residual stress within a thin film on a substrate

    SciTech Connect

    Zhang, X.C.; Xu, B.S.; Wang, H.D.; Wu, Y.X.

    2005-09-01

    Stoney's equation and subsequent modifications and some approximations are widely used to evaluate the macrostress within a film on a substrate, though some of these solutions are only applicable for thin films. The purpose of this paper is to review the considerable efforts devoted to the analysis of residual stresses in a single-layer film in the last century and recent years and to estimate the errors involved in using these formulas. The following are some of the important results that can be obtained. (1) The exact solution for the residual stress can be expressed in terms of Stoney's equation [Proc. R. Soc. London A82, 172 (1909)] and a correction factor (1+{sigma}{eta}{sup 3})/(1+{eta}), where {sigma},{eta} are the ratios of the elastic modulus and the thickness of the film to those of the substrate, respectively. (2) When the thickness ratio of the film and the substrate is less than 0.1, Stoney's equation and Roell's approximation [J. Appl. Phys. 47, 3224 (1976)] do not cause serious errors. (3) The approximation proposed by Vilms and Kerps [J. Appl. Phys. 53, 1536 (1982)] is an improved modification for Stoney's equation and can be applicable when {eta}{<=}0.3. (4) The approximations proposed by Brenner and Senderoff [J. Res. Natl. Bur. Stand. 42, 105 (1949)] and Teixeira [Thin Solid Films 392, 276 (2001)] can lead to serious errors and should be avoided. (5) The approximation based on the assumption of constant elastic modulus is only applicable for a ratio of {eta}{<=}0.01 and can be very misleading.
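
    Result (1) above states the exact stress as Stoney's value times the correction factor (1 + {sigma}{eta}{sup 3})/(1 + {eta}). A short sketch of how the error of the bare Stoney estimate grows with the thickness ratio, using that factor as given in the abstract (the modulus ratio is chosen arbitrarily):

```python
# Error of the bare Stoney estimate relative to the exact solution quoted above:
# sigma_exact = sigma_Stoney * (1 + s * h**3) / (1 + h),
# where s = E_f/E_s (modulus ratio) and h = t_f/t_s (thickness ratio).

def stoney_relative_error(s, h):
    correction = (1.0 + s * h**3) / (1.0 + h)
    return abs(1.0 / correction - 1.0)  # |(Stoney - exact) / exact|

s = 2.0  # illustrative film/substrate modulus ratio
for h in (0.01, 0.1, 0.3):
    print(f"h = {h:4}: relative error = {100 * stoney_relative_error(s, h):.1f}%")
```

Consistent with conclusion (2), the error stays near 1% at h = 0.01 and approaches ~10% at h = 0.1, beyond which the correction factor becomes essential.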

  6. Labor Relations | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    The National Labor Relations Act prohibits unfair labor practices, including discrimination in employment to discourage (or encourage) membership in a union, and engaging in bad-faith collective bargaining. National Labor Relations Act DOE training slides are available below: National Labor Relations Act (2.53 MB) National Labor Relations Act 101 (1.25 MB)

  7. Community Relations Plan Update

    Office of Legacy Management (LM)

    8-TAR MAC-MRAP 1.9.1 Monticello Mill Tailings Superfund Site and Monticello Vicinity Properties Superfund Site Monticello, Utah Community Relations Plan Update FY 2001 Prepared for U.S. Department of Energy Albuquerque Operations Office Grand Junction Office Prepared by MACTEC Environmental Restoration Services, LLC Grand Junction, Colorado Work performed under DOE Contract No. DE-AC13-96GJ87335 for the U.S. Department of Energy For more information or to request additional copies of this

  8. RelatedUIIs

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    strategyID strategyTitle decision Date RelatedUIIs ombInitiative useOfSavingsAvoidance netOr Gross amountType FY2012 Amount FY2013 Amount FY2014 Amount FY2015 Amount FY2016 Amount 2 Fossil Energy's (FE) Rocky Mountain Oilfield Test Center 11/01/2011 019-000000236 Other Per Congressional direction, RMOTC was decommissioned in FY2014 and the field site facility is closed. The Casper, Wyoming site (administrative office) reduced IT personnel by 2 FTEs as part of the disposition plan. DOE will

  9. Volumetric apparatus for hydrogen adsorption and diffusion measurements: Sources of systematic error and impact of their experimental resolutions

    SciTech Connect

    Policicchio, Alfonso; Maccallini, Enrico; Kalantzopoulos, Georgios N.; Cataldi, Ugo; Abate, Salvatore; Desiderio, Giovanni; DeltaE s.r.l., c/o Università della Calabria, Via Pietro Bucci cubo 31D, 87036 Arcavacata di Rende, Italy and CNR-IPCF LiCryL, c/o Università della Calabria, Via Ponte P. Bucci, Cubo 31C, 87036 Arcavacata di Rende

    2013-10-15

    The development of a volumetric apparatus (also known as a Sieverts apparatus) for accurate and reliable hydrogen adsorption measurement is shown. The instrument minimizes the sources of systematic errors which are mainly due to inner volume calibration, stability and uniformity of the temperatures, precise evaluation of the skeletal volume of the measured samples, and thermodynamical properties of the gas species. A series of hardware and software solutions were designed and introduced in the apparatus, which we will indicate as f-PcT, in order to deal with these aspects. The results are represented in terms of an accurate evaluation of the equilibrium and dynamical characteristics of the molecular hydrogen adsorption on two well-known porous media. The contribution of each experimental solution to the error propagation of the adsorbed moles is assessed. The developed volumetric apparatus for gas storage capacity measurements allows an accurate evaluation over a 4 order-of-magnitude pressure range (from 1 kPa to 8 MPa) and in temperatures ranging between 77 K and 470 K. The acquired results are in good agreement with the values reported in the literature.
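
    The core of a Sieverts (volumetric) measurement is a mole balance between a calibrated reference volume and the sample cell, which is why errors in the calibrated volumes and in the skeletal volume propagate directly into the adsorbed amount. A minimal ideal-gas sketch (all volumes, pressures, and the temperature are illustrative; the f-PcT instrument uses a real-gas equation of state):

```python
R = 8.314  # J/(mol K)

def adsorbed_moles(p1, p2, v_ref, v_cell, v_skel, temp):
    """Moles adsorbed in one dose step, ideal-gas approximation.
    p1: pressure with only the reference volume charged (Pa)
    p2: equilibrium pressure after expansion into the sample cell (Pa)
    v_skel: skeletal volume of the sample, inaccessible to the gas (m3)."""
    n_before = p1 * v_ref / (R * temp)
    n_after = p2 * (v_ref + v_cell - v_skel) / (R * temp)
    return n_before - n_after

n_ads = adsorbed_moles(p1=2.0e5, p2=1.1e5, v_ref=50e-6,
                       v_cell=30e-6, v_skel=2e-6, temp=77.0)
print(f"adsorbed: {n_ads * 1e3:.3f} mmol")
```

Since the adsorbed amount is a small difference of two large mole counts, small systematic errors in v_ref, v_skel, or the temperature uniformity are amplified, which is exactly the error budget the apparatus is designed to minimize.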

  10. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    SciTech Connect

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar; Marianno, Fernando J.; Shao, Xiaoyan; Zhang, Jie; Hodge, Bri-Mathias; Hamann, Hendrik F.

    2015-07-15

    With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of an individual model depends substantially on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results than conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
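
    The blending idea can be illustrated with plain least squares: instead of fixed per-model weights, the weights depend on a weather-situation feature, which is what lets situation-dependent error be removed. A toy sketch with two invented "models" and one situation feature (the actual system uses machine-learning regressors over many atmospheric state parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

situation = rng.uniform(0.0, 1.0, n)               # e.g., normalized cloud-cover regime
truth = 100.0 * np.sin(np.linspace(0, 6, n)) ** 2  # "actual" solar power (arbitrary)

# Two forecast models whose errors depend on the weather situation
model_a = truth + 20.0 * situation * rng.normal(1.0, 0.2, n)        # poor when cloudy
model_b = truth + 15.0 * (1 - situation) * rng.normal(1.0, 0.2, n)  # poor when clear

# Situation-dependent blend: weights linear in the situation feature,
# prediction = (w0 + w1*s)*A + (w2 + w3*s)*B + bias
X = np.column_stack([model_a, situation * model_a,
                     model_b, situation * model_b, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)

blend = X @ coef
rmse = lambda e: np.sqrt(np.mean(e ** 2))
print(f"model A RMSE: {rmse(model_a - truth):.2f}")
print(f"model B RMSE: {rmse(model_b - truth):.2f}")
print(f"blend RMSE:   {rmse(blend - truth):.2f}")
```

A fixed-weight ensemble cannot beat both models in both regimes here; letting the weights vary with the situation feature is what recovers the accuracy gain the abstract describes.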

  11. Suppression of fiber modal noise induced radial velocity errors for bright emission-line calibration sources

    SciTech Connect

    Mahadevan, Suvrath; Halverson, Samuel; Ramsey, Lawrence; Venditti, Nick

    2014-05-01

    Modal noise in optical fibers imposes limits on the signal-to-noise ratio (S/N) and velocity precision achievable with the next generation of astronomical spectrographs. This is an increasingly pressing problem for precision radial velocity spectrographs in the near-infrared (NIR) and optical that require both high stability of the observed line profiles and high S/N. Many of these spectrographs plan to use highly coherent emission-line calibration sources like laser frequency combs and Fabry-Perot etalons to achieve precision sufficient to detect terrestrial-mass planets. These high-precision calibration sources often use single-mode fibers or highly coherent sources. Coupling light from single-mode fibers to multi-mode fibers leads to only a very low number of modes being excited, thereby exacerbating the modal noise measured by the spectrograph. We present a commercial off-the-shelf solution that significantly mitigates modal noise at all optical and NIR wavelengths, and which can be applied to spectrograph calibration systems. Our solution uses an integrating sphere in conjunction with a diffuser that is moved rapidly using electrostrictive polymers, and is generally superior to most tested forms of mechanical fiber agitation. We demonstrate a high level of modal noise reduction with a narrow bandwidth 1550 nm laser. Our relatively inexpensive solution immediately enables spectrographs to take advantage of the innate precision of bright state-of-the-art calibration sources by removing a major source of systematic noise.

  12. TU-C-BRE-05: Clinical Implications of AAA Commissioning Errors and Ability of Common Commissioning & Credentialing Procedures to Detect Them

    SciTech Connect

    McVicker, A; Oldham, M; Yin, F; Adamson, J

    2014-06-15

    Purpose: To test the ability of the TG-119 commissioning process and RPC credentialing to detect errors in the commissioning process for a commercial Treatment Planning System (TPS). Methods: We introduced commissioning errors into the commissioning process for the Anisotropic Analytical Algorithm (AAA) within the Eclipse TPS. We included errors in Dosimetric Leaf Gap (DLG), electron contamination, flattening filter material, and beam profile measurement with an inappropriately large Farmer chamber (simulated using sliding-window smoothing of profiles). We then evaluated the clinical impact of these errors on clinical intensity modulated radiation therapy (IMRT) plans (head and neck, low and intermediate risk prostate, mesothelioma, and scalp) by looking at PTV D99, and mean and max OAR dose. Finally, for errors with substantial clinical impact we determined the sensitivity of the RPC IMRT film analysis at the midpoint between PTV and OAR using a 4mm distance-to-agreement metric, and of a 7% TLD dose comparison. We also determined the sensitivity of the 3 dose planes of the TG-119 C-shape IMRT phantom using gamma criteria of 3%/3mm. Results: The largest clinical impact came from large changes in the DLG, with a change of 1mm resulting in up to a 5% change in the primary PTV D99. This resulted in a discrepancy in the RPC TLDs in the PTVs and OARs of 7.1% and 13.6% respectively, which would have resulted in detection. While use of an incorrect flattening filter caused only subtle errors (<1%) in clinical plans, the effect was most pronounced for the RPC TLDs in the OARs (>6%). Conclusion: The AAA commissioning process within the Eclipse TPS is surprisingly robust to user error. When errors do occur, the RPC and TG-119 commissioning credentialing criteria are effective at detecting them; however, OAR TLDs are the most sensitive despite the RPC currently excluding them from analysis.

  13. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    SciTech Connect

    Wang, Yueqi; Lava, Pascal; Reu, Phillip; Debruyne, Dimitri; Van Houtte, Paul

    2015-12-23

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
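
    The structure of the solution (random displacement error set by image noise over the subset's intensity-gradient content) can be written in the commonly cited subset-DIC form σ_u ≈ √2·σ_noise / sqrt(Σ(∂f/∂x)²), with the sum taken over the subset. A sketch of that estimate on a synthetic speckle subset; the formula is the generic subset-DIC result, hedged here rather than presented as the paper's exact expression:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic speckle texture, cropped to a 21x21 pixel subset
subset = rng.normal(0.0, 1.0, (41, 41))
for _ in range(3):  # crude smoothing to mimic speckle correlation
    subset = (subset + np.roll(subset, 1, 0) + np.roll(subset, 1, 1)) / 3.0
subset = 128 + 40 * subset[10:31, 10:31]

sigma_noise = 2.0  # gray levels of image noise (illustrative)

# Sum of squared x-gradients over the subset (central differences,
# excluding the wrapped border columns introduced by np.roll)
gx = (np.roll(subset, -1, 1) - np.roll(subset, 1, 1)) / 2.0
ssg = np.sum(gx[:, 1:-1] ** 2)

sigma_u = np.sqrt(2.0) * sigma_noise / np.sqrt(ssg)
print(f"predicted displacement noise: {sigma_u:.4f} px")
```

The estimate makes the paper's dependence list concrete: more image noise raises σ_u directly, while larger subsets and sharper speckle raise the gradient sum and drive it down.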

  14. Compartment modeling of dynamic brain PET: The impact of scatter corrections on parameter errors

    SciTech Connect

    Häggström, Ida; Karlsson, Mikael; Larsson, Anne; Schmidtlein, C. Ross

    2014-11-01

    Purpose: The aim of this study was to investigate the effect of scatter and its correction on kinetic parameters in dynamic brain positron emission tomography (PET) tumor imaging. The 2-tissue compartment model was used, and two different reconstruction methods and two scatter correction (SC) schemes were investigated. Methods: The GATE Monte Carlo (MC) software was used to perform 2 × 15 full PET scan simulations of a voxelized head phantom with inserted tumor regions. The two sets of kinetic parameters of all tissues were chosen to represent the 2-tissue compartment model for the tracer 3′-deoxy-3′-({sup 18}F)fluorothymidine (FLT), and were denoted FLT{sub 1} and FLT{sub 2}. PET data were reconstructed with both 3D filtered back-projection with reprojection (3DRP) and 3D ordered-subset expectation maximization (OSEM). Images including true coincidences with attenuation correction (AC) and true+scattered coincidences with AC and with and without one of two applied SC schemes were reconstructed. Kinetic parameters were estimated by weighted nonlinear least squares fitting of image-derived time-activity curves. Calculated parameters were compared to the true input to the MC simulations. Results: The relative parameter biases for scatter-eliminated data were 15%, 16%, 4%, 30%, 9%, and 7% (FLT{sub 1}) and 13%, 6%, 1%, 46%, 12%, and 8% (FLT{sub 2}) for K{sub 1}, k{sub 2}, k{sub 3}, k{sub 4}, V{sub a}, and K{sub i}, respectively. As expected, SC was essential for most parameters since omitting it increased biases by 10 percentage points on average. SC was not found necessary for the estimation of K{sub i} and k{sub 3}, however. There was no significant difference in parameter biases between the two investigated SC schemes or from parameter biases from scatter-eliminated PET data. Furthermore, neither 3DRP nor OSEM yielded the smallest parameter biases consistently although there was a slight favor for 3DRP which produced less biased k{sub 3} and K{sub i} estimates while
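
    For reference, the fitted 2-tissue compartment rate constants map onto a few standard macro-parameters (impulse-response exponents, net influx rate K{sub i}, total distribution volume). A small sketch computing them for illustrative rate values; these are the textbook 2TCM relations, and the numbers are not the FLT{sub 1}/FLT{sub 2} values, which the abstract does not list:

```python
import math

def two_tissue_macro(K1, k2, k3, k4):
    """Standard macro-parameters of the 2-tissue compartment model."""
    # Exponents of the tissue impulse response
    s = k2 + k3 + k4
    disc = math.sqrt(s * s - 4.0 * k2 * k4)
    alpha1, alpha2 = (s - disc) / 2.0, (s + disc) / 2.0
    # Net influx rate and total distribution volume
    Ki = K1 * k3 / (k2 + k3)
    Vt = (K1 / k2) * (1.0 + k3 / k4)
    return alpha1, alpha2, Ki, Vt

# Illustrative rate constants (1/min), not from the paper
a1, a2, Ki, Vt = two_tissue_macro(K1=0.1, k2=0.15, k3=0.05, k4=0.01)
print(f"alpha1={a1:.4f}, alpha2={a2:.4f}, Ki={Ki:.4f}, Vt={Vt:.2f}")
```

K{sub i} depends only on the ratio k{sub 3}/(k{sub 2}+k{sub 3}) scaled by K{sub 1}, which helps explain why it can remain well estimated even when the individual micro-parameters are biased by imperfect scatter correction.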

  15. Related Links | Department of Energy

    Office of Environmental Management (EM)

    Information Resources Related Links Related Links These resources provide more information about wind energy research within the United States and abroad. Consumer and ...

  16. Documents Related to the ICP

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    The documents listed below are related to the Idaho Cleanup Project (ICP) contract. ...

  17. Local hybrid functionals with orbital-free mixing functions and balanced elimination of self-interaction error

    SciTech Connect

    Silva, Piotr de E-mail: clemence.corminboeuf@epfl.ch; Corminboeuf, Clémence E-mail: clemence.corminboeuf@epfl.ch

    2015-02-21

    The recently introduced density overlap regions indicator (DORI) [P. de Silva and C. Corminboeuf, J. Chem. Theory Comput. 10(9), 3745–3756 (2014)] is a density-dependent scalar field revealing regions of high density overlap between shells, atoms, and molecules. In this work, we exploit its properties to construct local hybrid exchange-correlation functionals aiming at balanced reduction of the self-interaction error. We show that DORI can successfully replace the ratio of the von Weizsäcker and exact positive-definite kinetic energy densities, which is commonly used in mixing functions of local hybrids. Additionally, we introduce several semi-empirical parameters to control the local and global admixture of exact exchange. The most promising of our local hybrids clearly outperforms the underlying semi-local functionals as well as their global hybrids.

  18. Procedures for using expert judgment to estimate human-error probabilities in nuclear power plant operations. [PWR; BWR

    SciTech Connect

    Seaver, D.A.; Stillwell, W.G.

    1983-03-01

    This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use.
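
    Of the five procedures listed, paired comparisons require a scaling step to turn "expert judges error A more likely than error B" counts into a relative scale. One standard choice for that step is Bradley-Terry fitting by the classic MM iteration; this is an illustration of the general technique, not necessarily the report's exact method, and the judgment counts are invented:

```python
import numpy as np

# wins[i, j] = number of experts judging error type i more likely than type j
# (three hypothetical human-error types, made-up judgment counts)
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)
m = wins.shape[0]

# Bradley-Terry strengths via the minorization-maximization iteration
p = np.ones(m)
for _ in range(200):
    total = wins.sum(axis=1)  # total wins of each item
    denom = np.array([
        sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
            for j in range(m) if j != i)
        for i in range(m)])
    p = total / denom
    p /= p.sum()  # normalize so the strengths form a relative scale

print("relative scale:", np.round(p, 3))
```

The resulting scale is only relative; turning it into absolute HEPs still requires anchoring to at least one event whose probability is known, which is one of the calibration issues the report's evaluation criteria address.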

  19. Radiochemically-supported microbial communities. A potential mechanism for biocolloid production of importance to actinide transport

    SciTech Connect

    Moser, Duane P.; Hamilton-Brehm, Scott D.; Fisher, Jenny C.; Bruckner, James C.; Kruger, Brittany; Sackett, Joshua; Russell, Charles E.; Onstott, Tullis C.; Czerwinski, Ken; Zavarin, Mavrik; Campbell, James H.

    2015-03-20

    The work described here revealed the presence of diverse microbial communities located across 19 subsurface sites at the NNSS/NTTR and nearby locations. Overall, the diversity of microorganisms was high for subsurface habitats and variable between sites. As of this writing, preparations are being made to combine the Illumina sequences and 16S rRNA clone libraries with other non-NNSS/NTTR well sites of the Southern Nevada Regional Flow System for a publication manuscript describing our very broad landscape-scale survey of subsurface microbial diversity. Isolates DRI-13 and DRI-14 remain to be fully characterized and named in accordance with the conventions established by Bergey's Manual of Systematic Bacteriology. In preparation for publication, these microorganisms will be submitted to the American Type Culture Collection (ATCC) and the Deutsche Sammlung von Mikroorganismen und Zellkulturen GmbH (DSMZ). It is anticipated that the data resulting from this study, in combination with other data sets, will allow us to produce a number of publications that will be impactful to the subsurface microbiology community.

  20. Chemical, mass spectrometric, spectrochemical, nuclear, and radiochemical analysis of nuclear-grade plutonium nitrate solutions

    SciTech Connect

    Not Available

    1981-01-01

    These analytical procedures are designed to show whether a given material meets the purchaser's specifications as to plutonium content, effective fissile content, and impurity content. The following procedures are described in detail: plutonium by controlled-potential coulometry; plutonium by amperometric titration with iron(II); free acid by titration in an oxalate solution; free acid by iodate precipitation-potentiometric titration method; uranium by Arsenazo I spectrophotometric method; thorium by thorin spectrophotometric method; iron by 1,10-phenanthroline spectrophotometric method; chloride by thiocyanate spectrophotometric method; fluoride by distillation-spectrophotometric method; sulfate by barium sulfate turbidimetric method; isotopic composition by mass spectrometry; americium-241 by extraction and gamma counting; americium-241 by gamma counting; gamma-emitting fission products, uranium, and thorium by gamma-ray spectroscopy; rare earths by copper spark spectrochemical method; tungsten, niobium (columbium), and tantalum by spectrochemical method; simple preparation by spectrographic analysis for general impurities. (JMT)

  1. Atom-at-a-time radiochemical separations of the heaviest elements: Lawrencium chemistry

    SciTech Connect

    Hoffman, D.C.; Henderson, R.A.; Gregorich, K.E.; Bennett, D.A.; Chasteler, R.M.; Gannett, C.M.; Hall, H.L.; Lee, D.M.; Nurmia, M.J.; Silva, R.J.

    1987-04-01

    The isotope /sup 260/Lr produced in reactions of /sup 18/O with /sup 249/Bk was used to perform chemical experiments on lawrencium to learn more about its chemical properties. These experiments involved extractions with thenoyl trifluoroacetate (TTA), ammonium alpha-hydroxyisobutyrate (HIB) elution from a cation exchange resin column, and reverse-phase chromatography using hydrogen di(2-ethylhexyl)orthophosphoric acid (HDEHP) to investigate the chemical properties of Lr. The results from the HIB elutions also give information about the ionic radius of Lr(III) which was found to elute very close to Er. An attempt to reduce Lr(III) was also made.

  2. Error correction for IFSAR

    DOEpatents

    Doerry, Armin W.; Bickel, Douglas L.

    2002-01-01

    IFSAR images of a target scene are generated by compensating for variations in vertical separation between collection surfaces defined for each IFSAR antenna by adjusting the baseline projection during image generation. In addition, height information from all antennas is processed before processing range and azimuth information in a normal fashion to create the IFSAR image.

  3. Tuning the narrow-band beam position monitor sampling clock to remove the aliasing errors in APS storage ring orbit measurements.

    SciTech Connect

    Sun, X.; Singh, O.

    2007-01-01

    The Advanced Photon Source storage ring employs a real-time orbit correction system to reduce orbit motion up to 50 Hz. This system uses up to 142 narrow-band rf beam position monitors (Nbbpms) in a correction algorithm by sampling at a frequency of 1.53 kHz. Several Nbbpms exhibit aliasing errors in orbit measurements, rendering these Nbbpms unusable in real-time orbit feedback. The aliasing errors are caused by beating effects of the internal sampling clocks with various other processing clocks residing within the BPM electronics. A programmable external clock has been employed to move the aliasing errors out of the active frequency band of the real-time feedback system (RTFB) and rms beam motion calculation. This paper discusses the process of tuning and provides test results.
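
    The tuning trick works because a beating tone above the Nyquist frequency folds down to a predictable alias; shifting the sampling clock moves that alias out of the band used by the real-time feedback. The folding itself is simple to compute (the frequencies below are illustrative, not the actual APS clock values):

```python
def alias_frequency(f_signal, f_sample):
    """Frequency at which a tone appears after sampling at f_sample."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# A hypothetical 1540 Hz interference tone sampled at 1530 Hz
# folds to 10 Hz -- inside a 0-50 Hz orbit-feedback band.
print(alias_frequency(1540.0, 1530.0))  # 10.0

# Retuning the sampling clock moves the alias out of the band.
print(alias_frequency(1540.0, 1470.0))  # 70.0
```

This is why a programmable external clock suffices: the interfering tones are fixed by the BPM electronics, so choosing the sampling frequency chooses where their aliases land.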

  4. Standardized Software for Wind Load Forecast Error Analyses and Predictions Based on Wavelet-ARIMA Models - Applications at Multiple Geographically Distributed Wind Farms

    SciTech Connect

    Hou, Zhangshuan; Makarov, Yuri V.; Samaan, Nader A.; Etingov, Pavel V.

    2013-03-19

    Given the multi-scale variability and uncertainty of wind generation and forecast errors, it is a natural choice to use a time-frequency representation (TFR) as a view of the corresponding time series represented over both time and frequency. Here we use the wavelet transform (WT) to expand the signal in terms of wavelet functions which are localized in both time and frequency. Each WT component is more stationary and has a consistent auto-correlation pattern. We combined wavelet analyses with time series forecast approaches such as ARIMA, and tested the approach at three wind farms located far from each other. The prediction capability is satisfactory -- the day-ahead predictions of errors match the original error values very well, including the patterns. The observations are well located within the predictive intervals. Integrating our wavelet-ARIMA (stochastic) model with the weather forecast model (deterministic) will significantly improve our ability to predict wind power generation and reduce predictive uncertainty.
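
    The wavelet-ARIMA recipe (decompose the error series into scales, fit a time-series model per scale, recombine) can be sketched with a one-level Haar transform and an AR(1) fit per component. This is a toy stand-in for the paper's multi-level wavelet decomposition and full ARIMA models, on an invented error series:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic forecast-error series: slow drift plus fast noise
t = np.arange(512)
x = np.sin(2 * np.pi * t / 128) + 0.3 * rng.normal(size=512)

# One-level Haar transform: approximation (slow) and detail (fast) components
approx = (x[0::2] + x[1::2]) / 2.0
detail = (x[0::2] - x[1::2]) / 2.0

def ar1_forecast(series):
    """Fit AR(1) by least squares and forecast one step ahead."""
    y, ylag = series[1:], series[:-1]
    phi = np.dot(ylag, y) / np.dot(ylag, ylag)
    return phi * series[-1]

# Forecast each (more stationary) component, then invert the Haar step
a_next, d_next = ar1_forecast(approx), ar1_forecast(detail)
x_next_pair = (a_next + d_next, a_next - d_next)  # next two samples of x
print(x_next_pair)
```

The point of the decomposition is visible even in this toy: the slow and fast components each have near-constant auto-correlation, so a simple per-scale model captures structure that a single model fitted to the raw series would smear together.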

  5. Documents Related to the ICP

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    The documents listed below are related to the Idaho Cleanup Project (ICP) Core contract. ...

  6. Effect of Body Mass Index on Magnitude of Setup Errors in Patients Treated With Adjuvant Radiotherapy for Endometrial Cancer With Daily Image Guidance

    SciTech Connect

    Lin, Lilie L.; Hertan, Lauren; Rengan, Ramesh; Teo, Boon-Keng Kevin

    2012-06-01

    Purpose: To determine the impact of body mass index (BMI) on daily setup variations and the frequency of imaging necessary for patients with endometrial cancer treated with adjuvant intensity-modulated radiotherapy (IMRT) with daily image guidance. Methods and Materials: The daily shifts from a total of 782 orthogonal kilovoltage images from 30 patients who received pelvic IMRT between July 2008 and August 2010 were analyzed. The BMI, mean daily shifts, and random and systematic errors in each translational and rotational direction were calculated for each patient. Margin recipes were generated based on BMI. Linear regression and Spearman rank correlation analyses were performed. To simulate a less-than-daily IGRT protocol, the average shift of the first five fractions was applied to subsequent setups without IGRT to assess the impact on setup error and margin requirements. Results: Median BMI was 32.9 (range, 23-62). Of the 30 patients, 16.7% (n = 5) were normal weight (BMI <25); 23.3% (n = 7) were overweight (BMI ≥25 to <30); 26.7% (n = 8) were mildly obese (BMI ≥30 to <35); and 33.3% (n = 10) were moderately to severely obese (BMI ≥35). On linear regression, mean absolute vertical, longitudinal, and lateral shifts positively correlated with BMI (p = 0.0127, p = 0.0037, and p < 0.0001, respectively). Systematic errors in the longitudinal and vertical directions were positively correlated with BMI category (p < 0.0001 for both). IGRT for the first five fractions, followed by correction of the mean error for all subsequent fractions, led to a substantial reduction in setup error and the resultant margin requirement compared with no IGRT. Conclusions: Daily shifts, systematic errors, and margin requirements were greatest in obese patients. For women who are normal weight or overweight, a planning target volume margin of 7 to 10 mm may be sufficient without IGRT, but for patients who are moderately or severely obese, this is insufficient.

  7. SU-E-P-13: Quantifying the Geometric Error Due to Irregular Motion in Four-Dimensional Computed Tomography (4DCT)

    SciTech Connect

    Sawant, A

    2015-06-15

    Purpose: Respiratory-correlated 4DCT images are generated under the assumption of a regular breathing cycle. This study evaluates the error in 4DCT-based target position estimation in the presence of irregular respiratory motion. Methods: A custom-made programmable externally- and internally-deformable lung motion phantom was placed inside the CT bore. An abdominal pressure belt was placed around the phantom to mimic clinical 4DCT acquisition, and the motion platform was programmed with a sinusoidal (±10mm, 10 cycles per minute) motion trace and 7 motion traces recorded from lung cancer patients. The same setup and motion trajectories were repeated in the linac room, and kV fluoroscopic images were acquired using the on-board imager. Positions of 4 internal markers segmented from the 4DCT volumes were overlaid upon the motion trajectories derived from the fluoroscopic time series to calculate the difference between estimated (4DCT) and “actual” (kV fluoro) positions. Results: With the sinusoidal trace, absolute errors of the 4DCT-estimated marker positions varied between 0.78mm and 5.4mm, and RMS errors were between 0.38mm and 1.7mm. With irregular patient traces, absolute errors of the 4DCT-estimated marker positions increased significantly, by 100 to 200 percent, while the corresponding RMS error values changed much less. Significant mismatches were frequently found at the peak-inhale or peak-exhale phase. Conclusion: As expected, under conditions of well-behaved, periodic sinusoidal motion, 4DCT yielded much better estimation of marker positions. When an actual patient trace was used, 4DCT-derived positions showed significant mismatches with the fluoroscopic trajectories, indicating the potential for geometric, and therefore dosimetric, errors in the presence of cycle-to-cycle respiratory variations.
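    The two error metrics compared in the results above (per-sample absolute error and RMS error over a trace) can be sketched as follows; the numbers are illustrative, not the study's data.

```python
import math

def abs_errors(est, act):
    """Per-sample absolute error between estimated and actual positions."""
    return [abs(e - a) for e, a in zip(est, act)]

def rms_error(est, act):
    """Root-mean-square error over the whole trace."""
    diffs = [(e - a) ** 2 for e, a in zip(est, act)]
    return math.sqrt(sum(diffs) / len(diffs))

est = [10.2, 9.8, 10.5, 9.9]    # 4DCT-estimated marker positions (mm), illustrative
act = [10.0, 10.0, 10.0, 10.0]  # fluoroscopy-derived positions (mm), illustrative
print(max(abs_errors(est, act)), round(rms_error(est, act), 3))
```

    A large worst-case absolute error can coexist with a modest RMS error, which is consistent with the abstract's observation that irregular traces inflated absolute errors far more than RMS values.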

  8. Entropic uncertainty relations and entanglement

    SciTech Connect

    Guehne, Otfried; Lewenstein, Maciej

    2004-08-01

    We discuss the relationship between entropic uncertainty relations and entanglement. We present two methods for deriving separability criteria in terms of entropic uncertainty relations. In particular, we show how any entropic uncertainty relation on one part of the system results in a separability condition on the composite system. We investigate the resulting criteria using the Tsallis entropy for two and three qubits.
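    For reference, the Tsallis entropy invoked above is S_q(p) = (1 − Σ_i p_i^q)/(q − 1) for a probability distribution p and q ≠ 1; a minimal numerical sketch (our own illustration, not the authors' code):

```python
def tsallis_entropy(probs, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1), q != 1."""
    assert abs(sum(probs) - 1.0) < 1e-9 and q != 1
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

# A maximally mixed qubit outcome distribution carries more S_q than a
# deterministic one -- the kind of gap entropic uncertainty relations bound.
mixed = tsallis_entropy([0.5, 0.5], q=2)  # 0.5
pure = tsallis_entropy([1.0, 0.0], q=2)   # 0.0
```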

  9. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    SciTech Connect

    Estep, Donald

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.

  10. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    DOE PAGES [OSTI]

    Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.

    2014-04-05

    In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. These data provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but they have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
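    The core Bayesian move can be sketched with a standard conjugate Beta-Binomial update: treat the HRA-assigned HEP as the mean of a Beta prior, then update it with simulator counts. This is a generic textbook illustration, not the paper's exact methodology; the prior-strength parameter `n0` is an assumed knob.

```python
def update_hep(hep_prior, n0, errors, trials):
    """Posterior-mean HEP after observing `errors` out of `trials` simulator runs.

    The prior is Beta(alpha, beta) with mean hep_prior and pseudo-sample
    size n0; the Binomial likelihood makes the update closed-form.
    """
    alpha = hep_prior * n0           # prior pseudo-counts of errors
    beta = (1.0 - hep_prior) * n0    # prior pseudo-counts of successes
    return (alpha + errors) / (alpha + beta + trials)

# An HRA-assigned HEP of 0.01, moderately weighted, pulled upward by
# 3 observed errors in 50 simulator trials:
print(update_hep(0.01, n0=100, errors=3, trials=50))  # 0.02666...
```

    Sparse data move the posterior only modestly when `n0` is large, which is exactly the regulated blending of expert judgment and evidence that motivates Bayesian HRA.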

  11. Direct numerical simulations in solid mechanics for quantifying the macroscale effects of microstructure and material model-form error

    DOE PAGES [OSTI]

    Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; Littlewood, David J.; Baines, Andrew J.

    2016-03-16

    Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Furthermore, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.

  12. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    SciTech Connect

    Katrina M. Groth; Curtis L. Smith; Laura P. Swiler

    2014-08-01

    In the past several years, several international organizations have begun to collect data on human performance in nuclear power plant simulators. The data collected provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this paper, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.

  13. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    SciTech Connect

    Nelms, Benjamin E.; Chan, Maria F.; Jarry, Genevive; Lemire, Matthieu; Lowden, John; Hampton, Carnell

    2013-11-15

    Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means of detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating the accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite-plan dosimetric QA. Then, each case study is analyzed in depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite-plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS.
Most of the errors were
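    The gamma passing-rate metric at the center of this study can be sketched in one dimension: a measured point passes if some reference point lies within the combined dose-difference/distance-to-agreement tolerance (gamma ≤ 1). This is a simplified, globally normalized illustration, not a clinical implementation.

```python
import math

def gamma_1d(ref_pos, ref_dose, meas_pos, meas_dose, dd, dta):
    """1-D gamma index: for each measured point, the minimum over reference
    points of sqrt((dose diff / dd)^2 + (distance / dta)^2)."""
    gammas = []
    for xm, dm in zip(meas_pos, meas_dose):
        g = min(
            math.sqrt(((dm - dr) / dd) ** 2 + ((xm - xr) / dta) ** 2)
            for xr, dr in zip(ref_pos, ref_dose)
        )
        gammas.append(g)
    return gammas

def passing_rate(gammas):
    """Percentage of points with gamma <= 1 (the 'passing rate')."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

    As the case studies illustrate, a high passing rate with a loose criterion (e.g. 3%/3 mm) can coexist with clinically relevant systematic errors, which is why tighter metrics such as 2%/2 mm with local normalization are more sensitive.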

  14. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and A Posteriori Error Estimation Methods

    SciTech Connect

    Ginting, Victor

    2014-03-15

    It was demonstrated that a posteriori analyses in general, and in particular those that use adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical quantities of interest (QoIs) that depend on a large number of parameters. Activities included: analysis and implementation of several time-integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of the finite element solution; and a Bayesian framework for uncertainty quantification of porous media flows.

  15. Related Links | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Related Links Related Links Private, public, and nonprofit organizations around the country offer a wide range of courses and other services to help you either improve your current skills or learn new ones. The sites featured here can help you find courses of specific interest as well as other information about training requirements for certain energy jobs. DOE Related Advanced Manufacturing Office: Training Find training sessions in your area and learn how to save energy in your manufacturing

  16. Documents Related to the ICP

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    DE-EM0003976 STI Contract > Documents Related STI The documents ...

  17. Documents Related to the INL

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Financial Assistance & Solicitations > INL Contract > Documents Related INL The documents listed below represent an electronic copy of ...

  18. SU-E-I-83: Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom

    SciTech Connect

    Lee, Y; Fullerton, G; Goins, B

    2015-06-15

    Purpose: In our previous study, a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After the tumors were excised, in-air micro-CT imaging was performed to determine the reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Linear regression analysis was then performed to compare image-based tumor volumes with the reference tumor volume and the known test object volume for the rats and the phantom, respectively. Results: The slopes of the regression lines for in-vivo tumor volumes measured by the three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US, respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US, respectively. Conclusion: For both the animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent, and systematic errors were mainly due to the selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement
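    The tumor-volume formula used above is the ellipsoid approximation V = (π/6)·a·b·c, with a, b, c the maximum diameters along three perpendicular axes; a minimal sketch:

```python
import math

def ellipsoid_volume(a_mm, b_mm, c_mm):
    """Ellipsoid-approximation tumor volume V = (pi/6) * a * b * c (mm^3)."""
    return (math.pi / 6.0) * a_mm * b_mm * c_mm

# A 10 mm spherical test object like those in the QA phantom:
print(round(ellipsoid_volume(10, 10, 10), 1))  # 523.6
```

    For a sphere the formula is exact; the systematic error the authors attribute to the "ellipsoidal assumption" arises when real tumors deviate from this idealized shape.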

  19. Statistical and systematic errors in the measurement of weak-lensing Minkowski functionals: Application to the Canada-France-Hawaii Lensing Survey

    SciTech Connect

    Shirasaki, Masato; Yoshida, Naoki

    2014-05-01

    The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey with a sky coverage of ~1400 deg² will constrain the dark energy equation-of-state parameter with an error of Δw₀ ~ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256 (+0.054/−0.046).


  1. LTS Related Links - Hanford Site

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Related Links About Us LTS Home Page LTS Project Management LTS Transition and Timeline LTS Execution LTS Background LTS Information Management LTS Fact Sheets / Briefings LTS In The News LTS Related Links LTS Contact Us LTS Related Links Hanford Site Cleanup Completion Framework (DOE/RL 2009-10) (PDF) Hanford Long-Term Stewardship Program Plan (DOE/RL 2010-35) (PDF) DOE-EM LTS Site Legacy Management CERCLA 5 Year

  2. NEPA-Related Public Involvement

    Energy.gov [DOE]

    The Loan Programs Office’s NEPA-related hearings, public meetings, and public notices (e.g. public scoping meeting, public hearing, notice of proposed floodplain or wetland action) are presented...

  3. NREL: Energy Analysis - Related Links

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Related Links Here you'll find links to other programs, organizations, and information resources concerning other analysis capabilities, energy modeling, and technology expertise related to renewable energy. International Applications NREL's International Program, in its effort to promote the use of renewable energy as a tool for sustainable development, applies world-class expertise in technology development and deployment, economic analysis, resource assessment, project design and

  4. Communications and Media Relations Group

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Communications and Media Relations Group Public Affairs Communications Community Public Affairs Org Chart Education Creative Services ⇒ Navigate Section Public Affairs Communications Community Public Affairs Org Chart Education Creative Services Berkeley Lab's Communications and Media Relations Group is responsible for gathering, reporting, and disseminating news about the Lab to both internal and external audiences, including employees, the media, and the community. The latest news can be

  5. Related Opportunities | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Funding Opportunities » Related Opportunities Related Opportunities A variety of federal funding sources are available that may be applicable to SSL. For example, DOE's Office of Science provides basic research grants through its annual solicitation process, and supports fundamental, longer-term energy research through Energy Frontier Research Centers. Both DOE and the National Science Foundation fund Small Business Innovation Research grants to foster increased participation of small

  6. SU-E-T-450: Dosimetric Impact of Rotational Error On Multiple-Target Intensity-Modulated Radiosurgery (IMRS) with Single-Isocenter

    SciTech Connect

    Jang, S; Huq, M

    2014-06-01

    Purpose: To evaluate the dosimetric impact on multiple targets placed away from the isocenter target with varying rotational error introduced by initial setup uncertainty and/or intrafractional movement. Methods: A CyberKnife phantom was scanned with the intracranial SRS protocol of 1.25mm slice thickness, and multiple targets (GTV) of 1mm and 10mm in diameter were contoured in Eclipse. A PTV for the distal target only was drawn with 1mm expansion around the GTV to find out how much margin is needed to compensate for the rotational error. The separation between the isocenter target and distal target was varied from 3cm to 7cm. RapidArc-based IMRS plans of 16Gy in a single fraction were generated with five non-coplanar arcs using a Varian TrueBeam STx equipped with high-resolution MLC leaves of 2.5mm at center and a dose rate of 1400MU/min at 6MV flattening-filter-free (FFF). An identical CT image with an intentionally introduced 1° rotational error was registered with the planning CT image, and the isodose distribution and dose-volume histogram (DVH) were compared with the original plans. Additionally, the dosimetric impact of rotational error was evaluated with 6X photon energy generated with the same target coverage. Results: For the 1mm target with 6X-FFF, PTV coverage (D100) of the distal target with 1° rotational error decreased from 1.00 to 0.35 as the separation between the isocenter target and distal target increased from 3cm to 7cm. However, GTV coverage (D100) was 1.0 except at 7cm separation (0.55), which resulted from the 1mm margin around the distal target. For 6X photon, GTV coverage remained at 1.0 regardless of the separation of targets, showing that the dosimetric impact of rotational error depends on the degree of rotational error, the separation of targets, and the dose distribution around targets. For the 10mm target, PTV coverage of the distal target located 3cm away was better than that of the 1mm target (0.93 versus 0.7), and GTV coverage was 1

  7. SU-E-T-442: Sensitivity of Quality Assurance Tools to Delivery Errors On a Magnetic Resonance-Imaging Guided Radiation Therapy (MR-IGRT) System

    SciTech Connect

    Rodriguez, V; Li, H; Yang, D; Kashani, R; Wooten, H; Mutic, S; Green, O; Dempsey, J

    2014-06-01

    Purpose: To test the sensitivity of the quality assurance (QA) tools actively used on a clinical MR-IGRT system to potential delivery errors. Methods: Patient-specific QA procedures have been implemented for a commercially available Cobalt-60 MR-IGRT system. The QA tools utilized were an MR-compatible cylindrical diode-array detector (ArcCHECK) with a custom insert that positions an ionization chamber (Exradin A18) in the middle of the device, as well as an in-house treatment delivery verification program. These tools were tested to investigate their sensitivity to delivery errors. For the ArcCHECK and ion chamber, a baseline was established with a static field irradiation to a known dose. Variations of the baseline were investigated, including a rotated gantry, altered field size, directional shifts, and different delivery times. In addition, similar variations were tested with the automated delivery verification program, which compared the treatment parameters in the machine delivery logs to those in the plan. To test the software, a 3-field conformal plan was generated as the baseline. Results: ArcCHECK noted at least a 13% decrease in passing rate from baseline in the following scenarios: gantry rotation of 1 degree from plan, 5mm change in field size, 2mm lateral shift, and decreased delivery time. Ion chamber measurements remained consistent for these variations except for the 5-second decrease in delivery time, which resulted in an 8% difference from baseline. The delivery verification software was able to detect and report the simulated errors, such as when the gantry was rotated by 0.6 degrees, the beam weighting was changed by one percent, a single multileaf collimator leaf was moved by 1cm, or the dose was changed from 2 to 1.8Gy. Conclusion: The results show that the current tools used for patient-specific QA are capable of detecting small errors in RT delivery in the presence of a magnetic field.

  8. SU-E-J-103: Setup Errors Analysis by Cone-Beam CT (CBCT)-Based Imaged-Guided Intensity Modulated Radiotherapy for Esophageal Cancer

    SciTech Connect

    Yang, H; Wang, W; Hu, W; Chen, X; Wang, X; Yu, C

    2014-06-01

    Purpose: To quantify setup errors by pretreatment kilovolt cone-beam computed tomography (kV-CBCT) scans for middle or distal esophageal carcinoma patients. Methods: Fifty-two consecutive middle or distal esophageal carcinoma patients who underwent IMRT were included in this study. A planning CT scan using a big-bore CT simulator was performed in the treatment position and was used as the reference scan for image registration with CBCT. CBCT scans (On-Board Imaging v1.5 system, Varian Medical Systems) were acquired daily during the first treatment week. A total of 260 CBCT scans (nine CBCTs per patient) were assessed with a registration clip box defined around the PTV-thorax in the reference scan based on bony anatomy using Offline Review software v10.0 (Varian Medical Systems). The anterior-posterior (AP), left-right (LR), and superior-inferior (SI) corrections were recorded. The systematic and random errors were calculated. The CTV-to-PTV margins for each CBCT frequency were based on the van Herk formula (2.5Σ+0.7σ). Results: The SD of the systematic error (Σ) was 2.0mm, 2.3mm, and 3.8mm in the AP, LR, and SI directions, respectively. The average random error (σ) was 1.6mm, 2.4mm, and 4.1mm in the AP, LR, and SI directions, respectively. The CTV-to-PTV safety margin was 6.1mm, 7.5mm, and 12.3mm in the AP, LR, and SI directions based on the van Herk formula. Conclusion: Our data recommend the use of 6mm, 8mm, and 12mm margins for esophageal carcinoma patient setup in the AP, LR, and SI directions, respectively.
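    The margin recipe quoted above is the van Herk formula, margin = 2.5Σ + 0.7σ per direction, with Σ the SD of the systematic error and σ the average random error; a minimal sketch reproducing the AP-direction value from this study:

```python
def van_herk_margin(systematic_sd_mm, random_sd_mm):
    """CTV-to-PTV margin (mm) from the van Herk recipe: 2.5*Sigma + 0.7*sigma."""
    return 2.5 * systematic_sd_mm + 0.7 * random_sd_mm

# AP direction from this study: Sigma = 2.0 mm, sigma = 1.6 mm
print(round(van_herk_margin(2.0, 1.6), 1))  # 6.1
```

    Note how the formula weights systematic error far more heavily than random error, which is why reducing Σ (e.g. with image guidance in early fractions) shrinks margins most effectively.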

  9. Extrinsic Sources of Scatter in the Richness-Mass Relation of Galaxy Clusters

    SciTech Connect

    Rozo, Eduardo; Rykoff, Eli; Koester, Benjamin; Nord, Brian; Wu, Hao-Yi; Evrard, August; Wechsler, Risa; /KIPAC, Menlo Park /Stanford U., Phys. Dept.

    2012-03-27

    Maximizing the utility of upcoming photometric cluster surveys requires a thorough understanding of the richness-mass relation of galaxy clusters. We use Monte Carlo simulations to study the impact of various sources of observational scatter on this relation. Cluster ellipticity, photometric errors, photometric redshift errors, and cluster-to-cluster variations in the properties of red-sequence galaxies contribute negligible noise. Miscentering, however, can be important, and likely contributes to the scatter in the richness-mass relation of maxBCG galaxy clusters at the low mass end, where centering is more difficult. We also investigate the impact of projection effects under several empirically motivated assumptions about cluster environments. Using SDSS data and the maxBCG cluster catalog, we demonstrate that variations in cluster environments can rarely (~1%-5% of the time) result in significant richness boosts. Due to the steepness of the mass/richness function, the corresponding fraction of optically selected clusters that suffer from these projection effects is ~5%-15%. We expect these numbers to be generic in magnitude, but a precise determination requires detailed, survey-specific modeling.

  10. Energy-related manpower, 1985

    SciTech Connect

    Not Available

    1986-01-01

    This report provides information about current and potential employment requirements and the relative adequacy of labor supplies for energy R and D and commercial energy activities, with special attention to scientific and engineering personnel. Since the oil embargo of 1973, major domestic and international changes have occurred in economies, political relationships, and energy production, markets, and prices. These changes, with concurrent modification in federal policy emphasis and programs, have altered energy production, conservation, and R and D activities sufficiently to affect employment requirements and educational needs. This is the fourth annual energy-related manpower report. It provides basic information for both public and private policymakers, educators, legislators, program managers, and others concerned with the labor market for scientists and engineers. It also provides information about future job opportunities for those interested in energy-related careers.

  11. Related Articles | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Related Articles Related Articles This page provides links to recent articles describing the latest developments in the area of solid-state lighting. OCTOBER 2016 Tuning the Light in a Senior-Care Facility James Brodrick, U.S. Department of Energy LD+A Magazine http://www.ies.org/lda/members_contact.cfm AUGUST 2016 LED Watch: Specifying Color-Tunable LED Luminaires James Brodrick, U.S. Department of Energy LD+A Magazine http://www.ies.org/lda/members_contact.cfm Human Perceptions of Color

  12. Cyclotron Institute » Related Websites

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Related Websites Texas A&M University College of Science Department of Chemistry Department of Physics and Astronomy Department of Nuclear Engineering Cyclotron Facility Upgrade Nuclear Solutions Institute Faculty Websites Radiation Effects Facility Saturday Morning Physics Science with Beams of Radioactive Isotopes (at Pacifichem 2015) International Symposium on Super Heavy Nuclei (SHE 2015) Carpathian Summer School of Physics (2014) World Consensus Initiative

  13. SU-E-T-132: Dosimetric Impact of Positioning Errors in Hypo-Fractionated Cranial Radiation Therapy Using Frameless Stereotactic BrainLAB System

    SciTech Connect

    Keeling, V; Jin, H; Ali, I; Ahmad, S

    2014-06-01

    Purpose: To determine the dosimetric impact of positioning errors in stereotactic hypo-fractionated treatment of intracranial lesions using 3D-translational and 3D-rotational (6D) corrections with the frameless BrainLAB ExacTrac X-Ray system. Methods: 20 cranial lesions, treated in 3 or 5 fractions, were selected. An infrared (IR) optical positioning system was employed for initial patient setup, followed by stereoscopic kV X-ray radiographs for position verification. 6D translational and rotational shifts were determined to correct patient position. If these shifts were above tolerance (0.7 mm translational and 1° rotational), corrections were applied and another set of X-rays was taken to verify patient position. The dosimetric impact (D95, Dmin, Dmax, and Dmean of the planning target volume (PTV) compared to the original plans) of positioning errors at initial IR setup (XC: X-ray Correction) and post-correction (XV: X-ray Verification) was determined in a treatment planning system using a method proposed by Yue et al. (Med. Phys. 33, 21-31 (2006)), with 3D-translational errors only and with 6D translational and rotational errors. Results: Absolute mean translational errors (±standard deviation) for a total of 92 fractions (XC/XV) were 0.79±0.88/0.19±0.15 mm (lateral), 1.66±1.71/0.18±0.16 mm (longitudinal), and 1.95±1.18/0.15±0.14 mm (vertical); rotational errors were 0.61±0.47/0.17±0.15° (pitch), 0.55±0.49/0.16±0.24° (roll), and 0.68±0.73/0.16±0.15° (yaw). The average changes (loss of coverage) in D95, Dmin, Dmax, and Dmean were 4.5±7.3/0.1±0.2%, 17.8±22.5/1.1±2.5%, 0.4±1.4/0.1±0.3%, and 0.9±1.7/0.0±0.1% using 6D shifts, and 3.1±5.5/0.0±0.1%, 14.2±20.3/0.8±1.7%, 0.0±1.2/0.1±0.3%, and 0.7±1.4/0.0±0.1% using 3D-translational shifts only. The setup corrections (XC-XV) improved PTV coverage by 4.4±7.3% (D95) and 16.7±23.5% (Dmin) using 6D adjustment. Strong correlations were observed between translational errors and deviations in dose coverage for XC. Conclusion
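
    The 6D corrections above combine three translations with pitch, roll, and yaw rotations into one rigid transform applied to the patient geometry. A minimal sketch of how such a transform can be composed and applied to a point (the rotation order and axis conventions here are assumptions for illustration; the actual BrainLAB and Yue et al. conventions are system-specific):

```python
import numpy as np

def setup_error_transform(tx, ty, tz, pitch, roll, yaw):
    """Build a 4x4 homogeneous rigid transform for a 6D setup error.
    Translations in mm; rotations in degrees about x (pitch), y (roll),
    and z (yaw). The composition order Rx @ Ry @ Rz is an assumption,
    not the documented ExacTrac convention."""
    p, r, w = np.radians([pitch, roll, yaw])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    Ry = np.array([[ np.cos(r), 0, np.sin(r)],
                   [0, 1, 0],
                   [-np.sin(r), 0, np.cos(r)]])
    Rz = np.array([[np.cos(w), -np.sin(w), 0],
                   [np.sin(w),  np.cos(w), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rx @ Ry @ Rz   # combined 3D rotation
    T[:3, 3] = [tx, ty, tz]    # 3D translation
    return T

# Apply the mean XC residuals from the abstract to a point 50 mm
# off-isocenter (homogeneous coordinates, mm):
T = setup_error_transform(0.79, 1.66, 1.95, 0.61, 0.55, 0.68)
point = np.array([50.0, 0.0, 0.0, 1.0])
shifted = T @ point
```

    A dose-recalculation method such as Yue et al.'s would apply this transform to the dose grid or structure set before re-evaluating D95, Dmin, Dmax, and Dmean.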

  14. Paducah Community Relations Plan | Department of Energy

    Energy Saver

    Paducah Community Relations Plan. The Paducah Community Relations Plan is a primary document of the FFA (Federal Facility Agreement) that directs the comprehensive remediation of the ...

  15. Radiation Exposure Monitoring Systems - Other Related Sites ...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Radiation Exposure Monitoring Systems - Other Related Sites. Other Related Sites: DOE - Main Home Page - the home page for ...

  16. JLF Facility-related Publications

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    JLF Facility-related Publications (Europa, Janus, Titan). Title; Author; Source; Date:
    "1.1 J, 120 fs laser system based on Nd:glass-pumped Ti:sapphire"; A. Sullivan, J. Bonlie, D.F. Price, and W.E. White; Optics Lett. 21, 603; 1996
    "Chirped-pulse amplification with flashlamp-pumped Ti:sapphire amplifiers"; J.D. Bonlie, W.E. White, D.F. Price, and D.H. Reitze; SPIE Proceedings 2116, 315; 1994
    "New optical diagnostics for equation of state experiments on the Janus

  17. Epistemology and Rosen's Modeling Relation

    SciTech Connect

    Dress, W.B.

    1999-11-07

    Rosen's modeling relation is embedded in Popper's three worlds to provide a heuristic tool for model building and a guide for thinking about complex systems. The utility of this construct is demonstrated by suggesting a solution to the problem of pseudo science and a resolution of the famous Bohr-Einstein debates. A theory of bizarre systems is presented by an analogy with entangled particles of quantum mechanics. This theory underscores the poverty of present-day computational systems (e.g., computers) for creating complex and bizarre entities by distinguishing between mechanism and organism.

  18. Method and apparatus for analyzing error conditions in a massively parallel computer system by identifying anomalous nodes within a communicator set

    DOEpatents

    Gooding, Thomas Michael

    2011-04-19

    An analytical mechanism for a massively parallel computer system automatically analyzes data retrieved from the system, and identifies nodes which exhibit anomalous behavior in comparison to their immediate neighbors. Preferably, anomalous behavior is determined by comparing call-return stack tracebacks for each node, grouping like nodes together, and identifying neighboring nodes which do not themselves belong to the group. A node, not itself in the group, having a large number of neighbors in the group, is a likely locality of error. The analyzer preferably presents this information to the user by sorting the neighbors according to number of adjoining members of the group.
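
    The grouping-and-neighbor heuristic described above can be sketched in a few lines. This is an illustrative reconstruction, not the patented implementation; the inputs (`tracebacks` mapping node to stack signature, `neighbors` mapping node to adjacent nodes) are hypothetical names:

```python
def find_anomalous_nodes(tracebacks, neighbors):
    """Group nodes by identical call-return stack traceback, then flag
    nodes outside a group that have neighbors inside it. A node with
    many adjoining group members is a likely locality of error, so
    suspects are sorted by that count, descending."""
    groups = {}
    for node, tb in tracebacks.items():
        groups.setdefault(tb, set()).add(node)

    suspects = []
    for members in groups.values():
        for node in tracebacks:
            if node in members:
                continue
            inside = sum(1 for n in neighbors.get(node, ()) if n in members)
            if inside:
                suspects.append((node, inside))

    suspects.sort(key=lambda s: -s[1])  # most group-adjacent node first
    return suspects

# A 5-node chain where node 2's traceback diverges from its neighbors:
tracebacks = {0: "A", 1: "A", 2: "B", 3: "A", 4: "A"}
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
result = find_anomalous_nodes(tracebacks, neighbors)
# node 2 has two neighbors in the "A" group, so it is flagged first
```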

  19. Statistically significant relational data mining :

    SciTech Connect

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.

    2014-02-01

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs: to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparing community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model; statisticians favor such models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.
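
    A popular statistical measure for comparing community assignments is normalized mutual information (NMI); whether NMI is the specific measure critiqued in this report is an assumption. A textbook sketch of NMI, for orientation only:

```python
from collections import Counter
from math import log

def nmi(labels_a, labels_b):
    """Normalized mutual information between two community assignments
    over the same nodes: MI(a, b) divided by the arithmetic mean of the
    two label entropies. Standard formula, not the project's own code."""
    n = len(labels_a)
    pa = Counter(labels_a)
    pb = Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))

    mi = sum((c / n) * log((c * n) / (pa[a] * pb[b]))
             for (a, b), c in joint.items())

    def entropy(counts):
        return -sum((c / n) * log(c / n) for c in counts.values())

    ha, hb = entropy(pa), entropy(pb)
    if ha == 0 or hb == 0:  # degenerate single-community partition
        return 1.0 if labels_a == labels_b else 0.0
    return mi / ((ha + hb) / 2)
```

    Identical partitions score 1.0 and independent partitions score 0, but NMI can still reward splitting true communities into many small pieces, which is the over-resolution bias the report examines.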

  20. Experimental Tests of Special Relativity

    ScienceCinema

    Roberts, Tom [Illinois Institute of Technology, Chicago, Illinois, United States]

    2016-07-12

    Over the past century Special Relativity has become a cornerstone of modern physics, and its Lorentz invariance is a foundation of every current fundamental theory of physics, so it is crucial that it be thoroughly tested experimentally. The many tests of SR will be discussed, including several modern high-precision measurements. Several experiments that appear to be in conflict with SR will also be discussed, such as claims that the famous measurements of Michelson and Morley actually have a non-null result, and the similar but far more extensive measurements of Dayton Miller that 'determined the absolute motion of the earth'. But the error bars for these old experiments are huge, larger than their purported signals. In short, SR has been tested extremely well and stands unrefuted today, but current thoughts about quantum gravity suggest that it might not truly be a symmetry of nature.